bko - thanks for these tutorials! I think I can speak for all the newbies here when I say that your tutorials are the most useful and cogent content on this site. I was ready to give up, then I found your stuff. HOWEVER, now that I have buttered you up, what I need next is an example of a two-way conversation between a core and a server, much like would be used to obtain records from a database. Thanks in advance.
Hi @captnkrunch
Thanks for the kind words! It would help to know what kind of application you are thinking of. Is there some data on the server that the core needs, or is there some data on a core that the server wants?
A simple idea is best: for these tutorials I picked core uptime since you don't need any external peripherals at all to make it work.
Hi David,
It all works perfectly fine in a browser, Chrome in my case, but my goal is really to consume these events on the server, mainly because the access token does not get exposed this way, and also for further storing and analyzing of the data. I had a look at the node lib 'sse', which is, as you said, mainly a server lib for sending the events. The client code mentioned on the sse page is, I believe, browser code.
For node, one lib that I tried unsuccessfully is 'eventsource'. It does get the events, but strangely the events were empty when I tried it yesterday, i.e. there was no way to get at the data of the event.
Is the idea behind the publish feature / SSE that it should be consumed by browsers only? If so, I can live with not consuming them on the server. I know a new feature will be callbacks, which would then let me collect these events on the server (securely).
Thx
Sven
Ah, definitely not... I use them in the Spark-CLI.
Good point @kennethlimcp ! That code is here:
https://github.com/spark/spark-cli/blob/master/js/commands/SubscribeCommand.js
I know it's not a Spark API limitation, but waiting for callbacks/webhooks is a better strategy.
If your goal is to listen to events for more than just short-term testing, listening to SSE on the server isn't a good idea.
You don't want to consume server resources hanging on to connections long term. That's not the intent of SSE.
As with most things, it all depends on what you're trying to achieve.
Great write-up, thanks. I can't figure out what value TTL provides. I don't see any way to query the event stream to get events that are still live. It seems that if you miss the publish, you miss it for good.
It also doesn't seem to be published, so I can't leverage it in some downstream system.
Appreciate any info on this.
- Mark
Hi @mwdll
The TTL, or time to live, value is set to 60 seconds in all the published responses. To the best of my knowledge this is a 'reserved for future expansion' thing and is not used right now.
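Just to show where it appears, every published event comes across the stream as an event:/data: pair, with the ttl inside the JSON payload. Something like this (the values here are made up purely for illustration):

event: Uptime
data: {"data":"5:23:10","ttl":"60","published_at":"...","coreid":"..."}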
The stream should be continuously broadcasting (including keep-alives every 9 seconds), but it does die from time to time. There are three variables: the core, the cloud, and the browser, and generally speaking it is the browser that fails for me, but it is sometimes hard to know. You can use curl to log events to a file on the command line on a PC too.
You can remove the connect button and just always connect to the stream when the html page loads. That would give you the best chance of restarting, but it makes debugging harder.
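If you go that route, a rough sketch of the idea looks something like this (the device ID, access token, and the 5-second retry delay are just placeholders I picked; the event listeners would be the same ones from the tutorial):

// Open the stream as soon as the page loads, and reopen it if it dies.
var eventSource;

function connectToStream() {
    eventSource = new EventSource(
        "https://api.spark.io/v1/devices/DEVICE_ID/events/?access_token=ACCESS_TOKEN");

    // Add your addEventListener() handlers here, exactly as in the tutorial.

    eventSource.onerror = function() {
        // The connection failed; close it and try again shortly.
        eventSource.close();
        setTimeout(connectToStream, 5000);
    };
}

window.onload = connectToStream;

Note that the browser will usually try to reconnect on its own; the explicit retry above is just a fallback for when it gives up.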
If you are worried about missed events, you should consider Spark.variables instead. With Spark.publish, as in this tutorial, you get push-type behavior, but with Spark.variables you get pull-type behavior. Both are good; they just have different uses.
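To give a feel for the pull side, here is a rough browser sketch (not from the tutorial) that polls a variable over the REST API every 15 seconds. It assumes the core has already registered the variable with Spark.variable(), and the device ID, access token, variable name, and the element id on the page are all placeholders:

// Poll a Spark.variable called "my_var_1" every 15 seconds (pull-type behavior).
var url = "https://api.spark.io/v1/devices/DEVICE_ID/my_var_1?access_token=ACCESS_TOKEN";

function pollVariable() {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url, true);
    xhr.onload = function() {
        var response = JSON.parse(xhr.responseText);
        // The cloud wraps the value in a JSON object; the value itself is in "result".
        document.getElementById("my_var_1").textContent = response.result;
    };
    xhr.send();
}

window.setInterval(pollVariable, 15000);
pollVariable();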
I am not sure what you mean by:
[quote="mwdll, post:47, topic:3469"]
It also doesn't seem to be published, so I can't leverage it in some downstream system.
[/quote]
So if you are having a specific problem, let me know and I will try to help.
Thanks @bko. That was exactly what I needed to know, that I'm not missing some great capability related to TTL. I was hoping there was some short-term storage of events that TTL was defining… maybe that is for the future. I want to capture events on a server, but I don't want to listen constantly. I will have to wait for webhooks and keep using a REST call to my server using TCPClient.
@bko, I've played with both of these tutorials with a great deal of success! They're excellent write-ups and the firmware and HTML are easy to follow!!!
I've searched at great length to find some example HTML just to show variables. That would seem an easier task than displaying an event, something that updates a web page as often as specified, but I've come up short trying to modify these Spark.publish examples to show variables instead of events. Frankly, I think it's just a matter of retrieving, parsing, and displaying the JSON, but again, comin' up short… Any chance you could shed some light on how to do such a simple task?
You can use Spark.publish() to 'publish' the variables you need.
Example:
// publishjson.ino -- Spark Publishing Example
unsigned long lastTime = 0UL;
int my_var_1 = 0;

void setup() {
}

void loop() {
    unsigned long now = millis();
    // Every 15 seconds, publish the current value of my_var_1
    if (now - lastTime > 15000UL) {
        lastTime = now;
        Spark.publish("my_var_1", String(my_var_1));
    }
}
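On the HTML side you would then listen for the "my_var_1" event name the same way the tutorials listen for the uptime event. A minimal sketch (the device ID and access token are placeholders, and it assumes the page has an element with id "my_var_1"):

// Listen for the "my_var_1" events published by the sketch above.
var eventSource = new EventSource(
    "https://api.spark.io/v1/devices/DEVICE_ID/events/?access_token=ACCESS_TOKEN");

eventSource.addEventListener("my_var_1", function(e) {
    // e.data is a JSON string like {"data":"0","ttl":"60","published_at":"...","coreid":"..."}
    var parsed = JSON.parse(e.data);
    document.getElementById("my_var_1").textContent = parsed.data;
}, false);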
I will write something up tonight and start a new thread on reading Spark.variables with an HTML web page.
I put this up over here:
Hi @bko, I am just at #thingscon in Munich, met @zach, and solved my problem of consuming these events with a plain node.js program, i.e. from the server. I stripped the code out of the Spark CLI into a small standalone program and am posting it here for others in case they need it. The code below uses the node request library.
var request = require('request');
var extend = require('xtend');

// Open the SSE stream (fill in your own device ID and access token in place of the xxx's).
var requestObj = request({
    uri: 'https://api.spark.io/v1/devices/xxx/events?access_token=xxx',
    method: "GET"
});

var chunks = [];

// Collect non-empty lines until a "data:" line completes an event, then process it.
var appendToQueue = function(arr) {
    for (var i = 0; i < arr.length; i++) {
        var line = (arr[i] || "").trim();
        if (line == "") {
            continue;
        }
        chunks.push(line);
        if (line.indexOf("data:") == 0) {
            processItem(chunks);
            chunks = [];
        }
    }
};

// Turn the collected "event:" / "data:" lines into one object and print it.
var processItem = function(arr) {
    var obj = {};
    for (var i = 0; i < arr.length; i++) {
        var line = arr[i];
        if (line.indexOf("event:") == 0) {
            obj.name = line.replace("event:", "").trim();
        }
        else if (line.indexOf("data:") == 0) {
            line = line.replace("data:", "");
            obj = extend(obj, JSON.parse(line));
        }
    }
    console.log(JSON.stringify(obj));
};

// Each chunk from the HTTP response may contain several lines; split and queue them.
var onData = function(event) {
    var chunk = event.toString();
    appendToQueue(chunk.split("\n"));
};

requestObj.on('data', onData);
Hi @hansamann
Thanks for that! Node.js is not my thing, so it will really help folks. Maybe you even want to write up your own tutorial using it; that would be great!
I wish I was there at ThingsCon with you and @zach! Munich is a great city and I really enjoyed Wursthaus in der Au and, of course, the beer. Have fun!
Oops… Berlin, I meant
Well, you won't find my favorite Bavarian pub there, but it is still a nice place. And there's beer.
@bko, that new writeup was exactly what I was after! A simple writeup for displaying variables AND published streams gets people off on the right foot! I tried pulling apart spark helper, but as wonderful as that tool is, the code behind it was just too complex to try to simplify. Thanks so much for all that you do!
Hi @bko, just did that.
Here we go:
http://techblog.hybris.com/2014/05/02/consuming-spark-core-sse-events-via-node-js/
Hi @captnkrunch
I put a new mini-tutorial up over here that builds a table of published core data dynamically.
I hope this gives you some ideas for your database record idea.
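The rough idea, sketched here with placeholder names rather than the actual code from that thread, is to add or update one table row per core ID as events arrive, so every core that publishes shows up as its own "record":

// Maintain one table row per core, updated whenever that core publishes.
// Assumes the page has a <table id="coreTable"> and an "Uptime" event is being published.
var eventSource = new EventSource(
    "https://api.spark.io/v1/devices/events/?access_token=ACCESS_TOKEN");

eventSource.addEventListener("Uptime", function(e) {
    var parsed = JSON.parse(e.data);   // {"data":"...","ttl":"60","published_at":"...","coreid":"..."}
    var rowId = "core-" + parsed.coreid;
    var row = document.getElementById(rowId);

    if (!row) {
        // First event from this core: append a new row for it.
        row = document.getElementById("coreTable").insertRow(-1);
        row.id = rowId;
        row.insertCell(0).textContent = parsed.coreid;
        row.insertCell(1);
        row.insertCell(2);
    }

    // Update the core's latest value and timestamp.
    row.cells[1].textContent = parsed.data;
    row.cells[2].textContent = parsed.published_at;
}, false);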