Using Spark.publish() with Simple JSON Data

bko… Great instructor… Still looking for debugging help. Have invested HOURS today attempting to learn the WEB code and debugging tools. Very little progress, just confusion.

Went back to loading the core directly from the CLOUD.
Created many versions of uptime.html with various debugging aids.
Again, some basic questions.

CLOUD Response

  1. IF I send the WRONG ID or Token, I PROPERLY get a rejection from the spark.io CLOUD.
  2. When I send the correct numbers, I simply get “pending”. Why is the CLOUD NOT sending me a timeout???

Attempted to use your debug aid:

view-source:https://api.spark.io/v1/events/?access_token=MYTOKEN

ONCE and only once did I get a response: a LONG list of core data (over 250 values). And the question here is why does the response return many other cores’ data, from other users??? I see BaseTemp, AccMag and MY core Uptime values.
Then, by bit-bucket magic, I could not get ANY responses again???

Your Uptime.html

I placed console debug statements in your uptime.html.
I blow through the code to the end without ever seeing the Opened! or Errored! sections execute. Still attempting to learn this code, but I expected to see at least ONE of these.

Postman

Tried to format manual commands for Postman, but these also remain in the forever-pending state.

After this much time, and knowing others are easily getting responses, I must be doing something simple, but repetitively wrong.

Hi @LZHenry

When you write your code for the Spark core, you get to decide when and how often to publish events. In the example above the event is published every 15 seconds, which is a good choice since if you go too fast, the Spark cloud software will limit you. So you should see new data on the web page only every 15 seconds with the above code. This also means that when you start the connection with the “Connect” button, you sometimes need to wait as long as 15 seconds for the first event to reach you.

By default, your event is public but you can change that. Public means that anyone with an access token can see all the public events. The debugging aid I mentioned, the view-source: URL, was set to view all public events so that you could make sure your internet and router connections worked, which they certainly appear to do, given that you saw all those events. This connection does not time out–it should get a carriage return every 9 seconds as a keep-alive and then show whatever events you are listening for as they are sent.
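
If you want to sanity-check the connection by hand, the connection code in uptime.html boils down to something like this (a stripped-down sketch, not the full page):

var eventSource = new EventSource("https://api.spark.io/v1/events/?access_token=<<access_token>>");

eventSource.onopen = function () {
    console.log("Opened!");    // should appear shortly after connecting
};

eventSource.onerror = function () {
    console.log("Errored!");   // appears on a bad token or a dropped connection
};

eventSource.addEventListener("Uptime", function (e) {
    console.log(e.data);       // named events need a named listener
}, false);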

I don’t know why you are having such problems and at this point I can only recommend going back to the above example code, changing it only to add your access token and device id, and trying it in the most recent version of Chrome for your platform, which I thought was Windows. Load up the web page with a file:/// URL, click the connect button once, and wait at least one minute to make sure your core is transmitting before doing anything else.

Are you running Windows 7 or 8? What version of Chrome are you running? You can type “chrome://chrome” in the URL box or click the about menu choice to find the version.

Thanks again for the quick response, and for clearing my understanding of the public messages and responses.

I have reloaded the core multiple times with your base code, from the CLOUD, just to be stable. 15 seconds is not an issue… I have left it sitting for hours without a response. Tried Chrome, IE and FF; all show pending after Connect is pressed.

9 seconds… should I be seeing this transmission in the console window, or see it in the network window???

Chrome claims it is up to date: 33.0.1750.154m
All three browsers hang in the same way.
Postman hangs also.
I am running W7/64.

I am starting to conjecture wild things, like my Comcast router or my AVIRA virus protector being involved. Others are succeeding; this should not be so hard. Can you provide some additional debug suggestions that I can use with Postman??? I have used this tool to debug my function and variable implementations.

I don't think Postman does events--just HTTP POST requests, so I am not sure how that will help.

You have a recent Chrome on Win7/64, which I have used as well. We use Kaspersky on that PC so I can't help with AVIRA.

One other point: you can use any access token to view all the public events, but if you have a URL like the example above with both /devices/<<device id>> and ?access_token=<<my token>>, they have to be a matched set. You have to use a current access token that is paired with the device id in the cloud.
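
For example, the stream for a single core looks like this, with both placeholders filled in from the same account:

https://api.spark.io/v1/devices/<<device id>>/events/?access_token=<<my token>>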

Maybe you should try cutting and pasting them from the webIDE again. You can get your access token by clicking the little gear at the bottom left side of the editor. Your device id can be found by clicking the target button on the left side and then clicking the little ">" triangle for the particular core you are using. My advice: cut and paste, don't re-type.

Thanks… I will stop bothering you for tonight and go back to some other code.

Shut off AVIRA… no change in response.

With the SINGLE response I did get, I am confused because the return message provided 5 time-stamped messages from my device interspersed with the other 200 messages returned.

Yes, I have been copy/pasting the codes from my master file, which I have been using for 2 months; I have rechecked on the CLOUD, just because I am now paranoid. Not knowing the WEB things, I am out of debugging things to attempt.

Hi bko!
me again :slight_smile: How can I collect the published data? Google Spreadsheets allows a 1-minute frequency but my published events can come as often as once a second.
thanks!

Hi @Dup

It depends on what you mean by collect it. It is intentionally hard to write files from a web browser for security reasons, and the only browser I know that even lets you is IE (and that’s a good reason not to use it!).

If you just want to log data to a file, any host on the net with curl could do:

curl --no-buffer https://api.spark.io/v1/devices/<<device_id>>/events/EventName/?access_token=<<access_token>> > log.txt

If you just want to graph it, that’s easy on a web page with some JavaScript and the <canvas> tag.
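
Something as bare-bones as this can get you started (a sketch that assumes the event is named Uptime and its data field is a small number; scaling is left as an exercise):

<canvas id="plot" width="400" height="100"></canvas>
<script>
// Plot each event's numeric payload as a dot, moving left to right.
var canvas = document.getElementById("plot");
var ctx = canvas.getContext("2d");
var x = 0;
var es = new EventSource("https://api.spark.io/v1/events/Uptime/?access_token=<<access_token>>");
es.addEventListener("Uptime", function (e) {
    var value = parseFloat(JSON.parse(e.data).data);  // the payload rides in the JSON "data" field
    ctx.fillRect(x, canvas.height - value, 2, 2);
    x = (x + 2) % canvas.width;                       // wrap around at the right edge
}, false);
</script>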

You could use @kareem613’s Atomiot service, which looks really easy and nice.

You could write your own Perl/Java/JQuery/C program to log events. Or listen for them and POST them on a web form.
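
In node.js, for instance, a rough logger needs only the built-in modules. This sketch ignores chunk boundaries (a data line could be split across two chunks), so treat it as a starting point:

var https = require('https');
var fs = require('fs');

var url = 'https://api.spark.io/v1/devices/<<device_id>>/events/?access_token=<<access_token>>';

https.get(url, function (res) {
    res.setEncoding('utf8');
    res.on('data', function (chunk) {
        chunk.split('\n').forEach(function (line) {
            if (line.indexOf('data:') === 0) {  // keep only the SSE data lines
                fs.appendFileSync('log.txt', line.slice(5).trim() + '\n');
            }
        });
    });
});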

There are a lot of ways other than Google to deal with your data–it all depends on what you have to use, and what you want to do with the data.

If you genuinely have a need for per second logging, you should probably take a periodic burst approach.
In many cases pushing data will take more than a second, and given that it’s a blocking operation, you’ll be skewing your data anyway.
Don’t forget, the internet lies between your device and anything you try to log to.

I suggest you build up as much data as you can locally on the device, then every minute or so send it all up at once.

Days of data can be pooled in memory (a very limited amount) or on some persistent storage like an SD card.

What are you logging that needs to be every second?

Hi,
Yes, the plan is to use an SD card and I was hoping I could send it to something like Dropbox for storage then analysis. I want to visualize the data from an accelerometer. How could I send the data?
Thanks!

Hi @Dup

Accelerometers are a lot of fun to goof around with! If you are looking at big gestures or big movements relative to periods of little or no movement, then you can figure lots of stuff out, like in a Wii-mote type application.

But walking around with one or moving one around and trying to do things like double integrate the acceleration to get relative position is really, really hard. There is a lot of noise and the numeric stability is really tough to manage. You are also integrating over time and so the exact timing is also critical, which seems like a challenge on the Spark core.

So what kinds of things were you thinking to do with the accelerometer data?

Maybe a little research on what other folks have done with, say, Arduino and accelerometers would be a good start?

Hi bko!
I want to measure vibration generated by mechanical equipment, so I would like to capture data at a high rate so I can create a graph for analysis and thresholds for alarms. Can I save data on an SD card, encrypt it, then send it to Dropbox via TCPClient? :smile:
You mentioned any host on the net with curl could do it; any suggestions?
Thanks!

Hi @Dup

Ok–vibration seems like a good area!

First off: any host on the net including your own computer can run curl. If you already have a web server, that could do it too. Almost anything will work just to log data.

Getting your relatively high-rate data off of the core is going to be hard. It can be done, but not easily with Spark.publish() or Spark.variable() since they are designed for lower-rate applications. Dropbox is hard to get your data into since they require authenticated access, but there are other services. The best choice in my opinion would be a dedicated server on the net that you can control–either your own PC or a rented host.

Have you thought about doing more analysis on the core? I would think that having things like:

  • When the vibration goes from near zero to above a threshold, send a Spark.publish() event saying the machine started
  • When the vibration goes from above the threshold to near zero, publish an event saying the machine stopped.
  • When the machine is running, analyze the data on the core (I would use FFT–there is another thread for that) and report significant changes, such as when the peak vibration goes above some limit or when the harmonics increase. Publish an alarm event when above the alarm threshold limit.
  • The threshold limits could get set via a Spark.function() so that your PC can dynamically adjust the limits when needed.
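
On the PC side, the listening and adjusting ends of that scheme stay small. A sketch, with hypothetical event and function names ("alarm" and "setLimit"):

var es = new EventSource("https://api.spark.io/v1/devices/<<device id>>/events/?access_token=<<access_token>>");
es.addEventListener("alarm", function (e) {
    console.log("Vibration alarm: " + JSON.parse(e.data).data);
}, false);

// Push a new threshold down to the core through the hypothetical Spark.function() named "setLimit"
var xhr = new XMLHttpRequest();
xhr.open("POST", "https://api.spark.io/v1/devices/<<device id>>/setLimit");
xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
xhr.send("access_token=<<access_token>>&args=42");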

You are going to be a lot better off on the communication side by doing more work on the core.

The best way to just log data, in my experience, is with a processor writing data to an SD card. You don’t lose data very easily, and transferring the data to a PC for analysis is easy–you just walk the card over and plug it in. That might be step one of your Spark project, so you can get baseline data for what normal operation looks like.

Thanks bko!
I appreciate the input and recommendations! :smile:
I will check out the web services out there. I was thinking about AWS but it seems a bit complicated for me. There is also Rackspace… Also, maybe I should spend more time evaluating Xively.
Also, I will look at FFT. Good point on not being so lazy and simply walking over to get the SD card hehe

Thanks again!

Thx @bko for this nice tutorial. I want to consume these events in a node.js application. Unfortunately it does not seem to work with the eventsource node.js library, which seems to work with other SSE servers though. Did anyone get it up and running with a node.js client?

my client.js (run with node client.js):
var EventSource = require('eventsource');

var es = new EventSource('https://api.spark.io/v1/devices/xxx/events/myevent?access_token=xxx');
//var es = new EventSource('http://demo-eventsource.rhcloud.com/');

es.onmessage = function(e){
    console.log(e.data);
};

es.onerror = function(){
    console.log('ES Error');
};

BTW - yes - the event sent out is myevent and I changed that in the Spark core code. It works nicely in curl.

Hi @hansamann

I tried to answer in your other thread. I think you have name mismatch issues that are easy to fix. Hope it works for you!

OK, I made some slight progress. The code below does seem to be triggered by the events I send. The only problem is that the event I receive in the event listener is empty, e.g. {}.

var EventSource = require('eventsource');

var es = new EventSource('https://api.spark.io/v1/devices/xxx/events/myevent?access_token=xxx');
es.addEventListener('myevent', function(event) {
	console.log(event);
});

It logs:
{}
{}
{}
...

Are you seeing the keep-alives every 9 seconds?

I would take myevent out of the URL and listen to all events from the core with that device id.
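
You might also try logging just the data field rather than the whole event object; with some libraries the event's properties do not show up in console.log even when they are there. Something like this (untested on my end):

es.addEventListener('myevent', function(event) {
    console.log(event.data);   // just the payload, not the wrapper object
});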

I’ve removed the specific event name, e.g. just GET …/events in the URL. I am now getting events, but they are all empty in the node.js console.log output. It works nicely with curl, so it must be some kind of eventsource library issue with node.js.

Spark seems to be very node.js friendly; I was wondering what library you use at Spark for consuming events?

Cheers
Sven

Your question about what the Spark team uses is a good one for @Dave, but I know a lot of the team is traveling today. I know that they do like node.js quite a bit.

If on the Spark core you are publishing every minute, you will get 60 secs divided by 9 sec/keep-alive = 6 to 7 keep-alives for every real data event. I would try publishing every 5 seconds for testing.

I am not sure what would be different about node.js versus JavaScript. I would think they would behave the same way. Can you put a break point in the debugger and look around?

For the most part, Node.js and JavaScript are fairly similar, except Node doesn’t operate in a browser context, etc. The stock “EventSource” browser class should handle subscriptions to events.

Here’s a node.js package that can help you play around with sending events; the protocol is pretty simple, and they also include sample client code:

https://www.npmjs.org/package/sse
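
From memory, basic usage on the sending side looks roughly like this (check the package README for the current API):

var SSE = require('sse');
var http = require('http');

var server = http.createServer(function (req, res) {
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end('okay');
});

server.listen(8080, '127.0.0.1', function () {
    var sse = new SSE(server);
    sse.on('connection', function (client) {
        client.send('hi there!');   // each connected client gets this event
    });
});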

Hope that helps!
Thanks,
David
