Tutorial: Getting Started with Spark.publish()

So I am having the same problem that @MiltSchreiber was having.

The webpage opens, and when I click on Connect it says “Opened!”; however, the core just breathes cyan and the page says it is waiting for data.

I have checked the accessToken and the deviceID, and I have reflashed, saved, and reloaded all of the code.

I am using Google Chrome, coding in the Spark IDE, and flashing from there.

I just can’t seem to find the issue that is causing this…

Any suggestions? I know I am not giving much information on this but I am not sure what I need to provide. I can get whatever you may need to help troubleshoot this.

Thank you for your help

Hi @busterdavidson

Let’s try to figure out which part is not working, the core or the web.

There are currently three users publishing the “Uptime” event to the public event stream that this tutorial uses.

Do you see your core’s device ID among them? If so, that means you are publishing, just not receiving. If not, we can look at the code on the core. You would likely be one of the first two, since the third entry is for the next tutorial, which uses JSON.

In Chrome you can open the Javascript console by pressing Ctrl-Shift-J on Windows. Reload the web page and click Connect again. See if any messages come up on the console, like “Opened!” or “Errored!”. If you get the opened message, click on the Network tab at the top of the debug console to see the streams in use. You should see a stream with your access token, and its size should be increasing periodically.
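If you would rather check the stream outside the browser entirely, a few lines of Python can watch the same public stream. This is just a sketch, assuming you have the requests package installed; put your real access token in place of the placeholder:

#!/usr/bin/python
# Sketch: watch the public "Uptime" event stream from the command line
import requests

url = 'https://api.spark.io/v1/events/Uptime/?access_token=<<tokenhere>>'
resp = requests.get(url, stream=True)

# Print the raw server-sent event stream line by line as it arrives
for line in resp.iter_lines():
    if line:
        print line

If events print here but the web page stays silent, the problem is on the web side; if nothing prints, the core is not publishing.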

Let me know and we can go through the steps to debug it.


No, mine was not listed, but at the time I also was not running that piece of code. I can reflash my core and attempt again.

I appreciate your help!

@bko, what if I want to continuously listen to events published by my core and simply print each one on a new line in a webpage? :smiley:

So for logging, I like to use curl with the --no-buffer option piped into a file; it works great:

curl --no-buffer https://api.spark.io/v1/events/Uptime/?access_token=<<tokenhere>> > test.txt

If you really need a web page, then have the event handler find a table you have set up in the HTML by id and call insertRow() on it, followed by insertCell() on the new row.


Is there any iOS example of how to get the published events sent by a Spark Core? I mean an iPhone app instead of the HTML page.

The events published by your core are HTML5 server-sent events, so any framework that can talk to the modern web should be able to handle them. The easiest way to handle them is with a Javascript interface (web or stand-alone), but there are lots of other ways, I am sure.

The code for the official iOS app from Spark is here, but it does not handle server-sent events.

Sorry I don’t have more for you; I wish I had more time to learn how to do this on iOS too!

Thanks for the reply. I will also work on that. If I can solve the issue, I will share it with you all.

Hello. Thanks for the publish() tutorial. Maybe someone could help fill in a bit more of the picture. I want to run a Linux process in Python or Perl that watches for Spark.publish() events and then takes action. I could write the script to simply poll by periodically doing an HTTP GET to the Spark cloud, maybe once a minute. Or I could use the publish() interface? Does anyone have sample code using Perl or Python? If I understand things right, the server code connects to the Spark Cloud, keeps the TCP connection open, and then waits for events?

Hi @geeklair. Yes, this is correct: the server opens a connection and then waits for events. It uses part of the HTML5 spec called “Server-Sent Events” (SSEs). I did a quick Google search for SSE clients in Python, and I found this:

https://pypi.python.org/pypi/sseclient/0.0.8

Let us know if that works for you?


Thanks! It works well. Here is the code I used, for those trying this at home (‘000’ replacing my secret values). Start with @bko’s publishme.ino code above, and then use this Python after installing the SSE package @zach recommends in the link above:

#!/usr/bin/python
# Stream Spark.publish() events with the sseclient package (Python 2)
from sseclient import SSEClient

deviceID = '000000000000'
accessToken = '0000000000000000'
sparkURL = 'https://api.spark.io/v1/devices/' + deviceID + '/events/?access_token=' + accessToken

# Opens the connection and blocks, yielding each event as it arrives
messages = SSEClient(sparkURL)

for msg in messages:
    print 'Processing Spark Event: ', msg

All is good: you run the Python script, it connects to the Spark Cloud, then holds the connection open waiting for events. The output from @bko’s program is shown below. The only curious bit is why the first event comes back empty…

Output:

$ python ./notify-listener.py
Processing Spark Event:
Processing Spark Event:  {"data":"0:33:0","ttl":"60","published_at":"2014-10-15T00:31:52.888Z","coreid":"00000"}
Processing Spark Event:  {"data":"0:33:15","ttl":"60","published_at":"2014-10-15T00:32:07.893Z","coreid":"00000"}
Processing Spark Event:  {"data":"0:33:30","ttl":"60","published_at":"2014-10-15T00:32:22.889Z","coreid":"00000"}
Processing Spark Event:  {"data":"0:33:45","ttl":"60","published_at":"2014-10-15T00:32:37.894Z","coreid":"00000"}

I suppose something to ponder is how resilient this is compared to polling… While I won’t get data promptly by polling once a minute, each GET is an independent TCP connection and quickly works or does not. If I use publish(), I wonder what I should do about timeouts, reconnects, network burps, etc. That’s not really a Spark Core issue, so I won’t ask Python SSE questions here, but it is worthwhile for the community to think about as folks build reliable infrastructure with Spark bits. That is probably a different discussion thread.
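For now, the simplest approach I can think of is to wrap the stream in a retry loop. This is only a sketch built on the same sseclient setup as above; the broad exception handler and the ten-second back-off are illustrative guesses, not tested values:

#!/usr/bin/python
# Sketch: reconnect wrapper around the event stream (illustrative only)
import time
from sseclient import SSEClient

sparkURL = 'https://api.spark.io/v1/devices/000000000000/events/?access_token=0000000000000000'

while True:
    try:
        # Blocks and yields events until the connection drops
        for msg in SSEClient(sparkURL):
            print 'Processing Spark Event: ', msg
    except Exception as e:
        # Network burp, timeout, or server close: wait, then reopen the stream
        print 'Stream dropped (%s), reconnecting in 10 seconds' % e
        time.sleep(10)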


Hi @geeklair

You might want to look at my other tutorials on Spark.variables and Spark.functions. Because Spark.publish can do a burst of up to 4 events and an average of 1 per second, I can get quicker responses from publish than I can from, say, Spark.variable, but it depends on how “far” you are from the cloud in terms of network latency. With the local cloud, you can make latency effectively zero.

The empty event that you got is likely one of the “keep-alives” that the cloud sends down the event connection every 9 seconds when there are no other events. If you make your event publish every 60 seconds, you will likely see more of these.
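If those keep-alives clutter your log, you can skip them in the loop. A sketch, assuming the sseclient events expose their payload as msg.data, which is empty for a keep-alive:

# Skip the empty keep-alive events before processing real ones
for msg in messages:
    if not msg.data:
        continue  # keep-alive: no payload, nothing to do
    print 'Processing Spark Event: ', msg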

Glad you got the Python running too!


Really good tutorial. I have just one question: can I change the timestamp so it will be correct for my timezone somehow? Does anyone know how to do it?

If you mean the timestamp returned by the Spark cloud to you, it is in “Zulu” or GMT time. In order to convert it on a web page, you can use the Javascript Date object like this:

<!DOCTYPE html>
<html>
<body>
<button onclick="myFunction()">Convert it</button>

<p id="datespot">Unknown</p>

<script>
function myFunction() {
    var d = new Date("2014-10-15T00:31:52.888Z"); // example--put date stamp from cloud here
    document.getElementById("datespot").innerHTML = d.toString();
}
</script>

</body>
</html>

Thank you. Now, with your code as input, I found a Javascript reference.

For me, the following worked as well as your code:

var d = new Date(parsedData.published_at);

tsSpan.innerHTML = "At timestamp " + d.toLocaleString();
tsSpan.style.fontSize = "9px";

Thanks much! As a newbie and non-C programmer, I wasn’t familiar with sprintf() and its available formatting options. A good reference can be found here.

For example,

sprintf(publishString, "%02u:%02u:%02u", hours, min, sec);

will pad the time display with leading zeros (e.g. “03:08:14” instead of “3:8:14”).

This leads me to a general observation and suggestion regarding the Spark Core documentation and tutorials. At present, they seem to presume the user has at least a rudimentary knowledge of certain C programming commands and concepts. I don’t. I’m an entrepreneur who comes from the marketing/business development side, not the CS side, but I have some ideas I’m playing out for personal hobby projects and, possibly, commercial projects. I’m hoping the documentation and tutorials will get fleshed out to provide more handholding (or pointers to other sites) for dweebs like me who had to use Google to figure out what the heck ‘sprintf()’ was. Or to provide a formal command reference for things like ‘Spark.publish()’. Stuff like that. Maybe I can help with that.


I agree with much of what you say. Thanks for the tip on sprintf(). I would add that the community is still quite young: the Spark Core was a Kickstarter campaign in May 2013, and since then I get the feeling the Spark Core faithful have been victims of their own success. Nevertheless, there are members of the community who wear their underpants on the outside of their trousers. If you run into those guys, I think there is no problem you have (Spark-wise) that cannot be sorted.

Having just gone through the mill on the Spark.publish() thing, could I request one action and recommend another?

Could I request that the documents section have more examples of different iterations of code? That way it is easier for the uninitiated to pick up some clues as to how to go about things.
A general hints and tips section would be good. I could start that off by saying: 1) make sure the folder you want to compile in Spark Dev has only one .ino file and that none of the file names contain spaces; 2) if you Spark.publish() an event, the corresponding subscribe command must use exactly the same name (it is case sensitive). And finally, if you get stuck: 1) go back to the last time the code did work and try again; 2) if, after 24 hours, much gnashing of teeth, and several gallons of coffee, you still cannot make sense of it, search the forums and then ask the community; they are simply the best asset of the Spark Core platform.
:smile: Most of all, enjoy what you do. :wink:
Happy new year.


This is a wonderful tutorial! I am, however, confused about the proxy stuff. I need to develop a simple web application that uses my core’s published data, but I do not want to put my access token in that page, as per your recommendation. I am not too good at Javascript or PHP, but I can generally hack my way through. So if I want to use the proxy linked to above, I don’t have to modify it in any way to use it with publish rather than with variables? I just use Ajax to get the access token? Wait, no: the proxy builds the URL needed to access the core. The Javascript passes the core ID and the resource I want from the core to the proxy, and the proxy returns the URL I need to pass to the EventSource() constructor?

Hi @giantmolecules

The main idea is to store your access token in a secure place on your server, not visible to the general web, but to allow a substitution step that adds the token to the URL used for api.spark.io.

If, on the other hand, you can store the web page locally and ensure that no one on the general internet can get to it, then you can just code your access token into the HTML file, as the examples above do. I do this with Dropbox, for instance.

There are a lot of ways to accomplish the URL substitution for the general internet: you can use PHP on a server, you can use a proxy server, etc., but the idea is the same. The HTML the user sees does not have the token visible, and there is a hidden substitution process that adds the access token to any URL that hits api.spark.io.
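To make that concrete, here is a minimal sketch of the substitution idea in Python, assuming Flask and the requests package; the route name and the environment variable are my own illustrations, not the exact proxy linked above:

# Sketch: token-substitution proxy (illustrative; not the proxy linked above)
# The token lives on the server, so the browser never sees it
import os
import requests
from flask import Flask, Response

app = Flask(__name__)
TOKEN = os.environ['SPARK_ACCESS_TOKEN']  # kept out of the HTML entirely

@app.route('/spark/<path:resource>')
def proxy(resource):
    # Re-issue the request to api.spark.io with the token added server-side
    url = 'https://api.spark.io/v1/%s?access_token=%s' % (resource, TOKEN)
    upstream = requests.get(url, stream=True)
    return Response(upstream.iter_content(chunk_size=None),
                    content_type=upstream.headers.get('Content-Type'))

The web page then points its EventSource() at /spark/devices/<your-device-id>/events/ on your own server instead of at api.spark.io directly.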

Hope this helps to clear it up.

Hi bko -- thanks, I understand the why; the how eludes me. In terms of talking my way through a transaction, does the above seem correct? I’m just trying to verify that I’m reading the PHP and Ajax stuff correctly.