Tutorial: Getting Started with Spark.publish()

Is there any iOS example of how to get the published event sent by the Spark Core? I mean an iPhone app instead of the HTML page.

The events published by your core are HTML5 server-sent events (SSE), so any framework that can talk to the modern web should be able to handle them. The easiest way to handle them is with a JavaScript interface (web or stand-alone), but there are lots of other ways, I am sure.

The code for the official iOS app from Spark is here, but it does not handle server-sent events.

Sorry I don’t have more for you–I wish I had more time to learn about how to do this on iOS too!

Thanks for the reply. I will also work on that. If I can solve the issue, I will share it with you all.

Hello. Thanks for the publish() tutorial. Maybe someone could help fill in a bit more of the picture. I want to run a Linux process with Python or Perl that watches for Spark.publish() events and then takes action. I could write the script to simply poll by periodically doing an HTTP GET to the Spark cloud, maybe once a minute. Or I could use the publish() interface? Does anyone have sample code using Perl or Python? If I understand things right, the server code connects with the Spark Cloud, keeps the TCP connection open, and then waits for events?

Hi @geeklair; yes, this is correct: the server opens a connection and then waits for events. It uses part of the HTML5 spec called “Server-Sent Events” (SSE). I did a quick Google search for SSE in Python, and I found this:

https://pypi.python.org/pypi/sseclient/0.0.8

Let us know if that works for you?

2 Likes

Thanks! Works well. Here is the code I used, for those trying this at home (‘000’ replacing my secret values). Start with @bko's publishme.ino code above, and then use this Python after installing the SSE package, as @zach recommends with the link above:

#!/usr/bin/python
from sseclient import SSEClient

# Fill in your own device ID and access token ('000' placeholders here)
deviceID = '000000000000'
accessToken = '0000000000000000'
sparkURL = 'https://api.spark.io/v1/devices/' + deviceID + '/events/?access_token=' + accessToken

# SSEClient opens the connection and yields events as they arrive
messages = SSEClient(sparkURL)

for msg in messages:
    print 'Processing Spark Event: ', msg

All is good: you run the Python script, it connects to the Spark Cloud, then holds the connection open waiting for events. The output from @bko's program is shown below. The only curious bit is why the first event comes back empty…

Output:

$ python ./notify-listener.py
Processing Spark Event:
Processing Spark Event:  {"data":"0:33:0","ttl":"60","published_at":"2014-10-15T00:31:52.888Z","coreid":"00000"}
Processing Spark Event:  {"data":"0:33:15","ttl":"60","published_at":"2014-10-15T00:32:07.893Z","coreid":"00000"}
Processing Spark Event:  {"data":"0:33:30","ttl":"60","published_at":"2014-10-15T00:32:22.889Z","coreid":"00000"}
Processing Spark Event:  {"data":"0:33:45","ttl":"60","published_at":"2014-10-15T00:32:37.894Z","coreid":"00000"}

I suppose something to ponder is how resilient this is compared to polling… While I won’t get data promptly by polling once a minute, each GET is an independent TCP connection, and quickly works or does not. If I use publish(), I wonder what I should do for timeouts, reconnects, network burps, etc. That’s not really a Spark Core issue, so I won’t ask python SSE questions here, but it is worthwhile for the community to think about, as folks build reliable infrastructure with Spark bits – probably a different discussion thread.
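For anyone curious, a rough sketch of one way to handle that, in the same Python 2 style as above, is to wrap the SSEClient loop in a retry so a dropped connection just triggers a reconnect. The catch-all exception and the 10-second pause are arbitrary choices on my part, not tested advice.

#!/usr/bin/python
# Sketch only: reconnect whenever the SSE stream drops or errors out.
import time
from sseclient import SSEClient

sparkURL = 'https://api.spark.io/v1/devices/DEVICE_ID/events/?access_token=TOKEN'

while True:
    try:
        # Blocks and yields events until the connection breaks
        for msg in SSEClient(sparkURL):
            print 'Processing Spark Event: ', msg
    except Exception as e:
        # Network burp, timeout, etc. -- wait a bit and reconnect
        print 'Connection lost (%s), reconnecting in 10 seconds...' % e
        time.sleep(10)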

1 Like

Hi @geeklair

You might want to look at my other tutorials on Spark.variables and Spark.functions. Because Spark.publish can do a burst of up to 4 events and an average of 1 per second, I can get quicker responses from publish than I can from, say, Spark.variable, but it depends on how “far” you are from the cloud in terms of network latency. With the local cloud, you can make latency effectively zero.

The empty event that you got is likely the “keep-alives” that the cloud sends down the event connection every 9 seconds when there are no other events. If you make your event publish every 60 seconds, you will likely see more of these.
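If the keep-alives are a nuisance, a small tweak to the Python loop above can skip them and decode the JSON of real events. This is just a sketch; it assumes the sseclient events expose the payload as msg.data and that the field names match the JSON shown in the output above.

import json

for msg in messages:
    # Keep-alives arrive as events with an empty data payload -- skip them
    if not msg.data:
        continue
    event = json.loads(msg.data)
    print 'Data:', event['data'], 'published at', event['published_at']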

Glad you got the Python running too!

1 Like

Really good tutorial. I have just one question: can I change the timestamp so it will be correct for my timezone somehow? Does anyone know how to do it?

If you mean the timestamp returned by the Spark cloud to you, it is in “Zulu” or GMT time. In order to convert it on a web page, you can use the JavaScript Date object like this:

<!DOCTYPE html>
<html>
<body>
<button onclick="myFunction()">Convert it</button>

<p id="datespot">Unknown</p>

<script>
function myFunction() {
    var d = new Date("2014-10-15T00:31:52.888Z"); // example--put date stamp from cloud here
    document.getElementById("datespot").innerHTML = d.toString();
}
</script>

</body>
</html>
1 Like

Thank you. Now, with your code as input, I found a JavaScript reference.

For me, the following worked as well as your code:

var d = new Date(parsedData.published_at);

tsSpan.innerHTML = "At timestamp " + d.toLocaleString();
tsSpan.style.fontSize = "9px";
1 Like

Thanks much! As a newbie and non-C programmer, I wasn’t familiar with sprintf() and its available formatting options. A good reference can be found here.

For example,

sprintf(publishString,"%02u:%02u:%02u",hours,min,sec);

will pad the time display with leading zeros (e.g. “03:08:14” instead of “3:8:14”).

This leads me to a general observation and suggestion regarding the Spark Core documentation and tutorials. At present, they seem to presume the user has at least a rudimentary knowledge of certain C programming commands and concepts. I don’t. I’m an entrepreneur who comes from the marketing/business-development side, not the CS side, but I have some ideas I’m playing with for personal hobby projects and, possibly, commercial projects. I’m hoping the documentation and tutorials will get fleshed out to provide more handholding (or pointers to other sites) for dweebs like me who had to use Google to figure out what the heck ‘sprintf()’ was. Or to provide a formal command reference for things like ‘Spark.publish()’. Stuff like that. Maybe I can help with that.

1 Like

I agree with much of what you say. Thanks for the tip on sprintf(). I would add that the community is still quite young: the Spark Core was a Kickstarter campaign in May 2013, and since then I get the feeling the Spark Core faithful have been victims of their own success. Nevertheless, there are members of the community who wear their underpants on the outside of their trousers. If you run into those guys, I think there is no problem you have (Spark-wise) that cannot be sorted.

Having just gone through the mill on the Spark.publish() thing, could I request one action and recommend another?

Could I request that the documents section have more examples of different iterations of code? That way it is easier for the uninitiated to pick up some clues as to how to go about things.
A general hints and tips section would also be good. I could start that by saying: 1) make sure the folder you want to compile in Spark Dev has only one .ino file and that none of the file names contain spaces; 2) if you Spark.publish() an event, the corresponding subscribe command must use exactly the same name (it is case sensitive). And finally, if you get stuck: 1) go back to the last time the code did work and try again; 2) if after 24 hours, much gnashing of teeth, and several gallons of coffee you still cannot make sense of it, search the forums and then ask the community. They are simply the best asset of the Spark Core platform.
:smile: Most of all, enjoy what you do. :wink:
Happy new year.

2 Likes

This is a wonderful tutorial! I am, however, confused about the proxy stuff. I need to develop a simple web application that uses my core’s published data, but I do not want to put my access token in that page, as per your recommendation. I am not too good at JavaScript or PHP, but I can generally hack my way through. So if I want to use the proxy linked to above, I don’t have to modify it in any way to use it with publish rather than with variables? I just use Ajax to get the access token? —wait, no: the proxy builds the URL needed to access the core. The JavaScript passes the core ID and the resource I want from the core to the proxy, and the proxy returns the URL I need to pass to the EventSource() constructor?

Hi @giantmolecules

The main idea is to store your access token in a secure place on your server not visible to the general web, but allow substitution in the URL used for api.spark.io to add the token to the URL.

If on the other hand you can store the web page locally and ensure that no one on the general internet can get to it, then you can just code your access token into the HTML file as the examples above do. I do this with Dropbox for instance.

There are a lot of ways to accomplish the URL substitution for the general internet: you can use PHP on a server, you can use a proxy server, etc., but the idea is the same. The HTML the user sees does not have the token visible, and there is a hidden substitution process that adds the access token to any URL that hits api.spark.io.
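To make the substitution idea concrete, here is a very rough sketch of the same thing in Python rather than PHP: a hypothetical CGI script (the names and path handling are illustrative only, not the actual proxy discussed below) that appends the token on the server so it never appears in the page.

#!/usr/bin/python
# Hypothetical CGI sketch: the page requests proxy.py?DEVICE_ID/events and
# this script forwards it to api.spark.io with the token added server-side.
import os, urllib2

ACCESS_TOKEN = 'your_access_token_here'   # stays on the server, never in the HTML

path = os.environ.get('QUERY_STRING', '').replace('//', '/')
url = 'https://api.spark.io/v1/devices/' + path + '?access_token=' + ACCESS_TOKEN

print 'Content-type: application/json'
print
print urllib2.urlopen(url).read()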

Hope this helps to clear it up.

Hi @bko -- thanks, I understand the why; the how eludes me. In terms of talking my way through a transaction, does the above seem correct? I'm just trying to verify that I’m reading the PHP and Ajax stuff correctly.

OK here is the scenario for the PHP server. Let’s say you have a web server running Apache on a host somewhere. On that web server you want to serve up an HTML page that has data from a Spark core via publish or variable or even a control function that does something on the core, but you don’t want your access token to be visible. What do you do?

You set up @wgbartley 's PHP script and .htaccess on your server so that URLs in your HTML page are automatically rewritten to include your access token.

The proxy takes relative URLs and adds all the stuff to make it an api.spark.io URL, as I recall. Your event source URL needs to be rewritten, so you just write it relative with this PHP script, I believe.

1 Like

Ok, still confused here, not necessarily with what is going on here, but how one does it. It took me a while to look at the javascript and php to see what was happening, but I think I got it. I’m writing some test code:

<!DOCTYPE HTML>
<html>
	<head>
		<title>test</title>
		<style>
			body { margin: 0; }
			canvas { width: 100%; height: 100% }
		</style>
		<meta charset="utf-8">
	</head>
	<body>
		<div align = "center" id="div">DIV</div>
		<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" type="text/javascript" charset="utf-8"></script>
		<script>

		var CORE_ID = '########################';

		test();

		function test(){

			var div = document.getElementById("div");

			div.innerHTML = "MAKING REQUEST";
			
			$.get('/proxy.php?'+CORE_ID+'/variable_name', function(response) {
        		console.log(response);
        		div.innerHTML=response;
			});
		}
		</script>
	</body>
</html>

So, from the requesting HTML page, an Ajax request is made to proxy.php using the GET method, and in this request the core ID is sent along with the resource being requested. It used to be a variable name, but I put events in there because I want to subscribe.

Here’s wgbartley’s PHP proxy script I copied from his gist here:

https://gist.github.com/wgbartley/11337650

<?php
// Set your access token here
define('ACCESS_TOKEN', 'your_access_token_here');

// All responses should be JSON
header('Content-type: application/json');

// Build the URL.  Since it's possible to accidentally have an
// extra / or two in $_SERVER['QUERY_STRING'], replace "//" with "/"
// using str_replace().  This also appends the access token to the URL.
$url = 'https://'.str_replace('//', '/', 'api.spark.io/v1/devices/'.$_SERVER['QUERY_STRING'].'?access_token='.ACCESS_TOKEN);


// HTTP GET requests are easy!
if(strtoupper($_SERVER['REQUEST_METHOD'])=='GET')
        echo file_get_contents($url);

// HTTP POST requires the use of cURL
elseif (strtoupper($_SERVER['REQUEST_METHOD'])=='POST') {
        $c = curl_init();
        
        curl_setopt_array($c, array(
                // Set the URL to access
                CURLOPT_URL => $url,
                // Tell cURL it's an HTTP POST request
                CURLOPT_POST => TRUE,
                // Include the POST data
                // $HTTP_RAW_POST_DATA may work on some servers, but it's deprecated in favor of php://input
                CURLOPT_POSTFIELDS => file_get_contents('php://input'),
                // Return the output to a variable instead of automagically echoing it (probably a little redundant)
                CURLOPT_RETURNTRANSFER => TRUE
        ));

        // Make the cURL call and echo the response
        echo curl_exec($c);

        // Close the cURL resource
        curl_close($c);
}
?>

What I was mistaken about is what happens next. I now (correctly?) think that PHP takes the information from the GET request and puts it into the $url variable in the right spot, so you have a well-formed URL for requesting a variable from a Spark core, with the access token also included.

Now here’s where my greatest amount of confusion lies: in the subscribe tutorial, a URL is created in the HTML page and is passed to the page’s EventSource() constructor. If I don’t want the access token to be in the HTML file, I use a proxy, but the proxy does not return the URL, it returns the contents of that URL. If I were looking for a Spark variable, the value is what would be echoed back, not its location. So I don’t have a URL to give to the EventSource() constructor to subscribe to.

The PHP script probably won’t work for the eventSource stuff. When it proxies the request, it makes the call to the Spark API and doesn’t return anything until the HTTP connection to the Spark API is closed. I may be able to do the eventSource stuff in the same script, but mileage may vary drastically from server to server. Apache and PHP max execution times may prematurely kill an eventSource connection. That, and output buffering and flushing can also behave differently from server to server (IIS has always been a pain, but I’m pretty sure Apache and nginx play nicely).
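For anyone who wants to experiment, a rough sketch of that streaming idea in Python rather than PHP might look like the following. Flask and requests are my assumptions here, not part of the original proxy, and the buffering caveats above still apply; it simply relays the SSE bytes as they arrive so the page's EventSource never sees the token.

# Hypothetical sketch: re-stream Spark SSEs through a tiny Flask proxy.
import requests
from flask import Flask, Response

app = Flask(__name__)
ACCESS_TOKEN = 'your_access_token_here'   # kept server-side
EVENTS_URL = 'https://api.spark.io/v1/devices/DEVICE_ID/events/'

@app.route('/events')
def events():
    upstream = requests.get(EVENTS_URL,
                            params={'access_token': ACCESS_TOKEN},
                            stream=True)
    # Relay the raw event stream to the browser as it arrives
    return Response(upstream.iter_content(chunk_size=None),
                    content_type='text/event-stream')

if __name__ == '__main__':
    app.run(threaded=True)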

If the kids go to bed at a reasonable hour tonight, I may be able to give it a try!

2 Likes

That would be awesome! So there’s no current method of securely subscribing to Spark SSEs? (Remotely, not on a local machine.)

Another “lateral thinking” idea that might help you is to ask to join the IFTTT beta since it can handle published events.

So it can be done for events, but it might not be as simple as a PHP proxy.

1 Like