Thanks! Works well. Here is the code I used, for those trying this at home ('000' replacing my secret values). Start with @bko's publishme.ino code above, then use this Python after installing the SSE package, as @zach recommends in the link above:
#!/usr/bin/python
from sseclient import SSEClient

deviceID = '000000000000'          # your Core's device ID
accessToken = '0000000000000000'   # your access token
sparkURL = 'https://api.spark.io/v1/devices/' + deviceID + '/events/?access_token=' + accessToken

# SSEClient holds the connection open and yields events as they arrive
messages = SSEClient(sparkURL)
for msg in messages:
    print 'Processing Spark Event: ', msg
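As an aside, building the query string by hand works fine here, but a slightly safer variant is to let urllib encode the token. This is just a hypothetical tweak, not part of the original script; `spark_event_url` is a name I made up:

```python
# Hypothetical helper: build the events URL with the token URL-encoded,
# rather than concatenating strings by hand.
try:
    from urllib.parse import urlencode  # Python 3
except ImportError:
    from urllib import urlencode        # Python 2

def spark_event_url(device_id, access_token):
    base = 'https://api.spark.io/v1/devices/%s/events/' % device_id
    return base + '?' + urlencode({'access_token': access_token})
```

With the placeholder values above, `spark_event_url('000000000000', '0000000000000000')` produces the same URL the script concatenates manually.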
All is good: you run the Python script, it connects to the Spark Cloud, and it holds the connection open waiting for events. The output from @bko's program is shown below. The only curious bit is why the first event comes back empty…
Output:
$ python ./notify-listener.py
Processing Spark Event:
Processing Spark Event: {"data":"0:33:0","ttl":"60","published_at":"2014-10-15T00:31:52.888Z","coreid":"00000"}
Processing Spark Event: {"data":"0:33:15","ttl":"60","published_at":"2014-10-15T00:32:07.893Z","coreid":"00000"}
Processing Spark Event: {"data":"0:33:30","ttl":"60","published_at":"2014-10-15T00:32:22.889Z","coreid":"00000"}
Processing Spark Event: {"data":"0:33:45","ttl":"60","published_at":"2014-10-15T00:32:37.894Z","coreid":"00000"}
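Since each event's payload is itself JSON, a natural next step is to parse it. A small sketch, assuming the raw payload string is what the SSE client hands you (the empty first event, which looks like an initial keep-alive from the stream, is filtered out by returning None); `parse_spark_event` is a name I made up:

```python
import json

def parse_spark_event(raw):
    """Parse one Spark SSE payload; return None for empty keep-alive events."""
    if not raw or not raw.strip():
        return None
    return json.loads(raw)

# Example, using a payload from the output above:
event = parse_spark_event(
    '{"data":"0:33:0","ttl":"60","published_at":"2014-10-15T00:31:52.888Z","coreid":"00000"}'
)
```

After parsing, `event['data']` holds the published string ("0:33:0" here) and `event['published_at']` the cloud timestamp.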
I suppose something to ponder is how resilient this is compared to polling… Polling once a minute won't get me data promptly, but each GET is an independent TCP connection that quickly either works or doesn't. If I use publish(), I wonder what I should do about timeouts, reconnects, network burps, etc. That's not really a Spark Core issue, so I won't ask Python SSE questions here, but it is worthwhile for the community to think about as folks build reliable infrastructure with Spark bits – probably a different discussion thread.
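On the resilience question, one generic approach is to wrap the stream in a reconnect loop with capped exponential backoff. A sketch only, independent of any particular SSE library: `run_with_reconnect`, `connect`, and `handle` are all names I made up, where `connect` is whatever callable opens the stream (e.g. `lambda: SSEClient(sparkURL)`) and `handle` is what you do with each event:

```python
import time

def run_with_reconnect(connect, handle, max_retries=None, max_backoff=60, sleep=time.sleep):
    """Open the stream with connect(), pass each event to handle().
    On any error, sleep with doubling backoff (capped) and reconnect.
    max_retries=None retries forever; a number gives up after that many failures."""
    backoff, failures = 1, 0
    while max_retries is None or failures < max_retries:
        try:
            for msg in connect():
                handle(msg)
                backoff = 1  # healthy stream: reset the backoff
        except Exception:
            failures += 1
            sleep(backoff)
            backoff = min(backoff * 2, max_backoff)
        else:
            return  # stream ended cleanly
```

The `sleep` parameter is there so the loop can be exercised in tests without real delays; in the script above you would call it as `run_with_reconnect(lambda: SSEClient(sparkURL), handle)`.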