Introduction to Core Communications

When I got my Core I discovered that Spark provides various methods of communicating with it. This page gives an overview of my understanding of those methods and compares their strengths and weaknesses.

My Core: I added two more red LEDs (controlled by D0 and D1) and a temperature sensor (read via A7) to my Spark Core. My Core is powered up and running in my home 24 by 7.

Physical: If I’m physically in the same location as my Core, I can use a USB cable to connect my laptop to the Core and communicate down the cable to it. Communication over a USB cable is not aligned with the “Internet of Things” that Spark is pushing, so physical connections are more of a backup than the recommended approach. But here are the “physically connected” methods anyway:

  • Serial I can talk to the Core over a USB cable using a protocol called Serial or UART.

  • Applications There are free tools that implement the Serial protocol including PuTTY, Arduino IDE and GNU screen. There’s also a popular Mac GUI app called CoolTerm. As an example, you can use the PuTTY tool to find the unique device ID of a Core that has never been connected to the Internet.

  • CLI This DOS-like “command line interface” can be used to flash (aka download) new firmware (aka program) to a physically connected Core.

  • Program As a developer, I can also write programs to communicate over Serial e.g. using node-serialport to open a Serial port in Node.js.
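
To make the “Program” option concrete, here is a sketch of talking Serial from Node.js. It assumes the third-party serialport npm package and a Core that shows up at /dev/ttyACM0 (both assumptions; the port path and the serialport constructor vary by platform and package version):

```javascript
// Pure helper: extract the 24-hex-character device ID from a line of
// serial output (in listening mode the Core prints it when you press "i").
function parseDeviceId(line) {
  const match = /\b([0-9a-f]{24})\b/i.exec(line);
  return match ? match[1].toLowerCase() : null;
}

// Sketch only: requires `npm install serialport` and a physically
// connected Core. The port path and constructor shape are assumptions.
function openCorePort(path = '/dev/ttyACM0') {
  const { SerialPort } = require('serialport');
  const port = new SerialPort({ path, baudRate: 9600 });
  port.on('data', (buf) => {
    const id = parseDeviceId(buf.toString());
    if (id) console.log('Device ID:', id);
  });
  return port;
}
```

The parsing is split out of the port handling so the ID extraction can be reused (and tested) without a device attached.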

Internet-based: If the Core is connected to the Internet, there are several ways to talk to the Core. Starting from the simplest they are:

  • Tinker Once I’ve downloaded the Tinker application from the app store onto my smartphone, I can control my Core from the app no matter where I am. Tinker allows me to do things like call analogRead on A7, which returns an integer value between 0 and 4095 representing the temperature.

  • API As a developer, I can use an API that lets me call functions programmed onto the Core and query the value of variables on the Core. This API comes in a number of “wrappers”, including the CLI and …

  • SparkJS This is simply a JavaScript wrapper around the API, but it makes a lot of things easier.

  • Atomiot Using this free browser application, I’ve set up a schedule to read the temperature sensor on the Core every 5 minutes. Atomiot also graphs the results nicely.
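
As a sketch of the “get a variable” style of access, the snippet below reads a cloud variable over the REST API and converts the raw 0–4095 reading into degrees. The variable name temperature is an assumption (it depends on your firmware), and the conversion assumes a TMP36-style sensor (500 mV offset, 10 mV per °C), so adjust the formula for your sensor:

```javascript
// Convert a raw 12-bit analogRead value (0-4095, 3.3 V reference)
// into volts, then into degrees C. The TMP36-style formula
// (500 mV offset, 10 mV per degree) is an assumption.
function rawToCelsius(raw) {
  const volts = (raw * 3.3) / 4095;
  return (volts - 0.5) * 100;
}

// Sketch only: fetch a Spark variable via the cloud REST API.
// deviceId, the variable name "temperature" and the access token
// are placeholders. Needs Node 18+ or a browser for fetch().
async function readTemperature(deviceId, token) {
  const url = `https://api.spark.io/v1/devices/${deviceId}/temperature?access_token=${token}`;
  const res = await fetch(url);
  const body = await res.json();
  return rawToCelsius(body.result);
}
```

Keeping the unit conversion in its own function means the same code works whether the raw value arrives via the API, Tinker, or Serial.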

Publish Querying the Core every 5 minutes is useful but has limitations. What if it suddenly gets cold for 2 minutes? That event might not show up in my queries. I want the Core to “publish” events as soon as it detects something interesting, e.g. the temperature exceeding a specified limit. Spark has released various capabilities that allow Cores and the Spark Cloud to “publish” events and “subscribe” (aka listen) to events.

  • Server-Sent Events (aka SSE) This is a protocol that allows programs to “subscribe” (aka listen) to events sent by servers. The Spark Cloud supports SSE. This allows the Spark Core to send events to the Spark Cloud as soon as they occur, and allows a program to hear and respond to those events as soon as they arrive.

  • Browser SSE Most browsers (including Chrome) support SSE, which allows me to have a browser window display Spark Core events as soon as they occur.

  • Server SSE Having a browser session doing the listening has limitations. I can’t leave my browser running 24 by 7. My browser can’t store the data permanently. If I want a robust method of listening to and recording published events 24 by 7, I can write a program that listens for events using SSEs and then host the program somewhere like Heroku.
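
A minimal sketch of that listening program, assuming the cloud’s device event stream endpoint and a published event named temperature (the event name is an assumption):

```javascript
// Pure helper: parse one SSE frame (an "event:" line followed by a
// "data:" line) into { name, payload }. The Spark Cloud sends the
// data field as JSON.
function parseSseFrame(frame) {
  const name = /^event:\s*(.+)$/m.exec(frame);
  const data = /^data:\s*(.+)$/m.exec(frame);
  if (!name || !data) return null;
  return { name: name[1].trim(), payload: JSON.parse(data[1]) };
}

// Sketch only (browser): subscribe to my devices' event stream.
// The access token is a placeholder; in Node you would use an SSE
// client library or a raw https request instead of EventSource.
function subscribe(token, onTemperature) {
  const es = new EventSource(
    `https://api.spark.io/v1/devices/events?access_token=${token}`);
  es.addEventListener('temperature', (e) => {
    onTemperature(JSON.parse(e.data)); // { data, ttl, published_at, coreid }
  });
  return es;
}
```

Hosted somewhere like Heroku, the same event-parsing logic can run 24 by 7 and write each event to storage.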

There are other server-based approaches to communicating with Cores:

  • Local Cloud Spark has open-sourced the software that powers the Spark Cloud. I could build a server to run this software and listen to events, then extend it with my custom code to store and analyse events. Be aware that Spark will be improving, maturing and extending this open-source software over time. At least in the short term, I choose to avoid this approach because of the cost and effort of duplicating the Spark Cloud myself.

  • Webhooks Spark has a capability in private beta called webhooks; I’m not sure of the release ETA. Webhooks are a scalable cloud-to-cloud implementation of publish/subscribe. I’d rather use this approach as it lets me focus on developing the server functionality that is unique to my application: in my case, storing and analysing the data sent from the Core. If you want to participate in the beta, @dave is the man with the keys.

That’s my understanding. If I’ve got anything wrong I’d appreciate feedback so I can correct the above (and learn!)


Great overview @philipq! Here are some things that I would add/modify:

  • “PuTTY” is really one of a number of ways to talk to the Core using a protocol called Serial, or UART. You could also use a number of other Serial terminals, such as the Arduino IDE or GNU screen. There’s also a popular Mac GUI app called CoolTerm. You can also communicate over Serial programmatically, such as using node-serialport to open a Serial port in Node.js.
  • I would add to your list of “Internet-based” communications “calling a function” and “getting a variable”, both of which can be done through the API, SparkJS, or the CLI
  • In fact SparkJS might be worth adding to your list; it’s simply a Javascript wrapper around the API, but it makes a lot of things easier
  • Server Sent Events can also be done in a web application, so it’s not just for browsers. You could write an app that listens for events using SSEs and then host it somewhere like Heroku
  • Webhooks are in private beta now, and if you want to participate, @dave is the man with the keys

Hope that’s helpful!


Thanks for the feedback. I’ve integrated it all into my original post (thereby hiding my errors!)
Thanks again.

Great post @philipq! Thanks so much!