How does the Spark Cloud work?

Hi all,

I'm making a project that consists of an Android app that controls the ON/OFF state of a water heater using a Spark Core.

Using a smartphone, the user can turn the equipment on or off. The user can also see the temperature of the water heater on their phone, all thanks to the Spark Core.

So far I have done a test with my laptop (using curl), and I can activate/deactivate a pin of the Spark Core (using Spark.function in the code and POST from my PC), and I can read variables (using Spark.variable in the code and GET from my PC).
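For concreteness, those two REST calls can be sketched in Python. The endpoint shape (`/v1/devices/:id/:name` with an `access_token` parameter) follows the public Spark Cloud API; the device ID, token, and the `relay`/`temperature` names below are placeholders, not real credentials:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Placeholder credentials -- substitute your real device ID and access token.
DEVICE_ID = "0123456789abcdef01234567"
ACCESS_TOKEN = "1234123412341234123412341234123412341234"
BASE = "https://api.spark.io/v1/devices"

def build_function_call(func_name, arg):
    """POST /v1/devices/:id/:func -- calls a Spark.function() on the Core."""
    url = f"{BASE}/{DEVICE_ID}/{func_name}"
    body = urlencode({"access_token": ACCESS_TOKEN, "args": arg}).encode()
    return Request(url, data=body, method="POST")

def build_variable_read(var_name):
    """GET /v1/devices/:id/:var -- reads a Spark.variable() from the Core."""
    url = f"{BASE}/{DEVICE_ID}/{var_name}?{urlencode({'access_token': ACCESS_TOKEN})}"
    return Request(url, method="GET")

# The requests are only constructed here, not sent.
post = build_function_call("relay", "on")
get = build_variable_read("temperature")
print(post.get_method(), post.full_url)
print(get.get_method(), get.full_url)
```

These are exactly the requests curl builds for you with `-d` (POST) and a plain URL (GET).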

I plan to present this project to my University, but I would like to know exactly WHAT the Spark Cloud is and HOW it works. I mean, how the communication flows between PC, Spark Cloud, and Spark Core when the user makes a POST or GET request. I don't completely understand (technically speaking) what it means to expose a variable or a function to the Cloud.

If you have any documentation that can help me completely understand this topic, I will be grateful.



I would suggest you read over the spark-protocol GitHub repo:

1.) Core to :cloud: communication

This is done through CoAP over TCP (the standard transport is UDP, I believe) using port 5683.

There is an initial handshake, and you can see it here:

Two things involved here are stored in the Core's external flash memory:

  • Server public key - used to authenticate the server and point the Core to the correct :cloud:, etc.
  • Core private key - used to encrypt messages to and fro, plus to identify the Core during the handshake
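As a toy illustration of why the public/private split matters, here is textbook RSA with tiny primes. This is nothing like the real handshake (the actual protocol uses full-size keys and, as I understand it, a negotiated AES session key); it only shows that anyone holding a public key can encrypt, while only the private-key holder can decrypt:

```python
# Toy textbook RSA, purely to illustrate the two key roles stored in external
# flash. None of these numbers are from the actual protocol.

# "Server" key pair: the Core ships with only the public half (n, e).
n, e, d = 3233, 17, 2753   # n = 61 * 53; d is the server's private exponent

def encrypt_to_server(m):
    """Core -> cloud: anyone holding the server public key can do this."""
    return pow(m, e, n)

def server_decrypt(c):
    """Only the cloud, holding the private exponent d, can undo it."""
    return pow(c, d, n)

session_secret = 42
ciphertext = encrypt_to_server(session_secret)
assert ciphertext != session_secret          # unreadable in transit
assert server_decrypt(ciphertext) == 42      # recoverable only with d
```

The Core's own private key plays the mirror-image role: it lets the Core prove its identity, since only that Core can produce messages its stored public key (held by the cloud) will verify.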


2.) The :cloud: side

This is entirely in the :cloud:, and the open-source code can be found at:

It’s the bare-minimum version without some features, but sufficient for a small user-group environment.

When you perform a curl request, it does not go to the Core directly. The request goes to what we call the “device server,” which processes the API call and sends the corresponding CoAP messages to the Core.
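A rough sketch of that translation step, with made-up message fields (the real CoAP wire format is binary and lives in the spark-protocol repo, not here):

```python
# Hypothetical device-server step: a REST call never reaches the Core itself;
# it is mapped onto a CoAP-style request sent over the Core's existing TCP
# connection. The field names below are illustrative only.

def rest_to_coap(method, device_id, name, args=None):
    """Map a REST API call onto a CoAP-like request for the target Core."""
    if method == "GET":           # a Spark.variable() read
        return {"code": "GET", "uri_path": f"/v/{name}", "device": device_id}
    if method == "POST":          # a Spark.function() call
        return {"code": "POST", "uri_path": f"/f/{name}",
                "payload": args, "device": device_id}
    raise ValueError(f"unsupported method: {method}")

msg = rest_to_coap("GET", "core-1", "temperature")
print(msg)
```

The key point is the direction of the arrows: your PC talks HTTP to the cloud, and the cloud talks CoAP to the Core over the connection the Core opened.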

3.) As for exposing variables and functions…

This works like a registration upon start-up. When your Core first comes online, it tells the :cloud: about all the variables and functions to be made available via the REST API. (Not 100% sure about this.)
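Under that assumption, the bookkeeping might look something like this sketch (the names and structure are mine, not the actual device-server code):

```python
# Hypothetical registry: on connect, a Core announces the names it exposed
# with Spark.variable()/Spark.function(), and the cloud records them so the
# REST API knows what it may serve for that device.

class Cloud:
    def __init__(self):
        self.registry = {}   # device_id -> {"variables": set, "functions": set}

    def on_core_hello(self, device_id, variables, functions):
        """Called when a Core comes online and describes itself."""
        self.registry[device_id] = {"variables": set(variables),
                                    "functions": set(functions)}

    def can_serve(self, device_id, name):
        """Would a GET/POST for this name on this device be valid?"""
        entry = self.registry.get(device_id, {})
        return name in entry.get("variables", ()) or name in entry.get("functions", ())

cloud = Cloud()
# Firmware called Spark.variable("temperature", ...) and
# Spark.function("relay", ...), so the Core announces both on connect:
cloud.on_core_hello("core-1", variables=["temperature"], functions=["relay"])
assert cloud.can_serve("core-1", "temperature")
assert not cloud.can_serve("core-1", "pressure")
```

So “exposing” a variable or function just means getting its name into that registration, so the cloud will accept REST calls for it.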

This simplified diagram might give a better idea:


Hi @alvarad25

You have great answers there from @kennethlimcp and @harrisonhjones but I just want to add one more point.

Because of the way the default rules for home WiFi routers are set up, the only way the cloud can work simply and easily is if the Core initiates the connection to the cloud. Lots of folks have thought about doing it the other way around, but they run into port, IP address, and firewall issues.

Today’s cloud acts essentially as a proxy: when you use curl to request a variable from the cloud, the cloud asks the Core for the current value and then returns it to your curl request. But that is not the only thing the cloud could do; it could also store previously read values and cache them with an expiry time. That would isolate the Core from the end-user request, which could be very useful when you have one Core but many readers of the variable.

Similarly for published events: the Core publishes to the cloud, which then republishes over the event stream. In the future the cloud could store events along with an expiry time and republish them at some other rate.
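The caching idea can be sketched like this (hypothetical future behavior, not what today’s cloud does):

```python
import time

# Sketch of a variable cache with expiry: the cloud keeps the last value it
# read from the Core, so many readers can be served without touching the
# Core each time. Class and method names are made up for illustration.

class VariableCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}                       # name -> (value, stored_at)

    def put(self, name, value):
        self.store[name] = (value, time.monotonic())

    def get(self, name):
        """Return the cached value, or None if missing/expired (re-ask the Core)."""
        entry = self.store.get(name)
        if entry is None:
            return None
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self.store[name]              # expired: next reader hits the Core
            return None
        return value

cache = VariableCache(ttl_seconds=60)
cache.put("temperature", 48.5)                # one read from the Core...
assert cache.get("temperature") == 48.5       # ...serves many curl requests
```

With a scheme like this, a thousand curl readers cost the Core one CoAP round trip per TTL window instead of a thousand.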

I know the Spark team has worked out the details of the cloud model, even if they have only implemented the “direct proxy” version of the cloud so far. The advanced cloud actions are really needed to scale the cloud up for wider dissemination of data sent to the cloud in the future.


Hi @bko,

I think your proposed addition to cloud functionality (storing previously read values and caching them with an expiry time) would make a lot of sense. That way a Core could stay disconnected for a long time and just occasionally update variables in the cloud, or receive function calls that had previously been stored in the cloud.
=> the Core and the PC/phone would not need to be connected to the cloud at the same time
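That store-and-forward idea could be sketched like this (again hypothetical, not current cloud behavior):

```python
from collections import deque

# Sketch of store-and-forward for function calls: the cloud queues calls for
# an offline Core and drains them when it reconnects, so the phone and the
# Core never need to be online at the same time. Names are illustrative only.

class CallQueue:
    def __init__(self):
        self.pending = deque()
        self.core_online = False

    def call(self, func, arg):
        if self.core_online:
            return f"sent {func}({arg})"       # deliver immediately
        self.pending.append((func, arg))       # store until the Core shows up
        return "queued"

    def on_core_connect(self):
        """Core reconnected: deliver the backlog in order."""
        self.core_online = True
        delivered = list(self.pending)
        self.pending.clear()
        return delivered

q = CallQueue()
assert q.call("relay", "on") == "queued"         # phone acts while Core is offline
assert q.on_core_connect() == [("relay", "on")]  # delivered on reconnect
```

Combined with cached variables for reads, both directions would survive the Core being offline.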


Just to be clear: this is not my proposal; it is what I have heard unofficially from various Spark sources.