I'm making a project that consists of an Android app that controls the ON/OFF state of a water heater using a Spark Core.
Using a smartphone, the user can turn the equipment on or off, and can also see the temperature of the water heater on their phone, all thanks to the Spark Core.
So far I have done a test with my laptop (using curl): I can activate/deactivate a pin of the Spark Core (using Spark.function in the code and POST on my PC), and I can read variables (using Spark.variable in the code and GET on my PC).
I plan to present this project to my University, but I would like to know exactly WHAT the Spark Cloud is and HOW it works. I mean, how the communication flows between PC, Spark Cloud, and Spark Core when the user makes a POST or GET request. I don't completely understand (technically speaking) what it means to expose a variable or a function to the Cloud.
If you have any documentation that can help me fully understand this topic, I will be grateful.
It's the bare-minimum version, without some features, but sufficient for a small-user-group environment.
When you perform a curl request, it does not go to the core directly. The request goes to what we call the "device server", which processes the API call and sends the corresponding CoAP messages to the core.
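A rough way to picture that flow is the sketch below. All class and field names are invented for illustration; the real device server speaks CoAP to the core over a persistent connection the core itself opened, not via direct method calls:

```python
# Toy model of: curl -> device server -> CoAP message -> core.
# Names and values here are illustrative, not Spark's implementation.

class Core:
    """Stands in for the Spark Core, which keeps a persistent
    connection open to the cloud (CoAP in reality)."""
    def __init__(self):
        self.variables = {"temperature": 42.5}
        self.functions = {"relay": lambda arg: 1 if arg == "on" else 0}

    def handle_coap(self, message):
        # The real core decodes a CoAP message; a dict stands in here.
        if message["type"] == "GET":
            return self.variables[message["name"]]
        if message["type"] == "POST":
            return self.functions[message["name"]](message["arg"])

class DeviceServer:
    """Stands in for the device server: it accepts the REST call
    and relays it to the core over the core's own connection."""
    def __init__(self, core):
        self.core = core  # in reality: the socket the core opened

    def rest_request(self, method, name, arg=None):
        coap = {"type": method, "name": name, "arg": arg}
        return self.core.handle_coap(coap)

cloud = DeviceServer(Core())
print(cloud.rest_request("GET", "temperature"))   # like a curl GET
print(cloud.rest_request("POST", "relay", "on"))  # like a curl POST
```

The key point the sketch captures is that your curl request never reaches the core's IP address; the device server answers it by talking to the core over the core's existing connection.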
3.) As for exposing variables and functions…
This works like a registration upon start-up. When your core initially goes online, it tells the cloud about all the variables and functions that should be made available via the REST API. (not 100% sure about this)
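If that registration guess is right, conceptually it would look something like the sketch below. Everything here (the registry table, the function names, the device id) is hypothetical and just illustrates the idea of announcing names at connect time:

```python
# Hypothetical sketch of the "registration" idea: at start-up the
# core announces which variables/functions it exposes, and the cloud
# records them so later REST requests can be validated and routed.

registry = {}  # cloud-side table: device_id -> exposed names

def core_announce(device_id, variables, functions):
    """What Spark.variable()/Spark.function() conceptually amount to:
    the core sends the cloud a list of names when it connects."""
    registry[device_id] = {"variables": set(variables),
                           "functions": set(functions)}

def cloud_lookup(device_id, kind, name):
    """Before proxying a REST request, the cloud checks that the
    name was actually exposed by that core."""
    entry = registry.get(device_id)
    return entry is not None and name in entry[kind]

core_announce("heater01", variables=["temperature"], functions=["relay"])
print(cloud_lookup("heater01", "variables", "temperature"))  # True
print(cloud_lookup("heater01", "functions", "pump"))         # False
```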
Because of the way the default rules for home WiFi routers are set up, the only way the cloud can work simply and easily is if the core initiates the connection to the cloud. Lots of folks have thought about doing it the other way around, but they run into port, IP address, and firewall issues.
Today's cloud acts essentially as a proxy, so that when you use curl to request a variable from the cloud, the cloud asks the core for the current value and then returns it to your curl request. But that is not the only thing the cloud could do–it could also store previously read values and cache them with an expiry time. That would isolate the core from the end-user request, which could be very useful in the situation where you have one core but many readers of the variable. Similarly for published events: the core publishes to the cloud, which then republishes over the event stream. In the future the cloud could store events along with an expiry time and republish them at some other rate.
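The caching idea can be sketched in a few lines. This is a toy model of the proposal, not anything the Spark Cloud actually implements; the `ttl_seconds` parameter and the counter are invented for illustration:

```python
import time

# Sketch of the proposed caching layer: the cloud remembers the last
# value it fetched from the core and serves it until it expires, so
# many readers don't each have to wake the core.

class CachingCloud:
    def __init__(self, fetch_from_core, ttl_seconds):
        self.fetch = fetch_from_core   # call that asks the real core
        self.ttl = ttl_seconds
        self.value = None
        self.fetched_at = None
        self.core_reads = 0            # how often the core was bothered

    def get(self, now=None):
        now = time.monotonic() if now is None else now
        if self.fetched_at is None or now - self.fetched_at >= self.ttl:
            self.value = self.fetch()  # cache miss: go to the core
            self.fetched_at = now
            self.core_reads += 1
        return self.value              # cache hit: core stays idle

cloud = CachingCloud(fetch_from_core=lambda: 42.5, ttl_seconds=10)
cloud.get(now=0.0)    # first read: must ask the core
cloud.get(now=3.0)    # served from cache
cloud.get(now=9.0)    # served from cache
cloud.get(now=12.0)   # expired: asks the core again
print(cloud.core_reads)  # 2 core reads for 4 requests
```

Four end-user requests cost only two round trips to the core, which is exactly the isolation described above when one core has many readers.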
I know the Spark team has worked out the details around the cloud model, even if they have only implemented the “direct proxy” version of the cloud so far. The advanced cloud actions are really needed to scale the cloud up for wider dissemination of data sent to the cloud in the future.
I think your proposed addition to cloud functionality ("it could also store previously read values and cache them with an expiry time") would make a lot of sense. That way a core could stay disconnected for a long time and just occasionally update variables in the cloud or receive function calls that were previously stored in the cloud.
=> the core and the PC/phone would not need to be connected to the cloud at the same time
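That store-and-forward idea could look roughly like this. Again, all names are invented; this is a sketch of the suggestion in the post, not an existing Spark feature:

```python
from collections import deque

# Sketch of store-and-forward: while the core is offline, the cloud
# queues function calls and keeps the last reported variable values,
# then drains the queue when the core checks in.

class StoreAndForwardCloud:
    def __init__(self):
        self.last_values = {}         # variable name -> last value
        self.pending_calls = deque()  # function calls awaiting the core

    # -- PC/phone side: works even while the core is offline --
    def read_variable(self, name):
        return self.last_values.get(name)

    def call_function(self, name, arg):
        self.pending_calls.append((name, arg))

    # -- core side: runs only during its occasional check-ins --
    def core_checkin(self, reported, dispatch):
        self.last_values.update(reported)
        while self.pending_calls:
            dispatch(*self.pending_calls.popleft())

cloud = StoreAndForwardCloud()
cloud.call_function("relay", "on")   # phone acts while the core sleeps
executed = []
cloud.core_checkin({"temperature": 41.0},
                   dispatch=lambda n, a: executed.append((n, a)))
print(cloud.read_variable("temperature"))  # 41.0
print(executed)                            # [('relay', 'on')]
```

The phone's function call is accepted before the core is reachable, and the core picks it up at its next check-in, so the two sides never need to be online at the same moment.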