I’m working with about 10 Cores right now, and each of them has a status variable that I’m trying to continually monitor through a node.js server. Every 30 seconds I make an AJAX call to the Spark Cloud to read that variable from each of the Cores. Here’s the node.js code.
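Boiled down to the essential loop, it looks like this (a trimmed-down sketch; the access token, device IDs, and variable name are placeholders for my real ones):

```javascript
var https = require('https');

var ACCESS_TOKEN = 'my-access-token';            // placeholder
var CORE_IDS = ['coreid1', 'coreid2' /* ... */]; // placeholder device IDs

// Read the "status" variable from one Core via the Spark Cloud REST API.
function pollStatus(coreId) {
    var url = 'https://api.spark.io/v1/devices/' + coreId +
              '/status?access_token=' + ACCESS_TOKEN;
    https.get(url, function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () {
            var reply = JSON.parse(body);
            console.log(coreId + ': ' + reply.result);
        });
    }).on('error', function (err) {
        console.log(coreId + ': request failed - ' + err.message);
    });
}

// Poll every Core every 30 seconds.
setInterval(function () {
    CORE_IDS.forEach(pollStatus);
}, 30 * 1000);
```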
Each Core seems to lose its wifi connection every hour or so, for a few minutes at a time. Recently, though, as I’ve added Cores to the server, some of them lose the connection permanently, until I unplug them and plug them back in. I’m wondering whether requesting variables in this manner somehow overloads the Spark Cloud, and whether I should move to code that uses published events and only signals a Core for its connection status every so often.
I don’t have experience with a node.js server regularly polling Core variables.
But I do have experience in the other direction:
I have multiple Cores running in the field.
The firmware on each Core publishes an event every 15 seconds containing the analog readings from A0 to A7 plus some other state information. The event looks like this: `AR:2108,2066,2099,2072,2111,2070,2083,1982,2,1,1,0`. The event also implicitly says “the Core is powered, running user code, and connected to the Spark Cloud”.
On powering up, the firmware on each Core waits a couple of seconds, then publishes a one-off “I am alive” event, built as `"AL:V13:" + String( Time.now() ) + ":" + Time.timeStr()`. This event implicitly says “the Core was unpowered for some time before this event”.
I have a NodeJS server running on Heroku listening for these events and writing them to a MySQL database.
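The listening side is just a long-lived HTTP request to the Spark Cloud’s server-sent-events stream. Stripped of the MySQL plumbing, the core of it looks something like this (a sketch; the access token is a placeholder and the SSE parsing is deliberately minimal):

```javascript
var https = require('https');

var ACCESS_TOKEN = 'my-access-token'; // placeholder

// Open the Spark Cloud server-sent-events stream for all my devices.
var url = 'https://api.spark.io/v1/devices/events?access_token=' + ACCESS_TOKEN;

https.get(url, function (res) {
    var buffer = '';
    res.on('data', function (chunk) {
        buffer += chunk.toString();
        // SSE messages are separated by a blank line.
        var messages = buffer.split('\n\n');
        buffer = messages.pop(); // keep any incomplete trailing message
        messages.forEach(function (message) {
            var name = null;
            var payload = null;
            message.split('\n').forEach(function (line) {
                if (line.indexOf('event:') === 0) {
                    name = line.slice(6).trim();
                } else if (line.indexOf('data:') === 0) {
                    payload = JSON.parse(line.slice(5));
                }
            });
            if (name && payload) {
                // e.g. name === 'AR', payload.data === '2108,2066,...'
                console.log(name, payload.coreid, payload.data);
                // ...write a row to MySQL here...
            }
        });
    });
});
```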
I graph the results when users want to see them, and any gaps in the event sequence show up as gaps in the record.
It is all pretty solid:
- The Cores run for hundreds of hours, publishing events without pause. I’ve not noticed any reboots apart from ones I’ve deliberately caused by flashing firmware or toggling the power.
- It’s taken me a while, but the NodeJS server code is stable now, even in the face of intermittent outages of the MySQL database (see the sketch just below this list).
- I’ve not seen any instability in the Spark Cloud. It seems robust.
- The greatest instability is my home internet connection, which is sometimes iffy. But the Core in my home struggles on, trying to re-connect, flashing madly, until the internet connection stabilises and it reconnects.
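For what it’s worth, the thing that finally made the server tolerate MySQL outages was queueing rows in memory and retrying failed inserts, roughly like this (a sketch using the node `mysql` package; the connection details and the `readings` table name are made up):

```javascript
var mysql = require('mysql');

var pool = mysql.createPool({
    host: 'hostname',       // placeholders
    user: 'user',
    password: 'password',
    database: 'database'
});

var pending = []; // rows waiting to be written

function saveEvent(row) {
    pending.push(row);
}

// Try to flush one queued row every few seconds. If the database is
// unreachable, the row goes back on the queue and is retried later,
// so an outage delays writes instead of losing them.
setInterval(function () {
    if (pending.length === 0) return;
    var row = pending.shift();
    pool.query('INSERT INTO readings SET ?', row, function (err) {
        if (err) {
            console.log('insert failed, will retry: ' + err.code);
            pending.unshift(row);
        }
    });
}, 5000);
```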
One point: in writing Core firmware I always test `if( Spark.connected() )` before I publish an event. This may make the program a tad more robust, but it certainly reminds me that I can’t take internet connectivity for granted.
This was very helpful. I redesigned all my code with this in mind and it’s working great. The `if( Spark.connected() )` tip saved me a ton of time as well.