I want my remote Spark to fetch and use a fairly long data table from a central location under my control. At present I do this by including the data in the code as an array and then reflashing the remote Spark, which seems pretty crude.
Would appreciate pointers to any useful examples.
Most of the examples I see concern transferring data out of the Spark - I want to go the other way.
@goingalong, you can use Spark.function() to send data to your Core. You may have to break up your array into several calls, but you get the idea.
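Something like this could be a starting point (untested, just a sketch) - the function name, the "index,value" argument format, and the table size are placeholders you'd adapt to your own data:

```cpp
// Receive one table entry per Spark.function() call.
// The "index,value" argument format is an assumption - use whatever suits you.
#define TABLE_SIZE 800

int table[TABLE_SIZE];            // the data table held in RAM

// Cloud function handlers must have the signature: int f(String)
int setEntry(String args) {
    int comma = args.indexOf(',');
    if (comma < 0) return -1;                       // malformed argument

    int idx = args.substring(0, comma).toInt();
    int val = args.substring(comma + 1).toInt();

    if (idx < 0 || idx >= TABLE_SIZE) return -2;    // index out of range

    table[idx] = val;
    return idx;                                     // echo the index back as confirmation
}

void setup() {
    Spark.function("setEntry", setEntry);           // exposed via the cloud API
}

void loop() {
}
```

You'd then call the function once per changed slot through the cloud API (e.g. a POST to /v1/devices/&lt;device_id&gt;/setEntry with args=42,1234). Keep in mind the argument string is limited to roughly 60 characters per call.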
In addition to @peekay123's suggestion, you could also use Spark.subscribe().
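Roughly like this (untested sketch - the event name "table-update" and the "index,value" payload are assumptions):

```cpp
// Receive table updates that are pushed to the Core as published events.
void tableHandler(const char *event, const char *data) {
    if (data == NULL) return;

    String payload(data);
    int comma = payload.indexOf(',');
    if (comma < 0) return;                          // malformed payload

    int idx = payload.substring(0, comma).toInt();
    int val = payload.substring(comma + 1).toInt();
    // ...store idx/val into your table, as in the Spark.function() example above
}

void setup() {
    // MY_DEVICES restricts the subscription to events published with your own access token
    Spark.subscribe("table-update", tableHandler, MY_DEVICES);
}

void loop() {
}
```

The event payload has roughly the same ~60-character limit as function arguments, so this is also a slot-by-slot approach rather than a bulk transfer.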
As for the “fetching” part: if you want your Core to actively retrieve data rather than have it pushed to it, the new Webhooks feature might come in handy for you too.
Apart from these high-level Spark features, you've also got TCPClient, HTTPClient and other lower-level options.
Depending on the size of your table and how many fields you actually change per update session, you might prefer one way over the other (Spark.function/subscribe for few/small changes vs. the low-level route for a full table replacement).
You will have to excuse my lack of knowledge - I am really concentrating on the product and getting updated prototypes into the field as fast as possible. Previously I used Arduino Pro Minis, so a switch to the Spark kept the same form factor with the benefit of remote software updates. I am not a networking expert.
My 2D array is on the order of 600-800 slots at present and would be updated once a day at most.
I'm not sure how that would work with Spark.function(), which looks to me to be more about initiating an action in the remote.
Webhooks sounds interesting - when will it be available?
I am just about to build a buddy unit that will use a second Spark to provide local control of the remote (speed, mode, etc.), and I plan to use Spark.subscribe() for that, but I did not see it as a major data-passing tool.
Consequently it looks as though the low level approach is going to be the way forward. Any examples would still be welcome.
Thanks for your help.
At present Spark.function and Spark.subscribe are the two immediate ways the Core can receive cloud data, while Spark.variable and Spark.publish are the ways for the Core to provide/send data to the cloud.
While Spark.variable can provide strings of up to 622 bytes (for the time being), the other ways are limited to just over 60 bytes (if things haven’t changed already ;-)).
Webhooks are already available (for three days or so now officially ;-)).
So, given your 600-800 slots (of what datatype? int would mean up to 3.2 KB of net data), you'd be best off with some sort of TCP transfer - pushing that through ~60-byte cloud calls would mean dozens of calls per update.
I agree with @ScruffR that it would be better to use TCPClient to have the Core fetch the data at regular intervals from your server. 
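Something along these lines could do it (untested sketch - the server address, port, request path and the "one integer per line" response format are all assumptions you'd replace with your own setup):

```cpp
// Pull the whole table from your own server once a day over plain HTTP.
TCPClient client;

const char SERVER[] = "data.example.com";   // hypothetical host; use an IPAddress
                                             // if your firmware can't resolve hostnames
const int  PORT     = 80;

#define TABLE_SIZE 800
int table[TABLE_SIZE];

unsigned long lastFetch = 0;
const unsigned long FETCH_INTERVAL = 24UL * 60UL * 60UL * 1000UL;   // once a day

void fetchTable() {
    if (!client.connect(SERVER, PORT)) return;

    // Minimal HTTP/1.0 GET; an HTTP client library would hide this boilerplate
    client.println("GET /table.txt HTTP/1.0");
    client.println("Host: data.example.com");
    client.println();

    bool headersDone = false;
    int idx = 0;
    String line = "";
    unsigned long start = millis();

    // Read the response line by line, skipping the HTTP headers
    while ((client.connected() || client.available()) && millis() - start < 10000UL) {
        while (client.available()) {
            int c = client.read();
            if (c < 0) break;
            if (c == '\r') continue;
            if (c == '\n') {
                if (!headersDone) {
                    if (line.length() == 0) headersDone = true;   // blank line ends the headers
                } else if (line.length() > 0 && idx < TABLE_SIZE) {
                    table[idx++] = line.toInt();                  // one table slot per line
                }
                line = "";
            } else {
                line += (char)c;
            }
        }
    }
    client.stop();
}

void setup() {
}

void loop() {
    if (lastFetch == 0 || millis() - lastFetch > FETCH_INTERVAL) {
        fetchTable();
        lastFetch = millis();
    }
}
```

For an 800-slot table this means one connection per day instead of dozens of cloud calls.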