I currently feel the need for a set of callbacks around the moment an exposed variable is queried from the ‘cloud’. I am adding code for this to the class files spark_protocol.cpp and .h. It seems to work, still writing code…
But suddenly I realize that this alteration can only be implemented when one compiles locally.
Is there a way to ‘override’ the spark_protocol files in the IDE? That would be NICE!
Or, if the Spark team feels the same way I do, shall we implement a callback mechanism? Something like:
on_variable_request(char *variable_key, int buflen) //called just before blocking_send()
on_variable_send(char *variable_key, int buflen) //called after blocking_send().
These callbacks would make a lot of extra functionality possible, e.g. allocating memory, sending the variable and freeing the memory again, or setting parameters and resetting them, et cetera.
If there are more people interested in my solution, I can share my code when it is tested.
@marcus, I don’t know whether I got your idea correctly, but can we use a Spark function for this purpose? Instead of reading a variable from the outside world, we call a function and take the return value?
The exposed functions only return an int. In many cases one would like to expose a JSON array or another blob of data. The exposure of variables is meant for this purpose, and that works fine. But sometimes you want to alter the variable when it is ‘read’ by the cloud; see my earlier post.
So after some testing and juggling with member pointers of classes (a weak point of C++!), I decided to take a different approach. See my post on how to attach events to exposed variables!
By doing this, aren’t you basically changing a GET request into a POST request? By design, a GET request is made not to edit information on the receiver, but merely to request it. POST, on the other hand, is made for provoking receiver action.
So what you actually want is not to change the variable’s function, but rather to have a POST request which can return more than an int? Keeping the REST approach in mind, that would make more sense.
You could also do something like: SSE notifies you of new data, which you get with a GET request, after which you change it with a POST request?
Hmm, according to the conventions you are right, of course. My alteration is not meant to change data but to give the core more room to breathe by using dynamically allocated memory for a variable that is to be exposed. The contents of this variable are located in user flash, and to get them out of the core I was obliged to allocate a big buffer, while the core already allocates 640 bytes for the queue. And memory is valuable.
This is the scenario I use:
handle_received_message -> spark.function ‘callback’ -> malloc() -> spark.variable(same var, new location) -> return to handle_received_message -> send queue -> spark.function ‘callback’ -> free().
So basically it still conforms to the REST approach; only under the hood do we save a lot of memory.
A bonus is that we can change the data on the fly.
I have now also hacked the spark.variable function so that it accepts dynamic alterations to the memory address of a variable during the execution of the program.
This together with the first hack makes the above scenario possible.
Your suggestion to use SSE is something to look in to, but for now I just need to save memory.
Did you already implement SSE in combination with an Android app?