It seems that if I call Spark.publish() from within a function registered with Spark.function(), the HTTP POST cloud API call times out. The event shows up on the SSE stream, but the API call returns a 408 timeout.
API call:

```
curl https://api.spark.io/v1/devices/50ff70065067545635220287/pubfunc \
     -d access_token=<lalala> -d args=test
```
The SSE stream publishes immediately, but the `Spark.function()` API call always times out. It feels like some sort of contention or race condition. Calling `Spark.publish()` outside of the `Spark.function()` handler works fine, but this seems like a bug worth investigating.
Firmware:
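A minimal sketch that reproduces it, assuming a handler registered as "pubfunc" to match the curl call above (the handler and event names are illustrative):

```cpp
// Minimal sketch (assumed reconstruction): publish directly from inside
// a cloud function handler, which is what triggers the 408 timeout.
int pubFunc(String args);

void setup() {
    // Register the handler under the name used in the curl call above.
    Spark.function("pubfunc", pubFunc);
}

void loop() {
}

int pubFunc(String args) {
    // Publishing here corrupts the in-flight function response.
    Spark.publish("pubfunc-event");  // event name is illustrative
    return 1;
}
```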
Someone else reported this but I can’t find the post at the moment. I think the only solution currently is to set a flag in the Spark.function() and call Spark.publish() in loop when you come back around.
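Something along these lines (untested sketch; the flag and event names are mine):

```cpp
// Workaround sketch: defer the publish until the next pass through loop().
volatile bool publishRequested = false;

int pubFunc(String args) {
    publishRequested = true;  // just set a flag; don't publish here
    return 1;                 // the function response goes out cleanly
}

void setup() {
    Spark.function("pubfunc", pubFunc);
}

void loop() {
    if (publishRequested) {
        publishRequested = false;
        // Safe now: no function response is being built in the buffer.
        Spark.publish("pubfunc-event");
    }
}
```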
Ah, yup. Thanks @wgbartley for the exceedingly clear firmware example app. When it’s boiled down like that I think I can imagine what’s probably happening.
The buffer in the SparkProtocol object is used to construct the function response. Parts of that message have already been written to the buffer before the user's Spark.function handler is called.
The publish call assumes it has free rein to write to the buffer, so it overwrites the partially constructed function response and sends the event.
The Spark.function handler returns, and the protocol code assumes the buffer is in the state it was in before the handler was called, which is no longer true. It probably encrypts and sends a (garbage) message successfully, so the Core thinks it has finished its job. The Cloud, however, would (1) see that garbage message, (2) continue waiting on a function return that never comes, and (3) time out.
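In other words, roughly this shape of problem (hypothetical names, not the actual library code; just a host-side sketch of the shared-buffer hazard):

```cpp
// Sketch of the shared-buffer hazard (hypothetical names, not the real
// core-communication-lib code). One buffer holds every outgoing message.
#include <cstdio>
#include <cstring>

static char queue[64];  // shared message buffer

// Stands in for Spark.publish(): builds an event message in the same buffer.
void publish_event(const char* name) {
    snprintf(queue, sizeof(queue), "EVENT %s", name);  // clobbers the buffer
}

// Stands in for the user's Spark.function() handler.
int user_function() {
    publish_event("pubfunc-event");  // overwrites the half-built response
    return 1;
}

int main() {
    // Protocol code starts building the function-call response...
    snprintf(queue, sizeof(queue), "FN-RESPONSE ");
    size_t header_len = strlen(queue);

    int result = user_function();  // ...then runs the user's handler

    // ...and finishes the response, assuming the header is still intact.
    snprintf(queue + header_len, sizeof(queue) - header_len, "%d", result);
    printf("sent: %s\n", queue);  // prints garbage, not "FN-RESPONSE 1"
    return 0;
}
```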
Just added this issue to the core-communication-lib: