Spark.function() + Spark.publish() = Spark.function() timeout?

It seems that if I call Spark.publish() from within a handler registered with Spark.function(), the HTTP POST to the cloud API times out. When I make the call, the event is published to the SSE stream, but the API call returns a 408 timeout.

API Call:

```
curl https://api.spark.io/v1/devices/50ff70065067545635220287/pubfunc -d access_token=<lalala> -d args=test
```

API Response:

```
{
  "ok": false,
  "error": "Timed out."
}
```

SSE Call:
`curl https://api.spark.io/v1/devices/50ff70065067545635220287/events?access_token=<lololo>`

SSE Response:

```
:ok

event: wgbartley.beta.pubfunc
data: {"data":"test","ttl":"60","published_at":"2014-03-25T19:35:39.340Z","coreid":"50ff70065067545635220287"}
```


The SSE stream publishes immediately, but the `Spark.function()` API call always times out. It feels like some sort of contention or race condition. I can work around it by calling `Spark.publish()` outside the `Spark.function()` handler, but this seems like a bug worth investigating.


Firmware:

```
void setup() {
    Spark.function("pubfunc", pubfunc);
}

void loop() {
    // Do nothing
}

int pubfunc(String command) {
    Spark.publish("wgbartley.beta.pubfunc", command.c_str());
    return 1;
}
```

It sounds like two messages are being opened by the Core when you attempt this.

One is halfway through returning the value from the function call, and another gets opened due to the publish.

Interesting use case we have here! Let me try doing the same thing. :wink:

Someone else reported this but I can't find the post at the moment. I think the only workaround currently is to set a flag in the Spark.function() handler and call Spark.publish() from loop() when you come back around.
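A minimal sketch of that flag-based workaround might look like the following. Note this is illustrative, not from the original post: the `FakeSpark` stand-in and `std::string` handler signature exist only so the example compiles off-device; on a real Core you would use the firmware's `Spark` object and `String` type.

```cpp
#include <string>

// Stand-in for the Spark cloud object so this sketch builds off-device;
// on a real Core, Spark comes from the firmware and publish() sends a
// real event instead of recording it.
struct FakeSpark {
    std::string lastEvent, lastData;  // recorded so we can inspect the call
    void publish(const char* event, const char* data) {
        lastEvent = event;
        lastData = data;
    }
} Spark;

// Flag and payload set by the handler, consumed later by loop().
bool publishPending = false;
std::string pendingData;

// Cloud function handler: record the request; do NOT publish here.
int pubfunc(std::string command) {
    pendingData = command;
    publishPending = true;   // defer the publish to loop()
    return 1;                // the return value reaches the cloud intact
}

void loop() {
    if (publishPending) {
        publishPending = false;
        Spark.publish("wgbartley.beta.pubfunc", pendingData.c_str());
    }
}
```

Because the publish happens on the next pass through `loop()`, the function response finishes sending before the event is constructed, so the two messages never fight over the same buffer.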

Here is the other thread on Spark.publish() during a Spark.function() call.

@Dave said he would take a look at it.


Ah, yup. Thanks @wgbartley for the exceedingly clear firmware example app. When it’s boiled down like that I think I can imagine what’s probably happening.

- The buffer in the `SparkProtocol` object is being used to construct a function response. Parts of the message have already been written to the buffer before the call to the `Spark.function` handler.
- The publish call assumes it has free rein to write to the buffer, so it overwrites the parts of the function response that had already been written and sends the event.
- The `Spark.function` handler returns, assuming the buffer is in the state it was in before the handler was called, which is no longer true. It probably encrypts and sends a (garbage) message successfully, and the Core thinks it has finished its job. The Cloud, however, would (1) see that garbage message, (2) continue waiting on a function return that never comes, and (3) time out.

Just added this issue to the core-communication-lib:

Thanks!
