[Solved] Check if Spark.publish() has completed?

I’m using Spark.publish() to push data about once every 6 hours, then wait 15 seconds and go to deep sleep to save battery power until the next time. Is there a way to tell if the Spark.publish() has completed sending the data to the cloud, so I could go to sleep as soon as it has finished? Also, if it fails to send, is there a way to see why?


Good question @chunda. When the call to Spark.publish() returns, the MCU has sent data to the CC3000. From there, however, we have no way to know when the CC3000 finishes sending the data in its buffers.

People usually handle this by adding a delay() after the publish call before going to sleep. Experimentally find a delay time that consistently works for you.
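A minimal sketch of that delay-then-sleep pattern, assuming a 5-second delay is enough on your network (the event name, data, and delay value here are placeholders to tune experimentally):

```cpp
// Publish, wait for the CC3000 to flush its buffers, then deep sleep.
// The 5000 ms figure is a guess; tune it for your own setup.
void loop() {
    Spark.publish("sensor-reading", "42");       // hypothetical event/data
    delay(5000);                                 // let the CC3000 send it
    Spark.sleep(SLEEP_MODE_DEEP, 6 * 60 * 60);   // wake again in ~6 hours
}
```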

Do you have a general timeline from the MCU to the CC3000, for instance, after any Spark commands to the cloud and the expected completion from the CC3000? Would the delay be on the order of 10 ms or 100 ms?

Other question: could the application at least know the command was sent? Or is the assumption that, by the time loop() is called, the command has already been passed over to the CC3000?

Thanks, that’s what I’ve been doing, using 10–15 seconds so far. Once the data has hit the CC3000, has all the DNS etc. already been done? If so, I should be able to keep the delay short, since it only needs time to send the data to the cloud, which is already connected.

At the moment I seem to be losing 50% or so of the data. I’ve been testing with a deep sleep time of 60 seconds (quicker than every 6 hours). I see the core wake and connect to breathing cyan, but nothing shows up in the event data, and then it goes to deep sleep after the timeout. Could this be a problem on my infrastructure side, or is there anything else to look at?

Yes, the socket to the cloud is already open, so no DNS resolution takes place and no new connection has to be made. The only thing that needs to be done is for the CC3000 to build a packet from the buffer and push it out the antenna. I don’t know how long this takes, but I would think one second would be plenty of time to send the packet and get the TCP ACK from the server.


Thanks Zach,

I’ll put a sniffer on my network to see if I can see any issues with the data getting out of the local network. I ran a test using a 2 minute sleep interval for 5 hours and I only got 38% of the events received. I’m re-running the test using USB power and not from 4xAA batteries to see if there is any difference.

I have run previous tests on my setup polling variables from 3 cores as fast as possible, and out of 1 million plus requests I was only losing a few hundred.

Definitely keep us updated. No idea why you’d be losing so many packets compared to polling for variables.

The delays suggested by others increase the likelihood the data is sent, but do not ensure it. I guess you are interested in whether the data is seen as well as sent? You’ll need an application-level ack on top of the underlying networking ACKs and NAKs. There are now ways to allow Node.js programs to call a function in your Spark code. I would do something like this:

Publish every minute until my Ack() function is called and then go back to sleep for 6 hours.
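A rough sketch of that ack loop, assuming a hypothetical cloud-callable "ack" function that your back end invokes once it has seen the event (the "ack" and "sensor-reading" names are illustrative, not a real protocol):

```cpp
// Keep publishing until the back end confirms receipt via Spark.function().
bool gotAck = false;

int ackHandler(String args) {
    gotAck = true;   // back end saw the event
    return 0;
}

void setup() {
    Spark.function("ack", ackHandler);   // hypothetical cloud-callable function
}

void loop() {
    if (gotAck) {
        gotAck = false;
        Spark.sleep(SLEEP_MODE_DEEP, 6 * 60 * 60);  // done for 6 hours
    } else {
        Spark.publish("sensor-reading", "42");       // retry until acked
        delay(60000);                                // once a minute
    }
}
```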

I once asked elsewhere about the semantics of Spark.publish(). Is Spark.publish() at-least-once or at-most-once? If three events are published, are they guaranteed to be seen in order, if at all? [I believe the answer is yes.] If two events are published, is it possible to see the second but never the first? [I *think* the answer is no.] Can one see duplicates? [No, I think.] This would be a useful addition to the documentation, if known.

Best info I’ve seen is here [quote=“Dave, post:4, topic:4403”]
The current messaging system is quite reliable, …

Thanks, I was thinking I might subscribe to my own cores’ events, that way we can ensure a full round trip. Not sure if that is implemented yet in the core, I haven’t tried that yet.

Just trying to set up a system to let me sniff the network packets and see if they appear to be leaving OK. I’ve tried the core on a different network, on different hardware (same core), and using a different ISP for internet traffic, and I seem to get the same issue of missing event messages. I’ll have some fun playing with Wireshark anyway to see what’s going on :smile:

Ok, been working on this a bit over the week and running various tests.

If the core is always powered, the connection is very reliable; the issue appears to be when trying to send data immediately after a reset or coming out of deep sleep.

I found that after the main loop starts running, something is still not connected all the way through to the cloud, so it is not always ready to receive the publish event data immediately, and sometimes the data gets lost.

The fix seems to be to add a delay before trying to send the data. I was using some test code like:

void loop() {
    unsigned long preDelay = 5000;          // ms to wait before publishing
    delay(preDelay);
    Spark.publish("test-event", "data");    // placeholder event name/data
    delay(10000 - preDelay);                // remainder of the 10 s cycle
    Spark.sleep(SLEEP_MODE_DEEP, 60);       // Core will deep sleep then reset
}

With a pre-publish delay of about 3 seconds or more it reliably sent the data. For shorter delays the data would quite often be lost; at 2 seconds about 50% of the events were not received.

So my fix is just to delay after start-up for 3 seconds before publishing the data and going back to deep sleep. That is running reliably now. :smile:

So my conclusion is something on the cloud side sometimes needs a few seconds to start-up after connection is established before any published data gets handled properly.



Maybe something like Spark.connected() and Spark.ready() can be used to decide if publishing occurs?
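One way to sketch that guard, for what it's worth (this only confirms the cloud connection is up, not that the event was received; names and delays are placeholders):

```cpp
// Skip the publish entirely if the cloud connection isn't up yet.
void loop() {
    if (Spark.connected()) {
        Spark.publish("sensor-reading", "42");  // hypothetical event
        delay(3000);                            // settling time before sleep
        Spark.sleep(SLEEP_MODE_DEEP, 60);
    }
}
```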



I’m still running in the default power-up mode, so the main loop() only gets called after the cloud connection has started, and Spark.connected() should be returning true immediately anyway. The core is in breathing cyan mode at this point.

In my application code’s state engine I also check inside the loop that Spark.connected() returns true, in case the connection is lost very quickly, so I can abort and go into power-down mode. I haven’t fully switched my code to manual connection mode yet, where I would control the connection myself; I need to do that so that if, say, the WiFi is not available, I can go back to sleep and not lose too much battery power.

I’m thinking something on the cloud API sometimes takes a few seconds to spool up before it is ready to process the event data. If anyone is interested, I can Wireshark the TCP/IP communications to compare a good run with a bad run.


Hi @chunda,

You’re absolutely right! I discovered this issue as well this last week; it’s an unwanted delay caused by a core asserting its place on the cloud. I built a fix for it last week, but I’m still testing it out; it should be rolled out in the coming weeks. :slight_smile:



Thanks @Dave,

I’ll give it a test too when the fix is live :smile:


Was a reliable fix ever found for this besides putting a lengthy delay before and after the publish call?



Just ran a test overnight, and out of 378 events none were missing with no delay before calling Spark.publish(), so the server-side issue appears to be resolved. I used IFTTT to store them to Google Drive as a spreadsheet to make the data easier to check :slight_smile:

Not sure if the core can subscribe to its own core messages; that would be the only way to confirm they made the round trip to the Spark servers. That doesn’t confirm delivery to any custom back end you may have, though. At the moment I have a sensor running with a 10-second delay before going back to deep sleep, and that is working fine.




you can indeed subscribe to your own events :wink:
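For anyone finding this later, a round-trip check along those lines might look something like this (the event name, handler, and delay are illustrative only):

```cpp
// Subscribe to our own event; the handler firing proves the event
// made the round trip through the Spark cloud.
volatile bool roundTripSeen = false;

void echoHandler(const char *event, const char *data) {
    roundTripSeen = true;   // our own event came back from the cloud
}

void setup() {
    Spark.subscribe("sensor-reading", echoHandler, MY_DEVICES);
}

void loop() {
    Spark.publish("sensor-reading", "42");   // hypothetical event/data
    delay(5000);                             // wait briefly for the echo
    if (roundTripSeen) {
        Spark.sleep(SLEEP_MODE_DEEP, 6 * 60 * 60);
    }
}
```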