How to know when Particle.publish() responds

I am using Particle.publish() with the WITH_ACK flag to retrieve responses from a DRF web API that I created to serve binary data.

I need some way to know when one request has completed its round trip so I can begin the next one. I know there is a hard 1000 ms delay required between back-to-back publishes; what I am wondering is whether this delay varies in my firmware, or whether I am somehow hitting a silent publish rate limiter.

Description of the problem

I need to know when a Particle.publish() has finished its round trip. For example:

void someFunction() {
    // Say, just for demonstration purposes, that I need to fetch the event
    // response for 50 items; this is a realistic use case for my scenario.
    for (size_t i = 0; i < 50; i++) {
        delay(1000); // hard publish rate limit, respect that
        Particle.publish("testEvent", "test", PRIVATE, WITH_ACK);
        // This is where I need to wait on the response: I need to use the
        // data inside of the response handler that I subscribed to this
        // event with before I make the next publish.
    }
}

Is there any way to be absolutely certain that a Particle.publish() has either:

  1. returned the response to the device, or

  2. failed the request?

I cannot make the next request without first being sure of one of these.

So far, here is my solution. Frankly, it is silly and there should be a better way to do this:

// After we make all 50 requests with a second in between:
uint64_t waitTime = numberOfRequests * 1000;
uint64_t requestWaitForResponseTimerStart = millis();
while (millis() - requestWaitForResponseTimerStart < waitTime) {
    delay(1000); // this is the only way to give the publishes any time;
                 // a smaller delay results in the responses not reaching the device
    if (detectMissingResponse() == -1) {
        // re-publish the request whose response never arrived
    }
}
I hope I don’t need to say that this is a terrible solution to the problem, but it works. Device OS should have some mechanism that keeps track of where the publishes are going and whether or not they are landing, instead of leaving us to retry two or three times when we think one might have failed.

The overarching problem here is that the device has no way of knowing the status of the external cloud. It just has a few vague rules that it must follow if you expect things to work correctly, and if you overlook one of those rules, your firmware ends up with lots of bugs and you end up doing a lot of head-scratching.

Proposed solution: expose the internal promise logic of Particle.publish() so it reflects what is actually happening with the cloud, instead of just giving loose usage guidelines in the documentation. Some boolean-returning functions would give us further insight, and I would be willing to contribute these changes if the OS itself is open source. There are lots of scenarios where the Particle.publish() setup could be better; it is defining how it could be better where I am struggling. Just some constructive criticism.

I think the best approach would be to configure the web API that processes your webhook to send a response back to Particle, which your device then subscribes to. This is configured under Integrations. More information is here: Particle.subscribe() - Cloud Functions


Before you call Particle.publish(), reset a flag:
pubSub_ResponseRx = false;

Within the subscription handler, set the flag to true:
pubSub_ResponseRx = true;

Then you can condition your next Particle.publish() on that flag, waiting until it is set (or a timeout expires) before publishing again.


I personally do this to wait the minimal amount of time before the device goes back to sleep after publishing data to the cloud. In other words… the device wakes up, publishes data, data is received by the backend through a web hook, the backend sends a response and the device subscribes to that response. That way I know for certain the data was received by my backend before the device sleeps again. This allows me to send new configuration parameters to the device despite the device waking/sleeping. A response is always sent to the device so the device knows either way if new configuration data is required or not. For my use case, if it never receives a response after 10 seconds, it just falls back asleep anyhow and tries again next time.

For your use case, you’ll have to decide if you want to re-send your prior data or just move on.

Does that make sense? I could probably provide more code snippets if you need.


One other item worth mentioning is that you may want to have a look at GitHub - rickkas7/PublishQueueAsyncRK: A library for asynchronous Particle.publish

It handles the metering of events every 1 second as well as waiting for the ACK before removing an event from the Queue.

I recommend using WITH_ACK. The worker thread will wait for the ACK from the cloud before dequeuing the event. This allows for several tries to send the event, and if it does not work, the send will be tried again in 30 seconds if cloud-connected. New events can still be queued during this time.

This makes sure the data made it to the Particle Cloud, but it doesn’t make sure it made it to your own backend API that processes the webhook. I think the only way to guarantee that is to wait until the response is received by your device via Particle.subscribe().

This will get you confirmation of publishes to the Particle cloud. If you want end-to-end acknowledgements you will need to use webhooks and webhook responses as suggested by @jgskarda.

You can see an example of this in cloud_service.h/cpp of GitHub - particle-iot/fw-config-service: Library that allows creation of configuration objects, which is used in Tracker One, and GitHub - particle-iot/tracker-edge: Particle Tracker reference application. The library lets you publish in a non-blocking fashion with a requested level of ack (none, cloud, or end-to-end via webhooks) and with callbacks on success/fail/timeout. Obviously, if you implement end-to-end acks via webhooks you also need matching webhooks and server-side code on your end, but that can be worth the effort for critical data.

Hi @joel, glad to hear from you. I have some more context that has become relevant; consider a use case as follows:

The task:

Consume large amounts of binary data, over HTTPS

The most efficient solution (at least as I see it):

  1. Convert the binary data into indexed, Base64-encoded chunks
  2. Ask our external API for the indexed data in order; the order of the responses is not important, since each response contains its indexing information
  3. Send the Base64-encoded data as ASCII, along with the index, via the command string in JSON format; we make that call to the Particle cloud from the same API referenced earlier
  4. Save the data once it is received on the B402

The discrepancy I have with the current cloud functionality

Maybe I am misunderstanding some of the documentation, but I have dealt extensively with Particle.publish(), Particle.subscribe(), and Particle.function(), and the conclusion that my peers and I have arrived at through our research is as follows:


It is a strong possibility that there is some functionality whose implementation I am misunderstanding, but if my points are valid, Particle.subscribe() seems relatively redundant, since calling a Particle.function() on the requesting device with the response data is more efficient for multiple reasons.

Thanks in advance,

Disclaimer: I know that the API is making an HTTP request to the Particle cloud, which in turn calls my device; I am not directly communicating with the device over HTTP (in reference to the note at the top right of the board beside function()).

I would tend to agree that Particle.function is better for reliable unicast to particular devices as compared to Particle.subscribe. Neither is particularly suited for sending large binary blobs, though you can hack together a solution.

One idea to consider (which I’ve never used but seems OK in theory) is to share a decryption key via the Particle cloud connection and use regular HTTP to download the encrypted file directly. So the Particle function might be “go get a blob from this URI encoded with this key”.

That doesn’t necessarily mean you should be passing around large binary blobs regularly; there are still cellular data limits. But it is potentially a little more straightforward and efficient when you do, as opposed to chaining across innumerable Particle.function() requests.

You can Particle.subscribe() to a unique event topic per device (e.g. prefix it with your device ID) so you aren’t broadcasting to the entire fleet.


I will explore the possibility of sharing a decryption key later, but for now the hacked-together solution described in the initial post is serving my needs (usually files under 1 MB, though the solution can easily handle larger files).
