Conceptual method to upgrade PublishQueue functionality to reduce data operations

I wonder if there is an opportunity to update the PublishQueueAsyncRK library to append a member to a JSON array with each call of PublishQueue.publish().

For example: a device takes readings every 5 minutes, calls PublishQueue.publish() to store the data in a buffer, and falls asleep (does not connect to the cloud to publish data). Then every hour it wakes up, takes readings, and connects to the cloud to publish all the readings taken over the last hour (i.e. 12 individual publish events spaced 1 second apart, as defined by PublishQueue). The PublishQueue library written by @rickkas7 works great for this use case and makes it very easy to store the data and publish at a later date. The complexity is abstracted away, making it simple to use.

Prior to the price structure change, I think the amount of data used would be about the same whether it was 12 individual publish events or 1 publish event containing the data from all 12 sensor readings. Since billing used to be in MB/month, having PublishQueue send 12 individual publish events was not a big deal. However, now that Particle is metering data operations per month and is not focused much on MB/month, the ideal behavior in the example above would be to transmit the 12 packets of data as a single publish event rather than 12 individual ones. This would reduce data operations as well as improve backend throughput, since it would trigger a single webhook with more data rather than 12 individual webhooks. I’d just update the backend to parse the data out of the JSON array accordingly.

Has anyone in the Particle Community investigated this or implemented this yet? I was just about to start looking at this so figured I’d ask.

I.e. instead of publishing something like this 12 times using PublishQueue.publish():
{ "Sensor1": 1234, "Sensor2": 5678, "DateTime": 1623000246 }
{ "Sensor1": 1234, "Sensor2": 5678, "DateTime": 1623000546 }
{ "Sensor1": 1234, "Sensor2": 5678, "DateTime": 1623000846 }

I would publish this once, building the payload with either the Particle JSONWriter API or rickkas7’s JsonParserGeneratorRK library:
[ {"Sensor1": 1234, "Sensor2": 5678, "DateTime": 1623000246}, {"Sensor1": 1234, "Sensor2": 5678, "DateTime": 1623000546}, {"Sensor1": 1234, "Sensor2": 5678, "DateTime": 1623000846}, ...]

The effect would be 1 data operation vs 12.
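As a sketch of the batching idea (plain C++ with snprintf rather than either JSON library, and a hypothetical Reading struct, so the payload shape matches the example above):

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical reading struct; field names mirror the example payloads.
struct Reading {
    int sensor1;
    int sensor2;
    long dateTime;  // Unix epoch seconds
};

// Build one JSON array from all buffered readings so a single
// Particle.publish() call (one data operation) can carry them.
std::string buildBatchJson(const std::vector<Reading>& readings) {
    std::string json = "[";
    char buf[96];
    for (size_t i = 0; i < readings.size(); i++) {
        const Reading& r = readings[i];
        snprintf(buf, sizeof(buf),
                 "{\"Sensor1\":%d,\"Sensor2\":%d,\"DateTime\":%ld}",
                 r.sensor1, r.sensor2, r.dateTime);
        if (i > 0) json += ",";
        json += buf;
    }
    json += "]";
    return json;
}
```

The backend webhook handler would then iterate over the array instead of expecting one object per event.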

Any thoughts or guidance is always appreciated!


That is certainly feasible, however I would probably do it a different way since you’re using the Boron.

The PublishQueuePosix library is a newer version of PublishQueueAsync that uses the Gen 3 file system instead of the variety of storage methods in PublishQueueAsync. However, the reason it’s useful in this case is that it’s built on a bunch of smaller libraries: one for managing the file queue, one for background publishing, etc.

Writing the events to files would be unchanged. However, when publishing from queued files, instead of grabbing a single file and publishing it, you’d grab multiple files, add them to a JSON array, and send that instead.
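A rough sketch of that publish path, with the file queue modeled as a std::deque of already-serialized event strings (the real library reads these from the POSIX file system; the function and parameter names here are made up):

```cpp
#include <deque>
#include <string>

// Each entry models the contents of one queued event file, already
// serialized as a JSON object. Pop several at once and wrap them in
// a JSON array so one publish carries them all.
std::string combineQueuedEvents(std::deque<std::string>& fileQueue,
                                size_t maxEvents) {
    std::string payload = "[";
    size_t taken = 0;
    while (!fileQueue.empty() && taken < maxEvents) {
        if (taken > 0) payload += ",";
        payload += fileQueue.front();
        fileQueue.pop_front();  // in the real library: delete the queued file
        taken++;
    }
    payload += "]";
    return payload;
}
```

The per-event file format stays exactly as the library writes it; only the dequeue-and-publish step changes.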


@rickkas7 Very good. Thanks for the quick reply. Nice to see some new/improved libraries for this! I didn’t know about PublishQueuePosix yet, so good to see. I think I’ll first migrate what I have to that library, or, to keep it simple, get a sample sketch going using it on my Boron. Once I get it functional I can dig through the source code of the library to see what it would take to create the JSON array from all files in the buffer and publish once.

Ultimately, it would be nice to set a parameter when initializing the object within setup() to choose a “publish mode”: individual events spaced 1 second apart (as it does today) or an array of events. That way a user could configure which mode they want to use. The library would also need to check how many bytes the JSON payload would be and, if necessary, chunk it up into more than one publish event.
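The chunking could look something like this sketch (plain C++; maxBytes would be the publish size limit for your Device OS version, which this code simply takes as a parameter, and each individual event is assumed to fit within it):

```cpp
#include <string>
#include <vector>

// Pack serialized event objects into as few JSON-array payloads as
// possible while keeping each payload at or under maxBytes.
std::vector<std::string> chunkEvents(const std::vector<std::string>& events,
                                     size_t maxBytes) {
    std::vector<std::string> payloads;
    std::string current = "[";
    for (const std::string& e : events) {
        // Bytes this event would add: separator comma (if any) plus
        // the event itself plus the closing bracket.
        size_t extra = (current.size() > 1 ? 1 : 0) + e.size() + 1;
        if (current.size() > 1 && current.size() + extra > maxBytes) {
            current += "]";
            payloads.push_back(current);  // flush the full payload
            current = "[";
        }
        if (current.size() > 1) current += ",";
        current += e;
    }
    current += "]";
    if (current != "[]") payloads.push_back(current);
    return payloads;
}
```

Each returned payload would then go out as its own publish event, still far fewer data operations than one per reading.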

I’m fairly new to looking through and modifying the source code of libraries; I’ve done it a few times and updated a few things “behind the scenes,” but nothing major. It certainly isn’t my strong suit, but this gives me a reason to try/refine my skills a bit. I’ll let you know if I can figure it out. If so, maybe it becomes a pull request to that repo so others can benefit as well.

That all said… if you want to make an “upgrade” to the PublishQueuePosix library with this functionality I certainly won’t be disappointed. :wink:

Thinking about this some more, I wouldn’t add it to the publishing library, though you can.

In the normal case of periodic publishing, you’d want to plan your data operations around events going out on a regular schedule, so having them bunch together in situations where connectivity is lost isn’t all that useful.

However, if you are in a situation where you always group data for batch uploading (wake, measure, sleep; then, after a number of measurements, connect and publish), there’s a better way to do it.

Instead of saving each measurement in a publish, save the measurements in retained memory, EEPROM, or the flash file system. When you’re ready to upload, create the JSON object and publish that. The logic is way simpler and it’s more efficient.
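For example (a plain C++ sketch; on a Particle device the ReadingStore instance would be declared `retained`, or persisted to EEPROM or the file system, and the resulting string handed to a single Particle.publish() — the names and sizes here are illustrative, not from any library):

```cpp
#include <cstdio>
#include <string>

// Fixed-size buffer of readings, laid out as plain data with no heap
// use so it could live in retained memory across sleep cycles.
const int MAX_READINGS = 12;

struct ReadingStore {
    int count;
    struct { int sensor1; int sensor2; long dateTime; } readings[MAX_READINGS];
};

// Record one measurement; returns false when the buffer is full.
bool addReading(ReadingStore& store, int s1, int s2, long t) {
    if (store.count >= MAX_READINGS) return false;
    store.readings[store.count].sensor1 = s1;
    store.readings[store.count].sensor2 = s2;
    store.readings[store.count].dateTime = t;
    store.count++;
    return true;
}

// Serialize everything buffered into one JSON array and reset the
// store; the caller would pass the string to a single publish.
std::string drainToJson(ReadingStore& store) {
    std::string json = "[";
    char buf[96];
    for (int i = 0; i < store.count; i++) {
        snprintf(buf, sizeof(buf),
                 "{\"Sensor1\":%d,\"Sensor2\":%d,\"DateTime\":%ld}",
                 store.readings[i].sensor1, store.readings[i].sensor2,
                 store.readings[i].dateTime);
        if (i > 0) json += ",";
        json += buf;
    }
    json += "]";
    store.count = 0;
    return json;
}
```

With this approach there is no publish queue at all: each wake cycle calls addReading() and goes back to sleep, and the hourly connected wake calls drainToJson() and publishes once.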
