"WITH_ACK or not to ACK, that is the question"
So I'm long overdue in migrating from the legacy PublishQueueAsyncRK to the PublishQueuePosix library. As I transition the code to make it more robust, I was wondering whether WITH_ACK has the same behavior in the new library as in the original.
Per the GitHub README of the original library, PublishQueueAsyncRK:
I recommend using WITH_ACK. The worker thread will wait for the ACK from the cloud before dequeing the event. This allows for several tries to send the event, and if it does not work, the send will be tried again in 30 seconds if cloud-connected. New events can still be queued during this time.
Can I assume the new PublishQueuePosix library has the same behavior under the hood as the original, i.e., events are only removed from the queue after an ACK?
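For context, here is roughly the calling pattern I'm planning, based on the examples in the PublishQueuePosix README (the event name and payload are placeholders from my own logs, and the pacing is simplified; my real code sleeps between readings):

```cpp
#include "Particle.h"
#include "PublishQueuePosix.h"

SYSTEM_THREAD(ENABLED);

void setup() {
    PublishQueuePosix::instance().setup();
}

void loop() {
    PublishQueuePosix::instance().loop();

    // Queue an event with WITH_ACK; if the behavior matches the old
    // library, it should only be dequeued after the cloud ACKs it.
    PublishQueuePosix::instance().publish("DataArray", "[{MY DATA}]", PRIVATE | WITH_ACK);

    delay(5 * 60 * 1000); // placeholder pacing, not my real sleep logic
}
```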
Both methods are considered a single data operation, correct? I assume so given this: Particle.function() - Cloud functions | Reference | Particle
Each publish uses one Data Operation from your monthly or yearly quota. This is true for both WITH_ACK and NO_ACK modes.
So what would be a scenario where someone would prefer NO_ACK over WITH_ACK? To me it seems like you'd always want WITH_ACK. Or, said differently, what's the disadvantage of using WITH_ACK?
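The only case I can picture is something like a frequent, loss-tolerant heartbeat where waiting and retrying isn't worth it. This is just a hypothetical sketch (the event name is made up, and I'm assuming the NO_ACK flag passes through to Particle.publish the same way WITH_ACK does):

```cpp
// Hypothetical: high-rate, loss-tolerant telemetry where dropping
// an occasional event is fine and retry/ACK overhead isn't wanted.
// Assumes PublishQueuePosix forwards NO_ACK to Particle.publish.
PublishQueuePosix::instance().publish("heartbeat", "ok", PRIVATE | NO_ACK);
```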
A related question... when are files written to the flash file system? For example, I set the parameters like this:
PublishQueuePosix::instance().setup();
PublishQueuePosix::instance().withRamQueueSize(10);
PublishQueuePosix::instance().withFileQueueSize(50);
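Side note: in the library README, the with...() configuration calls appear before setup(). I don't know whether the ordering matters in practice, but for completeness this is the ordering the README shows:

```cpp
// Ordering per the PublishQueuePosix README: configure first, then setup()
PublishQueuePosix::instance().withRamQueueSize(10);
PublishQueuePosix::instance().withFileQueueSize(50);
PublishQueuePosix::instance().setup();
```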
My understanding is that only once the RAM queue fills up with more than 10 events will it start writing to flash. However, per the serial log, I'm currently getting this:
0001085058 [app.pubq] TRACE: publishCommon eventName=DataArray eventData=[{MY DATA}]
0001085060 [app.pubq] TRACE: fileQueueLen=1 ramQueueLen=1 connected=1
0001085149 [app.pubq] TRACE: writeQueueToFiles fileNum=4
This indicates it just wrote file #4 to the flash file system, despite the RAM queue holding at most 10 events and the device being connected at the time. I just sleep, wake up, take readings, add the data to the queue, and fall back asleep. Then every 20 minutes it also connects and publishes the data. In this scenario, why would the file be written to the flash file system?
Final question, per the GitHub repo:
Setting it to 0 means all events will be written to the file system immediately to reduce the chance of losing an event. This has higher overhead and can cause flash wear if you are publishing very frequently.
How frequent is too frequent before this flash wear becomes an actual concern? Should I care whether it writes to flash if I normally capture an event every 5 minutes?