Are there any libraries or has anyone done anything like caching or storing publish messages until the cloud is ready?
I have a situation where publish events could be generated before the device is connected to the cloud, and it’d be nice to store them instead of waiting for the cloud or missing the message. I imagine others run into this occasionally as well. I can also imagine a low-power situation where a user waits until a few publishes have built up before turning on the WiFi and publishing.
I found the PublishQueue library, which looks like it covers the storage part of the equation, so it shouldn’t be too much trouble to adapt. I’m happy to build one myself and make it available, but I’d rather not re-invent the wheel.
Thanks
This sounds like it would be really useful. My Cores are constantly going online/offline and if they were able to queue/cache messages to ensure send/delivery it would negate the WiFi connectivity problems I seem to be having.
I uploaded a new library to Particle called “PublishManager” that implements a queue to store messages while your Core is offline. I tested it on a Photon, but it doesn’t reference any hardware, so I don’t see why it wouldn’t work on the Core or any other platform.
Let me know how it works for you.
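Basic usage looks roughly like this. This is only a sketch: the constructor and the publish() method name are my shorthand here and may differ slightly from the shipped examples, so check those for the real API.

```cpp
#include "Particle.h"
#include "PublishManager.h"

PublishManager publishManager;      // construction may differ -- see the library examples

double readTempF() { return 72.4; } // stand-in sensor read for the sketch

unsigned long lastPublish = 0;

void setup() {
}

void loop() {
    // Queue the event whether or not the cloud is up; the manager sends
    // cached events once the connection comes back.
    if (millis() - lastPublish > 60000) {   // roughly once a minute
        publishManager.publish("temperature", String(readTempF(), 1));
        lastPublish = millis();
    }
}
```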
Software timers are not implemented on the Core, so that library may have to be refactored to use some other callback mechanism.
@bveenema, adding delays to allow the cache to clear is not a good mechanism IMO. You may want to add a class function that returns a bool indicating, for example, whether there is room in the queue for another publish or whether the queue is empty.
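Something along these lines, for example (names are hypothetical, not your current API):

```cpp
#include "Particle.h"
#include <queue>

class PublishManager {
public:
    // true when every cached event has been published
    bool cacheEmpty() const { return _cache.empty(); }
    // true when another publish can be cached without exceeding the cap
    bool hasRoom() const { return _cache.size() < kMaxCached; }
private:
    static const size_t kMaxCached = 10;   // illustrative cap
    std::queue<String> _cache;
};
```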
@bveenema, you make use of std::queue, which indirectly uses dynamic memory allocation for the queued items. This may or may not lead to heap fragmentation after some period of use, so it may be worth adding a warning to the library.
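To illustrate the concern (a rough sketch, not your library’s internals): every push copies the item into heap memory that std::queue’s underlying deque allocates, and any String members allocate their own buffers on top of that, so long runs of push/pop interleaved with other heap users can leave holes the allocator cannot reuse.

```cpp
#include "Particle.h"
#include <queue>

struct QueuedEvent {
    String eventName;
    String data;
};

std::queue<QueuedEvent> cache;

void queueDemo() {
    QueuedEvent e;
    e.eventName = "temp";
    e.data = "72.4";
    cache.push(e);   // heap allocations happen here (queue node + String buffers)

    if (Particle.connected() && !cache.empty()) {
        Particle.publish(cache.front().eventName, cache.front().data, PRIVATE);
        cache.pop(); // frees the node, but the freed hole may be too small
                     // for later, larger allocations -> fragmentation risk
    }
}
```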
@peekay123 Thanks for the feedback.
I only added the delays in the examples. In a real program, I would expect other operations to be going on.
This would also be good for knowing whether it is safe to go to sleep.
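With an accessor like the hypothetical cacheEmpty() sketched above, the sleep check would look something like this (sketch only):

```cpp
// Only power down once everything queued has actually gone out.
void sleepIfDone(PublishManager& manager) {
    if (manager.cacheEmpty()) {
        System.sleep(SLEEP_MODE_DEEP, 15 * 60);   // deep sleep for 15 minutes
    }
}
```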
std::queue seems like an appropriate container for the application. Would ensuring the cache is empty every so often help in reducing fragmentation?
Shoot! I wasn't aware of that. I don't think I have any Cores anymore, so I wouldn't be able to test anything. I may be able to make a #define aware of the Core context and handle it that way. I'd want someone to test it though.
It is "an appropriate container" but this is running on a small embedded memory-constrained environment so heap fragmentation can sneak up easily. It is not clear if dequeing releases the allocate memory or not. Ultimately, long term testing is the best way to ensure reliability.
@daneboomer or another willing subject with a Core.
I’ve created v0.0.3 of PublishManager on GitHub that implements a method for using the library with a Core. I don’t own any Cores anymore, so if someone is willing to test it, I will publish it to the Particle Cloud. I am testing the development code to make sure nothing broke on Photons.
I opted to use a process() method for Cores, as I was finding it finicky to implement the FreeRTOS timer in the library. There is an example file, HowToUseOnCore, that should work.
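On a Core the loop just needs a regular call to process(), along these lines (a sketch; the rest of the API is assumed, so see HowToUseOnCore for the actual example):

```cpp
#include "Particle.h"
#include "PublishManager.h"

PublishManager publishManager;   // construction may differ -- see HowToUseOnCore

void setup() {
}

void loop() {
    // No software timers on the Core, so process() must be called
    // regularly from loop() to push cached events out to the cloud.
    publishManager.process();
}
```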
Thanks
Updated to v1.0.0, which removes the std::queue in favor of a circular buffer.
More details here: [New Library] PublishManager
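For anyone curious, the basic idea of the change is a fixed-size ring of pre-allocated slots instead of heap-allocated queue nodes. A minimal sketch of the concept (not the library’s actual implementation; names and sizes here are illustrative):

```cpp
#include "Particle.h"
#include <string.h>

struct CachedEvent {
    char name[32];   // event name, truncated to fit the fixed slot
    char data[64];   // event payload, truncated to fit the fixed slot
};

const size_t kCapacity = 10;
CachedEvent buffer[kCapacity];   // all storage reserved up front -- no heap churn
size_t head = 0;                 // index of the oldest cached event
size_t count = 0;                // number of events currently cached

bool cachePush(const char* name, const char* data) {
    if (count == kCapacity) return false;   // full: drop (or overwrite, per policy)
    CachedEvent& slot = buffer[(head + count) % kCapacity];
    strncpy(slot.name, name, sizeof(slot.name) - 1);
    slot.name[sizeof(slot.name) - 1] = '\0';
    strncpy(slot.data, data, sizeof(slot.data) - 1);
    slot.data[sizeof(slot.data) - 1] = '\0';
    count++;
    return true;
}

bool cachePublishOldest() {
    if (count == 0 || !Particle.connected()) return false;
    Particle.publish(buffer[head].name, buffer[head].data, PRIVATE);
    head = (head + 1) % kCapacity;
    count--;
    return true;
}
```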