PublishQueueAsyncRK and Large Datasets


@rickkas7 Dude this new library is totally Awesome!


Would this be relatively easy to adapt to send to something that isn’t Particle? My particular use case is storing GPS coordinates and then sending them back to a server I host privately, skipping the cloud (for the data, anyway; I’ve got some cloud functions to grab the current coordinates, reboot the device, etc.). For the sake of simplicity, though, let’s just assume I’m not going to publish to the Particle cloud.

Or is there a library more suited to the task? I’m assuming that I will frequently be out of cellular range, but I want it to queue up data to be sent, potentially to an SD card (whatever works, really). This isn’t for an Electron though; it’s going through a hotspot (it has multiple uses :-P).


@rickkas7 Just had an idea while looking through the library, is there any issue with forking the library and replacing the following

  • Particle.connect() -> Mesh.connect()
  • Particle.publish() -> Mesh.publish()

It would be great to have a similar library for Mesh only messages.


@emile, the primary reason for the library is to stay within the 1-publish-per-second rate limit set by Particle. No such limit exists for Mesh publishes. Also, Mesh publishes have no ACK mechanism, so they are fast. What is your use case for the new library?
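For reference, typical usage of the library looks something like this, if I recall the README correctly (the buffer size and event name here are arbitrary, so double-check against the actual docs):

```cpp
#include "PublishQueueAsyncRK.h"

// Retained buffer so queued events can survive reset/sleep
// (2048 bytes is an arbitrary choice for this sketch)
retained uint8_t publishQueueRetainedBuffer[2048];
PublishQueueAsync publishQueue(publishQueueRetainedBuffer, sizeof(publishQueueRetainedBuffer));

void setup() {
    publishQueue.setup();
}

void loop() {
    // Queued locally, then published from a worker thread at no more
    // than one event per second once the cloud connection is up.
    publishQueue.publish("gpsData", "{\"lat\":0,\"lon\":0}", PRIVATE, WITH_ACK);
    delay(60000);
}
```

The point is that your application code can fire events as fast as it likes; the queue handles the pacing and the offline buffering.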


@peekay123 Ah ok, never mind. I was confused and thought Particle.publish and Mesh.publish operated in a similar fashion.

I am only using Mesh.publish right now to pass data from the edge devices to the gateway. Having the ACK would be great because I’m currently publishing and then resetting my counters (not ideal).


@emile, you can create an ACK by having the gateway do a Mesh.publish() with an “ACK” message back to the node. I believe this was covered in a topic somewhere but I can’t recall where.
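On the node side, something along these lines might work (untested sketch; the event names "sensorData"/"sensorAck" and the timeout are my own, not from any library):

```cpp
#include "Particle.h"

// Node-side sketch of an application-level ACK over Mesh publishes.
// The gateway would Mesh.subscribe("sensorData", ...) and, in its
// handler, Mesh.publish("sensorAck", ...) back to the mesh.
volatile bool ackReceived = false;

void ackHandler(const char *event, const char *data) {
    ackReceived = true;
}

void setup() {
    Mesh.subscribe("sensorAck", ackHandler);
}

// Publish and block (briefly) until the gateway acknowledges,
// or the timeout expires. Returns true on ACK.
bool publishWithAck(const char *data, unsigned long timeoutMs = 2000) {
    ackReceived = false;
    Mesh.publish("sensorData", data);

    unsigned long start = millis();
    while (!ackReceived && (millis() - start) < timeoutMs) {
        Particle.process();
    }
    return ackReceived;
}

void loop() {
    if (publishWithAck("42")) {
        // Only reset the counters once the gateway has confirmed receipt
    }
}
```

That way the counters only get reset after the gateway confirms receipt, rather than immediately after publishing.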


Great idea. It does add a little data overhead on the network, but I’ll take it for now.

Thank you again!