Compressing and breaking data into chunks prior to transmission

Hi all! I’ve read up on the recommendations in this forum for transferring larger files (10s to 100s of KB) from an Electron and/or E-Series using UDP/TCP, but I’m still leaning toward sending via Particle.publish() calls because of what I perceive to be the improved robustness of publish. If I’m interpreting that incorrectly, I’d love to hear that feedback too!

But assuming that my perception of the benefits of Particle.publish() is valid, I’m wondering what other recommendations the community has. For example, does it make sense to encode sensor readings in something like ASCII85 to buy a little more bang for my buck? When 16-bit/2-byte sensor records are translated to human-readable decimal numbers, we end up with many more digits, each of which currently has to be transferred at a cost of one byte per digit. A more compact encoding scheme prior to transmission could greatly reduce this, saving us time and probably lots of $$$, and since we have a custom back-end picking everything up anyway, we can just decode on that side.
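To make that concrete, here’s the kind of packing I have in mind (a rough sketch; packReadings and the buffer sizes are placeholders I made up): instead of printing each reading as decimal text, send its two raw bytes:

```cpp
#include "Particle.h"

uint16_t readings[100];  // example sensor records
uint8_t  packBuf[200];   // 2 bytes per reading

// Pack each 16-bit reading as two raw bytes (big-endian) instead of
// up to 5 decimal digits plus a delimiter; returns the packed length.
size_t packReadings(const uint16_t *src, size_t count, uint8_t *dst) {
    for (size_t i = 0; i < count; i++) {
        dst[2 * i]     = src[i] >> 8;    // high byte
        dst[2 * i + 1] = src[i] & 0xFF;  // low byte
    }
    return 2 * count;
}
```

A reading like 54321 costs six bytes as the text "54321," but only two bytes packed, so that’s roughly a 3x saving before any further encoding.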

Would greatly appreciate the community’s thoughts on this! I hear that a feature called something like Particle.stream() is coming to our rescue, though not for a few months… Thanks!

Yes, and this has been proposed as a way to send binary payloads as strings in multiple threads.

However, with Particle.publish() you are not only limited to relatively small chunks of data that have to map into a subset of possible/allowed bytes, you also need to adhere to the rate limit of one publish per second, each carrying at most 255 bytes of Base85-encoded data (equivalent to 204 bytes of raw binary, since Base85 encodes 4 raw bytes as 5 characters).
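A minimal sketch of how that could look (untested here, and the function and event names are just placeholders): Ascii85 maps every 4 raw bytes to 5 characters in the printable range '!'..'u', so 204 raw bytes fill a 255-character publish exactly:

```cpp
#include "Particle.h"

// Minimal Ascii85 encoder: every 4 raw bytes become 5 characters in
// the printable range '!'..'u'; a short final group of n bytes
// becomes n+1 characters. `out` needs (len + 3) / 4 * 5 + 1 bytes.
size_t ascii85Encode(const uint8_t *in, size_t len, char *out) {
    size_t o = 0;
    for (size_t i = 0; i < len; i += 4) {
        size_t n = (len - i < 4) ? (len - i) : 4;  // bytes in this group
        uint32_t v = 0;
        for (size_t j = 0; j < 4; j++)             // pad short group with 0
            v = (v << 8) | (j < n ? in[i + j] : 0);
        char group[5];
        for (int j = 4; j >= 0; j--) {             // extract base-85 digits
            group[j] = char('!' + v % 85);
            v /= 85;
        }
        for (size_t j = 0; j <= n; j++)            // n+1 chars for n bytes
            out[o++] = group[j];
    }
    out[o] = '\0';
    return o;
}

// 204 raw bytes encode to exactly 255 characters (204 / 4 * 5).
const size_t CHUNK_RAW = 204;

void sendBuffer(const uint8_t *buf, size_t len) {
    char encoded[256];
    for (size_t off = 0; off < len; off += CHUNK_RAW) {
        size_t n = (len - off < CHUNK_RAW) ? (len - off) : CHUNK_RAW;
        ascii85Encode(buf + off, n, encoded);
        Particle.publish("chunk", encoded, PRIVATE);  // placeholder event name
        delay(1000);  // stay under the 1 publish/second rate limit
    }
}
```

On the receiving side, your back-end just reverses the mapping and reassembles the chunks in order.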


Thanks, @ScruffR, I didn’t think to search “base85”—sorry.

I can live with the one-publish-per-second rule and just spend a few minutes transmitting, I think, if this is the more robust and Particle-preferred way to do it. TCP is tempting, but the Particle docs’ warnings about potential increased data usage, plus the fact that error/dropped-data handling looks tough, have me leaning toward Particle.publish(). That said, I’d be quite interested to hear if you or others recommend trying the TCP route. It looks like a lot of dev effort, so I’d be very happy to be able to choose one over the other early on if possible. Thanks again!

Unlike UDP, where packets can be dropped, TCP is quite reliable, as the protocol ensures no packets get lost. I’ve written some code that transfers 70+ KB JPEG images via TCP on an Electron and have no issue transmitting that. The image just needs chopping into 512-byte chunks and sending one chunk after the other - no data lost.
It’s just open connection, send chunks one by one, close connection, done - that simple.
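The skeleton of that transfer looks something like this (a simplified sketch rather than the actual code; the host and port are placeholders, and real code would add timeouts and retries):

```cpp
#include "Particle.h"

TCPClient client;

// Send a buffer in 512-byte chunks over a single TCP connection.
bool sendImage(const uint8_t *img, size_t len) {
    if (!client.connect("my-server.example.com", 8080)) {  // placeholder host/port
        return false;                                      // connect failed
    }
    const size_t CHUNK = 512;
    for (size_t off = 0; off < len && client.connected(); off += CHUNK) {
        size_t n = (len - off < CHUNK) ? (len - off) : CHUNK;
        client.write(img + off, n);  // TCP handles delivery and ordering
        Particle.process();          // service the cloud connection meanwhile
    }
    client.stop();                   // close the connection
    return true;
}
```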

The biggest advantage of Particle.publish() in this regard is probably the security aspect. TCP transfer is (by default) unencrypted, while all Particle cloud communication is encrypted. If you need encryption with TCP you’d need to put in a lot of effort, while it comes free with the cloud features.


Does the Particle Console provide a way to easily decode compressed data (say, modbus registers compressed with zlib on the Particle device), or is it easier to just pass it through and decompress it at the far end? It seems most logical to decompress at the receiving end, but I like being able to see the actual data on the Particle Console.
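(For reference, the device side I have in mind is roughly the following; it assumes a zlib port such as miniz is linked into the build, and compressRegs is just a name I made up:)

```cpp
#include <zlib.h>
#include <cstdint>

uint16_t regs[64];  // raw modbus register values

// Deflate the register block; returns the compressed length, or 0 on
// failure. The caller would then encode and publish the result.
size_t compressRegs(uint8_t *out, size_t outCap) {
    uLongf destLen = outCap;
    int rc = compress(out, &destLen,
                      reinterpret_cast<const Bytef *>(regs), sizeof(regs));
    return (rc == Z_OK) ? destLen : 0;
}
```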

I haven’t managed to use the Console for much more than simple troubleshooting (e.g., are data coming in at all?). I would be very surprised (but pleased!) to learn that the Console could handle anything other than serving as a visualization of the Particle Cloud gateway.

Particle just published a blog post yesterday on new Console features, but I’m not sure anything in it fulfills your needs: https://blog.particle.io/2018/09/19/announcing-new-device-cloud-features-two-step-authentication-revamped-real-time-event-logs/.

Nope, while that might be cool, that's not what the console is meant for.