Hello,
I'm developing a "maintenance monitor" for one of our products using an M-SoM on a Muon with HAT boards to collect 8 temperature and 8 current (amp) readings. The system is headless and uses Blynk as the interface for:
- Viewing sensor data
- Setting thresholds for alarms when parameters deviate from expected ranges
Current Setup:
- Sensor data stream: ~250 bytes
- Limit-setting data stream: ~250 bytes
- Communication: Cellular only (no Wi-Fi or LAN expected)
- Goal: Non-intrusive monitoring that supports remote diagnostics without interfering with product controls
Challenges:
I'm looking for best practices to reduce data usage while maintaining good resolution. Some ideas I'm considering:
- Sending data at 1-minute or 5-minute intervals
- Allowing on-the-fly adjustment of data frequency (e.g., increase frequency temporarily during troubleshooting)
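The on-the-fly frequency idea above can be kept very simple: hold the publish interval in a variable that a remote command (e.g. a cloud function or a Ledger update) can change at runtime. A minimal sketch in plain C++, with assumed names and limits (not a real Particle API):

```cpp
// Sketch of a runtime-adjustable publish scheduler. The interval defaults to
// 5 minutes; a technician can temporarily raise the rate for troubleshooting.
struct PublishScheduler {
    unsigned long intervalMs = 5UL * 60UL * 1000UL;  // default: 5 minutes
    unsigned long lastPublishMs = 0;

    // Called from a remote command; clamp to a sane range (assumption: 1-60 min)
    void setIntervalMinutes(int minutes) {
        if (minutes >= 1 && minutes <= 60) {
            intervalMs = (unsigned long)minutes * 60UL * 1000UL;
        }
    }

    // Call from the main loop with the current millisecond tick
    bool due(unsigned long nowMs) {
        if (nowMs - lastPublishMs >= intervalMs) {
            lastPublishMs = nowMs;
            return true;
        }
        return false;
    }
};
```

In firmware you would call `due(millis())` each loop iteration and publish when it returns true; the remote handler just calls `setIntervalMinutes()`.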
Questions:
- Does this data volume (~500 bytes per cycle) seem excessive for a cellular connection?
- Are there recommended strategies for adaptive data throttling or event-driven reporting to optimize bandwidth?
- Any suggestions for efficient encoding or compression techniques that work well in embedded systems?
The goal is to keep the interface simple for technicians and avoid introducing a second UI, which could be confusing.
The recommended way to use Blynk is with their blueprint and firmware, which uses a webhook integration. See the Blynk integration for more information. Your data usage will mostly be measured in data operations: each payload can be up to 1024 bytes and uses one data operation.
You get 100,000 data operations per month on the free plan; on the Basic plan, you get 720,000. You can find more information on the pricing plans page, which includes a calculator that estimates the number of blocks you will need based on the number of devices and the publishing frequency.
Instead of periodic uploads, consider sending data only on a significant change in the readings. For example, if a reading deviates by X%, switch to a faster sampling rate, then throttle back down. You can use Particle Ledger to configure these parameters remotely.
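The deviation-triggered throttling described above might look like this in plain C++. The threshold, intervals, and fast-mode duration are assumptions (in practice they would come from a Ledger instance), not a real Particle API:

```cpp
#include <cmath>

// Sketch: publish on a slow schedule, but if a reading deviates more than
// thresholdPct from the last published value, publish immediately and stay
// on a fast schedule for a while before throttling back down.
struct AdaptiveReporter {
    double lastPublished = NAN;                        // NAN = nothing sent yet
    double thresholdPct = 5.0;                         // assumed; Ledger-configurable
    unsigned long slowIntervalMs = 5UL * 60UL * 1000UL;  // 5 minutes
    unsigned long fastIntervalMs = 30UL * 1000UL;        // 30 seconds
    unsigned long fastUntilMs = 0;                     // stay fast until this tick

    bool shouldPublish(double reading, unsigned long nowMs, unsigned long lastPublishMs) {
        bool deviated = !std::isnan(lastPublished) &&
            std::fabs(reading - lastPublished) >
                std::fabs(lastPublished) * thresholdPct / 100.0;

        if (deviated) {
            fastUntilMs = nowMs + 10UL * 60UL * 1000UL;  // fast mode for 10 minutes
        }

        unsigned long interval = (nowMs < fastUntilMs) ? fastIntervalMs : slowIntervalMs;
        bool due = std::isnan(lastPublished) || deviated ||
                   (nowMs - lastPublishMs) >= interval;
        if (due) {
            lastPublished = reading;
        }
        return due;
    }
};
```

In a real sketch you would keep one of these per channel (or compare a whole sample vector) and read the tunables from Ledger when the device syncs.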
You can also take advantage of the new binary payload support in Device OS 6.3+ to compress data and fit more readings into one 1024-byte publish. Consider aggregating a rolling window of compressed samples every few minutes. I suggest looking into Particle Logic to decompress your data back into a structured payload that is then forwarded to the Blynk integration.
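As a concrete example of squeezing readings into a binary publish: encode each value as a fixed-point little-endian int16 in hundredths, so one sample of 8 temperatures + 8 currents is 32 bytes and about 32 samples fit in a single 1024-byte payload. The scale factor and layout here are assumptions for illustration, not Blynk's or Particle's format:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Pack one sample (8 temps + 8 amps) as 16 little-endian int16 values in
// hundredths of a unit (0.01 resolution, range roughly +/-327.67) = 32 bytes.
std::vector<uint8_t> packSample(const float temps[8], const float amps[8]) {
    std::vector<uint8_t> out;
    out.reserve(32);
    auto put = [&out](float v) {
        int16_t fixed = (int16_t)std::lroundf(v * 100.0f);   // 0.01 resolution
        out.push_back((uint8_t)(fixed & 0xFF));              // low byte first
        out.push_back((uint8_t)((fixed >> 8) & 0xFF));
    };
    for (int i = 0; i < 8; i++) put(temps[i]);
    for (int i = 0; i < 8; i++) put(amps[i]);
    return out;
}
```

Firmware would append these 32-byte samples to a buffer and publish when the buffer approaches 1024 bytes; the Logic function on the cloud side reverses the fixed-point scaling before forwarding JSON to the Blynk webhook.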
Rick has a great library that's worth exploring to queue up publishes and make sure nothing gets dropped.
Thanks Rick and Eric, I'm sure I will have more questions. Should I keep responding to this post, or would it be better to start more specific topics so they're more searchable for others using Ledger, Logic, or the libraries? You guys are awesome!
You can add more questions to this thread, but if you want to separate out an in-depth discussion of Ledger, that might make sense.