Logging data while offline, delayed cloud dump

Hi All,

I’m building a bike computer with a Particle Photon, and eventually want a web app that will plot metrics. I have two Hall-effect sensors, an LSM303 IMU, and a GPS as inputs at the moment. The thing is, most data collection will happen off-network (while riding my bike). Ideally, when I take my bike computer into the house and it connects to the network, it would dump that ride’s data somewhere (like a Google spreadsheet). A typical ride would produce something like 7,000 timepoints, each with 11 datapoints.

My questions are these:

  1. How do I store a growing dataset like this onboard the Photon while off-network, ideally without adding an SD card? In other words, what data structure should I use? I was thinking a big JSON string, or maybe a dictionary… I’ve never done this on a microcontroller.
  2. How do I get the data from the Photon to a cloud destination without clogging the Particle servers? I’m thinking of making a Heroku app, so maybe I can publish straight to its servers?

Any help appreciated!

@lewis, with 7,000 timepoints of 11 datapoints, each most likely multi-byte, you are looking at 100 KB+ of data, so let’s assume 512 KB. The Photon has no onboard storage that can accommodate that, so you can either use the P1 module, which has external EEPROM, or use a Photon with an external EEPROM, flash, or FRAM chip.
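
To make the arithmetic concrete, here’s a rough sketch of one packed sample record. The field layout is my guess based on the sensors mentioned, not anything @lewis specified:

```cpp
#include <stdint.h>

// One packed sample. Fixed-point fields avoid both float-to-string
// conversion costs and wasted bytes.
struct __attribute__((packed)) Sample {
    uint32_t ms;          // millis() timestamp          (4 bytes)
    int32_t  lat, lon;    // GPS as fixed-point 1e-7 deg (8 bytes)
    int16_t  ax, ay, az;  // LSM303 accel raw counts     (6 bytes)
    int16_t  mx, my, mz;  // LSM303 mag raw counts       (6 bytes)
    uint16_t wheelRpm;    // Hall sensor #1              (2 bytes)
    uint16_t cadenceRpm;  // Hall sensor #2              (2 bytes)
};                        // = 28 bytes per timepoint

// 7,000 timepoints x 28 bytes ~= 191 KB: far too big for the Photon's
// 128 KB of RAM (much of which the system firmware already uses), but
// an easy fit for a small external flash part.
```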

If you send everything in JSON format, all your data will need to be converted to strings, including floats, time, etc. The only way to send this much data will be to use TCP directly (and not the Cloud). You could use the existing HTTPClient or HTTPS libraries. :grinning:
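
For instance, here’s a minimal sketch of POSTing a packed batch straight to your own server with TCPClient; the host, port, and path are placeholders for whatever you end up deploying:

```cpp
TCPClient client;

// POST a batch of packed samples over raw TCP. Sending binary
// (octet-stream) avoids the JSON string-conversion overhead entirely.
bool uploadBatch(const uint8_t* data, size_t len) {
    if (!client.connect("example.com", 80)) {   // placeholder endpoint
        return false;                           // offline; retry later
    }
    client.println("POST /rides HTTP/1.1");
    client.println("Host: example.com");
    client.println("Content-Type: application/octet-stream");
    client.print("Content-Length: ");
    client.println((int)len);
    client.println("Connection: close");
    client.println();                  // blank line ends the headers
    client.write(data, len);           // the payload itself
    while (client.connected()) {       // drain and discard the response
        if (client.available()) client.read();
    }
    client.stop();
    return true;
}
```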

  1. As @peekay123 suggested, even if you collected your data in RAM super efficiently as packed binary data (delta encoded?!), it’s likely that you will “fall off the edge” unless you’re collecting data very infrequently. In a previous application, we ended up collecting log data to a file on an SD card and periodically uploading batches to a log collector (in our case, Loggly). We actually abandoned this approach because writing data reliably to SD cards is a hard problem, so I think you’re right to avoid them. We considered using an external SPI NOR flash chip instead, since it’s easy to use and cheap in low capacities (we only needed a few MB of breathing room), and treating it like a hardware circular buffer for the log data; there’s a sketch of that bookkeeping after this list. The downside is that while you can write at the byte level, you can only erase at the block level, so you have to be more careful with it than you would with a software-based buffer. Edit: I forgot there was also this library written by @mdma for making flash management easier: https://github.com/m-mcgowan/spark-flashee-eeprom

  2. You can store your data in whatever format is most efficient for you and only expand it into the API’s format (text/JSON) when you’re ready to upload (see the second sketch below). If you want to avoid writing your own middleman server, you can look into log collectors or analytics services that offer text- or JSON-based API endpoints. If they support HTTP chunking, extra points, because then you can stream your data out as you read it in and format it.
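
On the hardware-circular-buffer idea from point 1, here’s a bare-bones sketch of the erase bookkeeping. `flashWrite` and `flashEraseSector` are hypothetical stand-ins for whatever driver your chip comes with:

```cpp
#include <stdint.h>

const uint32_t SECTOR_SIZE = 4096;          // typical NOR erase unit
const uint32_t FLASH_SIZE  = 1024 * 1024;   // e.g. a 1 MB part
const uint32_t RECORD_SIZE = 32;            // 28-byte sample padded to 32
                                            // so records never straddle
                                            // a sector boundary

extern void flashWrite(uint32_t addr, const void* buf, uint32_t len);
extern void flashEraseSector(uint32_t addr);  // wipes SECTOR_SIZE bytes

uint32_t writeAddr = 0;

void logRecord(const void* rec) {
    // NOR flash writes at byte granularity but erases whole sectors,
    // so erase lazily: wipe each sector just before first writing into
    // it. When the address wraps, this silently discards the oldest
    // sector of records -- the circular-buffer tradeoff.
    if (writeAddr % SECTOR_SIZE == 0) {
        flashEraseSector(writeAddr);
    }
    flashWrite(writeAddr, rec, RECORD_SIZE);
    writeAddr = (writeAddr + RECORD_SIZE) % FLASH_SIZE;
}
```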
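
And for point 2, a sketch of expanding packed records into text only at upload time, one line per record, so the JSON form never has to exist in RAM all at once. The key names are made up; `Sample` is the packed struct sketched upthread:

```cpp
#include <stdio.h>

// Format one packed Sample as a single JSON line into a caller-supplied
// buffer. Returns the number of characters written.
int sampleToJson(const Sample& s, char* out, size_t cap) {
    return snprintf(out, cap,
        "{\"t\":%lu,\"lat\":%ld,\"lon\":%ld,"
        "\"acc\":[%d,%d,%d],\"mag\":[%d,%d,%d],"
        "\"wheel\":%u,\"cadence\":%u}\n",
        (unsigned long)s.ms, (long)s.lat, (long)s.lon,
        s.ax, s.ay, s.az, s.mx, s.my, s.mz,
        (unsigned)s.wheelRpm, (unsigned)s.cadenceRpm);
}

// Upload loop: read one record from flash, format it, send it, repeat.
// With HTTP chunked transfer encoding you never need to know the total
// Content-Length up front, so this streams in constant memory.
```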


@indraastra, great info! Just a note that there is the fantastic flashee-eeprom library available, which can be adapted for SPI EEPROM. It manages wear leveling and supports some amazing functionality. Though it was originally designed for the Core, I believe it can be adapted for the P1 and a user-supplied external SPI device. Perhaps the author, the amazing @mdma, can provide some insight?

Okay…I’m buttered up and ready to go! :smile:

The library includes a circular buffer that saves data to the flash memory. It could be used to buffer data before it’s sent out over the network.

The only weakness is that although the data is persisted, the current read/write pointers are not, so some action has to be taken to recover them on power loss/reset. This is application-specific and depends upon the data being logged; one possible scheme is sketched below.
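
For example, one recovery scheme (a record layout I’m assuming here, not something the library does for you) is to prefix every record with a monotonically increasing sequence number and scan for the newest one on boot. Erased NOR flash reads back as all 0xFF, so that value conveniently marks empty slots. `flashRead`, `FLASH_SIZE`, and `RECORD_SIZE` are from the earlier sketch:

```cpp
#include <stdint.h>

extern void flashRead(uint32_t addr, void* buf, uint32_t len);

struct RecordHeader {
    uint32_t seq;   // monotonically increasing; 0xFFFFFFFF = empty slot
};

// Scan every record slot and resume writing just past the record with
// the highest sequence number. Returns 0 if the log is empty.
uint32_t recoverWriteAddr() {
    uint32_t bestSeq = 0;
    uint32_t writeAddr = 0;
    for (uint32_t addr = 0; addr < FLASH_SIZE; addr += RECORD_SIZE) {
        RecordHeader h;
        flashRead(addr, &h, sizeof(h));
        if (h.seq != 0xFFFFFFFFu && h.seq >= bestSeq) {
            bestSeq = h.seq;
            writeAddr = (addr + RECORD_SIZE) % FLASH_SIZE;
        }
    }
    return writeAddr;
}
```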


OK, thanks for the tips! I want to stay with the Photon because I’m using a SparkFun LiPo shield for it. I’m honestly a noob with memory management, so let me explore external SPI NOR flash.

@indraastra: I saw somewhere on here a tutorial for setting up a Spark server on a Raspberry Pi; I might pursue that as a middleman.