Gathering continuous data at 200 Hz

Hi all,

I’ve been working with the Particle Photon for the past few months and would like to take continuous readings from a sensor on the device at about 200 Hz, for a few hours at a time. Power consumption concerns aside, what do you think is the best way to go about doing this? I was thinking of using Particle.publish() to push a new event to the cloud every 5 ms, but the documentation says I can only publish once per second, with a maximum burst of four per second. So this won’t work.

I also tried using Particle.variable() to register a variable with the cloud and then reading it with the Particle CLI (particle get [variable]), but there is close to a one-second delay every time I do this, even when I script the calls.

I’m out of ideas at this point, but there definitely has to be a way to read data at a relatively low frequency of 200 Hz. Anyone know what I could try next? Thanks!

Direct connections on a local network are probably where it’s at. With internet latency, there’s little chance you’d sustain those rates if you send each reading over the web. If you have a local machine you can connect to, you do away with the latency issues. Either UDP or TCP should work.
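
Roughly, the UDP side could look something like this (untested sketch; the server address, port, and readSensor() are just placeholders):

```cpp
// Untested sketch: one reading every 5 ms (200 Hz) sent to a local listener.
// SERVER_IP, SERVER_PORT and readSensor() are placeholders.
#include "Particle.h"

UDP udp;
const IPAddress SERVER_IP(192, 168, 1, 50);
const int SERVER_PORT = 8888;

uint16_t readSensor() {
    return analogRead(A0); // stand-in: replace with your actual sensor read
}

void setup() {
    udp.begin(8888); // local port; required before sending
}

void loop() {
    static unsigned long lastSample = 0;
    if (micros() - lastSample >= 5000) { // 5000 us = 200 Hz
        lastSample = micros();
        uint16_t value = readSensor();
        udp.sendPacket((uint8_t *)&value, sizeof(value), SERVER_IP, SERVER_PORT);
    }
}
```

In practice you’d probably pack several readings per datagram rather than sending one packet per sample.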

2 Likes

You say you want to gather readings at that rate, but you don’t necessarily need to send the gathered data at the same rate you are reading it.
If you collect some readings and then bulk-publish them, you might open up a few more avenues, including Particle.publish().
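
As a rough, untested sketch of that batching idea: assuming 12-bit samples packed as three hex characters each, 200 samples come to 600 characters, just under the 622-character publish limit (readSensor() and the event name are placeholders):

```cpp
// Untested sketch: buffer 200 samples (one second's worth), then publish them
// as a single 600-character hex string. Note that Particle.publish() can block
// briefly, so gap-free capture would really want a timer interrupt for sampling.
#include "Particle.h"

const size_t SAMPLES_PER_BATCH = 200;
uint16_t samples[SAMPLES_PER_BATCH];
size_t sampleCount = 0;

uint16_t readSensor() {
    return analogRead(A0) & 0x0FFF; // stand-in: a 12-bit reading
}

void setup() {
}

void loop() {
    static unsigned long lastSample = 0;
    if (sampleCount < SAMPLES_PER_BATCH && micros() - lastSample >= 5000) {
        lastSample = micros();
        samples[sampleCount++] = readSensor();
    }

    if (sampleCount == SAMPLES_PER_BATCH) {
        char data[SAMPLES_PER_BATCH * 3 + 1];
        for (size_t i = 0; i < SAMPLES_PER_BATCH; i++) {
            snprintf(&data[i * 3], 4, "%03x", (unsigned)samples[i]);
        }
        Particle.publish("samples", data, PRIVATE); // at most once per second
        sampleCount = 0;
    }
}
```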

For my liking, though, I’d go with a TCPClient approach, but still send the transfer in chunks.
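
Something along these lines (untested; the server address, port, and chunk size are assumptions):

```cpp
// Untested sketch: 200 Hz readings accumulated into 512-byte chunks and sent
// over a single TCP connection. Address, port and chunk size are assumptions.
#include "Particle.h"

TCPClient client;
const IPAddress SERVER_IP(192, 168, 1, 50);
const int SERVER_PORT = 9000;
const size_t CHUNK_SIZE = 512; // 256 two-byte samples, ~1.28 s per chunk at 200 Hz

uint8_t chunk[CHUNK_SIZE];
size_t fill = 0;

void setup() {
}

void loop() {
    static unsigned long lastSample = 0;
    if (micros() - lastSample >= 5000) { // 200 Hz
        lastSample = micros();
        uint16_t value = analogRead(A0); // stand-in for the real sensor read
        if (fill + sizeof(value) <= CHUNK_SIZE) {
            memcpy(&chunk[fill], &value, sizeof(value));
            fill += sizeof(value);
        } // else: the chunk is full and unsent, so this sample is dropped
    }

    if (fill >= CHUNK_SIZE) {
        if (!client.connected()) {
            client.connect(SERVER_IP, SERVER_PORT);
        }
        if (client.connected() && client.write(chunk, CHUNK_SIZE) == CHUNK_SIZE) {
            fill = 0; // chunk sent; start filling the next one
        }
    }
}
```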

This got me thinking about how fast you can reasonably send sensor data off the Photon. To make a somewhat plausible proof-of-concept I connected an ADXL362 accelerometer to a Photon. It’s the sensor in the [Electron Sensor Kit](https://docs.particle.io/datasheets/particle-shields/#electron-sensor-kit), but it’s just a basic SPI 12-bit X, Y, Z accelerometer.

I set it up to acquire 200 samples per second and stream them to a local server. It worked!

The accelerometer is programmed to gather the samples at the exact sample rate, independent of what the Photon is doing, and stores them in an on-sensor FIFO (1 Kbyte). The main loop pulls samples out of the FIFO using asynchronous SPI DMA into a larger secondary buffer (128 buffers of 128 bytes each, 16K total). Finally, the sender pushes the data off by TCP one buffer at a time. All of these pieces run somewhat independently.
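
The linked code is the real implementation; purely as an illustration of the producer/consumer idea, it’s shaped roughly like this (fifoRead() and tcpSend() are stand-ins for the actual SPI DMA and TCP pieces):

```cpp
// Purely illustrative, untested: a ring of 128 buffers of 128 bytes, filled
// from the sensor FIFO on one side and drained over TCP on the other.
// fifoRead() and tcpSend() are stand-ins for the real SPI DMA and TCP code.
#include "Particle.h"

const size_t BUF_SIZE = 128;
const size_t NUM_BUFS = 128; // 16K total
uint8_t buffers[NUM_BUFS][BUF_SIZE];
size_t writeIndex = 0; // next buffer to fill from the sensor FIFO
size_t readIndex = 0;  // next buffer to send over TCP

size_t fifoRead(uint8_t *dest, size_t maxLen) {
    // placeholder: the real code starts an asynchronous SPI DMA transfer
    // from the ADXL362's on-chip FIFO into dest
    return 0;
}

bool tcpSend(const uint8_t *data, size_t len) {
    // placeholder: the real code writes one buffer to a TCPClient
    return false;
}

void setup() {
}

void loop() {
    // Producer: drain the sensor FIFO into the next free buffer
    size_t nextWrite = (writeIndex + 1) % NUM_BUFS;
    if (nextWrite != readIndex) { // ring not full
        if (fifoRead(buffers[writeIndex], BUF_SIZE) == BUF_SIZE) {
            writeIndex = nextWrite;
        }
    }

    // Consumer: send the oldest filled buffer. During a network hiccup it
    // stays queued, the ring absorbs the backlog, and sending catches up
    // once the connection recovers.
    if (readIndex != writeIndex) {
        if (tcpSend(buffers[readIndex], BUF_SIZE)) {
            readIndex = (readIndex + 1) % NUM_BUFS;
        }
    }
}
```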

At 200 samples per second and 6 bytes per sample (three 16-bit axes), that’s only 1200 bytes per second, well within the capabilities of the Photon. It even works fine connecting to a server on the Internet, and even bumped up to 400 samples per second, the maximum supported by the sensor. The way the code is designed, if there’s a momentary network hiccup the secondary data buffers accumulate, but as soon as the network comes back it catches up. You could add many more buffers if you have RAM available.

Here’s the code. There’s the Photon C++ source, an experimental, partially implemented ADXL362 library that does SPI DMA, and a server program written in node.js.

WARNING: This is a proof-of-concept. The code likely has bugs, and it’s barely been tested! It’s not really intended to be the basis of an actual product!

4 Likes

I have a small program that streams data from the ADC to a Python client on my Mac over UDP. It samples at 25 kHz and continually streams blocks of 512 bytes. It’s faster than my Python client can handle, and it streams continuously for days on end without a problem. The Photon also sets up a TCP connection to listen for incoming commands, which lets you turn the acquisition off, change the sample rate, etc.

This is all done with the built-in UDP and TCP functions, no trickery. So getting 200 samples per second off the device is really not a problem, just not through the Particle.variable() system, and you should certainly send the data in chunks.
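
The command side is nothing fancy either; roughly something like this (untested sketch, the single-character commands and port number are just examples):

```cpp
// Untested sketch of the command side: a TCP listener for single-character
// commands. Port number and command letters are just examples.
#include "Particle.h"

TCPServer server(7000);
bool acquiring = true;

void setup() {
    server.begin();
}

void loop() {
    TCPClient client = server.available();
    if (client.connected() && client.available()) {
        char cmd = client.read();
        if (cmd == 's') acquiring = true;   // start acquisition
        if (cmd == 'x') acquiring = false;  // stop acquisition
        // a sample-rate change command would be handled the same way
    }
    // the UDP streaming of 512-byte sample blocks carries on elsewhere in loop()
}
```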

3 Likes

That makes sense, I’ll give Particle.publish() a try. Thanks!

1 Like

Very nice! I’ll see if I can finagle this for my own purposes.

For my product (https://sleeptrack.io/), I accumulate data in a buffer for several minutes, then connect periodically to unload the data to a NodeJS server. My Photon code is here: https://github.com/lucwastiaux/gc/tree/master/gc2_firmware/data_collection
and my nodejs code is here: https://github.com/lucwastiaux/gc/blob/master/gc2_server/gc_server_connection.js

That approach works if you don’t absolutely need real-time data and can afford several minutes of lag. Also, with my approach there is an interruption in data collection of a few seconds every 7 minutes or so.

3 Likes

Hello, I am trying to read a sensor value every 1 ms and would then like to bulk-publish like you are talking about, but I was wondering: do you have an example of how to do that?

Since the maximum publish rate works out to 622 characters every second, it would be impossible to publish data sampled at 1000 Hz in real time.

Possibly with compression; however, you then run into the problem that compressed data is 8-bit binary, which would need to be encoded as ASCII for publish (Base64, Base85), and that encoding takes back some of the gains from compression. At 1000 Hz, even two-byte samples amount to 2000 bytes per second before any encoding overhead, so it probably still wouldn’t be possible.

Is there a way to store the data in an array and then bulk-publish the array?

Sure, if the data is small enough to fit in RAM, say fewer than 40,000 samples. In other words, if you are only sampling for less than 40 seconds rather than continuously, you can capture the data and then publish it slowly, staying within the limit of one publish of up to 622 characters every second. Draining the buffer may take a few minutes, depending on how the data is encoded.
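
A rough, untested sketch of that capture-then-drain idea (buffer size, sample width, and hex encoding are assumptions you’d adjust to your free RAM):

```cpp
// Untested sketch of the capture-then-drain idea. Buffer size, sample width
// and hex encoding are assumptions; adjust MAX_SAMPLES to your free RAM.
#include "Particle.h"

const size_t MAX_SAMPLES = 20000;        // ~40 KB of RAM, i.e. 20 s at 1 kHz
const size_t SAMPLES_PER_PUBLISH = 150;  // 150 x 4 hex chars = 600 <= 622
uint16_t samples[MAX_SAMPLES];
size_t captured = 0;
size_t published = 0;

uint16_t readSensor() {
    return analogRead(A0); // stand-in: replace with your actual sensor read
}

void setup() {
}

void loop() {
    // Phase 1: capture at 1 kHz until the buffer is full. loop() polling is
    // approximate; a hardware timer would give tighter 1 ms spacing.
    static unsigned long lastSample = 0;
    if (captured < MAX_SAMPLES && micros() - lastSample >= 1000) {
        lastSample = micros();
        samples[captured++] = readSensor();
    }

    // Phase 2: drain the buffer at one publish per second
    static unsigned long lastPublish = 0;
    if (captured == MAX_SAMPLES && published < captured &&
        millis() - lastPublish >= 1000) {
        lastPublish = millis();
        size_t n = captured - published;
        if (n > SAMPLES_PER_PUBLISH) n = SAMPLES_PER_PUBLISH;
        char data[SAMPLES_PER_PUBLISH * 4 + 1];
        for (size_t i = 0; i < n; i++) {
            snprintf(&data[i * 4], 5, "%04x", (unsigned)samples[published + i]);
        }
        Particle.publish("batch", data, PRIVATE);
        published += n;
    }
}
```

At 20,000 samples and 150 samples per publish, the drain phase takes a bit over two minutes.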