How to send a large file from a Particle to a server?

Hi all,

I am new to the Particle.io IDE/hardware. I am working on a project to record 10-second audio samples and then store them in the cloud somewhere.

Right now, I’m thinking about recording the ADC output into a buffer, then sending that to a database in the cloud to be converted to a .wav file, run through FFT analysis, etc.

Does anyone have suggestions (high-level are OK) on how to best do this? The Cloud API does not seem to have anything that supports this kind of data transfer (e.g. Particle.publish()).

Any and all suggestions would be greatly appreciated!

I am working on a project that does a lot of what you want.

I am currently using an SD card to store the samples - I will store 1 second at a time because, as I am sure you have found out, there is not enough memory to store 10 (s) * 16000 (Hz) * 2 (bytes per short) on a Photon.
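For reference, the arithmetic works out like this (a rough sketch; exactly how much RAM is free depends on the firmware version and your app):

```cpp
// 10 seconds of 16 kHz, 16-bit mono audio:
const size_t SECONDS          = 10;
const size_t SAMPLE_RATE_HZ   = 16000;
const size_t BYTES_PER_SAMPLE = sizeof(int16_t);  // 2 bytes
const size_t TOTAL = SECONDS * SAMPLE_RATE_HZ * BYTES_PER_SAMPLE;
// = 320,000 bytes -- well beyond what a Photon (128 KB of RAM total,
// much less free for the user app) can hold in a single buffer.
```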

Ultimately, if the SD card causes too much in the way of gaps between the 1-second samples, I have a cunning plan to use a pair of 4Mbit FRAMs in high-speed DMA mode.

When you say ‘store in the cloud’, would I be correct in saying that you really mean ‘store off the Photon’? If so, I would contemplate using a TCP/IP transfer (there are many examples in this forum).
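Something along these lines, as a minimal sketch (the host, port, and error handling are placeholders to adapt):

```cpp
#include "Particle.h"

TCPClient client;

// Hypothetical endpoint -- substitute your own server and port.
const char *HOST = "myserver.example.com";
const int   PORT = 7123;

// Send a buffer, (re)connecting on demand; returns false on failure.
bool sendBuffer(const uint8_t *buf, size_t len) {
    if (!client.connected() && !client.connect(HOST, PORT)) {
        return false;
    }
    size_t sent = 0;
    while (sent < len) {
        size_t n = client.write(buf + sent, len - sent);
        if (n == 0) {
            client.stop();
            return false;
        }
        sent += n;
    }
    return true;
}
```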

If I were doing it from my Photon, I would connect to a program on my Mac that stored the data in a MySQL database…

Stan


Thank you for your response! Yes, there is definitely limited space on the Photon, I like your FRAM idea, although I was hoping to stream the data off the Photon as it was being gathered, maybe 10k at a time or so. I will be gathering sound at 6kHz, so that reduces my memory load a bit.

The idea is to have lots of these little guys at various test sites (around the world) and gathering data into a central database.

I am looking into the TCPClient class for the outgoing connection…

On the server end, are there any recommendations to receive/archive/process the data? Node.js running on… Amazon AWS? Any examples or tutorials I could start with?

This post (Sending large data off Photon using TCPClient) and this post (Local server in node.js example) are great, but they are on a local network.

Any more insight is greatly appreciated!

All the work will be going into the open source domain :smile:

So the frequencies you are looking to sample are max. 3kHz? (Nyquist-Shannon sampling theorem)
Will you then need the 12-bit ADC resolution, or would 8 bits suffice? That would cut the data rate in half again (also an option for you, @Stan?)

You can use the given samples with non-local servers too, as long as you set the host accordingly.
You will just get considerably higher latency.
But I'm sure @rickkas7 will be happy to assist getting his code global :wink:

That is an option @ScruffR - it would cut the data usage in half. Because I am ‘attempting’ to ultimately send the waveform to a speech-to-text converter, I may lose too much detail - experimenting as I go along… Must research the theorem you referenced.

The essence of Nyquist-Shannon is that you need to sample at at least twice the highest frequency in your source signal to get a reliable representation of it.
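In numbers, for the rates mentioned in this thread (just the arithmetic, spelled out):

```cpp
// Nyquist: sample at >= 2x the highest frequency of interest.
// Content below 3 kHz  ->  a 6 kHz sample rate is the minimum.
const size_t SAMPLE_RATE_HZ = 6000;

// Resulting data rates at that sample rate:
const size_t RATE_16BIT = SAMPLE_RATE_HZ * sizeof(int16_t); // 12,000 bytes/s
const size_t RATE_8BIT  = SAMPLE_RATE_HZ * sizeof(int8_t);  //  6,000 bytes/s
```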

I like your explanation @ScruffR - I started to read the Wikipedia article and got an instant migraine. In my defence, it is Friday evening here and I have been slaving over a hot SQL Server database all day.

Yes, the audio I am characterizing has all the interesting stuff under 3kHz. However, the signal analyst I am working with is asking for 16-bit; I’m going to see if the on-board 12-bit ADC is sufficient :pray:

@Stan, you would probably need to sample at 20kHz (the typical human speech frequency range extends to about 10kHz)

It has been a while since I’ve done anything with audio, and that was on the Spark Core, so I thought I’d make a quick sample program. I used an [Adafruit analog microphone board](https://www.adafruit.com/products/1713) connected to the Photon and a small server written in node.js. My server was local, but it should work over the Internet as well. When you press the SETUP button it streams audio live to the node server until you press the button again; then the server writes out a .wav file. It’s set up to sample at 16000 Hz, 8-bit, mono, but it’s adjustable. It works quite well, using the SparkIntervalTimer library to grab samples and a double-buffering scheme to feed the networking code.
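The double-buffering idea, in rough outline (a sketch only, not the project code; it assumes SparkIntervalTimer’s begin(isr, period, uSec) signature and 8-bit samples):

```cpp
#include "Particle.h"
#include "SparkIntervalTimer.h"

const size_t BUF_SIZE = 512;
uint8_t buffers[2][BUF_SIZE];    // two buffers: one filling, one sending
volatile size_t writeIndex = 0;  // position within the active buffer
volatile int activeBuf = 0;      // buffer the ISR is currently filling
volatile int readyBuf = -1;      // full buffer awaiting send, -1 = none

IntervalTimer sampleTimer;

void timerISR() {
    // Scale the 12-bit ADC reading down to 8 bits
    buffers[activeBuf][writeIndex++] = analogRead(A0) >> 4;
    if (writeIndex >= BUF_SIZE) {
        readyBuf = activeBuf;    // hand the full buffer to loop()
        activeBuf ^= 1;          // keep sampling into the other buffer
        writeIndex = 0;
    }
}

void setup() {
    // 62 us period ~ 16 kHz sample rate
    sampleTimer.begin(timerISR, 62, uSec);
}

void loop() {
    if (readyBuf >= 0) {
        int buf = readyBuf;
        readyBuf = -1;
        // stream buffers[buf] to the server here (e.g. TCPClient::write)
        // while the ISR fills the other buffer
    }
}
```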

The whole project is here:


@pnaylor1982 - thanks for the advice - will make the code mods for this.

@rickkas7 - thanks - will have a snoop at your code - Had my snoop - very interesting - some good stuff in there that is illuminating, such as setting the number of ADC samples to ‘average’.


This is great, Rick! So cool… thank you much!

I’ve tested it (locally) with a signal generator and get good results on the .wav end. I am having some trouble converting it to 16-bit buffers. The client.write(sb->data, SAMPLE_BUF_SIZE) call fails to compile when I change the structure member to 16-bit: uint16_t data[SAMPLE_BUF_SIZE] … any suggestions?

I will be trying to test this on an AWS server soon.

You’re welcome!

You should just be able to cast sb->data to (uint8_t *) and adjust for the number of bytes vs. the number of samples. I have a version that uses 16-bit samples and it works well, though if you’re making wav files, be aware that 8-bit sample WAV files are unsigned and 16-bit WAV files are signed 2’s-complement little-endian. Go figure.
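Concretely, given the uint16_t data[SAMPLE_BUF_SIZE] declaration from the post above, something like:

```cpp
// write() takes a byte pointer and a byte count, so cast the sample
// pointer and scale the sample count by the sample size.
client.write((const uint8_t *)sb->data, SAMPLE_BUF_SIZE * sizeof(uint16_t));
```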

I have a new version of the code. This one is more experimental but is quite cool: it does all of the sampling in hardware using the ADC, a hardware timer, and DMA, storing the samples in RAM at precise intervals without using the main CPU. It’s very efficient, and this example works at a 32000 Hz sample rate with 16-bit samples.
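The general shape of the consumer side looks like this (a sketch only; dmaHalfDone/dmaFullDone are hypothetical flag names standing in for whatever the DMA half-transfer and transfer-complete interrupts set):

```cpp
const size_t DMA_BUF_SIZE = 1024;
uint16_t dmaBuf[DMA_BUF_SIZE];       // DMA writes samples here circularly
volatile bool dmaHalfDone = false;   // set by the half-transfer interrupt
volatile bool dmaFullDone = false;   // set by the transfer-complete interrupt

void loop() {
    if (dmaHalfDone) {
        dmaHalfDone = false;
        // drain dmaBuf[0 .. DMA_BUF_SIZE/2) while DMA fills the second half
    }
    if (dmaFullDone) {
        dmaFullDone = false;
        // drain dmaBuf[DMA_BUF_SIZE/2 .. DMA_BUF_SIZE) while DMA wraps around
    }
}
```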


Hi @pnaylor1982

Quick update on the FRAMs.

  1. They arrived (very quickly - with a HUGE box containing a tube with two teeny tiny FRAMs) - essentially DHL has just shipped me a cubic foot of air from the US!
  2. They (as far as I can see) are only available in a SOIC package - so with two SparkFun SOIC-to-DIP adapters plus a needlepoint soldering iron I was ready.
  3. Amazingly enough, because…
    a. I have sausage fingers
    b. I am nearly 60
    c. My eyesight is poor
    d. I had to use the largest magnifying glass I could find!

Believe it or not - they both worked.

I have adapted a library (framlib) written a fair while ago by @peekay123 to interface with the FRAMs. His library only handled smaller FRAMs - mine use a 3-byte address, 0-0x7FFFF (1/2 MB per FRAM), and there is a slight difference in the control register.
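For anyone following along, a single-byte write with a 3-byte address looks roughly like this (a sketch using the usual SPI F-RAM opcodes; the CS pin is an assumed wiring choice, and your part’s datasheet is the authority):

```cpp
const int FRAM_CS = A2;          // chip-select pin (assumed wiring)
const uint8_t OP_WREN  = 0x06;   // write-enable opcode
const uint8_t OP_WRITE = 0x02;   // write-memory opcode

void setup() {
    pinMode(FRAM_CS, OUTPUT);
    digitalWrite(FRAM_CS, HIGH);
    SPI1.begin();                // SPI1 on the Photon: SCK=D4, MISO=D3, MOSI=D2
}

void framWriteByte(uint32_t addr, uint8_t value) {
    digitalWrite(FRAM_CS, LOW);  // write enable is its own transaction
    SPI1.transfer(OP_WREN);
    digitalWrite(FRAM_CS, HIGH);

    digitalWrite(FRAM_CS, LOW);
    SPI1.transfer(OP_WRITE);
    SPI1.transfer((addr >> 16) & 0xFF);   // 3-byte address, MSB first
    SPI1.transfer((addr >> 8) & 0xFF);
    SPI1.transfer(addr & 0xFF);
    SPI1.transfer(value);
    digitalWrite(FRAM_CS, HIGH);
}
```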

Initially I was using SPI_CLOCK_DIV2 on SPI - I have just converted the code to SPI1 (to take advantage of the faster speed) and it writes an entire half MB, a single byte at a time, in about 13 seconds (roughly 40 KB/s).

The next bit is to use DMA to perform the transfers in larger chunks (I believe 40 bytes and above is when DMA really starts to show improvements).
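Particle’s firmware does expose a DMA-backed bulk SPI transfer (SPI.transfer(tx, rx, length, callback), from 0.5.0 if I recall correctly); a chunked write might be shaped like this (sketch only; the opcode/address preamble from the byte-write sketch above still has to precede the data):

```cpp
volatile bool spiDone = false;

void onSpiDone() {               // runs when the DMA transfer completes
    spiDone = true;
}

void framWriteChunk(uint8_t *chunk, size_t len) {
    spiDone = false;
    digitalWrite(FRAM_CS, LOW);
    // NULL rx buffer = write-only; DMA moves the bytes without per-byte
    // CPU involvement.
    SPI1.transfer(chunk, NULL, len, onSpiDone);
    while (!spiDone) Particle.process();   // busy-wait for completion
    digitalWrite(FRAM_CS, HIGH);
}
```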

Once it is all complete I will post the forked library (or feed the changes back to Paul Kourany/Kenneth Lim, the current library maintainers).

Have a good one

Stan

@stan, great work and I swear this stuff keeps getting smaller! Soon I’ll need a microscope just to solder! I look forward to seeing your results with DMA. :smiley:

Hi all,

Quick update: my Photon is now streaming nice audio to an AWS-based FTP server. Thank you @rickkas7 for the awesome DMA code. Works beautifully!

@Stan I’ve decided to go with an ol’fashioned uSD card… the audio files I want to ‘back-up’ are too big for the FRAM. And FRAM is F’expensive!

Couple of issues (I might open different conversation threads for these):

  1. I am using the SparkFun MEMS microphone (ADMP401) and there is some low-frequency noise (<100Hz) coming through… any ideas what this might be? Antenna effect picking up 60Hz? Noise from the voltage regulator?
  2. I tried compiling the same code and running it on an Electron board, to no avail. Newbie question, but are the Photon and Electron cores different? Different DMA access?

@pnaylor1982 Glad to hear that things are going well. Once I get the DMA code tested with the FRAMs it may well be that there is no significant speed difference - I do actually have the DMA code written - it’s just that I am having a senior moment with casting in C - plus having a granddaughter hanging around is not that conducive to coding…

They should be the same (when it comes to the µC). If it's the streaming part, on the other hand, the max. cellular bandwidth is definitely lower than WiFi's.
What exactly does not work? Just stating that things don't work does not help at all!

Well, the code compiled OK (particle compile electron .) and the Electron connects to the node.js server end, but the processor locks up (requires a reset) after sending 1 KB. The resulting .wav file is a bunch of noise. I will try playing with the buffer/packet size…?

On that note, are there any good options for a debug interface where one could set breakpoints/look at memory/stacks?

Insight welcomed… Thank you!


OpenOCD would be the first one that comes to mind, but for that you'd need some hardware too.

There’s a brilliant walkthrough by @jvanier
