ADC Sampling Rate


I have a Photon, the Wi-Fi development kit. I am using its 6 ADC channels, A0 to A5, to capture analog signals (a sine wave from a function generator) at 300 SPS. After capturing all six channels I create one packet and send it to the connected TCP client.

I captured all the packets for around 10 seconds at the TCP client end, which runs on a Linux machine. But when I recreated the sine wave from the captured samples, I found some anomalies, and when I checked the packet counter there was some packet loss. To debug further I captured the traffic with Wireshark, and from there I found that the TCP server was not sending a few of the packets.

Can someone please help me here?



Hi @dhaval

I may be off track here, but I suspect sampling 6 ADC channels continually at 300Hz may be using up a fair few of your CPU cycles.
Usually this wouldn’t be a problem, but in the Particle abstraction layer they have made the ADC call take 10 samples and average them, I believe; the ADC is also set to use dual slow interleaved mode. (The reason for doing this is that the STM32F2xx series has quite a low ADC input impedance.)
All this combined with 6 channels at 300Hz might be giving the processor a fair bit of work, on top of the work for the TCP side of things. I’d suggest trying to reduce the ADC sample time for a start; see here.

Hi @G65434_2

Thanks for your prompt reply!

So you mean it is possible to achieve a 300Hz sampling rate by setting the ADC sample time using the setADCSampleTime() function. But what is the duration of a ‘cycle’? I mean, what will be the effective sample time if I set the “ADC_SampleTime_3Cycles” sample time?

Also, what do you mean by the Particle abstraction layer averaging 10 ADC samples? Does it mean that the abstraction layer captures 10 samples from each ADC channel, averages them, and returns the result to the application layer? If yes, how can I improve that?


Hi again

I think it should be possible to do this at 300Hz, but I suspect it may be a problem if you don’t reduce the sample time.
I know I tried to sample at 1kHz on one channel and saw that the processor was spending most of its time doing the sampling. (I set a pin high when the ADC started and low when it had finished, and watched it on a scope.)

So, I’m not an expert on ST micros (or any micro really), but on average most ADCs I’ve used take about 15 clock cycles to do a conversion, depending on the accuracy. Now, I’m not sure exactly what is meant by ADC_SampleTime_3Cycles. It won’t be 3 CPU cycles total, as it takes far longer than that just to carry out the successive approximation in the ADC peripheral, but perhaps it means the actual sampling (sample-and-hold) duration is 3 ADC clock cycles? I’d better stop talking here before I tell you something wrong!

To answer your last question: when you call analogRead, there is a layer of code in the background (compiled into your firmware) that sets up the ADC peripheral and does the rest of the hard work. As I understand it, it is this code that also takes a 10-sample average, so yes.
To improve the total time I’d suggest starting by reducing the sample time (I’m not sure what the default is, but it’s significantly more than 3 cycles).
If that isn’t enough, you could try reading the datasheet and writing your own ADC function using something other than dual slow interleaved mode. Take care though: that mode was chosen for a reason, so your sample accuracy may suffer.

Hi @G65434_2

Thanks for your explanation!

I have tried changing the sample time to “ADC_SampleTime_3Cycles”, but some samples are still missing. And the loss is on the TCP server end, i.e. on the Photon side.

Is there any other way to send the captured samples to the PC, other than serial/TCP? Or how can I improve the transfer?



I profiled single-channel ADC sampling and it takes somewhere around 11µs. Is it possible to improve this, maybe by a change in the Photon firmware?

Appreciate your quick reply.


Hi @dhaval

Sorry for the late reply. If you absolutely have to sample at 300Hz, then I might suggest writing your own ADC routine that doesn’t rely on the Particle abstraction. I’m confident you’ll save at least a bit of time here, although you’ll need to make sure that the output impedance of whatever signal you’re measuring is low enough (as I said, I’ve found these micros to have a pretty low input impedance, which affects the accuracy of the measured signal).
To reduce the source’s output impedance you can feed your signal through an op-amp buffer before it goes into the ADC.

However, I think it might be worth taking a look at an external signal-acquisition block. Many companies sell dedicated 10/12-bit (and higher) ADC chips that can sample multiple channels and send the data via I2C/SPI back to a host micro. This would free up a lot of your CPU cycles and probably give a more accurate representation of your signal (especially if the alternative is compromising the sample time in the Particle micro).

I haven’t personally set up one of these devices before but someone else here may be able to help you.
Have a look here for example: Analog Devices A/D Converters

Hi @G65434_2,

Thanks for your suggestion!

Currently I am capturing from the ADC using the Particle abstraction. For that I am sampling data in a timer interrupt every 3ms, and thus the sampling rate is 333Hz. I want to capture at a 300Hz sampling rate. How can I achieve that, even by changing the firmware?

Also, I have to send the captured data over TCP or serially to the PC. But in both cases there is some packet loss… With serial there could be some data loss, but how is that possible with TCP within a local network? The local TCP connection has plenty of bandwidth (a 100 Mbps link)!

After debugging, I found that the data loss happens on the TCP server side, i.e. on the Photon side. I am using the built-in TCPServer class to create the server and exchange data. Do I need to do some configuration?


Hi again

If you need to sample at exactly 300Hz then, as you’ve found, the software timers won’t do it for you. I’d suggest setting up one of the spare hardware timers to generate an interrupt at your required interval. This means prescaling the microcontroller clock and selecting a counter value to trigger an interrupt off. I don’t have any code for you on hand, sorry, but browsing the forums here may turn up some info. Otherwise you could google something like “STM32F2xx timer example”.

I haven’t used TCP with the Photon, sorry, so I’ll be of limited help there. All I can say is that if the microcontroller is struggling to keep up with sampling 6 channels at 300Hz, then there’s a chance you’ll drop a packet every now and then.

For hardware timers the awesome @peekay123 has put together a SparkIntervalTimer library which is available on Particle Build also and will do some heavy lifting for you.

But with some more bare-metal insight you could just trigger the sampling via a timer (and not do the sampling in the ISR) and set up another ISR, triggered by the sample-complete interrupt, that just pulls the results from the registers.
I’ve not yet done this myself with the STM32F family, tho’.



Thanks for sharing the library. But how can I use that library with my code in the Particle abstraction?
Is there a guide for adding a library of my own?

Also, I didn’t get the last part, i.e. that I can trigger the sampling via a timer! Can you please explain?


What do you mean with “Particle Abstraction”?

To answer this, you’d have to tell us which IDE you intend to use it with, and whether you intend to publish the library or only use it in your own projects.

The ADC sampling takes some time, and the actual process of doing it is an autonomous task of the chip.
So you would tell the chip to start this task from a function that gets called by the SparkIntervalTimer library, but you wouldn’t wait for the sampling to complete; rather, you’d set up another function that will be called by the chip once the task is finished.
But since 300Hz is relatively slow, you won’t need to go through all the bother of doing it this way :wink:

Hi @ScruffR,

I am using the Particle Build online IDE for this, so I was using the term Particle Abstraction.
I want to add the library within that IDE. How can I do that?

And do you mean I don’t need to use the SparkIntervalTimer library to capture the ADC at 300Hz?


Yes you should use it, but you don’t need to split the sampling into two parts (trigger task + wait for completion).

To add a library to Particle Build (that’s the official term - Web IDE is also commonly used - don’t make up new words ;-)), you can have a look at the docs here

1 Like

Hi @ScruffR,

I used the SparkIntervalTimer library in my project and did the necessary initialization as shown in the example code SparkIntervalTimerDemo.cpp. But I am not getting any timer interrupts!

//Global variable
// Spark Interval timer
IntervalTimer myTimer;

void setup()
{
   // Start Spark Interval Timer
   // to fire every 3.5ms (7 * 0.5ms)
   myTimer.begin(timerCallback, 7, hmSec);
}

Am I doing something wrong?


I can’t see the implementation of your timerCallback function :wink:

I don’t think it depends on the implementation of the timerCallback() function either. That function should be called at a regular interval, i.e. every 3.5ms in my case. But that is not happening…

Not seeing your whole code doesn’t really help, but this complete demo does work on my Photon (v0.4.7), Core (v0.4.7) and Electron (v0.0.3-rc2 - without SYSTEM_MODE())

#include "SparkIntervalTimer/SparkIntervalTimer.h"

SYSTEM_MODE(SEMI_AUTOMATIC);	// Just to have the blink start immediately

const uint8_t ledPin = D7;
volatile int blinkCount;
IntervalTimer myTimer;

void setup(void)
{
  pinMode(ledPin, OUTPUT);
  myTimer.begin(blinkLED, 7, hmSec);
  // for 3.5ms I'd rather use this one tho'
  //  myTimer.begin(blinkLED, 3500, uSec);

  digitalWriteFast(ledPin, HIGH);
}

void loop(void)
{
}

void blinkLED(void)
{
  if ((++blinkCount % 100) == 0)
    digitalWriteFast(ledPin, !pinReadFast(ledPin));
}

I am also having trouble getting a fast ADC sampling rate on the Photon. I am attempting to sample two ADC pins as fast as possible (preferably every ~80µs, i.e. upwards of 10kHz), but the Photon always seems to sample at 1 sample per 1000µs.

I tried using the interval timers you specified above, set to myTimer.begin(capture, 100, uSec), with no luck.

So I took a step back and wrote a simple program that captures one ADC read, records the associated time via micros(), puts it into an array, and repeats this 50 times. I then return the array over TCP, and the fastest ADC read time I can get is 1000µs per conversion. See my code below.

#include "HttpClient.h"
#include "application.h"
#include "SparkIntervalTimer.h"

int leftMicPin = A0;     // Left microphone
int rightMicPin = A5;    // Right microphone

// variables for snap recording
int snapBufferLeft;
int snapBufferRight;

// Wifi IP Address
String wifiIP = "none";

unsigned long leftMicros = 0;
unsigned long rightMicros = 0;

long microMonitor[49] = {0};
int microIndex = 0;

// Telnet defaults to port 23 - used to create a debugging server over telnet
TCPServer server = TCPServer(23);
TCPClient client;

void setup()
{
  // declare the leftMic and rightMic pins as INPUTs
  pinMode(leftMicPin, INPUT);
  //pinMode(rightMicPin, INPUT);

  wifiIP = String(WiFi.localIP());
  Particle.variable("WifiIP", wifiIP);
}

void loop()
{
  // Start by reading in audio values from the mic ADC pins
  snapBufferLeft = analogRead(leftMicPin);
  microMonitor[microIndex++] = micros();

  if (microIndex >= 50) {      // after 50 captures, dump the timestamps
    for(int i=0; i <50; i++){
      tcpPrint(String(microMonitor[i]));
    }
    microIndex = 0;
  }
}

void tcpPrint(String str)
{
  if (client.connected()) {
    client.println(str);
  } else {
    // if no client is yet connected, check for a new connection
    client = server.available();
  }
}



I can’t see you using the SparkIntervalTimer library; you only run your sampling through loop().
The cloud communication is handled between two iterations of loop() and takes about 1ms, hence the limited sampling speed.
If you want to run your test again without the cloud delays, try SYSTEM_MODE(MANUAL).

BTW: I’d say your for-loop should look like this: for(int i=0; i<49; i++)
