ADC Sampling Rate

Hi @G65434_2

Thanks for your prompt reply!

So you mean it is possible to achieve a 300Hz sampling rate by setting the ADC sample time using the setADCSampleTime() function. But what is the duration of a ‘cycle’? I mean, what will the effective sample time be if I set the “ADC_SampleTime_3Cycles” sample time?

Also, what do you mean by the averaging of 10 ADC samples by the Particle abstraction layer? Does it mean that the abstraction layer captures 10 samples from each ADC channel, averages them, and returns the result to the application layer? If yes, how can I improve that?

Thanks,
Dhaval

Hi again

I think it should be possible to do this at 300Hz, but I suspect it may be a problem if you don’t reduce the sample time.
I know I tried to sample at 1kHz on 1 channel and saw that the processor was spending most of its time doing the sampling. (I set a pin high when the ADC started and low when it had finished and watched it on a scope.)

So, I’m not an expert on ST micros (or any micro really), but on average most ADCs I’ve used take about 15 clock cycles to do the conversion, depending on the accuracy. Now, I’m not sure exactly what is meant by ADC_SampleTime_3Cycles. It won’t be 3 CPU cycles total, as it takes far longer than that just to carry out the successive approximation in the ADC peripheral, but perhaps it means the actual sample-and-hold duration is 3 ADC clock cycles, before the conversion itself?? I better stop talking here before I tell you something wrong!

To answer your last question, when you call analogRead, there is a layer of code behind the scenes that sets up the ADC peripheral and does the rest of the hard work. As I understand it, it is this code that also takes a 10-sample average, so yes.
To improve the total time I’d suggest starting by reducing the sample time (I’m not sure what the default is, but it’s significantly more than 3 cycles).
If that isn’t enough you could try reading the datasheet and writing your own ADC function using something other than dual slow interleaved mode. Take care though, this mode was chosen for a reason, so your sample accuracy may suffer.
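For example, a quick way to see the effect yourself would be something like this (just a minimal sketch I haven’t run; A0 is only an example pin):

#include "application.h"

void setup()
{
  Serial.begin(115200);
  // Shorten the ADC sample-and-hold time (the default is considerably longer)
  setADCSampleTime(ADC_SampleTime_3Cycles);
}

void loop()
{
  unsigned long t0 = micros();
  int value = analogRead(A0);      // still includes the 10-sample average done by the firmware
  unsigned long t1 = micros();

  Serial.print("analogRead took ");
  Serial.print(t1 - t0);
  Serial.print(" us, value = ");
  Serial.println(value);
  delay(1000);
}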

Hi @G65434_2

Thanks for your explanation!

I have tried changing the sample time to “ADC_SampleTime_3Cycles”, but some samples are still missing, and the loss is on the TCP server end, i.e. on the Photon side.

Is there any other way to send the captured samples to the PC, other than Serial/TCP? Or how can I improve the transfer?

Thanks,
Dhaval

Hi,

I profiled the single-channel ADC sampling and it takes somewhere around 11us per sample. Is it possible to improve this, maybe by a change in the Photon firmware?

Appreciate your quick reply.

Thanks,
Dhaval

Hi @dhaval

Sorry for the late reply. If you absolutely have to sample at 300Hz, then I might suggest writing your own ADC routine that doesn’t rely on the Particle abstraction. I’m confident you’ll save at least a bit of time there, although you’ll need to make sure that the output impedance of whatever signal you’re measuring is low enough (as I said, I’ve found these micros to have a pretty low input impedance, which affects the accuracy of the measured signal).
To reduce output impedance you can feed your signal through an op-amp buffer before it goes into the ADC.

However, I think it might be worth taking a look at an external signal acquisition block. Many companies sell dedicated 10/12-bit (and higher) ADC chips that can sample multiple channels and send the results back to a host micro via I2C/SPI. This would free up a lot of your CPU cycles and probably provide a more accurate representation of your signal (especially if the alternative is compromising your sample time in the Particle micro).

I haven’t personally set up one of these devices before but someone else here may be able to help you.
Have a look here for example: Analog Devices A/D Converters
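Roughly, talking to such a chip over SPI would look something like this. This is just a sketch for a hypothetical 12-bit converter that speaks the common MCP320x-style protocol, not necessarily any of the parts behind that link, and I haven’t tested it:

#include "application.h"

const int ADC_CS_PIN = A2;                // chip-select pin for the external ADC (pick any free pin)

// Read one channel (0-7) of an MCP3208-style 12-bit SPI ADC
uint16_t readExternalADC(uint8_t channel)
{
  digitalWrite(ADC_CS_PIN, LOW);
  SPI.transfer(0x06 | (channel >> 2));              // start bit, single-ended, MSB of channel
  uint8_t high = SPI.transfer((channel & 0x03) << 6);
  uint8_t low  = SPI.transfer(0x00);
  digitalWrite(ADC_CS_PIN, HIGH);
  return ((high & 0x0F) << 8) | low;                // assemble the 12-bit result
}

void setup()
{
  Serial.begin(115200);
  pinMode(ADC_CS_PIN, OUTPUT);
  digitalWrite(ADC_CS_PIN, HIGH);

  SPI.begin();                            // SCK = A3, MISO = A4, MOSI = A5 on the Photon
  SPI.setBitOrder(MSBFIRST);
  SPI.setDataMode(SPI_MODE0);
  SPI.setClockDivider(SPI_CLOCK_DIV16);   // keep the SPI clock well within the converter's rating
}

void loop()
{
  Serial.println(readExternalADC(0));
  delay(1000);
}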

Hi @G65434_2,

Thanks for your suggestion!

Currently I am capturing from the ADC using the Particle abstraction. For that I am sampling data in a timer interrupt every 3ms, which gives a sampling rate of 333Hz. I want to capture at a 300Hz sampling rate. How can I achieve that, even if it means changing the firmware?

Also, I have to send the captured data over TCP or serially to the PC, but both ways there is some packet loss… With serial there could be some data loss, but how is it possible with TCP on a local network? The local TCP connection has plenty of bandwidth (a 100 Mbps link)!

After debugging, I found that the data loss happens on the TCP server side, i.e. on the Photon side. I am using the built-in TCPServer class to create the server and exchange data. Do I need to do some extra configuration?

Thanks,
Dhaval

Hi again

If you need to sample at 300Hz then, as you’ve found, the software timers won’t do it for you. I’d suggest setting up one of the spare hardware timers to generate an interrupt at your required interval. This means prescaling the microcontroller clock and selecting a counter value to trigger an interrupt from. I don’t have any code for you on hand, sorry, but browsing the forums here may turn up some info. Alternatively you could google something like STM32F2xx timer example.
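Just to illustrate the arithmetic though (this assumes the general-purpose timers on the Photon’s STM32F2 run from a 60 MHz clock, which is my understanding of the clock tree rather than something I’ve verified):

// update_rate = timer_clock / ((prescaler + 1) * (auto_reload + 1))
// For a 300 Hz interrupt from a 60 MHz timer clock:
//   60,000,000 / ((199 + 1) * (999 + 1)) = 60,000,000 / 200,000 = 300 Hz
const uint16_t TIMER_PRESCALER   = 199;   // counter ticks at 60 MHz / 200 = 300 kHz
const uint16_t TIMER_AUTO_RELOAD = 999;   // 1000 ticks per update event -> 3.333 ms -> 300 Hz

Those two values would go into the timer’s PSC and ARR registers (or whatever init routine you end up using).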

I haven’t used TCP with the Photon, sorry, so I’ll be of limited help here. All I can say is that if the microcontroller is struggling to keep up with sampling 6 channels at 300Hz, there’s a chance you’ll drop a packet every now and then.

For hardware timers, the awesome @peekay123 has put together the SparkIntervalTimer library, which is also available on Particle Build and will do the heavy lifting for you.

But with some more bare-metal insight you could just trigger the sampling via a timer (and not do the sampling in the ISR) and set up another ISR, triggered by the ADC's conversion-complete interrupt, that just pulls the result out of the data register.
I've not yet done this myself with the STM32F family, tho'.
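At register level the idea would look roughly like the sketch below. Register names are from ST's CMSIS headers for the STM32F2; the channel number is just an example, the Particle firmware normally runs the ADC in dual mode, and I'm not sure every firmware build lets a user app hook ADC_IRQHandler, so treat this as a starting point rather than tested code:

#include "application.h"

volatile uint16_t latestSample = 0;

// Called by the hardware when a conversion finishes (end-of-conversion interrupt)
extern "C" void ADC_IRQHandler(void)
{
  if (ADC1->SR & ADC_SR_EOC) {
    latestSample = ADC1->DR;            // reading DR also clears the EOC flag
  }
}

// Call this from your timer ISR instead of analogRead()
void startConversion()
{
  ADC1->CR2 |= ADC_CR2_SWSTART;         // software-start one regular conversion
}

void setup()
{
  Serial.begin(115200);
  pinMode(A0, AN_INPUT);                // put the pin into analog mode (A0 should be ADC channel 15 / PC5, if I read the pin map right)

  RCC->APB2ENR |= RCC_APB2ENR_ADC1EN;   // clock the ADC peripheral
  ADC1->SQR3    = 15;                   // regular sequence = one conversion of channel 15 (example channel)
  ADC1->CR1    |= ADC_CR1_EOCIE;        // interrupt at end of conversion
  ADC1->CR2    |= ADC_CR2_ADON;         // power the ADC up
  NVIC_EnableIRQ(ADC_IRQn);             // enable the shared ADC interrupt in the NVIC
}

void loop()
{
  startConversion();                    // here triggered from loop(), normally from your timer ISR
  delay(1000);
  Serial.println(latestSample);
}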

Hi,

Thanks for sharing the library. But how can I use that library with my code in the Particle abstraction?
Is there a guide for adding a library of my own?

Also, I didn’t get the last part, i.e. that I can trigger the sampling via a timer. Can you please explain?

Thanks,
Dhaval

What do you mean by "Particle Abstraction"?

To answer this, you'd have to tell us what IDE you intend to use it with and whether you intend to publish the library or only use it in your own projects.

The ADC sampling takes some time, and the actual process of doing it is an autonomous task of the chip.
So you would tell the chip to start this task from a function that gets called by the SparkIntervalTimer library, but you wouldn't wait for the sampling to complete; rather you'd set up another function that gets called by the chip once the task is finished.
But since 300Hz is relatively slow, you won't need to go through all the bother of doing it this way :wink:

Hi @ScruffR,

I am using the Particle Build online IDE for this, so I was using the term Particle Abstraction.
I want to add the library within that IDE. How can I do that?

And do you mean I don’t need to use the SparkIntervalTimer library to capture ADC samples at 300Hz?

Thanks,
Dhaval

Yes you should use it, but you don't need to split the sampling into two parts (trigger task + wait for completion).
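Something like this is what I mean - just a minimal sketch, pin and interval only for illustration:

#include "SparkIntervalTimer/SparkIntervalTimer.h"

IntervalTimer sampleTimer;
volatile uint16_t lastSample  = 0;
volatile uint32_t sampleCount = 0;

void sampleADC()                      // runs inside the timer interrupt
{
  lastSample = analogRead(A0);        // do the read right here - no second ISR required
  sampleCount++;
}

void setup()
{
  Serial.begin(115200);
  sampleTimer.begin(sampleADC, 3333, uSec);   // 3333 us period, roughly 300 Hz
}

void loop()
{
  Serial.print(sampleCount);
  Serial.print(": ");
  Serial.println(lastSample);
  delay(1000);
}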

To add a library to Particle Build (that's the official term - Web IDE is also commonly used - don't make up new words ;-)), you can have a look at the docs here
https://docs.particle.io/guide/getting-started/build/photon/#adding-files-to-your-app
https://docs.particle.io/guide/getting-started/build/photon/#using-libraries
https://docs.particle.io/guide/getting-started/build/photon/#contribute-a-library

Hi @ScruffR,

I used the SparkIntervalTimer library in my project and did the necessary initialization as shown in the example code SparkIntervalTimerDemo.cpp, but I am not getting any timer interrupt!

//Global variable
// Spark Interval timer
IntervalTimer myTimer;
...

void setup()
{
   ...

   // Start Spark Interval Timer
   // To run at every 3.5ms (7 * .5 mSec)
   myTimer.begin(timerCallback, 7, hmSec);
}

Am I doing something wrong?

Thanks,
Dhaval

I can’t see the implementation of your timerCallback function :wink:

I don’t think it depends on the implementation of the timerCallback() function either. That function should be called at a regular interval, i.e. every 3.5 ms in my case, but that is not happening…

Not seeing your whole code doesn’t really help, but this complete demo does work on my Photon (v0.4.7), Core (v0.4.7) and Electron (v0.0.3-rc2 - without SYSTEM_MODE())

#include "SparkIntervalTimer/SparkIntervalTimer.h"

SYSTEM_MODE(SEMI_AUTOMATIC);	//Just to have the blink start immediately

const uint8_t ledPin = D7;		
volatile int blinkCount;
IntervalTimer myTimer;

void setup(void) 
{
  Serial.begin(115200);
  pinMode(ledPin, OUTPUT);
  myTimer.begin(blinkLED, 7, hmSec);
  // for 3.5ms I'd rather use this one tho'
  //  myTimer.begin(blinkLED, 3500, uSec);

  digitalWriteFast(ledPin, HIGH);
  Particle.connect();
}


void loop(void) 
{
    Serial.println(blinkCount);
    delay(1000);
}

void blinkLED(void) 
{
    if ((++blinkCount % 100) == 0) 
	  digitalWriteFast(ledPin, !pinReadFast(ledPin));
}

I am also having trouble getting a fast ADC sampling rate on the Photon. I am attempting to sample from two ADC pins as fast as possible (preferably ~80uS -> ~10kHz) but the Photon always seems to sample at 1 sample per 1000uS.

I tried using the interval timers you specified above set to myTimer.begin(capture, 100, uSec) with no luck.

So I took a step back and made a simple program that captures one ADC read, captures the associated time via micros(), puts it into an array, and repeats this 50 times. I then return the array over TCP, and the fastest ADC read time I can get is 1000uS per conversion. See my code below.

#include "HttpClient.h"
#include "application.h"
#include "SparkIntervalTimer.h"

int leftMicPin = A0;    // Left microphone
int rightMicPin = A5;    // Right microphone

// variables for snap recording
int snapBufferLeft;
int snapBufferRight;

// Wifi IP Address
String wifiIP = "none";

unsigned long leftMicros =0;
unsigned long rightMicros =0;

long microMonitor[49] = {0};
int microIndex = 0;

// Telnet defaults to port 23 - used to create a debugging server over telnet
TCPServer server = TCPServer(23);
TCPClient client;


void setup()
{
  setADCSampleTime(ADC_SampleTime_3Cycles);
  // declare the leftMic and rightMic pins as INPUT's
  pinMode(leftMicPin, INPUT);
  //pinMode(rightMicPin, INPUT);

  wifiIP = String(WiFi.localIP());
  Particle.variable("WifiIP", wifiIP);

}


void loop()
{
  // Start by reading in audio values from the mic ADC pins
  snapBufferLeft = analogRead(leftMicPin);

  microMonitor[microIndex] = micros();
  microIndex++;
  if(microIndex==49){
    for(int i=0; i <50; i++){
      tcpPrint(String(microMonitor[i]));
    }
    microIndex=0;
  }
}

void tcpPrint(String str)
{
    if (client.connected()) {
        if(client.available()){
            server.println(str);
        }
    } else {
    // if no client is yet connected, check for a new connection
    client = server.available();
    }
}

Here is an example of what is returned:

11641026
11642026
11643026
11644026
11645172
11646026
11647026
11648026
11649026
11650026
11651026
11652026

I can’t see you using the SparkIntervalTimer library anywhere; you only run your sampling through loop().
The cloud communication is handled between two iterations of loop() and takes about 1ms, hence the limited sampling speed.
If you want to run your test again without cloud delays, try SYSTEM_MODE(MANUAL).
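For the test that would just mean something like this (a minimal sketch that reports the worst loop-to-loop gap once per second, so you can see the difference the background task makes):

SYSTEM_MODE(MANUAL);                  // no Wi-Fi/cloud servicing between loop() iterations

unsigned long lastLoop = 0;
unsigned long maxGap   = 0;

void setup()
{
  Serial.begin(115200);
  lastLoop = micros();
}

void loop()
{
  unsigned long now = micros();
  unsigned long gap = now - lastLoop;
  lastLoop = now;
  if (gap > maxGap) maxGap = gap;

  static unsigned long lastReport = 0;
  if (millis() - lastReport > 1000) { // report once per second
    lastReport = millis();
    Serial.println(maxGap);
    maxGap = 0;
  }
}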

BTW: Your for-loop should look like this, I’d say: for(int i=0; i<49; i++)

@Garrett, as @ScruffR pointed out, you include the SparkIntervalTimer.h file but you don’t actually use any portion of that library. Also, I agree with his suggested fix for the for-loop as your index will go out of bounds since the array is declared as microMonitor[49], with an index range of 0 to 48.

Your loop() code has two elements that slow it down: the first being tcpPrint() and the second being the system firmware background task, which runs whenever loop() completes. As @ScruffR pointed out, that can take anywhere from less than 1ms up to possibly 5ms depending on the SYSTEM_MODE() and the status of the wifi/cloud connection.

High speed ADC sampling has to be decoupled from any “uncontrolled” timing conditions to work. As such, you can use hardware timer interrupts (via SparkIntervalTimer) coupled with a sampling queue/buffer that gets “serviced” in loop(). It can be a fixed buffer that, once filled, stops ADC sampling until the buffer is serviced, or a circular buffer which holds the latest N samples. The circular buffer allows continuous sampling at the expense of possibly losing contiguous samples if loop() is not servicing the buffer fast enough.

I believe there are topics dealing with ADC sampling so a Community search is recommended. :grinning:
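As a rough sketch of the circular buffer variant (pin, rate and buffer size are arbitrary; the ISR fills the buffer and loop() drains whatever has arrived since its last pass):

#include "SparkIntervalTimer/SparkIntervalTimer.h"

const int      SAMPLE_PIN  = A0;
const uint16_t BUFFER_SIZE = 256;

volatile uint16_t sampleBuffer[BUFFER_SIZE];
volatile uint16_t writeIndex = 0;     // only the ISR writes this
uint16_t          readIndex  = 0;     // only loop() writes this

IntervalTimer sampleTimer;

void sampleISR()
{
  sampleBuffer[writeIndex] = analogRead(SAMPLE_PIN);
  writeIndex = (writeIndex + 1) % BUFFER_SIZE;   // oldest samples get overwritten if loop() falls behind
}

void setup()
{
  Serial.begin(115200);
  sampleTimer.begin(sampleISR, 3333, uSec);      // roughly 300 Hz sampling
}

void loop()
{
  // "service" everything the ISR has produced since the last pass (here: just print it)
  while (readIndex != writeIndex) {
    Serial.println(sampleBuffer[readIndex]);
    readIndex = (readIndex + 1) % BUFFER_SIZE;
  }
}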

Thanks @peekay123, you are definitely correct on both counts. I removed the SparkIntervalTimer.h library before pasting in my code. You are also correct about the calls to the Spark Cloud slowing down user code within the main loop.

I followed your advice (and, as you mentioned, advice in other threads as well) and was able to set up two SparkIntervalTimers on TIMER4 and TIMER7 at 20uS intervals to sample the ADC. One function samples and stores the stereo microphone input, the other compares the audio signals against a threshold to detect a step function. Once the step is detected on both microphones (approx ~50dBA) I perform Time Difference of Arrival to find the angle of arrival of the audio signal.

Right now the code triggered off the second timer is architecturally poor (although it currently works), with too much logic in the interrupt functions, but I will fix this when I move on to doing convolution of the two circular buffers (one per stereo microphone).

Thanks again to both of you, and I apologize for the array-index-out-of-bounds code. It was quickly thrown together to demonstrate the sampling speed problem I was trying to solve.
