Timer-driven analogRead w/o blocking?

I would like to perform analogReads on several pins at a rate of 1-4 kHz. I was able to achieve this, or something close to it on the Atmel parts by using a timer and two interrupts. One interrupt was attached to the timer, set to go off at the rate I desire. That handler started an analog conversion. The Atmel part had the ability to attach another interrupt to the conversion finishing, so you could then have a handler do something with the converted data. This allowed me to avoid blocking on the ADC.
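For reference, that two-interrupt structure can be sketched hardware-agnostically. Everything below is illustrative (the fake ADC data register, the buffer size, the function names); on real hardware the two functions would be the timer-compare and ADC-complete ISR vectors:

```cpp
#include <cstdint>
#include <cassert>

// Stand-ins for the two interrupt handlers described above. On real
// hardware these would be ISRs; here they are plain functions so the
// pattern can be exercised anywhere.

volatile uint16_t adcDataReg = 0;      // stands in for the ADC data register

const int kBufSize = 8;
volatile uint16_t samples[kBufSize];   // small ring buffer filled by the "ISR"
volatile uint8_t head = 0, tail = 0;

// "ADC conversion complete" handler: stash the result, never block.
void onAdcComplete() {
  samples[head % kBufSize] = adcDataReg;
  head++;
}

// "Timer" handler: start a conversion at the sampling rate.
// In this sketch the conversion finishes instantly.
void onTimerTick(uint16_t simulatedReading) {
  adcDataReg = simulatedReading;
  onAdcComplete();
}

// Main-loop side: drain one sample if available, -1 otherwise.
int popSample() {
  if (tail == head) return -1;
  return samples[tail++ % kBufSize];
}
```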

It required the usual amount of register poking and hoping.

I was hoping I could accomplish something similar on the Spark without too much fuss and debugging. Is it possible?

Regards,
Dave J

@djacobow, for the timer part, I created an IntervalTimer library to access the STM32 hardware timers. For the ADC interrupts, at this time the core firmware does not support this functionality.

There were many issues with ADC input noise and these were addressed using “Dual Slow Interleaved ADC Mode” to achieve higher input impedance. The analogRead() code is in spark_wiring.cpp in the (open source) core-firmware library if you are interested in checking it out.

OK, thanks. That’s slightly disappointing, but expected. The code in spark_wiring.cpp looks designed to average a lot of samples to reduce noise, but it certainly is slow. It even uses DMA, but rather than using DMA asynchronously, it blocks waiting for the transfer to complete.

I guess I’ll be spending some time with the STM32 reference manuals. :smile:

@djacobow, I saw that also. Waiting for the DMA still makes for a fast ADC conversion and avoids having to deal with interrupts. Besides the reference manual, you can also dig in the core-common-lib library where all the low-level stuff is. What you want is definitely doable. :smile:

Hi @djacobow

My tests showed that over a large number of samples (256-512) I was able to sample analog inputs at around 30.5 kS/s, which is much faster than Arduino. The 10-sample average reduces noise and variance, but the ADC is already way faster than other platforms.

It can be hard to measure the time required for a single analogRead() since the DMA process is interrupt-based. But if you write a loop, I think the performance is quite good.

Hmm, I get about the same result in a tight loop of analogReads … about 32.7 us per call.

In my design, I have the Spark device plugged into a board where four hall-effect sensors measure the current on an AC plug. I’d like to be able to capture each of them 1200 times/second (that’s 20 samples per AC cycle) and calculate the RMS value. For four pins, that’s 4800 samples/second, or 208 us each. So that should leave plenty of time for the processor to do math and stuff.
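The budget arithmetic is easy to sanity-check with a throwaway helper (plain C++; the numbers are the ones from above):

```cpp
#include <cstdint>
#include <cassert>

// Per-sample time budget in microseconds for N channels each sampled at
// `ratePerChannel` Hz. With 4 channels at 1200 samples/s each, that is
// 1e6 / 4800 ~= 208 us per sample, comfortably more than the ~32.7 us
// a single analogRead() was measured to take in a tight loop.
uint32_t sampleBudgetUs(uint32_t channels, uint32_t ratePerChannel) {
  return 1000000UL / (channels * ratePerChannel);
}
```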

I have to give more thought to why I can’t get my code to work. When I time my loop, it seems to take about 5000 us per loop of four samples. There is some math involved, but I don’t think it’s killer expensive; it’s all integer and no arrays. There are four calls to an integer sqrt, but I don’t think they explain the issue.

1000 us are taken up by the delay(1), but the other 4000 are unaccounted for. The four analogReads should take ~130 us. I can’t take out that call to delay, either, because it seems that the whole Spark magic breaks without it.

(Note also the shift/add stuff is to compute sliding averages without having to use long circular buffers. Essentially, they implement IIR filters: y[t+1] = f*y[t] + (1-f)*s[t], where f is a fraction in [0,1] and s[t] is the newest sample.)
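The shift/add trick can be checked on its own. A minimal host-side sketch (plain C++; the shift of 7 gives f = 127/128, and the helper names are made up for the test):

```cpp
#include <cstdint>
#include <cassert>

// One step of the IIR low-pass y[t+1] = f*y[t] + (1-f)*s[t]
// with f = 127/128, done entirely with shifts and adds.
int32_t iirStep(int32_t avg, int32_t sample) {
  return avg - (avg >> 7) + (sample >> 7);
}

// Drive the filter with a constant input; it should settle at the input
// value. (Pre-scaling samples by << 4 keeps some precision through the
// >> 7, which is why the loop code shifts its analogRead results up.)
int32_t iirSettle(int32_t sample, int iterations) {
  int32_t avg = 0;
  for (int i = 0; i < iterations; i++) {
    avg = iirStep(avg, sample);
  }
  return avg;
}
```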

Here is my preliminary code:

#include <stdint.h>
#include <math.h>


// a fast integer sqrt I cribbed from the 
// interwebz
unsigned int sqrt32(unsigned long n)  {  
 unsigned int c = 0x8000;  
 unsigned int g = 0x8000;  
 for(;;) {  
  if(g*g > n)  g ^= c;  
  c >>= 1;  
  if(c == 0)  return g;  
  g |= c;  
 }  
}  

int apins[] = {A0, A1, A2, A3}; // analog pins
int cpins[] = {D0, D1, D2, D3}; // control pins
int32_t rms[4] = { 0,0,0,0 };   // hold rms results
int32_t  avgs[4] = {0,0,0,0};   // hold avg results
int32_t  avsqs[4] = {0,0,0,0};  // hold avg of squared results
int32_t  stats[4] = {0,0,0,0};  // hold on/off status of each channel

unsigned long last;
unsigned long past = 0;
int count  = 0;


// Function to be exported to Spark API to allow on/off
// control of each channel
int setOutput(String command) {
 String nstring = command.substring(3);
 char channels[4] = {0,0,0,0};

 int action  = -2;
 if (nstring.length() == 4) {
  action++;
  for (int i=0;i<4;i++) {
   channels[i] = nstring.substring(i,i+1).toInt();
  }
 } 
 if (action > -2) {
  if (command.startsWith("set")) {
   action = 1;
  } else if (command.startsWith("clr")) {
   action = 0;
  }
 }
 if (action >= 0) {
  for (int i=0;i<4;i++) {
   if (channels[i]) { 
    digitalWrite(cpins[i],action);
    stats[i] = action;
   }
  }
  return 0;
 }
 return -1;
}

void setup() {

 Spark.variable("irms0", &rms[0], INT);
 Spark.variable("irms1", &rms[1], INT);
 Spark.variable("irms2", &rms[2], INT);
 Spark.variable("irms3", &rms[3], INT);
 Spark.variable("stat0", &stats[0], INT);
 Spark.variable("stat1", &stats[1], INT);
 Spark.variable("stat2", &stats[2], INT);
 Spark.variable("stat3", &stats[3], INT);
 Spark.variable("period", &past, INT);

 for (int i=0;i<4;i++) {
  pinMode(apins[i],INPUT);    
  pinMode(cpins[i],OUTPUT);    
  digitalWrite(cpins[i],stats[i]);
 }

 Spark.function("go",setOutput);

 last = micros();
}



void loop() {

 for (int i=0;i<4;i++) {

  int32_t v = analogRead(apins[i]) << 4;
  // we do the average value as a simple IIR LPF.
  int32_t avg = avgs[i];
  int32_t navg = avg - (avg >> 7) + (v >> 7);
  avgs[i] = navg;

  // and we do the average of squares the same way
  int32_t v2 = (v - navg) * (v - navg);
  int32_t avsq  = avsqs[i];
  int32_t navsq = avsq - (avsq >> 7) + (v2 >> 7);
  avsqs[i] = navsq;

  rms[i] = sqrt32((uint32_t)avsqs[i]); // integer sqrt defined above
 }

 delay(1);
 uint32_t now  = micros();
 past = past - (past >> 3) + ((now - last) >> 3);
 
 last = now;
 count++;
}

@djacobow, delay(1) may call SPARK_WLAN_Loop(), which would account for the extra delay. At the end of loop(), SPARK_WLAN_Loop() will also be called before loop() is called again.

Hi @djacobow

As @peekay123 says, in order to keep the cloud connection alive, delay() can call the SPARK_WLAN_Loop function that does the cloud housekeeping. That takes some amount of time depending on what you are asking it to do, in your case handling those Spark.variable calls.

I would try rewriting your loop() to use currentMicros = micros() and keep a lastMicros variable so that when currentMicros - lastMicros >= Threshold, you take your samples, sort of like you already do with now, past and last.
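That pattern can be tried off-device, too. Here is a sketch with a stubbed micros() standing in for the real one (the 10 us polling tick and the simulateSampling() helper are made up for the test):

```cpp
#include <cstdint>
#include <cassert>

// Fake clock standing in for the Spark's micros(); advanced manually below.
static uint32_t fakeNow = 0;
static uint32_t micros() { return fakeNow; }

// Elapsed-time scheduler: take a sample whenever `intervalUs` has passed,
// as suggested above. Returns how many samples a run of `durationUs`
// yields when loop() gets polled every `tickUs`.
int simulateSampling(uint32_t durationUs, uint32_t intervalUs, uint32_t tickUs) {
  fakeNow = 0;
  uint32_t lastMicros = micros();
  int samplesTaken = 0;
  for (; fakeNow < durationUs; fakeNow += tickUs) {
    uint32_t currentMicros = micros();
    if (currentMicros - lastMicros >= intervalUs) {
      // the four analogRead()s and filter updates would go here
      samplesTaken++;
      lastMicros = currentMicros;  // or += intervalUs to avoid drift
    }
  }
  return samplesTaken;
}
```

With a 208 us interval and a 10 us poll, one simulated second lands a little under the ideal 4808 samples, since each trigger waits for the next poll tick.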

There is a hidden delay in between invocations of loop(), due to the way the current firmware talks to the CC3000 wifi chip.

Usually it is about 5 ms if there is nothing to do (the vast majority of cases); it may be shorter or longer if there is a packet that requires processing, and possibly a reply to send.

Thanks, guys, I figured that’s what happens. The delay is not in the call to delay(1) but in the time between invocations of loop(). That’s fine. I get it. Still, 5000 us will be too long for my purpose.

I think what I will try to do is get all the “real time” stuff out of loop() and put it in an interrupt handler instead. I’ve already tried peekay123’s IntervalTimer library and it is not working for me; the callback isn’t getting called. I’m not sure why. But it might have something to do with the fact that I had to comment out the TIMx_IRQHandler() stubs in order to get the library to compile. (I can’t make the demo run, either. The firmware appears to download, but then the core resets twice and ends up with the old firmware.)

It really makes sense to have the analogRead stuff tied to a timer since sampling uniformly is somewhat important.

@bko, your suggestion to use the desired delay minus the time already elapsed was the first thing I tried! That’s when I found out the loop time was much greater than I could accept in the first place.

Now that I can see the ADC isn’t a big time suck, I think I can work with just the one timer interrupt, presuming I can make that work!

1 Like

Ugh, the timer pool library seems to work OK from the CLI, but not the Web IDE.

@djacobow, I will be posting a fix for the IntervalTimer library very soon. Some weird compiler thing that did not appear when I first created the library (!!).

UPDATE: The updated IntervalTimer library is available on my github. :smile:

1 Like

Cool. Look forward to it!

UPDATE: Cool! It works!

I’ve only just started playing around with the Spark. There’s some stuff I like and stuff that I’m learning I don’t. It’s a very ambitious platform, though, much more complex than the Ards, so I understand the rough edges. It also has an enthusiastic community!

My current project is a smart power strip. It can switch each outlet independently, as well as report the current use for each outlet. It also has an audio detector that can respond to claps, and a light sensor. Tying it all together will be a web app that allows you to specify control logic using any of those inputs plus timers, to decide which outlets are on.

I designed my board to take an AVR and an nRF24L01 transceiver, but also put in a socket for the Spark to try out that world. I’m kicking myself for not putting in an SWD connector for debugging as well.

2 Likes

@djacobow, I look forward to hearing about your project! Let us know if we can help :wink: