Speeding up data transfer to the DAC on Photon

photon

#1

Hi!
So I am attempting to implement a synchronous detection algorithm on the Photon. This involves pushing samples to the DAC as fast as possible while also sampling with the ADC and performing some simple computation. I am currently doing this by creating lookup tables and iterating through them in the loop() section of my code. However, in the process of designing this I have found that writing to the DAC is by far the limiting factor in the speed of the algorithm: I can only achieve ~500 samples/second, even with all of the other functions and calculations removed from the code.

#include <math.h>

// declare constants
const double pi = 3.14159265;
const int16_t divisions = 10;
const int16_t resolution = 256 - 1;   // 8-bit DAC full scale
const double frequency = 100;

// declare arrays
float demodulatedSamples[divisions] = {0};
float cosTable[divisions];
int16_t dacTable[divisions];
int16_t samples[divisions] = {0};

// declare increments
int16_t i = 0;

// declare temporary storage
double val = 0;
double calc = 0;
int16_t sample = 0;

void setup() {
  //setADCSampleTime(ADC_SampleTime_3Cycles);
  analogWriteResolution(DAC1, 8);

  // define lookup tables
  for (int x = 0; x < divisions; x++) {
    val = 2 * pi * x / (divisions - 1);
    cosTable[x] = (2.0 / divisions) * cos(frequency * val);  // 2.0, not 2: integer division would zero the table
    calc = resolution * (0.5 * cos(val) + 0.5);
    dacTable[x] = (int16_t)floor(calc);
  }
}

void loop() {
  if (i >= divisions) i = 0;
  analogWrite(DAC1, dacTable[i]);
  sample = analogRead(A0);
  calc = cosTable[i] * (sample - samples[i]) / resolution;
  if (i > 0) {
    demodulatedSamples[i] = calc + demodulatedSamples[i - 1];
  } else {
    demodulatedSamples[i] = calc + demodulatedSamples[divisions - 1];
  }
  samples[i] = sample;
  i++;
}

I assume that the slow DAC writes come from how the RTOS handles the data transfer. I have been looking into a fix using DMA (as in the particle-speaker library). However, to make this work I would need an interrupt that fires every time a sample is transferred to the DAC, so that the rest of my code can execute in between. But, to be perfectly honest, I have almost no idea how to implement this.

If you have any suggestions on how I could incorporate the interrupt into my code with the particle-speaker library, or if you think I'm looking at the problem from the wrong perspective, any help is appreciated.

Cheers!


#3

Okay, I don't want to bump my own thread, but I've made some progress.

The particle-speaker library pretty much matches the implementation described in this application note.

Specifically, it transfers data to the DAC using DMA, with Timer 6 used as the clock. A figure in the application note shows that Timer 6 triggers each conversion performed by the DAC. Thus, I think it would be possible to attach a hardware interrupt to Timer 6 that executes the ADC sample and the calculation while the next sample is loaded into the DAC. Of course, this assumes that the interrupt won't interfere with the DMA transfer to the DAC. I don't think this will be an issue, since the interrupt should only act on the system thread. I will also have to finish the sampling and calculations before the next value is loaded into the DAC, but this shouldn't be too hard as long as the code is well optimized. Now I just have to figure out how the SparkIntervalTimer library implements these interrupts, and I will hopefully be able to test it soon.
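Before wiring anything to the real timer, it may help to factor the per-sample work into a plain function that the ISR would call. The sketch below is only my own illustration, not code from particle-speaker or SparkIntervalTimer: the function and variable names (`processTick`, `idx`, and so on) are hypothetical, and the hardware hookup is shown only in comments. Keeping the math free of hardware calls also means it can be checked on a host.

```cpp
#include <math.h>
#include <stdint.h>

const int DIVISIONS = 10;
float cosTable[DIVISIONS];            // demodulation reference (filled in setup)
float demod[DIVISIONS] = {0};         // running demodulated output
int16_t lastSamples[DIVISIONS] = {0}; // previous period's ADC samples
volatile int idx = 0;                 // position within the period

// Per-tick work a timer ISR would perform: one lock-in update
// using the current ADC sample, then advance (and wrap) the index.
void processTick(int16_t sample) {
    float calc = cosTable[idx] * (sample - lastSamples[idx]) / 255.0f;
    int prev = (idx == 0) ? DIVISIONS - 1 : idx - 1;
    demod[idx] = calc + demod[prev];
    lastSamples[idx] = sample;
    idx = (idx + 1) % DIVISIONS;
}

// Hypothetical Photon hookup (untested, names assumed):
//   IntervalTimer t;
//   void isr() {
//       analogWrite(DAC1, dacTable[idx]);   // queue next DAC value
//       processTick(analogRead(A0));        // sample and demodulate
//   }
//   t.begin(isr, 100, uSec);                // e.g. a 100 us tick
```

Because `processTick` touches no peripherals, its timing and wraparound behavior can be verified off-device before attaching it to a timer.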


#4

Nope, interrupts are entirely independent of FreeRTOS threads and can preempt everything, apart from same- or higher-priority ISRs, at any time.
That's why you must keep the workload in an ISR very minimal.
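A common way to honor that constraint is to do nothing in the ISR except capture the sample and set a flag, and let loop() do the arithmetic. A minimal sketch of the pattern, with names of my own choosing (this is not from any Particle library):

```cpp
#include <stdint.h>

volatile bool sampleReady = false;   // set by the ISR, cleared by loop()
volatile int16_t latestSample = 0;   // most recent ADC reading
long processedCount = 0;             // how many samples loop() has handled

// The ISR body: just capture and flag. No floating-point math,
// no buffers walked, so it returns in a handful of cycles.
void onSampleISR(int16_t adcValue) {
    latestSample = adcValue;
    sampleReady = true;
}

// Called repeatedly from loop(): does the heavy work only when
// the ISR has signaled that a fresh sample is waiting.
void service() {
    if (!sampleReady) return;
    int16_t s = latestSample;   // copy out before clearing the flag
    sampleReady = false;
    // ... demodulation math on s would go here ...
    (void)s;
    processedCount++;
}
```

On real hardware you would also want to guard multi-byte shared state against the ISR firing mid-read, for example by briefly disabling interrupts around the copy; the single-flag handshake above is the bare minimum of the idea.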


#5

I realized that this is not possible with DMA, or at least that it is very difficult to implement.

If you encounter this same issue, I recommend processing the data in chunks as you acquire it. There is a double-buffer configuration, as well as half- and full-transfer-complete flags that can be checked from within loop(), so that you know when to process your data.
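To illustrate the half/full-complete idea without hardware: treat the DMA buffer as two halves, and process whichever half the controller has just finished filling while it writes into the other. Below is a host-side simulation of only the bookkeeping; the flag names are mine (on the STM32 they would correspond to the DMA half- and full-transfer interrupt flags), and the "processing" is a stand-in sum rather than the real demodulation.

```cpp
#include <stdint.h>

const int BUF_LEN = 8;           // total circular buffer length
int16_t adcBuf[BUF_LEN];         // filled by "DMA" in this simulation
bool halfFlag = false;           // stands in for the half-transfer flag
bool fullFlag = false;           // stands in for the transfer-complete flag
long processedHalves = 0;

// Process one half of the buffer. Here we just sum it, standing
// in for the real per-chunk demodulation work.
long processHalf(int start) {
    long acc = 0;
    for (int k = start; k < start + BUF_LEN / 2; ++k) acc += adcBuf[k];
    processedHalves++;
    return acc;
}

// What loop() would do: poll the flags the DMA controller sets.
// While we process the lower half, "DMA" is filling the upper half,
// and vice versa, so acquisition never stalls.
long poll() {
    if (halfFlag) { halfFlag = false; return processHalf(0); }
    if (fullFlag) { fullFlag = false; return processHalf(BUF_LEN / 2); }
    return -1;   // nothing ready yet
}
```

The same shape carries over to the device: the flag checks become reads of the DMA status register (or callbacks), and `processHalf` becomes the demodulation over that chunk.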