Best way to do real-time signal decoding on Spark (AKA how to avoid being clobbered by background processes)

Hi all,

I’m looking at making a real-time signal decoder on my Core. Ideally, I’d like to be able to sample and demodulate in real time, and then trigger a publish() event whenever a valid packet is received.

I’m sending manchester-coded sound pulses to an electret mic, whose output is filtered by an analog frontend. From there, I’ll either pipe the received signal (carrier and all) directly into the Core’s ADC and run an FFT with a small number of bins, or go via an LMC567 tone decoder to get the demodulated output directly (this would reduce processing overhead on the Core, since the actual demodulated data rate will be fairly slow).
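For the tone-decoder path, here’s roughly the kind of pulse-length decoding I have in mind (completely untested, just to show what I mean; the pin, the ~1 kbit/s data rate and the storeBit() helper are placeholders for whatever the real link ends up using):

```cpp
// Untested sketch of the tone-decoder path: time the gaps between edges on the
// LMC567 output and recover manchester bits from them.
const int DECODER_PIN = D2;                 // demodulated output from the LMC567 (assumed pin)
const unsigned long HALF_BIT_US = 500;      // half a bit period (assumes ~1 kbit/s data rate)
const unsigned long TOL_US = 150;           // timing tolerance on edge spacing

volatile unsigned long lastEdgeUs = 0;
volatile bool lastEdgeWasMidBit = false;
volatile uint32_t shiftReg = 0;             // placeholder store for decoded bits
volatile uint8_t bitCount = 0;

static void storeBit(int bit) {
    // Placeholder packet assembly: just shift bits in. Framing/validation would go
    // here, setting a "packet ready" flag that loop() picks up for Spark.publish()
    // (publishing from inside the ISR itself seems like a bad idea).
    shiftReg = (shiftReg << 1) | (bit ? 1 : 0);
    bitCount++;
}

void onEdge() {
    // Manchester has a transition in the middle of every bit, plus a boundary
    // transition only between two equal bits. So after a mid-bit edge the next edge
    // comes either a full bit period later (another mid-bit edge) or half a period
    // later (a boundary edge), and the direction of a mid-bit edge is the bit value.
    unsigned long now = micros();
    unsigned long dt = now - lastEdgeUs;
    lastEdgeUs = now;

    bool halfPeriod = (dt > HALF_BIT_US - TOL_US) && (dt < HALF_BIT_US + TOL_US);
    bool fullPeriod = (dt > 2 * HALF_BIT_US - TOL_US) && (dt < 2 * HALF_BIT_US + TOL_US);

    if (!halfPeriod && !fullPeriod) {
        // Gap doesn't fit the coding (idle line or lost sync): treat this edge as a
        // fresh mid-bit reference. A real receiver would resync on a known preamble.
        lastEdgeWasMidBit = true;
        return;
    }

    bool midBit;
    if (lastEdgeWasMidBit) {
        midBit = fullPeriod;        // full period after a mid-bit edge -> mid-bit again
    } else if (fullPeriod) {
        lastEdgeWasMidBit = true;   // full period after a boundary edge is a coding error
        return;
    } else {
        midBit = true;              // half period after a boundary edge -> mid-bit
    }
    lastEdgeWasMidBit = midBit;

    if (midBit) {
        // IEEE 802.3 convention: rising mid-bit edge = 1, falling = 0 (flip this if
        // the transmitter uses the opposite convention).
        storeBit(digitalRead(DECODER_PIN));
    }
}

void setup() {
    pinMode(DECODER_PIN, INPUT);
    attachInterrupt(DECODER_PIN, onEdge, CHANGE);
}

void loop() {
    Spark.process();
    // check bitCount / shiftReg here, validate the packet, then Spark.publish(...)
}
```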

My question is - how do I do this and not have my sample rate clobbered by the Spark’s background processes? After doing a bunch of research, I’m thinking of this approach:

-Kick off an ADC conversion into a pretty big buffer via DMA
-While that is running, do Spark.process() and the signal recovery stuff (sliding DFT if I go straight off the ADC? simple pulse-length counting if the tone decoder works out?)
-Wait until ADC job has finished
-Kick off another ADC job immediately
-etc. (rough sketch of what I mean below)
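Concretely, I was picturing something like this. It's completely untested and the ADC/DMA setup is just cribbed from the STM32F10x standard peripheral library examples; I'm assuming those headers are available in the Core build, that nothing else in my sketch touches analogRead()/ADC1 or DMA1 channel 1, that my frontend lands on a pin that maps to PA0 / ADC channel 0 (I'd check the Core pin map), and that free-running conversion is close enough for a first pass (for exactly 50 kHz I'd trigger the ADC from a timer instead):

```cpp
#include "application.h"
#include "stm32f10x.h"          // assuming the SPL headers come along with the firmware build

#define BUF_SIZE 500            // 10 ms at 50 k samples/s, as above
static uint16_t adcBuf[2][BUF_SIZE];   // ping-pong pair: fill one, process the other

static void startAdcDma(int which) {
    // One-shot (normal mode) DMA job: ADC1->DR -> adcBuf[which], BUF_SIZE half-words.
    // ADC1 is hard-wired to DMA1 channel 1 on the STM32F103.
    DMA_InitTypeDef dma;
    DMA_DeInit(DMA1_Channel1);
    DMA_StructInit(&dma);
    dma.DMA_PeripheralBaseAddr = (uint32_t)&ADC1->DR;
    dma.DMA_MemoryBaseAddr     = (uint32_t)adcBuf[which];
    dma.DMA_DIR                = DMA_DIR_PeripheralSRC;
    dma.DMA_BufferSize         = BUF_SIZE;
    dma.DMA_PeripheralInc      = DMA_PeripheralInc_Disable;
    dma.DMA_MemoryInc          = DMA_MemoryInc_Enable;
    dma.DMA_PeripheralDataSize = DMA_PeripheralDataSize_HalfWord;
    dma.DMA_MemoryDataSize     = DMA_MemoryDataSize_HalfWord;
    dma.DMA_Mode               = DMA_Mode_Normal;    // one buffer per job
    dma.DMA_Priority           = DMA_Priority_High;
    DMA_Init(DMA1_Channel1, &dma);
    DMA_Cmd(DMA1_Channel1, ENABLE);
}

void setup() {
    // Clocks for ADC1, DMA1 and the GPIO port
    RCC_APB2PeriphClockCmd(RCC_APB2Periph_ADC1 | RCC_APB2Periph_GPIOA, ENABLE);
    RCC_AHBPeriphClockCmd(RCC_AHBPeriph_DMA1, ENABLE);
    RCC_ADCCLKConfig(RCC_PCLK2_Div6);                // 12 MHz ADC clock

    // Analog input on PA0 (assumption -- use whatever pin the frontend actually drives)
    GPIO_InitTypeDef gpio;
    gpio.GPIO_Pin   = GPIO_Pin_0;
    gpio.GPIO_Speed = GPIO_Speed_2MHz;
    gpio.GPIO_Mode  = GPIO_Mode_AIN;
    GPIO_Init(GPIOA, &gpio);

    // ADC1: single channel, continuous conversion, results pushed out over DMA.
    ADC_InitTypeDef adc;
    ADC_StructInit(&adc);
    adc.ADC_Mode               = ADC_Mode_Independent;
    adc.ADC_ScanConvMode       = DISABLE;
    adc.ADC_ContinuousConvMode = ENABLE;
    adc.ADC_ExternalTrigConv   = ADC_ExternalTrigConv_None;
    adc.ADC_DataAlign          = ADC_DataAlign_Right;
    adc.ADC_NbrOfChannel       = 1;
    ADC_Init(ADC1, &adc);
    ADC_RegularChannelConfig(ADC1, ADC_Channel_0, 1, ADC_SampleTime_71Cycles5);
    ADC_DMACmd(ADC1, ENABLE);
    ADC_Cmd(ADC1, ENABLE);

    // Calibrate, then start converting
    ADC_ResetCalibration(ADC1);
    while (ADC_GetResetCalibrationStatus(ADC1));
    ADC_StartCalibration(ADC1);
    while (ADC_GetCalibrationStatus(ADC1));

    startAdcDma(0);
    ADC_SoftwareStartConvCmd(ADC1, ENABLE);
}

void loop() {
    static int filling = 0;        // which buffer the DMA is currently filling

    // While adcBuf[filling] fills in the background, do the cloud housekeeping and
    // work on the buffer that was completed last time around.
    Spark.process();
    // decodeBuffer(adcBuf[1 - filling], BUF_SIZE);   // hypothetical demod / packet parsing

    // Wait for this buffer to complete, then immediately re-arm on the other one.
    while (DMA_GetFlagStatus(DMA1_FLAG_TC1) == RESET);
    DMA_ClearFlag(DMA1_FLAG_TC1);
    filling = 1 - filling;
    startAdcDma(filling);
}
```

One thing I already don't love about this: in normal (non-circular) DMA mode there's a small gap between "transfer complete" and re-arming the channel, during which conversions get dropped, which is part of why I'm asking about the circular-buffer option below.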

I know how all of these things work in theory, but I don’t have that much actual experience with them. So, my questions are:

-How long does Spark.process() take to run?
-Is there enough memory for the buffer size I’m going to need? (Sampling at 50 kHz, so if I want 10 ms of time to run Spark.process() and whatever data parsing stuff, that’s 500 samples, or about 1 kB at 16 bits per sample.)
-Is there a way to DMA into a circular buffer, so the transfer can just run continuously while I keep a separate read pointer into the buffer and process new samples in real time as they arrive? (Something like the sketch right after this list?)
-What am I missing here/is this a sane approach?
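For that circular-buffer question, what I'm imagining is switching the DMA channel from the sketch above to DMA_Mode_Circular over one longer buffer and chasing the hardware write position with my own read index, roughly like this (again untested; feedSlidingDft() is just a stand-in for whatever processing ends up there):

```cpp
// Circular variant: same ADC/DMA setup as the previous sketch, but with
//   dma.DMA_Mode       = DMA_Mode_Circular;
//   dma.DMA_MemoryBaseAddr = (uint32_t)ring;  and  dma.DMA_BufferSize = RING_SIZE;
// so the hardware wraps forever with no gaps, and this loop() replaces the other one.
#define RING_SIZE 1024                 // ~20 ms of headroom at 50 kHz
static uint16_t ring[RING_SIZE];
static uint16_t readIdx = 0;           // software read pointer

// CNDTR counts *down* from RING_SIZE to 0, so the next write position is
// RING_SIZE minus the current counter value.
static uint16_t dmaWriteIndex(void) {
    return (uint16_t)((RING_SIZE - DMA_GetCurrDataCounter(DMA1_Channel1)) % RING_SIZE);
}

void loop() {
    Spark.process();

    // Consume everything the DMA has written since we last looked.
    uint16_t writeIdx = dmaWriteIndex();
    while (readIdx != writeIdx) {
        uint16_t sample = ring[readIdx];
        readIdx = (readIdx + 1) % RING_SIZE;
        // feedSlidingDft(sample);     // hypothetical: sliding DFT / pulse-timing update
    }
    // As long as loop() comes back around before the DMA laps readIdx
    // (RING_SIZE samples, so ~20 ms here), nothing should be lost.
}
```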

Thanks!