Accessing Internal Clocks


#1

Hello,

I am developing a capacitive touch sensor for the Photon. I drive one pin high, then measure how long it takes the sensor pin to reach a high level, using an external interrupt and micros(). I currently have the system tuned so the measurement takes around 5 µs untouched (baseline) and about 8 µs when “touched”. This works, but I am wondering whether I can access an internal clock that runs faster than micros(), to gain higher-resolution measurements and shorten the sensing period.

Thanks in advance for your help.
-Joshua


#2

If the sensor depends on such tight timing and interrupts, won’t it be difficult or impossible to implement alongside the user’s application code? Timers interrupting the Photon’s OS can also keep it from functioning properly. Usually one would expect such a sensor to handle all of the “am I being touched” work itself, with the assistance of a chip like the AT42QT1012.


#3

Does this mean that using any external interrupt (even a very low-priority one) will cause problems with the OS and cloud connection? I have had problems losing the cloud connection, even though my interrupt is low priority and its handler is short (see below):

volatile uint32_t u_tS;        // charge start timestamp, set before the measurement
volatile uint32_t u_tS_delay;  // measured charge time

void qTouch_ISR()
{
     u_tS_delay = micros() - u_tS;  // Capture trigger time and determine the delay
     detachInterrupt(qT_IN);        // Disable the interrupt on the qTouch sense pin
}

#4

micros() is derived from DWT->CYCCNT, which counts µC clock cycles.


#5

Someone like @ScruffR would be better placed than I am to comment on just how much of an effect it might have. Trying to envisage what it means, it just sounds potentially disruptive to me, but I am quite prepared to stand corrected.


#6

@Viscacha, I just answered the question; I didn’t intend to qualify the usefulness of this in a solution :wink:

I’m absolutely with you on that point. Time-critical tasks in the single-digit µs range will most likely give you flaky results under different µC load scenarios.
The interrupt latency alone, due to the multi-level indirection imposed by the framework, will give you a minimum bias of 1…5 µs, while “raw HW” interrupts should execute in well under 1 µs (more like a few dozen to a few hundred ns).

But the “OS” and the cloud connection are nowhere near that kind of time criticality, where a few dozen µs (or even ms) would play any role whatsoever.

I doubt that this ISR alone could cause any trouble with the cloud connection.