Controlling the system thread duration, or keeping a truly fixed-time loop

I'm working on an application where I'd like to update an LED strip at a fixed rate of 2 ms (so about 500 times a second). Sometimes I use that 2 ms slot to do calculations, but other times I don't (so I have about 1 to 1.5 ms free).

I read this in the manual about RTOS:

The RTOS is responsible for switching between the application thread and the system thread, which it does automatically every millisecond. This has 2 main consequences:

- delays close to 1ms are typically much longer
- application code may be stopped at any time when the RTOS switches to the system thread

Is it possible to control the time given to the system thread from the application thread? I don't want the system thread to take longer than my time slot of 1.5 ms, because after that I need to write out a frame.

Putting everything in a SINGLE_THREADED_BLOCK() is also not possible, because then no system processing would happen anymore.

Would it actually be smart for me to use the threaded system? I think not. But what would I lose?

Any ideas about other alternatives? Actually, I won't use Cloud functionality except for getting the time now and then (that is not time critical, though; I can stop the animation for a while) and doing OTA upgrades.

The only thing that is left (if I didn't miss anything) is the WiFi connection. I've read that the system thread makes sure it reconnects automatically when a connection is broken. Is that something I can do myself as well? I've found this code: https://github.com/kennethlimcp/particle-examples/blob/master/wifi-auto-reconnect/wifi-auto-reconnect.ino

For example, by always checking whether WiFi.ready() returns true before connecting to a client?
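Something like this minimal sketch of that check (the host and port are made up; WiFi.ready(), WiFi.connect() and TCPClient are standard Device OS calls):

```cpp
// Only touch the TCP client when WiFi.ready() reports a working connection.
TCPClient client;

const char HOST[] = "192.168.1.100";  // hypothetical server address
const int  PORT   = 8080;             // hypothetical port

void ensureConnected() {
    if (!WiFi.ready()) {
        // Connection dropped; ask the system to bring WiFi back up.
        WiFi.connect();
        return;
    }
    if (!client.connected()) {
        client.connect(HOST, PORT);
    }
}
```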

I've read the manual about the application and system thread.

Assuming that “writing out a frame” just involves something like doing digital output or shifting out a bit of serial data, here’s what I would do:

If you have two free pins, make one a digital output, making sure it’s compatible with PWM (tone). Make the other a digital input, making sure it’s compatible with attachInterrupt. Physically connect the pins together.

In setup(), attach the interrupt handler using attachInterrupt(). Set a tone() on the output pin at 500 Hz.

Pre-calculate as much data as possible in loop(). Store it in a buffer in memory, for example.

In the interrupt handler, send out your bits. Now you need to avoid doing lots of calculations in the interrupt handler. You can’t delay, allocate memory, make networking calls, etc… But you can guarantee that you’ll output data at exactly 2 ms intervals, regardless of what’s going on elsewhere in the system.

This won’t work if you need to do heavy-duty calculations or things like networking to write out a frame, but this is how I do very time-sensitive inputs or outputs.
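A minimal sketch of that setup (D0 and D1 are just example pins; check which pins on your device support PWM/tone and attachInterrupt before wiring them together):

```cpp
volatile uint8_t frameBuffer[64];   // pre-calculated frame data, filled in loop()
volatile bool frameReady = false;

void frameISR() {
    if (frameReady) {
        // Shift the frame out here: digital writes / SPI only,
        // no delay(), memory allocation, or network calls.
        frameReady = false;
    }
}

void setup() {
    pinMode(D0, OUTPUT);                    // tone output pin (PWM-capable)
    pinMode(D1, INPUT);                     // interrupt pin, physically wired to D0
    attachInterrupt(D1, frameISR, RISING);  // one rising edge per tone period
    tone(D0, 500, 0);                       // 500 Hz, duration 0 = keep running -> 2 ms period
}

void loop() {
    // Do the heavy animation math here, fill frameBuffer,
    // then hand the finished frame to the ISR.
    frameReady = true;
}
```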

@rickkas7, that’s a great idea! Another approach is to use software timers which have a minimum resolution of 1ms. Same rules as the ISR but without the pin sacrifice. :wink:
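For illustration, a sketch of that approach using the Device OS Timer class (names are illustrative; the callback only raises a flag so the heavy work stays in loop()):

```cpp
volatile bool frameDue = false;

void onFrameTimer() {
    frameDue = true;    // keep the timer callback short
}

Timer frameTimer(2, onFrameTimer);   // software timer with a 2 ms period

void setup() {
    frameTimer.start();
}

void loop() {
    if (frameDue) {
        frameDue = false;
        // write out the frame / advance the animation here
    }
}
```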

Thanks for your suggestions.

@peekay123
I suppose I can then use your interval timer library for this purpose? https://github.com/pkourany/SparkIntervalTimer
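For reference, a hedged sketch of how that library might slot in, assuming the begin(callback, period, uSec) call shown in its README (double-check the exact signature against the library source):

```cpp
#include "SparkIntervalTimer.h"

IntervalTimer frameTimer;
volatile bool frameDue = false;

void onFrameTick() {
    frameDue = true;                 // hardware-timer ISR: keep it minimal
}

void setup() {
    frameTimer.begin(onFrameTick, 2000, uSec);   // 2000 us = 2 ms
}

void loop() {
    if (frameDue) {
        frameDue = false;
        // write out the frame here
    }
}
```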

@rickkas7
Why do I have to avoid lots of calculations? Is that because I would cross the 2 ms limit (when another interrupt is fired), or because of something else?

Right now I just break up my animation calculations into different states and make sure they won’t take more than 2 ms to execute (I measure in microseconds). There are copy actions, because I work with two frame buffers and blend between them.

Still, I’m not sure about one thing. Now I have a guarantee that my function is triggered every 2 ms. So if I put all my (carefully timed) code in this SINGLE_THREADED_BLOCK(), I won’t have any problem with the system thread interrupting the calculations that I’ve timed?
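A minimal illustration of what that would look like (the function name is hypothetical; the idea is to keep the block as short as possible):

```cpp
// SINGLE_THREADED_BLOCK() prevents thread switching while the block runs.
void writeFrame() {
    SINGLE_THREADED_BLOCK() {
        // timing-critical output only, measured to stay well under 2 ms
    }
}
```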

In the library documentation of @peekay123 I read about volatile variables. I haven’t dived into that subject yet, and I’m afraid it might get difficult for me. Would that be necessary in the setup described above?

I guess you could do more extensive calculations in an interrupt handler, but I usually run them with interrupts disabled, so if you spend too much time in an interrupt handler everything else would eventually stop working. At 2 ms you’d probably be fine, though. I’d try the software timer first, and if that doesn’t work you could give the interrupt trick a try.

Any variable that’s accessed from both an interrupt service routine or timer callback and the regular loop code should be declared volatile.

And, yea, software timers would be much easier at 2 milliseconds. I came up with the interrupt technique before the software timers existed. Also, you can crank it way up: I was able to do 16000 Hz (every 62.5 microseconds) on a Core, no less!
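To illustrate the volatile rule mentioned above, a small example (names are illustrative; on a 32-bit MCU a single 32-bit read is atomic):

```cpp
// Variables touched by both the timer/interrupt callback and loop()
// are declared volatile so the compiler always re-reads them from memory.
volatile uint32_t tickCount = 0;   // incremented in the callback

void onTick() {
    tickCount++;
}

void loop() {
    static uint32_t lastSeen = 0;
    uint32_t now = tickCount;      // snapshot the shared value once
    if (now != lastSeen) {
        lastSeen = now;
        // a new 2 ms tick has occurred; update the animation
    }
}
```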


The description that the RTOS switches between the system thread and app thread every 1ms is a nice simple explanation - easy to grasp, but it is a simplification. :wink: In practice, the system thread spends much of its time sleeping unless there is actual work to do (such as pulling in cloud events.)

The reason to use the threading system is that it prevents your application thread from being blocked during network outages. You can also mitigate this with SYSTEM_MODE(MANUAL) and control exactly when the background work runs, which gives you ultimate control, but typically using the threading system is simpler.
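A hedged sketch of that manual-mode approach (the structure is an assumption; the point is that Particle.process() only runs in the idle part of the 2 ms budget):

```cpp
SYSTEM_MODE(MANUAL);   // the application decides when system work runs

void setup() {
    WiFi.on();
    WiFi.connect();
}

void loop() {
    // ... timing-critical frame work first ...

    // Spend the leftover time on cloud/network housekeeping.
    if (WiFi.ready()) {
        Particle.process();
    }
}
```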


Thanks for all the suggestions. I’ve made some example code that works fine. It uses an example command protocol to control LEDs. It’s designed for use with FastLED, but you won’t need it; this is more to show the principle.

C code:

Processing application to send data over TCP:

I’m using your SparkIntervalTimer library right now, but is it true that it is now implemented in the default firmware?
https://docs.particle.io/reference/firmware/photon/#software-timers

Sorry, I’ve already found it. Software timers have a resolution of 1 ms; your library supports timers at the microsecond level.
