Difference between Software Timers and using millis()

Hey guys, I was just wondering if there is a reason to use Software Timers over just counting elapsed time with the millis() function. I think I read somewhere that they use different hardware, so does that mean one is more accurate than the other? I want to use these to read values from some sensors every minute, or sync the clock every 24 hours, for example. I am using a Photon.

I prefer to use millis() checked from loop whenever possible. The reason is that software timers can get you into all sorts of unexpected problems because they run in a separate thread with a small stack. There are calls you just can’t make from a software timer, and there’s always a possibility of a thread synchronization error, which can be hard to debug.

Just remember to always use the pattern:

if (millis() - lastTime >= timeBetweenCalls)

As long as you always write your code like that, it will behave correctly when the millis() counter rolls over, roughly every 49 days. This post explains why that works.


I second @rickkas7, but Software Timers are just nice and convenient to use if you don't do a lot in them and don't want the hassle of keeping the rest of your code non-blocking :wink:
Sometimes lazy is not all wrong :sunglasses:


Yeah, I figured millis() would be more reliable, and yeah, I know about the overflow issue with it. Thanks guys

Another reason to use millis() is the situation where you need to handle a lot of different time periods, i.e., more than the number of available timers, e.g., one individually for every sensor.

Although that can be done with Timers too, since you can multiplex.
e.g. Have a timer trigger every 10 ms, read one sensor on every visit, but read another only every other visit, or whenever (visitCount % x == 0).
