Micros() rolling over after 59.652323 seconds, not 71 minutes [SOLVED]

I’m running the following code:

void setup (void)
{
  Serial.begin(38400);
}

void loop (void)
{
  Serial.println("millis = " + String(millis()) + ", micros = " + String(micros()));
  delay(1000);
}

Produces the following results:

millis = 57996, micros = 58017760
millis = 59004, micros = 59025514
millis = 60013, micros = 381959
millis = 61020, micros = 1389481
millis = 62029, micros = 2398259
millis = 63038, micros = 3407027

millis = 116470, micros = 56839482
millis = 117479, micros = 57848252
millis = 118488, micros = 58857025
millis = 119495, micros = 212224
millis = 120504, micros = 1221001
millis = 121513, micros = 2230067

Does anyone have any ideas? The micros() code in spark_wiring.cpp is reading a hardware register and scaling the result.

Thanks.

Hi @mgssnr,

I think there is an issue open for this on the firmware repo here: https://github.com/spark/core-firmware/issues/133

Thanks!
David

Technically it wraps after 59.652323 seconds.

Is there something you are intending to use micros() for that wrapping after this period of time causes an issue?

If you are looking to precisely time something that’s taking longer than 59.652323 seconds, I would recommend using the millis() counter. If you are dead set on using the micros() counter for some reason, and you just don’t want to have the wrapping of the counter affect you… ensure that you use unsigned long or uint32_t for your code and perform subtraction like this:

uint32_t lastTime = micros();
while ((micros() - lastTime) < 5000) {
  // run code for 5 milliseconds (5000 microseconds)
}

The subtraction will ensure that when the counter wraps from a high value to a low value, the result is still correct. You must use an unsigned 32-bit variable, though.

EDIT: Technically this is wrong, since we are not doing full-range (UINT32_MAX) unsigned subtraction… it does work for the millis() timer, though. - BDub
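
To illustrate why the trick does hold for millis(), which wraps at the full 32-bit boundary (hypothetical values):

uint32_t before = 4294967295UL;   // UINT32_MAX, just before millis() wraps
uint32_t after  = 4UL;            // just after the wrap
uint32_t delta  = after - before; // 5 -- correct, because uint32_t math is mod 2^32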

1 Like

I would simply like to know why it doesn’t work the way it should.

I’ve been doing (very) low-level ARM microcontroller work for many years. Typically, this is a trivial thing to do: set up a free-running clock at a particular tick rate and let it go, then use the clock’s counter as an unsigned long and you can accurately measure time intervals, as long as you cope with rollover.

The counter selected was the DWT hardware counter, which counts system clock cycles... so it "ticks" every 1/72,000,000th of a second. Because of this, you need 72 "ticks" to equal 1 microsecond.

The DWT is a 32-bit counter, but when you divide its full range by 72, you end up with 59,652,323 microseconds, not 71+ minutes.
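
For reference, the arithmetic works out like this (assuming the 72 MHz system clock):

uint64_t dwtRange = 4294967296ULL;                // 2^32: full range of the 32-bit DWT counter
uint32_t wrapUs   = (uint32_t)(dwtRange / 72ULL); // 59,652,323 us, i.e. ~59.65 seconds
// A counter ticking at 1 us would instead wrap after 2^32 us, i.e. ~71.6 minutes.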

Now I don't know why a 32-bit timer wasn't set to a 1 µs tick rate so it would behave exactly like the Arduino... or if that seemed wasteful when we have a perfectly good DWT counter we can use (or lose)... but this works well, is accurate, and wraps just like a 32-bit counter that rolls over after 71 minutes would. So I don't think this is going to be much worse in terms of wrapping errors.

I imagine it was done so that a timer wasn’t wasted on something the hardware could do out of the box. :smile:

Now, it’s my understanding (and correct me if I’m wrong) that some sort of internal event happens each time the counter overflows. So, it should be trivial to check for this event and keep track of it in a uint32_t. I guess you could use this to watch for rollovers? I’m not really sure this would accomplish anything, since you can already watch for it yourself as per @BDub’s post above.
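
If the overflow event turns out to be awkward to hook, a software-polling alternative would also work; here’s a minimal sketch (pollMicrosRollover() is a made-up helper name, and it assumes you call it at least once per ~59.65 second wrap period):

uint32_t lastMicros    = 0;  // previous micros() reading
uint32_t rolloverCount = 0;  // number of wraps observed so far

void pollMicrosRollover() {
  uint32_t now = micros();
  if (now < lastMicros) {
    rolloverCount++;         // the reading went backwards, so the counter wrapped
  }
  lastMicros = now;
}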

I can’t possibly imagine a scenario where you’d want to watch micros() any longer than 60 seconds, either. Once it hits a certain point you won’t even be able to do additive math on it anymore, as you’d overflow your variable, which means you’d have to start breaking it down into smaller chunks, carrying remainders… It just sounds messy. Plus that would take numerous clock cycles, so if you were trying to do precise timing with it, well, it wouldn’t be very precise.

I’m not really a “(very) low level ARM guy” so I’m not quite sure I get it. Can you perhaps explain a use case where using micros() to time hour+ intervals would be useful, @mgssnr?

1 Like

I had simply assumed the documentation was correct, and was reporting the discrepancy.

I don’t necessarily have a use case. I was fooling around with the clock and noticed the discrepancy.

I didn’t check the software or the reference manual to see what rate the clock driving the timer was ticking at. I did not know which of (possibly multiple) clocks was selected, or if a prescaler was enabled. One ARM core I used recently had many different input clocks and prescalers for the GPTs, and I wasn’t feeling up to tearing into the code and docs to try to find the issue, so I simply reported it.

Perhaps someone can update the web page if it’s going to roll over at 59.x seconds instead of the 71 minutes stated on the page.

Oh, I must have missed where you mentioned it was in the docs. Sorry about that!

When you mentioned having ARM experience I guess I assumed you were saying it should be 71 minutes and there was an issue.

That’s why I was asking. :smiley:

@timb no, I just meant that I understand how these things worked, and couldn’t reconcile what I was seeing vs what was in the docs about micros().

I did fail to mention I read it in the docs. My bad there.

*Nods* I gotcha. No worries about not mentioning the docs; after my last post I scrolled back and didn’t see any mention of them, but it’s not a big deal. I’ll make a change to them so others will be aware! :smile:

We spent a lot of time talking about this :smile:

Docs say: “Returns the number of microseconds since the Arduino board began running the current program. This number will overflow (go back to zero), after approximately 70 minutes.”

I think that’s a pretty obvious copy/paste from Arduino’s docs :wink:

1 Like

Well, that’s a silly bug. Adding a bug for this. Didn’t realize we were just talking about something wrong in the docs either. :slight_smile:

Thanks guys!
David

Sorry about the confusion. I wasn’t aware it was implemented already.
The document now reflects the observations. Fixed the Ctrl-c/v typo as well.

Thanks!

2 Likes

Sorry to reply to a dead topic, but this doesn’t seem right. Correct unsigned math spanning rollover will only occur if the rollover corresponds to the word length of the variables, right? In other words, (uint16_t)0x0001 - (uint16_t)0xFFFF is 2, but (ulong)1 - (ulong)59652322 isn’t 2 unless you’re doing math mod 59652323. So if micros() happens to roll over at the time you’re trying to use it to measure something, then you’re going to end up with rubbish results unless you do the modulo math. Isn’t that true, or am I missing something obvious? Is there an easier solution than modulo math? For example, can we programmatically reset the micros() counter just before we use it to ensure rollover won’t occur?
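
To make the failure concrete (hypothetical readings straddling the wrap):

uint32_t before = 59652322UL;     // reading just before micros() wraps
uint32_t after  = 1UL;            // reading just after the wrap
uint32_t naive  = after - before; // 4,235,314,975 -- rubbish, the true delta is 2
uint32_t modulo = (after + 59652323UL - before) % 59652323UL; // 2 -- correct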

1 Like

fwiw, here’s how I’m dealing with micros() rollovers:

#define MICROS_ROLLOVER ((unsigned long)59652323)

/* Returns the elapsed microseconds between two micros() readings,
 * compensating for the non-power-of-two rollover. */
static unsigned long us_delta(unsigned long old_us, unsigned long new_us)
{
    new_us -= old_us;
    if ((long)new_us < 0)          /* counter wrapped: raw difference went "negative" */
        new_us += MICROS_ROLLOVER; /* shift the result back into the modulo range */
    return new_us;
}

So when I want the delta between old_micros and micros(), I use us_delta(old_micros, micros()) and never have to worry about getting caught in a rollover (given that the time between old_micros and micros() is less than MICROS_ROLLOVER).
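
For example, here is a rollover-safe version of the 5 ms busy-wait from earlier in the thread, using us_delta() above:

unsigned long start = micros();
while (us_delta(start, micros()) < 5000UL) {
  // run code for 5 milliseconds, safe even if micros() wraps mid-loop
}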

1 Like

If you were looking for the reason for the non-base-2 rollover, this was the explanation further up in this thread.

1 Like

I got that part. I was responding to the assertion that one could just subtract these values and unsigned math would take care of rollover. It clearly doesn’t. Anyway, I’ll wager there are some Spark programs out there showing anomalous behavior because the dev assumed rollover worked the “normal” way. Seems like there should be a function like us_delta in the SDK, referenced from the micros() documentation.

1 Like

You’re absolutely correct. There was a lot of talk and work on other areas of the code that did involve full 32-bit counters and I got that mixed up with this :wink: Because micros() doesn’t return values up to UINT_MAX, you won’t be able to take advantage of unsigned subtraction preventing rollover error.

What do you think about having an extra function that just returns the current DWT count to work with? Because it counts all the way up to UINT_MAX, it’s error-proof when performing a simple unsigned subtraction. It also has 72 times finer resolution than micros() on the Core, and 120 times finer on the Photon.

Example:

/* Returns the 32-bit DWT count in 1/72nd of a microsecond increments on the Core,
 * and in 1/120th of a microsecond increments on the Photon.
 */
uint32_t getDWT() {
  return DWT->CYCCNT;
}

uint32_t startTime = getDWT();
while ((getDWT() - startTime) < (30 * SYSTEM_US_TICKS)) {
  // loop here for 30 us
}

/* SYSTEM_US_TICKS = 72 on the Core and 120 on the Photon. Multiply a microsecond value by it to get the equivalent number of DWT ticks. */

And to be super precise you can do away with the function call overhead and use DWT->CYCCNT directly.
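
For instance, something like this (same assumptions as the example above):

uint32_t startTicks = DWT->CYCCNT;                           // read the cycle counter directly
while ((DWT->CYCCNT - startTicks) < (30 * SYSTEM_US_TICKS)) {
  // busy-wait for 30 us with no function-call overhead
}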

2 Likes

I would like that for many uses, but the micros() call is so handy for everyday applications. So, yeah, I like the direct access to the DWT, but I think a couple of small tweaks around micros() (e.g. a helper function like my example above, plus a few more lines of explanation in the docs) would benefit a broader set of users and might cause fewer porting problems (e.g. when moving between Spark boards with different architectures and/or clock rates).

1 Like

How about we fix the micros() call to use both millis() and the DWT counter so we get full 32-bit precision?
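
One possible shape for that, as a hedged sketch rather than the actual fix: pair millis() with the SysTick down-counter (a close cousin of the DWT idea, and the register that already drives the millisecond tick) for the sub-millisecond part. The combined value then wraps at the full 2^32 us (~71.6 minutes), just like Arduino’s micros(). SysTick->VAL and SysTick->LOAD are the standard CMSIS registers; micros_fullrange() is a made-up name.

// Sketch only -- assumes SysTick is configured to tick every 1 ms, as on the Core.
uint32_t micros_fullrange() {
  uint32_t ms, ticks;
  do {
    ms    = millis();
    ticks = SysTick->VAL;        // down-counter, reloaded from SysTick->LOAD each ms
  } while (ms != millis());      // retry if the ms tick fired between the two reads
  uint32_t subUs = (SysTick->LOAD - ticks) / SYSTEM_US_TICKS; // 0..999 us within this ms
  return ms * 1000UL + subUs;    // wraps at the full 32-bit boundary, like Arduino
}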