Q: Does the length of an integer datatype impact the precision of a calculation?

Moved from original thread (see quote)

It is using long math, not int math(s). Sorry for the confusion.

On all STM32Fxxx devices (so this includes the Core and the Photon), long and int are effectively interchangeable: they are both 32-bit values.

@ScruffR's point was that the math is computed over integer values rather than floating point, which will lead to unexpected rounding (truncation) errors.
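
To illustrate both points, here is a minimal sketch of my own (not code from the thread) that should compile for an STM32Fxxx target or with any desktop C++ compiler; it prints the sizes of int and long and shows how integer division drops the fractional part:

```cpp
#include <stdio.h>

int main() {
    // On STM32Fxxx (and other ILP32 targets) both report 4 bytes;
    // on a typical 64-bit desktop, long would usually be 8 bytes instead.
    printf("sizeof(int)  = %u\n", (unsigned)sizeof(int));
    printf("sizeof(long) = %u\n", (unsigned)sizeof(long));

    long  asLong  = 10L / 3L;      // integer division: 3, the .333... is discarded
    float asFloat = 10.0f / 3.0f;  // floating point:   3.333...

    printf("10 / 3 as long  = %ld\n", asLong);
    printf("10 / 3 as float = %f\n", (double)asFloat);
    return 0;
}
```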


@mdma, I am sorry, but I did not know that. I did think they were different.
But with the range of long, it looks like there should be no noticeable error.
If I am incorrect, please advise me.
Sorry, I am not too versed in the STM32Fxxx.

When I first started computing, an int was 8 bits (256 values). It keeps changing faster than I do, I guess.

If there is a rounding error in a calculation with long, would it be less than 1%, less than 0.01%, or what?

Sure, we all have blind spots, which we must each be aware of in our interactions on this forum. In the face of uncertainty, it's to everyone's advantage that one defers to others, refraining from conjecture and supposition.

@mdma, I can understand your opinion. From a forum perspective, don’t rock the boat.
But I sometimes ask from a technical perspective: how much error can be expected from a calculation with a long versus a calculation with a float? I would guess that neither one would be off by more than one thousandth of a percent. Do you disagree?

Just discourse, no harm, no foul!

This is a bit off topic for this thread. If you want to continue the discourse, maybe we should start a new thread. (Done by ScruffR)

For this thread, do you have a suggestion for how to hook the hardware to the A0 pin? Or what script might give a reading that is close to accurate? That would be more than welcome, since there are many ways to design that. What is your best suggestion? Personally, I like #2 for the software. If you agree with that, no need to reply; otherwise, give your suggestions. *(already answered in the original thread)*

@Jack, the possible max value of an integer datatype does not at all impact the precision of any calculation (as long as it doesn’t exceed the range, of course ;-)).
Why should the result of 10/3 differ between 8-bit, 16-bit, or 32-bit integers?

Can you clarify?
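
For illustration, here is a small standalone example of my own (not from the original post) showing that 10 / 3 comes out as 3 no matter how wide the integer type is:

```cpp
#include <stdint.h>
#include <stdio.h>

int main() {
    int8_t  narrow = (int8_t)10  / (int8_t)3;   // 3
    int16_t medium = (int16_t)10 / (int16_t)3;  // 3
    int32_t wide   = 10 / 3;                    // 3
    // The width only changes the range of representable values,
    // not the result of the division itself.
    printf("%d %d %d\n", (int)narrow, (int)medium, (int)wide);  // prints: 3 3 3
    return 0;
}
```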


BTW: I tend to use `int` when explicitly talking about the datatype of that name, and “int” (with no special formatting) when I’m lazy and don’t want to spell out “integer” - subtle differences :wink:

Not to interpret @mdma’s reaction, but I think it might be that in your previous post you stated a contradictory view based on a misunderstanding on your side.
If you had instead written it as a question, since my wording did confuse you, it most likely would not have caused the same reaction.


Thanks for your help, @ScruffR.
There can be cases where integer math is not appropriate. IMO it has no problems with addition, subtraction, and multiplication, but division is one operation that may not provide the expected results (so you need to be aware of that).
A division of 10000000/3000000 should provide a pretty accurate result, unless using 8-bit ints. {I was wrong}
A division of one billion / 300 million may need a 32-bit integer to work. {I was wrong}

Thanks, Jack
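
As a quick check of the two divisions above (my own snippet, not code from the thread): the width of the type only determines whether the operands can be represented at all; the division itself truncates the same way regardless of width.

```cpp
#include <stdio.h>

int main() {
    long a = 10000000L / 3000000L;      // 3, not 3.333...
    long b = 1000000000L / 300000000L;  // also 3; 32 bits are only needed to hold the
                                        // one-billion operand, not to improve the result
    printf("%ld %ld\n", a, b);          // prints: 3 3
    return 0;
}
```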

I'd dare to contradict.
An error of more than 10% (between the returned result and the mathematically correct, expected result) is not what I'd call pretty accurate.
Especially not when this result is subsequently used to scale up other values.
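
The size of that error can be checked with a few lines (again just my own illustration, using the numbers already quoted above):

```cpp
#include <stdio.h>

int main() {
    double expected = 10000000.0 / 3000000.0;       // 3.3333...
    long   got      = 10000000L  / 3000000L;        // 3 (fraction truncated)
    double relError = (expected - got) / expected;  // about 0.1, i.e. roughly 10 %
    printf("expected %.4f, got %ld, error %.1f %%\n", expected, got, relError * 100.0);
    return 0;
}
```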


I agree, if it produces an error of 10% compared to the float calculation.

This is discourse; I expect differences of opinion and contradictions, but with respect. In my opinion.

I get your point now: 3 vs 3.33333333. Thanks for the heads up.

This is a well-studied field and I can heartily recommend Bernie Widrow’s book “Quantization Noise: Roundoff Error in Digital Computation, Signal Processing, Control, and Communications”.

Bernie gave a talk where I work a few years ago and he is a great scholar and a true gentleman.
