*Moved from original thread (see quote)*

It is using long math, not int math(s). Sorry for the confusion.

On all STM32Fxxx devices (so this includes the Core and the Photon), `long` and `int` are synonymous. They are both 32-bit values.

@ScruffR's point was that the math is computed over integer values rather than floating point, which will lead to unexpected rounding errors.

@mdma, I am sorry, but I did not know that. I thought they were different.

But with the range of `long`, it looks like there should be no noticeable error.

If I am incorrect, please advise me.

Sorry, I am not too well versed in the STM32Fxxx.

When I first started computing, an int was 8 bits (256 values). It keeps changing, faster than I do, I guess.

If there is a rounding error in a calculation with `long`, would that be less than 1%, less than 0.01%, or what?

Sure, we all have blind spots, which we must each be aware of in our interactions on this forum. In the face of uncertainty, it's to everyone's advantage that one defers to others, refraining from conjecture and supposition.

@mdma, I can understand your opinion. In a forum aspect, don't rock the boat.

But I sometimes ask in a technical aspect: how much error can be expected from a calculation with a `long` versus a calculation with a `float`? I would guess that neither one would be off by more than one thousandth of a percent. Do you disagree?

Just discourse, no harm, no foul!

This is a bit off topic for this thread. If you want to continue the discourse, maybe we should start a new thread. *(Done by ScruffR)*

@Jack, the possible max value of an integer datatype does **not at all** impact the precision of any calculation (as long as it doesn't exceed the range, of course ;-)).

Why should the result of `10/3` differ between 8-bit, 16-bit, or 32-bit integers? Can you clarify?

BTW: I tend to use “`int`” when explicitly talking about the datatype called this way, and “int” (with no special formatting) when I'm lazy and don't want to spell out “integer” - subtle differences.

Not to interpret @mdma's reaction, but I think it might be that in your previous post you **stated** a “contradiction”/contradictory view based on a misunderstanding on your side.

If you had written it as a question instead, since my wording did confuse you, it most likely would not have caused the same reaction.

Thanks for your help @ScruffR.

There can be cases where integer math is not appropriate. IMO it poses no problems for addition, subtraction, and multiplication. But division is one operation that may not provide the expected results (so you need to be aware of that).

A division of 10000000/3000000 should provide a pretty accurate result, unless using 8-bit ints. *(I was wrong)*

A division of one billion / 300 million may need a 32-bit integer to work. *(I was wrong)*

Thanks, Jack

I'd dare to contradict.

An error of >10% (between the returned and the mathematically correct *and* expected result) is not pretty accurate in all cases.

Especially not when using this result to subsequently scale-up other values.

I agree, if it produces an error of 10% compared to the float calculation.

This is discourse; I expect differences of opinion and contradictions, but with respect. In my opinion.

I get your point now: 3 vs. 3.33333333. Thanks for the heads-up.

This is a well-studied field and I can heartily recommend Bernie Widrow's book “Quantization Noise: Roundoff Error in Digital Computation, Signal Processing, Control, and Communications”.

Bernie gave a talk where I work a few years ago and he is a great scholar and a true gentleman.
