How to convert a 16-bit binary number to an integer

I would like to convert a 16-bit binary value to an integer programmatically. Below is the code that I am using. When I compile it I get an error: invalid conversion from 'int' to 'const char*'. Could someone please suggest where I could be going wrong?

uint16_t Received_Bytes = 1011001111001101;
int decimalValue = 0;
for(int i = 16; i <=1; i++)
{
    decimalValue += ((Received_Bytes>>(i-1)) & 1) * pow(2, (i-1));  // I believe the error is in this line of code
}

Try 0b1010101010… that lets the compiler know it's 1's and 0's and not a quadrillion or whatever that number works out to as a decimal.


@Hootie81 tried with uint16_t Received_Bytes = 0b1011001111001101; but still getting the same error.

I compiled the code, adding 0b in front of the binary number, and the compiler error I get is

pow() was not declared in this scope

I did a search through the firmware, and pow isn't defined there; it's part of the standard C++ library. To make this function available to you, please add

#include <cmath>

at the top of your application code, and then it compiles. :slight_smile:
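
In other words, something like this at the very top of the file (just a sketch):

#include <cmath>   // makes pow() and the other standard math functions visible

// ... rest of the application code, e.g. the loop from the first post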

If I am interpreting your request correctly, Received_Bytes is a series of 1's and 0's you have received somehow from some device?

You don't need to convert a 16-bit binary number to an integer at all, as it will already be in that format if the 1's and 0's are loaded into the int value as you receive them from the device.

If, however, your received 1's and 0's are held as a string of text, then you need to store them in either a char array or a String; a char array is easiest for your code above.

const char *Received_Bytes = "1011001111001101";

Now change your code to use

(*Received_Bytes++ - '0')

and you will be walking the pointer through the character array, reading each character as a number (subtracting '0', which is ASCII 48, converts from ASCII to the numeric value). Note the dereference, so that you read a single character rather than the pointer itself, and the post-increment, which moves you on to the next character.

I haven’t tried this but it should work.

Putting 0b in front of the series of 1's and 0's will create a 16-bit number, with no need for any conversion. I still think the OP is trying to do this with a series of 1's and 0's received from an external device of some sort.
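
As a quick sketch of that point (assuming the value really does arrive as an integer):

uint16_t Received_Bytes = 0b1011001111001101;  // the compiler already stores this as 46029 (0xB3CD)
// Received_Bytes can be used directly in any formula; no conversion loop needed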

I think you need to #include <math.h> to use pow.

You are right; the code compiles once the pow function is made available.

@v8dave you've spotted it correctly: I have received the 16-bit binary value from an external device. How do I convert this to a decimal number (my approach can be seen in the first post)? I need to use the decimal number in a formula.

@mdma when I insert these two lines of code along with the original one, I still get the same error: invalid conversion from 'int' to 'const char*' [-fpermissive]

Spark.publish("logging", decimalValue);  // This line of code could be causing the error
    delay(1000);

Yes, the problem there is that you are trying to publish an event whose event data is decimalValue, which is an int, but the function expects a string. Assuming this could be made to work, what would you like to see in the event data: the binary value? I also see now that you are trying to extract the binary digits using pow(). When the base is 2, powers such as 2^n are very easy to compute with integer shifts, so you don't need pow(), which is also somewhat inaccurate because it uses floating-point rather than integer arithmetic.
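
For example (just a sketch, assuming Received_Bytes is declared with the 0b prefix as suggested above), the pow(2, i-1) weight can be replaced with an integer shift:

uint16_t Received_Bytes = 0b1011001111001101;

int decimalValue = 0;
for (int i = 16; i >= 1; i--)
{
    // (Received_Bytes >> (i-1)) & 1 isolates bit i-1;
    // (1 << (i-1)) is the integer equivalent of pow(2, i-1)
    decimalValue += ((Received_Bytes >> (i - 1)) & 1) * (1 << (i - 1));
}
// decimalValue is now simply equal to Received_Bytes (46029), which is why no
// conversion is needed at all when the value already arrives as an integer.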

Maybe it's easier if we start from the beginning: if you tell us what you want to achieve (e.g. how you get the binary value) and what you want to do with that value.

As @mdma said you need a string to publish, more like this:

char pubStr[32];
sprintf(pubStr,"%d",decimalValue);
Spark.publish("logging",pubStr);

There are a lot of ways to convert an integer to char* string; this one is simple but there are many others.
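
For instance (a sketch with the same assumptions as above), snprintf is a bounded variant that also guards against overflowing pubStr:

char pubStr[32];
snprintf(pubStr, sizeof(pubStr), "%d", decimalValue);  // bounded version of sprintf
Spark.publish("logging", pubStr);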

Another option is to set an int value to 32768 (the weight of the top bit of a 16-bit number, i.e. 2^15) and simply divide it by 2 each time you go through the loop. This will give you the same weights as your pow() calls but does away with floating-point maths.

Also, consider a simple test such as this which assumes Received_Bytes is a character array of 1’s and 0’s.

int decimalValue = 0;
int mask = 32768;                        // weight of the top bit (2^15)
for(int i = 16; i >= 1; i--)             // 16 characters, most significant first
{
    if((*Received_Bytes++ - '0') & 1)    // read one character and convert ASCII to 0/1
    {
        decimalValue += mask;
    }
    mask = mask / 2;
}

Once you have the value then the post from @bko will work fine.
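
Putting the pieces together, here is a rough, untested end-to-end sketch (assuming the digits arrive as a 16-character string, as in my earlier post):

const char *Received_Bytes = "1011001111001101";   // example input as received

int decimalValue = 0;
for (int i = 0; i < 16; i++)
{
    decimalValue = (decimalValue << 1) + (Received_Bytes[i] - '0');  // shift in each digit
}

char pubStr[32];
sprintf(pubStr, "%d", decimalValue);   // convert the integer to text, as @bko showed
Spark.publish("logging", pubStr);      // publish the text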

@bko @v8dave @mdma Thank you so much for your help.

To give you a complete overview: I am getting a 32-bit binary number (RES_0) from a slave, which I am breaking into two 16-bit parts, an integer part and a fractional part, as described in a table in the data sheet. Using these two parts I intend to convert it to a decimal value, e.g. 1.005674.

I would then use the decimal value to calculate the correction factor (e.g. correction factor = 61.035/1.005674) and publish the correction factor.

Please advise if I am approaching it in the right way (my first post has the part of the code) or any better ways to solve this.

Hi @Falcon

If you want to do the correction on the core, I would do it in double-precision floating-point. It is not clear if your res_0 number is signed or unsigned but you can change the declaration to fit.

uint32_t res_0 = read_res_0();  //assume you can get the 32-bit value here

double res_0_double = ((double)res_0)/65536.0;  // scale by 2^-16
double corr_factor = 61.035 / res_0_double;
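
If you ever need the two 16-bit halves separately, here is a sketch (assuming the upper 16 bits of res_0 hold the integer part, which is what the scaling above implies; int_part and frac_part are just illustrative names):

uint16_t int_part  = (uint16_t)(res_0 >> 16);      // upper 16 bits: integer part
uint16_t frac_part = (uint16_t)(res_0 & 0xFFFF);   // lower 16 bits: fractional part

double res_0_double = (double)int_part + (double)frac_part / 65536.0;  // same value as res_0/65536.0
double corr_factor  = 61.035 / res_0_double;

You can then publish corr_factor the same way as before, by formatting it into a char buffer first.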