readSensorData(0x0A, dataBuffer, sizeof(uint16_t) * 129);  /* 258 raw bytes */
for (int16_t i = 0; i < 258; i += 2)
    data1[i / 2] = (dataBuffer[i] << 8) | dataBuffer[i + 1];  /* MSB first, then LSB */
I think both implementations do the same thing, so I'm confused as to why the memcpy approach does not work while the bit-shifting approach does. Any ideas why?
Is there a more memory-efficient solution that does not require both a byte array and a data array? Ideally I would read the values directly into the data array.
The STM32 family is little-endian: the LSB is stored at the lower address, before the MSB. Your data buffer, however, is in big-endian format: the MSB comes first, then the LSB.
When you build the uint16_t by hand, the shifts and ORs operate on values, not on memory layout, so the compiler takes care of the byte order for you. When you memcpy the data, the bytes keep whatever order they have in memory, which in this case is the wrong order for a little-endian CPU.
The data is stored and read out MSB-LSB, which means it needs to be swapped to LSB-MSB to be stored as a uint16_t correctly; all makes sense so far.
However, when I convert by hand I actually shift the MSB (byte[0]) left by 8 and then OR it with the LSB (byte[1]), so I am preserving the MSB-LSB order rather than swapping it. This is what's confusing me... or is this what you mean by the compiler taking care of the order: that when it stores the result, it swaps what I have converted by hand?
I also read out some calibration data (int16_t and uint16_t) using the memcpy approach, and it matches the calibration spreadsheet I received with the sensor. It seems like something odd is going on.