Temp Sensor Variations

I have been using the same analog temperature sensors with the Spark Core for a couple of months now, and had no reason to believe anything was wrong with the values I was receiving, until today :stuck_out_tongue:

So I have three sensors: T1, T2, and T3. T1 has been plugged into a wall socket for over a week and was reading what I thought was the correct temperature (today it was ~18 °C); it's also fairly close to the ground. I plugged T2 into a power bar roughly 7-8 ft away from T1 and slightly closer to the ground, so I didn't expect a big difference in temperature; however, it was reading 14.5-15 °C, which to me seems like a big jump. So I grabbed T3, plugged it into the wall socket adjacent to the one T1 was using, and placed T1 1 cm away from T3. It was now reading 16 °C :frowning:

Without getting into lengthy details, I switched sensors, plugged them all into the same location, and made sure they all had the same firmware. In the end T2 and T3 are fairly close to one another, but T1 still seems to be way off.

This is how I’m reading the analog signal from the Core:

  analogRead(tempPin);                  // throwaway read so the ADC input settles
  delay(10);
  int tempRead = analogRead(tempPin);   // 12-bit result, 0..4095
  float tempmV = ((float) tempRead) / 4095.0 * 3300.0;   // counts -> mV, assuming a 3300 mV full scale
  // datasheet parabolic transfer function solved for temperature (deg C)
  tempC = ((5.506 - sqrt(pow(-5.506, 2) + (4 * 0.00176 * (870.6 - tempmV)))) / (2 * -0.00176)) + 30;

I got the tempC equation off the transfer function in the datasheet for my sensor here.
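In case it helps to see it in context, here is a minimal sketch around the reading above; the A0 pin, the Serial output, and the one-second loop delay are illustrative placeholders rather than my actual firmware:

  // Minimal sketch around the reading above. The A0 pin, the Serial
  // output and the 1 s loop delay are illustrative placeholders.
  int tempPin = A0;
  double tempC = 0;

  void setup() {
    Serial.begin(9600);
  }

  void loop() {
    analogRead(tempPin);                  // throwaway read so the input settles
    delay(10);
    int tempRead = analogRead(tempPin);   // 12-bit result, 0..4095
    float tempmV = ((float) tempRead) / 4095.0 * 3300.0;
    tempC = ((5.506 - sqrt(pow(-5.506, 2) + (4 * 0.00176 * (870.6 - tempmV)))) / (2 * -0.00176)) + 30;
    Serial.println(tempC);
    delay(1000);
  }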

I realize sensors have error and can differ from one unit to the next, but I want to know if I'm doing something obviously wrong. If anyone sees anything and can point it out, I would really appreciate it :slight_smile:

@UST, given that these are voltage-output sensors, did you consider adding a 0.01uF capacitor between the analog input pin and GND? As with other temperature sensors (see below), there may be a known impedance “mismatch” with the Core analog front end which the capacitor has proven to stabilize. :smile:

I suspect your problem is that the reference voltage for the ADC on the core is just the 3.3V power supply rail(*), with a small filter (ferrite bead) to reduce the worst of the high-frequency noise that will be present on any digital system (this filtered voltage is labelled “3V3*” on the pinout). The instantaneous value of this reference depends on several things, including the absolute accuracy of the voltage regulator, the temperature coefficient of that regulator, and the instantaneous load (which itself changes with WiFi activity, other digital output pins, STM32 activity, and so on).

Because of this, the analog inputs on the core are best suited for ratiometric measurements, referenced to the same 3V3* rail, rather than making accurate absolute voltage measurements.
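To make the ratiometric point concrete with a sketch: with a resistive sensor, say a 10k NTC thermistor and a 10k fixed resistor forming a divider between 3V3* and GND, the rail voltage cancels out of the measured ratio, so regulator drift stops mattering. This is only an illustration of the idea, not the sensor you are using; the pin, the component values, and the beta constant below are assumptions.

  // Ratiometric illustration: a 10k NTC thermistor from A1 to GND and a
  // 10k fixed resistor from 3V3* to A1. The pin, component values and
  // beta constant are assumptions for this sketch.
  const float R_FIXED = 10000.0;   // fixed resistor to 3V3* (ohms)
  const float R0      = 10000.0;   // thermistor resistance at 25 C (ohms)
  const float BETA    = 3950.0;    // thermistor beta constant (K)

  float readThermistorC(int pin) {
    int counts = analogRead(pin);            // 0..4095, measured relative to 3V3*
    float ratio = (float) counts / 4095.0;   // Vout / Vref: the 3V3* value cancels out
    float rTherm = R_FIXED * ratio / (1.0 - ratio);                  // thermistor resistance
    float tKelvin = 1.0 / (1.0 / 298.15 + log(rTherm / R0) / BETA);  // beta equation
    return tKelvin - 273.15;
  }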

You certainly may improve things with additional decoupling, as @peekay123 suggests; but you were reporting a systematic offset between readings on different devices, not erratic readings on a single device. To my ear, that sounds more like a difference in the 3V3* rail (the ADC Vref) between devices.

I would recommend using temperature sensors with digital (SPI or I2C) outputs if you want higher accuracy and repeatability across multiple devices; they are immune to these effects.
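For example, something like a TI TMP102 over I2C needs no analog reference at all. A rough sketch, assuming a breakout at the default 0x48 address wired to the Core's D0 (SDA) / D1 (SCL) pins:

  // Rough TMP102-over-I2C sketch. The breakout, its 0x48 address and the
  // wiring to D0 (SDA) / D1 (SCL) are assumptions.
  #include "application.h"

  const int TMP102_ADDR = 0x48;

  void setup() {
    Serial.begin(9600);
    Wire.begin();
  }

  void loop() {
    Wire.beginTransmission(TMP102_ADDR);
    Wire.write((uint8_t) 0x00);          // point at the temperature register
    Wire.endTransmission();

    Wire.requestFrom(TMP102_ADDR, 2);
    int msb = Wire.read();
    int lsb = Wire.read();
    int raw = ((msb << 8) | lsb) >> 4;   // 12-bit result, left justified
    if (raw > 0x7FF) raw -= 4096;        // sign-extend negative temperatures
    float tempC = raw * 0.0625;          // 0.0625 C per LSB

    Serial.println(tempC);
    delay(1000);
  }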

(*) - this decision is dictated by the pin-out tradeoffs made by ST for the physical chip package used in the core.


@peekay123 my apologies, I forgot to mention that I do indeed have a 0.01uF cap in place!

@AndyW everything you said makes absolute sense. Right now I have things kind of stable, but it's not perfect. Switching to an SPI or I2C temp sensor would probably be the best decision.

Thank you for the fast replies and sound advice, guys :slight_smile:

You got great advice already but I thought I would add a few things:

You could measure the 3V3* voltage on each core in situ and replace the 3300 in your code with the actual value for that core. Depending on how you power your cores, a separate 3.3V supply might also be easy to add externally.
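In code that could be as simple as swapping the hard-coded 3300 for a constant you measure once per core with a multimeter on the 3V3* pin (the 3287.0 below is a made-up example value):

  // Per-core full-scale value, measured once on the 3V3* pin with a
  // multimeter. 3287.0 mV is a made-up example; use your own reading.
  const float ADC_FULL_SCALE_MV = 3287.0;

  float tempmV = ((float) tempRead) / 4095.0 * ADC_FULL_SCALE_MV;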

Some experiments done earlier with breadboards suggest that the relatively hot TI CC3000 on the core can cause a nearby sensor to read high. Physically separating the sensor from the core can help a lot.

The error tolerance in the datasheet (Figure 1) is around +/- 1.8 °C at the temperature you are measuring, so all of your readings are actually within that tolerance if T1 reads about 1.8 °C high and T2/T3 read about 1.x °C low: for example, a true ~16.2 °C room would read 18 °C on a sensor that is +1.8 °C off and 14.5 °C on one that is -1.7 °C off.

That's sound advice as well. I'll give it a try ASAP to see what the voltage is on the board with the varying temperature reading; if it is for some reason lower, I can adjust for it in my code as you said! :smiley:

And if you are using the Spark cloud, you can make the voltage adjustment settable via Spark function (and readable via Spark variable if needed).
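Something along these lines, using the stock cloud API (the "setvref"/"vref" names here are just placeholders):

  // Make the measured full-scale value adjustable over the Spark cloud.
  // The function and variable names are placeholders.
  double adcFullScaleMv = 3300.0;

  int setVref(String mv) {
    adcFullScaleMv = mv.toInt();   // e.g. call with "3287" for a measured 3287 mV
    return (int) adcFullScaleMv;
  }

  void setup() {
    Spark.function("setvref", setVref);
    Spark.variable("vref", &adcFullScaleMv, DOUBLE);
  }

Then adcFullScaleMv can be used in place of the hard-coded 3300 when converting counts to millivolts.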


But remember, you cannot remotely read what 3V3* actually is via software, because there is no independent reference.

I also want to point out that while measuring and tailoring each core's full-scale value lets you compensate, somewhat clumsily, for variations in the voltage regulator, it will not account for dynamic variation due to changing loads (e.g. CC3000 TX events).
