ds18b20 zero and very high readings


After working perfectly, my greenhouse monitoring system has started acting crazy. One of the DS18B20 sensors (on a single bus) has started to read either zero or way too high (60 C instead of 30 C). The weird part is that, apart from being very off, the reading still seems to vary like before. Once in a while it jumps back to normal.

I’m using the OneWire library and, to my knowledge, I’m not doing anything especially weird… Has anyone seen something like this before who could give me a clue where to start?

Here’s my code for reading the temperature:

	byte i;
	byte data[12];
	float celsius;

	// this device has temp so let's read it

	_onewire.reset();               // first clear the 1-wire bus
	_onewire.select(_addr);          // now select the device we just found
	// ds.write(0x44, 1);     // tell it to start a conversion, with parasite power on at the end
	_onewire.write(0x44, 0);        // or start conversion in powered mode (bus finishes low)

	// just wait a second while the conversion takes place
	// different chips have different conversion times, check the specs, 1 sec is worst case + 250ms
	// you could also communicate with other devices if you like but you would need
	// to already know their address to select them.

	delay(1000);     // maybe 750ms is enough, maybe not, wait 1 sec for conversion

	// we might do a ds.depower() (parasite) here, but the reset will take care of it.

	// first make sure current values are in the scratch pad

	_onewire.write(0xB8,0);         // Recall Memory 0
	_onewire.write(0x00,0);         // Recall Memory 0

	// now read the scratch pad

	_onewire.write(0xBE,0);         // Read Scratchpad

	// transfer and print the values

	for (i = 0; i < 9; i++) {           // we need 9 bytes
		data[i] = _onewire.read();
	}

	if (OneWire::crc8(data, 8) != data[8])
		return 10000;                   // signal a failed CRC to the caller
	// Convert the data to actual temperature
	// because the result is a 16 bit signed integer, it should
	// be stored to an "int16_t" type, which is always 16 bits
	// even when compiled on a 32 bit processor.
	int16_t raw = (data[1] << 8) | data[0];

	byte cfg = (data[4] & 0x60);

	// at lower res, the low bits are undefined, so let's zero them
	if (cfg == 0x00) raw = raw & ~7;  // 9 bit resolution, 93.75 ms
	if (cfg == 0x20) raw = raw & ~3; // 10 bit res, 187.5 ms
	if (cfg == 0x40) raw = raw & ~1; // 11 bit res, 375 ms
	// default is 12 bit resolution, 750 ms conversion time
	celsius = (float)raw * 0.0625;
	return celsius;

Hi @albin

I reformatted your code to be more readable.

What does the upper-level code do with the early return value of 10000? That doesn’t seem compatible with the float return type you use later when you return celsius;.


In the calling routine I check for high return values, which indicate a failed CRC.

Shouldn’t 10000 (int) be cast to a float automagically?

After depowering the Photon and the sensor for a few minutes it went back to normal. But now, after 12 hours, the same error is back.