UDP logging to InfluxDB with Grafana

As part of my ongoing multi-zone thermostat project, I implemented UDP broadcasting from the Spark to a local Linux server for real-time logging, and I thought people might be interested in what was involved. It was actually fairly straightforward once all was said and done. The first step was the UDP broadcast on the Spark; there are some ongoing issues with UDP that are documented elsewhere on the forum, but I managed to get it working via the tip of calling stop() after every packet. Here's the relevant Spark code, which gets called every second:

void udpBroadcast() {
    char tempString[128];
    int sum = 0;
    int temp;
    Udp.begin(3141);  // Open the socket fresh on every call; paired with stop() below, per the workaround
    Udp.beginPacket(IPAddress(10, 0, 0, 255), 3141);  // Broadcast to the local subnet
    for (uint8_t z = 0; z < NUM_OF_ZONES; z++) {
        // Write the zone data (which is set elsewhere and exposed via the Cloud.variable)
        sprintf(tempString, "%d:%s::%s,,", z, zoneString[z], actionPID[z]);
        temp = Udp.write(tempString);
        if (temp < 1) {
            // Write failed; close out the packet and socket, try again next cycle
            Udp.endPacket();
            Udp.stop();
            return;
        }
        sum += temp;
    }
    // Write the current status of the relays, and the boiler Temp
    sprintf(tempString, "%i|%i|%i|%f", digitalRead(BOILER_PIN), digitalRead(HOUSE_PUMP_PIN), digitalRead(FLOOR_PUMP_PIN), boilerTemp);

    temp = Udp.write(tempString);

    if (temp < 1) {
        Udp.endPacket();
        Udp.stop();
        return;
    }
    sum += temp;

    sprintf(tempString, "\\\\%i\n", sum);  // very basic end of string "checksum."  Just the length of the printed characters
    Udp.write(tempString);
    Udp.endPacket();
    Udp.stop();
}
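
For reference, here's what a single packet looks like on the wire for a hypothetical two-zone system (all values invented):

0:68.5|70.0|0.42::1.20,0.35,-0.10,,1:66.0|68.0|1.00::2.10,0.50,0.00,,1|0|1|140.250000\\85

Each zone segment is zoneIndex:currentTemp|setTemp|percentOn::proportional,integral,derivative, terminated by ",,". The last payload segment carries the three relay states and the boiler temperature, and the trailing \\85 is the length "checksum" (85 payload characters in this example), followed by the newline that marks end of message.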

My home fileserver runs Ubuntu 14.04, so it was pretty straightforward to [install InfluxDB][1]. The only hitch I encountered was that I already had an HTTP server running on one of InfluxDB's default ports, so some config editing was necessary. I also installed [Grafana][2], which acts as a pretty front end for graphing the data log. I then wrote a quick Python script to monitor the UDP port, parse the string the Spark outputs, and write it to InfluxDB:

#!/usr/bin/python
import socket, json, urllib2

UDP_PORT = 3141  # must match the port the Spark broadcasts to

sock = socket.socket(socket.AF_INET, # Internet
                     socket.SOCK_DGRAM) # UDP
sock.bind(("", UDP_PORT))

tempString = ""
while True:
	data, addr = sock.recvfrom(1024)
	tempString += data
	if "\n" == tempString[-1]:
		# Full message received; validate the trailing length "checksum"
		jsondat = []
		words = tempString.split("\\")
		if len(words[0]) == int(words[-1]):
			segs = words[0].split(',,')

			# All but the last segment are per-zone readings:
			#   zone:curTemp|setTemp|percentOn::proportional,integral,derivative
			for x in range(0, len(segs) - 1):
				status, pid = segs[x].split('::')
				_, status = status.split(':')
				curTemp, setTemp, percOn = status.split('|')
				propAct, intAct, derAct = pid.split(',')
				jsondat.append({"name": "Zone" + str(x + 1),
						"columns": ["current_temp", "set_temp", "percent_on", "proportional", "integral", "derivative"],
						"points": [[float(curTemp), float(setTemp), float(percOn), float(propAct), float(intAct), float(derAct)]]})

			# The last segment holds the relay states and the boiler temperature
			boilerOn, houseOn, floorOn, boilerTemp = segs[-1].split('|')
			jsondat.append({"name": "RelayStatus",
					"columns": ["boiler_relay", "house_relay", "floor_relay", "boiler_temp"],
					"points": [[int(boilerOn), int(houseOn), int(floorOn), float(boilerTemp)]]})

			# POST the series to InfluxDB's 0.x HTTP API
			req = urllib2.Request('http://localhost:8086/db/Thermostat/series?u=DB_USER&p=DB_PASSWORD')
			req.add_header('Content-Type', 'application/json')
			response = urllib2.urlopen(req, json.dumps(jsondat))
			print response.getcode()  # 200 on success
		tempString = ""

I haven’t set it up yet, but it would probably make sense to have an Upstart or init.d script launch this at boot and keep it running.
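
For example, a minimal Upstart job could look something like this (a sketch; Ubuntu 14.04 uses Upstart by default, and the conf name and script path here are made up):

# /etc/init/udp-influx-logger.conf (hypothetical name and path)
description "UDP to InfluxDB logging bridge"

start on runlevel [2345]
stop on runlevel [016]
respawn

exec /usr/bin/python /home/me/udp_influx_logger.py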

After all this, here’s the pretty result! I’m rather happy with it, especially the resolution that it affords compared to the cloud calls.

[1]: http://influxdb.com/docs/v0.7/introduction/installation.html
[2]: http://grafana.org/download/


Super cool! Hope to hear more from you in the future. :smiley:

I’m a total newbie when it comes to TCP/UDP/sockets/networking, so tutorials like this are really interesting for me :wink:

I’m actually working on a similar project but using StatsD + Graphite/Carbon. I finally got my server infrastructure set up the way I like it, so now I’m going to write three libraries for publishing from the Core to the server: StatsD TCP, StatsD UDP, and Carbon (TCP). I chose Graphite over InfluxDB for its built-in function API, and because it seemed to handle the volume of metrics I throw at it much more gracefully than InfluxDB (millions of data points in a matter of 24-48 hours).
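
For reference, the StatsD UDP wire format is very simple; here's a minimal Python sketch of a single gauge write (the metric name is made up, and 8125 is StatsD's default port):

import socket

# StatsD's line format is name:value|type; "g" means gauge
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto("thermostat.zone1.current_temp:68.5|g", ("statsd-host", 8125))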

You might want to look into Docker for setting up some of those services. It helps keep things a little more “contained” so you don’t muck up your file server too much. When I finish up with the libraries, I’ll try to do a project write-up that includes using Docker to set up the services mentioned and how to push to them using the libraries in the Core firmware.
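
For instance, getting InfluxDB running in a container can be a one-liner (a sketch, assuming the official influxdb image on Docker Hub; 8086 is the HTTP API port):

docker run -d --name influxdb -p 8083:8083 -p 8086:8086 influxdb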
