Help with MQTT JSON error

I am sending sensor data from an Argon via MQTT to Telegraf, and then on to a PostgreSQL database. It only works when I send a small number of metrics across. With anything more, I begin to get this error:

[inputs.mqtt_consumer]: Error in plugin: unexpected end of JSON input

I have tried different formats for the data being pushed to Telegraf. I suspect I am not setting this up right on my end.

On my Argon, I have tried this:

    char data[512];
    float temp_F = (sample.temperature * 9) / 5 + 32;
    long readtime = Time.now(); // Unix time in seconds; "000" is appended below to get milliseconds

    snprintf(data, sizeof(data),
             "{\"readtime\":%ld000,\"deviceID\":\"%s\",\"location\":\"%s\",\"name\":\"%s\","
             "\"type\":\"%s\",\"temp_F\":%.2f,\"temp_C\":%.2f,\"humid\":%.2f,\"absHumid\":%.2f,"
             "\"voc\":%.2f,\"co2\":%.2f,\"eco2_base\":%u,\"tvoc_base\":%u,"
             "\"pm1\":%.2f,\"pm2\":%.2f,\"pm4\":%.2f,\"pm10\":%.2f,"
             "\"nc0\":%.2f,\"nc1\":%.2f,\"nc2\":%.2f,\"nc4\":%.2f,\"nc10\":%.2f,"
             "\"typical_particle_size\":%.2f}",
             readtime, (const char*)System.deviceID(), deviceLocation, deviceName, deviceType,
             temp_F, sample.temperature, sample.humidity, sample.absoluteHumidity,
             sample.voc, sample.co2, sample.eco2_base, sample.tvoc_base,
             sps30.GetMassPM1(), sps30.GetMassPM2(), sps30.GetMassPM4(), sps30.GetMassPM10(),
             sps30.GetNumPM0(), sps30.GetNumPM1(), sps30.GetNumPM2(), sps30.GetNumPM4(),
             sps30.GetNumPM10(), sps30.GetPartSize());

and this:

    float temp_F = (sample.temperature * 9) / 5 + 32;
    long readtime = Time.now(); // Unix time in seconds

    String data = String::format(
        "{\"readtime\":%ld000,\"deviceID\":\"%s\",\"location\":\"%s\",\"name\":\"%s\","
        "\"type\":\"%s\",\"temp_F\":%.2f,\"temp_C\":%.2f,\"humid\":%.2f,\"voc\":%.2f,"
        "\"co2\":%.2f,\"absHumid\":%.2f,\"eco2_base\":%u,\"tvoc_base\":%u}",
        readtime, (const char*)System.deviceID(), deviceLocation, deviceName, deviceType,
        temp_F, sample.temperature, sample.humidity, sample.voc, sample.co2,
        sample.absoluteHumidity, sample.eco2_base, sample.tvoc_base);

The String option was working until I added the readtime field.

Here is my MQTT consumer setup in Telegraf:

[[inputs.mqtt_consumer]]
	name_override = "airquality"
	servers = ["tcp://xx.xx.xx.xx:1883"]
	qos = 0
	connection_timeout = 30
	topics = [
		"airmonitor/#",
	]
	persistent_session = true
	client_id = "airmonitor-telegraf"
	data_format = "json"
	json_time_key = "readtime"
	json_time_format = "unix_ms"
	json_timezone = "America/New_York"
	tag_keys = [
		"deviceID",
		"location",
		"name",
		"type"
	]

Thanks

Have you checked the return value of snprintf() or data.length() after your assignment?

This should be the first thing to check - my guess would be that you are hitting some sort of length limitation.
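
For reference, a minimal sketch of that check (format string shortened for illustration; per the standard C contract, snprintf() returns the number of characters the complete string would have needed, so a return value >= the buffer size means truncation - Log.warn assumes a SerialLogHandler in the firmware):

    char data[512];
    // snprintf() reports the length the full string WOULD have had,
    // even when it had to truncate to fit the buffer.
    int needed = snprintf(data, sizeof(data), "{\"readtime\":%ld000,\"temp_F\":%.2f}",
                          (long)Time.now(), temp_F);
    if (needed < 0 || needed >= (int)sizeof(data)) {
        Log.warn("JSON truncated: needed %d bytes, buffer holds %u", needed, (unsigned)sizeof(data));
    }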

Hello @ScruffR - the formatted string comes out at 377 characters; I used the rough-and-dirty method you recommended here to count it.

However, as you can see, I have set the buffer size to 512 - and it's still getting truncated. Could this be something on the Telegraf or MQTT side instead?

The limitation could hit anywhere on the path from your local buffer to the remote end of the communication.
Looking at your local buffer size is a good starting point, but since that's not the culprit you may need to investigate further:

  • what does the used MQTT library do with the data
  • what does the network stack do
  • what does the MQTT broker receive
  • what does it pass on

If you are using your own MQTT broker you should be able to look at its logs.
If not, you could set up a local broker and target that for your tests.
You can also send some test messages from another machine (e.g. using MQTTBox) and subscribe to see what arrives there.

If all that doesn’t provide hints to find a solution you may be forced to shorten the message (e.g. abbreviate your key names a bit further).
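
To narrow down where along that path it happens, one option is to log the payload length on the device right before publishing and compare it with what MQTTBox or the broker log shows on the other side (a sketch - "airmonitor/test" is a made-up topic under the subscribed airmonitor/# wildcard):

    size_t len = strlen(data);
    Log.info("publishing %u bytes", (unsigned)len); // compare against the received length
    client.publish("airmonitor/test", data);        // hypothetical topic under airmonitor/#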

The MQTT library connects to the broker on a Raspberry Pi, where Telegraf's mqtt_consumer input picks it up and forwards it to a PostgreSQL database. The config for the MQTT input is shown in my original post.

I am also using MQTTBox, so perhaps I can check what it receives and report back.

@ScruffR

So I found out what it was - thanks to your guidance. The MQTT library I used has a default buffer limit of 255 bytes. I had to change it to 512.

  MQTT client("server_name", 1883, MQTT_DEFAULT_KEEPALIVE, callback, 512); // max 512 bytes
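
A follow-up guard that may be worth keeping (assuming this library's publish() returns a success flag, as PubSubClient-style MQTT libraries typically do; "airmonitor/test" is again a hypothetical topic):

    // Catch payloads that outgrow the enlarged buffer again later.
    if (strlen(data) >= 512) {
        Log.warn("payload is %u bytes - exceeds the 512-byte MQTT buffer", (unsigned)strlen(data));
    } else if (!client.publish("airmonitor/test", data)) {
        Log.warn("MQTT publish failed");
    }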

Data is now coming into Telegraf OK and being sent on to the PostgreSQL DB OK. The only issue now is that the timestamp is off. I am using long readtime = Time.now(); to set the timestamp in the data pushed to the MQTT broker. Then on the Telegraf side I have

json_time_format = "unix_ms"
json_timezone = "America/New_York"

But it is coming in 5 hours ahead.
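
A hedged note on the config side: Time.now() on Particle devices returns UTC epoch seconds, and unix timestamps are absolute (no timezone attached), so the shift is unlikely to be in the payload itself. One simplification to try is letting Telegraf parse plain seconds directly - "unix" is a documented json_time_format value alongside "unix_ms":

json_time_key = "readtime"
json_time_format = "unix"   # raw seconds; the firmware can then drop the "000" suffix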

A trick I use is to always send the time in GMT/UTC. Then, in the visualisation, have a variable that defines the time zone offset and a function that does the math to correct it based on the viewer rather than the source.

Hi @shanevanj - can you elaborate? You mean you are doing the timezone correction in Losant (assuming that's what you are using)? How does that handle DST too? The timestamp when the data is published to the Particle cloud is in the correct timezone - I was wondering, could I send the published_at timestamp with the MQTT payload? Is that possible?

I even came across a time library you are also using, but I haven't been able to implement it yet. It seems a bit much just to send the right timestamp for events.

Yes, I went through all of that and ended up just getting UTC from the Particle cloud, then sending all telemetry in UTC. In the front end (I use Thingsboard, not Losant) I set an attribute (variable) for my customer's local time zone as an offset in seconds (easier to use in the JS scripting in the rules engine - I think Losant will work differently though).

This attribute name and value is published to my devices on startup (they query shared attributes like this and get the current value), and the value is saved on the device.

[image: device log showing the shared attribute "stz" received on startup]

You will see the attribute "stz" above, received here on startup. This offset is not used for the telemetry timestamp but for the device's LCD display, so it shows the correct time for the time zone at the installed location. This is a crude but effective way to know whether the stz value has been set in the backend, since the customer will complain if the time is incorrect 🙂

Then, in the visualisation, I have a node that computes the offset (in one of my products it computes it based on the location of the browser):

var tzO = 0;

// tzOffset is injected from the shared attribute "stz" (see below)
if (!isNaN(metadata.tzOffset)) {
    tzO = Number(metadata.tzOffset);
}

// Pick the timestamp that matches the message type, shifted into local time
var a;
if (msg.Type === 'CONNECT_EVENT') {
    a = new Date(Number(msg.lastConnectTime) + tzO);
} else if (msg.Type === 'DISCONNECT_EVENT') {
    a = new Date(Number(msg.lastDisconnectTime) + tzO);
} else if (msg.Type === 'POST_TELEMETRY_REQUEST') {
    a = new Date(Number(metadata.ts) + tzO);
} else {
    a = new Date(Date.now() + tzO);
}
var months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
    'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'
];

var year = a.getFullYear();
var month = months[a.getMonth()];
var date = a.getDate();
var hour = a.getHours() < 10 ? '0' + a.getHours() : a.getHours();
var min = a.getMinutes() < 10 ? '0' + a.getMinutes() : a.getMinutes();
var sec = a.getSeconds() < 10 ? '0' + a.getSeconds() : a.getSeconds();
var time = date + ' ' + month + ' ' + year + ' ' + hour +
    ':' + min + ':' + sec;


metadata.localTime = time;

metadata.tzOffset is pulled from the shared attribute stz and added to the telemetry message's UTC timestamp, so it is now in the (customer's) local timezone:

} else if (msg.Type === 'POST_TELEMETRY_REQUEST') {
    var a = new Date(Number(metadata.ts) + tzO);

This is then saved into the telemetry database and is available to the visualisation engine etc…

@shanevanj - very interesting, thanks for sharing! This might be a bit more than I was hoping to do, but I like how you are dynamically setting the local display time.

My data is going into a PostgreSQL DB. I found out Postgres stores the data in UTC anyway, but if you send it as a timezone-aware timestamp (timestamptz) and the TimeZone parameter is set in the config, it is supposed to serve up query results in the configured timezone. I haven't been able to get it to work yet, but it seems a less painful way to show my visualization data in the correct timezone. Here is the PostgreSQL documentation snippet:

… All timezone-aware dates and times are stored internally in UTC. They are converted to local time in the zone specified by the TimeZone configuration parameter before being displayed to the client. … The TimeZone configuration parameter can be set in the file postgresql.conf, or in any of the other standard ways described in Chapter 19. …
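
For anyone else trying this, a minimal sketch of that setting (assuming the column holding the readings is of type timestamptz - the conversion on read only applies to timezone-aware columns):

# in postgresql.conf - controls how timestamptz values are rendered to clients
timezone = 'America/New_York'

The same parameter can also be set per session with SET TIME ZONE 'America/New_York'; which makes it easy to test before touching the server config.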

My initial work on this issue was with MySQL, and because my project at the time needed to be universally accessible, I rolled my own timezone solution (as above). I now use PostgreSQL as well but have ignored the timezone features, since my (perhaps convoluted) method works.