Is there an easy way to save console logs and the data from connection monitoring?

I often use the combination of the Particle console and @rickkas7’s connection monitoring Node.js application to monitor the health of my devices. The problem is that these connections time out, my laptop sleeps, or I am traveling, and I miss important events.

I am sure someone has solved this, but I did not see anything obvious when searching the forums. Is there an easy way to continuously monitor these data sources and save daily logs as text files?

For example, could I provision a couple of small AWS instances to stay connected and have a script run daily to save the logs and refresh the connection? Has anyone done something like this?

Thanks, Chip


I currently have a webhook to Google Docs that records data to a spreadsheet. Each event becomes a new line on the spreadsheet. The communication requires no other local device(s), and I can access the data via any computer/phone with an internet connection and a browser.
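For anyone wanting to replicate this, a Particle webhook along these lines forwards each event to a Google Apps Script web app that appends a row to the spreadsheet. The URL is a placeholder and the event name is an assumption for illustration; the `{{{...}}}` fields are Particle’s standard webhook template variables.

```json
{
  "event": "sensor-reading",
  "url": "https://script.google.com/macros/s/YOUR_SCRIPT_ID/exec",
  "requestType": "POST",
  "json": {
    "event": "{{{PARTICLE_EVENT_NAME}}}",
    "data": "{{{PARTICLE_EVENT_VALUE}}}",
    "published_at": "{{{PARTICLE_PUBLISHED_AT}}}",
    "coreid": "{{{PARTICLE_DEVICE_ID}}}"
  },
  "mydevices": true
}
```

The Apps Script side just reads the POSTed JSON in its `doPost(e)` handler and calls `appendRow` on the target sheet.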

I also have a USB hard-drive attached to an RPi3 B+ that logs more verbose data. In that case, I send MQTT messages between the Particle device and the RPi. This approach does not require internet connectivity.

For the future, I’m anxious to get some desk time with the Particle Rules Engine. It looks really exciting. Perhaps someone else can provide more information about it.


Until access is opened up to the Rules Engine, you could run your own on the Raspberry Pi with Node-RED and the public plugin :slight_smile: It may not be as convenient, but it does work.


I have been reading about the Rules Engine and that may be my long term solution. Do you have any information about how it could help me with this issue?

Thanks, Chip

My solution is a little more complex, but should scale well. I use the MQTT-TLS library to write events directly to AWS IoT Core. The rules in AWS invoke a Lambda function that parses each message (sensor reading, alert, diagnostics, etc.) and saves the results to DynamoDB. I do not have any EC2 instances, so I don’t have to worry about load balancers, server maintenance, etc. It’s been working well for me.

In my first field trial I used a local SD card to store a log file, but in practice that wasn’t very effective, since I have to physically access the devices (500 miles away) to retrieve the logs. My second prototype will send important troubleshooting messages to AWS via MQTT. I already upload device health data regularly, but will add significant errors, disconnects, reboots, etc., in messages that are sent only upon an error condition or reboot.

For my daily monitoring of the devices I have in the field, I wrote a simple Python script that reads directly from DynamoDB and displays the most relevant fields.
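The Lambda side of such a pipeline can be sketched roughly as below. The field names, payload shape, and table name (`DeviceEvents`) are illustrative assumptions, not the poster’s actual schema:

```javascript
'use strict';

// Normalize an incoming MQTT message into a DynamoDB item, filling in
// defaults for the optional fields.
function parseDeviceMessage(raw) {
  const msg = typeof raw === 'string' ? JSON.parse(raw) : raw;
  return {
    deviceId: msg.deviceId,
    type: msg.type || 'reading', // reading | alert | diagnostic | reboot
    ts: msg.ts || Date.now(),
    data: msg.data || {},
  };
}

// Lambda entry point (wire up as exports.handler in the real function).
// The aws-sdk is provided by the Lambda runtime; not invoked here.
async function handler(event) {
  const AWS = require('aws-sdk');
  const db = new AWS.DynamoDB.DocumentClient();
  await db.put({ TableName: 'DeviceEvents', Item: parseDeviceMessage(event) }).promise();
  return { ok: true };
}
```

An AWS IoT rule such as `SELECT * FROM 'device/+/events'` would then route each published message into this handler.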


Monitoring the event stream (SSE) from either your own server (even a Raspberry Pi) or a cloud service is the most common method.

There is an upcoming fix in particle-api-js to reconnect properly after a network or server interruption that will also help.

I’ve been monitoring the event stream from a Java Apache Tomcat server and storing the data in a MySQL database for years and it has worked well.