We just wanted to let you know that glowfi.sh now supports visualization of output from our machine learning API. So, if you are using our service for anomaly detection or other data processing, and you have set up an account with a third-party service, you can forward glowfi.sh predictions by selecting the Data Visualization tab in your admin dashboard.
We have a complete example of how to send data to glowfi.sh using Particle webhooks, predict anomalies in your data, and post to ThingSpeak from our API on our Particle GitHub repo here.
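For readers unfamiliar with Particle webhooks: a webhook is defined as a JSON document that tells the Particle cloud which published event to match and where to forward it. The sketch below builds such a definition in Python; the event name, target URL, and payload fields are illustrative assumptions, not the actual glowfi.sh API contract (see the GitHub repo for the real integration).

```python
import json

# Hypothetical webhook definition -- event name, endpoint URL, and the
# payload template fields are assumptions for illustration only.
webhook = {
    "event": "sensor_reading",                         # Particle event to match
    "url": "https://api.glowfi.sh/v1/anomaly_detect",  # illustrative endpoint
    "requestType": "POST",
    "json": {
        # Particle fills in {{...}} template variables from the event data
        "temperature": "{{temperature}}",
        "humidity": "{{humidity}}",
    },
    "mydevices": True,                                 # restrict to your devices
}

print(json.dumps(webhook, indent=2))
```

This JSON document is what you would paste into the Particle console (or pass to the Particle CLI) to create the webhook.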
As a follow-up, we would like to gauge which other visualization services the community would like to see glowfi.sh integrate with. Please reply to this post if you have a particular service in mind, such as Ubidots, Librato, etc.
We are working on getting proper how-to docs up for our new Data Visualization functionality, but for now I thought I'd post the names of the fields you will need to have set up in Librato/ThingSpeak for those who just can't wait for the docs. By endpoint, they are:
anomaly_detect: "probability_predictions" and "anomaly_predictions"
predict: "predictions.class_predictions"
signal_extract: the feature names of the data being sent in, plus the corresponding 1-to-1 raw inputs. For example, a processed "temperature" feature would be sent on to Librato/ThingSpeak as "temperature" plus a corresponding "RAW_temperature" field carrying the original value.
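To make the signal_extract pairing concrete, here is a small sketch of how each processed feature is forwarded alongside its RAW_ counterpart. The function name and data layout are mine for illustration, not glowfi.sh code; only the "feature plus RAW_feature" naming comes from the description above.

```python
def forward_fields(raw, processed):
    """Build the field dict sent on to Librato/ThingSpeak for signal_extract:
    each processed feature plus a RAW_<name> field with the original value."""
    fields = {}
    for name, value in processed.items():
        fields[name] = value               # processed feature value
        fields["RAW_" + name] = raw[name]  # original raw input, 1-to-1
    return fields

# Example: a processed "temperature" feature alongside its raw reading
print(forward_fields({"temperature": 72.4}, {"temperature": 0.83}))
# → {'temperature': 0.83, 'RAW_temperature': 72.4}
```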
As a quick example of what the new functionality can do, here is a "predict" integration with Librato presenting class predictions vs. actual values for accelerometer data (242 sensors) being streamed in real time: