I’m trying to brainstorm the most efficient layout for my Particle project. I’ve got a Spark Core that reads sound intensity every minute. It also has a switch to turn the measurement on or off, which starts a new measurement session (a session typically lasts around 18 hours).
So I would like to store an integer value every minute on a server, so that a user can visit the server in a web browser, see the different sessions, and select one to visualise that individual session as a graph.
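To get a feel for the scale involved, here is a rough back-of-the-envelope calculation (my own numbers, assuming one 32-bit integer per minute over an ~18 hour session):

```javascript
// Rough data-volume estimate: one integer reading per minute,
// over a session of roughly 18 hours.
const readingsPerSession = 18 * 60;   // 1080 readings per session
const bytesPerReading = 4;            // one 32-bit integer
const bytesPerSession = readingsPerSession * bytesPerReading;

console.log(readingsPerSession); // 1080
console.log(bytesPerSession);    // 4320 bytes, i.e. ~4 KB per session
```

So even with many users and many Cores, the raw volume is tiny; the choice between the options below is really about communication model and convenience, not throughput.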
In the future I plan to let multiple users use multiple sparks to acquire and visualise the data.
I already came across a few solutions on this forum, but I’m wondering whether I’ve covered all the options:
1. Put the Spark data in a Google Doc. Not really an option here, since it doesn’t scale to multiple users/multiple Cores.
2. Push the Spark data directly into a MySQL database on the server. This sounds like a good idea, although it’s only one-way communication from the Spark to the server.
3. Set up a Node.js server with MongoDB to query the Spark. This also sounds like a good idea and facilitates bi-directional communication. However, I’m wondering whether this is as stable as the MySQL option?
4. Maybe, instead of MongoDB in the last option, use InfluxDB, as described here (http://influxdb.com/).
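For the Node.js options, one detail any of the stores would need is how readings get grouped into sessions. A minimal in-memory sketch of the idea (all names here — `createStore`, `recordReading`, `getSessions`, `gapMs` — are hypothetical, and a real server would persist to MySQL/MongoDB/InfluxDB instead of an object):

```javascript
// Sketch of server-side session bucketing: a reading that arrives more
// than `gapMs` after the previous one is assumed to start a new session
// (i.e. the Core's switch was off in between).
function createStore(gapMs) {
  const sessions = {}; // deviceId -> array of sessions, each an array of {t, value}

  function recordReading(deviceId, t, value) {
    const list = sessions[deviceId] || (sessions[deviceId] = []);
    const current = list[list.length - 1];
    const last = current && current[current.length - 1];
    if (!last || t - last.t > gapMs) {
      list.push([{ t, value }]); // gap too large: start a new session
    } else {
      current.push({ t, value }); // same session: append the reading
    }
  }

  function getSessions(deviceId) {
    return sessions[deviceId] || [];
  }

  return { recordReading, getSessions };
}

// Example: readings a minute apart stay in one session; a 10-minute
// gap (with a 5-minute threshold) starts a second one.
const store = createStore(5 * 60 * 1000);
store.recordReading('core1', 0, 42);
store.recordReading('core1', 60 * 1000, 44);
store.recordReading('core1', 10 * 60 * 1000, 40);
console.log(store.getSessions('core1').length); // 2
```

The same grouping could of course be done with an explicit on/off event from the switch instead of a time gap; this just shows that the session logic is small either way.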
Any more thoughts on this?