I am trying to get a Spark-powered bedside monitor running (it measures heart rate, blood pressure and temperature of patients in intensive care). The Spark will take all the measurements, after which the data will be sent out via SSE (Spark.publish).
That data needs to be logged (per the project description) to a file (preferably Excel) together with date and time (a timestamp).
The logged data will then be used to plot graphs from C# code (real-time graphs that get updated constantly).
The question is: how do I log this data to an Excel sheet (or something similar)?
Yes, CSV would work as long as all values can be taken out separately (in other words, I can access them via some code in C#). The timestamp is also important, but I guess that won’t be much of a problem if the other values can be logged as well.
In reply to bko:
That is pretty much what I am looking for; the downside is that it can’t log frequently, and frequent logging is exactly what I need (approx. 1-2 times per second).
I don’t have any direct C# experience, but I’m pretty sure it can process CSV data. It’s a very simple format, so I don’t see why it couldn’t (especially if you have Excel installed on the same machine and can access the Excel functionality via C#).
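For what it’s worth, reading a CSV log back in C# really is only a few lines. Here is an untested sketch (log.csv and its timestamp,value layout are just assumptions for illustration):

    using System;
    using System.Globalization;
    using System.IO;

    class ReadLog
    {
        static void Main()
        {
            // Assumed layout per line: 2014-03-12T04:58:09.583Z,69.125000
            foreach (var line in File.ReadLines("log.csv"))
            {
                var fields = line.Split(',');
                DateTime timestamp = DateTime.Parse(fields[0],
                    CultureInfo.InvariantCulture, DateTimeStyles.RoundtripKind);
                double value = double.Parse(fields[1], CultureInfo.InvariantCulture);
                Console.WriteLine("{0:o}  {1}", timestamp, value);
            }
        }
    }

From there it should be straightforward to feed the values into whatever charting control you end up using.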
I believe you automagically get the timestamp in your published data. A sample line of data from the spark-hq/Temperature feed looks like this:

data: {"data":"69.125000","ttl":"60","published_at":"2014-03-12T04:58:09.583Z","coreid":"53ff6a065067544826350587"}

However, if you aren’t satisfied with that timestamp, you could use your “listener” client to log its own timestamp when the messages are received.
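If you go that route, C#’s round-trip (“o”) format gives you the same ISO 8601 style as published_at; a trivial sketch:

    using System;

    class ClientTimestamp
    {
        static void Main()
        {
            // "o" is the round-trip (ISO 8601) format, e.g. 2014-03-12T04:58:09.5830000Z
            string stamp = DateTime.UtcNow.ToString("o");
            Console.WriteLine(stamp);
        }
    }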
As @wgbartley points out, there is rate limiting on Spark.publish(). Maybe you should consider your own UDP protocol instead?
People seem to like node.js for the type of job you want to do, converting Spark.publish() events into a C#-readable format.
But if you just want to get Spark.publish() data into a file, which you could massage into CSV later, and you have Linux, Mac, or cygwin on a PC, you can do something like the following.
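This is only a sketch; it assumes the Spark cloud’s public /v1/events SSE endpoint (there are also per-device event endpoints if you only want to see your own cores):

    curl --no-buffer "https://api.spark.io/v1/events/?access_token={your access token}"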
The --no-buffer flag is key, since by default curl buffers its output. Replace {your access token} with your hex access token.
Data looks like this:
event: State
data: {"data":"Current Color Command: auto","ttl":"60","published_at":"2014-03-18T21:43:48.108Z","coreid":"50ff6a065067545632160587"}
event: Uptime
data: {"data":"7:18:48","ttl":"60","published_at":"2014-03-18T21:43:51.862Z","coreid":"50ff6e065067545641560387"}
event: State
data: {"data":"Current Color Command: auto","ttl":"60","published_at":"2014-03-18T21:43:53.126Z","coreid":"50ff6a065067545632160587"}
You could just as easily pipe this into your own program, or into a Perl script that turns it into a CSV file.
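Since you are going to be in C# anyway, the “your own program” route could look roughly like this (untested sketch; it uses naive string matching rather than a real JSON parser, and log.csv is just a made-up filename):

    using System;
    using System.IO;

    class SseToCsv
    {
        static void Main()
        {
            using (var csv = File.AppendText("log.csv"))
            {
                csv.AutoFlush = true; // write through immediately for near-real-time logging

                string line;
                while ((line = Console.ReadLine()) != null)
                {
                    // Only the payload lines are interesting, e.g.
                    // data: {"data":"69.125000","ttl":"60","published_at":"2014-03-12T04:58:09.583Z",...}
                    if (!line.StartsWith("data:")) continue;

                    var value = Extract(line, "\"data\":\"");
                    var timestamp = Extract(line, "\"published_at\":\"");
                    if (value != null && timestamp != null)
                        csv.WriteLine(timestamp + "," + value);
                }
            }
        }

        // Naive extraction of the quoted value that follows the given key
        static string Extract(string line, string key)
        {
            int start = line.IndexOf(key);
            if (start < 0) return null;
            start += key.Length;
            int end = line.IndexOf('"', start);
            return end < 0 ? null : line.Substring(start, end - start);
        }
    }

Pipe the curl command above into it and log.csv ends up with the timestamp,value rows you wanted.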
I certainly hope you are working on a proof-of-concept prototype design, since the Spark Core is great but maybe not quite ready for medical applications.
There’s actually a program out there that can directly convert json to csv (json2csv) directly in the command line. However, your mileage may vary on a Windows platform (I’m assuming that since you’re using C#). However, with something like Cygwin installed, you might be able to run it through a series of pipes to extract the data you need (curl → grep or awk → disk file → your program).
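For example, something along these lines (untested; it assumes the same /v1/events endpoint as above, GNU grep’s --line-buffered option under Cygwin, and a made-up events.log filename) would keep just the payload lines in a file your program can read:

    curl --no-buffer "https://api.spark.io/v1/events/?access_token={your access token}" | grep --line-buffered "^data:" >> events.log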