This is a mini-tutorial!
Let’s say you have a sensor farm of Spark cores all publishing their data to the Spark Cloud. You need a way to see all that data in near real-time, all on one dashboard. So how can you do it? There are lots of ways, but here’s a simple one that you can edit and change to meet your needs.
First, a couple of in-action screenshots. Here’s one for the event named “Uptime”:
And here’s another shot for the event named “Temp”:
As you can see, you enter your event name (or hard-code it if you like), and the web page registers every core broadcasting that event, building the table dynamically as the events come in.
You start off with just the header row in the table, and one by one, as events come in, the unique core IDs are added. When new data arrives for a core that is already in the table, only its data and timestamp fields are updated. When an event arrives from a core not yet in the table, a new row is added at the bottom.
No data is stored here permanently; you just get the current values, all on one dashboard.
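Each event arrives from the Spark Cloud as a server-sent event whose data field is a JSON string; the page only uses the coreid, data, and published_at fields. Here’s roughly what one event looks like (all values below are made up for illustration):

// A single server-sent event, as it arrives on the stream (values invented):
//
//   event: Uptime
//   data: {"data":"23:08:41","ttl":"60","published_at":"2015-01-20T22:33:05.201Z","coreid":"53ff6f065067544840551187"}
//
// The page parses that JSON string and reads three fields:
var sample = JSON.parse('{"data":"23:08:41","ttl":"60","published_at":"2015-01-20T22:33:05.201Z","coreid":"53ff6f065067544840551187"}');
console.log(sample.coreid);       // fills the Core ID column
console.log(sample.data);         // fills the Data column
console.log(sample.published_at); // fills the Timestamp column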
Here’s the code. Don’t forget:
- To put your own access token in this file
- To keep this private and not put this on the internet!
<!DOCTYPE HTML>
<html>
<body>
<p>Event name: <input type="text" name="eventNameBox" id="evText"></p>
<br><br>
<table id="dataTable" width="500" border="2">
  <tr>
    <td> Core ID </td>
    <td> Data </td>
    <td> Timestamp </td>
  </tr>
</table>
<br><br>
<button id="connectbutton" onclick="start()">Connect</button>
<script type="text/javascript">
function start() {
    document.getElementById("connectbutton").innerHTML = "Running";

    var eventName = document.getElementById("evText").value;
    var accessToken = "<< access token >>";
    var requestURL = "https://api.spark.io/v1/events/?access_token=" + accessToken;

    // One persistent connection carries the whole event stream.
    var eventSource = new EventSource(requestURL);

    eventSource.addEventListener("open", function (e) {
        console.log("Opened!");
    }, false);

    eventSource.addEventListener("error", function (e) {
        console.log("Errored!");
    }, false);

    // Runs once for every incoming event with the chosen name.
    eventSource.addEventListener(eventName, function (e) {
        var parsedData = JSON.parse(e.data);
        var dt = document.getElementById("dataTable");
        var rows = dt.rows.length;
        var foundIt = false;

        // If this core already has a row, update its data and timestamp.
        for (var i = 0; i < rows; i++) {
            var rowN = dt.rows[i];
            if (!foundIt && rowN.cells[0].innerHTML == parsedData.coreid) {
                foundIt = true;
                rowN.cells[1].innerHTML = parsedData.data;
                rowN.cells[2].innerHTML = parsedData.published_at;
            }
        }

        // First event from this core: add a new row at the bottom.
        if (!foundIt) {
            var newRow = dt.insertRow(rows);
            newRow.insertCell(0).innerHTML = parsedData.coreid;
            newRow.insertCell(1).innerHTML = parsedData.data;
            newRow.insertCell(2).innerHTML = parsedData.published_at;
        }
    }, false);
}
</script>
</body>
</html>
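One small variation if you’d rather not save the token in the file at all: ask for it when the page loads. A minimal sketch, replacing the two hard-coded lines in start() above:

// Instead of hard-coding the token, prompt for it at page load.
// Everything else in start() stays the same.
var accessToken = window.prompt("Enter your Spark access token:");
var requestURL = "https://api.spark.io/v1/events/?access_token=" + accessToken;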
One of the big differences between Spark.variable() and Spark.publish() is that publishing only requires one network connection per event stream back to the Spark Cloud, whereas reading variables means polling: a separate HTTP request for every core, every time you want a fresh value.
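For contrast, here’s roughly what polling Spark.variables from the browser looks like; the device IDs and the variable name ("temp") below are made up for illustration:

// Polling Spark.variables: a separate request per core, every interval.
var accessToken = "<< access token >>";
var coreIds = ["53ff6f065067544840551187", "48ff71065067555013171387"]; // made-up IDs

function pollAll() {
    coreIds.forEach(function (id) {
        var xhr = new XMLHttpRequest();
        xhr.open("GET", "https://api.spark.io/v1/devices/" + id +
                        "/temp?access_token=" + accessToken);
        xhr.onload = function () {
            // The cloud replies with JSON; the value is in the "result" field.
            console.log(id, JSON.parse(xhr.responseText).result);
        };
        xhr.send();
    });
}

setInterval(pollAll, 5000); // N cores means N requests every 5 seconds, forever

With publish and EventSource, the browser instead holds one open connection and the cloud pushes every event down it as it happens.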