I wanted to put together a small tutorial for people who don't have in-depth knowledge of how the whole API thing works. Being in that category myself, I thought it might be nice to share a bit of the knowledge I gained while learning the basics.
This tutorial will show how to set up the Core with an irradiance sensor, filter the output, and retrieve and store the result via a batch file.
Tools you will need:
- a text editor, like Sublime Text
- the cURL Command Line tool from curl.haxx.se
- I’d recommend using their download wizard to find the right version for you
- the JSON Command Line Processor jq
Step 1: Build the Circuit
As far as hardware bills of materials go, this one is pretty short. The schematic and the final result can be seen in the image below.
You need:
- 3 small wires
- TEPT5700 ambient light sensor ($0.70 at Digikey)
- 10k Ohm resistor
- Your core and breadboard + power source
Wire it all up… and you're done with the hardware!
Step 2: Write the Spark Core code
This is very similar to the example provided by the Spark Core team that allows you to pull temperature data via the API. I’m going to post the code and explain a couple of minor differences.
const int irradNumReadings = 30;  // size of the moving-average window
int irradiance = 0;               // averaged value exposed via the API
int irradianceArr[30];            // circular buffer of raw readings
int irradArrInd = 0;              // index of the slot holding the oldest reading
int irradTotal = 0;               // running total of the buffer contents

void setup()
{
    // expose "irradiance" as a Spark variable so it can be read via the API
    Spark.variable("irradiance", &irradiance, INT);
    pinMode(A7, INPUT);
}

void loop() {
    irradTotal = irradTotal - irradianceArr[irradArrInd]; // subtract the oldest reading
    irradianceArr[irradArrInd] = analogRead(A7);          // read from the sensor
    irradTotal = irradTotal + irradianceArr[irradArrInd]; // add the reading to the total
    irradArrInd = irradArrInd + 1;                        // advance to the next position in the array
    if (irradArrInd >= irradNumReadings)                  // if we're at the end of the array,
        irradArrInd = 0;                                  //   ...wrap around to the beginning

    // calculate the average:
    irradiance = irradTotal / irradNumReadings;
}
The initial five declarations define the size of the averaging window, the working variables, and the value we eventually want to pull from the device.
The setup() function just tells the Spark Core that the irradiance variable should be exposed in a way that allows us to obtain the data via a cURL request initiated by our batch file.
The primary difference between the code here and the temperature measurement example is that I've used a moving average for the output. This helps decrease the variance of the readings. Every time I record a new data point, I add it into an array (defined in this example to hold 30 data points) and drop the oldest record.
I wasn't able to find a built-in shift-array function, so instead of writing my own and burning processing power shuffling elements, I just kept track of the index that needs to be replaced next: increment it by 1 each time a data point is recorded, and when you hit the end of the array, wrap right back around to the first index (in other words, a circular buffer).
Instead of summing every index in the array on each pass, I kept a running total and just added the new reading and subtracted the old one. That should keep the number of operations to a minimum…
The last step is to divide the running total by the number of entries in the array, which gives the average irradiance over the last 30 data points. Play with the numbers a bit for your own sensor; you may find that you need more or fewer, depending on how noisy your components are.
Step 3: Prepare the Batch Programming Environment
I decided to program in the batch file environment because it was simple and didn't require me to install a compiler or any server software. These files can also be called from other tools (e.g., MATLAB) to perform tasks without figuring out how to connect everything to a database hosted on my own server… or someone else's.
There are two supplemental items you need: the cURL command line interface and a JSON command line parser. These are both tiny programs (you only need the *.exe files) that you can drop into the same folder you'll be writing the batch file in. On my system, that's 'C:\SparkCore'. Once you do that, you're ready to program.
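Before writing any scripts, it's worth a quick sanity check that both executables respond. From a Command Prompt opened in that folder, run:

curl --version
jq --version

If each prints a version string, everything is in place.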
Step 4: Write the Batch Programs
I broke the program into two components: one that loops (forever) and one that performs the request for data. We'll start with the data request program:
@ECHO OFF
REM function call: PullCoreData VARIABLE_NAME
REM This file will access the Spark Core and download the requested variable.
REM In this case, the variable is defined by the function call.
REM The device ID and the access_token have been hard-coded.
REM The following code retrieves the data, stores the result in a temporary file,
REM reads the file into a parameter, and finally deletes the temporary file.
REM -k tells cURL to skip the HTTPS certificate check
REM -s runs cURL in silent mode
REM jq is a JSON parser that extracts the desired data from a correctly formatted response
curl -k -s https://api.spark.io/v1/devices/DEVICE_ID/%1?access_token=ACCESS_CODE | jq .TEMPORARY_allTypes.uint32 > tmpFile
set /p TestOutput= < tmpFile
del tmpFile
echo %date:~-10,10% %time%,%TestOutput% >> testOutputFile.txt
ECHO %1 = %TestOutput%
Any line that starts with REM is a remark (think comment): the batch file version of //.
The first command that is executed is @ECHO OFF. By default a batch file will display every line that is executed. This command turns that ‘feature’ off.
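If you're curious what that looks like, make a throwaway one-line file (a hypothetical demo.bat) without that first line:

ECHO Hello

Running it prints the command itself ('C:\SparkCore>ECHO Hello') followed by its output ('Hello'); with @ECHO OFF at the top, you'd only see 'Hello'.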
The next command is the bulk of the program: it uses the cURL command line interface to request the data from the Spark Core.
The first thing to note is that I used a %1 where the variable name typically is. This is how you define parameters that are passed in via the command line. We would pass the name of the variable into the program by calling it in the following manner:
PullCoreData irradiance
Wherever we see a %1, the batch file sees ‘irradiance.’
There are two options that I have used. There was an issue with a digital signature somewhere, and -k allowed cURL to ignore it and perform the request anyway. Not sure what caused it, but this was a workaround. The -s runs cURL without the progress bar that shows up when the program is called. There are a lot more options available, so feel free to go through the documentation.
You could store the entire response in a file by executing the command:
curl -k -s https://api.spark.io/v1/devices/DEVICE_ID/%1?access_token=ACCESS_CODE > tmpFile
If you open the generated file, 'tmpFile', you'll see something like:
{
    "cmd": "VarReturn",
    "name": "irradiance",
    "TEMPORARY_allTypes": {
        "string": "\u0000\u0000\u0003O",
        "uint32": 847,
        "number": 847,
        "double": null,
        "raw": "\u0000\u0000\u0003O"
    },
    "result": "\u0000\u0000\u0003O",
    "coreInfo": {
        "last_app": "foo",
        "last_heard": "2013-12-31T18:45:04.302Z",
        "connected": false,
        "deviceID": "YOUR_DEVICE_ID_HERE"
    }
}
We want to continue processing the response using the JSON parser. This is where the remainder of that line comes in:
curl<...>CODE | jq .TEMPORARY_allTypes.uint32 > tmpFile
This takes the output of the cURL command and pipes it into the jq JSON parser. We only want the value associated with the uint32 data type, and this filter tells jq to extract just that.
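A handy way to experiment with jq filters before wiring them into the batch file is to run jq directly over the response you saved earlier (this assumes you kept the 'tmpFile' generated by the command above):

jq .TEMPORARY_allTypes.uint32 tmpFile

For the sample response shown above, this prints 847.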
There is no simple way to assign the output of a command to a variable, so a quick workaround is to save it into a file and then import it into a variable. The '> tmpFile' at the end tells the batch file to write the result to a file called 'tmpFile'. The next line, 'set /p TestOutput= < tmpFile', then pulls it back into a variable called 'TestOutput'. We can then delete the temporary file.
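As an aside, a 'for /f' loop can capture command output into a variable without the temporary file. I stuck with the tmpFile approach above, but a sketch of the alternative looks like this (note the pipe must be escaped with ^ inside the for command):

for /f "delims=" %%i in ('curl -k -s https://api.spark.io/v1/devices/DEVICE_ID/%1?access_token=ACCESS_CODE ^| jq .TEMPORARY_allTypes.uint32') do set TestOutput=%%i

The "delims=" option keeps the whole output line intact instead of splitting it on whitespace.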
The last two lines write the value to an output log and print it to the command window. %date% and %time% are internal system variables; we can use them to timestamp our results. The ~-10,10 modifier takes a subset of the character array output by %date%, starting 10 positions from the end with a length of 10 characters. If you don't mind having a three-letter day prefix (Mon, Tue, Wed, etc.), you can leave that out.
echo %date:~-10,10% %time%,%TestOutput% >> testOutputFile.txt
ECHO %1 = %TestOutput%
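To see what the modifier is doing, you can compare the raw and trimmed values right at the prompt (the exact output depends on your system's regional date format; mine looks like this):

echo %date%
Mon 12/30/2013
echo %date:~-10,10%
12/30/2013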
If you want to run the batch file, save the file as ‘PullCoreData.bat’, open the Command Prompt, navigate to the folder you saved the file in and type in:
>PullCoreData irradiance
You should see a response that looks like:
irradiance = 820
If you go open the ‘testOutputFile.txt’ file, you should see something like:
12/30/2013 23:05:58.34,820
Now we can put a wrapper around this and continuously poll the Spark Core for its current irradiance. The wrapper program is pretty simple:
@ECHO OFF
CLS
for /L %%n in (1,0,10) do (
    call PullCoreData irradiance
)
:stop
call :__stop 2>nul
:__stop
REM Loop forever. Stop the loop by closing the window or by pressing CTRL+C
The first line turns off command echoing, and CLS clears the Command Prompt screen.
We then create a loop that runs forever: it says to start at 1 and increment by 0 until you reach 10, and since the step is 0, the counter never gets there. Note that PullCoreData is invoked with 'call'; without it, the batch file would transfer control to PullCoreData.bat and never return to the loop. I'm not sure if you need the :stop or :__stop lines, but it doesn't seem to hurt to keep them there. Once you start this program, the only way to stop it is to hit CTRL+C or close the window.
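If you'd rather pace the requests instead of polling as fast as the Core responds, a goto-based loop with a pause is another common pattern. This is just a sketch; 'timeout' ships with Windows Vista and later, and the 1-second delay is an arbitrary example value:

@ECHO OFF
CLS
:loop
call PullCoreData irradiance
REM wait roughly 1 second between polls; adjust or remove to taste
timeout /t 1 >nul
goto loop

As before, CTRL+C or closing the window is the way out.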
Using MATLAB, I parsed the output file and plotted the irradiance vs. time. I live in upstate NY, so clear sunny days are few and far between; the output you see here is not far from normal. Note that the irradiance is unscaled. I need to do a bit more research before I can convert the voltage reading from the Spark Core into actual irradiance in W/m^2.
Once I get one of those San Diego days… I'll post that as well. We'll see how long that takes.
That's it!
As long as you have the window open, this should sit there, working away while you go about your day. In my experience, the output is effectively updated every 500 ms or so, which gives you something along the lines of a 2 Hz sample rate. As long as you don't have really fast-moving data, this should work for you. If you need better resolution, you can probably figure something out by encoding multiple data points into a single output variable.
Hope this helps at least one other person out! I by no means claim to be an expert in any of these languages, so if someone has a suggestion for how to do it better, please feel free to chime in!