New node.js library and CLI for Cloud API

I’ve created a new library for Node.js that I thought might be helpful to people here as well.

It’s available on GitHub along with further documentation, and you can install it via npm install --save sparknode.

The idea is that you don’t have to do much aside from providing your auth token: the library queries the Cloud API, creates a Collection of cores based on the token, and fetches and exposes all your cloud functions and variables.

Alternatively, you can supply a token and id to the Core constructor for faster access to the core.

Either way, all the cloud-exposed functions and variables on your device become methods you can call directly.

##Example

var sp = require('sparknode');
var collection = new sp.Collection(myAuthToken);
collection.on('connect', function() {
  //Turn on an LED
  collection.core1.digitalwrite('D7,HIGH');

  //Brew some coffee, then email me.
  collection.core2.brew('coffee', function(err, timeUntilFinished) {
    setTimeout(function() {
      //General awesomeness goes here.
      emailMe();
      sendSocketIoMessage();
      addCreamer();
    }, timeUntilFinished);
  });

  //Get a variable
  collection.core2.remainingCoffeeTime(function(err, value) {
    //Do something with value
  });
});

And here’s an example with a single core:

var randomCore = new sp.Core(myAuthToken, deviceId);

randomCore.on('connect', function() {
  randomCore.turnOnLights();
});

This library should also work cross-platform, as it doesn’t rely on curl behind the scenes. I’m hoping it also makes it much easier for me to wire custom functions to a webapp.

I’m also tracking some of the data that comes back from the spark cloud on the core objects themselves, such as online, though I’m not sure how useful that will end up being.

###Future

I have several ideas I’d like to implement, such as a CLI for quick access.

sparknode core2 brew "coffee"

An API for server-sent events will also be a high priority as soon as that Cloud API comes out.

I’m also thinking about writing a custom firmware that lets you add many more than 4 functions, directly from the CLI or even programmatically, using string parsing on the client side. I don’t know about anyone else, but I don’t need 64 characters of input very often, so I figured they’d be more useful this way. Check out the issues tracker on GitHub to add feature requests and see some of the plans I have.
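To make the string-parsing idea concrete, here’s a rough sketch of what the client-side encoding could look like. Everything here is hypothetical (the `dispatch`-style function and the colon delimiter are just illustrations, not anything implemented yet); the firmware would perform the inverse split and route to the right handler:

```javascript
// Sketch: multiplex many logical functions through one cloud function by
// packing the target name and its arguments into the single string
// (up to 64 characters) that a cloud function receives.
function encodeCall(fnName, args) {
  var payload = fnName + ':' + args;
  if (payload.length > 64) {
    throw new Error('payload exceeds the 64-character argument limit');
  }
  return payload;
}

// The firmware side would do the inverse: split on the first colon and
// dispatch to the matching handler. Modeled here in JS for clarity.
function decodeCall(payload) {
  var i = payload.indexOf(':');
  return { name: payload.slice(0, i), args: payload.slice(i + 1) };
}
```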


##Update v0.2.0

I’ve bumped the version number and added a feature to auto-update variables if desired.

There’s even better documentation on the GitHub README but here’s a quick sample of the new functionality:

var core = new sp.Core(authToken, deviceId);
core.on('connect', function() {
  core.myVariable.autoupdate = 5000;
  
  core.myVariable.on('update', function(value) {
    //More awesomeness here.
    saveToDatabase(value);
  });
});

##Update, v0.2.1: Cache me if you can.

Glad nobody’s ever made that joke before. Anyway…

The default behavior now is to cache the output of the Cloud API from the initial call in a JSON file at your project root. This saves about 10-20 seconds of waiting, but the call is still made anyway to keep the cache current. If you’d like to override this behavior, you can pass an options object (optional, of course) to the constructor.

var collection = new Collection(myAuthToken, { skipCache: true });

or

var collection = new Collection(myAuthToken, { cacheFile: 'lib/cacheFile.json' });

Loving all the work on this - keep it up @andrewstuart! Really like your design choices, looking forward to playing with this for my next project.

Thanks for the encouragement, and I very much hope to keep it up!

My wife will be having a baby tomorrow at the latest, so I may be kept pretty busy for a while, but I’d still like to use the occasional free moment to get some more done on the library. I’ve got baby data I need to track! :smile:

Hey @andrewstuart,

Wow Congratulations on the :baby: ! Hoping everything went well!

:smile: :spark:

David

Thanks! Everything went very well. Mom and baby are both home and happy, so that makes me happy :smile:

Anywho, I did some work tonight and added the CLI. I’m really loving this feature because it’s helped me quickly play with my firmware changes without having to write a full implementation.

##Update, v0.3.0: CLI!

If installed globally via npm install -g sparknode, sparknode will give you a command line interface mostly useful for debugging, but I suppose it could be used for other scripting applications as well.

The most useful command is probably spark -h, as it lets you discover the functionality directly from the command line.

As for the rest, right now there are three main commands under the main spark command: add, fn, and var. Each of these also has help generated with the -h switch.

####add
spark add will retrieve any cores accessible via the given token. These are saved as JSON in .sparkrc in your home directory.

Syntax is spark add <token>.

####var
Retrieve a variable from the Spark Cloud.

Options include:
-n Number of times to check the variable (--number)
-i Interval, in milliseconds, between checks (--interval)
-c Check continuously at the interval, or every 1 second (overrides -n) (--continuous)

Syntax is spark var <coreName> <variableName>.

####fn
Execute a remote function and print the return value.
Syntax is spark fn <coreName> <functionName> <argument>.

##CLI Examples

#Go get all the cores.
spark add 1234567890abcdef1234567890abcdef;

spark fn core1 brew coffee;
spark fn core2 digitalwrite "A1,HIGH";

spark var core1 brewTime;
spark var -i 100 -n 5 core2 coffeeStrength;

#My current personal favorite:
spark var -ci 100 core1 variable1;

Cool! Nice CLI, I’ve been working on one too, but you beat me to release! :slight_smile:

My plan was to write something very modular so as to make it really easy to expand on, but I like your API interactions. We had been thinking something similar in terms of ‘remembering’ your access credentials locally as well, to make it easy to play around with. Neat!

##Update v0.3.1 & v0.3.2: Sometimes I forget.

You can now get a quick list of available functions or variables from the CLI. Helpful if you’re like me and can’t remember off the top of your head which functions or variables you set up for your core.

spark fn core1

#Functions available for core 'core1':
#  brew
#  digitalread
#  digitalwrite
#  analogread

There is also a new ls command that is similar but a bit more verbose. It can be pared down by calling spark ls [coreName] for details of just one core.

spark ls

#Core: core1 (1234567890abcdef12345678)
#
#Functions: 
#brew
#digitalread
#digitalwrite
#analogread
#
#Variables: 
#delay
#
#Connected: true
#
#-----------
#
#Core: core2 (1234567890abcdef12345679)
#
#Functions: 
#getdata
#digitalread
#digitalwrite
#analogread
#
#Variables: 
#coffeeStrength
#
#Connected: true
#
#-----------

##Update 0.4.0 - The Main Event. Ha.
Events are now live :slight_smile:

Each event is accessible on your core (or collection) through the built-in events interface. You can simply call core.on(eventName, function handler(eventData) {/*do something*/}); to register your handler for the event. If 'event' is passed to the on function as the name, the handler will be called for every event with an {event: 'eventName', data: {/*eventData*/}} object.

core.on('coffeeBrewed', function(info) {
  console.log(info);
  console.log(info.data);
  //send an email with the number of cups remaining.
});

collection.on('event', function(eventInfo) {
  database.save(eventInfo);
  //All events for all cores get logged to the database.
  //API for public events coming shortly.
});

###CLI

Syntax: spark events [coreName]

Print a list of events coming from the Spark Cloud. If a coreName is supplied, then the events are limited to that core’s events. Press ctrl-c to interrupt, as usual.

Options:

-p Show a list of public events.
-n Only search for events with a specific name.

spark events -p

{ data: 
  { data: 'course',
    ttl: '60',
    published_at: '2014-03-13T10:42:23.157Z',
    coreid: '48ff69065067555019392287'
  },
  event: 'motion-detected2' 
}
...

#I have a hard time believing mr. 48ff69065067555019392287 is seeing motion every single second. amirite?

Check out the github page for more documentation and the issue tracker for future plans or to report bugs. :slight_smile:

@Dave, glad you like it :slight_smile: The good design of the Cloud API made it really easy for me to get everything up and running, and it informed my design quite a bit. I’m really digging the HATEOAS concept and the Spark team’s implementation.

Anywho, hope it’s somewhat helpful, or at least neat if nothing else :slight_smile: I saw your CLI will be coming out before the private server, so I’m interested to see how easily this library can carry over to that. I’m imagining it’s as easy as just changing the hostname once it’s all set up, but it’ll be exciting to see. Thanks for an awesome product :slight_smile:


@andrewstuart you should take a look at the spark-cli code :smiley:


This is probably a dumb question since this all seems really simple. I ran “npm install --save sparknode”. Added sparknode to my package.json…

"dependencies": {
  "express": "^4.1.0",
  "logfmt": "^1.1.2",
  "sparknode": "^0.4.3"
},

Then in my main .js file I added…

var spark = require('sparknode');

var core = new spark.Core({
  authtoken: 'my_token',
  id: 'my_cores_id'
});

And pushed to Heroku. But when I load it I get an application error. In heroku logs I see…

2014-04-25T07:01:03+00:00 heroku[slug-compiler]: Slug compilation started
2014-04-25T07:01:07+00:00 heroku[slug-compiler]: Slug compilation finished
2014-04-25T07:01:10.210044+00:00 app[web.1]:
2014-04-25T07:01:10.210749+00:00 app[web.1]: throw TypeError('Uncaught, unspecified "error" event.');
2014-04-25T07:01:10.210768+00:00 app[web.1]: ^
2014-04-25T07:01:10.213028+00:00 app[web.1]: TypeError: Uncaught, unspecified "error" event.
2014-04-25T07:01:10.213041+00:00 app[web.1]: at process._tickCallback (node.js:415:13)
2014-04-25T07:01:10.213031+00:00 app[web.1]: at TypeError ()
2014-04-25T07:01:10.213034+00:00 app[web.1]: at /app/node_modules/sparknode/lib/core.js:36:24
2014-04-25T07:01:10.213032+00:00 app[web.1]: at EventEmitter.emit (events.js:74:15)
2014-04-25T07:01:10.213036+00:00 app[web.1]: at IncomingMessage. (/app/node_modules/sparknode/lib/common.js:52:11)
2014-04-25T07:01:10.213037+00:00 app[web.1]: at IncomingMessage.EventEmitter.emit (events.js:117:20)
2014-04-25T07:01:10.213039+00:00 app[web.1]: at _stream_readable.js:920:16
2014-04-25T07:01:10.144594+00:00 app[web.1]: Listening on 24571
2014-04-25T07:01:10.210495+00:00 app[web.1]: events.js:74
2014-04-25T07:01:07.175148+00:00 heroku[api]: Deploy 06d1c97 by
2014-04-25T07:01:07.175249+00:00 heroku[api]: Release v16 created by
2014-04-25T07:01:10.864160+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2014-04-25T07:01:11.644253+00:00 heroku[web.1]: State changed from starting to crashed
2014-04-25T07:01:12.960485+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/ host=hugables.herokuapp.com

Do you know what I’m doing wrong? Thanks a ton!

Is it working when you run it locally?

From the looks of it, there’s some sort of error being passed back that hasn’t been caught. The library essentially just re-emits any events it gets from Node’s http module, so you may need to use core.on('error', function(err) { /* do something with the error */ }); in order to get a better error message. The EventEmitter module is a bit vague in its handling of unhandled 'error' events.

Did you fix this? Because I’m testing this right now, and from time to time I get this error on my event streams. I also tried a custom EventSource, and that one also throws an exception from time to time. Any idea how this can happen?