Tutorial: Local Cloud 1st Time instructions [01 Oct 15]

The port numbers can be changed in the spark-server source code.

8080 is used for the API server, while 5683 is the CoAP port.

However, only 8080 would be an easy change, as the CoAP port is set in the core firmware and changing it requires compiling locally.
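If you want to double-check what your local server actually ends up listening on once it is up, something like this works on the Pi or any Linux box (a sketch; assumes netstat is available and node main.js is already running):

netstat -tln | grep -E '8080|5683'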

Glad you fixed it. Have fun! :slight_smile:

Yes. If you put an IP address on the end of a spark keys server ... command, it will set your core to connect to a server at that address (on port 5683). To change back to the global cloud, simply omit the IP address. You do have to provide a server public key file for this command.
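As a concrete (hedged) example, assuming the server public key file is the default_key.pub.pem generated in the spark-server directory and the local server sits at 192.168.1.101 (adjust both to your setup), with the core in DFU mode:

spark keys server default_key.pub.pem 192.168.1.101

To point the core back at the Spark :cloud: later, run the same command with the official cloud public key file and no IP address.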

FWIW, I quite like the reference style docs at the Spark-CLI source home page: https://github.com/spark/spark-cli. In this case, head right to the bottom of that page.

@kennethlimcp, when you say “fired up spark-server” in the intro to the guide, do you essentially mean completing this tutorial that you also wrote? https://community.spark.io/t/tutorial-local-cloud-on-windows-25-july-2014/5949

If so, I’ve gotten to step 3 of this guide, but my cores won’t go from flashing cyan to breathing cyan and I have no idea why. Thoughts?

The console will output some messages.

I’m thinking your core public keys are not available yet…

Did you perform that step?

I’m attempting to get a local cloud up and running, and it appears I’m about 10% of the way there. I’m at the point where I can power up the core and it connects to the local cloud. (I’ve replaced any token or device ID with a stand-in.)

Connection from: 10.129.0.18, connId: 14
on ready { coreID: 'xxxxxxxxxxxxxxxxxxxxxx',
  ip: '10.129.0.18',
  product_id: 0,
  firmware_version: 11,
  cache_key: '_13' }
Core online!

I’ve got my spark-cli up to date. All the local cloud software was downloaded/installed just after that.

But I can’t actually do anything. I’m met with an “invalid access_token” error when I try to use it, or bad errors when I try “spark keys doctor xxxxxxxxxxxxxxxxxxxxxx”:

From spark-server console:

TypeError: Object function (options) {
    this.options = options;
} has no method 'basicAuth'
    at Object.AccessTokenViews.destroy (/home/pi/spark-server/js/lib/AccessTokenViews.js:59:44)
    at callbacks (/home/pi/spark-server/js/node_modules/express/lib/router/index.js:164:37)
    at param (/home/pi/spark-server/js/node_modules/express/lib/router/index.js:138:11)
    at param (/home/pi/spark-server/js/node_modules/express/lib/router/index.js:135:11)
    at pass (/home/pi/spark-server/js/node_modules/express/lib/router/index.js:145:5)
    at Router._dispatch (/home/pi/spark-server/js/node_modules/express/lib/router/index.js:173:5)
    at Object.router (/home/pi/spark-server/js/node_modules/express/lib/router/index.js:33:10)
    at next (/home/pi/spark-server/js/node_modules/express/node_modules/connect/lib/proto.js:193:15)
    at next (/home/pi/spark-server/js/node_modules/express/node_modules/connect/lib/proto.js:195:9)
    at Object.handle (/home/pi/spark-server/js/node_modules/node-oauth2-server/lib/oauth2server.js:104:11)
10.129.0.5 - - [Sun, 07 Dec 2014 08:22:51 GMT] "DELETE /v1/access_tokens/yyyyyyyyyyyyy HTTP/1.1" 500 1045 "-" "-"
10.129.0.5 - - [Sun, 07 Dec 2014 08:24:25 GMT] "POST /oauth/token HTTP/1.1" 503 83 "-" "-"
10.129.0.5 - - [Sun, 07 Dec 2014 08:24:48 GMT] "GET /v1/devices?access_token=yyyyyyyyyyyyy HTTP/1.1" 400 109 "-" "-"
10.129.0.5 - - [Sun, 07 Dec 2014 08:29:51 GMT] "POST /v1/provisioning/xxxxxxxxxxxxxxxxxxxxxx HTTP/1.1" 400 109 "-" "-"
Connection from: 10.129.0.18, connId: 2
CryptoStream transform error TypeError: error:00000000:lib(0):func(0):reason(0)

I see all the 400’s and 500’s so I’m guessing something isn’t right with my spark-server. Is there a way to populate a user/access_token by hand in the server?

The “spark setup” part throws errors on the server console when I answer that I would not like to use the account already specified.

Try spark logout and spark login again to create a new account on your :cloud:.

spark keys doctor is not available in the local cloud version but you don’t need it since your core is online :wink:

spark logout: after about 30 seconds I get “error removing token: Error: socket hang up” (doesn’t seem to matter if I’m using “spark config spark” or “spark config local”)

Thanks in advance. Dealing with two fringe cases at a time is a nightmare… Mac OS and Raspberry Pi.

Edit: I’ve ditched the Pi and still have the exact same issue on the Mac. The best I can tell, the spark cli is ignoring the command “spark config local”.

I’ve also played with “spark cloud login” and that actually hits my local cloud server. But with no users on the local server it’s not working:

10.129.0.5 - - [Sun, 07 Dec 2014 20:05:48 GMT] "POST /oauth/token HTTP/1.1" 503 83 "-" "-"

I’m concerned about the 503 reply. I’m guessing that is the spark-server instance saying it has no idea what to do with the post data to /oauth/token or that something crashed while it was trying to do something. I can’t find a single error during the build with “npm install --verbose” for either the pi or the mac.

Edit: after I moved the spark.config.json file out of the .spark directory I can now use “spark setup”

I haven’t tried the local cloud yet, but since you mention it: did you create a user on spark-server? The readme says:

6.) Create a user and login with the Spark-CLI

Yeah it is working finally.

Once I moved the spark.config.json file out of /.spark/ I was able to use “spark setup” to create a user on my local cloud.
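For anyone following along, the sequence was roughly this (a sketch assuming the CLI config lives in ~/.spark and the local server is already running; adjust the paths to your machine):

mv ~/.spark/spark.config.json ~/spark.config.json.bak
spark setup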

I had to jump through a couple hoops to go back and forth between the spark cloud and the local cloud so I removed the spark-cli package from global and have two separate directories for spark and local now. I’m thinking it is a permissions issue since I don’t use an admin account for myself regularly.

… and I’ve got to do it all again tomorrow when my Edison gets here.

I also found that starting the local cloud like this: ‘node ./spark-server/js/main.js’ causes it to create new keys (if none are found) in the directory you are currently in, which was obviously causing crypto errors when I was trying to get my cores to connect.

A server key pair is generated the first time the local :cloud: runs, and its public key should be the same one you use to overwrite the server key stored in your core.

I’m not sure how this is causing issues for you unless you self generated a server key?

Let’s say I’m in /home/pi and I run ‘node ./spark-server/js/main.js’: it ignores the keys it already made in /home/pi/spark-server/js and creates a new set for the server in /home/pi.
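A simple workaround (assuming the stock directory layout from the tutorial) is to change into the js directory before starting the server, so it finds the keys it already generated:

cd /home/pi/spark-server/js
node main.js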

Hi Kenneth,

Thanks for this remarkable (and labyrinthine) tutorial. I got through it all and re-programmed my Core, but I had a couple of (unrelated) problems:

(i) whenever it came to putting the Core into DFU mode and re-programming it, the command failed because dfu-util wasn’t found. I solved this by making copies of it (and libusb.dll) in whatever folder I was working in. I re-checked the PATH environment variables and the path to dfu-util is there OK, but doesn’t seem to work as it should;

(ii) my Core connected to the new server OK once I restarted the server (“node main.js”), but then kept disconnecting and reconnecting. Eventually a red LED SOS came up with a HARD FAULT and I decided to call it a day - just too tired - and I’ll try again tomorrow.

I had a couple of questions:

(a) is my problem in (i) a common problem and can it be fixed?
(b) once I do get the core properly connected on the local Cloud, will I be able to use the spark-cli functions as before, and will curl commands to the Core still work in the usual way: i.e. do I log in under the same login email and password, and will my access key still be valid?
(c) I am somewhat confused by the (huge) server public key - do I have to do anything about this?

Many thanks, again, for a remarkable tutorial.

Glad the tutorial was helpful! That’s the purpose of writing it anyways :wink:

1.) Sounds like you are on Windows. Just make sure the path to DFU-UTIL is added to the PATH for the command prompt: http://stackoverflow.com/questions/9546324/adding-directory-to-path-environment-variable-in-windows
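If editing the PATH through the GUI is a pain, a quick (hedged) alternative from a command prompt, assuming dfu-util was unpacked to C:\dfu-util (use your actual install folder), is:

setx PATH "%PATH%;C:\dfu-util"

Note that setx only affects newly opened command prompts, so close and reopen the prompt before retrying the DFU commands.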

2.) The same behaviour will be observed if the Spark :cloud: goes offline as well, but there is redundancy built in, so this is uncommon. Once that occurs, the core should automatically go back online once the server is back up.

I have tested that behaviour, and I would like to know what program your core is running. If you are on the latest Tinker firmware it should work fine. The same goes for a new program compiled via the Spark build farm :wink:

3.) Most functions are available except…Multi-user support and cloud compilation. You can see the full list here: https://github.com/spark/spark-server#what-features-are-currently-present

4.) Are you referring to the server public key printed on the console during node main.js? That’s fine, so no need to worry about it!

5.) Spark-cli will work the same, except that you will need to switch profiles like I mentioned in the tutorial. You can create a new account with basically any email and password; they do not need to match those of the Spark :cloud:.

You are essentially running an entirely new :cloud: and everything is fresh and new. :smiley:

Have fun :wink:

Kenneth,

Many thanks.

1. This problem was really simple once I used the PATH command and looked at the path string: there was a reference to dfu-util.exe earlier in the string which related to a previous installation and was no longer valid. Deleted that and all is fine.

2. Core reprogrammed for the new server and breathing beautifully, and functioning normally as far as I can tell. I am running a really simple program that detects the output of a PIR sensor, flashes the on-board LED if there is a change in state, and sends a publish message at the same time:

int previous = 0;   // last PIR reading
int current = 0;    // latest PIR reading

void setup() {
    pinMode(D0, INPUT);        // PIR sensor output
    pinMode(D7, OUTPUT);       // on-board LED
    current = digitalRead(D0); // seed the state so nothing is published at boot
    previous = current;
}

void loop() {
    current = digitalRead(D0);
    if (previous != current) {           // PIR output changed state
        Spark.publish("movement");       // publish an event to the cloud
        previous = current;
        digitalWrite(D7, HIGH);          // blink the on-board LED
        delay(500);
        digitalWrite(D7, LOW);
    }
    delay(100);
}
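(As a side note - assuming the spark-cli build in use has the subscribe command, which older releases and the local server may not support - those published events can be watched from whichever profile is active with: spark subscribe movement)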

5. I have one final problem: I cannot seem to log in via ‘spark login’. My old login details from the spark config are fine for that config, but when I switch to the new config the login fails. I actually have two computers running. The laptop is running the normal Spark cloud and is pulling data off one core (k1), and this computer is running the new server configuration and is connected to the second core (k2). The laptop has me logged in under my standard Spark login. How do I create a new login for the new server configuration?

Many thanks, again.

Roger

A couple of things:

1. Here is the screen print once I start the server (“node main.js”):

You’ll see that there’s an IP address: 192.168.1.101, and also references to 192.168.1.9. Which one do I use when setting the new key?

2. When it comes to using a curl HTTP request, the Spark cloud one looks like this:
curl -s -k https://api.spark.io/v1/devices/k1/pot?access_token=XXXXXXXXXXXXXXXXXXXXXXX etc
(where ‘k1’ is the core name, and ‘pot’ a spark variable)
what would change for a local cloud request?

Thanks,
R

Perform a spark logout and answer no when asked about access token removal.

Use spark config identify to see which profile you are currently on and whether it’s pointing to the local :cloud: profile.

There is also spark config list to figure out what profiles you have created.

If you want to be safe, delete all the profile files except spark.json and recreate them using spark config profile_name apiUrl "http://DOMAIN_OR_IP"
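For example (a sketch assuming the local server is 192.168.1.101 with the API on port 8080, as mentioned near the top of this thread; the profile name is arbitrary):

spark config local apiUrl "http://192.168.1.101:8080"

creates (or updates) and switches to a profile called local, and

spark config spark

switches back to the default Spark :cloud: profile. spark config identify should then report the profile and apiUrl you expect.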

The only change is the domain name or IP address and access token for the same request :wink:
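To make that concrete, a hedged example reusing the core and variable names from above (and assuming the local API is plain HTTP on port 8080, as noted near the top of this thread):

curl -s "http://192.168.1.101:8080/v1/devices/k1/pot?access_token=XXXXXXXXXXXXXXXXXXXXXXX"

The path and query string stay the same as for the Spark :cloud:; only the scheme, host, port and access token differ.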

The line that says server IP address is the one. The other refers to the core IP address!

Thanks. Did what you said, then went through a complete ‘spark setup’ route and created a new account (it took me a while because the interaction for setting up a new account isn’t very user-friendly!). Now busy subscribing to the core. It’s interesting that when using the Spark cloud server the commands ‘spark config list/identify’ don’t return anything - at least on my PC.

Regarding the changes to the request line: no problem changing the access token (I just got a new one from the setup process), but to be clear on what changes for the domain name:

Before:
https://api.spark.io/v1/devices/k1/pot?access_token=

Now:
https://192.168.1.101/v1/devices/k1/pot?access_token=

Is that correct?

That’s right! I guess it’s because the PR was merged recently, so perform an update using sudo npm update -g spark-cli :wink:

I have made some progress: got a Core working, breathing merrily, on my new local server and responding to spark commands, but I have got stuck on using curl. I have a core running a sketch which reads an LDR and gives a reading to a spark variable called ‘pot’. The core is called ‘k1’.

The original curl command when I was using the Spark Cloud looked like this:

curl -s -k https://api.spark.io/v1/devices/k1/pot?access_token=XXXXXXXXXXXXXXXXXXXXXXXX

and it worked fine, pulling up the value of ‘pot’.

For the local Cloud I replaced it with

curl -s -k http://192.168.1.101/v1/devices/k1/pot?access_token=XXXXXXXXXXXXXXXXXXXXX

taking the access key from the original login after I switched to the local server (called ‘ks’), which is the same as the one in the profile ks.config.json

The result was that there was no output (I tried both https and http): it just popped up a new command prompt. So I removed the -s and -k from the command:

curl http://192.168.1.101/v1/devices/k1/pot?access_token=XXXXXXXXXXXXXXXXXXXXX

responded with:

curl: (7) Failed to connect to 192.168.1.101 port 80: Connection refused

Clearly I’m missing something in the URL. Can you help, please?

Many thanks,

Roger