Yes. If you put an IP address on the end of a spark keys server ... command, it sets your Core to connect to a server at that address (on port 5683). To change back to the global cloud, simply omit the IP address. You do have to provide a server public key file for this command.
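For example (the IP address here is just an illustration, and the Core needs to be in DFU mode for any spark keys command):

spark keys server your_server_public_key.der 192.168.1.10

and pointing it back at the Spark Cloud is the same command with the cloud's public key file and no IP:

spark keys server cloud_public.der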
FWIW, I quite like the reference style docs at the Spark-CLI source home page: https://github.com/spark/spark-cli. In this case, head right to the bottom of that page.
I’m attempting to get a local cloud up and running, and it appears I’m about 10% of the way there. I’m at the point where I can power up the core and it connects to the local cloud. (I’ve replaced any token or device ID with a stand-in.)
I’ve got my spark-cli up to date. All the local cloud software was downloaded/installed just after that.
But I can’t actually do anything. I’m met with an invalid access_token error when I try to use it, or worse errors when I try “spark keys doctor xxxxxxxxxxxxxxxxxxxxxx”:
From spark-server console:
TypeError: Object function (options) {
this.options = options;
} has no method 'basicAuth'
at Object.AccessTokenViews.destroy (/home/pi/spark-server/js/lib/AccessTokenViews.js:59:44)
at callbacks (/home/pi/spark-server/js/node_modules/express/lib/router/index.js:164:37)
at param (/home/pi/spark-server/js/node_modules/express/lib/router/index.js:138:11)
at param (/home/pi/spark-server/js/node_modules/express/lib/router/index.js:135:11)
at pass (/home/pi/spark-server/js/node_modules/express/lib/router/index.js:145:5)
at Router._dispatch (/home/pi/spark-server/js/node_modules/express/lib/router/index.js:173:5)
at Object.router (/home/pi/spark-server/js/node_modules/express/lib/router/index.js:33:10)
at next (/home/pi/spark-server/js/node_modules/express/node_modules/connect/lib/proto.js:193:15)
at next (/home/pi/spark-server/js/node_modules/express/node_modules/connect/lib/proto.js:195:9)
at Object.handle (/home/pi/spark-server/js/node_modules/node-oauth2-server/lib/oauth2server.js:104:11)
10.129.0.5 - - [Sun, 07 Dec 2014 08:22:51 GMT] "DELETE /v1/access_tokens/yyyyyyyyyyyyy HTTP/1.1" 500 1045 "-" "-"
10.129.0.5 - - [Sun, 07 Dec 2014 08:24:25 GMT] "POST /oauth/token HTTP/1.1" 503 83 "-" "-"
10.129.0.5 - - [Sun, 07 Dec 2014 08:24:48 GMT] "GET /v1/devices?access_token=yyyyyyyyyyyyy HTTP/1.1" 400 109 "-" "-"
10.129.0.5 - - [Sun, 07 Dec 2014 08:29:51 GMT] "POST /v1/provisioning/xxxxxxxxxxxxxxxxxxxxxx HTTP/1.1" 400 109 "-" "-"
Connection from: 10.129.0.18, connId: 2
CryptoStream transform error TypeError: error:00000000:lib(0):func(0):reason(0)
I see all the 400s and 500s, so I’m guessing something isn’t right with my spark-server. Is there a way to populate a user/access_token by hand in the server?
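I was half-hoping something like this might work against the API, though I’m only guessing that spark-server exposes the same user-creation endpoint the CLI uses against the Spark Cloud:

curl -X POST http://<server-ip>:8080/v1/users -d username=me@example.com -d password=mypassword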
The “spark setup” part throws errors on the server console when I answer that I would not like to use the account already specified.
spark logout: after about 30 seconds I get “error removing token: Error: socket hang up” (it doesn’t seem to matter whether I’m using “spark config spark” or “spark config local”).
Thanks in advance. Dealing with two fringe cases at a time is a nightmare… Mac OS and Raspberry Pi.
Edit: I’ve ditched the Pi and still have the exact same issue on the Mac. As best I can tell, the spark-cli is ignoring the command “spark config local”.
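For reference, this is what I believe profile switching is supposed to look like (the IP and port are from the tutorial, not necessarily my setup):

spark config local apiUrl "http://192.168.1.10:8080"   # point a profile named 'local' at the server
spark config local                                     # switch to it ('spark config spark' switches back)
spark config identify                                  # confirm which profile is active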
I’ve also played with “spark cloud login”, and that actually hits my local cloud server. But with no users on the local server, it’s not working:
I’m concerned about the 503 reply. I’m guessing that’s the spark-server instance saying it has no idea what to do with the POST data to /oauth/token, or that something crashed while it was trying to process it. I can’t find a single error during the build with “npm install --verbose” on either the Pi or the Mac.
Edit: after I moved the spark.config.json file out of the .spark directory, I can now use “spark setup”.
Once I moved the spark.config.json file out of ~/.spark/ I was able to use “spark setup” to create a user on my local cloud.
I had to jump through a couple of hoops going back and forth between the Spark cloud and the local cloud, so I removed the spark-cli package from global and now have two separate directories, one for spark and one for local. I’m thinking it was a permissions issue, since I don’t use an admin account day to day.
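In case it helps anyone, a non-global install looks roughly like this (the directory name is just my own choice):

mkdir ~/spark-local && cd ~/spark-local
npm install spark-cli                     # local install, no sudo/admin needed
./node_modules/.bin/spark config local    # run the local copy directly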
… and I’ve got to do it all again tomorrow when my Edison gets here.
I also found that starting the local cloud like this: ‘node ./spark-server/js/main.js’ causes it to create new keys (if none are found) in the directory you are currently in, which was obviously causing crypto errors when I was trying to get my cores to connect.
Let’s say I’m in /home/pi and I run ‘node ./spark-server/js/main.js’: it ignores the keys it already made in /home/pi/spark-server/js and creates a new set for the server in /home/pi.
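So the fix is to always launch it from its own directory, where the original keys live:

cd /home/pi/spark-server/js
node main.js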
Thanks for this remarkable (and labyrinthine) tutorial. I got through it all and re-programmed my Core, but I had a couple of (unrelated) problems:
(i) whenever it came to putting the Core into DFU mode and re-programming it, the command failed because dfu-util wasn’t found. I solved this by making copies of it (and libusb.dll) in whatever folder I was working in. I re-checked the PATH environment variable and the path to dfu-util is there OK, but it doesn’t seem to work as it should;
(ii) my Core connected to the new server OK once I restarted the server (“node main.js”), but then it kept disconnecting and reconnecting. Eventually a red-LED SOS came up with a HARD FAULT and I decided to call it a day - just too tired - and I’ll try again tomorrow.
The same behaviour will be observed if the Spark Cloud goes offline, but there is redundancy built in, so this is uncommon. Once that happens, the core should automatically come back online as soon as the server is back up.
I have tested that behavior, so I’d like to know: what program is your core running? If you are on the latest Tinker firmware it should work fine, and the same goes for a new program compiled via the Spark build farm.
4.) Are you referring to the server public key printed on the console during node main.js? That’s fine, so no need to worry about it!
5.) Spark-cli will work the same, except that you will need to switch profiles like I mentioned in the tutorial. You can create a new account with basically any email and password; they do not need to match your Spark Cloud credentials.
You are essentially running an entirely new cloud, so everything is fresh and new.
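So creating an account on your own cloud is just (using the profile name from the tutorial):

spark config local   # switch to the local cloud profile
spark setup          # walks you through creating a brand-new account on it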
1. This problem was really simple once I used the PATH command and looked at the path string: there was a reference to dfu-util.exe earlier in the path string which related to a previous installation and was no longer valid. Deleted that and all is fine.
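For anyone else chasing this on Windows, two commands were enough to make the stale entry obvious:

PATH
where dfu-util

(the second lists every dfu-util.exe on the path, in search order).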
2. Core reprogrammed with the new server and breathing beautifully, and functioning normally as far as I can tell. I am running a really simple program that detects the output of a PIR sensor, flashes the on-board LED if there is a change in state, and sends a publish message at the same time:
int previous = 0;
int current = 0;

void setup() {
    pinMode(D0, INPUT);        // PIR sensor output
    pinMode(D7, OUTPUT);       // on-board LED
    current = digitalRead(D0);
    previous = current;        // start with no pending change
}

void loop() {
    current = digitalRead(D0);
    if (previous != current) {          // PIR output changed state
        Spark.publish("movement");      // send the publish message
        previous = current;
        digitalWrite(D7, HIGH);         // flash the on-board LED
        delay(500);
        digitalWrite(D7, LOW);
    }
    delay(100);                         // poll roughly ten times a second
}
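To watch the publishes from the command line, spark-cli’s subscribe command should do it (assuming it behaves the same against a local profile as against the Spark Cloud):

spark subscribe movement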
5. I have one final problem: I cannot seem to log in via ‘spark login’. My old login details from the spark config are fine for that config, but when I switch to the new config the login fails. I actually have two computers running: the laptop is running against the normal Spark Cloud and is pulling data off one core (k1), and this computer is running the new server configuration and is connected to the second core (k2). The laptop has me logged in under my standard Spark login. How do I create a new login for the new server configuration?
Thanks. Did what you said, then went through a complete ‘spark setup’ and created a new account (it took me a while, because the interaction for setting up a new account isn’t very user-friendly!). Now busy subscribing to the core. It’s interesting that when using the Spark Cloud server, the commands ‘spark config list/identify’ don’t return anything - at least on my PC.
Regarding the changes to the request line: no problem changing the access token (I just got a new one from the setup process), but I want to be clear on what exactly changes for the domain name.
I have made some progress: got a Core working, breathing merrily on my new local server and responding to spark commands, but I have got stuck on using curl. I have a core running a sketch which reads an LDR and exposes the reading as a Spark variable called ‘pot’. The core is called ‘k1’.
The original curl command when I was using the Spark Cloud looked like this:
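It was the standard variable request, something like this (token replaced with a placeholder):

curl https://api.spark.io/v1/devices/k1/pot?access_token=<token>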
For the local version I took the access token from the original login after I switched to the local server (profile called ‘ks’); it is the same as the one in the profile file ks.config.json.
The result was that there was no result (I tried both https and http) - it just dropped back to a new command prompt. So I removed the -s and -k from the command:
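which, assuming the local API is on the tutorial’s default port 8080, leaves something like this - at least without -s, curl should now print an error rather than fail silently:

curl http://<server-ip>:8080/v1/devices/k1/pot?access_token=<token>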