Would you mind elaborating on the advantages/disadvantages of using the local cloud vs. Spark cloud?
I think two of the bigger highlights for a lot of users are privacy and data ownership. The Spark Team is as transparent as any company really can be, but many people want total control over their data. It’s also possible to run a Core without an Internet connection but still talk to a local cloud. An advantage for business-class customers is that they can run a local cloud environment to meet requirements for various compliance certifications (PCI, SSAE, HIPAA, etc.). Granted, even sending encrypted data over a secure Wi-Fi network may still not meet some of those requirements. :-/
The local cloud is exactly what I need for my use case: controlling a wall plug from my smartphone. If my Spark Core has to connect to the Spark web API and my Internet provider has problems, I can’t control the plug. If the Core connects to a local cloud instead, I can use the LAN to control the plug when I’m at home, and send messages to my own web server (which relays commands locally to the Core) when I’m away.
Did you have any issues getting the ursa@0.8.0 dependency to install for spark-server on Windows 7? Right now I’m trying to re-do everything with 32-bit installs instead of 64-bit installs to see if that makes a difference.
We had issues with the installation on Windows 8, but not on Windows 7.
Getting ursa on Windows is tricky. I’ll post the instructions I wrote during the beta later, when I’m on my laptop.
@Elijah, I quickly pulled up the instructions I wrote during the pre-release phase at:
Let me know if there are any errors. It should work, since we tested it a few times before finalizing this guide.
Thanks for the tutorial!
I think my issue is more system-specific.
Ever since I ran npm cache clean -f a couple of days ago, I have been getting an MSB8007 error from npm install indicating an invalid platform error (Platform is: 'x64').
I tried uninstalling and reinstalling all of the components with 32-bit builds, but kept getting that MSB8007 error until…
I tried npm install from within the Node.js command line.
Hmmm… probably could have just done that first.
@kennethlimcp … it might be a good idea (or maybe not; see the edit below) to alert people to first BACK UP their existing private key, in case they want to use the core on the cloud again in the future but have written a new key to it:
dfu-util -d 1d50:607f -a 1 -s 0x00002000:4096 -v -U old_core_private_key.der
At least, I’m pretty sure that’s the situation I am now in. It seems I have overwritten the private key that came on the core from the factory, and there now appears to be no way to get it back. So I believe I have to generate a new key pair and send the public key to Spark for the cloud server.
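If you do have that backup .der file, restoring it later should just be the reverse operation: the same dfu-util invocation, but writing the file back to the core instead of reading it out (this is my understanding of the flags; double-check the address against your firmware docs before writing anything):
dfu-util -d 1d50:607f -a 1 -s 0x00002000 -v -D old_core_private_key.der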
EDIT: OH WAIT … I just discovered that the ‘new’ (to me) spark keys doctor will take care of generating new keys and sending the public part to the cloud server automatically now. Yay. All fixed.
spark keys doctor <core_id>
If you look in the directory, there are most likely 4 files available.
2 with the core_id and 2 with pre_coreid.
The pre_coreid files are the backup copy.
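Just as an illustration (the hex ID here is made up, and the exact extensions may differ on your setup), the four files would look something like:
1e0032000447343337373738.der
1e0032000447343337373738.pub.pem
pre_1e0032000447343337373738.der
pre_1e0032000447343337373738.pub.pem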
Not in my ~/.spark folder, where I expected them.
EDIT: … (fluff removed) …
Oh! …
Having gone through the process again with a clean, new user on my system, I see that the pre_... and other key files all ended up in the spark-server/js directory. (Clearly, I did not quite follow your instructions above in that regard.) I was expecting the keys to end up in ~/.spark. But apparently, they just go in whatever the current directory is when spark keys doctor is executed, which is fine.
Edit: I believe I solved my problem. I was running an old version of the CLI. After updating the CLI and doing the deep update, I was able to see the core connecting.
sudo npm update -g spark-cli
spark flash --usb deep_update_2014_06
Below is a record of my issue in case anybody else runs into the same thing.
I do have one question… will this only run on 8080?
@kennethlimcp First things first: thank you very much for this tutorial; it is much clearer than the one on GitHub, and I understand each of the pieces better. I am, however, running into an issue I can’t seem to troubleshoot.
The issue:
- The light flashes cyan… I understand that this means the core cannot connect to the server
- If I look at the console for the server I don’t see the core attempting to connect
- If I go to IP_ADDRESS:8000 in a browser, I can see a JSON reply and I can see the connection attempt
So:
- It appears that the core is not connecting to the local cloud… I’m guessing it is not pointing to the correct IP and/or port
Question:
- Should this command include the port number after the IP address?
spark keys server default_key.pub.pem IP_ADDRESS
Potential differences between my setup and yours:
- I have to use a different port than 8080 because something else on my system is using it. I changed it to 8000 in main.js and everywhere else the port is mentioned
- When I run spark keys server I get a ton of warnings… is this the source of the problem?
Output from spark keys server
checking file default_key.pub.pem
spawning dfu-util -d 1d50:607f -a 1 -i 0 -s 0x00001000 -D default_key.pub.pem
dfu-util 0.7
Copyright 2005-2008 Weston Schmidt, Harald Welte and OpenMoko Inc.
Copyright 2010-2012 Tormod Volden and Stefan Schmidt
This program is Free Software and has ABSOLUTELY NO WARRANTY
Please report bugs to dfu-util@lists.gnumonks.org
Filter on vendor = 0x1d50 product = 0x607f
Error during spawn TypeError: Cannot call method ‘on’ of null
Make sure your core is in DFU mode (blinking yellow), and is connected to your computer
Error - TypeError: Cannot call method ‘on’ of null
Opening DFU capable USB device… ID 1d50:607f
Run-time device DFU version 011a
Found DFU: [1d50:607f] devnum=0, cfg=1, intf=0, alt=1, name="@SPI Flash : SST25x/0x00000000/512*04Kg"
Claiming USB DFU Interface…
Setting Alternate Setting #1 …
Determining device status: state = dfuERROR, status = 10
dfuERROR, clearing status
Determining device status: state = dfuIDLE, status = 0
dfuIDLE, continuing
DFU mode device DFU version 011a
Device returned transfer size 1024
No valid DFU suffix signature
Warning: File has no DFU suffix
DfuSe interface name: "SPI Flash : SST25x"
Downloading to address = 0x00001000, size = 452
.
File downloaded successfully
Thank you for any help you can provide!
Edit: I did a tcpdump and I can see the core is attempting to connect to Amazon. What command actually changes where the core points? Is it spark keys server?
The port numbers can be changed in the spark-server source code.
8080 is used for the API server, while 5683 is the CoAP port.
However, only 8080 is an easy change, since the CoAP port is set in the core firmware and changing it requires compiling locally.
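If you need to find out what is already occupying 8080 before picking another port, a quick check (standard OS tools, nothing Spark-specific) is:
On Mac / Linux:
lsof -i :8080
On Windows:
netstat -ano | findstr :8080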
Glad you fixed it. Have fun!
Yes. If you put an IP address on the end of a spark keys server ... command, it will set your core to connect to a server at that address (on port 5683). To change back to the global cloud, simply omit the IP address. You do have to provide a server public key file for this command.
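For example (the IP here is just a placeholder, the core must be in DFU mode for both commands, and the Spark cloud public key file may be named differently in your install; mine is cloud_public.der):
spark keys server default_key.pub.pem 192.168.1.10
spark keys server cloud_public.der
The first line points the core at a local cloud on 192.168.1.10; the second writes the cloud key back with no IP, which reverts the core to the default Spark servers.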
FWIW, I quite like the reference style docs at the Spark-CLI source home page: https://github.com/spark/spark-cli. In this case, head right to the bottom of that page.
@kennethlimcp, when you say fired up spark server in the intro to the guide, do you essentially mean completing this tutorial that you also wrote? https://community.spark.io/t/tutorial-local-cloud-on-windows-25-july-2014/5949
If so, I’ve gotten to step 3 of this guide, but my cores won’t go from flashing cyan to breathing cyan and I have no idea why. Thoughts?
The console will output some messages.
I’m thinking your core public keys are not available yet…
Did you perform that step?
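If not, the rough shape of that step (directory name taken from my spark-server checkout, and YOUR_CORE_ID is a placeholder; adjust paths to your setup) is to put the core in DFU mode and save its public key where the server looks for core keys:
cd spark-server/js/core_keys
spark keys save YOUR_CORE_ID
As I recall, that writes YOUR_CORE_ID.pub.pem into the current directory, which is the file the local cloud needs before the handshake will succeed.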
I’m attempting to get a local cloud up and running, and it appears I’m about 10% of the way there. I’m at the point where I can power up the core and it connects to the local cloud. (I replaced any token or device ID with a stand-in.)
Connection from: 10.129.0.18, connId: 14
on ready { coreID: 'xxxxxxxxxxxxxxxxxxxxxx',
ip: '10.129.0.18',
product_id: 0,
firmware_version: 11,
cache_key: '_13' }
Core online!
I’ve got my spark-cli up to date. All the local cloud software was downloaded/installed just after that.
But I can’t actually do anything. I’m met with invalid access_token when I try to use it or bad errors when I try “spark keys doctor xxxxxxxxxxxxxxxxxxxxxx”:
From spark-server console:
TypeError: Object function (options) {
this.options = options;
} has no method 'basicAuth'
at Object.AccessTokenViews.destroy (/home/pi/spark-server/js/lib/AccessTokenViews.js:59:44)
at callbacks (/home/pi/spark-server/js/node_modules/express/lib/router/index.js:164:37)
at param (/home/pi/spark-server/js/node_modules/express/lib/router/index.js:138:11)
at param (/home/pi/spark-server/js/node_modules/express/lib/router/index.js:135:11)
at pass (/home/pi/spark-server/js/node_modules/express/lib/router/index.js:145:5)
at Router._dispatch (/home/pi/spark-server/js/node_modules/express/lib/router/index.js:173:5)
at Object.router (/home/pi/spark-server/js/node_modules/express/lib/router/index.js:33:10)
at next (/home/pi/spark-server/js/node_modules/express/node_modules/connect/lib/proto.js:193:15)
at next (/home/pi/spark-server/js/node_modules/express/node_modules/connect/lib/proto.js:195:9)
at Object.handle (/home/pi/spark-server/js/node_modules/node-oauth2-server/lib/oauth2server.js:104:11)
10.129.0.5 - - [Sun, 07 Dec 2014 08:22:51 GMT] "DELETE /v1/access_tokens/yyyyyyyyyyyyy HTTP/1.1" 500 1045 "-" "-"
10.129.0.5 - - [Sun, 07 Dec 2014 08:24:25 GMT] "POST /oauth/token HTTP/1.1" 503 83 "-" "-"
10.129.0.5 - - [Sun, 07 Dec 2014 08:24:48 GMT] "GET /v1/devices?access_token=yyyyyyyyyyyyy HTTP/1.1" 400 109 "-" "-"
10.129.0.5 - - [Sun, 07 Dec 2014 08:29:51 GMT] "POST /v1/provisioning/xxxxxxxxxxxxxxxxxxxxxx HTTP/1.1" 400 109 "-" "-"
Connection from: 10.129.0.18, connId: 2
CryptoStream transform error TypeError: error:00000000:lib(0):func(0):reason(0)
I see all the 400’s and 500’s so I’m guessing something isn’t right with my spark-server. Is there a way to populate a user/access_token by hand in the server?
The “spark setup” part throws errors on the server console when I answer that I would not like to use the account already specified.
Try spark logout and spark login again to create a new account on your local cloud.
spark keys doctor is not available in the local cloud version, but you don’t need it since your core is online.
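If the CLI keeps talking to the wrong server, it may also be worth setting up profiles explicitly. Roughly (the profile name local is arbitrary, and the URL/port must match your server):
spark config local apiUrl "http://YOUR_SERVER_IP:8080"
spark config local
spark config spark
The first two lines create a profile pointing at the local server and switch to it; the last one switches back to the Spark cloud.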
spark logout: after about 30 seconds I get “error removing token: Error: socket hang up” (doesn’t seem to matter if I’m using “spark config spark” or “spark config local”)
Thanks in advance. Dealing with two fringe cases at a time is a nightmare… Mac OS and Raspberry Pi.
Edit: I’ve ditched the Pi and still have the exact same issue on the Mac. As best I can tell, the Spark CLI is ignoring the command “spark config local”.
I’ve also played with “spark cloud login”, and that actually hits my local cloud server. But with no users on the local server, it’s not working:
10.129.0.5 - - [Sun, 07 Dec 2014 20:05:48 GMT] "POST /oauth/token HTTP/1.1" 503 83 "-" "-"
I’m concerned about the 503 reply. I’m guessing that is the spark-server instance saying it has no idea what to do with the POST data to /oauth/token, or that something crashed while it was trying to handle it. I can’t find a single error during the build with “npm install --verbose” on either the Pi or the Mac.
Edit: after I moved the spark.config.json file out of the .spark directory, I can now use “spark setup”.
I have not tried the local cloud yet, but since you mention it, did you create a user on spark-server? The readme says:
6.) Create a user and login with the Spark-CLI
Yeah, it is finally working.
Once I moved the spark.config.json file out of ~/.spark/, I was able to use “spark setup” to create a user on my local cloud.
I had to jump through a couple of hoops to go back and forth between the Spark cloud and the local cloud, so I removed the spark-cli package from the global install and now have two separate directories, one for spark and one for local. I’m thinking it is a permissions issue, since I don’t regularly use an admin account.
… and I’ve got to do it all again tomorrow when my Edison gets here.
I also found that starting the local cloud like this: ‘node ./spark-server/js/main.js’ causes it to create new keys (if none are found) in the directory you are currently in, which was obviously causing crypto errors when I was trying to get my cores to connect.
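So if anyone else hits those crypto errors, the simple fix on my end was to start the server from its own directory so it picks up the keys that are already there (assuming the default repo layout):
cd spark-server/js
node main.js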