Tutorial: Local Cloud 1st Time instructions [01 Oct 15]

This will guide you through setting up the Local :cloud: and using it with the Particle-CLI after a successful installation.

Before you proceed, make sure you have fired up particle-server successfully at least once. We will need the server public key generated on the first run later.

**NOTE:** This will point the Particle-CLI at the local :cloud:, and you will not be able to use features that are only available on the :spark: cloud.


1.) We will now create a new server profile on Particle-CLI using the command:

particle config profile_name apiUrl "http://DOMAIN_OR_IP"

For the local :cloud:, the port number 8080 needs to be appended: http://domain_or_ip:8080

This will create a new profile pointing to your server. Switching back to the :spark: cloud is simply particle config particle, and switching to any other profile is particle config profile_name.
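
For example, using a profile named local and the LAN address shown in the example output later in this guide (substitute your own server's address, and a different port if you changed it):

particle config local apiUrl "http://192.168.1.68:8080"

Step 2 below then switches the CLI onto this profile.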

2.) We will now switch over to the local :cloud: using particle config profile_name

3.) particle setup (run this in a separate CMD window from the one running the server)

This will create an account on the local :cloud:

Press CTRL + C once you have logged in and the Particle-CLI starts asking you to send Wi-Fi credentials etc…

4.) On the command line, cd to the particle-server directory

5.) Place your core in DFU mode [flashing yellow]

6.) Change the server key on the core to the local :cloud: public key + IP address:

particle keys server default_key.pub.pem IP_ADDRESS
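
Putting steps 4 through 6 together, the sequence might look like this (192.168.1.68 is only the example LAN address used elsewhere in this guide; use your own server's IP):

cd particle-server
# core is in DFU mode (flashing yellow)
particle keys server default_key.pub.pem 192.168.1.68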

7.) Go to the core_keys directory and save the core public key there

  • cd core_keys
  • place core in DFU-mode
  • particle keys save INPUT_DEVICE_ID_HERE

NOTE: make sure you use the DEVICE_ID when saving the keys!
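
As a concrete example of step 7, using the device ID from the sample output in step 8 below (replace it with your core's actual device ID):

cd core_keys
# core is still in DFU mode
particle keys save 48ff6a065067555008342387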

Reset the core manually by hitting the RST button

8.) Check for connection

  • Make sure particle-server is running
  • Open a separate CMD window (if you closed the earlier one)
  • cd to the particle-server directory
  • Run node main.js
  • Watch the CMD window for connections from the core
  • You can restart the core and watch for activity as it attempts to reach breathing cyan

Example activity from CMD:

Connection from: 192.168.1.159, connId: 1
on ready { coreID: '48ff6a065067555008342387',
  ip: '192.168.1.159',
  product_id: 65535,
  firmware_version: 65535,
  cache_key: undefined }
Core online!

HOORAY!


Switching between :particle: :cloud: and Local :cloud:

Here are a few things you need to know:

1.) You will need to flash the public key of whichever :cloud: you are connecting the core to (see the example sequence after this list)

  • Place your core in DFU-mode (flashing yellow)

  • on the command-line,

    For :particle: Cloud:
    particle keys server cloud_public.der

    The :particle: cloud public key file is here: https://s3.amazonaws.com/spark-website/cloud_public.der

    For local Cloud:
    particle keys server your_local_cloud_public_key.der IP-ADDRESS

  • reset your core

2.) Switching the CLI profile back to the default :spark: cloud must be done with particle config particle
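
For example, a full switch back to the :particle: cloud might look like this (with the core in DFU mode for the keys command, and cloud_public.der downloaded from the link above):

particle keys server cloud_public.der
particle config particle

Switching back to the local :cloud: is the reverse: flash your local server's public key with the IP address appended, then run particle config profile_name.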

Knowing which profile Particle-CLI is pointing to

1.) The command is simply particle config identify

Example output:

KENMBP:~ kennethlimcp$ particle config identify
Current profile: local
Using API: http://192.168.1.68

This confirms that you are pointing at your own :cloud:!

Updated on: 01 Oct 2015

17 Likes

Would you mind elaborating on the advantages/disadvantages of using the local cloud vs. Spark cloud?

I think two of the bigger highlights for a lot of users is privacy and data ownership. The Spark Team is as transparent as any company really can be, but many people want total control over their data. It’s also possible to run a Core without an Internet connection but still talk to a local cloud. An advantage for business-class customers is they can run a local cloud environment to meet requirements for various compliance certifications (PCI, SSAE, HIPAA, etc). Granted, even sending encrypted data over a secure wifi network may still not meet some of those requirements. :-/

6 Likes

Here is my use case for the local cloud: I'd like to control a wall plug from my smartphone. If my SparkCore has to connect to the Spark web API and my internet provider has problems, I can't control my plug. If the SparkCore connects to the local cloud instead, I can use the LAN to control the plug when I'm at home, and I'll send messages to my web server (which sends commands locally to the SparkCore) when I'm away.

2 Likes

Hi @kennethlimcp

Did you have any issues getting the ursa@0.8.0 dependency to install for spark-server on Windows 7? Right now I’m trying to re-do everything with 32-bit installs instead of 64-bit installs to see if that makes a difference.

We had issues doing installation on… Windows 8 but not 7.

Getting ursa on Windows is tricky, and I will post the instructions I wrote during the beta later when I'm on my laptop :slight_smile:


@Elijah, I quickly pulled up the instructions I wrote during the pre-release phase at:

Let me know if there are any errors. It should work, since we tested it a few times before finalizing this guide :wink:

1 Like

Thanks for the tutorial!

I think my issue is more system specific.

Ever since I ran npm cache clean -f a couple days ago I have been getting an MSB8007 error from npm install indicating an invalid platform error (Platform is: ‘x64’).

I tried uninstalling and reinstalling all of the components with 32-bit builds, but kept getting that MSB8007 error until…
I tried npm install from within the node.js command line

Hmmm… probably could have just done that first :wink:

2 Likes

@kennethlimcp … it might be a good idea (or not really, see edit, below) to alert people to first BACK UP their existing private key, in case they want to use the core on the cloud again in future but have written a new key to it.

dfu-util -d 1d50:607f -a 1 -s 0x00002000:4096 -v -U old_core_private_key.der

At least, I'm pretty sure that's the situation I am now in. It seems I have overwritten the private key that came on the core from the factory, and there now appears to be no way to get it back. So I believe I have to generate a new key pair and send the public key to Spark for the cloud server.

EDIT: OH WAIT … I just discovered that the ‘new’ (to me) spark keys doctor will take care of generating new keys and sending the public part to the cloud server automatically, now. Yay. All fixed.

spark keys doctor <core_id>
1 Like

If you look in the directory, there are most likely 4 files available.

2 with the core_id and 2 with pre_coreid.

The pre_coreid files are the backup copy.

1 Like

Not in my ~/.spark folder, where I expected them.

EDIT: ... (fluff removed) ...

Oh! ...

Having gone through the process again with a clean, new user on my system, I see that the pre_... and other key files all ended up in the spark-server/js directory. (Clearly, I did not quite follow your instructions above in that regard.) I was expecting the keys to end up in ~/.spark. But apparently, they just go in whatever the current directory is when spark keys doctor is executed, which is fine.

1 Like

Edit: I believe I solved my problem; I was running an old version of the CLI. After updating the CLI and doing the deep update, I was able to see the core connecting.

sudo npm update -g spark-cli

spark flash --usb deep_update_2014_06

Below is a record of my issue in case anybody else runs into the same thing.

I do have one question… will this only run on 8080?


@kennethlimcp First things first: thank you very much for this tutorial, it is much clearer than the one on GitHub, and I understand each of the pieces better. I am, however, running into an issue I can't seem to troubleshoot.

The issue:

  • The light flashes cyan… I understand that this means the core cannot connect to the server
  • If I look at the console for the server I don’t see the core attempting to connect
  • If I go to IP_ADDRESS:8000 in a browser, I can see a JSON reply and I can see the connection attempt

So:

  • It appears that the core is not connecting to the local cloud… I’m guessing it is not pointing to the correct IP and/or port

Question:

  • Should this command use the port number after the IP address?
    spark keys server default_key.pub.pem IP_ADDRESS

Potential differences between my setup and yours:

  • I have to use a different port than 8080 because something else on my system is using it. I changed it to 8000 in main.js and everywhere else the port is mentioned
  • When I run spark keys server I get a ton of warnings… is this the source of the problem?

Output from spark keys server


checking file default_key.pub.pem
spawning dfu-util -d 1d50:607f -a 1 -i 0 -s 0x00001000 -D default_key.pub.pem
dfu-util 0.7

Copyright 2005-2008 Weston Schmidt, Harald Welte and OpenMoko Inc.
Copyright 2010-2012 Tormod Volden and Stefan Schmidt
This program is Free Software and has ABSOLUTELY NO WARRANTY
Please report bugs to dfu-util@lists.gnumonks.org

Filter on vendor = 0x1d50 product = 0x607f
Error during spawn TypeError: Cannot call method ‘on’ of null
Make sure your core is in DFU mode (blinking yellow), and is connected to your computer
Error - TypeError: Cannot call method ‘on’ of null
Opening DFU capable USB device… ID 1d50:607f
Run-time device DFU version 011a
Found DFU: [1d50:607f] devnum=0, cfg=1, intf=0, alt=1, name="@SPI Flash : SST25x/0x00000000/512*04Kg"
Claiming USB DFU Interface…
Setting Alternate Setting #1
Determining device status: state = dfuERROR, status = 10
dfuERROR, clearing status
Determining device status: state = dfuIDLE, status = 0
dfuIDLE, continuing
DFU mode device DFU version 011a
Device returned transfer size 1024
No valid DFU suffix signature
Warning: File has no DFU suffix
DfuSe interface name: "SPI Flash : SST25x"
Downloading to address = 0x00001000, size = 452
.
File downloaded successfully


Thank you for any help you can provide!

Edit: I did a tcpdump and I can see the core is attempting to connect to Amazon. What command actually changes where the core points? Is it spark keys server?


1 Like

The port numbers can be changed in the spark-server source code.

8080 is used for the API server, while 5683 is the CoAP port.

However, only 8080 is an easy change, as the CoAP port is set in the core firmware and changing it requires compiling locally.
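
If you need to find where the API port is defined in your checkout, a quick search is the simplest approach (settings.js is an assumption about where spark-server keeps its defaults; the file may differ in your version):

cd spark-server/js
grep -n "8080" settings.js main.js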

Glad you fixed it. Have fun! :slight_smile:

Yes. If you put an IP address on the end of a spark keys server ... command, it will set your core to connect to a server at that address (on port 5683). To change back to the global cloud, simply omit the IP address. You do have to provide a server public key file for this command.

FWIW, I quite like the reference style docs at the Spark-CLI source home page: https://github.com/spark/spark-cli. In this case, head right to the bottom of that page.

1 Like

@kennethlimcp, when you say fired up spark-server in the intro to the guide, do you essentially mean completing this tutorial that you also wrote? https://community.spark.io/t/tutorial-local-cloud-on-windows-25-july-2014/5949

If so, I’ve gotten to step 3 of this guide, but my cores won’t go from flashing cyan to breathing cyan and I have no idea why. Thoughts?

The console will output some messages.

I’m thinking your core public keys are not available yet…

Did you perform that step?

I’m attempting to get a local cloud up and running, and it appears that I am about 10% of the way there. I’m at the point where I can power up the core and it connects to the local cloud. (I replaced any token or device ID with a stand-in.)

Connection from: 10.129.0.18, connId: 14
on ready { coreID: 'xxxxxxxxxxxxxxxxxxxxxx',
  ip: '10.129.0.18',
  product_id: 0,
  firmware_version: 11,
  cache_key: '_13' }
Core online!

I’ve got my spark-cli up to date. All the local cloud software was downloaded/installed just after that.

But I can’t actually do anything. I’m met with invalid access_token when I try to use it or bad errors when I try “spark keys doctor xxxxxxxxxxxxxxxxxxxxxx”:

From spark-server console:

TypeError: Object function (options) {
    this.options = options;
} has no method 'basicAuth'
    at Object.AccessTokenViews.destroy (/home/pi/spark-server/js/lib/AccessTokenViews.js:59:44)
    at callbacks (/home/pi/spark-server/js/node_modules/express/lib/router/index.js:164:37)
    at param (/home/pi/spark-server/js/node_modules/express/lib/router/index.js:138:11)
    at param (/home/pi/spark-server/js/node_modules/express/lib/router/index.js:135:11)
    at pass (/home/pi/spark-server/js/node_modules/express/lib/router/index.js:145:5)
    at Router._dispatch (/home/pi/spark-server/js/node_modules/express/lib/router/index.js:173:5)
    at Object.router (/home/pi/spark-server/js/node_modules/express/lib/router/index.js:33:10)
    at next (/home/pi/spark-server/js/node_modules/express/node_modules/connect/lib/proto.js:193:15)
    at next (/home/pi/spark-server/js/node_modules/express/node_modules/connect/lib/proto.js:195:9)
    at Object.handle (/home/pi/spark-server/js/node_modules/node-oauth2-server/lib/oauth2server.js:104:11)
10.129.0.5 - - [Sun, 07 Dec 2014 08:22:51 GMT] "DELETE /v1/access_tokens/yyyyyyyyyyyyy HTTP/1.1" 500 1045 "-" "-"
10.129.0.5 - - [Sun, 07 Dec 2014 08:24:25 GMT] "POST /oauth/token HTTP/1.1" 503 83 "-" "-"
10.129.0.5 - - [Sun, 07 Dec 2014 08:24:48 GMT] "GET /v1/devices?access_token=yyyyyyyyyyyyy HTTP/1.1" 400 109 "-" "-"
10.129.0.5 - - [Sun, 07 Dec 2014 08:29:51 GMT] "POST /v1/provisioning/xxxxxxxxxxxxxxxxxxxxxx HTTP/1.1" 400 109 "-" "-"
Connection from: 10.129.0.18, connId: 2
CryptoStream transform error TypeError: error:00000000:lib(0):func(0):reason(0)

I see all the 400’s and 500’s so I’m guessing something isn’t right with my spark-server. Is there a way to populate a user/access_token by hand in the server?

The “spark setup” part throws errors on the server console when I answer that I would not like to use the account already specified.

Try spark logout and spark login again to create a new account on your :cloud:.
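
In other words, something like the following, making sure the CLI is on your local profile first:

spark config local
spark logout
spark login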

spark keys doctor is not available in the local cloud version but you don’t need it since your core is online :wink:

spark logout: after about 30 seconds I get “error removing token: Error: socket hang up” (doesn’t seem to matter if I’m using “spark config spark” or “spark config local”)

Thanks in advance. Dealing with two fringe cases at a time is a nightmare… Mac OS and Raspberry Pi.

Edit: I’ve ditched the Pi and still have the exact same issue on the Mac. As best I can tell, the spark CLI is ignoring the command “spark config local”.

I’ve also played with “spark cloud login” and that actually hits my local cloud server. But with no users on the local server it’s not working:

10.129.0.5 - - [Sun, 07 Dec 2014 20:05:48 GMT] "POST /oauth/token HTTP/1.1" 503 83 "-" "-"

I’m concerned about the 503 reply. I’m guessing that is the spark-server instance saying it has no idea what to do with the POST data to /oauth/token, or that something crashed while it was trying to do something. I can’t find a single error during the build with “npm install --verbose” on either the Pi or the Mac.

Edit: After I moved the spark.config.json file out of the .spark directory I can now use “spark setup”.
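
For anyone else hitting this, the move was roughly the following (the backup filename is just a suggestion; ~/.spark is the CLI’s default config directory):

mv ~/.spark/spark.config.json ~/.spark/spark.config.json.bak
spark setup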

I have not tried the local cloud yet, but since you mention it, did you create a user on spark-server? The readme says:

6.) Create a user and login with the Spark-CLI

Yeah it is working finally.

Once I moved the spark.config.json file out of /.spark/ I was able to use “spark setup” to create a user on my local cloud.

I had to jump through a couple hoops to go back and forth between the spark cloud and the local cloud so I removed the spark-cli package from global and have two separate directories for spark and local now. I’m thinking it is a permissions issue since I don’t use an admin account for myself regularly.

… and I’ve got to do it all again tomorrow when my Edison gets here.

1 Like