Tutorial: Local Cloud 1st Time instructions [01 Oct 15]

I nearly got it working but got stuck at this error:

Your server IP address is: 10.x.x.x
server started { host: 'localhost', port: 5683 }
42.x.x.x - - [Wed, 01 Apr 2015 02:46:56 GMT] "POST /v1/provisioning/53xxxxxxxxxxxxxxxxx HTTP/1.1" 400 109 "-" "-"

Here is what I’ve tried on my Google cloud server so far:

$ mkdir spark-core
$ cd spark-core
$ wget https://launchpad.net/gcc-arm-embedded/4.8/4.8-2014-q2-update/+download/gcc-arm-none-eabi-4_8-2014q2-20140609-linux.tar.bz2
$ tar xvjpf gcc-arm-none-eabi-4_8-2014q2-20140609-linux.tar.bz2
$ export PATH=$PATH:$HOME/src/spark-core/gcc-arm-none-eabi-4_8-2014q2/bin
$ git clone https://github.com/spark/core-firmware.git
$ git clone https://github.com/spark/core-common-lib.git
$ git clone https://github.com/spark/core-communication-lib.git
$ git clone https://github.com/spark/spark-server.git
$ npm install -g spark-cli   # needs either root or sudo
$ cd spark-server
$ npm install
$ node main.js
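
(Side note: to sanity-check that the HTTP API side of the server is reachable before pointing a device at it, a quick curl from another machine helps — 8080 should be spark-server's default API port, if I have it right:)

$ curl http://mydomainname.com:8080/v1/devices
# a JSON "access token" error here is fine - it means the server is up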

On my local PC (with dfu-util installed), I did the following:

$ git clone https://github.com/spark/spark-server.git
$ cd spark-server
$ node main.js

While my local server (on my PC) was running, I did the following in a new terminal window:

$ spark config googleCloud apiUrl http://mydomainname.com:8080
$ spark config googleCloud
$ cd spark-server
$ spark keys server default_key.pub.pem mydomainname.com

I then did this on my local server (the PC with dfu-util):

$ cd core_keys
$ sudo spark keys doctor 53fxxxxxxxxxxxxxxxxxxx

I end up with the following error message when I try to upload the keys to the Spark (with the Spark in DFU mode). The full DFU output was too long, so I am pasting only the error portion:

File downloaded successfully
Transitioning to dfuMANIFEST state
Error during download get_status
Saved!
attempting to add a new public key for core 53xxxxxxxxxxxxxxxx
*********************************
      Please login - it appears your access_token may have expired
*********************************
submitPublicKey got error:  invalid_grant
Make sure your core is in DFU mode (blinking yellow), and that your computer is online.
Error - invalid_grant

On my google cloud server, I got the following error:

Your server IP address is: 10.x.x.x
server started { host: 'localhost', port: 5683 }
42.x.x.x - - [Wed, 01 Apr 2015 02:46:56 GMT] "POST /v1/provisioning/53xxxxxxxxxxxxxxxxx HTTP/1.1" 400 109 "-" "-"

What am I doing wrong? Any help would be highly appreciated.

After I ran this command:

particle keys server default_key.pub.pem 192.168.1.23

I get this from the spark-server output:

Connection from: ::ffff:192.168.1.29, connId: 1
1: Core disconnected: plaintext was the wrong size: 214 { coreID: 'unknown', cache_key: '_0' }
Session ended for _0

Any ideas why this is happening? Thanks

UPDATE: I found this link via Google. Is this related?

Yup…
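
If it's the key-format mismatch (the CLI wanting a DER key rather than the PEM that spark-server ships), converting with openssl should sort it — a sketch, adjust the filenames and IP to taste:

$ openssl rsa -pubin -in default_key.pub.pem -outform DER -out default_key.pub.der
$ particle keys server default_key.pub.der 192.168.1.23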

Two helpful bits of information I learned after installing spark-server (local cloud) on Mac OS X: first, the version of Node.js needs to be 0.10.x (as of July 2015). When I first installed Node, I grabbed the latest, which was 0.12.7 and is currently not supported.

Second, if you get crypto errors, make sure to delete the node_modules folder and re-run npm install. The server started just fine after that.
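
For reference, the recovery amounted to roughly this (assuming you use nvm to switch Node versions):

$ nvm install 0.10
$ nvm use 0.10
$ node --version        # should now report 0.10.x
$ rm -rf node_modules   # clear modules built against the wrong Node
$ npm install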

Hope that helps.

I am trying to revert to the Particle cloud, but I am facing a strange issue.
As mentioned in the OP, I downloaded the cloud public key file and ran

spark keys server your_local_cloud_public_key.der IP-ADDRESS

Now whenever I run particle setup and try to log in my device, it flashes cyan really quickly, then turns yellow, and never gets to the point where it 'breathes' cyan.

I also tried re-installing particle-cli after removing it completely, and also factory resetting my Core. Any ideas?
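
For completeness, the full revert sequence as I understand it is roughly this (the cloud key URL is the one from the docs of the time, so it may have moved):

$ wget https://s3.amazonaws.com/spark-website/cloud_public.der
$ particle keys server cloud_public.der
# no IP argument this time, since we're pointing back at the Particle cloud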

EDIT: For anyone facing the same problem as me, have a look here. It's not local-cloud related.

I am doing everything the way you explained, but I get the following response on the server terminal:

Your server IP address is: 192.168.1.38
server started { host: 'localhost', port: 5683 }
192.168.1.38 - - [Thu, 01 Oct 2015 22:05:07 GMT] "POST /v1/devices HTTP/1.1" 400 109 "-" "-"

Hi @kennethlimcp, thanks for the amazing tutorial. I did manage to get my local cloud up and running, and so far everything seems to be working fine. The only issue I'm facing is that when I log out from the Particle CLI and say yes to revoking the current authentication token, I get an error on the server as below:

TypeError: Object function (options) {
this.options = options;
} has no method 'basicAuth'
at Object.AccessTokenViews.destroy (/local/spark-server/lib/AccessTokenViews.js:59:38)
at callbacks (/local/spark-server/node_modules/express/lib/router/index.js:164:37)
at param (/local/spark-server/node_modules/express/lib/router/index.js:138:11)
at param (/local/spark-server/node_modules/express/lib/router/index.js:135:11)
at pass (/local/spark-server/node_modules/express/lib/router/index.js:145:5)
at Router._dispatch (/local/spark-server/node_modules/express/lib/router/index.js:173:5)
at Object.router (/local/spark-server/node_modules/express/lib/router/index.js:33:10)
at next (/local/spark-server/node_modules/express/node_modules/connect/lib/proto.js:193:15)
at next (/local/spark-server/node_modules/express/node_modules/connect/lib/proto.js:195:9)
at Object.handle (/local/spark-server/node_modules/node-oauth2-server/lib/oauth2server.js:104:11)

I tried searching for answers but couldn't find any. Any ideas on what might be causing this?

Thanks in advance.

Hmmm, I recall the local cloud doesn’t have the concept of multiple users, so logout and login aren’t really applicable. But that might not actually be the case…

@kennethlimcp… one of my team members helped me out. The issue was in the way the destroy function was calling the basicAuth function within AccessTokenViews.js.

As soon as you change the line var credentials = AccessTokenViews.basicAuth(req) to var credentials = this.basicAuth(req) in AccessTokenViews.js and restart the server, it all works fine…
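
If you'd rather patch it from the command line, a one-liner like this applies the same change (GNU sed shown; on macOS use sed -i ''):

$ cd spark-server
$ sed -i 's/AccessTokenViews\.basicAuth(req)/this.basicAuth(req)/' lib/AccessTokenViews.js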

And so far it looks like the local cloud does support multiple users, but only further testing will confirm this… I’ll keep you posted with the results of my multi-user testing…

Can you submit a PR to benefit the community at large? :slight_smile:

Hi @kennethlimcp… I might sound stupid as I’m new at this… what’s a PR?

Basically, a PR (pull request) is how you submit the fix to the original GitHub repo at: https://github.com/spark/spark-server

Maybe I will do it for this issue. :wink:

Thanks @kennethlimcp

I have followed this tutorial exactly and still had trouble getting my Photon to connect to my local cloud. I just want to share the things I had to do to get it working properly.

Make sure your node version is 0.10.36.

After downloading particle-cli, you need to change several lines in ApiClient.js
(/usr/local/lib/node_modules/particle-cli/lib/ApiClient.js)

from https://github.com/spark/spark-cli/blob/4f06c4bac32e3e75ecfcb217261b8bf57e1b7b47/lib/ApiClient.js

REPLACE the similar-looking login function with the following:

//GET /oauth/token
login: function (client_id, user, pass) {
	var that = this;

	return this.createAccessToken(client_id, user, pass).then(function(resp) {
		console.log("Got an access token! " + resp.access_token);
		that._access_token = resp.access_token;
		return that._access_token;
	}).catch(function (err) {
		console.error("Login error: ", err);
		return when.reject();
	});
},

This allows you to create an account on your local cloud. Your local cloud DOES NOT know about your Particle Cloud account. When following the instructions in this tutorial, I was always signed in and never logged out. Once I decided to log out, I was never able to log in, nor was I able to create an account, until I changed the code in that file.
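
With that change in place, the usual CLI flow should let you sign up against the local cloud (the profile name and IP below are just examples):

$ particle config local apiUrl http://192.168.1.10:8080
$ particle config local
$ particle logout
$ particle setup    # signing up here creates the account on the local cloud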

Make sure you do a

particle update

so that the system firmware is up to date.

Doing these 3 things helped me correctly run a local cloud, connect all my Photons to that local cloud, and control them through the iOS SDK with this small change: [SOLVED] iOS SDK on Local Cloud

When using curl to access the Particle Cloud I use HTTPS, but when accessing the local cloud according to these instructions I use HTTP. Is it possible to access the local cloud using HTTPS?

This is a very common requirement, and there’s a simple(ish) solution.
You need to do 3 things:

  • set up a secure web server (Apache or nginx or something macOS-y/Windows-y)
  • set up an “SSL reverse proxy” from your secure web server to your spark cloud
  • optionally, set up your local cloud to respond only to your reverse proxy (otherwise you can still get to it over plain HTTP)

Hope that’s enough for you to google solutions. A minimal sketch of the nginx route is below.
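
For the nginx case, the reverse-proxy config might look roughly like this (the server name, cert paths, and the 8080 API port are all placeholders to adjust):

$ sudo tee /etc/nginx/conf.d/spark-proxy.conf <<'EOF'
server {
    listen 443 ssl;
    server_name mydomainname.com;
    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    location / {
        proxy_pass http://127.0.0.1:8080;   # the local cloud's HTTP API
    }
}
EOF
$ sudo nginx -s reload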

Hello guys,

When the Photon is trying to connect to the cloud, it flashes fast green, then cyan, then yellow, over and over, which is supposed to indicate wrong keys. Any ideas how to make the final step work?

Managed to connect to the server; the Photon worked with the second address.

But there are a lot of errors:

Your server IP address is: 192.168.56.1
Your server IP address is: 10.0.0.9                 //this worked
server started { host: 'localhost', port: 5683 }
Connection from: ::ffff:10.0.0.15, connId: 1
CryptoStream transform error TypeError: Cannot read property 'length' of null
CryptoStream transform error TypeError: Cannot read property 'length' of null
on ready { coreID: 'ID',
  ip: '::ffff:10.0.0.15',
  product_id: 6,
  firmware_version: 65535,
  cache_key: '_0' }
Core online!
CryptoStream transform error Error: error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt
onSocketData called, but no data sent.
1: Core disconnected: socket close false { coreID: 'ID',
  cache_key: '_0',
  duration: 25.083 }
Session ended for _0
Connection from: ::ffff:10.0.0.15, connId: 2
CryptoStream transform error TypeError: Cannot read property 'length' of null
CryptoStream transform error TypeError: Cannot read property 'length' of null
on ready { coreID: 'ID',
  ip: '::ffff:10.0.0.15',
  product_id: 6,
  firmware_version: 65535,
  cache_key: '_1' }
Core online!

Are there any updated directions to get this running in 2018 with a Raspberry Pi 3B running Stretch?

root@hole:/home/pi/spark-server# npm install

> ursa@0.9.4 install /home/pi/spark-server/node_modules/ursa
> node-gyp rebuild

gyp WARN EACCES user "root" does not have permission to access the dev dir "/root/.node-gyp/10.12.0"
gyp WARN EACCES attempting to reinstall using temporary dev dir "/home/pi/spark-server/node_modules/ursa/.node-gyp"
gyp WARN install got an error, rolling back install
gyp WARN install got an error, rolling back install
gyp ERR! configure error
gyp ERR! stack Error: EACCES: permission denied, mkdir '/home/pi/spark-server/node_modules/ursa/.node-gyp'
gyp ERR! System Linux 4.9.35-v7+
gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /home/pi/spark-server/node_modules/ursa
gyp ERR! node -v v10.12.0
gyp ERR! node-gyp -v v3.8.0
gyp ERR! not ok
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! ursa@0.9.4 install: `node-gyp rebuild`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the ursa@0.9.4 install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2018-10-14T19_46_28_052Z-debug.log


Sounds like lots of issues with new versions of Node? My Node version is 10.12.0, with npm 6.4.1.

Setting up the local cloud seems like the easiest way to accomplish stuff on my Particle Photon… there have to still be people setting this up, right?

Try this: https://github.com/brewskey/spark-server
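
The install steps should be the usual Node ones, but check that repo's README for anything current:

$ git clone https://github.com/brewskey/spark-server
$ cd spark-server
$ npm install
$ npm start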

Hmmm, it must be too hard to keep up with all the changes. I tried this and the program throws an error when I start it. :frowning: I was able to run a lightweight HTTP server on the Photon to control what I wanted locally, but I still think this would be an interesting, clean solution.