Local Cloud on Raspberry Pi

Hi @juano2310,

Your access_token for the local cloud will be different from your server access_token, so if it’s no longer valid or not working, try logging into your local cloud server to get a new one. You can also just open up your user file and change the expiration date / token manually if you like. :slight_smile:
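For example, something along these lines (the users folder, the filename, and the idea of one JSON file per account are guesses from my own install, so double-check the actual layout in your spark-server directory):

cd ~/spark/spark-server
ls users/                          # one JSON file per account (assumption)
nano users/me@example.com.json     # hypothetical filename; edit the access_token / expiry fields by hand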

Thanks,
David

Thanks a lot!!! I think I got it to work :smile:

I have run through the RPi setup and got the local cloud server running - it was really straightforward: thanks for your great tutorial. However, getting dfu-util on the RPi is proving to be a bit stubborn. I have been to the alternative site to download the dfu-util files and that works ok, but I cannot seem to get the RPi to run dfu-util.

I have tried the following from the “Quick Install on a Raspberry Pi” procedure from Github:

sudo apt-get install libusb-1.0-0-dev (this works fine)
wget https://s3.amazonaws.com/sparkassets…dfu-util-0.8-binaries.tar.xz (this works fine)
cd dfu-util-0.8-binaries (no problem)
./configure (comes up with the message: configure not recognized)
make (fails)
sudo make install (fails)

Could you help?

Many thanks,

Hi,
I’ve been having problems getting the local cloud to work on the RPi (and the same problem with the Windows laptop). I can load spark-cli and this works on both, and the access via the Spark Cloud is all fine. Putting the core keys into the core is also fine.

But, when firing up the local cloud I have problems setting up an account, and also ‘decryption’ errors. Here is an extract of the attempt to set up an account using the RPi (it’s pretty much the same on the PC):

Could I please have an email address?  rb@mail.com
and a password?  **

Trying to login...
Could not get new access token:  server_error
login error:  server_error
Using the setting "username" instead 
Logged in!  Saving access token: 081b03c88f15750a906935ebfd6f9e1ddf55fed6
Using the setting "access_token" instead 

----------------------
Finding your core id

/usr/local/lib/node_modules/spark-cli/commands/SerialCommand.js:136
			for (var i = 0; i < ports.length; i++) {
			                         ^
TypeError: Cannot read property 'length' of undefined
    at /usr/local/lib/node_modules/spark-cli/commands/SerialCommand.js:136:29
    at /usr/local/lib/node_modules/spark-cli/node_modules/serialport/serialport.js:531:11
    at FSReqWrap.oncomplete (fs.js:99:15)

And, this is what I get from the server:

pi@raspberrypi ~/spark/spark-server $ node main.js
-------
No users exist, you should create some users!
-------
connect.multipart() will be removed in connect 3.0
visit https://github.com/senchalabs/connect/wiki/Connect-3.0 for alternatives
connect.limit() will be removed in connect 3.0
Starting server, listening on 8080
static class init!
found 53ff68066667574854362567
Loading server key from default_key.pem
set server key
server public key is:  -----BEGIN PUBLIC KEY-----
  [KEY REMOVED]
mwIDAQAB
-----END PUBLIC KEY-----

Your server IP address is: 192.168.1.21
server started { host: 'localhost', port: 5683 }
Connection from: 192.168.1.10, connId: 1
CryptoStream transform error TypeError: Cannot read property 'length' of null
CryptoStream transform error TypeError: Cannot read property 'length' of null
on ready { coreID: '53ff68066667574854362567',
  ip: '192.168.1.10',
  product_id: 0,
  firmware_version: 11,
  cache_key: '_0' }
Core online!
CryptoStream transform error Error: error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt
onSocketData called, but no data sent.
1: Core disconnected: socket close false { coreID: '53ff68066667574854362567',
  cache_key: '_0',
  duration: 25.13 }
Session ended for _0
Connection from: 192.168.1.10, connId: 2
CryptoStream transform error TypeError: Cannot read property 'length' of null
CryptoStream transform error TypeError: Cannot read property 'length' of null
on ready { coreID: '53ff68066667574854362567',

I seem to have problems setting up a new account on the local cloud; the server thinks I don’t have any. It’s the same on both the PC and the RPi. Can anyone help?

Many thanks,

You will need to downgrade Spark-CLI due to a bug, using sudo npm install -g spark-cli@0.4.94

With that, use spark login to create a new account. Make sure you are pointing to the local :cloud: (check using spark config identify)

Also, try to flash the latest tinker to the core if possible :slight_smile:
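A rough sketch of the whole sequence (the profile name local and the server address are just examples; substitute your own):

sudo npm install -g spark-cli@0.4.94                  # downgrade due to the bug
spark config local apiUrl http://192.168.1.21:8080    # point a profile at your local cloud
spark config local                                    # switch to that profile
spark config identify                                 # confirm which cloud you are pointing at
spark login                                           # create the new account
spark flash --usb tinker                              # latest tinker, with the Core in DFU mode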

Many thanks, Kenneth.

OK. First, I reflashed the Core with tinker (spark flash --usb tinker), then I reinstalled spark-cli using sudo npm install -g spark-cli@0.4.94 as you said - it took forever, but no drama. Then I set the config to the local cloud. Running spark setup failed the first time - I got the same errors as before:

/usr/local/lib/node_modules/spark-cli/commands/SerialCommand.js:136
			for (var i = 0; i < ports.length; i++) {
			                         ^
TypeError: Cannot read property 'length' of undefined
etc
etc

I then restarted the local cloud and tried spark setup again. This time it created the account, and when I rebooted the local cloud server again it recognised the account and the core is breathing cyan. But I am still getting the decryption errors all the time on the local cloud server terminal output - about every 25.030 - 25.040 seconds (!). Each time the Core flashes cyan and then carries on breathing.

I then try “spark list” and it names the core, but says it’s offline - even though the server output says that the core is online! :confused: When I try a few “curl” commands they don’t even go through - the message comes back saying that it cannot find the server, even though the URLs are the same. I gave up.
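For the record, the sort of curl request I was trying looked like this (the token is a placeholder; the address and port are my local server’s):

curl "http://192.168.1.21:8080/v1/devices?access_token=<token>"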

Back to the PC. Trying npm install -g spark-cli@0.4.94 was a total disaster, so I reinstalled spark-cli. It works fine on the Spark cloud. However, when I fire up the local server there is no way I can set up an account - it does not give me any token, etc. The Core behaves the same way as with the RPi: breathing and then intermittent flashing with these messages:

CryptoStream transform error Error: error:06065064:digital envelope routines:EVP
_DecryptFinal_ex:bad decrypt
onSocketData called, but no data sent.
1: Core disconnected: socket close false { coreID: '55ff6d065075555351161787',
  cache_key: '_10',
  duration: 25.085 }
Session ended for _10
Connection from: ::ffff:192.168.1.6, connId: 12
CryptoStream transform error TypeError: Cannot read property 'length' of null
CryptoStream transform error TypeError: Cannot read property 'length' of null
on ready { coreID: '55ff6d065075555351161787',
  ip: '::ffff:192.168.1.6',
  product_id: 0,
  firmware_version: 11,
  cache_key: '_11' }
Core online!
CryptoStream transform error Error: error:06065064:digital envelope routines:EVP
_DecryptFinal_ex:bad decrypt

The “::ffff:192.168.1.6” looks a bit funny?

Since I cannot get an account, I cannot log in, so I cannot try any “curl” commands to test it.

It used to work… :grimacing:

Any ideas, gratefully received.

Not sure what happened. Can you try sudo node main.js to fire up the server?

@rblott and @kennethlimcp
This is exactly what I experienced and did, but over three days I tried install after install with new settings and accounts. I reached the point on Monday where I said OK, I’ll go back to Spark’s cloud and try again next weekend. The Core says it’s set up right, but the access token is out of sync with Spark, and I’m buggered if I can get it to behave properly.
The “length of undefined” error is dfu-util not being installed/working, though, which is something I can at least help with. While uninstalling and reinstalling the server, CLI, dfu-util and Node.js over and over, I’ve hit it a couple of times and found the cause to be dfu-util not being there or not working.

I believe the CryptoStream thing is actually a bug at the moment, but I cannot prove it yet, as I need my Core back on Spark’s cloud so I can try again and get everything set up right.
The tutorial was more of a script and needs more “it should say this” checkpoints to help ascertain whether things are going wrong or not.
It certainly gave me a good start though, and I have now created some rudimentary bash scripts for the install and “uninstall” of the parts.
It should be noted that dfu-util does not seem to be maintained currently, and the commonly linked git source does not work at all. There is a GitHub one, but it is pre-Spark.
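In case it helps, this is roughly my check/repair step for dfu-util, using the prebuilt binaries tarball from the tutorial (the directory and binary path inside the archive are guesses, so list the contents first):

tar -tf dfu-util-0.8-binaries.tar.xz | head    # inspect the actual layout first
tar -xJf dfu-util-0.8-binaries.tar.xz
sudo cp dfu-util-0.8-binaries/linux-armel/dfu-util /usr/local/bin/    # path is an assumption
which dfu-util && dfu-util --version           # confirm it is on the PATH and runs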

@kennethlimcp,

I’ve benefited from your tutorial on how to get the local cloud working on an RPi B+, thanks for this. :+1:

Along the way, I’ve found that some updated steps are needed if you’re going to use the RPi 2. I’m posting them here for anyone who might have come across the same issues as I did. As future updates roll out, I’m sure this information will become somewhat obsolete again, but these steps worked for me as of May 2015, Raspbian 3.18.

These points are made in reference to your tutorial linked here:

  1. Node.js on an RPi 2 needs to be built with the armv7 architecture. As of writing, this builds v0.10.38:

curl -sL https://deb.nodesource.com/setup | sudo bash -
sudo apt-get install -y build-essential python-dev nodejs

  2. The ursa and express modules shouldn’t have compilation errors.
  3. I used pm2 to set up autostart/restart monitoring for the local cloud. Instructions on the pm2 GitHub page are straightforward enough that they don’t need repeating here.

Cheers

Thanks! I wrote that for the RPi v1, so some updated steps are needed.

You can use nohup. An example is:
nohup node main.js > …/spark-server.out 2> …/spark-server.err &

On second thought - PM2, mentioned above, is probably better for Node, but cluster mode should be disabled; you might want to consider running it with specific logging output parameters, and you might want to configure watch.

Basic example (default fork mode, which keeps cluster mode off, since -i would enable it):
pm2 start main.js
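If you also want the logging output and watch behaviour mentioned above, something along these lines should work (flag names per the pm2 docs; the process name and file paths are just examples):

pm2 start main.js --name spark-server -o ../spark-server.out -e ../spark-server.err --watch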

Also, you can view the live log output in real time with “tail -f”, for example:
tail -n 100 -f …/spark-server.out

This displays the latest 100 lines of the log file, plus anything new written to it.