Local Cloud on Raspberry Pi

Hi @gauravptalukdar,

If you have the latest spark-cli, and the local cloud, you can also get your key in place with the keys doctor:

# connect your core in dfu mode:
spark keys doctor your_core_id

You might need to update your copy of the local cloud with something like:

cd spark-server/
git pull
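
Putting the two together, here's a rough sketch of the key repair after updating (assuming spark-cli was installed globally with npm; your_core_id is still a placeholder):

sudo npm update -g spark-cli
# put the core in DFU mode (flashing yellow), then:
spark keys doctor your_core_id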

:smile:
Thanks,
David

Hey @Dave and @kennethlimcp, well, I got the core connected to the local cloud. Thanks for the help. But there is this one thing: it disconnects and reconnects very frequently. Any advice on that? Here's the output:

Connection from: 192.168.1.103, connId: 1
on ready { coreID: '55ff6b065075555332071787',
  ip: '192.168.1.103',
  product_id: 0,
  firmware_version: 6,
  cache_key: undefined }
Core online!
Connection from: 192.168.1.103, connId: 2
on ready { coreID: '55ff6b065075555332071787',
  ip: '192.168.1.103',
  product_id: 0,
  firmware_version: 6,
  cache_key: undefined }
Core online!
Connection from: 192.168.1.103, connId: 3
on ready { coreID: '55ff6b065075555332071787',
  ip: '192.168.1.103',
  product_id: 0,
  firmware_version: 6,
  cache_key: undefined }
Core online!

Hi @gauravptalukdar,

There are a lot of reasons why this could happen. Can you give us some more info about your setup: what you're doing when the core disconnects, the type of core, etc.? :slight_smile:

Thanks,
David

@Dave, I posted about this on GitHub as well.

Using any user firmware other than tinker exhibits this behavior. It’s weird but you can test it out :stuck_out_tongue:

We've just been seeing SOS flashes from a few different users using the local :cloud:

Exactly, @kennethlimcp. Getting the tinker app on board gets the core connected immediately, but any other application exhibits this behavior. Sometimes, though, it eventually gets connected after around 20 to 40 attempts, with the following output:

Connection from: 192.168.1.103, connId: 1
on ready { coreID: '55ff6b065075555332071787',
  ip: '192.168.1.103',
  product_id: 65535,
  firmware_version: 65535,
  cache_key: undefined }
Core online!
routeMessage got a NULL coap message { coreID: '55ff6b065075555332071787' }
got counter 45466 expecting 45465 { coreID: '55ff6b065075555332071787' }
1: Core disconnected: Bad Counter { coreID: '55ff6b065075555332071787',
  cache_key: undefined,
  duration: 0.067 }
Session ended for 1
Connection from: 192.168.1.103, connId: 2
on ready { coreID: '55ff6b065075555332071787',
  ip: '192.168.1.103',
  product_id: 65535,
  firmware_version: 65535,
  cache_key: undefined }
Core online!
routeMessage got a NULL coap message { coreID: '55ff6b065075555332071787' }
got counter 22124 expecting 22123 { coreID: '55ff6b065075555332071787' }
1: Core disconnected: Bad Counter { coreID: '55ff6b065075555332071787',
  cache_key: undefined,
  duration: 0.032 }
Session ended for 2
Connection from: 192.168.1.103, connId: 3
on ready { coreID: '55ff6b065075555332071787',
  ip: '192.168.1.103',
  product_id: 65535,
  firmware_version: 65535,
  cache_key: undefined }
Core online!
routeMessage got a NULL coap message { coreID: '55ff6b065075555332071787' }
got counter 22040 expecting 22039 { coreID: '55ff6b065075555332071787' }
1: Core disconnected: Bad Counter { coreID: '55ff6b065075555332071787',
  cache_key: undefined,
  duration: 0.024 }
Session ended for 3
Connection from: 192.168.1.103, connId: 4
on ready { coreID: '55ff6b065075555332071787',
  ip: '192.168.1.103',
  product_id: 65535,
  firmware_version: 65535,
  cache_key: undefined }
Core online!

And phew!! It finally gets connected. I guess it's that incremented counter value that is received.

-Gaurav

Aha! I bet I know what this is! It's a race condition in the GetTime request right after the handshake. It probably isn't normally seen because of the small amount of background latency on the hosted cloud, but it comes out on a local cloud… 1 sec…

edit: posted issue for this here https://github.com/spark/spark-server/issues/18

If you’re compiling locally, can you try commenting out the send_time_request line here?
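
If it helps, here's a rough sketch of the rebuild-and-flash step after making that change (assuming the standard core-firmware local build layout; adjust paths for your setup):

cd core-firmware/build
make clean all
# with the core in DFU mode (flashing yellow), using the usual Core DFU id and address:
sudo dfu-util -d 1d50:607f -a 0 -s 0x08005000:leave -D core-firmware.bin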

Thanks,
David

Didn't help, @Dave.

I will try to dig into it a little bit.

-Gaurav

Hi @gauravptalukdar,

Oh, okay, cool! Commenting out that line, doing a fresh build, and installing that firmware didn't help; that's good to know! Hmm. I'm guessing it's some other race then. The server error logs you're seeing essentially say that it's getting a message too early, that another message should have been received before it. Can you edit the SparkCore module, add a line for me, and send those new logs?

Right after this line (the "got counter, expecting" one):

Add this line:

 console.log("core got message of type " + msg._type + " with token " + msg.getTokenString() + " " + messages.getRequestType(msg));

Thanks!
David

Hi @Dave, sorry, I have no idea where to find this file. :expressionless: Is this .js module in the installed spark-server? I can only see the spark protocol source files in the core-communication library.

I think it should be spark-server/js/node_modules/spark-protocol/js/clients/SparkCore.js
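
If your tree looks a little different, grepping for the log text should confirm the right file and line:

grep -rn "got counter" spark-server/js/node_modules/spark-protocol/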

Does anyone know how I can view the console.log output if I'm firing up main.js on startup?

Previously, I manually spun up the RPi and used screen node main.js.

This allowed me to log back into that screen session and view the logs.
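
In other words, something like this (a sketch, assuming GNU screen; the session name is just one I picked):

screen -S spark-server node main.js
# detach with Ctrl-A then D, and later re-attach with:
screen -r spark-server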

What happens if it was running on startup? :slight_smile:

Hi @kennethlimcp,

Good question! A common solution is to redirect the output to a file; you can throw something like this in your startup script:

/usr/local/bin/node main.js >> /var/log/my-server.log

And then you can watch it in real time with:

tail -F /var/log/my-server.log
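
If you also want stderr captured and the server backgrounded from a startup script, a rough variant would be (the path is a placeholder for your install):

cd /path/to/spark-server && /usr/local/bin/node main.js >> /var/log/my-server.log 2>&1 &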

Thanks,
David


So there's no way to attach to the running process and watch the "live feed"? :slight_smile:

Hi @kennethlimcp and @Dave,
Quick silly question: I'm trying to use curl to interact with the local cloud, but I keep getting "The access token provided is invalid." I use curl to interact with the Spark API all the time, but it's not working for my local server.
Is there anything I should consider?
Thanks a lot!

Hi @juano2310,

Your access_token for the local cloud will be different from your Spark Cloud access_token, so if the one you're using isn't valid there, try logging into your local cloud server to get a new access_token. You can also just open up your user file on the server and change the expiration date / token manually if you like. :slight_smile:
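
For example, something like this should mint a fresh token against the local API (a sketch that assumes the local server mirrors the hosted /oauth/token endpoint with the default spark:spark client credentials and listens on port 8080; the address, email, and password are placeholders):

curl http://your-server-ip:8080/oauth/token \
  -u spark:spark \
  -d grant_type=password \
  -d username=me@example.com \
  -d password=mypassword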

Thanks,
David

Thanks a lot!!! I think I got it to work :smile:


I have run through the RPi setup and got the local cloud server running - it was really straightforward; thanks for your great tutorial. However, getting dfu-util onto the RPi is proving to be a bit stubborn. I have been to the alternative site to download the dfu-util files, and that works OK, but I cannot seem to get the RPi to run dfu-util.

I have tried the following from the "Quick Install on a Raspberry Pi" procedure on GitHub:

sudo apt-get install libusb-1.0-0-dev (this works fine)
wget https://s3.amazonaws.com/sparkassets…dfu-util-0.8-binaries.tar.xz (this works fine)
cd dfu-util-0.8-binaries (no problem)
./configure (comes up with the message: configure not recognized)
make (fails)
sudo make install (fails)
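
For reference, this is the from-source build I would have expected those last three steps to correspond to (just a sketch, assuming the upstream dfu-util 0.8 source tarball; the -binaries archive presumably contains prebuilt executables, so there may be nothing to configure or make in it):

sudo apt-get install build-essential libusb-1.0-0-dev pkg-config
tar xf dfu-util-0.8.tar.gz
cd dfu-util-0.8
./configure
make
sudo make install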

Could you help?

Many thanks,

Hi,
I've been having problems getting the local cloud to work on the RPi (and the same problem with a Windows laptop). I can load spark-cli and it works on both, and access via the Spark Cloud is all fine. Putting the core keys into the core is also fine.

But when firing up the local cloud, I have problems setting up an account, and I also get 'decryption' errors. Here is an extract from the attempt to set up an account using the RPi (it's pretty much the same on the PC):

Could I please have an email address?  rb@mail.com
and a password?  **

Trying to login...
Could not get new access token:  server_error
login error:  server_error
Using the setting "username" instead 
Logged in!  Saving access token: 081b03c88f15750a906935ebfd6f9e1ddf55fed6
Using the setting "access_token" instead 

----------------------
Finding your core id

/usr/local/lib/node_modules/spark-cli/commands/SerialCommand.js:136
			for (var i = 0; i < ports.length; i++) {
			                         ^
TypeError: Cannot read property 'length' of undefined
    at /usr/local/lib/node_modules/spark-cli/commands/SerialCommand.js:136:29
    at /usr/local/lib/node_modules/spark-cli/node_modules/serialport/serialport.js:531:11
    at FSReqWrap.oncomplete (fs.js:99:15)

And, this is what I get from the server:

pi@raspberrypi ~/spark/spark-server $ node main.js
-------
No users exist, you should create some users!
-------
connect.multipart() will be removed in connect 3.0
visit https://github.com/senchalabs/connect/wiki/Connect-3.0 for alternatives
connect.limit() will be removed in connect 3.0
Starting server, listening on 8080
static class init!
found 53ff68066667574854362567
Loading server key from default_key.pem
set server key
server public key is:  -----BEGIN PUBLIC KEY-----
  [KEY REMOVED]
mwIDAQAB
-----END PUBLIC KEY-----

Your server IP address is: 192.168.1.21
server started { host: 'localhost', port: 5683 }
Connection from: 192.168.1.10, connId: 1
CryptoStream transform error TypeError: Cannot read property 'length' of null
CryptoStream transform error TypeError: Cannot read property 'length' of null
on ready { coreID: '53ff68066667574854362567',
  ip: '192.168.1.10',
  product_id: 0,
  firmware_version: 11,
  cache_key: '_0' }
Core online!
CryptoStream transform error Error: error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt
onSocketData called, but no data sent.
1: Core disconnected: socket close false { coreID: '53ff68066667574854362567',
  cache_key: '_0',
  duration: 25.13 }
Session ended for _0
Connection from: 192.168.1.10, connId: 2
CryptoStream transform error TypeError: Cannot read property 'length' of null
CryptoStream transform error TypeError: Cannot read property 'length' of null
on ready { coreID: '53ff68066667574854362567',

I seem to have problems setting up a new account on the local cloud; the server thinks I don't have any. It's the same on both the PC and the RPi. Can anyone help?

Many thanks,

You will need to downgrade spark-cli due to a bug, using sudo npm install -g spark-cli@0.4.94

With that, use spark login to create a new account. Make sure you are pointing to the local :cloud: (check using spark config identify)

Also, try to flash the latest tinker to the core if possible :slight_smile:
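
Roughly, the whole sequence looks like this (a sketch; the profile name and server address are placeholders, and I'm assuming the spark config <profile_name> apiUrl syntax):

sudo npm install -g spark-cli@0.4.94                   # downgrade the CLI
spark config local apiUrl http://your-server-ip:8080   # point a profile at the local cloud
spark config identify                                  # confirm where you're pointing
spark flash --usb tinker                               # with the core in DFU mode
spark login                                            # create the account on the local cloud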

Many thanks, Kenneth.

OK. First, I reflashed the Core with tinker (spark flash --usb tinker), then I reinstalled spark-cli using sudo npm install -g spark-cli@0.4.94 as you said - it took forever, but no drama. Then I set the config to the local cloud. Running spark setup failed the first time - I got the same errors as before:

/usr/local/lib/node_modules/spark-cli/commands/SerialCommand.js:136
			for (var i = 0; i < ports.length; i++) {
			                         ^
TypeError: Cannot read property 'length' of undefined
etc
etc

I then restarted the local cloud and tried spark setup again. This time it created the account, and when I rebooted the local cloud server again it recognised the account and the core is breathing cyan. But I am still getting the decryption errors all the time in the local cloud server's terminal output - about every 25.030 to 25.040 seconds (!). Each time, the Core flashes cyan and then carries on breathing.

I then tried spark list, and it names the core but says it's offline - even though the server output says that the core is online! :confused: When I try a few curl commands they don't even go through - the message that comes back says it cannot find the server, even though the URLs are the same. I gave up.
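
For reference, this is the kind of check I was trying (a sketch; the server address and token are placeholders, 8080 being the port the server reports it is listening on, and I'm assuming the local server implements the same /v1/devices listing as the hosted API):

curl "http://your-server-ip:8080/v1/devices?access_token=YOUR_ACCESS_TOKEN"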

Back on the PC, trying npm install -g spark-cli@0.4.94 was a total disaster, so I reinstalled the regular spark-cli. It works fine with the Spark cloud. However, when I fire up the local server there is no way I can set up an account - it does not give me any token, etc. The Core behaves the same way as with the RPi: breathing and then intermittent flashing, with these messages:

CryptoStream transform error Error: error:06065064:digital envelope routines:EVP
_DecryptFinal_ex:bad decrypt
onSocketData called, but no data sent.
1: Core disconnected: socket close false { coreID: '55ff6d065075555351161787',
  cache_key: '_10',
  duration: 25.085 }
Session ended for _10
Connection from: ::ffff:192.168.1.6, connId: 12
CryptoStream transform error TypeError: Cannot read property 'length' of null
CryptoStream transform error TypeError: Cannot read property 'length' of null
on ready { coreID: '55ff6d065075555351161787',
  ip: '::ffff:192.168.1.6',
  product_id: 0,
  firmware_version: 11,
  cache_key: '_11' }
Core online!
CryptoStream transform error Error: error:06065064:digital envelope routines:EVP
_DecryptFinal_ex:bad decrypt

The "::ffff:192.168.1.6" looks a bit funny?

Since I cannot get an account, I cannot log in, so I cannot try any curl commands to test it.

It used to work… :grimacing:

Any ideas, gratefully received.