Installing personal cloud on a remote server

Hi!

I am attempting to put a copy of Spark-Server up on a remote server. The server is up and running, but I am having difficulty with the authentication/keys during handshaking. The server reports it is “expecting to find a public key in the core_keys directory”. Well, that directory is empty.

I see the problem is that

spark-server/js/default_key.pub.pem

is remote (Finland), but my Core is local (Chicago), and I need to run

spark keys server default_key.pub.pem xxx.xx.xx.xxx

locally with the core plugged into the USB port. So, I thought I could just copy the default_key.pub.pem from the server and run the command locally. Then would I take the new .pub.pem and put it back in the remote server’s core_keys directory?

I guess I need a better explanation/understanding of how this authentication works and what the Spark-CLI interface is actually doing to the server and to the core in the key-creation process.

Any guidance would be appreciated.

Jason

Yup that’s right

So here’s how you can do it

1.) spark keys server default_key.pub.pem domain_url

2.) spark keys save core_id (be sure to replace core_id with your own core ID)

3.) Place the core public key in the core_keys directory on your server

spark keys server replaces the public key and domain name saved in the external flash of your core, which are used during connection, so the core knows which domain to handshake with and which public key to use for encryption.

spark keys save simply copies out the core’s private key and derives a public key, which you then upload to the server.
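
Putting those steps together, the whole round trip looks roughly like this (a sketch rather than verbatim commands; the user name, IP address, paths, and core ID are placeholders, and the scp steps assume you have SSH access to the server):

$ # 1. copy the server public key down to the machine the core plugs into
$ scp user@xxx.xxx.xxx.xxx:~/spark-server/js/default_key.pub.pem .
$ # 2. with the core in DFU mode, write the server key and address to the core
$ spark keys server default_key.pub.pem xxx.xxx.xxx.xxx
$ # 3. read out the core key and derive its public half
$ spark keys save your_core_id
$ # 4. upload the core public key to the server's core_keys directory
$ scp your_core_id.pub.pem user@xxx.xxx.xxx.xxx:~/spark-server/js/core_keys/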

OK, that works! Thank you, that was a concise and clear answer which fixed the issue immediately.

Now, I see that the core is connected on the server side of things; however, when I run spark list I see no cores connected and, further, can’t claim my core (if I even need to do that?).

Thanks again,

Jason


The claiming function is not available on spark-server for the 1st release.

You can try restarting main.js and see if the list appears.

Also, make sure your Spark-cli is pointing to your own server and not the Spark :cloud: :wink:

Yeah, I checked and double-checked that I was pointing to the correct server, and restarted the server a few times. The server sees the core:

Your server IP address is: XX.XX.XX.XX
server started { host: 'localhost', port: 5683 }
Connection from: XX.XX.XX.XX, connId: 1
on ready { coreID: '53ff6bREDACTED572267',
  ip: 'XX.XX.XX.XX',
  product_id: 0,
  firmware_version: 7,
  cache_key: '_0' }
Core online!

and on the local side, via Spark CLI:

Checking with the cloud...
Retrieving cores... (this might take a few seconds)
No cores found.

I will continue to tinker with it!

Thanks,

Jason

How did you point Spark-cli to the server IP address?

There are two ways to do it.

1.) Edit the IP address in the spark.json file in the .spark directory. This is on your laptop.

2.) Add in a new profile using spark config my_server_name apiUrl http://xxx.xxx.xxx

Switch over to that profile using spark config my_server_name.

To switch back to spark :cloud:, the command is spark config spark

The documentation is here: https://github.com/spark/spark-cli#spark-config

If you did switch to your server, the first step required is to log in.
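
If you want to sanity-check the profile by hand, it is just a small JSON file; here is a sketch of what you might expect to see (the exact fields vary by spark-cli version, and the port assumes the server's default of 8080):

$ cat ~/.spark/spark.json
{
  "apiUrl": "http://xxx.xxx.xxx.xxx:8080",
  "access_token": "your_token_here"
}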

Hey!

Almost.

At first I was just changing the local spark-cli config file to point at the new server, but then I tried it your way (thanks for the link). I went back and reinstated the default Spark :cloud: config file and am able to log in and see my cores on the Spark :cloud:, so that is up and working. Thanks!

Now, creating a local config file works, and I can see my local spark login hitting my remote server, but now the authorization token is tweaked (which may have been the problem all along) . . . see this:

xxx.xxx.xxx.xxx - - [Sun, 16 Nov 2014 15:41:02 GMT] "POST /oauth/token HTTP/1.1" 503 83 "-" "-"

So I am trying to figure out how to reset or retrieve my authorization token . . . and that should clear things up.

Thanks,

Jason

Simply delete the access token in the .json file and log in again :slight_smile:
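
Concretely, something like this sketch (the exact file name depends on which profile is active):

$ # open the profile file and delete the access_token entry, e.g.:
$ nano ~/.spark/spark.json
$ spark login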

Ok, that solved that issue. Thank you.

I am able to log on to the server via the CLI, though spark list still returns No cores found. I was also trying some of the curl-style commands (customized for the new server address), but the server kept refusing anything on port 443 (perhaps HTTPS is not implemented in the personal server?). I do know 443 is open on the server.
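
My guess is that the personal server only speaks plain HTTP on its API port (8080) rather than HTTPS on 443, so the curl commands would need to be pointed there; something like this sketch (the /v1/devices path mirrors the Spark Cloud API, and the token is a placeholder):

$ curl "http://xxx.xxx.xxx.xxx:8080/v1/devices?access_token=your_token_here"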

I learned a good deal about keys and tokens today.

Jason

So you mentioned you see the cores listed when you connect to the spark :cloud:?

That means you have not switched the core’s server keys over to your server :wink:
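
One way to check what is actually on the core is to read the server key back out of external flash and compare it with the server’s copy. A sketch, assuming the same DFU layout the CLI uses (server-key slot at 0x00001000, core in DFU mode; openssl should ignore the IP-address bytes the CLI appends after the DER key):

$ # read the 1 KB server-key slot back out of the core
$ dfu-util -d 1d50:607f -a 1 -s 0x00001000:1024 -U server_key_readback.der
$ # convert to PEM and diff against default_key.pub.pem on the server
$ openssl rsa -in server_key_readback.der -inform DER -pubin -pubout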

Well, I thought I had done that, so I went back and started the whole process again. See if you can find the problem:

Log into the server and grab the default_key.pub.pem:

scp jason@xxx.xxx.xxx.xxx:/home/jason/spark-server/js/default_key.pub.pem /Users/jason/Documents/localFolder
jason@xxx.xxx.xxx.xxx's password:
default_key.pub.pem 100% 451 0.4KB/s 00:00

Then write that key to the core:

$ spark keys server default_key.pub.pem xxx.xxx.xxx.xxx
Creating DER format file
running openssl rsa -in default_key.pub.pem -pubin -pubout -outform DER -out default_key.pub.der
running dfu-util -l
checking file default_key.pubXXX_XXX_XXX_XXX.der
spawning dfu-util -d 1d50:607f -a 1 -i 0 -s 0x00001000 -D default_key.pubXXX_XXX_XXX_XXX.der
dfu-util 0.7

Copyright 2005-2008 Weston Schmidt, Harald Welte and OpenMoko Inc.
Copyright 2010-2012 Tormod Volden and Stefan Schmidt
This program is Free Software and has ABSOLUTELY NO WARRANTY
Please report bugs to dfu-util@lists.gnumonks.org

Filter on vendor = 0x1d50 product = 0x607f
Opening DFU capable USB device... ID 1d50:607f
Run-time device DFU version 011a
Found DFU: [1d50:607f] devnum=0, cfg=1, intf=0, alt=1, name="@SPI Flash : SST25x/0x00000000/512*04Kg"
Claiming USB DFU Interface...
Setting Alternate Setting #1 ...
Determining device status: state = dfuERROR, status = 10
dfuERROR, clearing status
Determining device status: state = dfuIDLE, status = 0
dfuIDLE, continuing
DFU mode device DFU version 011a
Device returned transfer size 1024
No valid DFU suffix signature
Warning: File has no DFU suffix
DfuSe interface name: "SPI Flash : SST25x"
Downloading to address = 0x00001000, size = 1024
.
File downloaded successfully
Okay! New keys in place, your core will not restart.

Then I ran spark identify just to be sure I had the correct ID, then ran spark keys save:

$ spark keys save 53ff6b0REDACTED72267
running dfu-util -l
FOUND DFU DEVICE 1d50:607f
running dfu-util -d 1d50:607f -a 1 -s 0x00002000:1024 -U 53ff6bREDACTED72267
running openssl rsa -in 53ff6bREDACTED72267 -inform DER -pubout -out 53ff6bREDACTED72267.pub.pem
Saved!

So, that all looks good . . . I went ahead and reset the WiFi credentials on the core, as they had been overwritten at some point . . . then sent the new public key over to the server:

$ scp 53ff6bREDACTED72267.pub.pem jason@xxxx.xxx.xxx.xxx:/home/jason/spark-server/js/core_keys
jason@xxx.xxx.xxx.xxx's password:
53ff6bREDACTED72267.pub.pem 100% 272 0.3KB/s 00:00

And that completed successfully . . . so I started up the spark-server:

$ node main.js
RolesController - error loading user at /home/jason/spark-server/js/users/npm-debug.log
connect.multipart() will be removed in connect 3.0
visit https://github.com/senchalabs/connect/wiki/Connect-3.0 for alternatives
connect.limit() will be removed in connect 3.0
Starting server, listening on 8080
static class init!
found 53ff6bREDACTED72267 <-- it found it!
Loading server key from default_key.pem
set server key
server public key is: -----BEGIN PUBLIC KEY-----
blah, blah, blah, blah, blah
blah, blah, blah, blah, blah
blah, blah, blah, blah, blah
blah, blah, blah, blah, blah
-----END PUBLIC KEY-----

Your server IP address is: XXX.XXX.XXX.XXX
server started { host: 'localhost', port: 5683 }
Connection from: xxx.xxx.xxx.xxx, connId: 1
on ready { coreID: '53ff6bREDACTED72267',
ip: 'xxx.xxx.xxx.xxx',
product_id: 0,
firmware_version: 7,
cache_key: '_0' }
Core online!

Yay! It's online; let's go over and check spark list on the local machine . . .

$ spark list
Checking with the cloud...
Retrieving cores... (this might take a few seconds)
No cores found.

So, the core is online and yet cannot be found . . . Heisenberg?

Is the core breathing cyan?

1.) You did not send in the server IP address:

spark keys server xxxxx.pub.pem IP_ADD

2.) Is spark-cli pointing to the Spark :cloud: or local :cloud:?

3.) “Found” simply means the core public key was found. :wink:

Hey:

1.) The core is breathing cyan. I can toggle the server off and on, and the core disconnects and reconnects perfectly. The core is connected to the server because the server says “Core online!”

2.) I thought I had sent the server IP address here:

$ spark keys server default_key.pub.pem xxx.xxx.xxx.xxx <--my IP address
Creating DER format file
running openssl rsa -in  default_key.pub.pem -pubin -pubout -outform DER -out default_key.pub.der
running dfu-util -l
3.) The CLI is pointing at my local cloud, and I switch back and forth, checking my core on the Spark Cloud, so this is working as expected.

I am simply trying to follow instructions here and I am getting frustrated, so I am gonna put it aside for a while. I think if I understood the intricacies involved I might have a better shot at it (what goes where and why?). I am going to go back to your original tutorial (in which the core is physically connected to the server at the time of setup). I have performed that tutorial successfully numerous times! :smiley: Maybe I can make the leap to a remote setup that way . . .

Wait, I don’t have to run the Spark-CLI on the remote server, do I?

Argh.

Jason


Simply make sure that spark-cli is using the profile for your local :cloud: and it should be listed :smile:

@kennethlimcp I want to deploy a spark-server on my instance.
After running “particle keys doctor core_id”, I can see that the core_keys dir has this core’s .json and .pem files.
But I must restart my spark-server before I can get the new core’s data from the API or "particle list".
Is it possible to avoid restarting spark-server?