I also found that starting the local cloud like this: 'node ./spark-server/js/main.js' causes it to create new keys (if none are found) in the directory you are currently in, which was obviously causing crypto errors when I was trying to get my cores to connect.
A server public key is generated the first time the local server runs, and that should be the same key you use to overwrite the one in your core.
I'm not sure how this is causing issues for you, unless you self-generated a server key?
Let's say I'm in /home/pi and I do 'node ./spark-server/js/main.js': it ignores the keys it already made in /home/pi/spark-server/js and creates a new set for the server in /home/pi.
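The workaround is to always start the server from inside its own directory, so it loads the keys it generated earlier. A minimal sketch:
cd /home/pi/spark-server/js   # keys are created/loaded relative to the current directory
node main.js                  # now the server finds the existing key pair instead of making a new one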
Hi Kenneth,
Thanks for this remarkable (and labyrinthine) tutorial. I got through it all and re-programmed my Core, but I had a couple of (unrelated) problems:
(i) whenever it came to putting the Core into DFU mode and re-programming it, the command failed because dfu-util wasn't found. I solved this by making copies of it (and libusb.dll) in whatever folder I was working in. I re-checked the PATH environment variables and the path to dfu-util is there OK, but it doesn't seem to work as it should;
(ii) my Core connected to the new server ok, once I restarted the server (“node main.js”), but then kept disconnecting and reconnecting. Eventually, a red-led SOS came up with a HARD FAULT and I decided to call it a day - just too tired - and I’ll try again tomorrow.
I had a couple of questions:
(a) is my problem in (i) a common problem and can it be fixed?
(b) once I do get the core properly connected on the local Cloud, will I be able to use the spark-cli functions as before, and will curl commands to the Core still work in the usual way: i.e. do I log in under the same log in email and password, and will my access key still be valid?
(c) I am somewhat confused by the (huge) server public key - do I have to do anything about this?
Many thanks, again, for a remarkable tutorial.
Glad the tutorial was helpful! That's the purpose of writing it, anyway.
1.) Sounds like you are on Windows. Just make sure the path to DFU-UTIL is added to the PATH for the command prompt: http://stackoverflow.com/questions/9546324/adding-directory-to-path-environment-variable-in-windows
2.) The same behaviour will be observed if the Spark server goes offline, but there is redundancy built in, so this is uncommon. Once that occurs, the core should automatically come back online once the server is back up. I have tested that behavior, so I would like to know what program your core is running. If you are on the latest Tinker firmware it should work fine; same goes for a new program compiled via the Spark build farm.
3.) Most functions are available except…Multi-user support and cloud compilation. You can see the full list here: https://github.com/spark/spark-server#what-features-are-currently-present
4.) Are you referring to the server public key printed on the console during node main.js? That's fine, so no need to worry about it!
5.) Spark-cli will work the same, except that you will need to switch profiles like I mentioned in the tutorial. You can create a new account with basically any email and password; they do not need to match your Spark Cloud credentials. You are essentially running an entirely new cloud, so everything is fresh and new.
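For example, something like this should do it (an untested sketch; "local" is just a placeholder profile name):
spark config local apiUrl "http://DOMAIN_OR_IP"   # create a profile pointing at your server
spark config local                                # switch the CLI over to that profile
spark setup                                       # then create a fresh account on the local cloud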
Have fun
Kenneth,
Many thanks.
1. This problem was really simple once I used the PATH command and looked at the path string: there was a reference to dfu-util.exe earlier in the path string which related to a previous installation and was no longer valid. Deleted that and all is fine.
2. Core reprogrammed with the new server, breathing beautifully, and functioning normally as far as I can tell. I am running a really simple program that detects the output of a PIR sensor, flashes the on-board LED if there is a change in state, and sends a publish message at the same time:
int previous = 0;
int current = 0;

void setup() {
    pinMode(D0, INPUT);    // PIR sensor output
    pinMode(D7, OUTPUT);   // on-board LED
    current = digitalRead(D0);
    previous = current;
}

void loop() {
    current = digitalRead(D0);
    if (previous != current) {      // PIR state changed
        Spark.publish("movement");  // notify the cloud
        previous = current;
        digitalWrite(D7, HIGH);     // blink the LED for half a second
        delay(500);
        digitalWrite(D7, LOW);
    }
    delay(100);
}
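To watch those "movement" publishes from the CLI, the subscribe command should work (a sketch, assuming your spark-cli build includes it and the local server supports events):
spark subscribe movement    # streams matching published events to the terminal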
5. I have one final problem: I cannot seem to log in via 'spark login'. My old login details from the spark config are fine for that config, but when I switch to the new config the login fails. I actually have two computers running: the laptop is running the normal Spark Cloud and pulling data off one core (k1), and this computer is running the new server configuration and is connected to the second core (k2). The laptop has me logged in under my standard Spark login. How do I create a new login for the new server configuration?
Many thanks, again.
Roger
A couple of things:
1. Here is the screen print once I start the server ("node main.js"):
You’ll see that there’s an IP address: 192.168.1.101, and also references to 192.168.1.9. Which one do I use when setting the new key?
2. When it comes to using a curl HTTP request, the Spark Cloud one looks like this:
curl -s -k https://api.spark.io/v1/devices/k1/pot?access_token=XXXXXXXXXXXXXXXXXXXXXXX etc
(where ‘k1’ is the core name, and ‘pot’ a spark variable)
what would change for a local cloud request?
Thanks,
R
Perform a spark logout and answer no for access token removal. Use spark config identify to see which profile you are currently on and whether it's pointing to the local profile. There is also spark config list to figure out what profiles you have created. If you want to be safe, delete all the profile files except spark.json and recreate them using spark config profile_name apiUrl "http://DOMAIN_OR_IP"
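For instance (a sketch; spark-cli keeps its profile files in ~/.spark/, and "ks" here is just a placeholder profile name):
ls ~/.spark/                                     # spark.json plus one .config.json per profile
rm ~/.spark/ks.config.json                       # remove a stale profile (keep spark.json)
spark config ks apiUrl "http://192.168.1.101"    # recreate it pointing at the local server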
The only changes for the same request are the domain name or IP address, and the access token.
The line that says server IP address is the one. The other refers to the core IP address!
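Concretely, the same request against both clouds looks like this (a sketch; note the local API also needs its port, which is 8080 by default):
curl -s -k "https://api.spark.io/v1/devices/k1/pot?access_token=XXXX"    # Spark Cloud
curl "http://192.168.1.101:8080/v1/devices/k1/pot?access_token=XXXX"     # local cloud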
Thanks. Did what you said, then went through a complete 'spark setup' route and created a new account (it took me a while because the interaction for setting up a new account isn't very user-friendly!). Now busy subscribing to the core. It's interesting that when using the Spark Cloud server the commands 'spark config list/identify' don't return anything - at least on my PC.
Regarding the changes to the request line: no problem changing the access token (I just got a new one from the setup process), but to be clear on what changes for the domain name:
Before:
https://api.spark.io/v1/devices/k1/pot?access_token=
Now:
https://192.168.1.101/v1/devices/k1/pot?access_token=
Is that correct?
That's right! I guess the PR was merged recently, so perform an update using sudo npm update -g spark-cli
I have made some progress: got a Core working, breathing merrily, on my new local server and responding to spark commands, but I have got stuck on using curl. I have a core running a sketch which reads an LDR and gives a reading to a spark variable called ‘pot’. The core is called ‘k1’.
The original curl command when I was using the Spark Cloud looked like this:
curl -s -k https://api.spark.io/v1/devices/k1/pot?access_token=XXXXXXXXXXXXXXXXXXXXXXXX
and it worked fine, pulling up the value of ‘pot’.
For the local Cloud I replaced it with
curl -s -k http://192.168.1.101/v1/devices/k1/pot?access_token=XXXXXXXXXXXXXXXXXXXXX
taking the access key from the original login after I switched to the local server (called ‘ks’) which is the same as the one in the profile ks.config.json
The result was that there was no output at all (I tried both https and http) - it just popped up with a new command prompt. I removed the -s and -k from the command:
curl http://192.168.1.101/v1/devices/k1/pot?access_token=XXXXXXXXXXXXXXXXXXXXX
responded with:
curl: (7) Failed to connect to 192.168.1.101 port 80: Connection refused
Clearly I'm missing something in the URL section. Can you help, please?
Many thanks,
Roger
I solved the problem. The correct curl was this:
curl http://192.168.1.101:8080/v1/devices/k1/pot?access_token=XXXXXXXXXXXXXXX
Which mirrors what was in the CLI profile config file (ks.config.json):
{
"apiUrl": "http://192.168.1.101:8080",
"username": "exciting@emailaddress.com",
"access_token": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX"
}
All the best,
Roger
I set this up again today, and where it says http://DOMAIN_OR_IP/ in step 1, I put in http://192.168.1.102 - but I had to add the port 8080 for it to work, i.e. http://192.168.1.102:8080.
In some ways, blindly following instructions feels like driving with your eyes closed, but on the other hand, if I’m cognizant and start making changes, then I’m not following the instructions!
Thanks for the feedback! I have added an extra note for that
For the last step, when I ran node main.js, I got the error:
Caught exception: Error: listen EADDRINUSE{"code":"EADDRINUSE","errno":"EADDRINUSE","syscall":"listen"}
something blew up { '0': { [Error: listen EADDRINUSE] code: 'EADDRINUSE', errno: 'EADDRINUSE', syscall: 'listen' } }
I followed every step, but I did not see
Connection from: 192.168.1.159, connId: 1 on ready { coreID: '48ff6a065067555008342387',
ip: '192.168.1.159',
product_id: 65535,
firmware_version: 65535,
cache_key: undefined }
Core online!
I checked, and there is a process on port 8080.
You will need to kill that process and run the node app again, or change the port to another one.
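One way to find and stop the culprit on Linux or OS X (a sketch):
sudo lsof -i :8080    # list the process bound to port 8080
kill <PID>            # stop it, using the PID from the previous command
node main.js          # then start the spark-server again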
Do I need to change the spark-server code if I change the port to another one?
Do you mean I should first run node main.js, keep it running once everything is set up, then open another terminal and run node main.js with another port?
Sorry for the lack of useful information, as I was replying via mobile.
1.) There's some other process running on port 8080, so the spark-server did not manage to fire up. If you are comfortable looking up what is running on port 8080 and turning it off, that would be great!
2.) You can change the port to something else for testing here: https://github.com/spark/spark-server/blob/650275feadaa5a5c28415df485ad3073e10b4e97/js/main.js#L97
But make sure that your Spark-cli config file is updated as well (see the sketch after this list).
3.) The console log output that you should see if everything runs smoothly is:
Loading user ken@lc.com
connect.multipart() will be removed in connect 3.0
visit https://github.com/senchalabs/connect/wiki/Connect-3.0 for alternatives
connect.limit() will be removed in connect 3.0
Starting server, listening on 8080
static class init!
found 48ff6a065067555008342387
found 48ff6a065067555008342387
found 53ff65065067544816420487
found 53ff65065067544816420487
found 53ff6f065075535135261687
found 53ff6f065075535135261687
found 55ff6e065075555329461687
Loading server key from default_key.pem
set server key
server public key is: -----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0I98joL/q7+M9jdvmek3
b3M9T4GpoT1ruIplrQwmw7yJrR/UHI3AGOGR1xS/LGyEYneWWa3O+s2wZWbY3Ahw
16RAVVYc10k50wIphtYs1ktEiAVsCaCz4rDhFr1Pfkh6Kcb2TmP6dWRNjyTd4Wuh
pkmfRiilUQjY+AfRKEkHZrnkzHfdMCj747RiB6gxE0biprvZN+DdSSajUq1Ju3D9
MgBPh1RfvS0iamv3DFpN3X6u38VskM8MXjfMXnnn6rrUeAyxjiV5NxeknHwz/KMt
mHRgFXwY1qPfhfrxZO1PRreSdNOjjBdLRS7hed2CJLXlGyJIV34nJWr7DsfldnTN
wQIDAQAB
-----END PUBLIC KEY-----
Your server IP address is: 192.168.1.143
server started { host: 'localhost', port: 5683 }
The node app will run forever until you stop it or close the terminal.
4.) Only one copy of main.js needs to be running, so there's no need for an additional terminal, etc.
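If you do go the port-change route from point 2, a sketch of keeping the CLI profile in sync ("local" is a placeholder profile name and 8081 an example port):
# after editing the port in spark-server/js/main.js from 8080 to, say, 8081:
spark config local apiUrl "http://192.168.1.143:8081"   # point the CLI at the new port
node main.js   # should now log: Starting server, listening on 8081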
I did the same steps as the tutorial.
I flashed the server key to the core using
spark keys server default_key.pub.pem IP_ADDRESS
However, after hitting the RST button, the core is flashing a blue light. So I ran node main.js, but it never shows the following information:
Connection from: 192.168.1.159, connId: 1 on ready { coreID: '48ff6a065067555008342387',
ip: '192.168.1.159',
product_id: 65535,
firmware_version: 65535,
cache_key: undefined }
Core online!
It only shows
.......
Your server IP address is: 192.168.1.143
server started { host: 'localhost', port: 5683 }
What confuses me is step 7: first you have to make sure spark-server is running, then open a separate CMD, cd to spark-server/js, and run node main.js. Does that mean there are two terminals running spark-server?
Thank you so much! @kennethlimcp
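A note on the flashing blue LED above: on the Core that indicates listening mode, i.e. no Wi-Fi credentials are stored, so it never even attempts to reach the server. A sketch of re-entering them (assuming your spark-cli version includes the serial Wi-Fi setup command):
spark setup wifi    # with the Core connected over USB and flashing blue, enter your SSID and password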