There’s a known issue with subscribing to specific events. For now, simply use the firehose of all events on the local cloud.
I'm trying to get the server running on a Raspberry Pi. There are several issues:
- main.js tells me there are no users. I thought I set myself up as one with spark setup. This might have set me up on the cloud, not locally.
- I was getting the same errors as reinerge, so I tried backing up to an earlier version of Node.js (0.10.36). main.js now crashes with this error:
Caught exception: Error: /spark/spark-server/js/node_modules/ursa/bin/ursaNative.node: undefined symbol: node_module_register
I'm currently compiling Node.js 0.10.29 to see if it will work.
I'm getting close to getting the local server working; any help on these last issues would be great.
Thanks
For question 1:
1.) Create a new server profile in Spark-cli using the command:
spark config profile_name apiUrl "http://DOMAIN_OR_IP"
For the local cloud, the port number 8080 needs to be appended: http://domain_or_ip:8080
This creates a new profile pointing to your server. Switching back to the Spark cloud is simply spark config spark, and other profiles are spark config profile_name.
You can use the command spark config identify to see which profile you are using.
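The profile commands above can be collected into a short shell session (the profile name and IP are placeholders for your own values):

```shell
# Create a profile named "myserver" pointing at the local cloud;
# the local cloud listens on port 8080 (the IP is an example).
spark config myserver apiUrl "http://192.168.1.10:8080"

# Check which profile is currently active.
spark config identify

# Switch back to the hosted Spark cloud, or to another profile.
spark config spark
spark config myserver
```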
2.) Did you follow my tutorial here?
http://gitbook.com/read/book/kennethlimcp/spark-local-cloud-on-raspberry-pi
Hi,
I have followed your tutorial and I can successfully start a local and/or remote Spark server, as well as interact with it as expected via RESTful requests and SparkJS implementations (+/- missing features such as spark core remove, et cetera).
However, I have one problem: it seems that spark list and/or a direct http://[ip]:8080/v1/devices?access_token=[access_token] call will return an empty array until the server process is killed and restarted! (To clarify, it is not always empty; it just does not reflect any devices added since the last server restart, which is equivalent to being 0-length in the case of adding the first device.)
I have tried using spark core add, and also manually SCP’ing the generated key files from spark keys save into the ../spark-server/js/core_keys folder. I can see in the server output that it recognizes /provisioning requests, and I can confirm that the key files are created in /core_keys – but they are not acknowledged until I kill and restart node main.js.
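The manual key-copy route described above might look like this in practice (core ID, user, host, and paths are examples, not the poster's actual values):

```shell
# Save the core's public key locally (core connected over USB):
spark keys save 54ff6b066672524810391167

# Copy it into the server's key directory on the Pi, then restart main.js:
scp 54ff6b066672524810391167.pub.pem pi@10.0.0.11:~/spark-server/js/core_keys/
```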
I should also point out that, even though these device list requests do not correctly indicate the newly added Spark, the Spark does successfully handshake with the server:
Connection from: [core_ip], connId: 2
on ready {
coreID: '[core_id]',
ip: '[core_ip]',
product_id: [product_id],
firmware_version: [firmware_version],
cache_key: '_1' }
Core online!
The core also successfully emits events, which can be seen by using the /events endpoint as well as through SparkJS’s onEvent(). It seems the problem is only with the /devices endpoint.
Obviously this is not very useful, since restarting the server process for every new device (or batch of devices) provisioned is not a sustainable way to operate a service. Is there a way to force the server to refresh the devices it is “aware” of?
Reported this last September… https://github.com/spark/spark-server/issues/28
Aha!! Gotcha.
It looks like the server is working! The Spark is breathing cyan, and it shows up in “spark list”.
Good things!
Now how do I flash it?
I assume I need to compile my code in the cloud and get a firmware_XXX…bin file.
How do I get that to the locally served core?
I forgot to say thanks kenneth. The git book you pointed me to resolved a lot of issues…
One thing, though: the newest npm install did put the ursa module in properly.
Thanks
You need to use the Spark cloud to compile, or compile locally, to get a .bin file.
So the commands are:
1.) spark config spark
2.) spark compile .......
3.) spark config PROFILE_NAME
4.) spark flash CORE_NAME xxxxxxx.bin
You can string them all into a one-line command to make it easier to run.
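For example, the four steps chained into one line (the profile, core name, and binary name are placeholders):

```shell
# Compile against the Spark cloud, switch back to the local profile,
# then flash the resulting binary to the core:
spark config spark && spark compile . && spark config myserver && spark flash my_core firmware_myapp.bin
```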
Hot Damn! It worked!
Thanks!
How do I install the WiFi credentials?
What do you mean?
Simply follow the usual procedure of setting up wifi on the core…
It looks like the wifi credentials are sent via the Spark app from my iPad.
Does the Spark app need a server to work? Will the Spark app on my iPad link to my local spark server?
The usual method worked fine. The core connected to the new network and breathed cyan. The Spark app never finished connecting; I think that makes sense. What is “listen mode”, and how did the app find the core? Also, how does the cloud server find the cores? I don’t have a fixed IP address; I thought that was needed to access a system from the world at large. I know this is a mistaken concept, but I never really thought about it.
I have the server running and 2 cores connected. When I test the connections with this line:
curl "http://10.0.0.11:8080/v1/devices/5...............7/AllValues?access_token=XXXXXXXXXXXXXX"
I get different responses depending on which system issued the command. My Mac gets the data from “AllValues” fine. The Raspberry Pi, which runs Raspbian and is the system running spark-server, gets the following:
{
"code": 400,
"error": "invalid_grant",
"error_description": "The access token provided is invalid."
}
Along with the error, the Spark Core is knocked offline for a while.
I’ve checked the config files in ~/.spark and they are identical.
Where else should I look? I’ve tried using localhost instead of the IP address; that gave the same error.
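One thing worth ruling out is a stale or mismatched token on the Pi. Assuming the local cloud exposes the same OAuth token endpoint as the hosted cloud, with spark-server's default client credentials spark:spark (the username and password being whatever was created with spark setup), a fresh token can be requested directly from the Pi:

```shell
# Request a fresh access token from the local cloud (placeholders throughout):
curl "http://10.0.0.11:8080/oauth/token" \
     -u spark:spark \
     -d grant_type=password \
     -d username=me@example.com \
     -d password=my_password
# The JSON response should include an "access_token" value; retry the
# /v1/devices call from the Pi with that token to see if it behaves.
```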
Hi @kennethlimcp I was downloading a new copy of spark-server today for another install and noticed that the spark-server/js subdirectory was gone. It makes things more streamlined with main.js being in the parent. Just thought I’d make note of it until you get around to tweaking your great tutorial once again.
@techbutler, thanks!
I have updated the instructions and also added in newer information that allows you to check which profile Spark-cli is pointing to.
Let me know if you see more issues
Just the same small tweak in the README.md at https://github.com/spark/spark-server
@kennethlimcp, I’ve been having trouble getting the local cloud working although it seems very close.
I’m running on a Mac (Mavericks) with Node v0.12.0.
I have made an account on my local cloud, loaded the keys onto my core per your tutorial, and did spark keys save as well. When I plug my core in, I get this output from my server:
Connection from: ::ffff:10.0.0.46, connId: 2
CryptoStream transform error TypeError: Cannot read property 'length' of null
CryptoStream transform error Error: error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt
CryptoStream transform error TypeError: Cannot read property 'length' of null
on ready { coreID: '54ff6b066672524810391167',
ip: '::ffff:10.0.0.46',
product_id: 0,
firmware_version: 11,
cache_key: '_1' }
Core online!
onSocketData called, but no data sent.
1: Core disconnected: socket close false { coreID: '54ff6b066672524810391167',
cache_key: '_1',
duration: 25.046 }
Session ended for _1
Connection from: ::ffff:10.0.0.46, connId: 3
CryptoStream transform error TypeError: Cannot read property 'length' of null
CryptoStream transform error Error: error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt
CryptoStream transform error TypeError: Cannot read property 'length' of null
on ready { coreID: '54ff6b066672524810391167',
ip: '::ffff:10.0.0.46',
product_id: 0,
firmware_version: 11,
cache_key: '_2' }
Core online!
onSocketData called, but no data sent.
1: Core disconnected: socket close false { coreID: '54ff6b066672524810391167',
cache_key: '_2',
duration: 25.086 }
Session ended for _2
Connection from: ::ffff:10.0.0.46, connId: 4
CryptoStream transform error TypeError: Cannot read property 'length' of null
CryptoStream transform error Error: error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt
CryptoStream transform error TypeError: Cannot read property 'length' of null
on ready { coreID: '54ff6b066672524810391167',
ip: '::ffff:10.0.0.46',
product_id: 0,
firmware_version: 11,
cache_key: '_3' }
Core online!
onSocketData called, but no data sent.
1: Core disconnected: socket close false { coreID: '54ff6b066672524810391167',
cache_key: '_3',
duration: 25.086 }
Session ended for _3
Connection from: ::ffff:10.0.0.46, connId: 5
CryptoStream transform error TypeError: Cannot read property 'length' of null
CryptoStream transform error TypeError: Cannot read property 'length' of null
on ready { coreID: '54ff6b066672524810391167',
ip: '::ffff:10.0.0.46',
product_id: 0,
firmware_version: 11,
cache_key: '_4' }
Core online!
CryptoStream transform error Error: error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt
onSocketData called, but no data sent.
1: Core disconnected: socket close false { coreID: '54ff6b066672524810391167',
cache_key: '_4',
duration: 25.079 }
Session ended for _4
Connection from: ::ffff:10.0.0.46, connId: 6
CryptoStream transform error TypeError: Cannot read property 'length' of null
CryptoStream transform error TypeError: Cannot read property 'length' of null
on ready { coreID: '54ff6b066672524810391167',
ip: '::ffff:10.0.0.46',
product_id: 0,
firmware_version: 11,
cache_key: '_5' }
Core online!
CryptoStream transform error Error: error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt
onSocketData called, but no data sent.
1: Core disconnected: socket close false { coreID: '54ff6b066672524810391167',
cache_key: '_5',
duration: 26.041 }
Session ended for _5
Connection from: ::ffff:10.0.0.46, connId: 7
CryptoStream transform error TypeError: Cannot read property 'length' of null
CryptoStream transform error TypeError: Cannot read property 'length' of null
on ready { coreID: '54ff6b066672524810391167',
ip: '::ffff:10.0.0.46',
product_id: 0,
firmware_version: 11,
cache_key: '_6' }
Core online!
From the looks of it, the core keeps connecting over and over, each time hitting “CryptoStream transform error TypeError”.
Any ideas as to why this might be happening?
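Those “bad decrypt” CryptoStream errors usually indicate a key mismatch: the public key the server has on file for the core does not match the key the core is actually using. A minimal sketch of the recovery steps, assuming the directory layout from the tutorial (the core ID comes from the log above; paths are assumptions, not verified against this install):

```shell
# The server looks up each connecting core's public key by file name:
#   core_keys/<core_id>.pub.pem
CORE_ID="54ff6b066672524810391167"
CORE_KEYS_DIR="./core_keys"   # spark-server/js/core_keys in the tutorial

mkdir -p "$CORE_KEYS_DIR"

# Remove any stale key so a freshly saved one can take its place:
rm -f "$CORE_KEYS_DIR/$CORE_ID.pub.pem"

# Then, with the core in DFU mode, re-save and install its key and
# restart the server (shown for reference, not run here):
#   spark keys save $CORE_ID
#   cp $CORE_ID.pub.pem $CORE_KEYS_DIR/
#   node main.js
```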