Local Cloud on Raspberry Pi

Ya, it is in dfu mode and it gives me a few lines of ‘#’ until completion.

Hmmm. Can you show what files you have in the js and core_keys folder?

1.) The default_key.pub.pem might be missing in the js folder

2.) I have no idea, since the server will self-resolve if a file is not found.

3.) Maybe the flashing of server public key to the core wasn’t successful
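To check (1) and the core_keys contents quickly, a throwaway shell checker along these lines works. The paths and filenames here are taken from this thread, not from any official layout, so adjust SERVER_DIR to wherever your server actually lives:

```shell
# Hypothetical layout checker -- filenames per this thread; adjust as needed.
SERVER_DIR=${SERVER_DIR:-spark-server/js}
missing=""
for f in default_key.pem default_key.pub.pem; do
  if [ -f "$SERVER_DIR/$f" ]; then
    echo "found $f"
  else
    echo "MISSING $f"
    missing="$missing $f"
  fi
done
# each provisioned core should contribute one <core_id>.pub.pem here:
if ls "$SERVER_DIR/core_keys"/*.pub.pem >/dev/null 2>&1; then
  echo "core keys present"
else
  echo "no core keys yet"
fi
```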

  1. The default_key.pub.pem is present in the /js folder. No issues on the server/cloud side. The /core_keys dir is (let’s say) empty when created.

  2. I flashed the default_key.pub.pem (which is the public key of the cloud I guess). Worked fine.
    Command: spark keys server default_key.pub.pem <Cloud_IP>

  3. Now to populate the core_keys directory, I will need a public key of the core to reside inside the core_keys folder. Correct me if I am wrong here.
    So to generate the core’s public key we need to extract it from the core using dfu-util, and here’s where I am facing the problem. The command “spark keys save <core_id>” creates a file named <core_id>, but the server doesn’t recognize it and keeps looking for a .pub.pem extension. If I instead run “spark keys save <core_id>.pub.pem”, the server stops looking for the file but says it’s an invalid public key.

    So can you list out the files present in your “/spark-server/js” and “/spark-server/js/core_keys” folders, as well as the /.spark/spark.conf.json file. Thanks.
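For what it’s worth, the DER-to-PEM conversion involved here can be reproduced with openssl alone. This is only a sketch: the keypair below is fabricated purely for illustration (on real hardware the key material comes out of the core via dfu-util / spark keys save), and the core ID is the example one from this thread:

```shell
CORE_ID=55ff6b065075555332071787   # example ID from this thread; use your own

# stand-in keypair -- on a real core this blob is read out over dfu-util
openssl genrsa -out "$CORE_ID.pem" 1024 2>/dev/null
openssl rsa -in "$CORE_ID.pem" -pubout -outform DER -out "$CORE_ID.pub.der" 2>/dev/null

# the server expects PEM, named <core_id>.pub.pem, inside core_keys/:
openssl rsa -pubin -inform DER -in "$CORE_ID.pub.der" -pubout -out "$CORE_ID.pub.pem" 2>/dev/null
head -1 "$CORE_ID.pub.pem"   # -----BEGIN PUBLIC KEY-----
```

If the server still calls a key invalid, dumping it with openssl rsa -pubin -in <file> -text -noout and comparing against a known-good key is a quick way to see whether the file is DER, PEM, or truncated.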

Also I would appreciate your help in another matter.

  1. If connected to the Spark cloud, the core breathes CYAN but I cannot ping the core. Why is that? All the applications work fine, including TCP applications.
  2. If I include “spark_disable_cloud.h”, the core doesn’t connect to the cloud, breathes GREEN, and I can ping the core now. But none of the network-related applications, such as MQTT or TCP apps, seem to work. I am not able to figure it out. Any idea…

Thanks @kennethlimcp

1.) Can you check your spark-cli version? spark --version

2.) If your core is patched with the latest CC3000 version 1.2.9, the ping function is disabled by TI. We need to work with them to add it back.

3.) The behavior you describe with the cloud disabled is a little weird.

I would like to test and see what’s going on with ping.

Are you free? I can troubleshoot the issue offline for you :slight_smile:


@gaurav,

I did a test and it seems like you might not have the latest CC3000 patch on?

Here’s the ping result with this firmware:

#include "spark_disable_cloud.h"

void setup()
{
    pinMode(D7, OUTPUT);
    digitalWrite(D7, HIGH);
}

Ping doesn’t work for me at all whether the core is connected to the LC or not :wink:


Sure we can take it offline. It’s going a bit off-topic. I do have a few screenshots for you.

Thanks.

Hi @gauravptalukdar,

If you have the latest spark-cli, and the local cloud, you can also get your key in place with the keys doctor:

# connect your core in dfu mode:
spark keys doctor your_core_id

You might need to update your copy of the local cloud with something like:

cd spark-server/
git pull

:smile:
Thanks,
David

Hey @Dave and @kennethlimcp, well, I got the core connected to the local cloud. Thanks for the help. But there is this one thing: it disconnects and reconnects very frequently. Any advice on that? Here’s the output:

Connection from: 192.168.1.103, connId: 1
on ready { coreID: '55ff6b065075555332071787',
ip: '192.168.1.103',
product_id: 0,
firmware_version: 6,
cache_key: undefined }
Core online!
Connection from: 192.168.1.103, connId: 2
on ready { coreID: '55ff6b065075555332071787',
ip: '192.168.1.103',
product_id: 0,
firmware_version: 6,
cache_key: undefined }
Core online!
Connection from: 192.168.1.103, connId: 3
on ready { coreID: '55ff6b065075555332071787',
ip: '192.168.1.103',
product_id: 0,
firmware_version: 6,
cache_key: undefined }
Core online!

Hi @gauravptalukdar,

There are a lot of reasons why this could happen. Can you give us some more info about your setup, what you’re doing when the core disconnects, type of core, etc, etc. :slight_smile:

Thanks,
David

@Dave, I posted about this on github as well.

Using any user firmware other than tinker exhibits this behavior. It’s weird but you can test it out :stuck_out_tongue:

We’ve just seen SOS flashes from a few different users using the Local :cloud:

Exactly @kennethlimcp. Getting the tinker app onboard gets the core connected immediately, but any other application exhibits this behavior. Though sometimes it eventually gets connected after around 20 to 40 attempts, with the following output:

Connection from: 192.168.1.103, connId: 1
on ready { coreID: '55ff6b065075555332071787',
ip: '192.168.1.103',
product_id: 65535,
firmware_version: 65535,
cache_key: undefined }
Core online!
routeMessage got a NULL coap message { coreID: '55ff6b065075555332071787' }
got counter 45466 expecting 45465 { coreID: '55ff6b065075555332071787' }
1: Core disconnected: Bad Counter { coreID: '55ff6b065075555332071787',
cache_key: undefined,
duration: 0.067 }
Session ended for 1
Connection from: 192.168.1.103, connId: 2
on ready { coreID: '55ff6b065075555332071787',
ip: '192.168.1.103',
product_id: 65535,
firmware_version: 65535,
cache_key: undefined }
Core online!
routeMessage got a NULL coap message { coreID: '55ff6b065075555332071787' }
got counter 22124 expecting 22123 { coreID: '55ff6b065075555332071787' }
1: Core disconnected: Bad Counter { coreID: '55ff6b065075555332071787',
cache_key: undefined,
duration: 0.032 }
Session ended for 2
Connection from: 192.168.1.103, connId: 3
on ready { coreID: '55ff6b065075555332071787',
ip: '192.168.1.103',
product_id: 65535,
firmware_version: 65535,
cache_key: undefined }
Core online!
routeMessage got a NULL coap message { coreID: '55ff6b065075555332071787' }
got counter 22040 expecting 22039 { coreID: '55ff6b065075555332071787' }
1: Core disconnected: Bad Counter { coreID: '55ff6b065075555332071787',
cache_key: undefined,
duration: 0.024 }
Session ended for 3
Connection from: 192.168.1.103, connId: 4
on ready { coreID: '55ff6b065075555332071787',
ip: '192.168.1.103',
product_id: 65535,
firmware_version: 65535,
cache_key: undefined }
Core online!

And phew!! It finally gets connected. I guess it’s that incremented counter value that is received.

-Gaurav

Aha! I bet I know what this is! Race condition in the GetTime post-handshake. Probably isn’t normally seen due to small background latency, comes out in local cloud… 1 sec…

edit: posted issue for this here https://github.com/spark/spark-server/issues/18

If you’re compiling locally, can you try commenting out the send_time_request line here?

Thanks,
David

Didn’t help @Dave

I will try to dig into it a little bit.

-Gaurav

Hi @gauravptalukdar,

Oh, okay, cool! Commenting out that line and doing a fresh build / installing that firmware didn’t help, that’s good to know! Hmm. I’m guessing some other race then. The server error logs you’re seeing essentially say it’s getting a message too early, that another message should have been received before it. Can you edit the SparkCore module, and add a line for me, and send those new logs?

Right after this line (the “got counter, expecting” one):

Add this line:

 console.log("core got message of type " + msg._type + " with token " + msg.getTokenString() + " " + messages.getRequestType(msg));

Thanks!
David

Hi @Dave, sorry, I have no idea where to make this change. :expressionless: Is this .js module in the installed spark server? I can only see the spark protocol source files in the core-communication library.

I think it should be spark-server/js/node_modules/spark-protocol/js/clients/SparkCore.js

Does anyone know how I can view the console.log output if I’m firing up main.js on startup?

Previously, I manually spun up the RPi and ran screen node main.js.

This allowed me to log back into that screen session and view the logs.

What happens if it was running on startup? :slight_smile:

Hi @kennethlimcp,

Good question! A common solution is to pipe the output to a file, you can throw something like this in your startup script:

/usr/local/bin/node main.js >> /var/log/my-server.log

And then you can watch it in realtime with:

tail -F /var/log/my-server.log

Thanks,
David
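One caveat with the >> redirect above: it only captures stdout, so any crash traces the server writes to stderr vanish. A variant with 2>&1 and tee keeps both streams in the file and gives you a live view at the same time. Demonstrated here with a stand-in command, since main.js obviously isn’t present outside the Pi:

```shell
# stand-in for "node main.js": one line on stdout, one on stderr
( echo "Core online!"; echo "something failed" >&2 ) 2>&1 | tee my-server.log

# the log file now holds both streams:
grep -c . my-server.log   # -> 2
```

The same pattern works in the startup script: node main.js 2>&1 | tee -a /var/log/my-server.log.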


So there’s no way to enter the process and watch the “live feed”? :slight_smile:

Hi @kennethlimcp and @Dave,
Quick silly question: I’m trying to use curl to interact with the local cloud, but I keep getting “The access token provided is invalid.” I use curl to interact with the Spark API all the time, but it’s not working for my local server.
Is there anything I should consider?
Thanks a lot!
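One likely cause, offered as a guess rather than an authoritative answer: access tokens are per-cloud, so a token minted by the hosted Spark :cloud: won’t validate against a local server. Assuming the local server mirrors the public Spark API’s oauth flow (the /oauth/token path and the spark:spark client credentials below come from the public API docs of this era and are not verified against spark-server), a fresh token would be requested roughly like this:

```shell
# Template, not runnable as-is: substitute your server's IP and the
# credentials you created against the local cloud (e.g. via spark setup).
curl http://<server-ip>:8080/oauth/token \
  -u spark:spark \
  -d grant_type=password \
  -d username=<your-username> \
  -d password=<your-password>
```

The access_token field of the JSON response is what subsequent curl calls against the local API would use.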