Undocumented flash sequence after connecting to network: rapid red, rapid cyan, rapid yellow, repeat

That’s what I tried initially - see the post above Dave’s.

Can you try running the command prompt as administrator?

Same thing:

…and with --force…

To be honest, I am not sure I really understand the ‘keys’ thing. To the best of my knowledge there’s a pair of keys: one on the server and one on the core, and they have to match in some way. I tried to reprogram this core’s keys for a local server, and whilst it programmed OK, it didn’t connect: it went to the rapid orange/red/cyan flashing stage. It only seems to be this core: its brother sits happily next to it, breathing away on the public or local cloud…

It sounds like an openssl issue to me…

Are you using the latest Spark-cli? Use spark --version to check :slight_smile:

version comes back with: 0.4.94

I don’t think it’s SSL. As I mentioned at the start, I have flashed another core between the local cloud and the public cloud with no problem: it’s just this core that’s the pest.

@rblott,

sorry for not being detailed enough with the help. Been hacking things together for the past few nights

Can we do this:

1.) spark keys new

2.) Place core in DFU mode

3.) spark keys load core.pub.pem

If this step fails use this: dfu-util -d 1d50:607f -a 1 -s 0x02000:leave -D core.der

4.) spark keys send core_id core.der
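If it helps, the whole sequence can be sketched as one script. This is only a dry run (it prints each command rather than executing it, so you can review the sequence first); core_id is a placeholder for your actual core ID, and I’ve used core.der for the load step on the assumption that it’s the same file the dfu-util fallback flashes:

```shell
#!/bin/sh
# Dry-run sketch of steps 1-4 above. "run" just echoes each command;
# change it to  run() { "$@"; }  to actually execute them.
run() { echo "+ $*"; }

run spark keys new                    # 1) writes core.pem, core.pub.pem, core.der
                                      # 2) put the core in DFU mode by hand here
run spark keys load core.der          # 3) load the key onto the core, or fall back to:
run dfu-util -d 1d50:607f -a 1 -s 0x02000:leave -D core.der
run spark keys send core_id core.pub.pem   # 4) send the public key to the cloud
```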

Thanks, Kenneth. Here’s what happened:

1 OK
2 OK
3 “spark keys…” command failed, so tried “dfu-util…” which worked
4 Failed

See screen shot:

I am still trying to work out this keys business. This is what I am assuming happens with the various steps:

1 “spark keys new” creates a new key. Without any argument, as in this case, it creates new key files called “core.pub.pem”, “core.pem”, and “core.der”. The first two are “.pem” files, and the latter is a DER-encoded version.

3 This loads the newly created “core.pub.pem” or “core.der” file into the core. When the core connects it uses this key combined with its core ID to match a key sent to the server in step 4.

4 This sends a key, tied to the core ID, to the server so that it matches the key the core will use to connect.

Am I broadly on the right track?
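To test my understanding, I tried to reproduce the three files with plain openssl. This is just my guess at what “spark keys new” does under the hood (I believe the core uses a 1024-bit RSA key, but that’s an assumption):

```shell
# Private key in PEM form -- what I think "core.pem" is
openssl genrsa -out core.pem 1024

# The matching public key -- presumably "core.pub.pem", the half the server keeps
openssl rsa -in core.pem -pubout -out core.pub.pem

# The same private key, DER-encoded -- presumably "core.der",
# the binary form that gets flashed onto the core itself
openssl rsa -in core.pem -outform DER -out core.der
```

If that’s right, then core.pem and core.der are the same private key in two encodings, and core.pub.pem is its public half rather than a separate certificate.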

Many thanks, again.
Roger

@rblott,

I’m really happy to hear that dfu-util worked! My workaround suggestion did work! :dancers:!!!

Alright, so my bad on Step 4; I wrote everything from memory.

It should be spark keys send core.pub.pem instead as mentioned here: https://github.com/spark/spark-cli#spark-keys-send

Once you do that, it should work fine. If it still doesn’t, we will reflash the server keys and I’m 99% confident it will work! :smiley:

Thanks Kenneth - I’ll have to try this out later.

Is my commentary on how this works about right? I am planning to set up the local server on a Raspberry Pi and will have to flash the keys into the core on my windows machine because I can’t get dfu-util to work on the Pi. So, I really need to understand how this process works to ensure I get the keys right.

Thanks, again.
Roger


The process sounds correct, and dfu-util and nodejs work well on the RPi, so there’s no reason why you can’t use them :smiley: :smiley:

OK, here’s the latest:

1 I did what you suggested two posts ago and ran step 4 with “spark keys send core.pub.pem”, and it went fine. Reset the core and made sure it was configured for the public cloud, etc. It flashed the normal green, then a blink of cyan, then three slow RED flashes, then rapid cyan for about 10 seconds ending in a single cyan ‘breath’, before flashing green again. Not ideal.

2 I checked this condition out on the community and it seemed to indicate that it was a server connection problem, so I decided to fire up the local cloud, and re-program the core keys, but before doing this I started all over again with a new local cloud (erased all the old stuff and re-loaded the server into a new folder).

3 Having re-programmed the key into the core, it connected and started breathing cyan: it worked! So I seem to have the core (and its better-behaved cousin) breathing nicely on the local cloud.

4 I then tried to shift them back to the cloud using your tutorial above, but this didn’t work for either. I’ll leave them where they are for now and try again tomorrow.

Regarding the RPi: I got the node software installed and ran a server OK, but struggled with dfu-util. I used the “Quick Install on a Raspberry Pi” from GitHub (dmiddlecamp) and downloaded the dfu application from the alternative link (https://s3.amazonaws…etc), ran the tar xvf command, and then cd’d into dfu-util-binaries-0.8. But after that, “./configure” wasn’t recognised, nor was make, nor dfu-util… Not sure what’s going on here. Any ideas?
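For what it’s worth, my guess is that a “binaries” package is already compiled, so ./configure and make wouldn’t apply; next I’ll try running the prebuilt executable directly, something like this (the directory and binary names assume the dfu-util 0.8 binaries package, so adjust to whatever tar actually unpacked):

```shell
# Guess: the binaries tarball ships a precompiled dfu-util, so there is
# nothing to build -- just run it with an explicit path.
DFU=dfu-util-binaries-0.8/dfu-util
if [ -x "$DFU" ]; then
  sudo "$DFU" -l            # -l lists attached DFU-capable devices
else
  echo "prebuilt dfu-util not found or not executable; try: chmod +x $DFU"
fi
```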