Configuring Cores Blinking Green [SOLVED]

Hi Everyone,

I am trying to program a large number of cores and have run into a difficult snag.

What I am doing is using a USB cable and dfu-util to pass the credentials and claim the core, then flashing from the command line with the command spark flash its_deviceId bin_file_path
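For anyone following along, the per-core workflow I'm describing can be sketched roughly like this. This is a hedged sketch, not my exact script: `spark cloud claim` and `spark flash` are legacy Spark CLI commands, the device ID and firmware path are placeholders, and the dfu-util credential step is omitted since its exact arguments depend on the setup.

```shell
# Hedged sketch of the per-core steps described above.
# 'spark' is the legacy Spark CLI; arguments are placeholders.
provision_core() {
  local device_id="$1" firmware="$2"
  # Claiming against the cloud - this is the step that later fails
  # with "device does not exist".
  spark cloud claim "$device_id" || return 1
  # Flash the application firmware to the claimed core.
  spark flash "$device_id" "$firmware"
}
```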

Anyway, this worked great for 3 units. On the fourth I got a flashing green LED when the device should have transitioned to breathing cyan. While it is blinking green it gives a quick pulse of cyan every couple of minutes that lasts less than a second.

I have had this happen to each and every core since the 3 good units. I now have 8 cores in my problem pile. They are all brand new, right out of the box.

Now for my setup: I am using a PC and a TP-Link AP. I have rebooted the PC multiple times during this and have power-cycled the AP.

My phone can connect to the AP no problem, and the 3 cores that I did configure are able to report data through the AP no problem.

I am pulling my hair out, any ideas?

I think this is purely a problem at the spark.io server. I am getting Failed to claim core, server said [ ‘device does not exist’ ]. Just to be clear, I get this response when I hit enter (after waiting a minute or so) when it will not get to breathing cyan.

Also, I have tried to do the same using Spark Dev: I can save credentials and retrieve the device ID, but when I try to claim it I also get ‘device does not exist’.

Is it possible that most of the cores I have are not in the spark.io database?

Is anyone else able to claim devices right now? If so, please let me know.

I was able to configure one more device but now have 12 in the bad pile.

This looks like it is worse than I thought. I decided to take the 3 initial good units, do a factory reset on them, and try to reconfigure them intermixed with untried new units from the boxes.

What I found is that what was originally good can still be configured and programmed, but the untried units are still failing. So that removes my PC, my programming software, and the spark.io server from the problem list, and as far as I can tell leaves me with a whole lot of product DOA right out of the box.

I will backtrack a bit on what I just said about DOA until I have more proof.

And the reason I am doing that is that I was just looking at the COM ports in Device Manager on my PC. I see, for example, that a good unit that was previously tested connects on a low COM port number like COM12, while one of the latest failed units connects at COM29.

It just strikes me that this could be the source of the problem; I have never seen something like COM29 assigned before. I am assuming that the PC keeps a history somewhere that can be flushed so a new device will get assigned a low COM number.

Does anyone have any experience with this?

I assume you’re on Windows.

If so, then you can add an environment variable DEVMGR_SHOW_NONPRESENT_DEVICES = 1 and check the menu item View - Show hidden devices in Device Manager.
After that you should see all your previous Core COM ports greyed out in the device list, and then you can uninstall them.

But I’m not sure this will actually cure your problem, since there isn’t anything wrong with COM29 (anything up to COM256 should work), unless the program you’re using doesn’t allow it.

Could you try to set your credentials via PuTTY? I have just checked - I can get to my Cores even at COM99.
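For reference, the same serial session can be opened from a script with plink, PuTTY's command-line companion. A sketch only: the COM port is just an example, and the 9600,8,n,1,N serial configuration is an assumption about the Core's listening mode.

```shell
# Sketch: open a serial session to a Core on a high-numbered COM port
# with plink (PuTTY's command-line tool). Serial settings are assumed.
open_core_serial() {
  local port="${1:-COM99}"
  plink -serial "$port" -sercfg 9600,8,n,1,N
}
```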

2 Likes

If you have to, you can also right-click the unit in Device Manager and simply re-assign it to another COM port. As long as the port isn’t really in use (as in, it doesn’t appear in Device Manager) you can force the device to use a lower COM port (even though Windows will balk that it’s currently in use).

1 Like

True, the only thing you might have to do is reset the Core once after that, before opening the newly assigned COM port - at least I have to on Win8.1 with PuTTY :wink:

@harrisonhjones & @ScruffR Thanks, yes, I’m on Windows. I’m kind of getting that DOA feeling again, but I am not sure.

I found out how to erase the COM/USB history in regedit, so when I connect devices I am now getting COM port assignments like COM3 and COM4, but I am still getting the blinking green (not transitioning to cyan) on units right out of the box.

Communication through the port is not really the problem, in that I can see that dfu-util is able to get the device ID and to write (and echo back) the SSID and password. I was just grasping at straws when I saw that the initial units were working with low COM port numbers while all the failed units had high COM port numbers, something I have never run into before.
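In case it helps anyone searching later: the COM-port history cleared above is commonly reported to live in the registry's "COM Name Arbiter" key as a bitmap value named ComDB. A read-only sketch (Windows, elevated prompt, and back up the registry before touching anything); the key location is an assumption based on common reports, not something verified here.

```shell
# Read-only sketch: query Windows' COM-port reservation bitmap.
# The key path is the commonly reported location; this shows the
# value and does not modify anything.
show_com_port_bitmap() {
  reg query "HKLM\SYSTEM\CurrentControlSet\Control\COM Name Arbiter" /v ComDB
}
```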

Ok, it really seems to me there are 3 possibilities:

  1. A large number of DOA units.
  2. The spark server involved in the claiming process is misbehaving.
  3. Most of the units I have are missing from the spark server database, so it will not allow me to claim them.

What do you think? Any ideas on how to prove or disprove any of these possibilities?

Also grasping at straws here :wink:

Could you try to provide the credentials to the Core via PuTTY, making sure to select the correct encryption scheme, and then after that try to claim the Core?
How about claiming one of the new Cores via Smart Config?
What encryption scheme have you got for your WiFi?
Could your router have run out of DHCP IP addresses for new Cores?
Maybe you could PM @harrisonhjones some of your stubborn Core’s IDs to have him - or anybody else from Spark - check their status in their DB.

Some of these ideas might be ridiculous, but if nothing helps … going mad sometimes does :wink:

1 Like

@HardWater, if we still get nowhere, a remote desktop session might be in order. They’ve been pretty successful in the past.

1 Like

Hi @ScruffR, I tried using PuTTY - see the results below:

Your core id is MY_deviceID
SSID: MY_ssid
Security 0=unsecured, 1=WEP, 2=WPA, 3=WPA2: 3
Password: MY_password
Thanks! Wait about 7 seconds while I save those credentials...

Awesome. Now we'll connect!

If you see a pulsing cyan light, your Spark Core
has connected to the Cloud and is ready to go!

If your LED flashes red or you encounter any other problems,
visit https://www.spark.io/support to debug.

    Spark <3 you!

I still end up with flashing green when it should go to cyan. Any idea why I get the occasional short pulse of cyan amongst all the flashing green?

I’ll give some of your other ideas a try. Thanks again.

Check your DHCP range size and how many free IP addresses it has left to allocate
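One rough way to check from a PC on the same network is to ping across the pool and count how many addresses answer. A sketch only: the subnet and range below are made-up example values you would replace with your router's actual DHCP settings, and the `-W` timeout flag assumes a Linux-style ping.

```shell
# Rough sketch: ping each address in a (hypothetical) DHCP range and
# count responders, to gauge how full the pool is. Subnet and range
# are example values.
count_live_hosts() {
  local base="$1" start="$2" end="$3" live=0 i
  for i in $(seq "$start" "$end"); do
    if ping -c 1 -W 1 "$base.$i" >/dev/null 2>&1; then
      live=$((live + 1))
    fi
  done
  echo "$live"
}
```

For example, `count_live_hosts 192.168.0 100 111` would scan a 12-address pool starting at .100 (addresses here are purely illustrative).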

1 Like

As said :wink:

@ScruffR @harrisonhjones & @pra Thank you guys, you were all correct. It was DHCP: my home network was set to hand out only 12 addresses, and I was running out.

Well, I fixed that and the sun has started to shine, even though it is 8:30 pm.

Thanks Again

3 Likes

Glad to hear that you solved it - and what a pity, I would have taken your DOAs off you :wink:

As would I! (The “DOAs” not the DOAs :slight_smile: )

1 Like