"Failed to claim core" - Support came through! [Solved]

I’ve been able to connect - sometimes - to my Spark Core via the iPhone, but usually not. When trying to claim the Core via CLI (“spark setup”), I get this message:

Failed to claim core, server said [ ‘That belongs to someone else’ ]

This has been reported several times in these forums, but from what I can see the answer has always been to send an email to support and wait for them to release the Core so I can claim it.

Well, I sent that email more than a week ago and the only response from Spark support has been to question my ownership of the core. I responded to that note with a fuller description of my circumstances and an offer to provide whatever information they want, but that was five days ago and I’ve heard nothing.

I am left with a Core that is breathing cyan, but which I cannot claim. Is it possible that I could make it work using a private server? Might there be anything else I can do while I wait for Spark?

Sorry to hear about that.

Yes, a local Spark server will work well if you want to explore that option.

@dave might be able to help with this, or the team may be waiting for a response from the other party who claimed the same ID.

I’m sure they are working on it!

Hi @laurenh,

Sorry about the confusing responses! Honestly we’ve spent a lot of time thinking about your core, it’s tricky! When your core was claimed, whoever claimed it didn’t provide a valid email address, so we’re unable to contact the current registrant, which is what we typically do in that situation. Current policy prevents us from releasing the email address on that account, but in this case there isn’t a real email address to release, so…

In the past, my process for resolving conflicts like this has been to have the person physically holding the core, you, change the public key on the core and send the new key to us at hello@spark.io; we can then verify that you physically have the core, and we can release it. We’re solving this problem down the road by verifying the email accounts used to register cores, adding an optional ‘lock’ setting to prevent people with physical access from claiming the core, and more.

Sorry again about the slow response, this is the first time we’ve hit this edge case, so we want to be sure we do the right thing :tm:



David, thanks very much for your response, and your patience with my flagging patience. I’m glad we agree that it is the responsibility of the software handling registration and claiming to ensure that invalid records are not created. As to the “someone who didn’t provide an email address,” well, there was nobody thumbing the touchscreen but me. :slight_smile: I would swear that I responded appropriately to the prompts. But I write software for a living, so I understand the significance of a user who is certain they did everything right. :wink:

In the vein of constructive criticism, let me offer this: I love the concept of the Spark Core, and the look, feel, and general design of the hardware, as well as of the web pages that support it, are absolutely top-notch. I’ve no doubt that the talent and brainpower behind all this will make it work.

But consider: someone who orders and pays for a Spark Core, retaining documentation of the purchase, has everything they need to pursue a refund today. (Please be assured that I am not considering that! I mention it only as an example.) It’s simply not reasonable to tell a customer who has already waited ten days that they must continue to wait indefinitely while policy is hashed out to be sure that they truly own the device that they wish to claim.

It seems to me that one of the following approaches would make this sort of case less disruptive both to Spark and its customers in the future:

1: Track the device IDs with shipments, so that the owner of the device can be verified via payment information, shipping address, invoice number, or some other reasonably secret information;

2: Ship replacement units to people who have waited unreasonably long times for the “release” of a device that they thought they already owned. Let the customer cross-ship the unclaimable device, and (if you wish) charge them for the new unit if the old one isn’t returned;

3: If standard policy results in an undue delay for the customer, make a human exception and do whatever is necessary to verify and release the unit, without making the customer wait indefinitely for Spark to refine their business processes.

Thanks again for your time and attention. I look forward to a speedy resolution of this issue.


Thanks for the response. I have a local server apparently working on a Raspberry Pi as of last night. I am having trouble updating the Core’s key, which from what I gather is a good thing because Spark support might need to be able to contact the device using its original key before they can let me use it. But I’m definitely pursuing a local server for the projects I have in mind.
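For anyone following along, this is roughly what I did on the Pi, assuming Node.js and git are already installed; the repository URL and entry point are from my memory of the spark-server README, so treat them as assumptions and double-check against the project’s own docs:

```shell
# Clone and launch the local Spark cloud (URL and entry point are
# my recollection of the spark-server README; verify before use):
git clone https://github.com/spark/spark-server.git
cd spark-server
npm install
node main.js    # prints the address and port the server listens on
```

Once it is up, spark-cli can be pointed at it instead of the public cloud.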

Cool stuff! What’s up with the core keys? I should be able to assist you with that.

1.) Change the server keys on the core to your local cloud keys

2.) Upload the core public keys to your local cloud folder
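Roughly, those two steps look like this with spark-cli; the filenames and the core_keys path are placeholders, and step 1 needs the core in DFU mode (blinking yellow):

```shell
# 1.) Flash the local cloud's public key onto the core:
spark keys server your_server_key.pub.pem

# 2.) Save the core's public key, then copy it into the local
#     cloud's core_keys folder so the server recognizes the core:
spark keys save your_core_id.pub.pem
cp your_core_id.pub.pem /path/to/spark-server/core_keys/
```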


I appreciate the offer of help, I do. I’ve trawled a few web sites, and most seem to expect that the spark-cli utilities will work, but apparently they won’t if the Core can’t be claimed. The instructions I found for moving the keys around using other tools are cryptic, to say the least (or perhaps they’re just incomplete).

My core is connected to my MacBook Pro; spark setup worked to get it on the wifi network, so it is breathing cyan. I have been able to copy the local server’s public key to a .pem file in a local directory. I’ve created a spark.config.json file that points at the local server and has a null token and username.
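For the record, my spark.config.json looks roughly like this; the field names are my best understanding of what spark-cli reads, and the address and port are my local server’s, so treat all of it as illustrative:

```json
{
  "apiUrl": "http://192.168.0.148:8080",
  "access_token": null,
  "username": null
}
```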

Can you recommend a resource that will take me from here, or one that has the correct procedure should I need to start over?

Thanks again!

They will all work if you are using the local server.

It just seems to me that you have yet to change the server keys to the local cloud keys on the core.

Can you place the core in DFU mode and use spark keys server server_keyfile.pub.pem ip-address to replace it?
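Spelled out, with a placeholder filename and address, and with the DFU-mode button sequence from memory (check the docs if your unit behaves differently):

```shell
# Enter DFU mode: hold MODE, tap RESET, keep holding MODE until the
# LED blinks yellow. Then point the core at your local server:
spark keys server server_keyfile.pub.pem your-server-ip
```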


Hi @laurenh,

I don’t disagree :slight_smile: We are getting closer to being able to track device IDs with shipments, so we can address the claiming issue from that front as well. And I certainly am making a human exception; normally, for security reasons, we would never release a device that’s been claimed. Let’s continue this over email and I can help you get your core released, and more. :slight_smile:



That threw an exception when I tried it - I’ll go home at lunch (Alaska time) and try again. That is, assuming that I won’t be messing up what @dave is trying to do to help.

Thanks again for sticking with this. I can’t wait to have a cluster of these things going. :slight_smile:


Thank you so much! I’m in the Alaska time zone and will be home at lunch and for part of this evening. I can even bring the Core back to the office if that will help.

I’ll search my emails to see whether I have your address, but I know you have mine. :slight_smile:


Hi @laurenh,

Just emailed ya :slight_smile:


@dave, I’ve confirmed that I don’t have your email address. I believe you have mine, though. Thanks again for your help!

@kennethlimcp, thanks. I’ve installed the MacPorts dfu-util and tried to load keys via the methods suggested by you and by @Dave. With the Core plugged in and blinking yellow, this is what I get from the OS X terminal:

sh-3.2# spark keys server default_key.pub.pem
Creating DER format file
running openssl rsa -in default_key.pub.pem -pubin -pubout -outform DER -out default_key.pub.der
checking file default_key.pub192_168_0_148.der
spawning dfu-util -d 1d50:607f -a 1 -i 0 -s 0x00001000 -D default_key.pub192_168_0_148.der
dfu-util 0.7

Filter on vendor = 0x1d50 product = 0x607f
No DFU capable USB device found
Make sure your core is in DFU mode (blinking yellow), and is connected to your computer
Error -

Here’s the command @Dave gave me, and its result:

sh-3.2# spark keys load core.der
Apparently I didn’t find a DFU device? util said dfu-util 0.7

I’m thinking I’ll verify the USB connection via CoolTerm, but I’m otherwise at a loss here.
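One thing I can verify without the Core attached: the PEM-to-DER conversion that spark keys server delegates to openssl works in isolation. With a throwaway key pair (filenames are arbitrary):

```shell
# Generate a throwaway RSA key, extract its public half as PEM,
# then convert the public key to DER as the spark-cli output shows:
openssl genrsa -out throwaway.pem 2048
openssl rsa -in throwaway.pem -pubout -out throwaway.pub.pem
openssl rsa -in throwaway.pub.pem -pubin -pubout -outform DER -out throwaway.pub.der
```

If that succeeds, the remaining failure is squarely on the dfu-util/USB side.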

Other users have had problems with USB ports directly on MacBooks being shared with internal devices. The fix was to use a USB hub to isolate the Spark core. Running dfu-util -l or even sudo dfu-util -l should find the device. That last character is a lowercase “L” not the number one.

Also USB 3.0 ports can be a problem! Try a USB 2.0 port or a hub.
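When the Core is actually visible, dfu-util should report its vendor:product pair; the output below is illustrative (exact fields vary by dfu-util version):

```shell
dfu-util -l
# A Core in DFU mode shows up something like:
#   Found DFU: [1d50:607f] devnum=0, cfg=1, intf=0, alt=0, name="..."
```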


Thanks, @bko!

I’ve tried two different usb cables, connecting directly to the MacBook, and via USB hubs (one 3.0, the other 2.0). dfu-util -l, in all cases, returns:

Found Runtime: [05ac:821d] devnum=0, cfg=1, intf=3, alt=0, name=“UNDEFINED”

CoolTerm complains that the serial port “usbmodem1411” is not available, though it was there every time I looked last night, and for a brief moment I seemed to have it this afternoon.

I’ll pack things back to the office and cogitate. Thanks for your help. (And to @Dave and @kennethlimcp)

The device it found [05ac:821d] is an Apple device inside your MacBook.

CoolTerm will not find the core unless you have started USB serial in your sketch with Serial.begin(baudrate); or you are in blue flashing listening mode, reached by holding the MODE button for 10 seconds after resetting. If you are running Tinker, for instance, USB serial will not be started and the USB modem device will not appear in CoolTerm.


@bko , @Dave:

There appears to be an intermittent issue with the serial port, perhaps caused by the first two USB cables I tried; a third cable seems to be doing better, though as of yet I haven’t been able to load a sketch. I’m working strictly with spark-cli and dfu-util for now; no Tinker or Web IDE is involved. I’ll keep testing the third cable tonight and post if something relevant develops.


@Dave, @kennethlimcp, @bko: let me thank you all again for your efforts. I have much to file away for future use.

@Dave just released the Core to me! Our long national nightmare is over. :wink:

At the beginning, the problem was whatever combination of software, my fat thumbs, and my thick head produced an unusable record on the Spark cloud server. In the end, fixing it took a lot of patience on @Dave’s part, everyone’s suggestions, and my finally trying a third USB cable.

The next project is to get it working with the local server. If I run into issues, I’ll start another thread for that.

My hat is off to you all.