CC3000 broadcasts network password unencrypted

Dear Spark Core community –

I’ve looked at various documents describing the CC3000 setup process, and it seems to me that the current approach encourages people to expose their network password in an easy-to-sniff manner during setup.

The CC3000 setup process does allow for the use of an AES key to encrypt this information, but this increases complexity for device manufacturers (providing per-device keys in a tamper proof manner) and reduces convenience for end users (they need to enter a long AES key in addition to the SSID and password for the network that they want the CC3000 enabled device to connect to).

The non-AES approach seems to rely on the fact that the network password will only be visible for a very short period of time.

But anyone with any kind of security background knows that this kind of “good enough” security rarely turns out to be good enough even in low security home environments.

I’ve been trying to get some clarification about this from TI:

This is a long-running thread and I tend to post TL;DR-style messages, but please look at this post in particular:

Here I discuss how the network password is bundled up and then essentially broadcast for all to see.

I really want the CC3000 to be the dream solution to setting up headless devices on wifi networks but so far I haven’t been convinced that the TI device doesn’t in effect encourage device creators to be careless with end user network passwords.

Please note - I’m completely happy that if the AES key option for setup is used then things are secure. But I don’t understand why TI is making it an option not to use an AES key, as not using one exposes end user passwords. Using AES clearly increases complexity but I don’t see not using it as a reasonable alternative.




Very interesting @ghawkins. On the CC3000 First Time Configuration page I don’t even see a way to encrypt the security code with the AES key as you suggest. Maybe I’m missing something, but the general premise is that you create an SSID beacon by making your iPhone (or any other client) look for a specific SSID: essentially the SSID of the router or access point you want the CC3000 to connect to, PLUS some config bytes, and finally your network password in plain text. I see the obvious security issue…

What is keeping TI from creating an app that lets you type this information in and encrypts your network password before sending out the beacon? TI could even randomize the key while keeping it easy to enter, e.g. by laser-engraving a QR code on the top of the CC3000 that you could scan from their app. Companies like Spark could also take this matter into their own hands by doing something similar with a QR code sticker, but that complicates the MFG process a bit… something TI could do more easily.

That is the point of this device… making it easy to add your coffeemaker/mailbox/garage door/sprinkler system to the Internet of Things (IoT).

I’m also confused about the proprietary-ness of SmartConfig… if they are not encrypting your network password based on a “secret” key… then there doesn’t seem to be anything intellectual about parsing an SSID beacon for some bytes.

Can you explain the AES option more so we can think about other ways to attack this issue?


Hmm, reading more I see how you add the AES-128 key, but this is something Spark would have to randomize and program into each Core, along with providing a reasonably easy way to enter the key into your smartphone or computer. A QR code might be a good idea, but the manufacturability issue still exists…

The easiest thing I can think of would be to have the computer that randomizes the key program it into the Core when the initial FW is flashed, and at the same time print out a human-readable/QR combination sticker that gets affixed to the top of the CC3000.
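To make that concrete, here’s a rough sketch of what the flashing-station step could look like (all names hypothetical; a 16-byte key matches AES-128):

```python
import secrets

def make_device_key() -> bytes:
    """Generate a random AES-128 key (16 bytes) for one Core."""
    return secrets.token_bytes(16)

def sticker_text(key: bytes) -> str:
    """Format the key as hex in groups of 4 for a human-readable
    sticker; the same string could also be rendered as a QR code."""
    h = key.hex().upper()
    return "-".join(h[i:i + 4] for i in range(0, len(h), 4))
```

So e.g. `sticker_text(make_device_key())` gives something like `3FA2-91C0-…` to print alongside the QR code, while the raw 16 bytes get flashed into the Core.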

Hello @BDub

First off I want to say that there isn’t a 100% clear message on all this as far as I’m concerned.

The person I’ve been engaging with on the TI forums is Tomer Kariv who says he is part of the CC3000 team.

So he should know exactly what’s going on.

He says the information covered in First Time Configuration is out of date.

He specifically says:

The smartconfig algorithm DOES NOT share the SSID or password insecurely. This means that if you start the smartconfig application on your smartphone for instance, and sniff the air, even if you don’t use the additional AES encryption checkbox, and still use secured connection to the AP, the password of the AP and the SSID will not be shared as is

That sounds reassuring, but in a later post he says:

Sharing the algorithm would mean that everyone would be able to
"listen", decyrpt, and detect the SSID and password.

Not reassuring: so it relies on a secret algorithm which, if leaked, will compromise CC3000-based devices? Later again he says:

I’m saying that once the patent is share-able, then customers/users
must use AES encryption for security.

That sounds even worse. Why offer the option not to use AES at all if it will inevitably be compromised?

The posts seem very confused to me, with references to proprietary algorithms, patents, etc.

However I can’t find any relevant patent application and having looked at the TI Java implementation for a SmartConfig client, i.e. the thing an end user would use to enter an SSID, password and optional AES key, I’m unconvinced that there’s anything patentable in the whole process.

What I see looks pretty unsurprising - there’s no clever proprietary algorithm.

Note that looking at this code I can see that plain old UDP packets are used to broadcast the SSID etc. rather than wifi probes as discussed in the TI documentation.
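For illustration only (this is NOT TI’s actual protocol, just a toy sketch of the general idea): one way to signal data via UDP packets that a receiver can’t decrypt is to encode each byte in the packet *length*, which any sniffer can see regardless of payload contents. That’s exactly why an unkeyed scheme is obscurity rather than security:

```python
BASE_LEN = 40  # hypothetical length offset; the real values are unknown to me

def encode_as_lengths(data: bytes) -> list:
    """Encode each credential byte as a UDP payload length.

    Even if the payload bytes are scrambled, packet lengths remain
    visible on the air, so without a pre-shared key this leaks."""
    return [BASE_LEN + b for b in data]

def decode_from_lengths(lengths) -> bytes:
    """What a passive observer could do once the scheme is known."""
    return bytes(n - BASE_LEN for n in lengths)
```

A sender would transmit dummy packets of those lengths; an observer recovering them needs no key at all, only knowledge of the scheme.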

I don’t have a CC3000 enabled device so I can’t test things out in the real world, but I will try to go further with the TI Java SmartConfig client and see if I can mock up something that impersonates a CC3000 enabled device.

Now onto your second message :slight_smile: I completely agree with what you’re suggesting - that’s what I was thinking too.

However I think the sticker would have to be that tamper proof style that you get with bank PINs etc.

I.e. something where you can see if someone else has already seen the number before you.

And these tamper proof stickers would then have to be moved from the Spark Core that they came with to the exterior of the end user device that the Spark Core is incorporated into.

All sounds quite complex and inconvenient. If the keys are on non-tamper proof stickers then as an attacker one would just have to get into the supply chain and e.g. record the sticker numbers on devices delivered to a particular company whose network one wanted to hack.

In short - I think the AES option is secure but very inconvenient, the non-AES approach is being pushed by TI and others as it seems simple and convenient, but they’re glossing over the fact that it’s extremely insecure.




OK, further reading: it appears your SSID and password are not sent in plain text, as this reply states:

Obviously security through obscurity is still the issue, but I think a simple warning to the users would be sufficient for a device mfg.

For example, I’m not worried that my neighbors are going to hack my network when I initially setup my Spark Core. I know for a fact that I am the only hacker type in the range of my house. And even if they were, they would need to know I was using a CC3000, what the decoding algorithm is for the encoded SSID/PASSWORD that is being sent momentarily, and be actively listening for it… highly improbable that it would be intercepted.

@ghawkins sorry I didn’t see your post before replying last… but I think we are catching up at the same point.

I see your additional concern about what happens when the patent is public. However, I think the concept of an “algorithm” can be patented as a utility patent without actually making the algorithm public. They will likely offer examples in the patent, but not the exact one that they use.

Because of this, it makes it fairly secure… but all it takes is one disgruntled or careless employee to leak the algorithm.

Still, it’s really only an issue during setup… so it’s a fairly tall stack-up: only if w + x + y + z all occur can security be breached.

Spark just needs to decide how secure they want everyone’s network to be, and be transparent about which option they choose.

So I don’t really know what I’m talking about when it comes to security issues, but what someone said on the TI forum seems to make sense:

What TI has accomplished here is push-button inclusion of a limited-UI
device into a wifi network. The problem of getting devices onto a
secured network has been one of the last disadvantages of using wifi
in the internet of things over options like ZigBee and Z-Wave that
have push-button inclusion.

That being said, it is insecure by logical necessity if you don’t need
to type in any identifying key from the target device. If you’re not
typing in something like the AES key, there’s no way for the algorithm
to distinguish between the device you intend to connect and a spy.
Even if we never figured out how the password is encoded in the beacon
(come on TI, don’t insult us), you could still just use another CC3000
to get on the network.

That being said, this is momentary insecurity, and I think it’s worth
it for the convenience. Take Z-Wave for example: it’s an industry
leading home automation wireless protocol that people use to lock and
unlock doors and open their garage doors, and it has this same
insecurity. If someone is packet sniffing your Z-Wave traffic when you
connect your door lock, they can compromise your network. But guess
what? No one plants Z-Wave sniffers around people’s houses to hack
their systems.

If you don’t have to type anything from the device, there’s no way to know if you’re talking to the device or an attacker. I disagree with his conclusion at the end: in the long term, if we want everything to be connected, it has to be secure (or bad people will start planting sniffers around people’s houses once it’s worth it). But given how many people just have the default passwords on their routers…for now it doesn’t seem like the worst thing in the world to have things be momentarily insecure, so long as people who care are able to keep things safe.

Longer term, you’d basically have to have some sort of token to prove you’re the device though - the idea of a tamper proof QR code seems interesting. Is there any other way? RFID/NFC that’s integrated into the device? (even then, an attacker could “fool” it with the right equipment though, maybe?)

Just want to chime in quickly to say I love this discussion. Keep it up! We are aware of the potential problems here, and are striving for a balance between usability and security in the Spark Core. All suggestions are welcome.

This is an issue we’ve debated at length within the team, as it definitely comes down to a trade-off between convenience and security. The whole issue is compounded when you consider that our design is open source, making it very difficult to store anything securely, because everything must be shared! And it’s compounded even further when you consider that the Spark Core might be embedded into other products — and the creators of those products may have different perspectives than we do on the convenience/security decision.

So let me start with this as our overarching philosophy: because the Core is open source, you can do whatever you want — but we’ll provide smart defaults based on our estimation of the needs of the majority of our users.

Now, one fact to clear up: TI has, in the history of the CC3000, had two different set-up procedures. First Time Config was the original procedure that utilized the SSID beacon. Smart Config is the modern procedure that uses some TI proprietary magic and a more complicated, protected protocol (which results in a much better user experience). First Time Config has been deprecated and does not work with CC3000s with the latest firmware, but its documentation is still available. This leads to much confusion.

Ok, on to the debate. The only way to use AES to secure the Cores during set-up would be to assign each Core a unique ID, program it on the chip during manufacturing, provide a sticker on the Core (either a hash or a QR code or something), and store the unique ID and associated AES key in the Cloud, where it can be pulled down by the mobile app. This sucks from a user perspective; I hate QR codes almost as much as I hate hashes and serial numbers.
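A sketch of that provisioning flow (all names hypothetical, with a plain dict standing in for the Cloud’s key store):

```python
import secrets

# Hypothetical cloud-side table mapping each Core's unique ID to its
# AES-128 key, populated at manufacturing time.
KEY_STORE = {}

def provision(core_id: str) -> bytes:
    """Factory step: generate a per-device key, record it in the
    cloud table, and return it for flashing onto the Core."""
    key = secrets.token_bytes(16)
    KEY_STORE[core_id] = key
    return key

def app_fetch_key(core_id: str) -> bytes:
    """Mobile-app step: scan the sticker's ID and pull the matching
    key down from the cloud before running Smart Config."""
    return KEY_STORE[core_id]
```

Nothing clever here; the pain is all in the logistics (flashing, stickers, and keeping the cloud table in sync with manufacturing), which is exactly the inconvenience being debated.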

The alternative is to leave it unencrypted, where the protection is, as @BDub said, “security by obscurity”. However, it is quite a lot of obscurity; you would have to be within 300 feet of the person connecting the device and listening for the few seconds that the Smart Config signal is being sent during set-up; then you’d have to reverse engineer TI’s protocol.

In my mind, there are two groups of folks that we need to cater to: those who would trade a very minor potential security breach for an added convenience, and those who find security very important and are willing to work to maintain that security.

Therefore, we essentially have a different solution for each group. The default behavior is that the Spark Cores will all share the same AES key for Smart Config; therefore the credentials are encrypted, but this only adds a nominal security layer vs. the unencrypted option, since everyone shares the same key.
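A toy illustration of why a shared key is only nominal protection. The stand-in cipher below is an HMAC-based XOR stream, NOT the AES the Core would actually use; the point is just that anyone holding the same default key (i.e. anyone who owns a Core) can decrypt whatever they sniff:

```python
import hashlib, hmac, itertools

def _keystream(key: bytes, nonce: bytes):
    """Toy HMAC-SHA256 counter keystream -- a stand-in for AES here,
    not what the Core uses and not for real-world use."""
    for counter in itertools.count():
        yield from hmac.new(key, nonce + counter.to_bytes(4, "big"),
                            hashlib.sha256).digest()

def xor_crypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Encrypt/decrypt by XOR with the keystream (symmetric)."""
    return bytes(b ^ k for b, k in zip(data, _keystream(key, nonce)))
```

With a shared default key, `xor_crypt(DEFAULT_KEY, …)` protects against a casual sniffer but not against anyone who bought a Core and extracted the key; only a per-device key closes that hole.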

However, you’ll have the option of programming your own AES key by reprogramming the Core over USB, and in the Spark set-up app you’ll be able to change the AES key. That way you can make it more secure with just a little extra work.

For folks who are thinking about embedding the Core into other products or projects — you can make your own security decisions to find your own optimal solution.

And finally, it’s not impossible that TI may provide a better solution in future firmware updates; they did, after all, add Smart Config as a firmware update to replace the way-less-cool First Time Config.

Thoughts? Comments? Concerns? Please let us know what you think!


The sticker could also have the AES key itself instead of a unique ID that required a cloud lookup, maybe? Each core would have to be programmed with the unique AES key itself, of course. Only slightly better, but at least it removes one step.

At the end of the day, if you need to get a Wi-Fi password into a device, it either has to have that AES key (or some other secret) in there (and some way for a user to read it) so you can tell it securely over Wi-Fi, or you have to tell it the password some other way, with NFC or an IR light or a cabled connection or a keypad or whatever. It seems that there’s no way around that basic tension, and if you want to make it easy to connect these things to the internet, your plan sounds reasonable. At least it’s up to each device maker to decide how they want to approach it.

@zach I like your approach because it’s truly encrypted, albeit with a known key, but nevertheless encrypted. Is this on top of the SmartConfig encoding scheme? If so, and if it’s only visible during config, I think this is secure enough, and most end users of the Spark Core are not immediately hooking this up to the NSA’s intranet. Basically if you are scurred, you should take the extra steps and reprogram the AES key on the Core first over USB… I really like this whole approach. Further thoughts… as long as you are re-programming the AES key over USB, would it be possible to make an app that transfers your network settings over to the Core as well, so it would be ready to go?

As the person who kicked off this discussion I’d like to say two things first:

  • @zachary - I really appreciate your positive attitude to the discussion, on the TI forums I didn’t get such a strong feeling of directness and openness.
  • @zach - thanks for the background information, glad to see there’s someone with a clear story here about the development of the CC3000, rather than the rather confused talk on the TI forum about patents (which no one can actually point to, not even to the relevant patent applications) and vague discussions of proprietary processes.

I think security through obscurity is no real security, so why does TI even maintain a pretence? They should be open and just say that if you don’t use AES then you trade security for convenience, and that for some people this will be an acceptable trade and for some not.

I think providing an AES key to the end user is a tricky issue, we can throw around ideas here but all will, I suspect, have issues similar to the AES vs non-AES approach, i.e. convenience vs security.

E.g. if stickers were being used then one would try to get someone into the supply chain for such devices, if this person saw that a shipment was going to be made to an interesting target, e.g. a big bank, then they would note down the numbers before the delivery.

One could then use tamper proof stickers and other approaches, all again with pluses and negatives.

I appreciate what people are saying about momentary insecurity and that many end users will not be big banks but individuals who may accept the convenience/security tradeoff.

However I think device manufacturers cannot limit who uses their end products.

I think CC3000 enabled devices will be hard to police, and will be a big headache to corporations. E.g. as I posted on the TI forum I think the CC3000 is a bigger risk to a corporation than most other security issues, e.g. end users copying data onto USB sticks.

I’m talking here about security issues resulting from employee carelessness rather than malicious insider attack.

One can say that one’s policy is no copying data onto USB keys, but the risk is still low if a non-malicious party disobeys this rule (e.g. so they can work on something at home): it’s hard for an external party to look out for such occurrences. The same is not true, however, if an external party is looking out for the installation of a CC3000 device on a given network.

Anyone who’s worked in a corporate environment knows people routinely disobey the security rules, e.g. write their password on a Post-it, etc.

If CC3000 enabled devices become popular then it’s inevitable that someday one of the employees will bring in some fun device they got and connect it to the network even if the rules prohibit it, simply because they fail to appreciate the issues involved in what they’re doing.

I think it’s not just end users who will fail to understand the issues; many manufacturers will too, e.g. not even thinking about whether AES should be used and just going with the easy option, or not appreciating the issues involved in secure AES key delivery, or using the same AES key in all devices.

Basically I think the nature of the CC3000 (rather than some fixable flaw) means that it will introduce unanticipated security issues for end users for whom momentary insecurity, whether they know it or not, is not acceptable.

BDub - transferring credentials via USB sounds like a good idea to me; it means no third party can watch the traffic or impersonate the device (with the CC3000 someone could listen for credentials and pretend to be the relevant device). All smartphones have USB capabilities - but I guess the CC3000 approach looks nicer, as it doesn’t require the end user to have the necessary cable with them or the manufacturer to provide a cable with the device along with the necessary adapters (Lightning or the old-style 30-pin connector for Apple devices, mini and micro USB for Android devices, etc.).

PS sorry BDub that I couldn’t put the at-symbol in front of your name above - apparently new users can only use the at-symbol twice in a given message :slight_smile:

A friend has pointed out to me that any corporation that takes its network security seriously will be using WPA-Enterprise.

I.e. a setup where, instead of using a pre-shared key as in home and small office networks, per-user authentication using RADIUS servers is used.

CC3000 devices cannot be used with WPA-Enterprise networks, so I’m probably overplaying the risk to banks etc. in my previous posts.

So it’s only an issue for small office environments and home users. However I’ve got to say that if I was a home user I’d still be pretty annoyed if my hard drives were wiped by the nasty kid next door just because I’d installed fancy Christmas lights or whatever that used the CC3000. Maybe that’s not on the scale of some kind of corporate disaster but a big issue for the person involved. But as others point out people already seem to be prepared to take this kind of momentary risk with existing technologies (though I would say generally they are probably doing so without a very informed appreciation of the risk, minor or not, involved).


@ghawkins no problem! :slight_smile:

Another cheap idea would be to use a simple led or ldr as a low speed UART input on your device and a phone app that would flash the screen white/black to send Manchester encoded network configuration data.
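A minimal sketch of that encoding (classic Manchester, with 1 → high/low and 0 → low/high half-bit pairs; the screen-flash and LDR hardware are of course not modelled here):

```python
def manchester_encode(data: bytes) -> list:
    """Manchester-encode bytes for a screen-flash link: each bit
    becomes a half-bit pair, so the signal is self-clocking even
    with a crude phone screen flashing at an LDR."""
    out = []
    for byte in data:
        for i in range(7, -1, -1):          # MSB first
            bit = (byte >> i) & 1
            out += [1, 0] if bit else [0, 1]
    return out

def manchester_decode(halfbits) -> bytes:
    """Recover bytes from the half-bit stream (assumes perfect sync)."""
    bits = [1 if (halfbits[i], halfbits[i + 1]) == (1, 0) else 0
            for i in range(0, len(halfbits), 2)]
    data = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        data.append(byte)
    return bytes(data)
```

Because every bit carries a transition, the receiver can recover the clock from the light itself, which is what makes such a low-grade optical channel workable.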


Yep - I was thinking about the same thing. It seems like a really nice solution.

It avoids the cable issue of a USB solution but provides the same level of security (you hold your phone near to a light sensor on a particular device, so you can be pretty sure that you’ll only be communicating with that device).

I live in Zurich and know a lot of people at ETH; I’ll have to see if any of them know the guys in the papers you reference.

Though other people did think of this long before. Timex were doing it way back in 1994 with their Timex Datalink watches developed in combination with Microsoft:

[ Had to leave out the opening http-colon-slash-slash as the forum software isn’t allowing me, as a new(ish) user, to include more than one link in my post, it seems. ]

And the flashing light mechanism is also used by the Electric Imp developed by ex-Apple/Google people that addresses much the same problem space as the CC3000:

Nice, yeah I think I knew of all those at some point… I keep forgetting everyone that uses it. There was also another kit maker that used it with boards that would snap apart… like Lilypad stuff, but I can’t find it now. They used it to change the light patterns.

It’s always seemed like a nice way to configure something really cheap and easy, and I’ve wanted to build it into something ever since I saw it.

There are pros and cons to each. If we used the LDR then the Spark would not be tied to the CC3000, which means we could get the always-asked-for ad-hoc mode :wink:
I don’t think the LDR solution is as user-friendly, plus you would always have to expose the LDR on all shields, which might mean some designs would have to compromise.

Another alternative would be to pass credentials via USB - but USB comms are not well supported on smartphones.

Can you explain this more… do you mean the STM32 instead of the Spark? Also, I don’t believe the CC3000 can do ad-hoc mode at all. The way I think of the LDR is just as a way to get your network data to the Spark Core, just like the SmartConfig app… except you really wouldn’t need to encrypt anything because it’s such a short-range transmission medium.

Stacking shields with LDR is kind of an issue though, which I don’t have a “good” answer for.

Something else I was thinking of that’s kind of an issue when you package up your project… is access to the user BTN that puts the Spark Core in SmartConfig mode. If you ever have to change your network information for whatever reason, it would be nice if you could wire up an external BTN. It wouldn’t be hard to solder a wire to the BTN pin that’s there, but adding a little solder pad next to it would be even better.

I just mean that a core feature of the CC3000 is the SimpleLink wifi SSID connection and supplied apps.
If Spark used a different method to get the SSID into the Core, then SimpleLink would be moot and they could use one of the other chips with ad-hoc mode built in.

I gotcha now… well, I think you’d agree it’s rather late for that type of switch. What other notable wifi modules are there that are this cheap?