Access Controls on Local Flashing of Firmware

I am developing an access control system based on Particle devices [see: https://github.com/TeamPracticalProjects/MN_ACL]. The project consists of Argon-based RFID card stations and separate Photon-based RFID lock hardware. The project heavily leverages Particle’s cloud security; specifically, card stations communicate their access control decisions to lock control hardware using pub-sub via a secure Particle account. The firmware is all open source, but the facility strictly limits access to its Particle account. Therefore, only access control devices that are claimed into this account can communicate with one another.
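For readers unfamiliar with the pattern, here is a minimal sketch of account-scoped pub-sub on Particle devices. The event name and payload are illustrative placeholders, not the actual MN_ACL code:

```cpp
#include "Particle.h"

// --- Card station (Argon) side ---
void grantAccess(int doorId) {
    // PRIVATE events are only visible within this Particle account
    Particle.publish("access-decision", String::format("{\"door\":%d}", doorId), PRIVATE);
}

// --- Lock controller (Photon) side ---
void onDecision(const char *event, const char *data) {
    // Parse the payload and actuate the lock here
}

void setup() {
    // MY_DEVICES restricts delivery to events published by devices
    // claimed into the same account
    Particle.subscribe("access-decision", onDecision, MY_DEVICES);
}

void loop() {}
```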

A hacker could modify our (open source) firmware to bypass some of the access control rules, but a hacker’s Argon could not communicate with lock devices without access to the facility’s private Particle account. Likewise, a hacker can’t OTA flash new firmware to a card access station’s Argon because the hacker doesn’t have access to the facility’s Particle account.

Here is my question: if a hacker locally compiles hacked firmware (via Workbench), and if they can gain physical access to a card station’s Argon (one that is claimed into the facility’s Particle account), could that hacker locally flash their hacked firmware onto the Argon and thereby gain access to other devices in the facility’s Particle account? For example, could a hacker who gains physical access to a station’s Argon put the Argon in DFU mode and locally flash new firmware to it over USB, or does locally compiled firmware include some sort of access token that prevents this when the hacker is not (and was not) logged into the facility’s Particle account? If local flashing via USB is protected, is there some other way for a hacker to locally flash code to a Particle device that bypasses Particle account access control (while still leaving the device claimed into a secured Particle account)?

There is no way to prevent locally flashing code over USB or JTAG. However, there are limits to what an attacker can accomplish that way.

If you are using the product features to manage the devices, and the locally flashed binary does not have the matching product ID and version, the device will immediately be flashed with a firmware update back to the official software.
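For context, this is what those product identifiers look like in firmware; 12345 and 3 below are placeholder values, not real ones:

```cpp
#include "Particle.h"

// With product firmware management enabled, the cloud compares these
// macros against the device's assigned product and firmware version,
// and re-flashes the official release on a mismatch.
PRODUCT_ID(12345);
PRODUCT_VERSION(3);

void setup() {}
void loop() {}
```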

The device doesn’t contain an auth token, so it’s not possible to get an access token off a device, even with physical access: there simply isn’t one on it. Devices are authenticated by a different scheme (RSA public/private key pairs), not an access token, for exactly this reason. While a rogue device could publish a product event, it can’t access any other account features.

However, rogue firmware will still be able to publish product events, so that remains a risk.

It will always be difficult to secure a device when you can’t control physical access to it.


@rickkas7: Thx for the very helpful information. I’m glad that you were able to reply so quickly. Shortly after I saw your post, we had to make a major decision about where to mount an Argon-based RFID card access terminal. My clear understanding of the vulnerabilities of physical access to the device helped us make an informed decision to locate this terminal in a well-secured location.


Hi @BobG -

Sounds like an interesting project 🙂

@rickkas7 gave you some great tips, and I’m glad you managed to make an informed decision based on them. Maybe just a thought, and it all really depends on how sure you are that someone will try to physically hack your devices and how far you are willing to go to prevent that:

I suppose if you switch to production modules such as the P1 and B-Series, which have no built-in USB connector, you can then get somewhat creative with the type of connection port when designing the PCB. Of course you will still have USB connecting to your PC, but you can have whichever connector you want on the other end of the cable, unique to your board design. An easy example: a custom USB-to-RJ45 cable. RJ45 is a bit common, though; I would look for something completely non-standard if it were a huge security concern 🙂

Almost like the closed-system design many hardware vendors use to limit access to (or tampering with) their devices (think Apple and Lightning connectors, etc.).

Best of luck 😉
Friedl.

@BobG, seeing that the software is open source, I would guess that there is nothing stopping anyone with the expertise from hacking it, as you said.

You could add some other security overlay, such as an anti-tamper mechanism of the kind EFTPOS terminals use: they remove power from the RAM used to store secret keys if the case is opened. A rough sketch of the idea follows below.
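Here is a hedged sketch of that idea on a Particle device, assuming a hypothetical case switch on pin D2 and a secret held only in RAM. (Real EFTPOS terminals cut power to battery-backed RAM in hardware, which is stronger than a software wipe, but the principle is the same.)

```cpp
#include "Particle.h"
#include <string.h>

// Secret lives only in RAM, loaded at provisioning time; it is never
// written to flash, so dumping the firmware image does not reveal it.
static char secretKey[32];

// Wipe the secret the instant the case switch opens.
void onTamper() {
    memset(secretKey, 0, sizeof(secretKey));
}

void setup() {
    pinMode(D2, INPUT_PULLUP);             // switch held closed while the case is shut
    attachInterrupt(D2, onTamper, RISING); // opening the case releases the switch
}

void loop() {}
```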

I am positive that this is a well-known attack pattern that the security community has solved in many ways.

The use of the Particle cloud to authenticate gets you part of the way there, but it does not solve the attack described.

@UMD: good points. It’s not so much about secret information inside the firmware. Most of the secret information in my project is stored in webhooks in the Particle cloud. It’s about the device firmware using data obtained with that secret information to make access control decisions. If the firmware is hacked to operate normally except for bypassing the tests for user access permissions, then unauthorized users can gain access regardless of the actual data that drives those decisions. Particle’s security ensures that only those with access to the account that devices are claimed into can OTA-flash firmware to them. That’s sufficient for me. But securing devices from local firmware flashing via physical access is a much harder problem. So hard, in fact, that I have to agree with @rickkas7 that physical hardening is the only real solution.

Clearly, any microcontroller can be hacked using JTAG: it’s a hardware mechanism for reading/writing directly to flash memory on the microcontroller, and it’s the original way that firmware (usually a bootloader) gets flashed in the first place. But some microcontrollers allow JTAG to be disabled after initial firmware (e.g. a bootloader) has been placed on them. My real concern was about local USB flashing (under control of some sort of system firmware, e.g. the bootloader) and whether/how this might be disabled or otherwise protected from hacking. Using a production module that doesn’t have USB is one obvious mitigation. But physical hardening is generally the ultimate solution.

I was asking about this not in the hope that a device someone has physical access to can be made unhackable (likely impossible), but rather to understand how difficult hacking it would be. Think about modern cars with steering wheel and transmission locks. A criminal can’t simply gain access to the car and hot-wire it in order to steal it; they also have to defeat the mechanical interlocks. Clearly, given physical access to the car for long enough, and with the right tools and knowledge, a criminal can defeat these protections. But the protections make theft hard, very hard, to accomplish in practice. They therefore reduce the amount of physical hardening (e.g. locking the car in a garage) that a user needs to be concerned about. Not eliminating the threat (that’s usually impossible); just reducing it to an acceptable level.

@BobG, agree, a difficult situation to solve.

Note that I was not talking about embedding a secret in the firmware, but in memory that is cleared upon tamper detection. The firmware must not understand what the secret is (that knowledge lives in your backend solution); it just passes it on. Assume that the secret would need to be combined with something unique to the device (e.g. its device ID or serial number) into a one-way hash, along the lines of the sketch below.
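A minimal sketch of that flow, with sha256() standing in as a hypothetical helper for whatever hash library you link in (it is not a Device OS API):

```cpp
#include "Particle.h"

// Hypothetical helper: substitute a real SHA-256 implementation,
// e.g. from a community library.
extern String sha256(const String &input);

// Secret held in tamper-cleared RAM, as described above.
static char secretKey[32];

// The device never interprets the secret; it only forwards a one-way
// hash that binds the secret to this specific device.
String buildAuthToken() {
    return sha256(String(secretKey) + System.deviceID());
}
```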

Simpler idea! I am not sure whether you are able to determine the application’s hash remotely. If so, your backend system could match it against what is expected, and if the app has been updated without your knowledge, you will know about it.
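One way to approximate that without special cloud support, sketched below: bake a build identifier into the binary and publish it at boot so the backend can compare it against the expected value. Here __DATE__/__TIME__ stand in for a real build hash (e.g. a git commit id injected at compile time). The obvious caveat is that hacked firmware could simply replay the expected identifier, so this raises the bar rather than closing the hole.

```cpp
#include "Particle.h"

// Publish a compile-time build identifier at boot; the backend compares
// it with the value it expects for this device.
void setup() {
    Particle.publish("build-id", String(__DATE__ " " __TIME__), PRIVATE);
}

void loop() {}
```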

Food for thought.