How secure boot and trusted boot can be owner-controlled

Background. In recent years, the computing and embedded systems industry has adopted the habit of implementing “secure boot” functionality. More often than not, this is implemented by allowing the hash of a cryptographic public key to be fused into eFuses in a main CPU or SoC. When such a design is adopted, the private keys are invariably held by the device vendor and not by the device owner¹, which means that only the device vendor can provide updated firmware for the device. This is problematic for a number of reasons, as the following sections discuss.

Discerning a threat model. The actual value of such secure boot functionality is unclear. Specifically, what is the threat model? Only two present themselves:²

  1. Security against remote attack: ensuring that an OS compromise (e.g. via a zero-day) cannot remain resident across a reboot.

  2. Security against physical attack.

The problem with (2) is that it is fundamentally unattainable: in the worst case, an attacker can simply remove the fused chip and replace it with an unfused one. Thus any attempt to mitigate threat (2), or any claim of effective protection against it, is a sham. In the author's view, the mitigation of supply chain attacks is an essentially insurmountable problem at this time, at least in terms of technical mitigations; only organisational/procedural controls, such as accounting for chain of custody, seem viable. TLDR: you can't trust (for example) an IP phone whose uninterrupted custody you cannot account for, and any vendor which tells you otherwise is lying. Physical bugging devices could, after all, be inserted without changing any firmware whatsoever. There is no silver bullet for supply chain attacks.

(1) is arguably more compelling, though it often misses the point. For example, it does nothing for the security of the device's configuration, as opposed to its firmware; for most devices, illicit changes to configuration are just as dangerous as changes to firmware. For an IP phone, for example, what is the point of securing the firmware against modification if an attacker who obtains even non-persistent execution on the device can simply change its configuration to send all calls to a server the attacker controls? It is deeply amusing how often “secure boot” is added as a check-box feature, with barely any attention paid to the equally important question of secure configuration. How many IP phones will willingly take their configuration from DHCP servers and unencrypted HTTP connections? To my knowledge, the answer is “all of them”.

Implementing owner-controlled secure boot. It should also be noted that keyfusing is not actually needed to implement (1). For example, the “secure boot” functionality on x86 PCs allows users to change their own trust roots at any time. This is implemented by reserving a region of a nonvolatile storage device for boot firmware and trust configuration, a region which can be locked against mutation after boot. The only way to make this region writeable again is to reset the system, which restores execution to said boot firmware.³ Thus, absent physical intervention, any mutation to the boot firmware or its configuration must be approved by that boot firmware.

Although most SoC vendors design their SoCs to support keyfusing as the officially supported means of “secure boot”, it is actually possible to implement this owner-controlled secure boot design on most SoCs with only a small number of additional board components. This takes advantage of the fact that

  1. SoC-class devices almost never have onboard flash, and instead boot from an external flash device;
  2. external flash devices usually have a “Write Protect” pin; and
  3. many classes of flash device allow the “Write Protect” pin to be configured to write-protect some, but not all, of the device's memory.

The implementation looks like this: the boot firmware and its trust configuration are placed in the region of external flash covered by the Write Protect pin, and that pin is driven by a latch on the board which, once set by the boot firmware (e.g. via a GPIO), can only be cleared by a system reset. The boot firmware sets the latch before handing control to anything else. The protected region can therefore only be modified early in boot, with the boot firmware's consent, and the only way to clear the latch, a reset, also returns execution to the boot firmware. What update policy to adopt is then entirely up to the boot firmware.

For example, a first-stage bootloader could choose to accept and apply signed updates to itself if presented during the boot process, before the Write Protect latch is set (and an owner could always restore control to a new key by manually reprogramming the flash via physical intervention); or it could display a prompt on a display connected to the SoC, showing the cryptographic identity of the proposed new bootloader and asking whether the user wishes to install it; or it could simply refuse to ever modify itself, rendering itself effectively immutable barring physical intervention.
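To make this concrete, here is a minimal sketch of such a first-stage bootloader's lock-then-boot sequence in C. Everything here is hypothetical: the GPIO register address, the latch bit, and the helper functions are stand-ins for whatever a particular SoC and board actually provide.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical memory-mapped GPIO output register driving the board's
 * Write Protect latch; real addresses depend entirely on the SoC. */
#define GPIO_OUT_REG  (*(volatile uint32_t *)0x40001000u)
#define WP_LATCH_BIT  (1u << 3)

/* Helpers assumed to be implemented elsewhere in the bootloader. */
extern bool update_staged(void);            /* is a new image staged?      */
extern bool verify_and_apply_update(void);  /* check signature, then flash */
extern void boot_next_stage(void);          /* hand off to the next stage  */

void first_stage_main(void)
{
    /* The protected flash region is writeable only now, before the latch
     * is set -- i.e. only with this bootloader's approval. */
    if (update_staged())
        (void)verify_and_apply_update();

    /* Set the latch feeding the flash's WP# pin. Board logic ensures it
     * can only be cleared by a full system reset, which also returns
     * execution to this bootloader. */
    GPIO_OUT_REG |= WP_LATCH_BIT;

    /* Nothing that runs after this point can modify the boot firmware
     * or its trust configuration. */
    boot_next_stage();
}
```

The signed-update policy shown here is just one of the options described above; the prompt-based and fully immutable variants differ only in what happens before the latch is set.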

In short, this creates an owner-controlled secure boot system which mitigates threat (1) (though not threat (2); then again, all claims to mitigate threat (2) are a sham anyway).

Implementing this design in MCU-class chips is probably a lot less feasible, because most MCU-class chips are designed to boot from onboard flash but do not provide lock bits or any equivalent of Write Protect-pin functionality to prevent regions of that flash from being modified after boot. In general, while MCU-class devices tend to offer all sorts of assorted security logic, that logic, like keyfusing, tends to be designed around the specific modes of application the vendor had in mind and is not adaptable to novel use cases like the above; finding MCUs which offer high levels of security and defence in depth while remaining owner-controlled can therefore be difficult.

However, there are some MCU-class devices without onboard flash which are designed to boot from external flash devices (some higher-end NXP MCUs, for example), which are thus of high interest to anyone seeking to implement the above functionality on MCU-class devices.

Implementing owner-controlled trusted boot. As mentioned above, it is absurdly common to see the industry promote “secure boot” functionality with orders of magnitude less attention paid to the issue of secure configuration. It can be observed that, in large part, the security objective of any design is to prevent malicious access to sensitive information. This suggests that the application of trusted boot, rather than secure boot, may be a substantially better approach to actually accomplishing useful security objectives.

The difference between secure boot and trusted boot is simple: In secure boot, the hardware authenticates the software it runs. In trusted boot, the software authenticates the hardware running it. Sort of — the reality is that both secure and trusted boot involve measuring a piece of code to be booted, but whereas secure boot uses that measurement to decide whether to boot it or not, trusted boot uses that measurement to decide what cryptographic secrets that code may be allowed to obtain.
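The contrast can be made concrete in a few lines of C. This is a toy sketch only: `measure` stands in for a hardware hash engine, and the XOR-based key derivation is purely illustrative, not a real KDF.

```c
#include <stdint.h>
#include <string.h>

typedef uint8_t digest_t[32];

/* Toy measurement function; a real ROM would use SHA-256 in hardware. */
static void measure(const uint8_t *code, size_t len, digest_t out)
{
    memset(out, 0, sizeof(digest_t));
    for (size_t i = 0; i < len; i++)
        out[i % 32] ^= code[i];                   /* NOT a real hash */
}

/* Secure boot: the measurement gates whether the code runs at all. */
static int secure_boot_ok(const uint8_t *code, size_t len,
                          const digest_t fused_hash)
{
    digest_t m;
    measure(code, len, m);
    return memcmp(m, fused_hash, sizeof m) == 0;  /* 1 = boot, 0 = refuse */
}

/* Trusted boot: the code always runs, but the measurement determines
 * which key (and hence which secrets) it can obtain. */
static void trusted_boot_key(const uint8_t *code, size_t len,
                             const digest_t fused_secret, digest_t key_out)
{
    digest_t m;
    measure(code, len, m);
    for (size_t i = 0; i < sizeof(digest_t); i++)
        key_out[i] = fused_secret[i] ^ m[i];      /* toy KDF(secret, hash) */
}
```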

Of course, these can be combined and do in fact go well together. Yet in the embedded space there has been comparatively little interest in trusted boot relative to the interest in secure boot, despite it actually being more flexible and arguably better suited to protecting sensitive information. Moreover, substantially all trusted computing implementations in the wild appear to me to be seriously flawed, a point I return to below.

As I noted when discussing threat models above, attempts to mitigate physical attacks in the context of secure boot are futile, because in the worst case an attacker can simply replace a keyfused chip on the PCB with a fresh one. This criticism does not apply to trusted boot, however, because trusted boot does not seek to prevent unauthorised code from running; it simply seeks to ensure that particular data, such as certain cryptographic secrets, is only made available to specific code, based on the identity of the code running on the device.

Trusted boot can therefore potentially provide some security against physical attackers. For example, if an attacker physically replaces the SoC with an unfused version, under trusted boot the attacker loses access to any secrets stored on the device: because all information secured via trusted computing is ultimately secured using a secret fused into the chip itself, throwing out the chip for a new one also throws out the very secret the attacker is trying to obtain.

It should be noted that this security is not absolute, and a determined attacker will almost certainly be able to extract secrets from a device making use of trusted computing (voltage glitching, power analysis, side-channel attacks, etc.); however, some degree of security against casual physical attackers does seem to be gained here. (Note again that this protects only against the extraction of secrets; once again one must consider whether other physical attacks, such as the implantation of bugging devices, are not equally undesirable, these being attacks for which no mitigation is possible other than maintaining uninterrupted chain of custody. When implementing either secure boot or trusted boot, care must be taken to ensure that the level of security provided against physical attacks is not rendered completely moot by the inability to mitigate other physical attacks of an equally or more damaging nature; otherwise, it is just security theatre.)

How can trusted boot be implemented? Consider that keyfusing-based “secure boot” is usually implemented on SoCs by allowing the hash of a cryptographic public key to be fused into on-die eFuses; the chip's mask ROM reads this hash during boot and verifies the code in flash against it. In other words, the immutability of an on-die mask ROM is used to enforce the desired security properties. Thus, it's not hard to conceive of a similar design for trusted boot:

  1. A device-unique secret is fused into on-die eFuses at manufacture, readable only by the mask ROM.

  2. During boot, the mask ROM hashes the bootloader found in external flash.

  3. The mask ROM derives a bootloader key from the fused secret and this hash (for example, BootloaderKey = KDF(FusedSecret, BootloaderHash)), passes the derived key to the bootloader, and locks access to the fused secret until the next reset.

  4. The mask ROM then boots the bootloader unconditionally. Any bootloader will run, but each distinct bootloader obtains a different key, so secrets sealed under one bootloader's key are unobtainable by any other.
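Here is a host-side sketch of the derivation in step 3, using OpenSSL's HMAC-SHA256 as the KDF for concreteness. A real mask ROM would use an on-die hash engine; the fused secret and bootloader image below are placeholders.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <openssl/sha.h>

#define SECRET_LEN 32  /* device-unique secret held in eFuses */

/* BootloaderKey = HMAC-SHA256(FusedSecret, SHA256(bootloader image)). */
static void derive_bootloader_key(const uint8_t secret[SECRET_LEN],
                                  const uint8_t *image, size_t image_len,
                                  uint8_t key_out[SHA256_DIGEST_LENGTH])
{
    uint8_t measurement[SHA256_DIGEST_LENGTH];
    unsigned int key_len = 0;

    SHA256(image, image_len, measurement);            /* measure the code  */
    HMAC(EVP_sha256(), secret, SECRET_LEN,            /* bind to the fuses */
         measurement, sizeof measurement, key_out, &key_len);
}

int main(void)
{
    /* Placeholder values; on a real device the secret comes from eFuses
     * and the image from external flash. */
    const uint8_t fused_secret[SECRET_LEN] = { 0x42 };
    const uint8_t bootloader[] = "placeholder bootloader image";
    uint8_t key[SHA256_DIGEST_LENGTH];

    derive_bootloader_key(fused_secret, bootloader,
                          sizeof bootloader - 1, key);

    /* A different bootloader image yields an unrelated key, so secrets
     * sealed under this key are unobtainable by any other bootloader. */
    for (size_t i = 0; i < sizeof key; i++)
        printf("%02x", key[i]);
    putchar('\n');
    return 0;
}
```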

Note that while on the surface this seems to commit you to never changing a single bit of the bootloader, ever, it is actually more flexible than it appears, because the bootloader can choose whether to pass its key to another bootloader. For example, if the bootloader stores an encrypted configuration file, it could facilitate an update to itself in the following way:

  1. It authenticates the proposed replacement bootloader on whatever terms it chooses (a signature check, a prompt to the user, etc.).

  2. It decrypts the configuration file using its own derived key.

  3. It installs the replacement bootloader and stages the configuration data for it (for example, briefly in plaintext in flash, flagged for immediate re-encryption), then reboots.

  4. On its first boot, the replacement bootloader, which receives its own derived key from the mask ROM, re-encrypts the configuration under that key.

In other words, a bootloader authenticated via trusted boot can choose the parameters and terms of its own successorship, and can yield control of data to other code after having authenticated that code on its own terms. Alternatively, if the loss of all information secured via trusted computing is acceptable, the bootloader can simply be replaced by force. Thus the owner of, for example, a second-hand device can always take control of the device by “wiping it down” in this way and installing a new bootloader of their choice, in the process losing access to any secrets stored on it by the previous owner.
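The following toy simulation models this successorship flow under the staging design sketched above. XOR stands in for real authenticated encryption, and every name here is hypothetical.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

#define KEY_LEN 32

/* Toy cipher; a real design would use authenticated encryption. */
static void xor_crypt(uint8_t *buf, size_t len, const uint8_t key[KEY_LEN])
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key[i % KEY_LEN];
}

/* The incumbent bootloader's policy: a signature check, an on-screen
 * prompt, or anything else it chooses. Always accepts in this demo. */
static bool authenticate_successor(const uint8_t *image, size_t len)
{
    (void)image; (void)len;
    return true;
}

int main(void)
{
    /* Each key is KDF(fused secret, hash of that bootloader); neither
     * bootloader can compute the other's key itself. */
    const uint8_t old_key[KEY_LEN] = { 0xA5 };
    const uint8_t new_key[KEY_LEN] = { 0x5A };
    const uint8_t successor[] = "new bootloader image";

    uint8_t config[] = "sip-server=10.0.0.1";   /* the protected data    */
    bool reencrypt_pending = false;

    xor_crypt(config, sizeof config, old_key);  /* stored sealed at rest */

    /* --- incumbent bootloader, deciding to hand over --- */
    if (authenticate_successor(successor, sizeof successor)) {
        xor_crypt(config, sizeof config, old_key);  /* unseal with own key */
        reencrypt_pending = true;  /* config is briefly in plaintext; flag
                                      the successor to seal it on boot */
        /* ...write successor image to flash and reboot here... */
    }

    /* --- first boot of the successor, holding new_key from the ROM --- */
    if (reencrypt_pending) {
        xor_crypt(config, sizeof config, new_key);  /* seal under own key */
        reencrypt_pending = false;
    }

    /* Only the successor can now recover the configuration. */
    xor_crypt(config, sizeof config, new_key);
    printf("%s\n", config);
    return 0;
}
```

The brief plaintext window is itself a policy decision the incumbent bootloader makes; a design could equally stage the data encrypted under an ephemeral key handed to the successor.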

Other enhancements are possible. Most obviously, a captive-key cryptographic block with write-only key registers, as commonly found on many popular SoCs, could be used to hold the derived bootloader key, rather than simply passing the bootloader its key in memory; this may be desirable in some circumstances. For example, it prevents key leakage in the event of remote exploitation: a malicious actor compromising a device can make use of the key only for as long as they control the device, but cannot escape with it. (I like to think of these hardware units as “cryptographic glove boxes”.)
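A sketch of how the derived key might be placed into such a glove box follows. The register layout is entirely hypothetical; where such blocks exist, every SoC defines its own.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical memory-mapped AES engine with a write-only key register
 * file; the key can be loaded and used, but never read back. */
#define AES_KEY_REG   ((volatile uint32_t *)0x50002000u)
#define AES_CTRL_REG  (*(volatile uint32_t *)0x50002020u)
#define AES_CTRL_LOAD 0x1u

/* Install the ROM-derived bootloader key into the engine, then destroy
 * the only in-memory copy. Software can subsequently ask the engine to
 * encrypt or decrypt with the key, but can never extract it. */
static void install_captive_key(volatile uint8_t key[32])
{
    for (size_t i = 0; i < 8; i++) {
        uint32_t word = 0;
        for (size_t j = 0; j < 4; j++)
            word |= (uint32_t)key[i * 4 + j] << (8 * j);
        AES_KEY_REG[i] = word;
    }
    AES_CTRL_REG = AES_CTRL_LOAD;

    /* Wipe the RAM copy; the volatile qualifier keeps the compiler from
     * optimising the wipe away. */
    for (size_t i = 0; i < 32; i++)
        key[i] = 0;
}
```

An attacker who gains code execution can still drive the engine for as long as they control the device, but the key itself never leaves the block, which is exactly the glove-box property described above.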

Sadly, I'm unaware of any SoC which implements logic like this in its boot ROM. Even where trusted computing functionality is implemented, it tends to depend on the security of a (keyfusing-based) secure boot process, rather than being orthogonal to it from a security perspective.

Short of some SoC vendor deciding to adopt this model, this leaves only a few options for implementation.


Footnotes

1. Theoretically, if a vendor avoided doing the fusing themselves, the end user could take control of the fusing process and do it themselves; but since this requires a user to create and hold their own signing keys and sign all firmware updates with them, and because there is no way to subsequently change keys in the event of loss or compromise, this is utterly impractical for all but the most sophisticated device owners.

2. Actually, there is a third threat model: Protecting the device *against the device owner.* An alarmingly large number of modern products which are “sold” to consumers consider this to be part of their threat model. See [(1)](https://boingboing.net/2012/01/10/lockdown.html), [(2)](https://boingboing.net/2012/08/23/civilwar.html) for further discussion.

3. Actually, modern Intel/AMD x86 systems nowadays do tend to allow the protected regions of the boot flash to be updated after boot, namely via UEFI calls which trap into System Management Mode; the flash controller on these platforms is designed to allow greater access to the flash when the system is running in that mode.