Translating hardware CWEs into security rules for regression runs.
Whether you’re just starting to build out a hardware security program at your organization, or you’re looking to optimize existing hardware security processes, the MITRE Common Weakness Enumeration (CWE) database is an excellent resource to keep in your toolbox.
A CWE describes a category of vulnerability, or flaw, in the design of either hardware or software. Individual vulnerabilities are called Common Vulnerabilities and Exposures (CVEs), and the CWE database provides the means to categorize the ever-growing list of CVEs. The CWE list is community-developed, but it is managed and maintained by the MITRE Corporation.
I’ve written previously about information flow checking, the process of automatically tracking how information about secure assets propagates through the logical and sequential transformations of a hardware design. Identifying security requirements and translating them into verifiable rules can be paired with the CWE database, giving you a way to verify that the weaknesses it lists are not present in your design.
For the purposes of this blog, let’s take a detailed look at four different hardware CWEs and how you can translate them into security rules to include in the simulation or emulation regression runs of your hardware security program.
The first CWE we’ll examine is CWE-1277: Firmware Not Updateable. This weakness occurs when a product’s firmware cannot be updated or patched, leaving the system vulnerable to attack: potential weaknesses in the firmware have no means of repair and can be exploited by bad actors.
Depending on the actual design implementation, there may be several vulnerabilities related to this CWE to verify. If the design doesn’t have the ability to patch ROM code, this could simply be an implementation oversight, or it could be intentional.
Security verification will not find this type of vulnerability if the update mechanism is missing entirely and exists only in documentation. It will, however, find cases where an incorrect implementation or design causes security violations.
Let’s consider the circumstance where the implemented update mechanism can be bypassed. This case can be modeled as an information flow verification problem.
For this example, consider a hypothetical SoC, illustrated below:
The Trusted Microcontroller Unit (tmcu) in the Hardware Root of Trust (HRoT) loads firmware from the ROM in the HRoT. The ROM is programmed during manufacturing and cannot be changed. To provide a method to update the firmware, new firmware can be written to the trusted non-volatile memory (tnvm) and a bit set in the tmcu will instruct it to read firmware from a predetermined address in tnvm instead.
An attacker may be able to read the updated firmware and later write malicious code back to the tnvm. The attacker may also clear the control bit in the tmcu so that the device falls back to executing the old firmware, which may have known security vulnerabilities, thereby enabling further attacks.
This gives us two security requirements: the tmcu register controlling the firmware location must not be writable from outside the HRoT, and the updated firmware must not be readable or writable from outside the HRoT while it is being used by the tmcu.
Now, let’s take a look at how Tortuga Logic’s Radix technology can detect and prevent vulnerabilities caused by the weaknesses described in CWE-1277, enforcing these security requirements.
The requirements can be expressed as information flow verification problems, i.e.:
- Information must not flow from any agent outside the HRoT to the tmcu register controlling the firmware location.
- Information must not flow between the updated firmware in the tnvm and any agent outside the HRoT while the tmcu is using that firmware.
Using the Tortuga Logic Radix no-flow operator “=/=>”, we can write the following security rules.
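As a minimal sketch of what such rules might look like: the signal paths below (soc.ext_bus, soc.hrot.tmcu.fw_sel, and so on) are illustrative names for our hypothetical SoC, and the conditional “when” qualifier is an assumption about the rule syntax; only the no-flow operator itself is taken from the text above.

    // Rule 1: no agent outside the HRoT may influence the register
    // that selects the firmware location (illustrative signal names).
    soc.ext_bus.wdata =/=> soc.hrot.tmcu.fw_sel

    // Rule 2: while the tmcu executes updated firmware from the tnvm,
    // that firmware must not be read or written from outside the HRoT.
    soc.hrot.tnvm.fw_data =/=> soc.ext_bus.rdata when (soc.hrot.tmcu.fw_sel == 1'b1)
    soc.ext_bus.wdata =/=> soc.hrot.tnvm.fw_data when (soc.hrot.tmcu.fw_sel == 1'b1)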
Using the security rules above, the Tortuga Logic Radix tool will build a security monitor that, when simulated together with the design, will flag any violations of the rules.
CWE-1351 occurs when a hardware device, or the firmware running on it, has incorrect or missing protection features to maintain the goals of security primitives when the device is cooled below its standard operating temperature.
Detecting security violations caused by the weaknesses described in this CWE assumes that the design has temperature-dependent logic, for example a random number generator (RNG). In our scenario, the entropy of the RNG is tested at power-up, and boot is disabled if the test fails. Here, we consider the case where the entropy source may be used even if it fails the entropy test. This case can be modeled as an information flow verification problem.
For this example, let’s consider the hypothetical SoC below:
The RNG used by the AES encryption module takes the output of a Physically Unclonable Function (PUF) as its entropy source. The SRAM-based PUF needs to operate at a certain temperature to ensure sufficient entropy. An on-chip temperature sensor measures the temperature of the PUF; if it is too low, the sensor won’t allow the PUF to be used and the SoC won’t boot.
An attacker may try to write a much lower threshold temperature to the temperature sensor control register, allowing the device to operate in a colder environment where the PUF output depends on previously stored data instead of manufacturing variations. An attacker may also try to boot the SoC even if the PUF entropy test is failing.
This gives us two security requirements: the temperature sensor control register should not be writable by any agent outside the Hardware Root of Trust (HRoT) module, and the PUF output should not be used by the AES unless the temperature is above the threshold and the PUF entropy test is passing.
The requirements can be expressed as information flow verification problems, i.e.:
- Information must not flow from any agent outside the HRoT to the temperature sensor control register.
- Information must not flow from the PUF output to the AES module unless the temperature is above the threshold and the PUF entropy test is passing.
Using the same Radix no-flow operator as in our previous example, “=/=>”, we can write the following security rules.
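Again, a sketch only: the temp_ok and entropy_ok status flags are assumptions about the hypothetical design, as is the “when” qualifier.

    // Rule 1: only the HRoT may write the temperature sensor control register.
    soc.ext_bus.wdata =/=> soc.hrot.temp_sensor.ctrl

    // Rule 2: PUF output must not reach the AES while the temperature check
    // or the entropy test is failing.
    soc.hrot.puf.dout =/=> soc.aes.rng_in when (!soc.hrot.temp_sensor.temp_ok || !soc.hrot.puf.entropy_ok)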
Using these rules, Radix will build a security monitor that, when simulated together with the design, will flag any violations of the rules.
CWE-441, also known as “confused deputy,” occurs when the product receives a request, message, or directive from an upstream component, but does not sufficiently preserve the original source of the request before forwarding the request to an external actor that is outside of the product’s control sphere. This causes the product to appear to be the source of the request, leading it to act as a proxy or other intermediary between the upstream component and the external actor.
For our purposes, let’s consider the case where secure data reaches an insecure destination through the use of an agent with higher access privileges. This case can be modeled as an information flow verification problem.
Let’s take a look at a hypothetical SoC, illustrated below:
The Core0 processor is running software at the lowest privilege level, and the security access policy in the interconnect prevents it from reading or writing any data in the HRoT SRAM memory. The access policy does allow the DMA to read and write data in the HRoT SRAM memory. All of the Core{0-N} processors may program the DMA to perform I/O or memory data transfers, as long as the access policy for the DMA allows it.
An untrusted agent running code on the Core0 processor can access data in the Hardware Root of Trust (HRoT) SRAM by programming the DMA to transfer data to a location it can access, so that it appears that the DMA is the source of the transaction.
The untrusted processor must not be able to access data in the HRoT SRAM, regardless of whether it does so directly or through another agent (the confused deputy).
The requirement can be expressed as an information flow verification problem, i.e.:
- Information must not flow between the HRoT SRAM and the Core0 processor, whether directly or through any intermediary.
With the Radix no-flow operator “=/=>”, we can write the following security rules. It is not necessary to include DMA signals in the rules, since the path the information may take is not known in advance. The rules will flag a violation if data flows through an intermediate storage location or through transactions initiated by another agent.
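A sketch of the two rules, again with illustrative signal names; note that the DMA never appears.

    // Data in the HRoT SRAM must never reach the untrusted Core0,
    // no matter which intermediary (e.g., the DMA) carries it.
    soc.hrot.sram.rdata =/=> soc.core0.data_in

    // Core0 must never influence the contents of the HRoT SRAM.
    soc.core0.data_out =/=> soc.hrot.sram.wdata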
With these security rules, the Tortuga Logic Radix tool is able to build a security monitor that, when simulated together with the design, will flag any violations of the rules.
The last CWE we’ll discuss is CWE-1263: Improper Physical Access Control. This weakness occurs when a product is designed with access to certain information restricted, but does not sufficiently protect against an unauthorized actor’s ability to access those areas.
Sections of a product intended to have restricted access may be inadvertently or intentionally rendered accessible when the implemented physical protections are insufficient. The specific requirements around how robust the physical protection mechanism needs to be depend on the type of product being protected. Selecting the correct physical protection mechanism and properly enforcing it through implementation and manufacturing are critical to the overall physical security of the product.
Here, we consider the case where a physical probing attack is detected, and we want to ensure the mitigations are effective. This case can be modeled as an information flow verification problem.
In this example, consider the hypothetical SoC below:
The SoC has an anti-tamper detection circuit that detects if the device is de-capped. If tampering is detected, the memories are zeroized so that no information is leaked to an attacker, and the processors halt to avoid leaking any information about running programs.
An attacker may de-cap the device and probe internal signals to access sensitive data in memories or registers, or observe the program running on the embedded processors.
When physical tampering is detected, e.g., by someone de-capping the device, the processors should halt and the memories should be cleared, i.e., zeroized, to avoid leaking sensitive data.
The requirements can be expressed as information flow verification problems, i.e.:
- Once tampering is detected, information must not flow from the memories to any observable output.
- Once tampering is detected, information about the running program must not flow from any processor to an observable output, i.e., each processor must halt.
Using the Tortuga Logic Radix no-flow operator “=/=>”, we can translate the requirements into the following security rules. It’s important to note that rules are also required to verify that each processor is halted.
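A sketch of what these rules might look like; the tamper_detected flag, the $all_outputs shorthand, and the “when” qualifier are assumptions about the hypothetical design and the rule syntax.

    // Once tampering is detected, memory contents must not reach any output.
    soc.sram.rdata =/=> $all_outputs when (soc.tamper_detected)

    // Each processor must halt: its program counter must no longer influence
    // anything observable (this rule is repeated per core; core0 shown).
    soc.core0.pc =/=> $all_outputs when (soc.tamper_detected)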
With the above security rules defined, the Tortuga Logic Radix tool will build a security monitor that, when simulated together with the design, will flag any violation of the rules.
In order to provide true security in deployed devices, a security program that is rooted in hardware is necessary. Our Radix technology is a means to detect and prevent security vulnerabilities within existing functional verification environments. Leveraging MITRE’s CWE database, which provides a roadmap of the vulnerabilities that might exist in your system, enables development teams to identify, isolate, and address those flaws before the device is manufactured.
If you’d like to see an in-depth look at the coverage of Hardware CWEs that Radix provides, including hardware CWEs not discussed in this blog, download our updated coverage guide today.