Guiding Principles To Ensure Your Hardware Is Secure

Determining how and where hardware security holes arise is a critical step in developing secure devices.


Modern society relies on complex, intelligent electronic systems. Automotive, avionics, and medical applications, smartphones, communication and 5G networks, critical infrastructure, data centers, and many others are ever more dependent on integrated circuits (ICs) that deliver high performance, low power consumption, safety, cybersecurity, and continuity of operation. Hardware is so central to competitiveness and user experience that large companies have reversed established trends and invested in building IC development teams dedicated to their products. Google, Facebook, Tesla, Alibaba, and Apple are prominent examples.

Many people are familiar with software vulnerabilities and the resulting need to frequently install patches and the latest releases of software tools and mobile apps. However, in recent years, industry executives, policymakers, and the public have been exposed to an increasing number of articles presenting semiconductor components as a concerning source of cybersecurity risks for vehicles, IoT devices, network equipment, and even defense and critical infrastructure systems. In 2015, news of the Jeep hack reached mainstream publications. Meltdown, Spectre, and other microprocessor security flaws discovered by security researchers were widely reported by news outlets like the BBC, going well beyond the scientific and hardware engineering community. The Cisco router bug discovery, where an FPGA device could be partly reverse engineered and remotely reprogrammed to switch off critical security features, highlighted how an otherwise minor hardware detail could be leveraged to expose the communications of sensitive institutions. The Huawei 5G controversy also shows how difficult it is to assess critical network equipment security independently. Finally, it is worth noting that organizations like the World Economic Forum have published articles to raise awareness that hardware is the foundation of the pervasive electronic systems that are so crucial to modern society, and therefore deserves much scrutiny.

At the consumer level, hardware security features are not yet clear product differentiators, like display resolution, camera pixels, memory size, processor performance, and power consumption. However, hardware security is already at the top of the agenda of many organizations and institutions. Determining how and where hardware security holes arise is a critical step in developing secure devices.

As hardware is being designed, the pre-silicon design steps, including RTL coding and IP integration, may introduce weaknesses and vulnerabilities in hardware components. A simple error in the reset value of a register could grant unrestricted access to a critical asset at boot time. Under rare conditions, an encryption key could be temporarily stored in a register visible in the address map and accessible by software. In some corner cases, a register used to configure security functions could be writable by an unauthorized peripheral. The handling of protected information could have minor but measurable effects on execution timing, power consumption, or other characteristics observable by a low-privilege process. Illegal bus transactions, violating interface protocol rules, could interfere with secure boot execution. Faults, such as bit flips in memory cells, could disable security functions.
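The reset-value error mentioned above can be sketched in a few lines. The following is a hypothetical software model (invented for illustration, not from any real design) of a memory-mapped lock register protecting a critical asset; a single-bit mistake in its reset value leaves the asset exposed at boot, before any firmware runs.

```python
# Toy model of an access-control register guarding a secret asset.
# LOCK_ON_RESET = 1 is the intended reset value: the asset stays
# protected from power-on until an authorized unlock sequence runs.
LOCK_ON_RESET = 1

class AccessControlReg:
    def __init__(self, reset_value=LOCK_ON_RESET):
        # The value the lock flop takes when the chip comes out of reset.
        self.lock = reset_value

    def read_asset(self, secret):
        # Reads are refused while the lock bit is set.
        if self.lock:
            raise PermissionError("asset locked")
        return secret

good = AccessControlReg()               # correct design: locked at boot
bad = AccessControlReg(reset_value=0)   # one-bit RTL coding error
# With the buggy reset value, the secret is readable at boot time,
# before any software has a chance to lock it down:
print(bad.read_asset("encryption-key"))
```

The point of the sketch is that nothing is functionally "broken" in the buggy variant; it simply boots into the wrong security state, which is exactly the kind of issue functional testing of the intended use case tends to miss.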

Unlike target users, attackers spend significant resources to misuse and abuse a hardware component. Functional bugs in corner cases that are irrelevant to the target application can become a valuable security-breaking feature. Meltdown, Spectre, and Orc show that even hardware that has been implemented correctly can be vulnerable to transient execution (or micro-architectural side-channel) attacks. Simple power analysis (SPA) and differential power analysis (DPA), examples of physical side-channel attacks, leverage power consumption measurements to extract secret data. Focused ion beam (FIB) or voltage glitch controllers are popular ways to insert faults for malicious purposes. An interesting and often overlooked aspect of physical attacks such as fault injection is that they are often considered to require expensive equipment and physical access to the device, thus significantly reducing the number of potential attackers. Unfortunately, many physical attacks can also be carried out remotely through software processes that use dynamic voltage and frequency scaling (DVFS) functionality or specific memory access patterns (see Rowhammer), for example.
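Side channels need not involve power probes at all; timing alone can leak secrets. The following self-contained sketch (an invented example, not from the article) shows the classic early-exit comparison leak, using loop iterations as a stand-in for execution time: the iteration count reveals how many prefix bytes of a guess match the secret, letting an attacker recover it byte by byte.

```python
import hmac

def leaky_equals(secret: bytes, guess: bytes):
    # Early exit: returns at the FIRST mismatching byte, so the
    # iteration count (a proxy for execution time) grows with the
    # length of the matching prefix -- an observable leak.
    steps = 0
    for s, g in zip(secret, guess):
        steps += 1
        if s != g:
            return False, steps
    return len(secret) == len(guess), steps

secret = b"s3cret"
_, t_bad = leaky_equals(secret, b"xxxxxx")   # wrong first byte: 1 step
_, t_good = leaky_equals(secret, b"s3cref")  # 5 bytes match: 6 steps
# t_good > t_bad tells the attacker the prefix "s3cre" is correct,
# even though both comparisons returned False.

def safe_equals(secret: bytes, guess: bytes) -> bool:
    # Constant-time comparison from the standard library removes the leak.
    return hmac.compare_digest(secret, guess)
```

The hardware analogue is the same: any data-dependent variation in timing or power, however small, is a potential channel once an attacker can measure it repeatedly.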

Principle 1:
Security is not an afterthought, something that can be bolted onto an existing product. On the contrary, security must be built into the system from the concept phase and taken care of during the entire development lifecycle. One of the challenges is that the methodology must be applied to all the system’s components, including the silicon. Although each system and application has specific requirements, hardware components and semiconductor IPs are often developed out of context. IPs that are not developed following a rigorous security-by-design approach can only target an ever-shrinking set of applications.

Principle 2:
Another important aspect to consider is that security is a very dynamic field. New vulnerabilities are discovered daily, while the system is already operating in the field. The development environment continues to play a critical role after production. A model-based analysis is essential for an efficient, comprehensive assessment of the impact of a component vulnerability and for validating patches and updates. This is no longer true only for software components but also for hardware. With highly configurable heterogeneous computing platforms, FPGAs, and eFPGAs powering advanced AI-based applications, the development environment must support the continuous validation and verification of hardware updates.

DevSecOps (development, security, operations) is an approach, increasingly popular in software development but also applicable to hardware, which integrates security in a continuous development and delivery flow. Source: Google.

Principle 3:
Hardware security issues should be found as early as possible and using the cheapest method available. Weaknesses introduced during IP RTL coding should be found during IP security verification, not during netlist simulation or system validation. Issues that can be detected using automated, standardized solutions should not be found through ad-hoc, effort-intensive processes. This is important both for reducing development cost and for increasing quality. Whenever possible, verification solutions should not rely on expert input and user-defined workloads. Simulation-based verification often misses vulnerabilities because it only examines a narrow subset of all possible hardware behaviors. While this may be sufficient to ensure that the hardware behaves as expected in the intended use-case scenarios, it does not provide adequate coverage of the misuse-case scenarios that an attacker may construct. Formal methods are better suited to systematically detect vulnerabilities through an exhaustive analysis of all possible workloads, including misuse-type scenarios.
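The gap between sampled stimuli and exhaustive analysis can be illustrated concretely. Below is a hedged sketch (the privilege-check function and its bug are invented for illustration): a handful of random "simulation" vectors can easily miss a corner-case backdoor, while enumerating the entire input space, which a formal tool does symbolically rather than by brute force, is guaranteed to expose every violation of the stated security property.

```python
import itertools
import random

def write_allowed(master_id: int, secure_mode: bool, debug: bool) -> bool:
    # Intended rule: only master 0 may write the protected register,
    # and only in secure mode.
    # Bug: a leftover debug strap bypasses the check for master 3.
    if debug and master_id == 3:        # corner case left in by mistake
        return True
    return secure_mode and master_id == 0

# Security property: if a write is allowed, the master must be 0.
def violation(master_id, secure_mode, debug):
    return write_allowed(master_id, secure_mode, debug) and master_id != 0

# Full input space: 256 masters x secure_mode x debug = 1024 combinations.
space = list(itertools.product(range(256), [False, True], [False, True]))

# "Simulation": 20 random stimuli -- only a ~4% chance of hitting the
# 2 buggy combinations, so the backdoor usually goes unnoticed.
random.seed(1)
sim_hits = [v for v in random.sample(space, 20) if violation(*v)]

# "Formal": exhaustive analysis finds every counterexample.
formal_hits = [v for v in space if violation(*v)]
print("simulation found:", sim_hits)
print("exhaustive analysis found:", formal_hits)
```

In practice a formal tool proves or refutes the property over all reachable states without enumerating them one by one, but the coverage argument is the same: the misuse case is in the space whether or not a testbench happens to stimulate it.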

OneSpin offers solutions that can help protect against security threats early and effectively. We’ve written a white paper that focuses on the hardware security issues designers face and the innovative ways to prevent them. The paper is split into two parts. Part 1 focuses on design attacks aiming to insert malicious logic in semiconductor IPs and ICs. Part 2 focuses on architectural and implementation-level bugs, vulnerabilities, and weaknesses that expose hardware to violations of security requirements. Download the paper titled “Trust Assurance and Security Verification of Semiconductor IPs and ICs: Part 1 and Part 2” at
