Measuring how injected faults propagate through a design and how long they remain in the system.
When consumers think about security for their electronic gadgets, financial applications probably spring first to mind. Identity theft and unauthorized access to bank and investment accounts are a constant threat. But money is not the only concern: stories of webcams and smart speakers being hacked are all over the web. Users rightfully demand that device manufacturers provide a high level of security to protect their families.
Internet of Things (IoT) applications also present considerable risks. Malicious agents who get access to home or office security systems can unlock doors for easy access by burglars or lock occupants out. Modifying temperature settings by hacking a smart heating, ventilation, and air conditioning (HVAC) network can cost businesses a lot of money. There are many such scenarios, some still theoretical and others that have already happened.
The upshot is that security of electronic systems has become far more important in today’s highly connected world. There are always bad actors poking and probing everything they can find on the Internet. They attempt to compromise confidentiality by accessing private information, integrity by modifying or destroying information, or availability by denial-of-service attacks. The “CIA” of electronic systems must be protected.
This requirement, combined with the complexity of modern systems, means that software security is no longer enough. Hardware security must be built into chips from the earliest stages of the design process. Hardware security can be divided into two categories. Architectural security is responsible for avoiding design vulnerabilities that can be exploited by malicious agents. Tampering security ensures that the silicon cannot be probed or faulted directly.
Architectural security violations occur at the interfaces between secure and non-secure blocks, or between secure and non-secure transactions. Passing data from a secure source to a non-secure destination constitutes a leak of private information, while passing data from a non-secure source to a secure destination can introduce a fault trigger. Guarding these boundaries requires hardware functions such as encryption, decryption, authentication, and a root of trust.
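The two interface rules above can be sketched as a simple checker. This is an illustrative toy, not a real tool's API; the block names, the transaction format, and the `check` function are all assumptions made for the example.

```python
# Toy checker for the two interface rules (illustrative only):
#   secure source -> non-secure destination  => possible information leak
#   non-secure source -> secure destination  => possible fault trigger

def check(transactions, secure_blocks):
    """Flag rule violations in a list of (source, destination) transfers."""
    violations = []
    for src, dst in transactions:
        if src in secure_blocks and dst not in secure_blocks:
            violations.append((src, dst, "possible information leak"))
        if src not in secure_blocks and dst in secure_blocks:
            violations.append((src, dst, "possible fault trigger"))
    return violations

secure = {"key_vault", "crypto"}                     # hypothetical secure blocks
txns = [("key_vault", "dma"),                        # leak: secure -> non-secure
        ("uart", "crypto"),                          # fault trigger: non-secure -> secure
        ("crypto", "key_vault")]                     # fine: secure -> secure
print(check(txns, secure))
```

A real flow enforces these rules structurally in the design rather than auditing transaction logs after the fact, but the classification logic is the same.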
It is vital to verify both types of hardware security during register transfer level (RTL) simulation as early as possible in a chip project. Architectural security is verified by reachability checks, such as whether keys contained within a secure module can be accessed by unauthorized agents. Tampering security is verified in simulation by using proxies for the vulnerabilities, such as keys stored in memory.
RTL simulation can model injected faults as "taints" that act as security proxies, allowing simulation to verify how taints travel and how long they remain in the system. This verifies both architectural security and tampering security by measuring permeability (how far taints propagate through the design) and permanence (how long taints remain stored in registers or memory). Simulation with taints detects design bugs that lead to security issues.
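To make permeability and permanence concrete, here is a toy model (not the commercial engine) of a taint bit shifting through a register pipeline. The `simulate` function and its parameters are invented for illustration: permeability is the deepest stage the taint reaches, and permanence is how many cycles any taint remains in the pipeline.

```python
# Toy permeability/permanence measurement on a 4-stage shift register.
# A taint is driven on the input for a window of cycles, then cleared.

def simulate(stages, cycles, inject_cycle, clear_cycle):
    """Return (permeability, permanence) for a tainted input window."""
    pipe = [False] * stages          # one taint bit per register stage
    reached = 0                      # deepest stage the taint reached
    tainted_cycles = 0               # cycles with taint still in the pipe
    for cycle in range(cycles):
        src = inject_cycle <= cycle < clear_cycle   # tainted input window
        pipe = [src] + pipe[:-1]     # one clock edge: shift by one stage
        if any(pipe):
            tainted_cycles += 1
            reached = max(reached, max(i + 1 for i, t in enumerate(pipe) if t))
    return reached, tainted_cycles

# Inject taint on cycles 2-4; it reaches all 4 stages and lingers 6 cycles.
print(simulate(stages=4, cycles=20, inject_cycle=2, clear_cycle=5))  # (4, 6)
```

High permeability means a fault can corrupt state far from the injection point; high permanence means the corruption survives long enough to be exploited.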
Any simulator capable of security verification must be able to apply taints and track their propagation through the RTL design. The simulator must have an "algebra" to model how taints propagate through all types of RTL code constructs and design elements. Finally, taint tracking must expose security vulnerabilities without altering the behavior of the design itself.
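One common form such an algebra takes (an assumption here, not a description of T-Prop's internals) is that every operator ORs its operands' taint bits into its result, so taint flows wherever data flows while the computed values stay unchanged:

```python
# Toy taint algebra: a value carries a taint bit alongside its data.
from dataclasses import dataclass

@dataclass
class Tainted:
    val: int
    taint: bool = False

    def _op(self, other, fn):
        # Data is computed normally; the taint bit is side metadata, so
        # tracking never changes the design's functional behavior.
        return Tainted(fn(self.val, other.val), self.taint or other.taint)

    def __add__(self, o): return self._op(o, lambda a, b: a + b)
    def __and__(self, o): return self._op(o, lambda a, b: a & b)
    def __xor__(self, o): return self._op(o, lambda a, b: a ^ b)

key = Tainted(0x3A, taint=True)     # secret value, marked as tainted
data = Tainted(0x0F)                # ordinary untainted data

masked = key & data                 # taint survives masking
summed = data + Tainted(1)          # no tainted operand, so no taint
print(masked.val, masked.taint)     # 10 True  (0x3A & 0x0F == 0x0A)
print(summed.val, summed.taint)     # 16 False
```

A production engine must also handle control paths, clocks, and X-propagation, which is where the hard engineering lies; this sketch covers only simple data-path operators.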
The figure below shows an example of a system-on-chip (SoC) design for which hardware security is required. The specific case shown verifies that the JTAG input pin has no path to private data in the system flash memory. The user sets a taint on the input pin and a monitor on the flash memory, and the simulator checks whether the taint ever propagates to the monitor.
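At its core, this check is a reachability question over the design's connectivity. The sketch below replays it as a breadth-first search on a hypothetical SoC graph; the block names and edges are made up for illustration and do not correspond to the figure's actual design.

```python
# Toy reachability check: does a taint on the JTAG pin reach the flash?
from collections import deque

# Hypothetical connectivity graph of the SoC (invented for this example).
edges = {
    "jtag": ["debug_bridge"],
    "debug_bridge": ["noc"],
    "noc": ["cpu", "flash_ctrl"],
    "cpu": ["noc"],
    "flash_ctrl": ["flash"],
}

def taint_reaches(src, monitor):
    """BFS from the tainted source; True if the monitored block is reachable."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == monitor:
            return True
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(taint_reaches("jtag", "flash"))  # True: a path exists, flagging a violation
```

Static connectivity is only an upper bound: dynamic taint propagation in simulation additionally accounts for whether the path is actually exercisable under the design's control logic.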
The use of taints to verify hardware security is not just theoretical or academic; a commercial solution is available. The Synopsys VCS simulator has taint propagation (T-Prop) functionality that meets all the requirements above. This enables users to set, trace, and monitor taints as proxies for malicious attacks. The simulator contains a taint propagation engine that handles all RTL constructs, including data paths, control paths, operators, and clocks.
It can be challenging for users to visualize how taints propagate. The Synopsys Verdi debug and verification management solution receives taint information from simulation and displays results in a way that is easy to understand. The figure below shows an example. The user taints an “if” statement data input while the “if” condition is selecting the other data input, so the taint is not propagated. When the tainted data input is selected, the taint propagates to the output. The data taint is then released and the condition is tainted, so the taint propagates immediately to the output.
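The three phases of that scenario can be replayed in a toy model (not Verdi itself): an "if" behaves like a mux, so a tainted data input propagates only while the condition selects it, and a tainted condition taints the output directly. The `if_stmt` helper below is an invented illustration.

```python
# Toy model of taint through an "if": values are (data, taint) pairs.

def if_stmt(cond, a, b):
    """Return (value, taint) for `a if cond else b`; the output inherits
    taint from the selected input and from the condition itself."""
    val, taint = a if cond[0] else b
    return (val, taint or cond[1])

a = (1, True)                     # tainted data input
b = (0, False)                    # clean data input

print(if_stmt((0, False), a, b))  # (0, False): tainted input not selected
print(if_stmt((1, False), a, b))  # (1, True): tainted input now selected
a = (1, False)                    # data taint released
print(if_stmt((1, True), a, b))   # (1, True): tainted condition propagates
```

The third case matters most for security: even with clean data everywhere, an attacker who controls a branch condition influences the output, so the taint on the control path must reach the result.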
Many types of hardware security verification can be performed with taint propagation, including:
Many users have employed these capabilities to find serious security issues in their RTL designs, including detecting stale data that should have been deleted and discovering processor vulnerability to PACMAN attacks. The combination of Synopsys VCS and Synopsys Verdi allows chip designers and verification engineers to build hardware security in from the start and ensures that it protects the chip and system as intended. This process is now mandatory for many electronic system applications.
Jean-Philippe Martin of Intel Corporation will present “Security Verification through State Flow Visualization with Taint Propagation in Synopsys VCS and Verdi” at the Synopsys SNUG Silicon Valley event held March 20-21 in Santa Clara, California. A white paper is also available to provide more details on how T-Prop works and the types of security issues it can find.