Using Formal Verification To Prevent Catastrophic Security Breaches

In the wake of the Yahoo hack, chip makers need to increase the focus on security verification.


The news of last week’s Yahoo hack, which affected some 500 million users, sent shock waves of anxiety far and wide. It’s not clear yet how the massive data breach occurred or through what means the hackers accessed the network. One possible entry point is the chips that drive the network, which are often vulnerable to attacks on their operational integrity.

It’s no surprise, then, that semiconductor companies are searching for a viable verification strategy that can be implemented in a design and verification flow to thwart security breaches in chips powering the information grid. Finding a workable solution is no small endeavor given how sophisticated hackers have become, especially those who are state sponsored.

That solution could be formal verification and, in fact, more and more verification groups are turning to this versatile tool. A few examples help illustrate its strengths.

The first is a common hardware vulnerability: an unexpected signal path used for data transfer or operational control, reached either through a design’s regular operational conduits or through secondary components. These could include the scan path, debug interfaces, or an unanticipated memory read/write.

Another example is the manipulation of a device’s power rail to track private data access, the so-called “side-channel” attack. Of course, regular channels also open up vulnerabilities that may be used to target device security.
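A toy sketch, entirely hypothetical and not drawn from any real attack, may help show why side channels matter. Here a naive byte-by-byte key comparison exits early on the first mismatch, so the number of comparison steps (standing in for an observable like power draw or elapsed time) leaks the key one byte at a time:

```python
# Hypothetical illustration: the iteration count of a naive compare
# models a physical side channel such as power draw or timing.
SECRET = bytes([0x4F, 0x12, 0xA9, 0x03])

def naive_compare(guess: bytes):
    """Return (match, steps) -- steps models the observable side channel."""
    steps = 0
    for g, s in zip(guess, SECRET):
        steps += 1
        if g != s:            # early exit leaks how many bytes matched
            return False, steps
    return len(guess) == len(SECRET), steps

def recover_key() -> bytes:
    """Recover SECRET byte by byte by maximizing the leaked signal."""
    known = b""
    for _ in range(len(SECRET)):
        def signal(b: int):
            guess = (known + bytes([b])).ljust(len(SECRET), b"\xFF")
            match, steps = naive_compare(guess)
            return steps, match   # correct byte runs longest (or matches)
        known += bytes([max(range(256), key=signal)])
    return known
```

In hardware the fix is a constant-time (and constant-power) comparison, which is exactly the kind of property formal tools can be asked to check.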

The most effective way to prove such a path does not exist is to check every state a block can reach and confirm that none of those states allows either a confidentiality breach or an operational integrity problem.

Formal verification is the tool of choice when it comes to security verification, due to the exhaustive nature of the checks that may be applied. For example, formal is well suited to proving that no unexpected path exists along which a confidential key might be read, or that no combination of operational scenarios can occur together to cause an integrity problem.
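The exhaustive spirit of such a check can be sketched on a deliberately tiny, hypothetical model: enumerate every reachable state of a block and test a security property in each one. The model below intentionally contains a leak (a key latched onto the bus survives entry into debug mode) so the search finds it:

```python
from collections import deque

# Hypothetical toy model: state = (mode, key_on_bus).
# Security property: the key must never be on the bus in debug mode.
def step(state, inp):
    mode, key_on_bus = state
    if inp == "enter_debug":
        return ("debug", key_on_bus)   # bug: bus is not cleared on entry
    if inp == "exit_debug":
        return ("normal", False)
    if inp == "read_key":
        return (mode, mode == "normal")  # key read granted in normal mode only
    return state                          # "idle"

def reachable_states(init):
    """Breadth-first enumeration of every state the block can reach."""
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        for inp in ("enter_debug", "exit_debug", "read_key", "idle"):
            nxt = step(s, inp)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def violates(state):
    mode, key_on_bus = state
    return mode == "debug" and key_on_bus

bad = [s for s in reachable_states(("normal", False)) if violates(s)]
```

The search flags the state reached by reading the key in normal mode and then entering debug without clearing the bus, the kind of sequence a directed testbench can easily miss. Real formal tools prove the same property symbolically rather than by explicit enumeration.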

The Trojan War (1260–1180 B.C.) taught us a subterfuge tactic known as the “Trojan Horse.” It’s been updated with a modern spin, where a malicious code segment is inserted into a design prior to implementation, to be activated later. FPGA designs are vulnerable to an attack like this when the final bitstream is loaded into the FPGA. Equivalence checking (EC), a subset of formal verification, can be used to detect whether late-stage code has been added by comparing the final design with the original coding, looking for additional segments or operations.
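A minimal sketch of the idea, again with hypothetical functions: compare a “golden” pre-implementation model against the final netlist over every input vector. A Trojan that flips the output only on one rare trigger pattern shows up as a mismatch:

```python
from itertools import product

def golden(a, b, c, d):
    """Original, trusted combinational function."""
    return (a and b) or (c and not d)

def netlist(a, b, c, d):
    """Final implementation with a hidden Trojan (hypothetical)."""
    trigger = a and not b and c and d                 # rare trigger pattern
    return ((a and b) or (c and not d)) != trigger    # XOR flips the output

def check_equivalence(f, g, n_inputs):
    """Exhaustively compare two functions; return all mismatching vectors."""
    return [v for v in product([False, True], repeat=n_inputs)
            if f(*v) != g(*v)]

diff = check_equivalence(golden, netlist, 4)
```

Brute-force enumeration only works for toy input widths; production EC tools reach the same verdict with SAT solvers and BDDs, but the contract is identical: any mismatch pinpoints inserted or altered logic.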

Of course, EC is only effective when there is an original design segment to compare the final code against, and this is often not the case in RTL design. One effective option to halt a Trojan Horse breach at the RTL level is an iterative process in which a set of assertions that inspect design operations is run against the code. Coverage analysis is applied to identify sections of the design code that remain untested, and additional assertions are written to cover them. Any section of code with no apparent relationship to the expected behavior will be singled out and eliminated.
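The coverage loop can be sketched with a hypothetical toy model: run assertion-style checks derived from the spec while recording which branches execute, then flag code the assertions never reached. A hidden branch that fires only on a magic operand is exactly what the coverage report singles out:

```python
# Hypothetical sketch of coverage-driven assertion analysis.
covered = set()

def alu(op, x, y):
    """Toy design model with an undocumented trigger branch."""
    if op == "add":
        covered.add("add")
        return (x + y) & 0xFF
    if op == "sub":
        covered.add("sub")
        return (x - y) & 0xFF
    if x == 0xDE and y == 0xAD:        # hidden Trojan trigger
        covered.add("trojan")
        return 0x42                     # overrides documented behavior
    covered.add("default")
    return 0

# Assertions derived from the spec exercise documented operations only.
assert alu("add", 200, 100) == (300 & 0xFF)
assert alu("sub", 5, 9) == ((5 - 9) & 0xFF)
assert alu("nop", 1, 2) == 0

ALL_BRANCHES = {"add", "sub", "default", "trojan"}
uncovered = ALL_BRANCHES - covered      # branches no assertion reached
```

The `trojan` branch stays uncovered no matter how thoroughly the documented behavior is asserted, which is precisely the signal that the code has no relationship to the expected behavior and deserves scrutiny.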

The Yahoo hack was yet another wake-up call to chip makers. They need to stay a step or two ahead of hackers and view formal verification as an excellent solution to thoroughly test an SoC design, preventing a possible catastrophe.