Staying A Step Ahead Of Hackers With Continuous Verification

Utilizing digital twins to reduce the risk that updates to programmable hardware introduce new vulnerabilities.

We’re all familiar with the apps on our phones and how often they get updated. Most of the time, these updates are done over the air quickly and easily. Other times, a completely new download of the software is required. But let’s take a look at the hardware platforms the software runs on. What happens when the hardware needs to be upgraded? Today’s hardware platforms are expensive to develop, especially if we look at the more complex systems that run critical infrastructure, industrial IoT, autonomous vehicles, or safety-critical applications. To amortize the cost of these systems, feature upgrades need to be incremental and customizable throughout the lifetime of the product. Heterogeneous platforms that include programmable logic, programmable engines, and accelerators deliver the customization and flexibility that allow updates in the field.

As the various incarnations of the hardware platform occur, however, the threat rises of bugs, or worse, malicious logic, finding their way into the design. The hardware also needs to keep pace with the evolving ways that hackers can infiltrate the design. What’s needed is a continuous verification strategy to ensure that the hardware functions as intended and is secure at all times.

Verification performed pre-fabrication is often an extensive and expensive process and produces an enormous amount of information. This information, though, is often left behind once the design enters fabrication, and as shown in the figure, there is still much left in the lifecycle that requires verification. Fortunately, we continue to find innovative ways to advance technology, and one such advancement is the introduction of the “digital twin” to aid the continuous verification effort.

A digital twin is a virtual representation of a system or product that serves as the real-time digital counterpart of a physical object. The digital twin allows the physical product to be modeled all the way down to the IC. For this example, we demonstrate how the digital twin may be reprogrammed, just as would happen in the physical system itself.

The digital twin utilizes all of the design models and verification originally performed pre-fabrication. The ability to combine this original model data and physical dependency models with real-time data enables more system simulation and predictive analysis to improve end-to-end processes. It also makes the original automated verification methods and analysis solutions available for use with the most current models of the design in the system. Furthermore, if new security weaknesses or vulnerabilities become known, the model of the IC may be analyzed for impact. While not all exploits may be discovered prior to deployment, the continuous verification environment enables much faster risk assessment and mitigation for issues that might otherwise go undetected or undiagnosed for months or even longer.

The recent SolarWinds hacking incident, which left many Fortune 500 companies and US government networks exposed, is a cautionary tale for unchecked software and hardware supply chain security vulnerabilities. The attack occurred during an update process.

In typical verification processes, the update would be verified in isolation prior to creation of the signed binary. Details are still forthcoming on exactly where in the update process the malware was introduced, but let us assume the verification of the updated code prior to binary creation did not reveal any abnormal behavior. The binary was signed and then delivered for deployment. Most systems rely on the signed binary, and then potentially do some “sandbox” testing before full deployment. However, with this particular attack, even a few weeks of rigorous testing would not have discovered the anomalous behavior, due to the timed-release nature of the malware. You may ask, so what could have been done differently?
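To make the signed-binary step concrete, here is a minimal Python sketch of an update integrity check, using an HMAC-SHA256 tag in place of a full public-key signature scheme; the payload, key, and function names are all hypothetical illustrations, not any vendor’s actual update mechanism:

```python
import hashlib
import hmac

def sign_update(binary: bytes, key: bytes) -> bytes:
    """Vendor side: produce an HMAC-SHA256 tag over the update binary."""
    return hmac.new(key, binary, hashlib.sha256).digest()

def verify_update(binary: bytes, tag: bytes, key: bytes) -> bool:
    """Device side: recompute the tag and compare in constant time."""
    expected = hmac.new(key, binary, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

firmware = b"\x02\x00update-payload"   # hypothetical update binary
key = b"shared-secret-key"             # hypothetical pre-shared key
tag = sign_update(firmware, key)

assert verify_update(firmware, tag, key)                 # untampered update passes
assert not verify_update(firmware + b"\x90", tag, key)   # later tampering is caught
```

The SolarWinds case illustrates the limit of this kind of check: a valid signature only proves who produced the binary, not what it does, so malicious code inserted before the signing step passes verification unchanged.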

To answer this, we propose a hypothetical scenario in which the hardware is programmable, as is the case with field programmable gate arrays (FPGAs), and will receive a maintenance update. One may argue that programmable hardware should be considered software; however, the verification of these devices relies on more traditional hardware verification methods. For the purposes of this scenario, we will therefore assume that hardware verification practices are needed and will be performed. Using disciplined processes, the update would be functionally verified in the virtualized digital model. Leveraging the pre-fabrication verification, a rigorous coverage-driven verification and completeness process, along with security assessment, can be repeated. For higher assurance, additional analysis and methodologies may be introduced that would not otherwise have been available.
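As a toy illustration of the coverage-driven process mentioned above, the sketch below drives random stimuli at a stand-in model of the programmable logic until every coverage bin has been hit. The two-operation ALU, the bins, and all names are assumptions for illustration; real coverage closure involves far richer coverage models:

```python
import random

def dut_model(op: int, a: int, b: int) -> int:
    """Hypothetical device-under-test model: op 0 = 8-bit add, op 1 = 8-bit xor."""
    return (a + b) & 0xFF if op == 0 else (a ^ b) & 0xFF

# Coverage bins: each operation crossed with the result's top bit.
coverage_bins = {(op, msb) for op in (0, 1) for msb in (0, 1)}
hit = set()

random.seed(0)  # reproducible stimulus stream
while hit < coverage_bins:
    op = random.randint(0, 1)
    a, b = random.randint(0, 255), random.randint(0, 255)
    result = dut_model(op, a, b)
    hit.add((op, result >> 7))  # record which bin this stimulus exercised

assert hit == coverage_bins  # coverage closure reached
```

The point of encoding closure as data like this is that the same criterion can be replayed, unmodified, against the model of every subsequent field update.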

In the case where continuous verification methodologies exist, models of the updated hardware may continue to be verified in the system model, and updates may be introduced to the model well in advance of deployment in the actual system. Real-time data from the deployed update, simulated in the system, may also offer more insight into any anomalous behaviors detected. While this may still not detect all potential exploits, we must take advantage of the methods and processes available. This continuous verification environment offers an incredible opportunity to increase the testing visibility and testing time we have with these systems.
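One simple form this continuous re-verification can take is replaying stored stimuli against both the golden behavior captured pre-fabrication and the model of the updated hardware, flagging any divergence before field deployment. The sketch below assumes both behaviors are available as plain Python functions; all names are hypothetical:

```python
def golden_model(x: int) -> int:
    """Reference behavior captured during the original verification effort."""
    return (x * 3) & 0xFF

def updated_model(x: int) -> int:
    """Model of the hardware after the maintenance update."""
    return (x * 3) & 0xFF  # a buggy or malicious change would diverge here

def continuous_verify(reference, candidate, stimuli):
    """Replay the stored stimuli and return those where the models disagree."""
    return [s for s in stimuli if reference(s) != candidate(s)]

mismatches = continuous_verify(golden_model, updated_model, range(256))
assert mismatches == []  # updated model matches the verified reference behavior
```

Because the check runs in the digital twin rather than on deployed hardware, it can be repeated on every update, and on every newly disclosed vulnerability, without touching the physical system.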

OneSpin provides a set of key technologies that assist in the continuous verification effort, so that modifications to the hardware don’t introduce new bugs, safety issues, or security vulnerabilities. A newly published white paper further describes the life cycle of SoC verification from pre-fabrication to over-the-air updates. Download the white paper at https://www.onespin.com/resources/white-papers.


