R-FPGA Security Risks

Using reprogrammable FPGAs is an efficient business approach, but it’s also a security nightmare.

Popularity

Configurable chips have been around for a long time. Modern FPGAs, EPROMs/EEPROMs, and other types of programmable memory have allowed some flexibility in changing chip functionality in the field. But this is really static reprogramming, and it requires a defined process and procedure. Moreover, it needs to be done by knowledgeable programmers, either on site or remotely. The fact remains that field reprogrammability is generally not possible in response to real-time, on-the-fly contextual conditions.

The term “generally” is used because there are some exceptions to the no-real-time statement. One involves reconfigurable FPGAs (R-FPGAs), which are capable of dynamic reprogramming based upon real-time conditions. (The R-FPGA was covered in depth in a related article.)

R-FPGAs promise to be the great enabler for much of the unfolding Internet of Things/Everything (IoT/E). Because many objects will be autonomous, they will need to be able to adapt to different applications and do so in real time. That is the R-FPGA’s brass ring.

R-FPGAs in the Real World
There are a number of scenarios for R-FPGAs, some closer to reality than others. One very likely to be seen soon is in intelligent transportation systems (ITS).

“Automotive systems, especially in the aftermarket, are changing very rapidly, both in terms of the market and the applicable standards,” said Beate Wiessner, head of energy corporate account management and business excellence at Siemens Energy. “In-car entertainment systems are being driven by demands for extra information, more powerful communication features and more extensive controls, integrated with traditional radio, CD and streaming service features. The flexibility of R-FPGA technology allows the capability to update as necessary, so the device is always on the leading edge, with a multitude of intelligent features.”

In another scenario, that same R-FPGA—or a second chip, if a single one can’t handle it all—in a vehicle’s communication system can be used in multiple traffic scenarios. The device can be reconfigured by internal programming, or via remote traffic system servers, to be multifunctional according to conditions. For example, when the vehicle is entering the highway, the R-FPGA can be configured for automatic toll collection, replacing the vehicle’s electronic toll collection (ETC) tag. When the vehicle exits the highway, the R-FPGA is loaded with a traffic control module that makes the vehicle capable of communicating with other vehicles, or with infrastructure equipment, in order to provide and receive peer information for optimizing traffic flow. Later, when the vehicle is seeking a parking spot, the R-FPGA can be loaded with a parking application that leads the driver to the closest parking place and keeps track of the parking time.
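
To make the mechanics concrete, here is a minimal sketch, in Python, of how such context-driven reconfiguration might be orchestrated. The module names, bitstream files, and the load_bitstream() hook are hypothetical placeholders, not a real vendor API.

    from enum import Enum, auto

    class Context(Enum):
        HIGHWAY_ENTRY = auto()   # automatic toll collection
        HIGHWAY_EXIT = auto()    # vehicle-to-vehicle traffic module
        PARKING = auto()         # parking guidance and metering

    # Map each driving context to a (hypothetical) partial bitstream file.
    BITSTREAMS = {
        Context.HIGHWAY_ENTRY: "toll_collection.bit",
        Context.HIGHWAY_EXIT: "traffic_control.bit",
        Context.PARKING: "parking_assist.bit",
    }

    def load_bitstream(path: str) -> None:
        # Placeholder for the device-specific partial-reconfiguration call.
        print(f"reconfiguring R-FPGA with {path}")

    def on_context_change(ctx: Context) -> None:
        # Swap the active hardware module when the vehicle's context changes.
        load_bitstream(BITSTREAMS[ctx])

    on_context_change(Context.HIGHWAY_ENTRY)  # -> toll collection module
    on_context_change(Context.PARKING)        # -> parking module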

But reconfigurable chips, while they solve a slew of problems in the dynamic functionality arena, such as ITS, also open a serious security hole. Imagine the security nightmare the previous scenario might face, with all the various systems it must link to. Self-reconfiguration is a potential gold mine for hackers, and locking down these chips requires a paradigm shift in thinking.

“Engineers need to connect with both the customers and the marketing arm to decide what particular threats need to be considered, especially with reprogrammable devices,” said Ramesh Karri, professor in the Department of Electrical and Computer Engineering at the Polytechnic Institute of New York University. “It is important to make the end user aware of what threats are out there so the design can best be hardened against them. It is a change in mindset.”

It’s also a mindset that needs to be flexible to deal with constantly evolving threats. “It’s a continuous race,” said Christian Wiebus, senior director of platform management for secure card solutions at NXP. “What we do today will certainly need to be revised tomorrow.”

This is particularly true for reconfigurable hardware, because multiple configurations present multiple attack vectors, whereas standard FPGAs and ASICs only have to worry about a single configuration being compromised.

The New Paradigm – Threat Assessment in R-FPGAs
With reconfigurable hardware being implemented in so many applications (communications, power grid management, transportation subsystems, weapons systems, medical devices, consumer products, and others), security breaches can have devastating and far-reaching consequences. Threats to these devices are numerous, come from many vectors, and affect all stages of the development lifecycle. Such threats apply to both reconfigurable and non-reconfigurable chips, so the discussion is valid for standard FPGAs as well as reconfigurable lines. For the reconfigurable iterations, however, the compromises can come from many more vectors, ranging from malicious fabs to virtual threats.

  • The Trojan. A Trojan horse is perhaps the most common attack on hardware. It is, typically, a malicious piece of programming inserted into the IC that looks like a standard function. However, when it executes, it performs some unintended, malicious action. For example, it may be a piece of additional logic that blocks access to a resource (memory, I/O, virtualized devices, etc.). Or it may enable access to restricted data that would otherwise not be available to the sniffer. Or it may reroute instructions. It also may be a time bomb that goes off once the device is deployed, say after a certain sequence of events occurs a certain number of times (a toy model of such a trigger is sketched after this list). The list goes on and on.
  • The back door. This is a special type of code that gives the attacker a hidden way into the system. It can enable or disable functionality; within the security envelope, it could selectively disable the encryption core, for example. This is typically done without the user’s knowledge, and thus it is harder to detect when the attack is occurring. While back doors aren’t as common as Trojans and are more specific in what they are used for, they are harder to detect during testing and can be much more dangerous.
  • The kill switch. This is a more specific type of code used to disable the device, typically by maliciously manipulating the software or configuration that runs on the chip. It is a one-shot attack; once activated, the chip becomes inoperable. There are various renditions of how the kill switch works, but they all do the same thing – kill the chip.
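
To illustrate why such logic is so hard to catch in testing, here is a toy software model of the time-bomb trigger described above. Real hardware Trojans are implemented in silicon logic rather than code; the event names and threshold here are invented for illustration.

    TRIGGER_SEQUENCE = ("reset", "dma_start", "dma_start")  # invented event pattern
    TRIGGER_COUNT = 1000  # payload stays dormant until the pattern repeats this often

    class TrojanModel:
        def __init__(self):
            self.window = []   # sliding window of recent events
            self.matches = 0   # how many times the trigger pattern has appeared

        def observe(self, event: str) -> None:
            self.window = (self.window + [event])[-len(TRIGGER_SEQUENCE):]
            if tuple(self.window) == TRIGGER_SEQUENCE:
                self.matches += 1
                if self.matches == TRIGGER_COUNT:
                    self.payload()

        def payload(self) -> None:
            # In a real Trojan: block a resource, leak data, or reroute instructions.
            print("malicious payload activated")

    # Benign-looking traffic eventually arms the payload, long after testing ends.
    trojan = TrojanModel()
    for _ in range(TRIGGER_COUNT):
        for event in TRIGGER_SEQUENCE:
            trojan.observe(event)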

There are others, but these are the most prolific, and many fall under the generic umbrella of the Trojan.

On the physical side, there are a number of methodologies for compromising both the FPGA and its reconfigurable twin. They range from passive to invasive.

Passive, or non-invasive, attacks probe the device to glean data such as how the internal workings of the chip are set up, or where and what data is stored in it. This approach is becoming less workable with the high pin counts of modern FPGAs, and it is not as big a threat as it once was.

Semi-invasive attacks go a bit further. They remove the packaging, but do not physically damage or alter the chip itself. Such techniques can capture the electromagnetic radiation emitted while the chip is functioning and evaluate it via complex algorithmic analysis.

One popular semi-invasive technique attacks what is called the side channel. This process attempts to obtain information from the physical characteristics of the system. It analyzes parameters such as timing, power consumption, supply voltage variations, and thermal radiation for patterns or routines. Algorithmic analysis of such data can be used to decipher information, which can then be exploited to compromise the system. Side-channel attacks generally require a high level of technical expertise regarding the particulars of the internal workings of the system, as well as sophisticated equipment, so such attacks usually have specific goals or targets in mind.
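
As a concrete illustration of power analysis, here is a minimal correlation-based sketch in Python against simulated traces. The Hamming-weight leakage model, the noise level, and the XOR-with-key target are simplifying assumptions; a real attack would target a nonlinear operation such as the AES S-box and use measured traces.

    import numpy as np

    rng = np.random.default_rng(0)
    SECRET_KEY_BYTE = 0x3C    # what the attacker wants to recover
    N_TRACES = 2000

    def hamming_weight(x):
        # Number of 1-bits in each byte: a common model of switching power.
        x = np.asarray(x, dtype=np.uint8)
        return np.unpackbits(x.reshape(-1, 1), axis=1).sum(axis=1)

    # Simulate the device: each trace leaks HW(plaintext XOR key) plus noise.
    plaintexts = rng.integers(0, 256, N_TRACES, dtype=np.uint8)
    traces = hamming_weight(plaintexts ^ SECRET_KEY_BYTE) + rng.normal(0, 1.0, N_TRACES)

    # Attack: correlate the measured traces against the leakage predicted by
    # every possible key byte; the correct guess correlates best.
    correlations = [
        np.corrcoef(hamming_weight(plaintexts ^ guess), traces)[0, 1]
        for guess in range(256)
    ]
    print("recovered key byte:", hex(int(np.argmax(correlations))))  # 0x3c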

Compromise on the EDA Side
Developing, configuring, and programming a modern FPGA is done largely with a number of sophisticated CAD tools that have been developed by a substantial number of organizations, across many different platforms. Add to that the plethora of generic, standardized intellectual property (IP) cores for design reuse and one can readily see the potential for compromise.

The threat here is the subversion of the design tools. Such subversion could easily result in malicious hardware being inserted into the final implementation. This is especially true for R-FPGAs because their internal interconnect routing is so much more accessible. For instance, if there is a critical piece of functionality, perhaps an encryption core that holds secret keys, there is really no way to verify that the core cannot be tampered with or snooped on unless the entire system, including the IP, is designed and built from the ground up. That means the designer would have to know and use all of the design tools for synthesis, as well as the standard and reconfigurable hardware.

Typical subversion techniques include:

  • Covert channels. These are attacks in which rogue IP cores use a shared resource to transmit information without the authorization or knowledge of that resource’s legitimate users. For example, a malicious core may transmit information about the internal state of the reconfigurable hardware using an IP core that controls a particular I/O (audio, video, RF block, USB, etc.). A toy model of this appears after this list.
  • Side channels. These are another potential vector for attack. IP cores have access to internal resources of the chip, allowing them to tap into side channels much more easily than at the chip or device level.
  • Bypass. This avenue circumvents security features through rerouting, spoofing, or other techniques. An example of bypass is to reroute data from one core to another, or even to I/O pads and therefore out of the chip. Bypass is risky because, unlike narrow-bandwidth covert channels that are difficult to exploit, it presents a large, easily exploitable overt leak in the system.
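
Here is a toy, software-only model of the covert-channel idea, under the assumption that a rogue core can influence how long it holds a shared bus: the secret travels in timing alone and never touches the data lines.

    SLOT_SHORT, SLOT_LONG = 1, 3  # bus-hold durations encoding 0 and 1

    def leak(secret_bits):
        # Rogue core: turn secret bits into a pattern of bus-hold durations.
        return [SLOT_LONG if bit else SLOT_SHORT for bit in secret_bits]

    def snoop(observed_durations):
        # Receiver: anything holding the bus "long" decodes as a 1.
        return [1 if d > SLOT_SHORT else 0 for d in observed_durations]

    secret = [1, 0, 1, 1, 0]
    assert snoop(leak(secret)) == secret  # the secret leaks via timing alone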

While this has improved significantly over the years, nothing is 100% secure unless you have control from inception to production. There is still some truth to Ken Thompson’s well-worn adage, “You can’t trust code that you did not totally create yourself.” You can only trust your final system as much as your least-trusted design. And even though several EDA tools have become de facto standards, and the major EDA developers’ tools are trusted and standardized, one cannot totally eliminate some measure of uncertainty.

Securing the Bitstream
The bitstream provides the information required to program the FPGA. This data often contains proprietary information and represents many months or years of work. It also may contain sensitive data such as a cryptographic key or security process. Protecting the bitstream goes a long way toward preventing intellectual property theft.

What makes bitstreams such a security risk is where they live. The bitstream can be stored in memory or logic on the R-FPGA itself, so locking it down against copying or extraction is a high priority for security on these chips. In other cases, the bitstream is stored elsewhere in the system, in non-volatile external memory, where the attacker can either directly read the memory or eavesdrop on the bus when the FPGA is powered up and the bitstream is fed into the device. Either way, it presents an opportunity for data theft.
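
The standard countermeasure is to encrypt and authenticate the bitstream before it ever sits in external flash, so neither reading the memory nor tapping the configuration bus yields usable data. Below is a minimal sketch using Python and the cryptography package as a stand-in for what the dedicated decryption logic inside an FPGA does; the data is simulated, and secure key storage (eFuses, battery-backed RAM) is outside the scope of the sketch.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # provisioned into on-chip key storage
    bitstream = os.urandom(4096)               # stand-in for a real bitstream
    nonce = os.urandom(12)

    # What sits in external flash: nonce plus ciphertext (GCM appends the tag).
    protected = nonce + AESGCM(key).encrypt(nonce, bitstream, b"board-rev-A")

    # At power-up, the configuration logic decrypts and verifies in one step;
    # a cloned or tampered image fails authentication and is rejected.
    recovered = AESGCM(key).decrypt(protected[:12], protected[12:], b"board-rev-A")
    assert recovered == bitstream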

Attacks on a bitstream can be done in a number of ways, including cloning. Cloning is extremely damaging because once the bitstream is obtained, it is simply a matter of acquiring additional FPGAs and using that bitstream to create replicas of the original device. Counterfeit devices can be sold more cheaply, which has a ripple effect across the product’s entire production cycle, from design to shipment.

Another way to compromise an R-FPGA is reverse engineering. One example is discovering properties of the design by translating the bitstream into some higher-level form, which can expose secrets such as an encryption key.

Finally, there is what is called a readback attack. This attack obtains the bitstream directly from the functioning device. It can occur because many FPGAs allow the configuration to be read directly out of the device, either through the Joint Test Action Group (JTAG) port, an internal configuration access port (ICAP), or another similar bitstream programming interface. This is especially applicable to R-FPGAs because they swap out configurations via bitstream data as the device operates in the field, and each reconfiguration cycle can be an opportunity for compromise.
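
The same readback path can also be used defensively. Here is a conceptual Python sketch, assuming a hypothetical read_back_configuration() hook standing in for the device-specific ICAP/JTAG readback interface, that re-hashes the live configuration after each reconfiguration cycle and compares it against a golden digest.

    import hashlib

    # Stand-in for the live configuration frames; on real silicon this would
    # come from the device's ICAP or JTAG readback interface.
    _current_config = bytes(1024)

    def read_back_configuration() -> bytes:
        return _current_config

    # Digest of the approved configuration, kept in protected on-chip memory.
    GOLDEN_DIGEST = hashlib.sha256(bytes(1024)).hexdigest()

    def configuration_is_intact() -> bool:
        # Re-hash the live configuration and compare with the golden digest.
        return hashlib.sha256(read_back_configuration()).hexdigest() == GOLDEN_DIGEST

    print(configuration_is_intact())  # True until the configuration is altered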

Conclusion
Locking down an FPGA is a fairly new practice. Security always has been an issue, but breaches were few and far between until recently, when FPGAs became inexpensive enough to replace the ASICs they modeled for so long.

R-FPGAs add another level of potential intrusion because they are reconfigurable and have additional vulnerabilities. As these devices become cheaper and more agile, they will be found in more and more devices, most of which will be autonomous and connected in the IoT/E. Coupled with the multitude of interconnect paths of the IoT/E, there is a potential for disaster of almost unfathomable magnitude.

But it doesn’t have to happen. The technology for securing this next generation of FPGAs exists. It just has to be included in designs. And given the threat of something going wrong as the IoT/E gets rolling, that may change rather quickly.


