How Secure Are FPGAs?

With encryption at risk in the post-quantum world, FPGAs have never been more vulnerable, requiring both traditional and novel defenses.


The unique hybrid software/hardware nature of FPGAs makes them tempting targets for cyberattacks, while also enabling them to rebuff attacks and change the attack surface before significant damage can be done.

But it’s becoming increasingly challenging to address all the potential vulnerabilities. FPGAs are often included in larger systems, each with its own unique attack vectors as well as some more common ones, and they are being connected to more devices that may not be as secure. This increases the need for programmability, while also raising concerns about how quickly these devices can respond to threats.

“A lot of government systems use FPGAs, because they know they’re malleable,” said Scott Best, senior principal engineer at Rambus. “They’re field-programmable, so you can update them, in contrast to ASICs.”

While most FPGA vulnerabilities, such as bitstream attacks, are well-known and can be addressed with robust encryption and good design hygiene, quantum computing will introduce novel and more worrisome vulnerabilities for FPGA devices. Encryption algorithms such as AES-256, which is considered unbreakable using conventional computing, might be cracked in several hours using a quantum computer, according to numerous presentations at conferences.

“All integrated circuit designers ought to start considering this now,” said Jayson Bethurem, director of marketing and business development at Flex Logix. “The Cyber Resilience Act that’s been put in place in Europe demands that by 2025, chips and integrated circuits must have some level of cryptography, especially in secure and functional safety systems.” The U.S. has its own cybersecurity initiatives being rolled out in the same timeframe.

FPGA security basics
An FPGA chip consists of a core of building blocks, including programmable logic, surrounded by various kinds of I/O, while an embedded FPGA (eFPGA) is just the core. Key to an FPGA’s functionality is the “bitstream,” essentially an instruction set for the programmable logic, and a frequent target of attacks. Because I/Os and their ports can be a potential attack vector, eFPGAs can be a less vulnerable way to achieve re-programmability.

“SoC security issues are what our customers care about, because the eFPGA sits in the middle of their chip where it’s safe,” said Geoffrey Tate, CEO of Flex Logix. “Most of the attacks come from the interfaces on the outside.”

Traditional FPGAs are SRAM-based devices, which are completely unprogrammed at startup. Most do not have an internal non-volatile memory, so programs must come from an external source. That source needs to be fully secured, which is why a zero-trust supply chain with an audit trail is essential.
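
To make that concrete, here is a minimal sketch, in Python, of the kind of check a secure loader performs before configuring the fabric. Real FPGAs do this in dedicated hardware with vendor-specific formats; the file layout, tag length, and key handling below are purely illustrative assumptions.

```python
import hashlib
import hmac
from pathlib import Path

TAG_LEN = 32  # assumed layout: the last 32 bytes are an HMAC-SHA256 tag

def load_bitstream(path: Path, key: bytes) -> bytes:
    """Authenticate a bitstream before handing it to the configuration engine.

    Hypothetical format: the tag covers every byte that precedes it, and the
    key is provisioned into the device out-of-band by a trusted party.
    """
    blob = path.read_bytes()
    bitstream, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
    expected = hmac.new(key, bitstream, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):  # constant-time comparison
        raise ValueError("bitstream failed authentication; refusing to load")
    return bitstream
```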

Because they are programmable, FPGAs have been targeted by a range of hackers, including some who are simply showing off, as well as state actors intent on extracting valuable information or disrupting key infrastructure.

“There are real, nation-state type of attacks,” said John Hallman, digital verification technology solutions manager with Siemens EDA. “You can look at the funding numbers, like the five-year plans of China, and see how much they have invested in some of their microelectronic capabilities. Being able to understand the developer code and manipulate and change function is being funded by our adversaries.”

Types of attacks
An ACM review paper describes two main categories of attacks. Active attacks seek to disrupt normal functions. Passive attacks, in contrast, leave normal functions intact, masking the real goal of extracting information. Fault injection is the classic example of the former, while side-channel attacks are the classic form of the latter.
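
To make the passive category concrete, consider a toy timing side channel, sketched in Python. The vulnerable version leaks how many leading bytes of a secret are correct through its running time; the fix is a constant-time comparison. This illustrates the principle only, and is not code from any FPGA toolchain.

```python
import hmac

def leaky_compare(secret: bytes, guess: bytes) -> bool:
    """Early-exit comparison: run time grows with the number of
    correct leading bytes, which a patient attacker can measure."""
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:
            return False  # the early exit is the timing leak
    return True

def constant_time_compare(secret: bytes, guess: bytes) -> bool:
    """The standard mitigation: comparison time is independent of
    how much of the guess is correct."""
    return hmac.compare_digest(secret, guess)
```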

One of the most common ways to target an FPGA is a “substitution” attack, in which the attacker reads the bitstream and either substitutes another bitstream or changes the original bitstream and places a compromised version back in.

“It’s basically a way to take over the device, similar to what you do on a processor of taking out the software and putting in your own software,” Hallman said. “A lot of emerging protections are coming out, but it’s one that we’ll have to continue to pay attention to because of how complex FPGAs are. If you have control of that bitstream and can change and write in your own function, you have access to so much in the product or system.”

Worse, attackers have been able to change bitstreams that had already been validated. That means they may have infiltrated the original design, so that when it was deployed it contained code an attacker could exploit. Such problems can be traced to third-party IP, whether in specific blocks downloaded from the Internet, or even from a third-party vendor contracted to build the design.

Fortunately, there is a first-line defense. “There are tools that can be used before the bitstream is deployed that give accessibility and visibility back to the code,” said Hallman. “Do your due diligence with the mitigation verification before you deploy the bitstream.”
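
One simple form of that due diligence is a golden-digest check: before deployment, hash the bitstream and compare it against digests recorded by the trusted build system. A minimal sketch follows; the JSON manifest format is a hypothetical stand-in for whatever the build infrastructure actually produces.

```python
import hashlib
import json
from pathlib import Path

def verify_release(bitstream: Path, manifest: Path) -> None:
    """Refuse to deploy a bitstream whose digest is not on the known-good list.

    Assumed manifest format: a JSON map of build IDs to hex-encoded
    SHA-256 digests, written by the trusted build server.
    """
    digest = hashlib.sha256(bitstream.read_bytes()).hexdigest()
    known_good = set(json.loads(manifest.read_text()).values())
    if digest not in known_good:
        raise RuntimeError(f"{bitstream.name} does not match any released build")
```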

Good design hygiene—a multi-layered approach
In another scary scenario, attackers can target error correction codes. “Here, it’s a matter of how much access they have to the chip, whether it be through the test port or through the I/O directly,” Hallman said. “The more data they can collect, the more they’re going to be able to experiment and determine how they can defeat that type of correction process.”

To circumvent such attacks, it’s important to make tradeoffs between protection and accessibility as early as possible in the design cycle. As with all security, defenses must be multi-layered, with decisions about which layers are needed guided by an understanding of the threat space. Given enough time and effort, attackers will find a way to accomplish their goal, so the countermeasure is to deter them or, in the best case, prevent them entirely from jumping to the next step.

“Break your requirements down to a functional level,” Hallman said. “Identify the threat space early for the particular device and know if it’s going to be susceptible to physical-type attacks, or if you’re going to be limiting this device to some type of remote access. Is there a cyber element that needs to be considered by allowing some type of network access? That access could open up attack vectors, such as being able to reprogram over remote links. Many of those tradeoffs need to happen early. You need to understand the effectiveness of any protection that goes in before you actually deploy it in your particular device.”

All assumptions should be documented, so that systems integrators know if they have to put in first-line or additional protections for the device or product, and any extra circuitry should be identified up front.

“When you put in your protections, make sure you’re understanding what those protections are, and that you quantify how well those protections are working for you so it can be known what else needs to be done at higher levels of integration,” Hallman said. “In the FPGA, there’s logic that’s hooked up to default-type conditions. But there’s also logic that could be changed out of those defaults. With a simple bit flip, whole regions of logic could be activated and become part of the normal functionality. Doing due diligence to remove that type of logic early should be part of common practice.”

Hallman recalled the story of a back door in a chip allegedly created by hackers. It turned out not to be a deliberately planted vulnerability, but rather test logic that could bypass many of the protections set up for the device. As a result, readouts of registers thought to be completely secret could be accessed through this previously unknown capability.

To prevent such issues, FPGA vendors should make test capabilities known to customers proactively. In practice, vendors will share that information if customers go through proper channels, but it’s not something they publicize, for obvious reasons.

In addition, FPGA providers should consider disabling design-for-test functionality once it has served its purpose. “After a part is ready for production and it’s gone through its proper tests, you could blow those fuses and not allow that particular test access anymore,” Hallman said. “It limits some of the re-programmability in the field or it may limit some debug, so there are trade-offs, but there are opportunities to lock out that type of functionality before fielding it.”
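
The mechanism itself is simple to model. In the toy sketch below, a one-time-programmable fuse gates the test port, and blowing it is a one-way transition with no reset, which is exactly the trade-off Hallman describes. Real devices implement this in eFuse hardware; the Python class is only an illustration.

```python
class TestAccessFuse:
    """Toy model of a one-time-programmable lockout for test/debug access."""

    def __init__(self) -> None:
        self._blown = False

    def blow(self) -> None:
        self._blown = True  # one-way transition; there is no way back

    def test_port_enabled(self) -> bool:
        return not self._blown

fuse = TestAccessFuse()
# ... run production test while the port is still enabled ...
fuse.blow()  # lock out test access before fielding the part
assert not fuse.test_port_enabled()
```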

Newer approaches, such as logic locking, use specific keys either to enable the device to be tested in a certain way, or to ensure the device functions only once the correct key is provided. But these are additions to basic security measures, because no single approach prevents all problems.
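
A tiny worked example shows the idea behind key-based locking. In the hypothetical two-key-bit sketch below, XOR “key gates” are inserted on the inputs of an AND function; the correct key makes the XORs cancel out, while every wrong key corrupts the truth table for at least one input pattern.

```python
from itertools import product

CORRECT_KEY = (1, 0)  # hypothetical key, provisioned after manufacturing

def intended(a: int, b: int) -> int:
    """The function the designer actually wants: a AND b."""
    return a & b

def locked(a: int, b: int, key) -> int:
    """Locked netlist with XOR key gates inserted on the inputs."""
    k0, k1 = key
    return (a ^ k0 ^ 1) & (b ^ k1)

# Only the correct key reproduces the intended truth table.
for key in product((0, 1), repeat=2):
    unlocked = all(locked(a, b, key) == intended(a, b)
                   for a, b in product((0, 1), repeat=2))
    print(key, "unlocks" if unlocked else "corrupts the logic")
```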

Post-quantum attacks
For all the work to keep out the current generation of attackers, the coming quantum era will present even greater challenges because of quantum computing’s threat to bitstream encryption, one of the fundamental layers of FPGA security. Rambus advises that the first step to security is knowing where vulnerable cryptography is deployed in FPGAs.
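
That inventory step can start as something very simple. The sketch below classifies deployed algorithms against the usual guidance: Shor’s algorithm breaks RSA and elliptic-curve schemes outright, while Grover’s algorithm roughly halves the effective strength of symmetric ciphers and hashes. The list of deployed algorithms is a hypothetical input.

```python
# Rough quantum-exposure classifications, following standard guidance.
QUANTUM_RISK = {
    "RSA-2048":   "broken by Shor's algorithm",
    "ECDSA-P256": "broken by Shor's algorithm",
    "AES-128":    "effective strength halved by Grover's algorithm",
    "AES-256":    "strength halved by Grover, but margin still large",
    "SHA-256":    "preimage margin reduced by Grover-style search",
}

def audit(deployed: list) -> None:
    """Flag each deployed algorithm with its known quantum exposure."""
    for alg in deployed:
        note = QUANTUM_RISK.get(alg, "no quantum speedup catalogued here")
        print(f"{alg}: {note}")

audit(["RSA-2048", "AES-256"])  # hypothetical inventory for one device
```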

With state-actor quantum computers predicted to be fully functional circa 2030, the time to plan is already upon the industry, especially since some quantum encryption-breaking schemes could be in place as early as 2025. “Everybody has to start thinking two years in advance,” Flex Logix’s Bethurem said.

The concerns are doubled for chips that go into long-lived devices. “Automotive chipmakers and others, like the industrial sector, have to be thinking about protecting their chips from a security point of view for a decade to two decades from now,” said Flex Logix’s Tate.

To stay ahead in the game, Flex Logix advocates “crypto-agility,” a dynamic cryptographic scheme. “People harden their security solutions using an AES algorithm in their device,” Bethurem said. “That AES algorithm is private, but it’s fixed in function. If somebody ran enough data through quantum compute capability, or even through AI, they could unlock the recipe for that cryptography. Because that’s a fixed function, even FPGA companies stamp that out as hardware. Once that’s cracked, you’re vulnerable.”

The solution may be a variation on salting, in which baked-in cryptography schemes are “salted” with another layer of dynamic and flexible protection. “This is like RSA keys, which are always dynamically changing,” Bethurem said. “But it’s not just a key. It’s a whole cryptography engine. Those could be built into adaptable hardware. Having that flexibility going forward is going to be critical. You have to always change the keys to your front door.”

The foundation of crypto-agility is to have a programmable block in the security system. “If it’s hardwired and gets hacked, what do you do? If it’s programmable and gets hacked, you can at least update to a new algorithm and download it to all of the products that need it. And maybe you just continually change your security every month or so. Don’t give hackers the time to figure out how to hack it,” Tate said.
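
In software terms, crypto-agility reduces to a dispatch table rather than a hard-wired function. The sketch below, using only Python’s standard library, treats the active MAC algorithm as configuration: if one entry is ever considered compromised, an update changes a single field instead of requiring new silicon. The registry and names are illustrative, not any vendor’s API.

```python
import hashlib
import hmac

# Algorithms the device knows how to run. In a crypto-agile design this
# table lives in reprogrammable fabric or updatable firmware, not fixed gates.
MAC_REGISTRY = {
    "hmac-sha256":   lambda key, msg: hmac.new(key, msg, hashlib.sha256).digest(),
    "hmac-sha3-256": lambda key, msg: hmac.new(key, msg, hashlib.sha3_256).digest(),
}

# The active algorithm is configuration, not silicon. A field update can
# rotate it without touching the rest of the design.
ACTIVE_ALGORITHM = "hmac-sha256"

def authenticate(key: bytes, message: bytes) -> bytes:
    return MAC_REGISTRY[ACTIVE_ALGORITHM](key, message)
```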

One additional issue to consider in such scenarios is that updating must be transparent to the end user. It’s hard to imagine consumers being aware and compliant enough to continually update their cars’ security systems.

Yet even in a post-quantum world, the basic operation of an FPGA/eFPGA still leaves room for hope. “There’s always a piece that could be updated and changed. Reconfigurability is something we can leverage to help even the playing field,” said Siemens’ Hallman.

Another possible strategy comes from the ACM review, which discusses obfuscation. Already used in situations where encryption is unworkable, it could be worth exploring as a potential post-quantum security defense.

Ultimately, Hallman insists that whether in a pre- or post-quantum world, security always requires more than one line of defense. “We have to recognize that we’re not going to be able to just rely on encryption. We need to look for other methods of protection. We must continue the idea of layered approaches.”

Mike Borza, scientist at Synopsys, agreed, noting it’s always best to think through security as early as possible. “It’s much easier to design security in than it is to go back and retrofit it. Going back to retrofit is likely to leave holes, or leave you with compromises that you wouldn’t have had if you’d thought about it at the beginning. Generally, I advocate for doing security early on. It doesn’t need to be a major exercise or preoccupy the design team for a lot of the design, but it’s something that you can design in, plan for, and then at appropriate points, you either make the necessary security features available or you test that they’ve been implemented properly and that they behave as they’re intended.”

Finally, the EDA industry is starting to build design tools that look for security vulnerabilities catalogued in the CWE (Common Weakness Enumeration, https://cwe.mitre.org) and/or identify well-known patterns that lead to vulnerabilities. There is still much work to be done, but at least the chip industry is taking security seriously these days.

Further Reading
Software-Defined Hardware Architectures
Hardware-software co-design is established, but the migration to multiprocessor systems adds many new challenges.
IC Security Issues Grow, Solutions Lag
Signing off on hardware security may involve lifetime updates; AI adds unknowns that are difficult to trace.


