Are FPGAs More Secure Than Processors?

Implementing security remains challenging, regardless of the hardware platform.


Security concerns often focus on software being executed on processors. But not all electronic functionality runs in software. FPGAs provide another way to do work, and they can be more secure than functions executed in software.

FPGAs provide more control of hardware and are more opaque to attackers. In the case of embedded FPGAs, the designer is in complete control of the entire system. That means relying less on hardware that someone else designed. It also means that the final design is unlikely to be publicly documented, meaning that many attacks would have to be preceded by a difficult reverse-engineering task.

“If you’re worried about security, an FPGA block is going to be much harder in all cases for somebody to crack,” said Geoff Tate, CEO of Flex Logix.

An enormous amount of electronic functionality is accomplished by writing software that executes on processors. The security implications and the need to protect such systems have been well documented. But FPGAs are also a popular way to render functions in hardware instead of software. While this is sometimes done as a prelude to dedicated silicon circuitry in an ASIC, FPGA accelerators are popular in data centers. FPGAs also appear in fledgling industries, where requirements may change rapidly.

Fig. 1: On the left, a function is executed in software. On the right, the same function is executed in an FPGA-based accelerator. Source: Bryon Moyer/Semiconductor Engineering

Security in FPGA-based systems is no less important than security in processor-based systems. But FPGAs have some fundamental differences that help with the security task. The major differences have to do with both the availability of information and the control that hardware engineers have.

Harder to hack what you don’t know
Processors are either built by hardware companies or designed by IP companies for inclusion in an SoC. In the latter case, the hardware team integrating the processor into the chip has limited visibility into the processor block. The focus is on the interconnect, not the details of the logic in the processor. Once configured and connected, the processor is available to software engineers for creating code.

In order to create code, the hardware architecture of the processor must be well documented. The instruction set, the pipeline, the I/Os, the memory architecture, the internal timing — all of these must be known in order to create the best-performing software. But the availability of all of that information also puts the processor at risk because it’s available not only to well-meaning coders, but also to attackers looking for weaknesses and ways to intrude.

FPGAs are less transparent. Off-the-shelf versions have a well-documented high-level architecture, but lower-level details are left to the design software. While processors must accommodate programmers who want to work in assembly language and manage every clock cycle, FPGA designers long ago lost the ability to manually control how the different logic blocks are configured. The scale of design is so huge that, practically speaking, the place-and-route tools do a far better job on a complete design than hand-tweaking ever could.

As a result, FPGAs don’t document the low-level details that would be necessary for manual design. Even the bitstreams used to configure FPGAs remain undocumented. It may be possible to reverse-engineer some of those details, but that requires a lot of work — far more work than is needed to understand a processor. “Good luck trying to figure out which bits [of the bitstream] go where,” observed Tate. There is even greater obscurity in an FPGA fabric that’s embedded in an SoC (an eFPGA). “When you’re talking about an embedded FPGA, it’s even harder.”

In addition, processors tend to control entire systems, while also hosting the various software programs that need to run. So even though it’s possible for an FPGA to control a system, that’s not typical. A processor is usually in charge, and the FPGA accelerates specific delegated tasks. An attack on a processor risks the attacker taking control of the system, but that is an unlikely result for an FPGA attack. “Even if you did take control of the FPGA, you can’t take control of the system,” said Tate.

Processors also take center stage when managing communications, and so the communication channel becomes a means of trying to access the processor. FPGAs tend to operate at a lower architectural level, so attacking them through a communication channel may be less successful.

Documentation may not be the only vulnerability for processor-based systems. The ability to probe around to find what’s happening or even affect behavior is easier with processors. “FPGAs provide better security than processor-based systems because processor-based systems are susceptible to trial-and-error attacks in which adversaries use techniques such as memory buffer overflows to change the processor’s behavior,” said Ray Salemi, FPGA solutions manager at Mentor, a Siemens Business. “On the other hand, FPGAs load their configuration bits from a ROM at startup and there is no way for the data traveling through the data paths to modify the underlying logic.”
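As a minimal illustration of the attack class Salemi describes, consider an unchecked copy into a fixed-size stack buffer in C; the function names here are hypothetical. Payload data that overruns the buffer can overwrite adjacent stack memory, including the saved return address, and thereby change the processor’s behavior. No comparable mechanism lets data flowing through an FPGA datapath rewrite the configured logic.

```c
#include <string.h>

/* Classic stack buffer overflow: if `request` is longer than 63 bytes,
 * the copy runs past `buf` and clobbers adjacent stack memory,
 * potentially including the saved return address. */
void handle_request(const char *request) {
    char buf[64];
    strcpy(buf, request);          /* no bounds check: the vulnerability */
    /* ... process buf ... */
}

/* A bounded copy closes this particular hole. */
void handle_request_safely(const char *request) {
    char buf[64];
    strncpy(buf, request, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';   /* strncpy may not NUL-terminate */
    /* ... process buf ... */
}
```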

Easier to secure if you’re in control
Processors, whether pre-designed or as configurable IP, provide limited control to a hardware designer. An off-the-shelf processor gives no control. It is what it is. Processor IP in an SoC provides a greater level of control, but it’s still at a high level, where you specify things like how many instances of different blocks are laid down, or which I/Os are created and how many. “With an SoC, the internals of the actual hardware are a black box,” said Jason Oberg, co-founder and CTO of Tortuga Logic. “You can configure what stuff does at the software level, but you really don’t know how the software interacts with the hardware.”

The detailed design of those blocks and I/Os is out of the chip designer’s control. That’s intentional, of course — it’s the basis for the productivity enhancement that IP provides. But any security vulnerabilities can’t be corrected by the purchaser of that IP. “The more of the system that you control and can analyze, the higher your understanding of the system, your knowledge of the security, and your overall assurance are going to be,” said Oberg.

With an off-the-shelf FPGA, a hardware designer has complete control over the logic that gets implemented in the fabric of the FPGA. “A big reason why FPGA security is seen as better is because of the customization of the design in your hands as opposed to somebody else’s,” said John Hallman, product manager for trust and security at OneSpin Solutions. “That last customization piece is yours.”

Verification is also more transparent. “In an FPGA, where you’re building your own custom ASIC, you have much more white-box analysis,” said Oberg. “This allows you to look at a lot of different corner cases and helps you to design a more robust system, rather than trusting that a vendor has done this properly.”

But there is a fair amount of other logic in an FPGA over which the designer has no control. At the very least, there is the hardware infrastructure that enables the configuration of the FPGA. Many FPGAs also have hardened processors and other blocks. “In a modern SoC-based FPGA, you’re still heavily reliant on the ASIC portion of the FPGA to bring the part up securely,” noted Oberg. A hardware designer using such an FPGA has little control over those hard logic blocks.

An embedded FPGA (eFPGA) provides even more control. All of the logic connecting the FPGA block to the rest of the system is under the control of the designer. “In the case of an eFPGA, you’re building your own ASIC and you can fully control the whole process — how your actual bitstream is loaded and where it gets stored,” said Oberg. Any blocks connecting to the FPGA are either selected by or designed by the design team. Any concerns about security vulnerabilities can be addressed in a way that’s not possible with an off-the-shelf FPGA.

This is, of course, a double-edged sword. “The embedded FPGA gives you an area of reprogrammable logic, but it does not provide you a template for securing that logic,” cautioned Hallman. “It’s up to you as the integrator to customize your security. You’ve got to come up with your security plan and your verification plan to ensure that what you’re deploying meets your security needs.” So the benefit accrues only if the designer knows how to implement all of the necessary security details.

There’s also a security benefit to building an SoC that contains a processor, on-chip memory, and an eFPGA. That means that more signals remain on-chip, making them harder to snoop. “There are potential benefits to having the software and hardware working together in the same device,” said Joe Mallett, senior marketing manager at Synopsys. “There are limited external memory interactions, more secure channels, and tight coupling of software and hardware.”

Like software, FPGA code also can be updated, both to improve functionality and to fix security holes. “People are doing over-the-air ‘firmware’ updates in the FPGA, where you’re modifying the underlying hardware functions,” said Hallman. But updating an FPGA takes more rigor than updating software. “Updating FPGAs has been seen traditionally as a more robust process.”

Changing FPGA code in a manner that provides the new desired functions while fitting within the old boundaries, achieving the right timing, and sticking with the existing I/Os can be more difficult than changing software. But it still has value. “There certainly is a risk of not being able to close timing in a particular area,” said Hallman. “Those challenges remain, but you at least have a fighting chance of reconfiguring, as opposed to in an ASIC. It’s one step better.”

That said, the security software stack will rely on the security features built into the hardware. If that hardware is updated, it might break the software stack. “The security hardware can be updated to meet the current needs, but at the same time it may make the old software stack stop working,” Mallett noted. “This may be a benefit due to the forced update on both the hardware and software at the same time.”

Fig. 2: An eFPGA within a larger chip. The infrastructure supporting the eFPGA is under the control of the designer. Source: Flex Logix

Reconfiguration of an eFPGA under the control of the processor, however, represents yet another vulnerability. “eFPGAs show that security is only as strong as the weakest link,” said Salemi. “If an eFPGA can be reconfigured by an embedded processor in the SoC, then an adversary who gets control of the embedded CPU can reconfigure the embedded FPGA to install a back door. For example, one could reconfigure an eFPGA to implement a previously compromised version of a communications interface, thus opening up the entire chip to attack.”

These kinds of concerns rise to a fever pitch in markets such as automotive, robotics and avionics, where safety and security overlap. In addition to typical security measures, on-chip data traffic patterns need to be monitored for irregularities that indicate aberrant behavior. The same is true for thermal hot spots or unusual thermal activity, which may or may not be related to normal aging and use cases.

“We have enhanced security features to protect against things like differential power attacks,” said Manuel Uhm, director of silicon marketing at Xilinx. “We also have multiple temperature sensors within the die, because you can get hotspots and customers need to know where these hotspots are to accommodate a thermal cooling solution. We test for aging, as well. All of this is critical for how long devices last in the field, and also how long we carry them. So we have customers still buying Virtex-5 chips [introduced in 2006]. And in automotive, you may need to cover this stuff for 20 to 25 years.”

Visible tampering
Tampering is always a concern for both hardware and software. A standard FPGA has its hardware committed and verified. Regardless of whether it’s inherently secure, an attacker would not be able to tamper with that basic hardware. But the contents of the fabric could still be tampered with by messing with the bitstream and altering the image that’s loaded.

This was shown to be possible several years ago in a paper presented by a team of researchers at Technische Universität Berlin using contactless probing.

“In addition to creating electrical signals, these transistors emit a tiny amount of infrared, and they will also modulate an infrared signal, depending on whether a transistor channel is active or not active,” said Scott Best, technical director of anti-counterfeiting products at Rambus. “So with a very precise infrared laser, you can bounce the laser off of a transistor, and it comes back and tells you whether that transistor is on or off, whether or not there’s voltage there. You can synchronize your sensing to when a transistor is switching, or when a signal on a bus wire is transitioning. There is some material prep required in order to do this, because to get high resolution you usually thin down the backside of the chip from 250 microns thick to maybe 25 microns thick. Now the infrared escapes easily through the backside of the silicon, and you get really high resolution for sensing the infrared emission.”

With an eFPGA, the array is instantiated in soft form, so a rogue designer could attempt to tamper with the design before it’s committed to silicon. But the array nature of the fabric makes it much harder to change without that change being evident upon visual inspection. It’s much easier to tweak something in a design that’s a relative jumble of gates, where the change will be harder to detect. “When you have an array structure like an embedded FPGA or a memory, it’d be very hard to put a Trojan circuit in there without disturbing the unified array structure,” noted Tate.

In addition, some critical users of eFPGAs require other ways of detecting possible tampering. For instance, the intended power profile of the chip under a typical workload can be simulated and then compared against measurements from the actual chip once built. If they differ too dramatically, it indicates something is amiss. Here again, it’s not so much about preventing tampering as about making any attempt at tampering more evident.
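A simplified version of that comparison might look like the following C sketch, which flags a part when the RMS deviation between measured and simulated power samples exceeds a tolerance. All names, units, and thresholds here are illustrative assumptions, not any vendor’s actual methodology.

```c
#include <math.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical tamper check: compare measured power samples against a
 * simulated reference profile. Sustained deviation beyond `tolerance`
 * (e.g., extra switching activity from a Trojan circuit) flags the part. */
bool power_profile_matches(const double *measured,
                           const double *reference,
                           size_t n_samples,
                           double tolerance) {
    double sum_sq_err = 0.0;
    for (size_t i = 0; i < n_samples; i++) {
        double err = measured[i] - reference[i];
        sum_sq_err += err * err;
    }
    double rms_err = sqrt(sum_sq_err / (double)n_samples);
    return rms_err <= tolerance;   /* false => investigate for tampering */
}
```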

eFPGA bitstream details usually aren’t released in data sheets and application notes. They’re typically made available only to serious customers under NDA, and the company offering the eFPGA could decline to work with someone having questionable motives. That’s not ironclad protection of the bitstream contents, but it does provide another layer of protection.

While standard FPGAs come with a memory that holds their contents, an eFPGA usually uses the boot ROM for storage. A standard FPGA could do that too, of course, but the bitstream block would be stored in a format determined by the FPGA maker. With an embedded FPGA, a designer is free to control where in memory that bitstream goes, and it will often be much smaller than the boot code itself. A determined design engineer could even scramble the bitstream so that it is sprinkled around different parts of the memory rather than sitting in one contiguous block.
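As a sketch of what such scattering could look like, the following C fragment stores bitstream blocks at keyed pseudo-random offsets instead of contiguously. The permutation, constants, and function names are hypothetical, not any vendor’s scheme.

```c
#include <stdint.h>
#include <string.h>

#define N_BLOCKS 4096u   /* power of two, so any odd stride permutes all slots */

/* Keyed permutation of block indices: (i * stride + offset) mod N_BLOCKS
 * is a bijection whenever `stride` is odd, so every block lands in a
 * unique slot. */
static uint32_t scramble_index(uint32_t i, uint32_t stride, uint32_t offset) {
    return (i * stride + offset) & (N_BLOCKS - 1u);
}

/* Scatter bitstream blocks into flash at keyed pseudo-random offsets,
 * rather than as one contiguous, easily identified region. */
void store_scattered(const uint8_t *bitstream, uint8_t *flash,
                     size_t block_size, uint32_t stride, uint32_t offset) {
    for (uint32_t i = 0; i < N_BLOCKS; i++) {
        uint32_t slot = scramble_index(i, stride, offset);
        memcpy(&flash[(size_t)slot * block_size],
               &bitstream[(size_t)i * block_size], block_size);
    }
}
```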

One further way of preventing bitstream tampering is to encrypt the bitstream in storage, decrypting only when it loads into the device. In an off-the-shelf FPGA, any such encryption will be under the control of the company selling the FPGA. With an eFPGA, the specific encryption technique and the means of protecting the key remain under the control of the designer.

“The customers don’t need to tell us how they map the eFPGA configuration code into their flash memory,” said Tate. “They can encrypt some or all of the flash memory and decrypt in the SoC. They can stripe, or use other algorithms to ‘hash’ the addresses so the bitstream is stored in a non-linear fashion. Or they can do both.”
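A minimal C sketch of that decrypt-on-load flow appears below. The keystream generator is a deliberate placeholder (a xorshift PRNG) standing in for a real authenticated cipher such as AES-GCM; the names and dataflow are illustrative assumptions, not any vendor’s loader.

```c
#include <stddef.h>
#include <stdint.h>

/* Placeholder keystream (xorshift32) -- NOT real cryptography. The point
 * is the dataflow: ciphertext sits in external flash, plaintext exists
 * only in the on-chip buffer feeding the eFPGA configuration port.
 * The key/state must be nonzero. */
static uint32_t keystream_next(uint32_t *state) {
    uint32_t x = *state;
    x ^= x << 13;  x ^= x >> 17;  x ^= x << 5;
    return *state = x;
}

void load_bitstream(const uint8_t *flash_ct, uint8_t *onchip_pt,
                    size_t len, uint32_t key) {
    uint32_t state = key;   /* real designs: key never leaves a hardware
                               key store */
    for (size_t i = 0; i < len; i++)
        onchip_pt[i] = flash_ct[i] ^ (uint8_t)keystream_next(&state);
    /* onchip_pt now feeds the eFPGA configuration port; a real loader
       would also verify an authentication tag before configuring. */
}
```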

Closing the back door
For both processor-based and FPGA-based systems, there remains one notorious way of creeping into the system — the JTAG (or other debug) port. This port provides access to huge regions of the internal logic. While the organization of the internal scan chains may be tedious to reverse engineer, it’s not impossible.

For this reason, where security is particularly important, designs often disable the JTAG port after design and manufacturing are complete. This is sometimes accomplished by irreversibly blowing a fuse. If this option is used, the JTAG port will never again be available to help diagnose any issues.
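As a rough illustration, a boot-time routine might program a one-time-programmable fuse bit that gates the JTAG TAP. The register addresses and bit positions below are invented for this sketch; real parts document their own eFUSE and debug-lock mechanisms.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical memory-mapped eFUSE registers -- addresses and layout
 * are invented for illustration only. */
#define FUSE_CTRL   ((volatile uint32_t *)0x40001000u)
#define FUSE_STATUS ((volatile uint32_t *)0x40001004u)
#define JTAG_DISABLE_FUSE (1u << 3)

/* Blow the JTAG-disable fuse. Irreversible: the debug port can never be
 * re-enabled, which also removes it as a failure-analysis tool. */
void blow_jtag_fuse(void) {
    *FUSE_CTRL = JTAG_DISABLE_FUSE;            /* program the OTP bit */
    while ((*FUSE_STATUS & JTAG_DISABLE_FUSE) == 0)
        ;                                      /* wait for programming */
}

bool jtag_is_disabled(void) {
    return (*FUSE_STATUS & JTAG_DISABLE_FUSE) != 0;
}
```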

Another alternative is to disable the JTAG port through the bitstream. The final image can contain an instruction that disables the JTAG port. If the device is returned in the future with a need for failure analysis, the company that created the bitstream could load a private bitstream that enables the JTAG port for internal use only.

It’s also possible to password-protect the JTAG, although this presents the challenge of managing the key. Will a single key be used for all devices? Then hacking one device will unlock every other device. But using a different key for every device now presents a provisioning challenge for a feature that’s unlikely to be used often. So this can be a more difficult option to implement.

All of these considerations also are affected by the technology from which the FPGA is built. The vast majority of FPGAs in use today store their configuration in SRAM, which must be reloaded at power-up. This is where bitstream management is most important. It also is more flexible for, say, reloading a bitstream that enables the JTAG port.

Non-volatile configuration in a flash-based device eliminates the need to load a bitstream at every power-up. Devices are configured during manufacturing, making it difficult for an attacker to change the contents of a device once it’s deployed. That said, Oberg noted that SRAM-based devices can use redundancy, if necessary, for resilience against both single-event upsets and changes to the contents — assuming that any tampering happens to only one of a redundant set.
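Such redundancy is commonly implemented as triple modular redundancy (TMR), with a majority vote across three copies of a value. Here is a minimal C sketch of the voting logic — shown as software for readability, though in practice it would be synthesized into the fabric:

```c
#include <stdint.h>

/* Majority voter for triple modular redundancy (TMR). If one of the
 * three redundant copies flips -- via a single-event upset or tampering
 * with just one copy -- the bitwise vote still yields the correct result. */
static inline uint32_t tmr_vote(uint32_t a, uint32_t b, uint32_t c) {
    return (a & b) | (b & c) | (a & c);
}

/* Flag disagreement so the system can log it or rewrite the bad copy. */
static inline int tmr_mismatch(uint32_t a, uint32_t b, uint32_t c) {
    return (a != b) || (b != c);
}
```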

Multi-tenancy – processors and FPGAs
The data center provides a different security challenge — multi-tenancy. With processors, the idea is that more than one client can make use of a server. It’s up to the server infrastructure to ensure that information from one client can’t leak over to the other client. This is done both by separation of memory and by time-sharing the processor itself.

There is also interest in providing multi-tenant access to FPGA accelerators in the data center. “Partitioning has become very important for maintaining some level of isolation,” said Hallman. Rather than time-sharing a processor, this would space-share the FPGA fabric, with one portion containing the logic being used by one client and another portion serving another client.

This is, of course, much more complicated than sharing a processor. How will the different images be managed? Can they be loaded or reconfigured independently of the other client’s part of the logic, even while it’s running? How would hardware be walled off between the two clients? “I’ve had some specific conversations where customers want to allow accessibility to parts of the FPGA design, but they want to lock down other parts, especially in a data-center application,” noted Hallman.

The hardware element that would enable this partitioning is the partial-reconfiguration circuitry. That defines the granularity with which the FPGA fabric can be reprogrammed without touching other parts of the FPGA. But beyond that, there are no physical barriers to signals crossing between domains. Hallman said that enforcement of the partitioning must be done through verification, by proving that independent domains don’t affect each other.

While this is an intriguing scenario, it’s probably not as well established as the isolation techniques in place for processor-based multi-tenancy. And the fact that it’s hardware rather than software makes this a more difficult capability to enable securely. At present, it’s not common. “Most customers aren’t reprogramming on the fly during operation,” said Tate.

Conclusion
The choice between implementing functionality in software on a processor or in an FPGA involves a classic tradeoff. Software is more flexible and easier to change. FPGAs are more flexible than hardened logic, but still less flexible than software. Security becomes another consideration in this decision. Where the strongest security is needed, FPGAs and embedded FPGAs may keep the system safer than software can.

But it’s not a panacea. “An FPGA implementation has many of the same challenges as a traditional software/microprocessor, such as encrypting flashes, secure boot, and protection from in-system attacks,” said Mallett. While there may be an opportunity for better security, there’s still plenty of work to be done.

—Ed Sperling contributed to this report.
