Back Doors Are Everywhere

Despite fears about hardware breaches, access to firmware and chip data is more common than you would expect.


By Ernest Worthman & Ed Sperling

Back doors have been a part of chip design since the beginning. One of the first open references was in the 1983 movie “WarGames,” which features a young computer whiz who uses one to hack into a computer that controls the United States’ nuclear arsenal.

In reality, modern back doors predate Hollywood’s discovery by about 20 years, starting in 1965 with a time-sharing OS called Multics. [See reference 1] At that time the U.S. Department of Defense was trying to develop a secure operating system, and Multics was the result.

“The first back doors were discovered during the evaluation of Multics by the U.S. Air Force,” notes Pankaj Rohatgi, director of engineering in Rambus’ Cryptography Research Division. Two pioneers, Paul Karger and Roger Schell, both in the military at the time, were assigned to evaluate the security of Multics. As part of that analysis, they inserted what were then called “trap doors,” written in PL/1, which were used to test the system for security breaches.

Karger and Schell are considered the fathers of modern computer security. Schell laid the groundwork for formal security evaluation with the creation of the Orange Book. [See reference 2]

What’s a back door?
Back doors have been around for millennia. The Trojan Horse of ancient Greece was the first recorded example, and the name is still equated with malware. The World War II Enigma machine, which was used to encode and later decipher German military communications, is another example. Today, the term generally refers to any type of unauthorized access to an electronic device, whether through a hidden entrance or an undocumented way to gain access. (This discussion focuses on chips and their systems, and the issues around hardware back doors.)

Back doors come in many forms, and not all of them are the work of devious minds. In fact, many so-called back doors are design flaws that can later be exploited to compromise security; Heartbleed, a bug in OpenSSL’s heartbeat extension, is a well-known software example. Given the sheer complexity of large SoCs, not to mention the increased use of third-party IP in many designs, it is also possible to build in extra circuitry that allows an outsider to take control of a device.

“If I were an attacker, I would contribute as many IP blocks into open source as possible,” says Serge Leef, vice president of new ventures at Mentor Graphics. “Every IP comes with a testbench that proves the IP can perform what it claims to do. So if you have a USB IP block and you run it across the testbench that came with the IP, you can confirm this IP works as a USB controller. But what else does this thing do? If you have 20,000 lines of Verilog in the IP, no human can grasp that level of complexity. What’s not in use is not visible. Open source is the easiest place to add back doors, trap doors and Trojans. For blocks of any size, these will never be found.”

Verification IP that comes with buggy or infected IP likewise may not show all of the corner cases. But considering that even legitimate VIP doesn’t necessarily identify all the corner cases, this typically doesn’t set off any alarm bells. Smart programmers can build in corner cases that don’t show up unless they are activated.
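
As a software analogue of that point, consider a hypothetical command handler for a peripheral block: the supplied tests exercise only the documented commands, so a dormant trigger hidden behind an unusual input sequence never shows up. Everything in this sketch, from the opcodes to the magic value, is invented purely for illustration.

```python
# Hypothetical illustration: a command handler that passes its supplied
# tests but contains a dormant, undocumented trigger. All names, opcodes,
# and values are invented for this sketch.

DOCUMENTED_COMMANDS = {0x01: "reset", 0x02: "read_status", 0x03: "write_config"}

class PeripheralModel:
    def __init__(self):
        self.privileged = False        # hidden state the tests never observe
        self._history = []

    def handle(self, cmd, arg=0):
        self._history = (self._history + [cmd])[-3:]   # remember last three opcodes

        # Undocumented trigger: a specific three-opcode sequence plus a magic
        # argument silently enables a privileged mode. The supplied tests only
        # drive documented commands, so this path is never exercised.
        if self._history == [0x7E, 0x7E, 0x03] and arg == 0xDEAD:
            self.privileged = True
            return "ok"                # looks like a normal response

        if cmd in DOCUMENTED_COMMANDS:
            return "ok:" + DOCUMENTED_COMMANDS[cmd]
        return "error:unknown_command"

# The vendor-style test only checks documented behavior, and it passes.
if __name__ == "__main__":
    dev = PeripheralModel()
    assert dev.handle(0x01) == "ok:reset"
    assert dev.handle(0x02) == "ok:read_status"
    assert dev.handle(0xFF) == "error:unknown_command"
    print("documented-behavior tests pass; the hidden trigger is never touched")
```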

“A couple of companies have tried to solve this with formal (verification),” says Leef. “We think that’s hopeless because you need to know what you’re looking for. If a key is hidden somewhere, the idea is that you don’t want the key to leak onto the bus. But there are many other ways to access that key. The only hope of finding Trojans—and in this case I’m using Trojan as a broad term—is monitoring functions that can recognize different behavioral patterns.”

He compares this to analyzing traffic patterns on a freeway and identifying which cars are moving too fast, which ones aren’t obeying the metering lights, and which are moving slower than the rest of the traffic. Anomalous behavior is often easier to spot than the back door circuitry itself, a strategy that has been employed in network security for years.
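
A minimal sketch of that monitoring idea, using invented per-block bus traffic counts: flag any block whose activity sits far outside the baseline of its peers, much like spotting the car moving far too fast on the freeway. The block names, numbers, and threshold are all hypothetical.

```python
# Minimal sketch of behavior-based Trojan detection over hypothetical
# per-block bus traffic counts; the data, names, and threshold are invented.
from statistics import median

def flag_outliers(tx_counts, threshold=5.0):
    """Return blocks whose traffic deviates sharply from the fleet baseline."""
    values = list(tx_counts.values())
    med = median(values)
    mad = median(abs(v - med) for v in values) or 1   # robust spread estimate
    return [name for name, count in tx_counts.items()
            if abs(count - med) / mad > threshold]

if __name__ == "__main__":
    # Transactions observed per monitoring window (hypothetical numbers).
    window = {
        "cpu": 1200, "dma": 1150, "usb": 1180, "gpu": 1220,
        "crypto": 1190, "debug_bridge": 9800,   # wildly out of line
    }
    print(flag_outliers(window))                # ['debug_bridge']
```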

Good and bad
Not all back doors are bad, though. Some are built into devices for reliability or testing reasons. “Just about every chip in the world has a factory test capability,” notes Richard Newell, senior principal product architect for the SoC Products Group at Microsemi. “Chip manufacturers have to have a way to test chips to make sure they yield so the product sold to the customer is of high quality.”

That test capability is essential for quality control. So are field upgrades to fix bugs inside of chips and to update capabilities to keep devices from becoming obsolete. For all of those reasons, some sort of back door is necessary. In the 1990s, PC makers added capabilities into PC hardware that would allow them to power on those computers while they were still on a company loading dock to burn in the corporate image, thereby saving IT departments thousands of man hours.

But a hardware back door also can be used to compromise a device’s firmware or security mechanisms, allowing manipulation of programs and data. This is different from attacking the programming from a software code perspective, which is what viruses and malware do. That is not to say that back doors cannot be used to load viruses and malware, but chip back doors typically are used to alter the code in the chip’s firmware.

“There are many types of back doors,” notes Rohatgi. “The most obvious is that there is some extraneous process that is running or some extra code that has been inserted that isn’t part of the normal programming. The more sophisticated it is, the harder it is to detect. The best back doors are in the binary. That way, it will never show up in the source code. There is also the concept of putting back doors in as a crypto implementation.”

A kleptographic back door resides in the crypto implementation itself. It affects the output of a crypto chip, yet it cannot be seen in that output. This is called Kleptography, which is defined as the “study of stealing information securely and subliminally.” [See reference 3]

A kleptographic attack is an attack in which a malicious developer uses asymmetric encryption to implement a cryptographic back door. In this way, cryptography is employed against cryptography. The back door in question is not an additional channel of communication to the world outside the cryptosystem, and it does not require the transmission of additional data. The back door is instead embedded directly within the intended communication. Thus, Kleptography is a subfield of cryptovirology, the application of cryptography in malware. [See references 4, 5]
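
As a deliberately toy sketch of that idea, assume a device whose “random-looking” session tag is really its secret encrypted under a public key hard-wired in by the attacker; only the attacker, holding the matching private key, can recover the secret from ordinary traffic. The textbook-RSA numbers and names below are invented and insecure, and serve only to illustrate the asymmetry.

```python
# Toy illustration of a kleptographic back door using textbook RSA with
# tiny numbers. Insecure by construction; for illustration only.

# Attacker's key pair (hard-coded into the "device" at design time).
ATTACKER_N = 3233          # 61 * 53
ATTACKER_E = 17            # public exponent, baked into the device
ATTACKER_D = 2753          # private exponent, known only to the attacker

def device_emit_session_tag(device_secret: int) -> int:
    """The 'random-looking' tag the device attaches to normal traffic.

    Honest observers see an arbitrary number; only the attacker can
    decrypt it, because only the attacker holds ATTACKER_D.
    """
    assert 0 < device_secret < ATTACKER_N
    return pow(device_secret, ATTACKER_E, ATTACKER_N)

def attacker_recover_secret(session_tag: int) -> int:
    """Recover the device secret from passively observed traffic."""
    return pow(session_tag, ATTACKER_D, ATTACKER_N)

if __name__ == "__main__":
    secret = 1234                                  # e.g. part of a device key
    tag = device_emit_session_tag(secret)          # rides along in normal output
    print("observed tag:", tag)
    print("attacker recovers:", attacker_recover_secret(tag))   # 1234
```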

Opinions and strategies vary
A back door can grant remote access to a device’s hardware. The code being manipulated resides in the firmware, which is what the back door exposes. Putting back doors into chips has become a science, and a highly controversial one.

“A back door is a bad idea,” says Ronald Rivest, an Institute Professor at MIT and cryptography expert. “No one supports it.”

That depends on who is defining the back door, though. Virtually all chips allow updates to firmware, even when the system isn’t on. But is embedding these back doors in chips criminal? Do they cross privacy lines? Are they invasive?

Not necessarily. There are lots of people who at one time or another wished they could have someone remotely access their computer to clean up a virus or fix boot issues, with or without a network connection—or simply reset a device where the access data has been compromised or forgotten. The same goes for mobile devices. This is a major argument for back doors.

At the same time, low-cost manufacturing can open the door to the wrong kind of back doors. There are hundreds of low-cost foundries in the world. Not all of them are transparent about their ownership. This has given rise to an entire industry that will check those chips for a fee, de-layering chips to compare the netlist of the chip that has come back from the foundry against the original netlist.

“There are foundries that are trusted and some that are not,” says Mentor’s Leef. “A lot of startups will gravitate to the lowest possible price point. At the other end of the spectrum, big companies will send people to the factory floor to babysit wafer lots.”

He adds that new techniques are being developed to minimize that kind of risk, such as filling in the white spaces in a design with repeating circuits. That way any override of the fake cells changes the electrical profile and sets off alarms.

Even in cases where back doors serve a legitimate purpose, not all of them are well constructed. “There are some very poor implementations of access ports where the manufacturer just leaves the port open,” says Newell. Other cases include using a factory passcode for every chip, everywhere. “In that case, it is only protected until someone figures out the factory password, then every similar device is now vulnerable.”
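
One common alternative to a single factory passcode is key diversification, in which each chip’s unlock key is derived from a factory master secret and that chip’s unique ID, so recovering one device’s key exposes nothing about the rest. A minimal sketch, with an invented master secret and serial numbers:

```python
# Minimal sketch of per-device key diversification. The master secret and
# device IDs are invented; a real scheme would keep the master in an HSM.
import hmac, hashlib

FACTORY_MASTER_SECRET = b"example-master-secret-held-only-at-the-factory"

def derive_device_unlock_key(device_id: bytes) -> bytes:
    """Derive a per-device unlock key from the master secret and chip ID."""
    return hmac.new(FACTORY_MASTER_SECRET, device_id, hashlib.sha256).digest()

if __name__ == "__main__":
    key_a = derive_device_unlock_key(b"serial-000001")
    key_b = derive_device_unlock_key(b"serial-000002")
    # Different chips get unrelated keys; leaking key_a says nothing about key_b.
    print(key_a.hex()[:16], key_b.hex()[:16], key_a != key_b)
```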

If a back door is designed into a chip, it must be secured well enough that even the most skilled attackers cannot get access to it. That is the golden rule of chip manufacturing. If back doors were only ever used for good, they wouldn’t be a problem; in practice, the difference depends on who is accessing the chip. Adds Newell: “We lock the chips before they leave the factory, cryptographically. And once the customers install their own data onto the chips, it disables the factory key and locks it with the user key.” At that point the chip is no longer accessible by the manufacturer.

Even that has tradeoffs. If the chip is used in a mission-critical application such as military hardware, there is no way to do failure analysis once the port is locked. So a workaround is to have the customer unlock the chip with their key, which will re-enable the factory key. But that technique needs to be very tightly controlled. If that should fall into nefarious hands, it can be dangerous, to say the least.
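
A highly simplified model of the key-ownership lifecycle Newell describes, with hypothetical state and method names: the factory key works until the customer installs a user key, and factory access can later be re-enabled for failure analysis only by authenticating with that user key.

```python
# Highly simplified model of the debug-port key lifecycle described above.
# States, method names, and the string "keys" are invented for the sketch.

class DebugPortLock:
    def __init__(self, factory_key: str):
        self._factory_key = factory_key
        self._user_key = None
        self._factory_key_enabled = True

    def install_user_key(self, user_key: str) -> None:
        """Customer provisions a key; factory access is disabled from here on."""
        self._user_key = user_key
        self._factory_key_enabled = False

    def unlock(self, key: str) -> bool:
        """Open the debug port if the presented key is currently valid."""
        if self._factory_key_enabled and key == self._factory_key:
            return True
        return self._user_key is not None and key == self._user_key

    def reenable_factory_access(self, user_key: str) -> bool:
        """Customer-controlled workaround for failure analysis."""
        if self._user_key is not None and user_key == self._user_key:
            self._factory_key_enabled = True
            return True
        return False

if __name__ == "__main__":
    port = DebugPortLock(factory_key="factory-secret")
    assert port.unlock("factory-secret")          # works before shipment
    port.install_user_key("customer-secret")
    assert not port.unlock("factory-secret")      # manufacturer locked out
    assert port.unlock("customer-secret")
    port.reenable_factory_access("customer-secret")
    assert port.unlock("factory-secret")          # failure analysis possible again
    print("lifecycle behaves as described")
```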

1. Multics, which stands for Multiplexed Information and Computing Service, began as a research project in 1965 and was an important influence on operating system development. The system became a commercial product sold by Honeywell to education, government, and industry.
2. The Orange Book is the nickname of the Defense Department’s Trusted Computer System Evaluation Criteria, published in 1985. It specified criteria for rating the security of computer systems, specifically for use in the government procurement process.
3. M. Yung, Kleptography: The Outsider Inside Your Crypto Devices, and its Trust Implications, DIMACS Workshop on Theft in E-Commerce: Content, Identity, and Service, 2005.
4. A. Young, M. Yung, Cryptovirology FAQ, Version 1.31.
5. A. Young, M. Yung, Malicious Cryptography: Exposing Cryptovirology, John Wiley & Sons, 2004.

Related Stories
Malicious Code In The IoT
Scare Of The Month: The Breach At Juniper


