How Secure Is Your SoC?

The Russian Federal Guard Service’s decision to revert to typewriters—machines that leave unique mechanical fingerprints—is a good indicator of just how serious security has become in a world dominated by electronics and pervasive connectivity.

What’s less apparent is the growing threat at the SoC level—inside complex chips, between the chip and the board, and between chips on the Internet of Things. Security used to be an issue that was dealt with almost exclusively by operating systems and network managers. Not anymore. The number of unknowns in third-party IP, the interaction of subsystems, and the networking of secure devices with non-secure ones have opened a Pandora’s box of threats and potential threats.

On the plus side, security risks are opening huge new opportunities for tools vendors. On the down side, addressing those risks could affect chip design, integration, performance, and even power consumption.

Physical threats
There are two distinct areas of security threats when it comes to SoCs. One is functional—an attack that interrupts the normal functioning of a chip to obtain data or simply shut it down. The second is physical, which can involve literally grinding off the package, putting the chip under an electron microscope, and inserting probes into various parts to determine where there is a pattern that can be interrupted.

The physical part involves three different approaches, according to Mathias Wagner, fellow and chief security technologist at NXP. One is reverse engineering. After grinding the chip down one layer at a time, hackers take photos of each structure on the chip and then reconstruct the design from those images.

“Once you find a point in a design that you want to tap, you go in with a needle and a probe,” he said. “You also can use a laser-pointer-like device to scan chips using a TV monitor and find a weakness. So if you know the instruction the chip is supposed to execute and you execute a different instruction, you can determine whether the PIN was right or whether the hash value was correct. If the answer skips, you may find yourself in a part of the code you’re not supposed to be in.”

A second approach is to tap into the embedded code or other levels of software when the chip boots up. That allows hackers to change the clock from an internal to an external clock, for example, and to slow down the execution to any speed they choose. The third approach is to track power consumption or radiation and correlate that with what the chip is doing, then run control group tests to determine what is going on.
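
The third approach, correlating power consumption with what the chip is doing, can be sketched in a few lines. The toy below is purely illustrative (the S-box, leakage model, and trace counts are all assumptions, not any vendor's method): it simulates a device whose power draw leaks the Hamming weight of a lookup on the secret key, then ranks every key guess by how well its predicted leakage correlates with the "measured" traces.

```python
# Toy correlation power analysis (CPA) sketch -- illustrative only.
# Assumption: the device's power draw leaks the Hamming weight of
# sbox(plaintext ^ key_byte), plus noise.
import random

random.seed(1)
SBOX = list(range(256))
random.shuffle(SBOX)          # stand-in S-box for illustration only

def hw(x):
    """Hamming weight (number of set bits)."""
    return bin(x).count("1")

SECRET = 0x3C
plaintexts = [random.randrange(256) for _ in range(200)]
# Simulated power traces: leakage plus a little Gaussian noise.
traces = [hw(SBOX[p ^ SECRET]) + random.gauss(0, 0.5) for p in plaintexts]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Rank every key-byte guess by how well its predicted leakage
# matches the observed traces; the correct guess correlates best.
best = max(range(256),
           key=lambda k: abs(pearson([hw(SBOX[p ^ k]) for p in plaintexts],
                                     traces)))
print(hex(best))
```

Real attacks work on oscilloscope traces with far more noise and need thousands of measurements, but the principle is the same: the attacker never reads the key, only statistics about it.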

“The real challenge is when the chip is in a hostile environment and there is no chance to talk back to the mother ship and say it’s under attack,” said Wagner. “Multicore makes it much more complicated.”

So do multiple states, modes and power islands. And with more third-party and re-used IP, there are frequently more voltages, more memories, and more complexity overall.

“What was an ad hoc concern is emerging into a number of very important themes,” said Bernard Murphy, chief technology officer at Atrenta. “One involves passive attacks, where you analyze what’s going on by power supply or EM emissions. If you do electromagnetic monitoring you can see fluctuations as things are happening, which is where data is moving in and out. A second area involves cryptography. It may take you to the end of time to crack 256 bits all at once, but you can crack 20 bits at a time.”
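
Murphy's "20 bits at a time" point is just arithmetic. A quick back-of-the-envelope (the chunk size is his example, the rest is illustration): if side-channel leakage lets each chunk of the key be attacked independently, the work collapses from an impossible exhaustive search to a few million guesses.

```python
# Why partial-key attacks matter: work factor for brute-forcing a
# 256-bit key all at once vs. in independent 20-bit chunks.
import math

KEY_BITS = 256
CHUNK_BITS = 20   # Murphy's example chunk size

whole_key_guesses = 2 ** KEY_BITS                 # hopeless
chunks = math.ceil(KEY_BITS / CHUNK_BITS)         # 13 chunks
# Twelve full 20-bit chunks plus one leftover 16-bit chunk:
chunked_guesses = (chunks - 1) * 2 ** CHUNK_BITS + \
                  2 ** (KEY_BITS - (chunks - 1) * CHUNK_BITS)

print(chunks, chunked_guesses)   # ~12.6 million guesses in total
```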

There also is a whole new class of Trojan horses being created to wake up and take control of the circuitry. While software Trojans are a well-known problem in malware, hardware Trojans are far less understood. In effect, they have moved the security threat from the software hacker to the electrical engineer, who knows how to power down a chip or transmit a secure key.

“These are a different dynamic than software Trojans, which you typically download from the Internet,” Murphy said. “Hardware Trojans are pre-loaded into the design. That raises new opportunities for EDA to find these Trojans because a Trojan should never be activated in normal testing. But to have a secure system you really need to focus on the design from architecture to packaged end product and even beyond that. This has big implications for offshoring and optimizing for cost.”
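
A toy software model (purely illustrative; real hardware Trojans are gate-level logic, and the trigger value here is invented) shows why normal testing rarely activates one: the payload sits behind a trigger condition that random functional tests almost never reach.

```python
# Toy model of a counter-based hardware Trojan trigger (hypothetical).
# The adder behaves to spec unless a rare "magic" operand appears
# three times in a row -- a sequence ordinary test vectors will
# essentially never produce.
class TrojanedAdder:
    MAGIC = 0xDEAD   # hypothetical trigger value

    def __init__(self):
        self.armed = 0

    def add(self, a, b):
        # Trigger logic: arm only on consecutive magic operands.
        if a == self.MAGIC:
            self.armed += 1
        else:
            self.armed = 0
        if self.armed >= 3:
            return (a + b) ^ 0xFF    # corrupted result: the "payload"
        return a + b                 # normal, spec-compliant behavior

adder = TrojanedAdder()
print(adder.add(1, 2))   # ordinary inputs always get the correct answer
```

This is exactly why Murphy notes that a Trojan "should never be activated in normal testing"—and why detection has to look at the design itself rather than its observed behavior.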

Functional threats
The number of potential security holes in a complex SoC at this point is largely unknown. The threat is not from the individual pieces, such as purchased IP or subsystems. It’s more about how the SoC is put together, where unexpected weaknesses can show up, and how much security is added to deter hackers. This typically falls into the functional category of security, which adds one more thing for verification teams to worry about.

“We’ve been using verification to prove that a design does what it was intended to do,” said Wally Rhines, chairman and CEO of Mentor Graphics. “The more difficult problem is proving it does not do what it was not intended to do. That is an EDA problem, and it covers the whole food chain from hardware to embedded software to IP.”

It’s clear that customers are increasingly concerned about this risk, though.

“We got introduced to the notion of security by our customers,” said Oz Levia, vice president of marketing at Jasper Design Automation. “The problem they described was that there is an on-chip secure area where it is safe to deposit and read/write keys or encryption keys. If they are compromised it leads to a breach. But there still need to be ways to update data and read and write. They wanted to verify data was only written and read by intended sources. The challenge is to make sure the data remains intact even under a false description. You also don’t want the security to zero out because they reset the clocks or short-circuit something with high voltage.”
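
The property Levia describes can be sketched as a trace-checking assertion. Everything below (the master names, the address range, the trace format) is a hypothetical stand-in for illustration, not Jasper's actual flow: the check flags any access to the key region that does not come from a whitelisted source.

```python
# Sketch of a "keys are only touched by intended sources" check
# over a recorded bus trace. All names and addresses are invented.
ALLOWED_MASTERS = {"crypto_engine", "secure_boot"}   # assumed trusted

def check_key_access(trace):
    """Return every access that violates the security property.

    `trace` is a list of (master, op, addr) tuples; addresses in
    0x1000-0x10FF (an illustrative range) hold the secret keys.
    """
    violations = []
    for master, op, addr in trace:
        in_key_region = 0x1000 <= addr <= 0x10FF
        if in_key_region and master not in ALLOWED_MASTERS:
            violations.append((master, op, addr))
    return violations

trace = [
    ("secure_boot", "write", 0x1004),   # legal key provisioning
    ("dma_engine",  "read",  0x1008),   # untrusted master reading a key
    ("cpu",         "read",  0x2000),   # outside the key region: fine
]
print(check_key_access(trace))   # flags only the dma_engine read
```

In practice this kind of property is stated formally and proven over all possible traces rather than checked against a sampled one, which is precisely the "higher degree of confidence" customers were asking for.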

For EDA, this opens up a number of opportunities, both from the system design and the verification side. “The message we got from our customers is that they need a much higher degree of confidence,” said Levia. “Almost every SoC manufacturer has a need to store, modify and read some collection. And those at the boundary can always find a way to manipulate data. So looking at the trust zone is not enough. You need to expand to the boundary of the chip.”

Nor is the problem confined just to the digital part of the design. No part of the SoC is immune. Manmeet Walla, senior product manager for mixed-signal PHY IP at Synopsys, said HDCP (High-bandwidth Digital Content Protection), the security standard used with HDMI, was cracked in 2010 and the master key was published on the Internet. A new standard had to be developed to replace it.

“The problem involved digital rights management as it moves from the set-top box to the television,” said Walla. “If the keys are hacked, you can get movies for free. The only way to solve that is to build a dongle for it.”

System-level security
But building extra hardware, adding noise to disguise signals, creating circuitry to keep power consumption constant, or adding tamper resistance that shuts off a chip is far from easy. It costs money, and it adds margin to designs that already are suffering in performance and power because of extra margin. The result is that more people are beginning to think about security as early as the initial architecture phase.

It’s hard to say when exactly the tide changed on security. The transition from physical theft—as in hard cash or documents—to the virtual world via software and networking occurred in the mid-1990s with the proliferation of the Internet, and over the past 18 to 24 months it has moved to the chip-level hardware that can unlock data stored in software. The chip world is just starting to wake up to the magnitude of this problem.

Some of this still can be controlled through software, of course. A system on chip, after all, runs its own embedded software.

“We’re just starting to really think about it,” said Chris Rowen, a Cadence fellow. “As complexity scales, no one has all the blocks. So does the system architect understand all the subsystems and is there a good mechanism for looking at that? One thing that’s changed is we’re starting to put real operating systems on chips. With those operating systems you can make a key distinction that says, here are some parts of the software that are trusted and other tasks operate in user mode. It’s a way of bounding the problem. You also can use a central firewall so that only some of the data is allowed to pass. And with virtualization, you can create the illusion that a task has access to a full suite of services even though it doesn’t. It still has to go through an interface.”
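
The trusted/user-mode split Rowen describes can be caricatured in a few lines. The class, service names, and key are invented for illustration: user-mode tasks can request services, but the narrow syscall interface is the only crossing point between the two worlds, and it checks the caller's mode before granting anything sensitive.

```python
# Toy model of a trusted/user-mode boundary on an SoC (names invented).
# All task/kernel interaction funnels through one checked interface,
# which is what makes the problem boundable.
class TinyKernel:
    def __init__(self, device_key):
        self._device_key = device_key   # visible only to trusted callers

    def syscall(self, mode, request):
        """The single crossing point between user tasks and the kernel."""
        if request == "get_time":
            return 12345                # benign service, any mode allowed
        if request == "read_key":
            if mode != "trusted":
                raise PermissionError("user-mode task denied key access")
            return self._device_key
        raise ValueError("unknown request")

k = TinyKernel("device-secret")
print(k.syscall("trusted", "read_key"))   # trusted caller succeeds
try:
    k.syscall("user", "read_key")
except PermissionError as e:
    print(e)                              # untrusted caller is rejected
```

A hardware firewall or virtualization layer plays the same role at a different level: the untrusted side sees an interface, never the resource itself.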

Some of it also can be controlled through the chip’s interconnect fabric, both within an SoC and through the interface to the outside world. “The overriding statement is that security is more important to more people,” said Drew Wingard, CTO of Sonics. “You expect to do more risky things to complete payments. Content owners trust that they can display content, even though—unlike the old videotapes—a digital copy is a perfect copy. Even a Tesla does regular software updates to the dashboard. All of this highly connected stuff is more vulnerable.”

Conclusions
Recognizing there is a security problem and that it now involves hardware is a good starting point. Roughly 100 academic and technical papers on attacks and embedded security are submitted each year. The problem is that it takes years to develop a chip and its software, which means chipmakers are starting from behind.

On top of that, counterfeit components can creep into a disaggregated supply chain with a built-in “back door” or embedded malware, and there are cases where this has happened. But security has always been a game of leapfrog. The real challenge is determining where best to implement it.

“The future direction is security by design rather than security by obscurity,” said NXP’s Wagner. “It’s not enough to hide things anymore and hope no one finds them. People are only starting to realize this and waking up to a wider picture.”


