
Hardware Attack Surface Widening

Cable Haunt follows Spectre, Meltdown and Foreshadow as a potential threat that spreads beyond a single device; AI adds new uncertainty.


An expanding attack surface in hardware, coupled with increasing complexity inside and outside of chips, is making it far more difficult to secure systems against a variety of new and existing types of attacks.

Security experts have been warning about the growing threat for some time, but it is being made worse by the need to gather data from more places and to process it with AI/ML/DL. So even though efforts are beginning to solidify around secure methodologies and technologies, they are not keeping pace with the growth in data and advancing technology that can turn that data into valuable information.

At the chip level, there has been significant improvement in security. At the system level, the trends look very different.

“The attack surface per device is actually shrinking,” said Robert van Spyk, senior offensive hardware security researcher at Nvidia. “It’s getting smaller for Android devices, in particular. But the digital footprint — the entire ecosystem attack surface — is expanding. It’s going to be very hard to address that with something that works at a system level, not just specific devices.”

Put simply, a chip vendor can make a difference at the chip level. But as the number of devices connected together continues to balloon, the entire ecosystem must be in sync.

“The problem is that we have 1 trillion devices that are not symmetrical,” said Chowdary Yanamadala, senior director of security marketing at Arm. “There are lots of rich nodes, constrained nodes and mainstream nodes. With different deployment schemes and structures, there are different attack surfaces and attack vectors that we need to worry about. So how do we protect these devices? There is no one silver bullet. But we can make sure that security is addressed in a methodical manner, through a framework that can handle the appropriate threats and attack vectors that are pertinent to a particular deployment. While that framework might change, depending on the deployment, the need for a framework to address this in a methodical, systematic manner is essential. There are gaps, and we are trying to fill them. From there you can build on top of it and apply the necessary protection mechanisms, depending on the deployment.”

AI and machine learning
Even though many more security holes have been identified and closed up over the past year, that isn’t keeping pace with the number of new threats. This is complicated by the rollout of AI/ML/DL seemingly everywhere, which collectively add some new twists to the security picture. AI can be used to find weaknesses in both software and hardware, as well as to help defend against attacks in real time. But it also can be used to optimize various system behaviors, and that can open the door to new attacks.

“AI will help attackers in a number of ways, where behaviors that used to be unique to humans can now be automated in ways that are a lot harder to distinguish from humans,” said Paul Kocher, an independent cryptographer and security consultant to Rambus. “When you’re trying to do things like CAPTCHAs (completely automated public Turing tests) or analytics on traffic, AI can help adversaries get through a lot of those things. On the defense side, there are lots of companies rolling out AI to help with things like log data. Many of these techniques will be useful, but there are some kinds of signals that AI thinks it sees that aren’t real, and there also are various companies making crazy claims about what their AI can do. So if you’re a customer, it’s really hard to know if you’re getting anything that’s actually going to help you.”

It’s also difficult to determine up front just how effective AI will be in stopping attacks, because in some cases it is an AI algorithm versus another AI algorithm.

“From the defense perspective, AI is attempting to throw out some abnormal traffic, while the adversaries are trying to make the traffic escape whatever detectors are there,” said Kocher. “At the same time, there’s not a database of all the things an attacker can do. So on the defense side, it constrains what the attacker can do. On the side of making things worse — and the car is a good example — what do you do when AI says something is problematic? It gives you two bad choices if you’re a system designer. You can either shut down and crash or go into some failure mode, or you can log this thing and hope that someone comes back and looks at this later. But the number of warning log messages produced by electronic devices today is larger than what humans can look at. So then you put AI on that problem, and that AI has its own issues.”
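
Kocher’s detector-versus-evader dynamic shows up even in a deliberately naive example. The sketch below (all traffic numbers are invented for illustration) flags samples that stray more than a few standard deviations from a clean baseline; an adversary who keeps rates just under that threshold passes unnoticed:

```python
from statistics import mean, stdev

def detect(baseline, new_samples, threshold=3.0):
    """Flag new samples more than `threshold` standard deviations
    away from a clean baseline window of observations."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in new_samples if abs(x - mu) / sigma > threshold]

baseline = [100, 102, 98, 101, 99, 103, 97, 100]  # normal request rates
# 500 is an obvious flood; 105 is an "evasive" rate tuned to stay
# just under the threshold -- the arms race described above.
print(detect(baseline, [104, 500, 105]))
```

Real deployments face exactly this tradeoff: tighten the threshold and the flood of false alerts grows; loosen it and evasive traffic slips through.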

AI systems, meanwhile, introduce a whole other level of uncertainty. AI systems are supposed to adapt in order to improve performance or optimize various traits. But it’s difficult to determine what the acceptable parameters should be for AI behavior, or to predict how AI will be used. For example, AI in a vehicle will be different under extreme conditions than under average conditions. On top of that, the training data is in constant motion. That makes it far more difficult to protect these systems, because AI can be attacked from the training data all the way through to the inferencing process.
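
As a minimal illustration of an inference-time attack, the sketch below perturbs the input of a toy linear classifier in the direction of its gradient — an FGSM-style attack. The model weights, input and step size are invented for illustration, not drawn from any real system:

```python
import numpy as np

# Stand-in for a trained model: logistic regression with fixed weights.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x):
    """P(class = 1) for input x."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = np.array([0.2, -0.4, 0.3])   # benign input
p_clean = predict(x)             # confidently class 1

# FGSM-style perturbation: for a linear model the input gradient of
# the logit is just w, so stepping against sign(w) pushes the score
# toward class 0 with a small, bounded change to each feature.
eps = 0.5
x_adv = x - eps * np.sign(w)
p_adv = predict(x_adv)
print(round(p_clean, 3), round(p_adv, 3))
```

The same idea scales to deep networks, where a perturbation imperceptible to a human can flip a classification — which is why the training-to-inference pipeline is itself part of the attack surface.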

Buffering against attacks
Circuit aging adds yet another set of complications, and that is becoming more problematic as devices are used for extended periods of time in automotive, industrial and medical applications. At this point, it’s not clear how these circuits will age and what kinds of vulnerabilities that will add into systems.

One approach to dealing with this is to add margin into designs, which adds cost on multiple fronts, including silicon area.

“Because of reliability concerns, and not just security, we need to assess the amount of risk that develops over time,” said Arm’s Yanamadala. “It’s not a super-accurate process, but it’s probably accurate enough to margin for that. That’s something that needs to happen in the design phase. You also can use additional measures where you monitor this with a lot of PVT sensors and various other sensing mechanisms that can be placed on hardware to monitor and trigger an appropriate response, which could involve recalibrating or shutting down parts of a device.”
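
A monitoring scheme of the kind Yanamadala describes reduces, in caricature, to mapping on-die sensor readings to a graded response. The sketch below is a hypothetical illustration; the guard-band numbers are invented, not characterized limits:

```python
from enum import Enum

class Action(Enum):
    OK = "ok"
    RECALIBRATE = "recalibrate"
    SHUTDOWN = "shutdown"

# Hypothetical guard bands; real limits come from characterization
# data and would vary by process, package and mission profile.
TEMP_WARN_C, TEMP_MAX_C = 95.0, 110.0
VOLT_MIN, VOLT_NOM = 0.72, 0.80

def check_pvt(temp_c, vdd):
    """Map PVT sensor readings to a graded response: recalibrate on
    drift past the warning band, shut down on hard-limit violations."""
    if temp_c >= TEMP_MAX_C or vdd < VOLT_MIN:
        return Action.SHUTDOWN
    if temp_c >= TEMP_WARN_C or vdd < VOLT_NOM - 0.05:
        return Action.RECALIBRATE
    return Action.OK
```

In practice the same sensors serve both reliability and security, since either aging or tampering shows up as readings outside the expected envelope.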

It’s also not clear whether unwarranted activity is a function of aging circuits, which can leak current, or of an attack. In either case, a chip that is supposed to be off, but which is showing activity in terms of data processing, I/O or minor fluctuations in heat and power, can send out alerts that something is not right.

“Security is a natural build-out of in-die sensors,” said Dennis Ciplickas, vice president of characterization solutions at PDF Solutions. “Whether this is a process of aging, or whether it’s a human fault, this kind of alert can be activated through the same technology. Security under the hood has a lot of commonality with test and monitoring.”

Understanding aging effects is difficult enough in markets such as industrial, aerospace and automotive, and it’s not always clear what is causing an issue because at this point there is too little history and data.

“If you’re designing a sensor, you know its behavior,” said David Fritz, senior autonomous vehicle SoC leader at Mentor, a Siemens Business. “So you can tell when a sensor is dirty or muddy, and you can determine how it behaves when it ages. You don’t need training data for that. But you do need to deal with the inferencing. If a circuit is aging over time, it may still need to be 80% accurate. As that degrades, you may only be 65% confident that it’s giving you the right result. Understanding that can be the difference between whether you’re cautious or aggressive when you make that decision.”
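
That degradation-aware decision can be sketched as a toy policy. The confidence and decay numbers below are illustrative stand-ins, chosen only to match the 80%-to-65% range in the example above:

```python
def planning_mode(base_confidence=0.80, decay_per_year=0.03,
                  age_years=0.0, aggressive_floor=0.75):
    """Choose a policy from estimated inference confidence.

    Illustrative numbers only: a sensor that starts at 80% accuracy
    and loses ~3 points a year drops into the 65% range after five
    years, at which point the system should behave more cautiously.
    """
    confidence = max(0.0, base_confidence - decay_per_year * age_years)
    mode = "aggressive" if confidence >= aggressive_floor else "cautious"
    return mode, confidence

print(planning_mode(age_years=0.0))   # fresh sensor
print(planning_mode(age_years=5.0))   # aged sensor
```

The security implication is that an attacker who can corrupt the confidence estimate — not just the sensor itself — can force the system into the wrong mode.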

From a security standpoint, though, this has broad implications for the design strategy.

“A couple years ago there was an attack on a car’s anti-lock brake sensors,” said Nvidia’s van Spyk. “That wasn’t about chip aging, but you definitely want margin there. You also need to know where your weak spots are, and maybe add more margin there or have an extra mechanism to make it not so weak anymore. In the anti-lock brake case, someone was able to mess with the parameters. For machine learning, there’s adversarial AI where someone can give bad input to throw things out of calibration and potentially produce the wrong response. Those are soft attacks because they’re based on software rather than direct exploitation of the hardware. But you want to know what those are, or at least be aware that they can be present in your system.”

The problem is that not all chipmakers really want to delve into these issues because it’s not their core competency.

“People are almost afraid to know what their actual vulnerability level is,” said Colin O’Flynn, CEO at NewAE Technology. “If someone monitors 1,000 decryptions, they can recover a key really easily. So you can make different decisions about your architecture, such as how updates are done, how keys are stored, or how products are sold, and there are tools to help with that. But part of the problem is most people don’t want to know how bad some of this stuff is, especially when it’s baked into ROM. With the recent iPhone checkm8 exploit, which was a boot ROM exploit over USB — this is a case where it was baked into ROM, so that’s it. The exploit will live forever now in those devices and cannot be fixed, which is the risk of putting more complexity into these low-level initial layers.”
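
The kind of key recovery O’Flynn describes can be illustrated with a simplified correlation attack on simulated traces. This toy leaks the Hamming weight of (plaintext XOR key) rather than a real S-box output, and uses one leakage point per trace instead of a full waveform, but it recovers a key byte from roughly 1,000 noisy observations:

```python
import numpy as np

rng = np.random.default_rng(1)
SECRET_KEY = 0x5A      # the key the simulated "device" uses
N_TRACES = 1000        # matches the ~1,000 observations cited above

# Precomputed Hamming-weight table for one byte.
HW = np.array([bin(v).count("1") for v in range(256)])

# Simulated side channel: each trace leaks HW(plaintext XOR key)
# plus Gaussian noise, standing in for measured power or EM.
plaintexts = rng.integers(0, 256, N_TRACES)
traces = HW[plaintexts ^ SECRET_KEY] + rng.normal(0.0, 1.0, N_TRACES)

def recover_key(plaintexts, traces):
    """Correlation attack: the key guess whose leakage model best
    correlates with the measured traces is, with high probability,
    the real key."""
    best_guess, best_corr = None, -2.0
    for guess in range(256):
        model = HW[plaintexts ^ guess]
        corr = np.corrcoef(model, traces)[0, 1]
        if corr > best_corr:
            best_guess, best_corr = guess, corr
    return best_guess

print(hex(recover_key(plaintexts, traces)))
```

Real correlation power analysis targets an S-box output and sweeps thousands of time samples, but the principle is the same — which is why side-channel resistance has to be designed in, not patched in, especially for code frozen in boot ROM.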


Fig. 1: Power consumption and electronic emanations that form the basis of a side-channel attack. Source: Rambus

Adding structure and methodology
There is no single fix for security issues in technology, but there is at least a growing awareness that security is an issue in many devices, and that not all of it is expensive or time-consuming. Case in point: SB-327. The California legislation, which went into effect on Jan. 1 of this year, mandates that any device capable of connecting to the Internet — directly or indirectly — must have minimum security features and unique passwords for each device. Those devices also have to be authenticated before they are used for the first time.
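
In code, the unique-password requirement amounts to little more than generating an independent random credential for each unit at provisioning time. A minimal sketch, with a hypothetical record layout:

```python
import secrets

def provision_device(serial):
    """Issue a unique random credential at manufacturing time.

    `serial` and the record layout are hypothetical; the point is
    that the password is random per device -- not derived from the
    serial and not shared across the product line -- so leaking one
    device's credential reveals nothing about the others.
    """
    return {"serial": serial, "password": secrets.token_urlsafe(16)}

batch = [provision_device(f"SN-{i:06d}") for i in range(100)]
```

The manufacturing cost is one extra step per unit, which is why this kind of mandate is comparatively uncontroversial.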

“It would have been relevant for Cable Haunt because most users don’t change their passwords,” said Rambus’ Kocher. “This creates some challenges for manufacturing. It adds an additional step, although not a particularly difficult one. It’s also a regulatory step that’s not controversial and which is less work than other security methods. Another thing that’s relevant is Rust as a programming language. A lot of engineering teams that were using C are switching over to Rust now for their engineering work. Teams using Rust are more productive than ones using C because they have fewer debug headaches and they can attract more people to work on their projects. Rust is where C programming seems to be going.”

Other frameworks from companies such as Arm, Microsoft and Intel approach this from the design side. And companies such as Synopsys have software linting tools that can identify potentially risky code at the push of a button. None of these is sufficient by itself, but they do provide extra hurdles that attackers must deal with.

“One of the issues I see with a lot of these sort of new frameworks is they discount whole classes of attack because those are complicated,” said O’Flynn. “But trillions of devices are getting added and people don’t realize how easily some of them can be attacked. We see frameworks designed for really simple devices that are added into a complicated device on some critical network, and then someone pivots from that into the critical network. This is where a lot of effort is needed. We need to create valid attack models.”

But these moves collectively show that the tech industry is beginning to at least grapple with these kinds of issues, even if they are not sufficient.

“The frameworks absolutely have deficiencies, which is why we need a robust framework where there is a methodical way of dealing with threats such as side-channel attacks,” said Arm’s Yanamadala. “You need to consider the attacker’s viewpoint as well as the defense. What is the cost of attacking particular devices? If it is low enough, then an attack class becomes practical. For example, if the cost of attacking hardware is only a few hundred dollars, then it is an issue. It’s important to go back to the core framework to fix this. There need to be layers or levels, but it has to be looked at from the attacker’s point of view. There is no other way to deal with this.”

This is particularly important as data is cleaned up and consolidated, making it more valuable to attackers.

“Security has been getting better,” said van Spyk. “A lot of companies care about security, and they care about it in a serious way. Whether they’re approaching it methodically or not, they’re taking great strides to get to where they should be. Not everyone is there yet, but particularly on the IoT side, there are vendors starting to offer end-to-end solutions so that developers don’t mess up their crypto with their updates. That’s making a big difference. And some platforms just enable higher security out of the box.”

Conclusion
The general view of security from the outside is that it is one step forward, two steps back. But there are enough pieces coming into place that not everything is as vulnerable as it was even a year ago, and if progress continues then it may be harder to hack into devices in the future. Easy fixes, regulations, and more attention to detail will help.

Whether this ultimately will deter attacks or minimize the damage from those attacks remains to be seen. There also are more devices and more complexity within those devices, and various flavors of AI are still so new that no one quite understands how that will affect the overall security paradigm. This is a complex problem, and no single or simple answer exists for how to solve it.



