HW Security Better, But Attack Surface Is Growing

Experts at the Table: How cost, tradeoffs, and safety are impacting cyberattacks.


Semiconductor Engineering sat down to discuss security on chips with Vic Kulkarni, vice president and chief strategist at Ansys; Jason Oberg, CTO and co-founder of Tortuga Logic; Pamela Norton, CEO and founder of Borsetta; Ron Perez, fellow and technical lead for security architecture at Intel; and Tim Whitfield, vice president of strategy at Arm. What follows are excerpts of that conversation, which was conducted live at the Virtual Hardware Security Summit.


(L-R): Ron Perez, fellow and technical lead for security architecture at Intel; Jason Oberg, CTO and co-founder of Tortuga Logic; Tim Whitfield, vice president of strategy at Arm; Pamela Norton, CEO and founder of Borsetta; Vic Kulkarni, vice president and chief strategist at ANSYS.

SE: As chips increasingly are used in safety-critical, mission-critical applications, and in general more complicated and more integrated systems, those devices are becoming more difficult to secure. The cost of not securing these devices is going up, but so is the cost of implementing security. Who pays for all of this, both from a monetary and a hardware resources perspective?

Perez: We all do — us as technologists, as vendors, as providers to our customers, as well as our customers and end users. Now, we all pay for it in different ways. Sometimes it’s the cost of not addressing security concerns, which may come in the form of brand impact or sales loss. But there’s also the cost of investing in security, and you really never know how much is enough.

Norton: We’re really trying to work some initiatives with the United States government involving standards and a framework. Now that we’re pushing more and more compute power to the edge, it’s bringing up a lot of issues around privacy and security. We can help drive that as a group, as an entity that still fosters development and acceleration of great chip technology. Our hope is to create a trust factor that has a scoring mechanism associated with that chip.

Whitfield: A recent report said that by 2021, there will be something like $6 trillion worth of cybercrime damage. That’s the cost of not taking it seriously. And that’s before you get on to things like damage to reputations and business. It’s a shared responsibility. In terms of design and standards, it’s a cost we all have to bear.

Kulkarni: Early adopters, who of course value security the most, probably will pay a premium for security. And we’re all working toward that. But there is a duality across a large number of IoT customers, which includes 1 billion to 2 billion IoT nodes. A recent McKinsey survey said 40% of the respondents that were doing anything in the IoT would be unwilling to pay for security. So it’s a very sharp division of early adopters versus mass usage. One way to deal with that is to create a standard for CWE (common weakness enumeration), similar to ISO 26262 and so on. With those higher levels of gradations, people will be able to pay within the current pricing structure. That will be possible.

Oberg: Oftentimes, security is more of an afterthought. That results in a very high cost when you actually add it on. What typically happens is, you’re going through your whole development lifecycle, you get to near the end, and then you actually want to implement some security measures, whether it’s adding a feature or maybe doing external penetration testing, something like that. At that point it can be extremely costly. But if you layer this into your process, you can dramatically reduce the costs. There’s obviously an infrastructure cost of putting together that kind of process and plan. But if it’s part of the DNA, you can enable yourself to ship a secure product, a mitigated product, without this enormous cost with respect to security. Obviously, it doesn’t come for free. It’s just a matter of how affordable and how easy you make it to get.

Perez: Vic raised an interesting point on the McKinsey study. In my experience, nobody wants to pay for security. None of our customers, none of our companies want to pay for security. But some of us recognize the need for that assurance. You can call it risk management, however you want to term it. Maybe it’s just a conservative outlook on your particular business or your area. But whatever it is, we’re willing to pay for that extra assurance, which is very much like insurance.

SE: Security has always been a trade-off between risk versus cost. Has that formula changed in markets such as automotive and mil/aero?

Oberg: That’s definitely accurate. There are a few drivers for implementing security. There’s risk reduction, which often is a result of someone that’s been burned. They understand the impact of actually having a vulnerability come out. Or they’re forced to do it through standards, which is more binary. Either they do it and can sell their product in that market, or they don’t and can’t. But the trend of this risk/cost tradeoff is growing pretty dramatically within the hardware domain. Obviously, if you look at certain markets, their tolerance for risk is dramatically lower than others. For example, for a little IoT widget, the risk tolerance is going to be high. If someone breaks in, it’s not going to cause a big business impact. If you look at a defense system, the risk tolerance is extremely low. It’s as close as you can get to zero tolerance. That is going to play a huge role in how much they’re willing to spend. A lot of organizations are looking at it this way: ‘What’s my ROI? How much money should I spend to decrease my risk?’ That is definitely the right way of thinking about these problems, because it allows you to help measure risk based on how much you’ve invested.
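The risk/ROI framing Oberg describes can be sketched with a simple annualized-loss-expectancy calculation. This is a minimal illustration, not anything from the discussion; all function names, dollar figures, and incident rates below are hypothetical, chosen only to show why a defense system justifies far more security spend than a consumer IoT widget.

```python
# A minimal sketch of the risk/cost tradeoff, using the common
# annualized-loss-expectancy (ALE) model. All figures are illustrative.

def annualized_loss(incident_cost: float, annual_rate: float) -> float:
    """Expected yearly loss = cost of one incident x incidents per year."""
    return incident_cost * annual_rate

def security_roi(ale_before: float, ale_after: float, control_cost: float) -> float:
    """Return on a security investment: risk reduced, minus what it cost."""
    return (ale_before - ale_after) - control_cost

# A consumer IoT widget: low incident cost, so only modest spend pays off.
iot_roi = security_roi(
    annualized_loss(incident_cost=50_000, annual_rate=0.2),
    annualized_loss(incident_cost=10_000, annual_rate=0.2),
    control_cost=5_000,
)

# A defense system: near-zero risk tolerance justifies far larger spend.
defense_roi = security_roi(
    annualized_loss(incident_cost=50_000_000, annual_rate=0.05),
    annualized_loss(incident_cost=1_000_000, annual_rate=0.05),
    control_cost=800_000,
)

print(iot_roi)      # 3000.0
print(defense_roi)  # 1650000.0
```

The same arithmetic that caps the IoT vendor's sensible spend at a few thousand dollars supports a six-figure investment for the defense system, which is the point of measuring risk against investment.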

Perez: The overall formula, the algorithm, hasn’t changed. It’s still that risk tradeoff piece. The amount of risk, given how much more we’re doing digitally/logically versus analog-type technology, has certainly changed, which makes that part of the equation much bigger. And safety now factors into the risk side of that equation much more than it has in the past for the examples you cited — mil/aero, automotive — but increasingly, everything.

Whitfield: I agree. The equation hasn’t changed. But there are more devices that require security, and security can’t be optional. And as we see more connected devices, it’s about finding the appropriate level of security for specific applications. So clearly, as you go into mil/aero, there’s the functional safety side and that zero tolerance Jason talked about. But we have to find the appropriate levels of security across every device. It cannot be optional.

Norton: What we’ve seen specifically within the DoD is that they’re precluded from leveraging some of the latest tech because the smallest node that is in a trusted foundry is 12nm. So they’re funding initiatives on how they can produce this system on a chip. Even if it was produced in an untrusted foundry, it needs to be trusted. Some of the technology we’ve been working on would help give them the ability to produce a smaller form factor in an untrusted facility. They’re trying to find all sorts of ways to reduce that risk, but right now the cost is extremely high because they have a very small quantity of SoC chips that they leverage.

Kulkarni: You’re absolutely right in terms of the nodes in aerospace and defense. However, I’m very encouraged by the billions of dollars of recent funding that is going into the DARPA AISS program. And moving toward 7nm to 5nm is now being talked about with most of the primes we speak to. That’s very encouraging, because the government wants to bring the latest technology stateside. Also, the zero-trust environment is where Trojans and other fault injection can be applied. People are getting very concerned about that and looking at new techniques. That’s an important opportunity for all of us in the security world. I also find that sometimes security and complexity are incompatible. Because new vulnerabilities keep being discovered, customers have a hard time figuring out how secure they really are, despite their efforts. Instead of asking standard questions like, ‘How do I optimize the CPU?’ we need to start asking, ‘How do I create security across multiple domains?’ We need to look at hardware, software, OS, application, cloud, and so on. And that creates a hierarchy of security.

SE: In the past, it was only really the federal government that was seriously concerned with Trojans. Is that changing? And are people actually finding them?

Oberg: The notion of a Trojan implies that there’s malicious intent. Someone did it on purpose. You could debate that, but there are problems that were not spec’d out, either because of a mistake or maybe just bad documentation. That is extremely common. Whether it was intentional or not, the end result is still an exploit. Outside government/DoD applications, the notion of a hardware design flaw, whether it’s accidental or intentional, is definitely very prevalent. Whether someone did it on purpose or not, there should be a process in place to prevent that.

Perez: There’s a fine line between a Trojan and a defect. There certainly are documented cases of malicious intent. It’s usually the disgruntled-employee-type scenario, where it’s tied back to something close to proof that somebody put something in maliciously. What’s much harder, of course, is any kind of state-sponsored malicious intent.

SE: As we start getting into more autonomy in cars, is the model for security in safety-critical applications realistic? What happens if someone says all of a certain brand will turn left at this hour? Have we got this under control?

Oberg: No, and the intersection of functional safety and security is a scary one. There’s obviously a lot of effort with respect to making sure systems are fault tolerant. They have redundancy and all the things you need to be ISO 26262 compliant. But when it comes to security, a lot of that can get broken pretty easily. You can find a flaw and even though it’s fault tolerant and safety-critical, an attacker will understand those protections and leverage that. Unless there’s a security component, what’s going to happen is someone will look up how the system was validated, look at the safety-critical compliance that it’s meeting, find an exploit, and then actually break in and get around a lot of those protections. There’s a lot of work to be done, and obviously the impact is high. If you go back to the risk/cost tradeoff, the risk is high if you find a vulnerability in an ADAS system and cause a car to not detect a person when it was supposed to.

Perez: You introduce ethics into the equation, as well. Both personal ethics — does a car company pay ransom, for example — but also, from a technical standpoint. Should the automated driving system actually turn left if it can sense that somebody’s going to get hurt, or will it determine that’s not the best thing to do at that time? Should it actually obey the commands that it’s getting?

Norton: The bigger concern is that the attack vector is expanding. We’ve got all this massive cloud processing happening at the server level, and now we’re pushing all of this down to these devices. For Level 4 or 5 autonomy, as our chip does for drones, for example, there are a ton of issues involving making a decision without a human in the loop. There are a lot of ethical issues, and then there are issues around that device itself being hijacked, having a digital twin take over and intercept it, and having it do damage that it wasn’t intended to do. That’s the concern right now. We’ve got such progress happening from a compute efficiency and performance and power standpoint, and everything on these small nodes, and yet we are facing extreme security mayhem from autonomous devices. We’re dealing with a ton of societal issues. And I believe security begins at conception for a chip. I believe that in the future we will have new economies on a chip. And for us to have machine-to-machine transactions in a trusted environment, we have to look at the inception: the IP has been secured, and we have a ledger where we can look at the entire history of that chip, including what neural networks have been downloaded and who was provisioned to download them. That goes into a whole other conversation. But I believe this conversation is so important right now to define that and ensure we take back control, and that we have trusted devices that are processing AI. AI is going to be infused in every chip in the future.

Kulkarni: All of us should think out of the box a little bit as an industry. Similar things happened with DVD attacks and everything else for many years with content piracy. Can we think of a software/hardware platform together, with major industry collaboration, where the firmware can be updated along with IP itself in a car or other edge compute devices? And it can be unique to each car — literally. So as opposed to the constituency that is working against all of us, the black hat guys and the military attackers and so on, can we create such an environment where there will be unique encryption for the target device, which may be a car? That is possible through AI and other ideas. And can we put together a firmware package and really change it on the fly?
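Kulkarni's idea of firmware encryption that is "unique to each car" can be sketched with per-device key derivation: a single OEM master secret is combined with each device's identity, so a firmware image encrypted for one unit is useless on any other. This is only an illustrative sketch; the function names and identifiers are hypothetical, and a real deployment would pair the derived key with an authenticated cipher (e.g. AES-GCM) and a hardware root of trust rather than stop at key derivation.

```python
# Sketch of per-device firmware keying: each target device gets a unique
# key derived from an OEM master secret and the device's identity.
# Illustrative only; pairs with an authenticated cipher in practice.
import hashlib
import hmac

def derive_device_key(master_key: bytes, device_id: str) -> bytes:
    """Per-device key = HMAC-SHA256(master_key, device_id), in the style
    of an HKDF extract step. Same master, different ID -> different key."""
    return hmac.new(master_key, device_id.encode(), hashlib.sha256).digest()

master = b"oem-root-secret"  # held by the OEM, never shipped to devices

# Two cars, identified here (hypothetically) by VIN:
key_a = derive_device_key(master, "VIN-1HGBH41JXMN109186")
key_b = derive_device_key(master, "VIN-2FTRX18W1XCA01234")

assert key_a != key_b   # every car ends up with a unique firmware key
assert len(key_a) == 32  # 256-bit key material
```

Because the master secret never leaves the OEM, an attacker who extracts one car's key can decrypt only that car's firmware image, which is exactly the containment Kulkarni is describing.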

Whitfield: Yeah, there are clearly some big challenges ahead. The comment Pamela made about the attack surface and the exponential increase in software/hardware is absolutely true. But as for security being realistic for safety-critical applications, it has to be right. That’s going to take new technology, new approaches, and ‘security first’ to solve these problems.

—Susan Rambo contributed to this report.
