Hardware Security Set To Grow Quickly

As vulnerabilities become more widespread and better known, industry standards will have to keep up.

Experts At The Table: The hardware security ecosystem is young and relatively small, but it could see a major boom in the coming years. As companies begin to acknowledge how vulnerable their hardware is, industry standards are being set, but they must leave room for engineers to experiment. As part of an effort to determine the best way forward, Semiconductor Engineering sat down with a panel of experts at the Design Automation Conference in San Francisco, which included Andreas Kuehlmann, CEO of Cycuity; Serge Leef, head of secure microelectronics at Microsoft; Lee Harrison, director of Tessent automotive IC solutions at Siemens EDA; Pavani Jella, vice president of hardware security EDA solutions at Silicon Assurance (on behalf of IEEE P3164); Warren Savage, researcher at the University of Maryland’s Applied Research Lab for Intelligence and Security and currently the Principal Investigator of the Independent Verification and Validation (IV&V) team for the Defense Advanced Research Projects Agency (DARPA) AISS program; Maarten Bron, managing director at Riscure/Keysight; Marc Witteman, CEO at Riscure/Keysight; Mike Borza, scientist at Synopsys; Farimah Farahmandi, Wally Rhines Endowed Professor in Hardware Security and assistant professor at the University of Florida, and co-founder of Caspia Technologies; and Mark Tehranipoor, chair of the ECE department at the University of Florida and founder of Caspia Technologies. What follows are excerpts of that discussion. To view part one of this discussion, click here.


L-R: University of Florida’s Tehranipoor, Silicon Assurance’s Jella, Cycuity’s Kuehlmann, Microsoft’s Leef, Synopsys’ Borza, Riscure/Keysight’s Witteman, DARPA’s Savage, Riscure/Keysight’s Bron, University of Florida’s Farahmandi, and Siemens’ Harrison.

SE: What is the state of the hardware security ecosystem at this point?

Harrison: Safety has become quite mature. There are lots of tools and technologies to automate much of the safety side of things. But security is at the point where safety was in those early days: you have experts, but it’s a very, very hands-on thing, and those experts only work on the really important things. So it’s not a commodity yet. It’s still very niche.

Borza: There are extra dimensions that go along with security, which go beyond what’s covered by safety.

Tehranipoor: The ecosystem is young, but it’s really dictated by how much money companies, application makers, and governments are willing to pay. Before DFT [design for test], adding it meant 5% overhead, so companies said, ‘No way.’ But then it came in, and now everybody uses DFT. Manufacturing defects are quantifiable. But when it comes to security, you’re dealing with the intelligence of a human being, which is not quantifiable and extremely difficult to model. Everything we’re talking about is difficult to model. There was a time when there was no market. It was zero dollars, and then all these incidents happened. There were responses from different companies and from DARPA in 2008, 2014, 2016, and 2018, where they all came in and invested money. 2018 was really the year security became important, because we got the Bloomberg story. I used to get tens of phone calls asking where the solution was. After going through all of these customers, it came down to one thing: what percentage of the total verification cost are you willing to give away for security? The repeated answer I got from all of them was not 30%, and it wasn’t 40%. It was between 5% and 10%, and for some, 15%. We took the average of all of these responses and came down to 10% as the number the majority of these companies are willing to give away. That really dictates what the ecosystem looks like. Of course, it’s not final. Why? Because the next threat can take that number from 15% to 20%. And the next threat could potentially get it to 30%.

Kuehlmann: It took 20 years.

SE: From what we hear, it is going to happen, and it’s not going to take another 20 years.

Kuehlmann: That’s a fundamental difference. I get the yield thing and all of that, but all these distribution curves for yield, performance, and power are part of some continuous distribution with little excursions. Cyberattacks are a step function, and we fundamentally cannot model step functions in an economic sense. Security is very difficult to quantify economically. You don’t want a black swan event to be the thing that forces everybody into some kind of top-down decision to act. The strongest forcing factor you really have is regulations and standards. If you look at automotive, the market is essentially saying every chip needs to be certified to ISO, and suddenly everybody has to do security. Before that, nobody cared. Nobody cares unless there is a real market leader out there. So the state of the ecosystem is evolving sector by sector.
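
To make the step-function point concrete, here is one way to write it down. The formalism is ours, offered only as an illustration of the argument, not something the panel proposed:

$$E[L_{\text{yield}}] \;=\; \int \ell(x)\,p(x)\,dx \qquad \text{vs.} \qquad L_{\text{attack}}(s) \;=\; C \cdot \mathbf{1}[\,s < s^{\ast}\,]$$

Yield, performance, and power losses vary smoothly with the design point $x$, so spend can be traded against expected loss at the margin. Attack loss, by contrast, jumps from $0$ to the breach cost $C$ the moment security investment $s$ falls below an attacker capability $s^{\ast}$ that nobody can observe in advance, which is why marginal-cost reasoning never settles a security budget.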

Savage: Mark, is your 5% to 15% number COGS [cost of goods sold] or cost of development?

Tehranipoor: It’s the money a company is willing to take away from its current verification budget and set aside for security. It’s not development cost. It could be the cost of licensing. Let me give you real numbers. The verification market as of today is $1.2 billion. If you add emulation to it, that’s close to $2.2 billion. Based on our assessment, the market size today is about $200 million to $300 million for security verification. Companies are not willing to spend more. Standards, requirements, another big incident: they’re all going to increase this over time. But I tell everyone, ‘Hey, remember that this number was zero at one point. Back in mid-2015, we really were just around $10 million to $20 million.’
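
As a back-of-the-envelope check of those figures (the dollar amounts are Tehranipoor’s; the $250 million midpoint and the arithmetic are ours), the security verification market today sits at roughly the 10% willingness threshold he describes, measured against the combined verification and emulation market:

    # Figures quoted above; the $250M midpoint of the $200M-$300M range is ours.
    verification = 1.2e9      # verification market, USD
    with_emulation = 2.2e9    # verification plus emulation, USD
    security = 250e6          # security verification, midpoint estimate

    print(f"Share of verification alone: {security / verification:.0%}")          # ~21%
    print(f"Share of verification + emulation: {security / with_emulation:.0%}")  # ~11%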

Farahmandi: That kind of awareness has been there for security, although we are still missing the one catastrophic event that would urge everybody to invest more in security. Because the possibility is there, and everybody can anticipate that these threats and attacks can happen, we’re seeing more and more investment from big companies into security. Probably that has come from customers asking for security, especially when you’re talking about autonomous vehicles, where there are a lot of standards the industry wanted. Cost is an issue, but a lot of EDA solutions are being developed. Caspia Technologies, IQ3, Riscure: we really are seeing more and more of these companies investing in EDA solutions. From academia, we are seeing the development of vulnerability databases, because we think that resource is going to be helpful when you want to embed security into a design. On the academic and research side, we’re seeing a lot of help from AI to perform security verification, and we believe AI-based EDA solutions will bring the cost of security down.

Witteman: That’s true. Security is very much driven by standards and regulation. We see that all the time. Products made for unregulated markets are typically very weak from a security perspective, while products that need security certification come in at a higher level, and the certification process itself raises it further. But why is there regulation? There are two reasons. One is cost, and the other is the fear of cost. A typical example of the first is the pay-TV world. We’ve made a lot of revenue there because the pay-TV companies were losing revenue to piracy, $20 billion every year, so they wanted to invest in making their products more secure. The other market, where it’s more the fear of cost, is payments. Banks are very much aware of the cost of brand damage. They want to protect their brand, so they invest in security certification just to avoid security incidents. Nowadays we see many more examples.

Leef: The elephant in the room is this: as somebody who tried to sell security products from 2014 to 2017, my observation is that selling security is like selling vitamins. You’re selling a remedy for some unquantified, abstract threat. It is extremely different from selling, say, cancer drugs, where there’s an imperative and a timeline. How do you create demand in this space? It seems like you need to find the analog of patients with genetic predispositions for cancer, who are already aware and potentially scared. They are the ones who can form early markets.

Borza: We actually have some of that in the security space. There have been people in segments of the market that have been proactive in responding to these things. There are others who are total laggards and still have their heads in the sand about their responsibility to do anything. It’s only in retrospect, after a big attack, that people start to understand what the cost of that attack was. That’s part of the reason they can’t quantify in their minds the risk they face by not addressing this at design time, which is several years before that attack is going to happen. It’s only after they go through a succession of these kinds of things that they realize this is like buying insurance. You’re preventing something that will happen in the future. This is one of the other problems. If you’re really good at your job of doing security and preventing successful attacks, then what’s the value of the attack that didn’t happen? Most people don’t know, unless there’s an analog at a parallel company that got successfully hacked while you didn’t, because you had already anticipated and been able to prevent that attack.

Leef: I would also echo what Mark said about metrics. When I was trying to start up the ACE program at DARPA, I was asked what the success metrics were going to be. I said, ‘Well, the situation is pretty dire right now, and it’ll be less bad when we’re done.’ DARPA said, ‘That’s not really how we do this. The way DARPA does this is: here’s the state of the art, here’s the desired state, here are the technical challenges, and here are strategies for addressing them. So what are the numbers for the current state and the desired state?’ We spent a lot of hours trying to come up with a quantification of security.

Tehranipoor: I can add one point to what Serge said earlier, because I was aware of your work from 2014 to 2017. I don’t know what’s in between vitamins and painkillers, but we have moved on, thanks to some of the important attacks that have happened. We’re not a painkiller yet. The pain is going to come, based on what Andreas said. Something has to happen so we get more requirements. But let’s go back to industry. When was the last time we actually got industry and commercial groups to do something unless somebody asked for it? If you look at DFT, DFR, DFx, somebody had to ask for it.

Leef: Don’t overestimate the regulatory point. When I was starting to put this in place at DARPA, I asked, ‘Isn’t there regulation right now requiring that chips going into defense applications be secure?’ Somebody in the office dug through all kinds of regulations and found there is. Black and white, it says defense contractors are supposed to deliver something similar to deal to 2662. I haven’t found a single defense contractor who pays any attention. There was a regulation, but nobody enforced it, because if you tell Raytheon, ‘Hey, you have to be in compliance with this,’ they say, ‘Oh, yeah, that airplane we were talking about? It’s going to be twice the cost.’

Kuehlmann: The issue is that the requirements, at a very high level, essentially say you have to secure information systems, including software, hardware, and so on. They’re not broken down yet.

Tehranipoor: This is an important topic. We saw a standard come out. It was put together by seven or eight people from industry. It landed on my desk, and I asked a couple of our students to look into it. In three months we broke it. We informed every company behind that standard and said, ‘Hey, we are going to be writing a paper about this.’ The point is that we can develop a standard for manufacturing defects and yield, but be careful with requirements, be careful with standards, be careful with those sorts of things in security. If there is just a requirement or a policy, nobody cares about it. This was the standard P1735. We published a CCS paper, it generated a tremendous amount of noise, and what happened? The team was back again, and this time they brought one of our team members in so they could actually address the problem. It’s a point of pride, because we prevented something bad that could have happened down the road, and we worked with the companies to address it. But the point I’m making is, be careful what you wish for, because security is different from defects and the other things we deal with. If you say we have a standard for manufacturing defects, nobody’s antenna goes up. The moment you say we have a standard for security, the researchers are on it, because we want to break that standard.

SE: In regard to IEEE P3164, is it going to be the same thing?

Jella: The standard doesn’t prescribe security. It gives a framework of things you need to think about, and it suggests two methodologies for how to do the modeling, including how you deal with complex IPs.

SE: Can we have suggestions, or do we actually need standards?

Tehranipoor: Attackers cannot be forced to adapt to what we say. We have to be the ones to adapt to what attackers do.

Borza: It’s important that we not try to dictate exactly what the solutions are. The nature of this is dynamic. There’s also the element of weighing the value of what’s being protected against the cost of protecting it.

Tehranipoor: We do not appreciate the fact that we live in a domain where the motivation to break things is extremely high. Roughly 75% of the researchers in the cybersecurity domain work on attacks, and 25% work on defense.

Jella: The framework is not prescriptive, but when we actually try to implement it, it feels very abstract. You still need to come up with mathematical algorithms to quantify security issues, and that is the trickiest part. Every industry will struggle with this for a while. If you want to model something like power supply metrics or a signal SRE security metric, there are numerous quantifiable points. It’s really hard to do that in the space of threat modeling. It’s abstract because it’s very dynamic. It’s a human touching the attack surface, and that makes it very complex, very hard to predict.

Borza: The other thing about P3164 is that, in its objectives, it’s really about creating a clear way for an IP vendor or creator to communicate the security properties of their product, whatever those properties are. It’s not a value judgment about whether they’re good or bad, just a statement of what they are, for someone who would integrate that IP into a product. So when an Intel or an AMD or a Qualcomm buys a particular piece of IP, what do they get? One of the things that has been missing is a clear way to talk about that. There is some language and some discussion of threat models in the spec, and there are also quite a few definitions of different kinds of data objects. There’s a definition for assets, and a definition for threats and how they are documented, in particular which of those threats are mitigated and which are not. It may be perfectly appropriate for someone to supply a piece of IP that doesn’t mitigate an identified threat, because the threat may not arise in the use case the supplier has in mind. Or maybe the appropriate place to mitigate the threat is during system integration rather than in the IP, either because it’s too expensive or because it’s not practical to implement in the IP, but you can wrap the IP at integration time with something that does mitigate the threat.
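
As a rough sketch of the kinds of data objects Borza describes (the structure and field names below are hypothetical, not taken from the P3164 draft), an IP vendor’s security documentation might pair each asset with its threats and their mitigation status:

    # Minimal sketch in Python; field names are invented for illustration
    # and are not from the P3164 draft.
    from dataclasses import dataclass, field

    @dataclass
    class Threat:
        name: str
        attack_surface: str      # where the attacker touches the design
        mitigated: bool
        rationale: str = ""      # why it is, or is not, mitigated here

    @dataclass
    class Asset:
        name: str
        objectives: list                         # e.g. ["confidentiality"]
        threats: list = field(default_factory=list)

    # One mitigated threat, and one deliberately left to the integrator.
    session_key = Asset(
        name="AES-256 session key",
        objectives=["confidentiality"],
        threats=[
            Threat("debug-port key readout", "JTAG/scan", True,
                   "key registers excluded from scan chains"),
            Threat("power side-channel extraction", "power rails", False,
                   "assumed closed at integration, e.g. shielded package"),
        ],
    )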

Savage: Mike, you picked up on my latest hobbyhorse on security. There’s a whole lot of reductionist philosophy about security, but that doesn’t give you security at the top. It’s a systems problem, not a leaf-level problem, for the most part.

Borza: The difficulty is that as an IP vendor you’re selling leaves. You’ve got this plug-in that’s going to go into something that may be part of a larger secure system, and that system has a functional requirement that’s independent of the security. The system itself needs to be secure in order to trust that the functional mission is being accomplished.

Savage: That’s a great place to start, because you can elevate the knowledge of the vulnerabilities to the next level. We can do that at each sub-system, and eventually you get from the sub-system level up to the system level.

Borza: This is sort of the antithesis of what we talked about before, where threat modeling was top-down. This is a case where these things come bottom-up, but they’re designed to integrate into, and anticipate, that top-down assessment.

Bron: We see this with certification, as well. Oftentimes certification is composite: the system consists of an SDK from vendor A, a back-end system from vendor B, and a fingerprint sensor from vendor C. Each of these components gets looked at individually, and the corresponding security guidance documentation tells you what residual risks there are. But those are somebody else’s problem. The person who integrates it all, or who ultimately brings a payment solution to market, bears the final financial risk and liability if something goes wrong. So at least in certification land, it’s not an unknown philosophy to look at it that way.

Borza: The idea of those components is that you can supply the documentation for a component ‘out of context,’ as they call it, independent of the overall system integration. The idea is that you close off those out-of-context, unmitigated risks when you get to the system. Sometimes you’ll finish the system and there are still unmitigated risks you know about, and you normally rationalize them away by saying, ‘We don’t think this is a relevant threat for our particular application. Nobody can get at this thing physically, so we don’t need to worry about side channels that you can only measure if you’re up close and physical with it.’
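
A toy sketch of that composition step (the vendor names and risk labels are invented for illustration): each component’s out-of-context documentation contributes residual risks, the integration closes some of them, and whatever remains must be mitigated or explicitly accepted at the system level.

    # Residual risks declared in each component's out-of-context documentation.
    # All names here are hypothetical.
    component_risks = {
        "sdk_vendor_a": {"weak_rng_fallback"},
        "backend_vendor_b": {"tls_downgrade"},
        "sensor_vendor_c": {"em_side_channel", "fault_injection"},
    }

    # Risks the integration itself closes, e.g. pinned TLS, a shielded package.
    closed_at_integration = {"tls_downgrade", "em_side_channel"}

    remaining = set().union(*component_risks.values()) - closed_at_integration
    print(sorted(remaining))  # ['fault_injection', 'weak_rng_fallback']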

Jella: I just want to clarify what the standard is currently doing. There are two things going on with it. One is helping designers or system integrators identify assets, using the methodologies suggested as part of the standard. The second part is, once we identify the assets, how do we communicate them to the rest of the engineering flow? That means the security collateral: the security parameters that go into the design, the security objectives, what the security assets are, what the attack points are, and what the vectors look like if an attack actually happens on a particular data point. All of that I would call security collateral. Today we have something called an IP bundle that we are all aware of; verification IP is part of it, along with testbenches and so forth. The security equivalent would be a security bundle, and all of this collateral would be part of that. We are trying to establish an industry standard for communicating all of this security collateral in a format that is consumable by system integrators. It needs to be something that IP folks can easily adopt and push out the door, and that EDA vendors can standardize on as a schema. That’s what we’re working on. The abstract part I mentioned is asset identification. Defining the format in which we provide the collateral is doable at this point, but defining assets is still somewhat abstract when you want to bring it into the mathematical realm. So it depends on the EDA vendors, and how they take this framework and adopt it into their tools.
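
To make that concrete, a machine-readable manifest for such a security bundle might look roughly like this (every field name is invented for illustration; P3164 has not defined this format):

    import json

    # Purely illustrative manifest for a "security bundle" shipped alongside
    # a conventional IP bundle; none of these fields come from P3164.
    security_bundle = {
        "ip": "ddr_phy_top",
        "version": "2.3",
        "assets": "collateral/assets.json",         # identified assets + objectives
        "threat_model": "collateral/threats.json",  # attack points and vectors
        "mitigations": "collateral/mitigations.json",
        "security_testbenches": ["tb/sec_smoke.sv", "tb/sec_fault.sv"],
        "unmitigated_risks": ["power_side_channel"],  # to be closed at integration
    }
    print(json.dumps(security_bundle, indent=2))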

Borza: You hit the nail on the head, Pavani, because it’s really about having this language that can be used by tools to integrate these things. If you can do that, there’s the possibility we can bring a lot more automation to bear on the overall system security problem.

Read part one of the discussion:
Defining Chip Threat Models To Identify Security Risks
Not every device has the same requirements, and even the best security needs to adapt.


