Defining Chip Threat Models To Identify Security Risks

Not every device has the same requirements, and even the best security needs to adapt.

Experts At The Table: As hardware weaknesses have become a major target for attackers, the race to find new ways to strengthen chip security has begun to heat up. But there is no one-size-fits-all solution. To figure out what measures need to be taken, a proper threat model must be established. Semiconductor Engineering sat down with a panel of experts at the Design Automation Conference in San Francisco, which included Andreas Kuehlmann, CEO of Cycuity; Serge Leef, head of secure microelectronics at Microsoft; Lee Harrison, director of Tessent automotive IC solutions at Siemens EDA; Pavani Jella, vice president of hardware security EDA solutions at Silicon Assurance (on behalf of IEEE P3164); Warren Savage, researcher at the University of Maryland’s Applied Research Lab for Intelligence and Security and currently the Principal Investigator of the Independent Verification and Validation (IV&V) team for the Defense Advanced Research Projects Agency (DARPA) AISS program; Maarten Bron, managing director at Riscure/Keysight; Marc Witteman, CEO at Riscure/Keysight; Mike Borza, scientist at Synopsys; Farimah Farahmandi, Wally Rhines Endowed Professor in Hardware Security and assistant professor at the University of Florida, and co-founder of Caspia Technologies; and Mark Tehranipoor, chair of the ECE department at the University of Florida and founder of Caspia Technologies. What follows are excerpts of that discussion.


L-R: University of Florida’s Tehranipoor, Silicon Assurance’s Jella, Cycuity’s Kuehlmann, Microsoft’s Leef, Synopsys’ Borza, Riscure/Keysight’s Witteman, DARPA’s Savage, Riscure/Keysight’s Bron, University of Florida’s Farahmandi, and Siemens’ Harrison.

SE: How do you define a threat model?

Jella: Threat modeling is about building a representation of what threats might be out there, so you can assess and understand the risks. If I were to own a home, I would want to secure it as much as possible.

Harrison: A threat model is really a model of what the impact would be for a particular kind of security attack. From that context you can judge the potential impact and the overall damage, making sure that when you apply mitigation for that attack, you apply it in a way that’s appropriate for the actual threat. You don’t want to apply a huge amount of mitigation for a threat that has minimal impact. The threat model really defines the next step you want to take.

Borza: Threat modeling for us is just the first stage of analyzing, assessing, and building secure systems. But it’s an extremely important stage, because it’s where you identify the assets you have to protect or that you’re trying to protect, the attackers, the adversaries, the nature of the threats, and as Lee said, the costs and impacts of those attacks and what your objectives are. It’s important, as well, to state what’s out of scope, what’s not an objective, or what’s not an asset that you need to protect. It’s that process of really whittling down what it is you care about in this particular system context.

Tehranipoor: Threat modeling has to be discussed within the context of CIA — confidentiality, integrity, availability. Once that is said, the rest is understanding what the entry points are, what causes potential leakage of information, or ways that you could manipulate sensitive information. There are many different ways — logical approaches, physical approaches, etc. — but at the end of the day, when you talk about modeling, it really comes down to what your objective is. In an academic security domain, I just need to make sure this information is protected. Once you go to the industrial domain, however, it’s all about tradeoffs between cost and risk — the risk assessment you have to perform, what makes sense, what doesn’t, and where the applications are. You have to bring all of that information together so you can form your strategy.

Farahmandi: Threat modeling is a structural analysis of the potential attack vectors, and structurally analyzing the security requirements of the design and how they can be addressed. You need to go through the security requirements. What are the attack surfaces? What kind of architectures do you need for them? What will be the security level that you need? Then we come up with a set of high-level decisions.

Bron: One of the key activities here is security evaluation, and threat modeling and the corresponding vulnerability analysis play an important role in it. A risk exists where an asset, a threat, and a vulnerability coexist, where they all come together. Threat modeling is a way to enumerate the threats that potentially could go after assets you wish to protect, and there are several methods for it. There is STRIDE, there is DREAD, there are IEEE standards. The common denominator between all those models is that they find structured ways of identifying the asset, which is something of value to the attacker. It could be something direct, like a payment credential or a card number that has value in itself, but also something indirect. For example, when you look at silicon, something that gives you access to a debug interface or to JTAG is also of value to an attacker. These threat modeling frameworks are there to put some structure and process around identifying assets and what the potential threats will be.
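The coexistence rule Bron describes — a risk exists only where an asset, a threat, and a vulnerability come together — can be sketched in a few lines of code. This is an illustrative sketch only; the asset names, threat list, and vulnerability entries below are hypothetical, and the structure is a simplification rather than an implementation of STRIDE, DREAD, or any IEEE standard.

```python
from itertools import product

# Assets: things of value to an attacker, direct or indirect.
assets = {
    "aes_key": {"value": "high"},     # direct asset
    "jtag_port": {"value": "high"},   # indirect asset: debug access
    "boot_log": {"value": "low"},
}

# Threat categories (a small STRIDE-like subset, names illustrative).
threats = ["tampering", "information_disclosure", "elevation_of_privilege"]

# Known weaknesses, mapped to the (asset, threat) pair they enable.
vulnerabilities = {
    ("aes_key", "information_disclosure"): "power side channel on AES core",
    ("jtag_port", "elevation_of_privilege"): "debug port left unlocked in production",
}

# A risk exists only where an asset, a threat, and a vulnerability coexist.
risks = [
    (asset, threat, vulnerabilities[(asset, threat)])
    for asset, threat in product(assets, threats)
    if (asset, threat) in vulnerabilities
]

for asset, threat, vuln in risks:
    print(f"RISK: {threat} against {asset} via '{vuln}'")
```

The point of the enumeration is what it leaves out: the low-value boot log and the threat/asset pairs with no known vulnerability produce no risks, which is exactly the scoping step the panelists describe.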

Savage: A threat model is an understanding of the possible attacks that may be carried out by a person, at a place, or by a thing. It’s understanding, when you’re doing design, the mitigations you need to put in by anticipating what bad things could happen down the road to the product you’re building.

Witteman: Threat modeling is a bit of a fancy term. It sounds abstract, and therefore I don’t really like it that much. Threat modeling should be about translating theory into practice. There are 1,000 publications every year about threats, and most people struggle to make the distinction, to see what’s bad and what’s very bad. Translating that theory into practice is important, and that is what threat modeling could be about.

Leef: When I joined DARPA, there were two other project managers who were responsible for different facets of hardware security. What I quickly found is there was dissonance between myself and my two colleagues in our definitions of the attack surfaces. We ended up defining a model with four attack surfaces, ranging from side channel to supply chain to reverse engineering to malicious hardware. That may not be an exact representation of absolutely everything, but it was good enough for us to unify our forces, rally the community of researchers, and actually make some progress. For a given attack surface, what intensity of defense do you really need to apply? That is driven by the economics of the chip and the use case. These guys see that the things getting attacked most of the time are bank cards and set-top boxes — things closest to the money, so to speak. Among the attack surfaces, we clearly saw at the Department of Defense that two of them attracted economic attackers and two were in the domain of nation states. What we aspired to do is come up with countermeasure strategies whose economics matched the impact and potential damage of the attack. But what your topic made me think of is, we can defend against things we have seen before and that humans can anticipate. But what about applications of artificial intelligence here? What about modeling our enemies? In other words, what are our adversaries likely to do? What are they thinking? Since we cannot really predict that kind of stuff, that may be an application space for AI techniques to contemplate.
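Leef’s four-surface model, with defense intensity driven by adversary class and chip economics, can be sketched as a small lookup. Note one loud assumption: the panel says two surfaces attract economic attackers and two attract nation states, but does not say which; the assignment below is an illustrative guess, as are the policy thresholds.

```python
# The four DARPA attack surfaces named in the discussion. Which adversary
# class each one attracts is an illustrative assumption -- the discussion
# only says two map to economic attackers and two to nation states.
ADVERSARY_BY_SURFACE = {
    "side_channel":        "economic",      # e.g. bank cards, set-top boxes
    "reverse_engineering": "economic",
    "supply_chain":        "nation_state",
    "malicious_hardware":  "nation_state",
}

def defense_intensity(surface: str, chip_value_usd: float) -> str:
    """Toy policy: match countermeasure spend to the adversary class
    and the economic value at stake. Thresholds are hypothetical."""
    adversary = ADVERSARY_BY_SURFACE[surface]
    if adversary == "nation_state":
        return "maximum"                       # damage is strategic, not monetary
    return "high" if chip_value_usd > 100_000 else "moderate"

print(defense_intensity("supply_chain", 10_000))   # nation-state surface
print(defense_intensity("side_channel", 10_000))   # economic, low-value target
```

The design choice the sketch encodes is the one Leef states: countermeasure economics should be matched to the impact and potential damage of the attack, not applied uniformly.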

Kuehlmann: I would summarize it in short as matching the business plan of an attack. The business plan includes the technical capability, the business objectives, and where the attacker wants to put resources, and you match your response to that. That’s at the high level. Below that, there are a lot of technicalities.

Leef: What’s the worst thing that can happen when somebody attacks the intelligent lawn sprinklers? But what about the system that manages control surfaces on an ICBM? Strategies have to be proportionate to the threat. I’m being facetious here, but we actually looked at the scenario of adversaries causing intelligent toasters to burn down houses in Orange County, for example. How many houses can you burn down? That actually is a relatively lucrative target for terrorists.

Tehranipoor: Threat modeling, for everything you have around, is really about what is going to be leaked and what is going to be manipulated. When you think about models, there is an objective for the model. When you think about threat modeling, it’s actually a top-down approach the majority of the time, if not all of the time. I have this particular system over here with this particular application. Can that application become a problem? If you don’t address that, then you have to go to the layer below and see what happens there. Then you’re going to have to go another layer below, and eventually you get to the SoC design level and say, ‘I need to worry about this because it potentially can propagate up.’ So there is a top-down, bottom-up kind of tradeoff to figure out what the particular impact is going to be. The example Serge is giving, even though we’re joking about it, is a really good one. From a sprinkler standpoint, what would be the impact? Is it even worth it to go to the chip level and do threat modeling? The answer is no. But you get a missile application and ask, ‘Do I need to do modeling?’ The answer is yes. Then you start figuring out where that particular CIA violation is going to happen.

SE: Cybersecurity has long been focused on software. Only in recent years have we started talking really seriously about hardware. Have enough resources been directed towards figuring out security modeling when it comes to hardware?

Borza: The analytical processes for hardware and software are largely the same, so we’ve benefited from the advancement of software security through the threat modeling processes being developed. The specific threats can be different, and the specific mechanisms that are exploited are different, but the underlying analytic principles are the same. This is a case where we’re able to leverage something that’s going on in a closely related area and get a lot of benefit out of it. It’s early days. There are a lot of people and a lot of companies that don’t do significant threat modeling on their products. Obviously, there’s still education work to be done about why this is important. But that’s where we are right now. Some places take it very seriously and spend significant resources and effort on it.

Leef: I would echo what Mike said. We saw it when we were crafting the initial security programs at DARPA. We characterized the community of designers into four groups. At the top were merchant semiconductor companies that understood security threats, had a lot of in-house security expertise, and created bespoke solutions. These companies have hundreds to thousands of engineers to do all this stuff manually. On the other edge were the IoT startups. To them, the most important thing was getting to market quickly, and nobody knows if they’re going to be around a year from now, so who cares? We thought the interesting groups between those two extremes were system companies that do chips, like Cisco or some automotive guys. What we saw were pockets of expertise here and there, but management questioned the economics of introducing security. This was truly application-specific. If a car crashes, can somebody trace this to whatever? The other group that was interesting was the defense industrial base. All of those companies had experts, but they were highly valued. They were viewed as craftsmen. They would be deployed only on things where the Pentagon said this must be secure. It wasn’t pervasive by any stretch of the imagination. I also want to add that Mark’s team has created a really nice taxonomy of threats. My thought is that this taxonomy is a great entry point. We now have increasingly clear definitions of the attack surfaces. These are fundamental, foundational things that could allow us to start thinking about maybe applying AI to get into the heads of adversaries.

Bron: You mentioned threat modeling. What if we take the words ‘threat modeling’ very literally, and we try to model the threat in terms of AI? What would an adversary do? Interestingly enough, we had a similar kind of conversation internally at Riscure a few weeks ago. We provide testing tools, measurement equipment with which you can measure the attack resistance of a device or silicon, and that is absolutely lethal in the right hands. If that’s the person you’re going to model as part of your toolset, you create something that is very dangerous. I’m intrigued by the fact that if you were to model what an adversary would do, the adversary would make the economic tradeoffs you mentioned differently. We all know that if the semiconductor gets compromised, you’ve built a house on quicksand, and all the layers above it could theoretically be compromised. But the security doesn’t have to be absolute. In DoD, the way we were thinking is that if an adversary is capable of breaking into one chip at a cost of $1 million, that’s a win for us. If they’re able to break into a million chips at the cost of $1 million, that’s a win for them.
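The win/lose tradeoff Bron describes comes down to whether an attack scales: the defender cares less about the total attack cost than about the amortized cost per compromised device. A toy calculation makes this concrete (the function name and figures are illustrative, with the dollar amounts taken from his example):

```python
def cost_per_compromise(attack_cost_usd: float, devices_compromised: int) -> float:
    """Amortized attacker cost per compromised device."""
    return attack_cost_usd / devices_compromised

# Breaking one chip for $1 million: the attack doesn't scale,
# so in this framing the defender wins.
non_scalable = cost_per_compromise(1_000_000, 1)

# The same $1 million breaking a million chips (e.g. one extracted
# class-wide key reused across the fleet): $1 per device, attacker wins.
scalable = cost_per_compromise(1_000_000, 1_000_000)

print(f"non-scalable attack: ${non_scalable:,.2f} per device")
print(f"scalable attack:     ${scalable:,.2f} per device")
```

This is why countermeasures that only force per-device effort (unique keys per part, for instance) can be a win even against an adversary willing to spend $1 million once.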

Kuehlmann: There’s a lot of similarity between software and hardware. But there is a fundamental difference when you look at the threat models in an attack. There’s an attack and there’s a response. With software, to take the classical example, you have some vulnerable open-source software, and the race to find vulnerabilities starts, typically within a few days. With hardware, you have a very different response. You have a much longer lifecycle, so this whole ecosystem between the attacker and the defender is quite different.

Farahmandi: We can leverage the understanding built up on the software security side, but there are fundamental differences, because when we are talking about hardware security, there are a lot of supply chain attacks we are worried about. There are a lot of attacks that depend on the equipment and the capability of the attackers, and the complexity of the designs is different, too. But attackers can also leverage software to attack the hardware. That’s one of the most significant attacks we see on the hardware side. We saw examples of such a hack at Intel, and there are several such attacks. You create the software code, and the software can access hardware to create privilege escalation or side-channel attacks, which makes them much more widespread. Everybody could do that. Rather than needing access to the design or to the equipment, you just leave it to the software side. It puts the attack within everybody’s reach.

Borza: Software quality is getting better. Classic defects, like buffer overruns and overflowing an array boundary, have gotten much harder to commit in some of the more modern languages. What that has done is shift the effort of adversaries mounting software attacks toward exploiting underlying hardware vulnerabilities they may be able to use to carry out what’s essentially a software attack. It’s a software-only attack that succeeds because they’re able to exploit an underlying hardware vulnerability that gives them access to the system in a way that was never designed and that they were never intended to have.

Bron: Are you arguing that the lowest-hanging fruit is moving from software to hardware because of modern languages like Rust?

Borza: I’m arguing that software attacks will continue to come at us, but that they are starting to incorporate more and more exploitation of hardware vulnerabilities because the easy software-only defects are disappearing.

Tehranipoor: Very good point, and the quality of software testing with regard to security has gone up tremendously. What’s really keeping me up at night is this notion of whether there is a particular part of the hardware that somebody could program the software to figure out and exploit. That could cause issues similar to what Meltdown and Spectre did. When that happened, Intel lost 8% of its share value, so we have to think about an attack like this that could potentially appear again.

Savage: I work for ARLIS, a research lab affiliated with the University of Maryland whose primary sponsor is the Office of the Undersecretary of Defense. Our specialty is what we call the human domain with regard to security. I am one of two semiconductor people at this place, which deals mostly with three-letter agencies and that kind of security problem. The level of thinking that goes on at the semiconductor and EDA level is nothing that approaches the level of thinking those guys do about very creative ways to attack the United States. There’s a big opportunity for that in our industry.

Harrison: Ultimately, you need to look at it from a hybrid approach. This includes adding elements into the design to detect some of these attacks — so not necessarily having that security built deeply into your design, but having that observation of what’s going on so you can detect attacks when they happen. It’s great being able to detect these things. With software, you can make that analysis and say, ‘The hardware is saying this, so there must have been an attack.’ But to carry that through, you also need hardware components in the design to be able to mitigate and shut down that attack.

Leef: The first time I met Mark, probably 30 years ago, we talked about this. What happens if the on-chip instruments do detect a high-likelihood Trojan? What do we do? If we shut the thing down, that may be exactly what the adversary intended, and quarantining it doesn’t make any sense. To me, this stands as an open question. I have not seen an elegant strategy for it.

Tehranipoor: We’re now switching to an interesting topic, which is manipulation. If manipulation happens, what are you going to do about it? My quick answer is that there’s always a decision engine on the chip that will do something.

Jella: Back in the day, I used to work on developing models for power supply boards. We would look at what, in today’s terms, you would call PPA. But in this security landscape, it’s beyond that. It’s not three-dimensional. It’s not five-dimensional. It’s multi-dimensional. There is scale, there is impact, first of all, and then there’s the scale of the impact. Even if designers want to bring security into their design, they have to think about the footprint again, performance all over again, cost all over again. Even if we have all these dimensions figured out, do we actually get this done just by the IP vendors, or only the EDA vendors? It has to be a strong ecosystem, just like we have IP vendors, EDA vendors, design service vendors, and so forth in the entire ecosystem of designing chips. We also need a stronger security ecosystem, and it takes time to build that engine all over again. It’s a very complex problem.


