Security Gaps In Open Source Hardware And AI

Experts at the Table: Why AI systems are so difficult to secure, and what strategies are being deployed to change that.

Semiconductor Engineering sat down to discuss security risks across multiple market segments with Helena Handschuh, security technologies fellow at Rambus; Mike Borza, principal security technologist for the Solutions Group at Synopsys; Steve Carlson, director of aerospace and defense solutions at Cadence; Alric Althoff, senior hardware security engineer at Tortuga Logic; and Joe Kiniry, principal scientist at Galois, an R&D center for national security. What follows are excerpts of that discussion, which was held live at the Virtual Hardware Security Summit. To view part one of this discussion, click here.

SE: If a chip is supposed to last more than 10 years, it’s very likely that someone will be able to hack it during its lifetime. Is there a way to architect chips so that doesn’t happen?

Borza: Some interesting approaches to security are built around the idea that you don’t know what threats are coming, but you can keep shaking up the behavior of the chip in a way that prevents somebody from gaining a foothold to launch an attack. One approach involves a notion of randomizing the operation of the processor and adding noise into it to constantly shake that up. Even if you manage to get a foothold into the chip to get an attack rolling, in a few seconds it changes on you. Those kinds of approaches are starting to be developed through the DARPA programs, where they’re exploring the use of design automation, but also a lot of security automation that’s sort of baked into the silicon at a very low level.
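
As a rough software analogy of the moving-target approach Borza describes, the sketch below periodically re-randomizes the state an attacker would need to learn, so a foothold goes stale within seconds. The class, interval, and seed handling are illustrative assumptions, not any specific DARPA design.

```python
# Conceptual sketch of the moving-target idea: state an attacker must learn is
# re-randomized on a short interval, so anything derived from the old layout
# (address scrambling, timing jitter, bus encoding) expires within seconds.
import os
import threading
import time

class MovingTargetState:
    """Holds a layout seed that downstream logic uses to scramble its behavior."""

    def __init__(self, interval_s: float = 2.0):
        self.interval_s = interval_s
        self._seed = os.urandom(16)
        self._lock = threading.Lock()

    @property
    def seed(self) -> bytes:
        with self._lock:
            return self._seed

    def _rekey_loop(self):
        # Periodically replace the seed; an attacker's model of the system
        # built against the previous seed becomes useless.
        while True:
            time.sleep(self.interval_s)
            with self._lock:
                self._seed = os.urandom(16)

    def start(self):
        threading.Thread(target=self._rekey_loop, daemon=True).start()

state = MovingTargetState(interval_s=2.0)
state.start()
print(state.seed.hex())  # anything derived from this value is stale after the next rekey
```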

Carlson: That’s really interesting stuff, where you want to monitor everything, every possible attack surface — temperatures, AC/DC levels. And if you have that monitoring capability, how you respond to different scenarios can be updated to deal with new kinds of attacks, and new combinations of attacks. You’ve got the basis for recognition and some sort of remediation. But I don’t hold out much hope that we’ll be able to prevent attacks over a 10-year timeframe. That seems pretty optimistic based on what I’ve seen from the hacking community today.
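
The monitor-and-respond pattern Carlson outlines can be sketched in a few lines. The sensor names, thresholds, and remediation hook below are hypothetical stand-ins for on-die sensors and a field-updatable response policy.

```python
# Minimal sketch: sensor readings are checked against a policy table that can be
# updated in the field to recognize new attack signatures, triggering remediation.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Policy:
    low: float
    high: float
    remediation: Callable[[], None]

def lockdown():
    print("entering degraded/safe mode")

# Updatable policy table; in silicon this would map to on-die sensors and a response fabric.
policies: Dict[str, Policy] = {
    "die_temp_c": Policy(low=-20.0, high=105.0, remediation=lockdown),
    "vdd_core_v": Policy(low=0.72, high=0.88, remediation=lockdown),
}

def check(readings: Dict[str, float]) -> None:
    # Flag anything outside its policy window and invoke the configured response.
    for name, value in readings.items():
        policy = policies.get(name)
        if policy and not (policy.low <= value <= policy.high):
            print(f"anomaly on {name}: {value}")
            policy.remediation()

check({"die_temp_c": 131.0, "vdd_core_v": 0.81})  # glitch-like excursion triggers the response
```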

Handschuh: One other thing worth mentioning, which is a new approach, is RISC-V and the work that the RISC-V Foundation, now RISC-V International, is trying to do. They’re trying to completely open source designs and make sure that everybody sees what’s going on — not necessarily free, but open source — so that we can all start from the same basis. And yes, of course, we will find bugs and problems, but we will be able to fix them as we go. And the more we move forward, the more we learn, so that, globally, everybody gets educated a bit more about how to do this securely. Security is a big part of that. So there is hope.

Kiniry: We’re going to learn a lot about this because these security architectures are being vetted right now in DARPA’s first bug bounty exercise, called FETT. We’re running it behind the scenes, and it’s been fascinating to watch hundreds of red teamers attack these DARPA-funded platforms, find interesting things, and cycle through those iterations. Chips coming from academic teams to prime contractors are being vetted very carefully, as best you can do in a distributed red team environment. We’re going to see more and more of that moving forward.

Handschuh: Is there anything to publish?

Kiniry: I’m arguing for that, and we’ll see how that works out. They’re learning a lot from FETT. I wouldn’t be surprised if we have a second edition next year. And the cool thing about the exercises is that the infrastructure we’ve built is kind of reusable for other exercises in a comparable fashion, and it’s all deployed on AWS. So we’re actually running something like 10 different SoCs on AWS F1 for this entire exercise with operating systems, secure enclaves, and everything else.

Handschuh: Is there a future for some kind of an online penetration testing solution for hardware?

Kiniry: That’s been done in an ad hoc way on occasion, but we’re working with Synack on this. And this is the first time Synack has ever done an engagement where you simulate hardware in the cloud as part of an exercise. Everyone is learning a lot from it. And coming out of that, we’ll learn whether or not it’s reasonable to continue to do that.

Carlson: Going back to open source, there is a lot of hope and promise, but there’s the flip side of the coin, too. You always want to know who is spending all their hours on that open source project, and what are their intentions. There seems to be no shortage of folks with somewhat awkward intentions, if not nefarious ones.

Borza: Open source has been very good, but it’s not a be-all and end-all. We’ve seen open source software cases where there were long-standing bugs or security defects present, and it was in nobody’s interest to find them or analyze the thing. And so while the idea of open source is great in terms of everybody being able to look at it, understand it, and figure out whether there are vulnerabilities present, unless people are actually doing that you’re not gaining any more insight. What’s usually happening is companies are just picking up that stuff and using it without having to pay any development costs. They’re not supporting the ongoing security development. You have a very similar situation with open-source hardware. The unsexy part of this is verifying it and testing all the corner cases. That’s a lot of hard work, and it’s difficult to justify unless the chip industry is going to take that on, company by company, and effectively duplicate the work of verifying a processor core for every chip they put out.

Carlson: My favorite anecdote involves Linux. There was a bug where if you held down the spacebar you became root. It was there forever.

Kiniry: It feels like the 1990s right now with this conversation, because we’ve seen the evolution of what happened in software. In the early days, everybody said people like me were crazy for working on open source and contributing. These days I work on open-source hardware, and my customers will not buy something unless they have full transparency all the way down to physical design, as well as the tools that vet the hardware. If you build a tool that is a black box, and you make some promise and you put your hand over your heart, most of my customers will reject that promise no matter how honest-looking you are.

SE: Let’s drill down into transparency. One issue we’re starting to see with AI is that most of this stuff is a black box. We don’t necessarily have the transparency into the algorithms and how they change, and use cases will vary and subsequently present different security risks. How do we add that transparency?

Althoff: There is some academic work around transparency, and particularly finding safe regions and applying constraints to the models. A lot of security is ensuring properties, and active defense is going to be a part of this. We may end up seeing AI that is meant to ensure another AI is behaving properly.
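
A minimal sketch of the safe-region idea Althoff mentions: wrap a model so its answer is only trusted when the input lies inside the region it was validated on, with anything else routed to a conservative fallback (or to a second, guarding model). The bounds and fallback below are assumptions for illustration, not a specific published method.

```python
# Wrap a model so predictions are only trusted inside the validated region.
from typing import Callable, List, Tuple

class SafeRegionWrapper:
    def __init__(self,
                 model: Callable[[List[float]], float],
                 bounds: List[Tuple[float, float]],
                 fallback: Callable[[List[float]], float]):
        self.model = model
        self.bounds = bounds        # per-feature (min, max) from the validation set
        self.fallback = fallback    # conservative behavior outside the safe region

    def in_safe_region(self, x: List[float]) -> bool:
        return all(lo <= xi <= hi for xi, (lo, hi) in zip(x, self.bounds))

    def predict(self, x: List[float]) -> float:
        if self.in_safe_region(x):
            return self.model(x)
        return self.fallback(x)     # a second "guard" model could also vet the answer here

wrapped = SafeRegionWrapper(
    model=lambda x: sum(x),                      # stand-in for a trained network
    bounds=[(0.0, 1.0), (0.0, 1.0)],
    fallback=lambda x: 0.0,                      # refuse / default rather than extrapolate
)
print(wrapped.predict([0.3, 0.7]))   # trusted: inside validated region
print(wrapped.predict([5.0, 0.7]))   # out of region: fallback answer
```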

Kiniry: There are several DARPA programs on the agenda now because of attention to the explosive growth of AI research and its use in the DoD. If an AI is a black box, especially one used in a safety-critical or mission-critical setting, it’s not acceptable for it to just give you a promise and say, ‘Trust me.’ You don’t trust that. There are several new research programs spun up right now on explainable AI, where you can build software, hardware, and AI platforms that give justification evidence for the answers they provide. It’s a whole new view.

Carlson: How do you explain the unexplainable?

Kiniry: Well, most AI does boil down to a fairly complicated set of equations.

Handschuh: What we’re witnessing with AI is a little different, but also similar to security. It’s a little different because we’ve reached a point in deep learning where we humans don’t necessarily understand what’s going on anymore. It goes beyond our mental computation capabilities. The results seem correct in some form, but we don’t really know what’s going on. Security at some point in the future may start reaching that level, as well, when we have so many lines of code, or the hardware equivalent, that we can’t possibly figure it out by ourselves anymore. We will need tools to do that. And perhaps one day, even the tools won’t be able to cover everything anymore. It’s a question for the future.

SE: There are two sides of this. First, you can impact the training algorithm with very subtle changes, and poison the code. Second, on the inferencing side, the system may not be doing exactly what you think it’s supposed to be doing and you don’t know why. Is there any progress on either front?

Borza: Yes, there is. People are trying to make the AI algorithms more explainable, and also to give some observability into the internals. But fundamentally what’s going on is there are discontinuities in the data sets that are used for training, and those discontinuities don’t necessarily manifest as linear responses. The responses go extremely nonlinear when you encounter a situation the AI hasn’t been trained for. This is one of the things we really need to start addressing — whether you can have a linear response to these fundamentally discontinuous problems, or whether you can have solutions for a discontinuous set of input data. Over time that’s going to improve. But for some time, and possibly a long time, systems will occasionally do things that seem to come completely off the rails.
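
Borza’s point about discontinuities can be illustrated with a toy out-of-distribution check: before trusting an inference, measure how far the input sits from anything seen in training. The distance metric and threshold here are assumptions for the sketch; production systems use calibrated detectors.

```python
# Flag inputs far from the training data before trusting a potentially wildly
# nonlinear model response.
import math
from typing import List

def euclidean(a: List[float], b: List[float]) -> float:
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_training_distance(x: List[float], training: List[List[float]]) -> float:
    return min(euclidean(x, t) for t in training)

training_set = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.18]]
THRESHOLD = 0.5   # beyond this, the model is extrapolating into untrained territory

for candidate in ([0.12, 0.17], [3.0, -2.0]):
    d = nearest_training_distance(candidate, training_set)
    if d > THRESHOLD:
        print(f"{candidate}: out of distribution (d={d:.2f}), do not trust the inference")
    else:
        print(f"{candidate}: within trained region (d={d:.2f})")
```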

Carlson: Just like people.

SE: A lot of security involves one-vendor solutions, but as we look at future designs, there are more multi-vendor, heterogeneous designs coming. Each one of those potentially has a different level of security. How do we build that into the architecture so there isn’t a problem? This includes updates, as well, because different pieces may be updated at different times with different capabilities.

Handschuh: First of all, there is a notion of lifecycle in all these devices, including multiple IPs, chips, or SoCs, or pieces that get put together. And then there’s almost like a passport — an identity form that says these are the pieces that I have, these are the kind of services that each piece will render, this is what I’m expecting from each piece. There needs to be some more thinking about how we make sure it all fits together well, and that we can correctly track which part is at which stage so that we consistently know, ‘Okay, this part maybe is behind by one step. So we’ll wait until it gets updated to re-establish and reconfirm the security levels as we move on.’ I’m envisioning some kind of a passport that gives you properties and characteristics where you could say, ‘Which piece is at what level?’
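
One way to picture the passport Handschuh describes is as a per-component record of identity, services, claimed security level, and lifecycle stage, plus a check for parts that have fallen behind. The field names and levels below are hypothetical.

```python
# Sketch of a component "passport" and a lag check across a multi-vendor system.
from dataclasses import dataclass
from typing import List

@dataclass
class ComponentPassport:
    vendor: str
    part: str
    services: List[str]
    security_level: int          # agreed assurance level, higher is stronger
    firmware_version: str
    lifecycle_stage: str         # e.g. "provisioned", "deployed", "pending-update", "retired"

def lagging(components: List[ComponentPassport], required_level: int) -> List[ComponentPassport]:
    """Return parts whose claimed level is below what the system currently requires."""
    return [c for c in components if c.security_level < required_level]

system = [
    ComponentPassport("VendorA", "secure-enclave", ["key-store"], 3, "2.4.1", "deployed"),
    ComponentPassport("VendorB", "radio", ["ota-update"], 1, "1.0.9", "pending-update"),
]
for c in lagging(system, required_level=2):
    print(f"hold re-certification until {c.vendor}/{c.part} is updated")
```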

Althoff: This is a situation where you might want to categorize by threat models. You might want to say these components are secure entities, environments, and come with these survivability plans. If you put everything together in this way, under this threat model, you have a guarantee. But then you have the problem of matching levels and making sure that if you compose these components, you have something that is correct by construction afterward.
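
Althoff’s composition argument can be sketched as follows: each component declares the threat models under which its guarantees hold, and a system-level threat is covered only if every component in the chain claims it, so the weakest claim bounds the composed system. The threat-model names are invented for illustration.

```python
# Check whether a composed system covers a given threat model.
from typing import Dict, Set

component_claims: Dict[str, Set[str]] = {
    "cpu_core":  {"remote-software", "local-software"},
    "crypto_ip": {"remote-software", "local-software", "physical-side-channel"},
    "ddr_phy":   {"remote-software"},
}

def composition_covers(threat_model: str) -> bool:
    # Correct by construction only for threats every component defends against.
    return all(threat_model in claims for claims in component_claims.values())

for tm in ("remote-software", "physical-side-channel"):
    status = "covered by composition" if composition_covers(tm) else "NOT covered: weakest component limits the system"
    print(tm, status)
```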

Kiniry: How often have you seen a piece of IP of any kind come with a threat model?

Althoff: Never.

Kiniry: So we have a fundamental problem here with the way we’re building reusable artifacts today, because it does not lend itself to composable security.

Borza: Part of what we’re doing with IPSA (IP Security Assurance) at Accellera is trying to address that. We’re starting to develop a standardized method for interchanging information about threat models, and part of that work is feeding the hardware portion of the CWE (Common Weakness Enumeration) database. What are the well-known attacks? What are the fixes for those? How do you try to discover those before you ship IP? But you’re right, until the last couple of years most IP didn’t come with a threat model.
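
To give a flavor of what such an interchange could look like (this is an invented example, not the actual Accellera IPSA schema), security metadata traveling with an IP block might enumerate assets, the hardware CWE entries considered, and how each mitigation was verified:

```python
# Invented, illustrative security manifest for an IP block; the schema is an
# assumption, only the CWE identifiers are real entries from the hardware CWE list.
ip_security_manifest = {
    "ip": "aes_engine_v3",
    "assets": ["key_register", "round_state"],
    "considered_weaknesses": [
        {"cwe": "CWE-1300", "summary": "improper protection of physical side channels",
         "mitigation": "masked datapath", "verified_by": "pre-silicon leakage assessment"},
        {"cwe": "CWE-1191", "summary": "on-chip debug interface with improper access control",
         "mitigation": "lifecycle-gated JTAG", "verified_by": "security testbench"},
    ],
    "assumed_threat_model": ["local-software", "physical-side-channel"],
}

# An integrator's tool could diff this manifest against the SoC-level threat model
# and flag any weakness class the IP does not claim to address.
print([w["cwe"] for w in ip_security_manifest["considered_weaknesses"]])
```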

Kiniry: Much of the work we do is building the interfaces between components using formal techniques, so that you can guarantee something about the data interchange across interfaces, or at least protect yourself against a component of the system being circumvented or back-doored or impacted by something else in the rest of the system. These interfaces are where we spend most of our effort, whether it’s automotive or DoD things that fly or otherwise. It’s kind of boring work. It’s like parsers in hardware and stuff like that. But at least it’s a place where you can build something useful, tractable, and high-performance, and which provides some really well-known guarantees in the meantime.
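
A software sketch of that boundary-hardening idea: every message crossing a component interface is parsed against a strict, explicit format and rejected otherwise, so a compromised peer cannot push malformed data deeper into the system. The frame layout here is hypothetical; in practice the equivalent logic lives in RTL and is checked against a formal interface specification.

```python
# Strictly validate frames crossing a component boundary; reject anything malformed.
import struct

MAX_PAYLOAD = 64
VALID_OPCODES = {0x01, 0x02, 0x03}

def parse_frame(frame: bytes):
    # Header: opcode (1 byte), length (1 byte), then a payload of exactly `length` bytes.
    if len(frame) < 2:
        raise ValueError("truncated header")
    opcode, length = struct.unpack_from("BB", frame)
    if opcode not in VALID_OPCODES:
        raise ValueError(f"unknown opcode {opcode:#x}")
    if length > MAX_PAYLOAD or len(frame) != 2 + length:
        raise ValueError("length field inconsistent with frame size")
    return opcode, frame[2:]

print(parse_frame(bytes([0x01, 0x03]) + b"abc"))     # well-formed frame is accepted
try:
    parse_frame(bytes([0x09, 0xFF]) + b"\x00" * 10)  # malformed frame is rejected at the boundary
except ValueError as e:
    print("rejected:", e)
```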

Borza: It gives you a way, as a system component designer, to protect yourself from something that’s coming at you from within the system. But the question is, how do you make the entire system continue to perform in the way that it’s supposed to — or at least get done as much of its mission as possible, in spite of the fact that you now have parts of the system that may be compromised or behaving badly?

Kiniry: Absolutely. You can’t just printf in hardware when something goes wrong.
