Experts at the Table, part 2: Emphasis shifting to firmware, system-level architectures, and collaboration between industry, academia and government.
Semiconductor Engineering sat down with Helena Handschuh, a Rambus fellow; Richard Newell, senior principal product architect at Microsemi, a Microchip Company; and Joseph Kiniry, principal scientist at Galois. Part one of this discussion is here.
L-R: Joseph Kiniry, Helena Handschuh, Richard Newell.
SE: Some of the new applications for hardware designs are tied to AI, deep learning and machine learning. Those kinds of chips are based on much higher integration of software and hardware, but the starting point is the flow of data through a chip rather than software or hardware. How does that affect security?
Newell: We’re going to have to use principles of separation to ensure that processes line up and have not been compromised.
Handschuh: How you create open-source security is a big question. We don’t know how to do computation on encrypted data efficiently today. That is very complicated. And we haven’t even started looking at what it means to mount malicious attacks on encrypted data processing. That’s a much bigger problem.
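To give a feel for what "computation on encrypted data" means, here is a minimal sketch using the multiplicative homomorphism of textbook RSA: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. This is a toy with tiny primes and no padding, so it is insecure and illustrates only one operation; practical fully homomorphic schemes are far heavier, which is part of why this remains so hard.

```python
# Toy demonstration: textbook RSA is multiplicatively homomorphic,
# so Enc(a) * Enc(b) decrypts to a * b (mod n).
# Tiny primes, no padding -- insecure, for intuition only.

p, q = 61, 53
n = p * q                            # public modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 7, 12
c = (enc(a) * enc(b)) % n    # the server multiplies ciphertexts only
assert dec(c) == (a * b) % n  # the owner of d recovers the product, 84
```

The party holding only the public key can compute on the data without ever seeing `a` or `b`; only the private-key holder learns the result.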
Kiniry: The DoD is very excited about AI, but not everyone is thinking about security at that level. In fact, the only algorithms that pay attention to security are ones that can tag certain data as secret using information-flow reasoning. There’s quite a bit of funding for secure computation, as well as for encryption in software and hardware, but that’s just going to give you a bigger hammer. It’s not always the right hammer, and it’s very slow. There are new challenges coming up, too. Let’s say you have a set of algorithms in hardware and software that are used for good purposes. We still have a fundamental problem explaining what the hardware is doing. It’s called ‘explainable AI,’ whereby algorithms are able to provide evidence that the answer they’ve given is actually the right answer. That’s a wide-open challenge. There’s a lot of work on it, but very few people are thinking about explainable AI from a hardware standpoint.
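The idea of tagging data as secret via information-flow reasoning can be sketched very simply: labels travel with values through computation, and a guarded sink refuses to emit anything derived from a secret. The `Tagged`/`publish` names below are hypothetical, chosen for illustration; real systems enforce this in the type system or in hardware.

```python
# Minimal taint-tracking sketch: a "secret" label propagates through
# computation, and a sink check blocks secret-derived output.
# Class and function names are illustrative, not a real API.

class Tagged:
    def __init__(self, value, secret=False):
        self.value = value
        self.secret = secret

    def __add__(self, other):
        # Anything derived from a secret input stays secret.
        o = other if isinstance(other, Tagged) else Tagged(other)
        return Tagged(self.value + o.value, self.secret or o.secret)

def publish(x):
    # Sink: refuse to emit secret-labelled data.
    if x.secret:
        raise PermissionError("refusing to output secret data")
    return x.value

key = Tagged(42, secret=True)
counter = Tagged(7)

publish(counter)              # fine: public data
try:
    publish(counter + key)    # taint propagated from the key
except PermissionError:
    pass                      # blocked at the output boundary
```

Hardware realizations attach such labels to registers and memory words so the propagation rule is enforced by the silicon rather than by software convention.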
SE: Isn’t the term ‘right answer’ somewhat murky? The correct answer is a distribution rather than a fixed number.
Kiniry: Yes, and all of the early work has been on the intersection of probabilities and programming languages. Now we’re going to see the intersection of probabilities and hardware design, where you end up with a mix of devices more often than not.
SE: One of the issues with any security is that as devices are connected to the Internet, they may be connected to less-secure devices. That can even happen in the same system. How do we solve that problem? Is it making sure all of the pieces are secure, or is it a top-down solution?
Kiniry: The work we’ve seen going on is about understanding systems at a high architecture level, figuring out which key components sit at the center of a secure space, and which components can be replaced with secure ones at a reasonable cost. We’re seeing that in military vehicles, in things that drive and things that fly. But it’s still early days, and the fundamental problem is you don’t see companies doing this until their customers care.
Newell: It’s important that we draw security boundaries around those things we really can defend. Making everything totally secure is probably not going to happen in our lifetimes.
Handschuh: We need to look at this from a threat-modeling perspective. You’re trying to isolate the pieces in your system that you really care about, like the ones that really should not be tampered with. You want to make sure those are secure, and simplify them wherever you can. There is the super-secure part of it, which contains all of the information needed to run the system, and then there is the rest of the system. It’s a complex problem.
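The isolation Handschuh describes can be sketched in a few lines: the secret lives only inside a small "secure core," and the rest of the system sees nothing but a narrow interface. The class and names below are illustrative assumptions (Python's privacy is by convention, so this models the boundary rather than enforcing it in hardware).

```python
# Sketch of drawing a security boundary around the part you care
# about: the key never leaves the small core; the rest of the system
# only gets a MAC/verify interface. Names are illustrative.

import hashlib
import hmac
import os

class SecureCore:
    def __init__(self):
        self._key = os.urandom(32)   # secret stays inside the core

    def mac(self, message: bytes) -> bytes:
        # The only operations the outside world gets are mac/verify.
        return hmac.new(self._key, message, hashlib.sha256).digest()

    def verify(self, message: bytes, tag: bytes) -> bool:
        # Constant-time comparison avoids a timing side channel.
        return hmac.compare_digest(self.mac(message), tag)

core = SecureCore()
tag = core.mac(b"firmware-update-v2")
assert core.verify(b"firmware-update-v2", tag)
assert not core.verify(b"tampered-update", tag)
```

Keeping the core this small is what makes it feasible to audit, or formally verify, the one piece that really must not be tampered with.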
SE: In an AI system, the system may adapt in ways you didn’t expect, and the algorithms are changing. How do we solve that?
Handschuh: You always have to have a roadmap. No system can be static and be successful over time, so it’s always a battle against new attacks and weaknesses that people have discovered. You try to counteract that by putting out new versions and updates and upgrades. Everything is moving all the time, and you have to be willing to change your implementation and approach at any moment.
Kiniry: The work that shows the most promise is using firmware differently. Firmware gives you the ability to push fixes forward without deploying new hardware. We’ve seen problems crop up over the past decade where large-scale security features were included in new chips, but there was no ability to pivot when vulnerabilities were found. Since then we’ve seen more careful attention to reasoning around microcode and firmware so that we can shift functionality in the hardware. That’s where I see the biggest promise.
SE: Are you referring to bare-metal programming?
Kiniry: The firmware often can be realized in bare-metal programming. But it also can be developed separately so you have an evolutionary path to making quick changes in the field.
Newell: The more complex a system is, the more likely it is to experience problems. Firmware can help you pivot. At the other extreme, you still need some root of trust in the system that is small enough, and that’s very hard to change. This is where something like formal analysis can help you achieve a high degree of assurance.
Kiniry: We have a bunch of RISC-V systems with patchable security today, and we have large corporations whose IP was never patched via firmware. The reason is that software is a lot easier to patch than hardware. My fear is that we’re going to build this capability into architectures, and yet systems will never be updated.
SE: Most attacks have been at the operating system level and above so far. What happens if that changes?
Handschuh: You want to make these systems as secure as possible, and you want to build in layers of security with a core that is as small as possible.
SE: One last question from a different perspective. It’s extremely rare that you have industry, academia and government all working on the same technology issues, which is what’s happening with RISC-V. How important is that?
Kiniry: It’s critical that it happens, but it’s also important to recognize those different entities have different value metrics. We’re seeing quite a bit of change, particularly with government funding. We’re seeing more and more teams that are absolutely balanced between commercial firms and academics. We’re also seeing more and more of this worldwide. But the main thing we need to realize is that professors are rewarded and promoted based upon papers and research grants, not shipping products. In a company, that’s different. So as funding opportunities appear, academics and companies need to start shifting priorities. Companies need to take research and turn it into products. That’s happening more in the RISC-V community than anywhere else.
Handschuh: We have a great opportunity to bring together industry, academia and government.
Newell: This is happening with security, as well. There are a lot of different approaches to security with RISC-V in general, and within the RISC-V Foundation in particular.
Kiniry: We just completed a program with DARPA. A lot of papers are going to be published, and we’re going to see a real flowering of designs and devices. The next wave of RISC-V devices, which will appear in late 2019 or early 2020, will use one or more approaches that come out of that program.