Experts at the Table, Part 1: The advantages and limitations of a new instruction set architecture.
Semiconductor Engineering sat down with Helena Handschuh, a Rambus fellow; Richard Newell, senior principal product architect at Microsemi, a Microchip Company; and Joseph Kiniry, principal scientist at Galois. What follows are excerpts of that conversation.
SE: Is open-source hardware more secure, or does it just open up vulnerabilities to a much wider audience of cyber criminals?
Newell: We deal a lot with governments and defense customers, who have a tendency to believe everything should be secret. I take more of a middle-ground view, which recognizes that complex systems are going to have bugs. In that case, secrecy can improve your security. But security systems also can be protected and improved by being open source. Any real security has to include simpler elements that protect the more complex systems.
Handschuh: With open source, you have the opportunity to review it and come up with comments, feed it back to the community, and as a group you can advance maybe not faster but better. You have more hands. Everybody is available to give you constructive comments, and then you can work together to make it better. That means you start from something that is open and published, and then you evolve it together by adding things and creating white papers.
Kiniry: Our government trends toward not making artifacts public, but they definitely want to see everything. Openness helps with them as a client. The focus on interfaces is great, because it’s easier to harden the interface than the entire data set. The openness of APIs is where you see a lot of government funding already. There is a lot of traction on that topic. But openness has nothing to do with something actually being secure. It’s like good intentions. If we continue to build systems the way we’ve done in the past, focusing on features and simplicity and elegance, and ship things that have been simulated but not analyzed more deeply using formal methods, we’re going to be shipping a lot of stuff that we claim is secure but which is not. That will be embarrassing to a lot of people.
SE: If you are updating open source that is public, that may be great. But when hackers find vulnerabilities, they don’t necessarily publish those. So now a lot more is exposed for everyone to see. Is that worse than proprietary instruction set architectures?
Handschuh: By publishing the interfaces you get more people to look at it. Hiding things behind the scenes is worse because then you don’t know what’s going on.
Newell: There are different ways to analyze this. Formal analysis is certainly a good way. A lot of eyes on it is another good way. We are going down a formal route. We have a committee that is producing a formal description of the ISA interface. And then you need to look at the microkernel. But as soon as you get to a rich OS like Linux, you’re never going to be able to fix all the bugs. If you look at set-top boxes, a lot of those hacks happened because the software was reverse-engineered. There is a place for secrecy, at least as a speed bump to slow down these guys.
Kiniry: The struggle I see is at the intersection of policy and technology. With our current leadership, there is a tendency to hold vulnerabilities close to the vest. If the government finds problems, especially with hardware, we’re not guaranteed we will learn about them, even in the case of open systems. That’s problematic. We need to continue to work as a community to create an informed policy, whether that’s legislatively or through the DoD/executive branch. The flip side of that is that fixing vulnerabilities in hardware is much more difficult than fixing vulnerabilities in software. Right now, I’m worried we’re seeing a rush to ship products with feature sets built into non-updatable systems. If you don’t have microcode that can be updated with assurance, and you ship something after spending millions of dollars on design and fab and then the open community finds vulnerabilities, that will affect the whole community. VCs are going to be turned off by claims that things are secure but which are bound to be insecure, and suddenly their products won’t have the feature sets they expect. That will have a knock-on effect on the more widespread use of security in our community in the immediate term.
Handschuh: Proprietary designs have the same kinds of issues. Once you find out there are vulnerabilities, as we have so many times, people propose patches that don’t work or which slow down the system. We will have those issues with RISC-V, as well, and it will be hard to change the hardware. But globally we’re better off because we all learn from each other how to make it better, so that the next time around we can improve. Making things open and public always will help, rather than waiting until someone actually finds a problem and then nobody knows how to fix it.
SE: There has been a lot of discussion about the security advantages of RISC-V because there isn’t speculative execution or branch prediction, which were used in other proprietary designs to speed them up. Is that true?
Newell: I’m very optimistic about the future. We dodged some bullets in that RISC-V wasn’t susceptible to attacks like Spectre and Meltdown. But that doesn’t mean it isn’t susceptible to other kinds of timing analysis attacks. There is a broad range. One of the things we’ve done in the RISC-V Foundation is to set up a security committee, which reports directly to the board of directors. Thankfully, the board has seen fit to elevate security as the top concern. I have great hope we will be able to develop RISC-V chips without timing analysis vulnerabilities in the future. We have work to do. It will take a few years. But I’m pretty confident we’re going to get there. We’re going to be able to create much more secure chips in the future.
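As a simple illustration of the timing analysis attacks Newell mentions, consider software that compares an incoming value against a secret. The hypothetical C sketch below is not from the panel or from any RISC-V specification; it only shows how an early-exit comparison leaks information through execution time, and how a constant-time version avoids that. (Spectre and Meltdown exploit speculative execution and caches, a different mechanism, but the underlying lesson that timing can reveal secrets is the same.)

#include <stddef.h>
#include <stdint.h>

/* Leaky comparison: returns as soon as a byte mismatches, so the
 * response time reveals how many leading bytes of 'guess' are correct.
 * An attacker who can measure latency can recover the secret byte by byte. */
static int check_leaky(const uint8_t *secret, const uint8_t *guess, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (secret[i] != guess[i])
            return 0;  /* early exit: timing depends on the data */
    }
    return 1;
}

/* Constant-time comparison: always processes every byte and merely
 * accumulates the differences, so execution time does not depend on
 * where, or whether, a mismatch occurs. */
static int check_constant_time(const uint8_t *secret, const uint8_t *guess, size_t n)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= secret[i] ^ guess[i];
    return diff == 0;
}

Hardening a chip against timing analysis is the harder, microarchitectural version of the same problem: guaranteeing that data-dependent timing cannot arise from the hardware itself, no matter how carefully the software above it is written.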
Handschuh: We’re all trying to simplify design by building small hardware roots of trust that will help you execute the security portion correctly. That will work fine as long as you don’t burden them with complicated operations involving secret data. Making things smaller and simpler and easier to understand is what’s going to help.
Kiniry: The big challenge I see is one of resources. We need the right set of expertise. We have working groups set up with a core group of actors, people with the right expertise who have been given the time by their companies to contribute. But we need more people and resources from companies willing to give people a day a week, and we need more resources from government to help out with this.