Open Source Hardware Risks

There’s still much work to be done to enable an open source hardware ecosystem.


Open-source hardware is gaining attention on a variety of fronts, from chiplets and the underlying infrastructure to the ecosystems required to support open-source and hybrid open-source and proprietary designs.

Open-source development is hardly a new topic. It has proven to be a successful strategy in the Linux world, but far less so on the hardware side. That is beginning to change, fueled by a slowdown in Moore’s Law, rising design costs, and a growing need for more specialized processing elements in heterogeneous designs. This also has raised a long list of issues, starting with basic definitions, that in the past were largely ignored because it was simpler to use off-the-shelf proprietary solutions than to work with open-source hardware.

“Open-source hardware is open-source silicon, but open-source hardware also could mean open schematic or PCB designs of the sort we see in OCP (the Open Compute Project),” said Dominic Rizzo, OpenTitan Lead for Google Cloud. “There are also open-source hardware specifications, but those are less of a sea change than the rise of open-source hardware design collateral. In one sense, the RISC-V ISA is novel in that it’s an openly developed ISA specification, where the most popular ISAs are typically closed. There are a handful of other open ISAs, such as OpenPOWER or MIPS, but the implementations of all of these tend to be black boxes.”

Rizzo noted that what is unusual with RISC-V is the way the open source community has rallied around it. “We are seeing an increasing number of credible open-source, white-box silicon designs like Ariane, Ibex and the OpenTitan SoC built on top of the RISC-V open ISA.”

The Open Source Hardware Association maintains a list of certified open-source hardware projects across a variety of application domains. In general, hardware licenses rely more heavily on patent law than on copyright law. Copyright licenses may control the distribution of the source code or design documents, while a patent license may control the use and manufacturing of the physical device built from the design documents.

Some of the licenses explicitly state that users who benefit from an open-hardware license design may not bring lawsuits claiming that design infringes their patents or other intellectual property, noted Frank Schirrmeister, senior group director for product management and marketing at Cadence.

Krste Asanovic, chairman of the RISC-V Foundation and chief architect at SiFive, emphasized during a panel at the RISC-V Summit last month that there are differences between an open standard like RISC-V, and open-source hardware, which is source code for some hardware blocks in RTL.

“When we started RISC-V, we saw a need for an open standard,” Asanovic said. “At Berkeley we also developed open-source implementations of the standard. But by far the most valuable thing is the open standard. You cannot have open-source hardware without an open standard. But it’s also true in the industry that we have hundreds of open standards. The open standard is where the big value is. Open-source hardware is enabled by it, and at a much earlier stage. Open standards are widely accepted and widely used throughout the industry, whereas open-source hardware implementations of a standard are a relatively new thing. Again, the key thing about RISC-V is that it is an open standard. It’s enabling a lot of things because you need a processor to run software on any kind of hardware platform. That’s why there is an upsurge of interest in open-source hardware, because the ISA open standard enables people to build open-source hardware.”

Raik Brinkmann, president and CEO at OneSpin Solutions, considers open-source hardware to be design cores available for free from repositories or other public sites, most likely in RTL form. “Anyone can download, use, or modify these cores,” he said. “There will be a wide range of quality for open-source cores, and most likely there’s no one to sue if they don’t work. So there are risks to using open-source cores, but there also are established processes and procedures to reduce risk considerably.”

Many of the technical and legal steps taken when acquiring commercial IP also apply to open source. For example, users must vet the designers as much as possible and must understand the technology domain well enough to know about potential patent issues. They also must learn as much as they can about the verification and validation of a core, its usage in actual silicon, and other information that helps to screen out dubious designs.

“Above all, users must verify the core themselves,” said Brinkmann. “Many tools are available to judge the quality of a design, from finding coding errors to formally proving compliance to relevant standards. The verification performed must encompass the full range of design integrity, going beyond functional correctness to include safety, security, and trust. With this level of verification and good vetting procedures in place, open-source cores are an attractive and appropriate design choice.”
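Brinkmann’s advice to verify a downloaded core yourself can be made concrete with a minimal sketch. One common dynamic technique is golden-model comparison: drive the core and an independent reference model with the same stimulus and check that results match. The Python below is purely illustrative (not any vendor’s flow, and the `dut_addi` function is a hypothetical stand-in — in a real flow its results would come from simulating the downloaded RTL); it checks the RV32I ADDI instruction against a reference over randomized operands.

```python
# Illustrative golden-model comparison for one RV32I instruction (ADDI).
# In practice, dut_addi would be driven by RTL simulation of the
# open-source core; here a stand-in Python model plays that role.
import random

MASK32 = 0xFFFFFFFF

def ref_addi(rs1, imm12):
    """Reference ADDI semantics: sign-extend the 12-bit immediate, wrap at 32 bits."""
    imm = imm12 - 0x1000 if imm12 & 0x800 else imm12  # sign-extend imm[11:0]
    return (rs1 + imm) & MASK32

def dut_addi(rs1, imm12):
    """Hypothetical device-under-test stand-in (would come from simulating the RTL)."""
    imm = imm12 - 0x1000 if imm12 & 0x800 else imm12
    return (rs1 + imm) & MASK32

def check(n_trials=10_000, seed=0):
    """Compare DUT against the reference model over randomized operands."""
    rng = random.Random(seed)
    for _ in range(n_trials):
        rs1 = rng.getrandbits(32)
        imm12 = rng.getrandbits(12)
        assert dut_addi(rs1, imm12) == ref_addi(rs1, imm12), (rs1, imm12)
    return n_trials

if __name__ == "__main__":
    print(f"{check()} randomized ADDI cases matched the reference model")
```

A divergence on any operand pair points at either a design bug or a misreading of the spec — exactly the kind of screening Brinkmann describes before trusting a core of unknown provenance.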

Schirrmeister agreed. “From a verification perspective, the main technical issue in the RISC-V domain is not about it being open source, but about RISC-V derivatives and modifications, and the need to re-verify designs once they have been modified. The key change here is who does the verification. For ‘standard’ versions of RISC-V, we currently see an ecosystem developing of companies that provide implementations of standard configurations or templates for tools generating application-specific instruction set processors. For those standard implementations, these companies effectively become a new breed of semiconductor IP providers, and cover a fair share of verification prior to any changes.”

Nothing is free
That is not the same as free IP. The true cost of RISC-V includes verifying any changes made to the design. “This cost can be significant,” said Schirrmeister. “For instance, commercial vendors publicly state that they on average subject every IP to between 5 trillion and 6 trillion emulator cycles and 2 to 3 petacycles of FPGA system validation, in addition to formal techniques. And as attractive as it may be to add instructions to reduce the need for memories and dedicated hardware accelerators, once a modification to a processor is made, new tests need to be added and regressions need to be re-run to make sure that the addition has not introduced defects in other areas of the design. Formal and dynamic reference models are required, and they need to be enhanced to allow checking changes the user may make. Bottom line, the effort to verify a RISC-V design, especially if changes to a configuration have been made, is incredibly easy to underestimate. ISA compliance is important and necessary, but for sure not sufficient.”
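Schirrmeister’s point about modified processors shows up even at the decode level. The hypothetical Python fragment below sketches why verification collateral must track RTL changes: the RISC-V specification reserves the custom-0 major opcode (0b0001011) for vendor extensions, and an unmodified reference model will flag any encoding that uses it, so the model and its regressions have to be extended alongside the design.

```python
# Illustrative sketch: a custom instruction forces the reference model
# (and every regression built on it) to be updated along with the RTL.
BASE_OPCODES = {
    0b0110011: "OP",       # register-register ALU ops
    0b0010011: "OP-IMM",   # immediate ALU ops (e.g. ADDI)
    0b0000011: "LOAD",
    0b0100011: "STORE",
}
CUSTOM0 = 0b0001011  # major opcode reserved by the RISC-V spec for vendor extensions

def classify(instr_word, reference=BASE_OPCODES):
    opcode = instr_word & 0x7F  # low 7 bits hold the major opcode in RV32
    return reference.get(opcode, "UNKNOWN")

# A modified core emits an instruction on the custom-0 opcode...
custom_instr = (0x1 << 7) | CUSTOM0

# ...which the unmodified reference model rejects,
assert classify(custom_instr) == "UNKNOWN"

# ...so the verification collateral must be extended alongside the RTL:
extended = {**BASE_OPCODES, CUSTOM0: "CUSTOM-0"}
assert classify(custom_instr, reference=extended) == "CUSTOM-0"
```

The real work is of course far larger — updated checkers, new directed tests, and full regression re-runs — but the dependency is the same: every design change implies a matching change to the models that judge it.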

Alongside this, the tools used today are being enhanced and made applicable to end-user verification requirements. This opens up big opportunities for EDA, both for the core processor as well as the solutions stack, to allow integration of processors into the system-on-chip environment. In addition, processors that are commercially available today likely will be updated over time, driven by customer demand, to support open-source hardware such as RISC-V, Schirrmeister said.

Fig. 1: This software stack covers bring-up, use case testing, profiling, debug, integration, performance analysis and protocol verification. Source: Cadence

The activity surrounding the RISC-V ISA has put the spotlight on other high-profile supporters of the open source movement. Google, for instance, is investing in open source hardware at the silicon level. “This past November we introduced the world to the OpenTitan project,” said Rizzo. “OpenTitan is the first open-source silicon root of trust (RoT). It will deliver a high-quality RoT design and integration guidelines for use in data center servers, storage, peripherals, and more. Open sourcing the silicon design makes it more transparent, trustworthy, and ultimately, secure. The silicon RoT technology can be used in server motherboards, network cards, client devices (such as laptops and phones), consumer routers, IoT devices, and more. We hope it will have far-reaching impacts on a variety of industries in the years to come.”

One of the industries Rizzo believes has the most to gain from moving from proprietary to an open implementation with OpenTitan’s RoT is cloud computing. “Our audacious goal is to remove the need for trust in proprietary implementations at the lowest layers of the stack.”

The same is true for other open-source platforms. Mendy Furmanek, IBM’s director of OpenPOWER processor enablement, noted during a panel session at the RISC-V Summit that when the OpenPOWER Foundation started in 2013, the focus was on building out the POWER ecosystem starting at the system level. “It was opening reference designs for others to build POWER systems, beyond the systems that IBM built, and with that we were focused on opening up the entire firmware stack as well as the software stack. Coming into 2019, we went to the next step and said it’s time to open the instruction set architecture. So we have a fully open system stack. We came at it from a different direction of building out that software ecosystem, that system-level architecture, and then going into the instruction-set architecture. I fully agree there is a difference between the standard, the instruction-set architecture, and then open-source designs. So when I say opening the instruction set, I’ll never say open-source instruction set. Those don’t go hand in hand. It’s open designs, and then an open instruction set, which is an open standard that you can build upon.”

Pitfalls of open, custom SoCs
Many open-source proponents cite Linux development as a reference point, but engineering organizations can get caught up in the ‘free’ part of that story before confronting what it takes to commercialize a design. There are many potential pitfalls to building a custom SoC from an open-source ISA.

Tim Whitfield, vice president of strategy for Arm’s automotive and IoT business, noted there’s no simple answer to this. “On a medium-complexity SoC, where design cycles can be 18 months to 2 years — depending on whether it’s a ground-up design or whether you’re iterating something that already exists — the fully loaded costs when you think about tools, people, compute and IP can easily run $1 million-plus. When we start to break that down, two of the biggest elements are verification and software development, and they really do tend to suck up the most time and effort and resources.”

This is well understood in the chip design space. “Verification, in particular, is a really complex problem to solve,” Whitfield said. “If you’re designing a device that’s effectively an open system, where software written by anybody can run on it, the verification space is pretty much infinite. And it’s not just about whether it functionally works. It’s about how you move data around the system. Is it secure? Is it functionally safe? We’re moving into a world where more and more of these chips are getting into environments where they need to be totally safe. And all of these are additional costs — design costs as well as verification costs. If you contrast that at the lower end, there are clearly ways of creating silicon for significantly less than $200 million. That’s a really interesting world, and one in which we are definitely involved.”

This market is more complex than it might first appear, though. “For years we’ve had low-end embedded processors and have been trying to unlock that capability,” he said. “Community-developed open standards, open-source hardware, and software are what’s going to enable people to create these trillions of low-end devices for very bespoke endpoint applications.” He noted that Arm has been working with eFabless, which has a community platform that brings together developers, designers, and IP providers to create silicon for $50,000 to $60,000.

At the other end of the spectrum, Furmanek noted she comes from a world where it costs $500 million to build a chip at the very high end. “At IBM, we see a change. SoCs will still exist but will lead to more of a chiplet form factor, with domain-specific-built chips. That’s where these open standards and high-speed buses come in. People building SoCs have had to spend so much time in verification, NRE, or even years in legal discussions about licensing, to the point that the part that’s really their innovation doesn’t get the focus, the time, or the dollars. They’re spending so much time putting the part that’s really common on die. In chiplet structures you can say, ‘Let’s open the common pieces, everybody can benefit, and then we can focus our time and energy on the really innovative piece.’ That is going to come about with the end of Moore’s Law. There’s going to be an explosion in new ways of thinking on how to tackle these challenges. How do we do acceleration? How do we disaggregate across the system? We’re going to see that shift, and it’s going to be helpful to everybody because you can really focus on where your expertise is. That’s where we’re going to see the dollars come down to the business model, as well. You’re going to see that you no longer have to spend all that money on pieces of IP that really aren’t your value proposition.”

Open-source hardware provides one piece of the puzzle, but it requires a whole ecosystem to support it. “To ensure an organization gets the most out of investments in open source, it needs to identify like-minded partners who are committed to the long-term development and design maintenance to be viable,” said Google’s Rizzo. “With OpenTitan, we chose to work with the lowRISC CIC. lowRISC is a not-for-profit firm that uses a collaborative, independent engineering methodology to develop and maintain open-source silicon designs and tools for the long term. Those tools include critical infrastructure elements necessary for a healthy ecosystem like RISC-V LLVM support. Open-source infrastructure like that is a major enabling force for open silicon. Our investments in OpenTitan are offset by confidence in having an independent, long-term organization shepherding the project committed to supporting high quality open source silicon design.”

What’s particularly interesting about this kind of ecosystem is how information moves back and forth between the ecosystem and the central organization. “We have a very long engagement with a very strategic large customer right now who has done a tremendous amount of evaluation on RISC-V — not just ourselves, but RISC-V in general,” said Jerry Ardizzone, vice president of worldwide sales at Codasip. “And they’ve noted some areas they would like to see changes in RISC-V. So they have actually, through us and directly, gone to the RISC-V Foundation and said, ‘Hey, I have some concerns about the ISA in this particular area for my particular application, and we think we have a very valid point and a lot of data here. We think you should look at it.’ And sure enough, they are looking at it. So my guess is other large customers, chip companies, system companies are making those same kind of inputs back into the Foundation and they are being heard. And don’t forget that a large number of participants and even founding members of the Foundation are some of these very large systems companies, not just chip companies, but systems companies that care a lot about where this goes and being successful and being the right technology.”

Legal risks
Once an engineering group develops open-source hardware products, it must consider how to protect itself from patent disputes.

This is why OpenTitan, a lowRISC project, uses a standard Apache CLA with an inbound patent license from contributors. Outbound, OpenTitan uses an Apache 2 license, providing a patent license from contributors to recipients to make, use, or sell articles based on OpenTitan code and designs. “It is Google’s and the lowRISC CIC’s intent to make OpenTitan a world-class open-source hardware project for everyone, from hobbyists and hackers to enterprise vendors,” said Rizzo. “To that end, OpenTitan is pursuing a variety of initiatives to facilitate open sharing of IP. Direct recipients of the repository will get an explicit patent license from contributors via the Apache 2 license. In addition, Google is continuously investigating separate patent risk mitigation initiatives for open-source hardware more generally.”

For interoperability, there will have to be a balance with different cores coexisting with various hardware and software components, which raises the risk of fragmentation.

“Fragmentation is all about having that software ecosystem where you can have lots of cores you’re able to run and not having different pathways,” said IBM’s Furmanek. “You can have differentiation without messing up that infrastructure, and it is about the fact that you don’t want two things being done in two different ways and going down different paths. For the POWER instruction set we have a long history, from embedded processors all the way up to supercomputers. So the instruction set has all of that in it that’s been built over years. The software ecosystem has been built around that. As a result, we have a very strong, robust, mature ecosystem that doesn’t have the fracturing. It’s true if someone comes in and wants to build on the instruction set, why would they want to go off of that? They get the value of that software ecosystem. Without it, they would have to go build their own software, and nobody wants to do that, so there is a benefit that comes in when the software is already there for you. You can come into that ecosystem and really get the benefit of it, versus going down and fracturing. At the same time, there are things that we have had to put in place in order to continue to govern that to make sure that you don’t get rogue players. Part of that is the partnership and the ecosystem as a whole, being together saying, ‘We want to have a strong software ecosystem that everybody can use, and that we have to have the right focus in order to have all the differentiation across the product lines and the different markets.’”

Arm has decades of experience around these issues, as well. “It’s self-evident that the community has been able to innovate and differentiate across products, and perhaps the commodity is the CPU,” said Whitfield. “We created a value capture. We said we’re not going to take all the value, we’re going to share the value, and allow everybody a level playing field to innovate, and that’s been successful. But what was successful in the past doesn’t guarantee success in the future.”

Whitfield noted there may not be perfect agreement around the governance of the architecture, and the levels of importance of that to enable a software ecosystem. “Having strong governance has enabled things like Android to grow. Whatever the mobile phone, the app runs, and that comes with strong architecture privileges. There are parts of the system where you need to keep that strong governance. There are parts where perhaps it’s not as important. And we need to continue to evolve and help our partners do the things they want to do. We see that now people want to differentiate in different ways. At the very low end, the differentiation isn’t really through the CPU. At that low end it’s about how quickly you can get to the data. That’s what people want to do. They want to use devices to harvest data and monetize or do whatever they want to do with their data. And that’s really about the development community — how quickly can you get people to the silicon solutions they need, and whether that’s differentiated at the architecture level, the microarchitecture level, or the system level. How quickly can you get them silicon and a developer environment that enables them to securely connect to whatever they want to connect to, to provision their devices to be able to do things like over-the-air updates and get to the data, get to the money?”

An important aspect of this is secure provisioning while still providing accessibility to the right silicon. “As you move up the stack, you look at the community of people creating embedded devices on Arm,” said Whitfield. “There is plenty of differentiation with accelerators, with RISC-V cores, with DSPs, with specific ML software. There is plenty of opportunity, and the complexity of this is going up.”

Going forward, Arm intends to continue participating in open-source efforts. “Yes, we do participate in open source. We make huge contributions,” Whitfield said. “Currently, there are about 1,000 software engineers contributing to open-source projects. We clearly continue to support open standards that have been talked about. We’re either creating them through things like AMBA and Platform Security Architecture, or through participating in open standards. That continues.”

Still, questions remain about where the line should be drawn around the architecture governance. What fundamentally is the right model to enable the technology?

“Open-source hardware will play a part in SoCs, and it’ll be like the open-source software world, where there will be a mixture of proprietary and open source,” he said. “You see that in the RISC-V world at companies like Andes. They’re not open-source implementations of the processor, but they will appear in chips alongside open-source hardware. Creating that long tail and the innovation that’s been talked about is all about enabling through community platforms and collaboration. Companies like eFabless, which is creating models that really work, have RISC-V and an Arm design on that platform. It’s bringing communities together, mixing open source with proprietary, mixing tools, mixing IP. These are the models that are going to fuel innovation.”

While some in the industry have likened the interest in open-source hardware to the Wild West, Google’s Rizzo compared it more to the exploration of uncharted territory. “Google is committed to working together with its partners — lowRISC, G+D Mobile Security, Nuvoton, Western Digital, ETH Zürich and the like — to best harness the potential of open-source hardware. To us, widespread adoption of open hardware isn’t something to prepare for. It’s something to embrace. We are investing heavily in the community and in the tooling and infrastructure — like LLVM support for RISC-V — that will support the eventual hardware.”

