Open-Source Hardware Momentum Builds

RISC-V drives new attention to this market, but the cost/benefit equation is different for open-source hardware than it is for software.


Open-source hardware continues to gain ground, spearheaded by RISC-V — even though this processor technology is neither free nor simple to use.

Nevertheless, the open-source hardware movement has established a solid foothold after multiple prior forays that yielded only limited success, even for processors. With demand for more customized hardware, and a growing field of startups looking to build accelerators and solutions highly tailored to AI/ML algorithms, interest in open-source hardware has been rising. How big the market ultimately becomes, and whether open-source can deliver everything some companies are looking for today, remains to be seen. But there is growing recognition that open-source hardware has a role to play, and that is starting to attract more interest and investment from across the industry.

Still, misconceptions persist. “Often people say, ‘Open source, it’s free.’ But it’s not free,” says Dominic Rizzo, project lead at Google and project director for OpenTitan. “This is why we find the most successful open-source projects are ones where people have a long-term vested interest and are working together in a collaborative fashion. This contrasts with the style of open source where people developed something and kicked it over the wall as open source.”

What’s changed is the need for more customized solutions on a mass scale, and that may line up more with RISC-V in particular than with open-source hardware in general. “RISC-V is an outgrowth of the trend toward more customized computing and domain-specific computing,” says Frank Schirrmeister, senior group director for solutions marketing at Cadence. “You could argue it’s more an effect than a root cause. The root cause is that customers need domain-specific, workload-specific computing, enabled by domain-specific architectures and domain-specific languages. Any processor that is extensible plays in that domain. Open source is orthogonal to that.”

Open-source hardware also has some interesting challenges to resolve before being adopted commercially on a wider scale. “The IP must meet the same stringent verification standards, along with the long-term support and maintenance that come with today’s commercial hardware, without breaking existing total-cost-of-ownership models for SoC or system designers,” says Tim Whitfield, vice president of strategy for Arm’s Automotive & IoT Line of Business. “The risk is that the cost savings associated with licensing are often eroded by the time and money spent on verification, physical design, and software development for a device with little or no differentiation. There are multiple groups emerging that are trying to solve some of these problems. However, in order to succeed and provide some of the building blocks for an SoC, they will need continued support and investment from willing participants. The consortia model has worked well for many standards around an SoC, and this is an extension of that model.”

Getting there will take time. “When people start thinking about open-source hardware, there are the purists who want everything open source,” says Rick O’Connor, president and CEO of the OpenHW Group. “Cracking this nut for the hardware industry — and the semiconductor industry, in particular — is going to happen in stages. Many thought that the RISC-V organization was about free and open-source implementations of processor cores. In fact, the foundation’s mandate is governance of the instruction-set specification, the various standard extensions, and the extensibility of the instruction set. When they talk about an ISA being free and open, they mean you have the freedom to do whatever you want with this ISA.”

The culture is changing, too. “Open source is a significant investment, and you do need to have multiple parties with a vested interest in seeing the space succeed and succeed long term,” adds Google’s Rizzo. “I honestly think the verification IP is the thing that actually gives the RTL value. You can’t really have one without the other, especially when you want to go to production.”

The total value has to be assessed. “RISC-V as a baseline reference is stable and well-tested,” says Shubhodeep Roy Choudhury, CEO and co-founder of Valtrix Systems. “Companies can add their secret sauce, saving design time and cost. Collaboration with others distributes the costs of development. Verification cost may still remain high, as everyone will want to make sure that their IP works as per the specifications. Back-end costs are inevitable, so saving there is minimal. There is also the flexibility of choosing from multiple processor IP companies, so if an IP from one vendor does not work, there are other alternatives.”

Open-source hardware
Since its inception, the IP industry has come a long way, and the bar for what people expect from commercial IP has risen.

Getting open-source IP to the quality level required by high-volume chips is a challenge. “This is where not-for-profit organizations like OpenHW Group come into play,” says Rob van Blommestein, head of marketing for OneSpin Solutions. “Many companies are contributing to make this a reality. You also have big companies that have a strong interest in reducing their dependence on foreign proprietary technology. Another important aspect of open-source hardware is that it unleashes so much innovation potential from small companies and individuals, giving them access to free customizable cores and a mature toolchain and ecosystem.”

“Open-source hardware has to look like, smell like, and feel like commercial IP that you would expect to get from a commercial IP vendor,” says OpenHW’s O’Connor. “At the OpenHW Group, we’re trying to get a collective thrust and overcome inertia with a group of companies that want to make high-quality IP that is well verified — with good functional coverage and code coverage in a true SystemVerilog UVM testbench that they would build on their own if that’s what they were doing.”

Some markets have additional drivers. “Security implementations are not meant to be secret,” says Rizzo. “Kerckhoffs’s principle says you shouldn’t depend on the secrecy of your implementation for your security. Security engineers, or people who work in this domain, want to know what’s going on because there have just been too many uncomfortable incidents or times when you said it was secure, but then someone found a problem. By making something proprietary, you don’t necessarily keep out the people who are looking for holes, but you do prevent researchers or the curious from looking.”

Above or below the line
The people who look towards open-source hardware tend to fall into two camps. “Below the line are people who are looking for free lunches,” says Simon Davidmann, CEO of Imperas Software. “They don’t want to pay for anything. If they can get access to something for nothing, they can get a certain job done. Then there are those above the line. Those are the people who want freedom.”

A continuum exists between the two extremes. “There are proprietary implementations of RISC-V that are closed source and only used in-house,” says O’Connor. “At the other extreme are commercial IP companies that are selling IP under license. There are IP companies offering a raft of open-source implementations in all kinds of different languages, such as Chisel, VHDL, Verilog or SystemVerilog, and in all kinds of shapes and sizes ranging from tiny controllers right through to higher-end server-class machines.”

Being above the line means knowing how much of the effort you can take on yourself, how much risk it entails, and the skills you will need to pull it off. “If you are above the line, you probably have tons of experience building huge SoCs,” adds Davidmann. “It requires you to have a methodology where you can profile your code, do a lot of analysis to identify where the code needs to be improved or what the communication channels need to be, choose which instructions to add, handle the architectural stuff that needs to be extended or customized, and then build models, test software on it, verify it, and do a lot more verification on the hardware. It is not something that should be undertaken by those without much experience because it is pretty challenging. But it is giving them the ability to do things that they couldn’t have dreamt of with traditional processors.”

Proprietary processor IP companies offer many of the same alternatives, but RISC-V has ignited interest in these options.


Fig 1: Open-source adoption model. Source: Semiconductor Engineering

Extensible instruction set
Some people want to maximize the freedom to innovate. “A small percentage will say, ‘I want to take complete ownership of the processor and fully customize it,’” says Graham Wilson, product marketing manager for ARC DSP processors at Synopsys. “They will invest and learn about the tools, and they will take the time. It is a strategic decision to learn how to build these new instructions. Then they also go in and modify the architecture of the core, add new interfaces, and strategically they’ve made the decision to own the processor, starting from an ARC configurable, extendable core.”

Some people take the middle ground. “These people start with a generic core that is well verified,” says O’Connor. “They take the verification infrastructure and bolt on their own custom accelerator or their own custom instructions. Obviously, they are on their own to verify those, but they can build onto the verification infrastructure that we provided. And if they want that to be a standard implementation in the open-source community, then maybe that’s something that we’ll curate inside the OpenHW Group.”
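To make that concrete, here is a minimal C sketch of how software typically invokes such a bolted-on instruction. It assumes a hypothetical core that implements a packed int8 dot product in the RISC-V custom-0 opcode space; the encoding values and the intrinsic name are invented for illustration.

```c
#include <stdint.h>

/* Hypothetical custom instruction: a packed 4x int8 dot product placed
 * in the RISC-V custom-0 opcode space (major opcode 0x0b, reserved by
 * the ISA for vendor extensions). The funct3/funct7 values are made up
 * for this sketch; on a core that does not implement them, the
 * instruction traps as illegal. */
static inline int32_t dot4_custom(uint32_t a, uint32_t b)
{
    int32_t rd;
    /* .insn lets the GNU assembler emit an R-type encoding the compiler
     * has no mnemonic for: opcode, funct3, funct7, rd, rs1, rs2. */
    __asm__ volatile (".insn r 0x0b, 0x0, 0x00, %0, %1, %2"
                      : "=r"(rd)
                      : "r"(a), "r"(b));
    return rd;
}
```

Wrapping the instruction in an inline function like this keeps the extension contained to one header, which is part of what makes it practical to verify only the added accelerator while reusing the core’s existing verification infrastructure.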

Even those looking to purchase a RISC-V core may still see a degree of freedom. “Many people who are adopting RISC-V are not looking to customize that processor,” says Neil Hand, director of marketing for design verification technology at Mentor, a Siemens Business. “They still have the freedom to benchmark multiple processors and can defer this decision. In the past you would typically make processor decisions at the start of a project, and it would be fixed from that point onwards. With RISC-V you have comparable ISAs from multiple suppliers, and you can move between different suppliers depending on the extensions and the architecture.”

As the industry matures and more extensions become available, the need to invent custom instructions may decrease. “You are not going to accelerate an FFT because it has already been optimized,” says Synopsys’ Wilson. “When new algorithms or new specifications emerge, like wireless communications or wired communications, you might find that customers would identify specific bottlenecks and then add instructions for that. But the industry would learn from that and build a more generic solution, or offer that as an extensible instruction within the processor package.”

Some processors may be domain-specific. “If you look toward the IoT world or some of the 5G devices, they are single-purpose devices,” says Mentor’s Hand. “They need to do one thing, and they need to do it very well. As we go into the new era of computing, driven by these purpose-built, application-specific compute platforms, you still want programmability because of the software ecosystem. But you will have unique needs from an extensions perspective. They do not necessarily become generalized everywhere.”

This will evolve over time. “If you go to some of the early cores, such as the PULP cores from ETH Zurich, they wanted some specific instructions that didn’t exist in the RISC-V base,” says Davidmann. “So they had to build their own custom instructions. Today, those instructions exist in the standard. The RISC-V instruction set has become pretty rich and covers a lot of things. There are working groups that are pretty close to finishing most of the work on additional sets of instructions.”
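One practical consequence of that standardization is discoverability. As a minimal sketch, assuming a bare-metal, machine-mode environment with a console-backed printf, software can read the misa CSR to see which single-letter extensions a core implements:

```c
#include <stdio.h>

/* The misa CSR carries one bit per single-letter extension: bit 0 = 'A'
 * (atomics), bit 2 = 'C' (compressed), bit 8 = 'I' (base integer ISA),
 * bit 12 = 'M' (multiply/divide), and so on. Reading it requires
 * machine-mode privilege, and the spec permits misa to read as zero,
 * so this is a convenience rather than a guarantee. */
int main(void)
{
    unsigned long misa;
    __asm__ volatile ("csrr %0, misa" : "=r"(misa));
    for (int i = 0; i < 26; i++)
        if (misa & (1UL << i))
            printf("extension %c implemented\n", 'A' + i);
    return 0;
}
```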

Extensible architectures
Some cores will be driven by their application domains. “In the case of artificial intelligence and machine learning, a lot of the underlying computation is the multiply-accumulate,” says George Wall, director of product marketing for the Tensilica group of Cadence. “But there are a lot of efficiencies that can still be added to prevent the processor from multiplying zero by zero. That’s an example of where an extensible instruction set can come in very handy.”

Some of these algorithms are becoming fairly generic. “A CNN’s convolutional layer includes a great number of multiply-accumulate operations, and they impede computational efficiency,” says Louie De Luna, director of marketing for Aldec. “CNNs need to move multiple blocks of data from a matrix to external memory simultaneously to avoid delays caused by multiple accesses to memory. Pure hardware implementations of CNNs lack the flexibility to address these issues because they require complex controllers for handling the calculations and data transfer, but custom instructions can be created to address these types of domain-specific issues.”
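As a rough illustration of where those cycles go, the loop below sketches one output pixel of a convolutional layer in C. It is not any vendor’s kernel; the zero test mirrors in software the optimization Wall describes in hardware.

```c
#include <stdint.h>

#define K 3   /* 3x3 kernel, for illustration */

/* One output pixel costs K*K multiply-accumulates, which is where both
 * the cycle count and the memory traffic go. Skipping zero operands is
 * nearly free in dedicated hardware, while in software the branch often
 * costs more than the multiply it saves. */
static int32_t conv_pixel(const int8_t in[K][K], const int8_t w[K][K])
{
    int32_t acc = 0;
    for (int i = 0; i < K; i++)
        for (int j = 0; j < K; j++)
            if (in[i][j] != 0 && w[i][j] != 0)   /* skip zero operands */
                acc += (int32_t)in[i][j] * (int32_t)w[i][j];
    return acc;
}
```

That contrast is the argument for an extensible core: a check that is a net loss in software becomes free once it moves into the datapath.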

Others agree. “Consider SeeFar radar applications,” says Wilson. “These algorithms utilize a sliding window, and so you see custom instructions added to accelerate those. They are different from standard DSP functions. It is a data-throughput function, but custom instructions help with the ability to pull data out from the load/store unit or the registers through a sliding window.”
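The data-reuse pattern behind that remark looks roughly like the sketch below; the window length, data types, and function name are arbitrary here. Each output reuses all but one sample of the previous window, so a custom window instruction pays off mostly in data movement rather than arithmetic.

```c
#include <stdint.h>

#define N 8   /* window length, chosen arbitrarily */

/* Sliding-window correlation: the window advances one sample per output,
 * so N-1 of its samples are reused from the previous step. A custom
 * instruction can keep that window in the register file and shift it,
 * instead of reloading it from memory on every iteration. */
void sliding_corr(const int16_t *x, int len,
                  const int16_t coef[N], int32_t *y)
{
    for (int i = 0; i + N <= len; i++) {
        int32_t acc = 0;
        for (int j = 0; j < N; j++)
            acc += (int32_t)x[i + j] * coef[j];
        y[i] = acc;
    }
}
```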

Some processors are beginning to incorporate embedded FPGAs, and this provides an element of dynamic reconfigurability. (See related story, Configuring Processors in the Field.) But this creates an additional problem. “If you have an embedded FPGA, and the supplier forces you to use a proprietary tool, then you don’t have a way to integrate that into your design environment,” says Mao Wang, product manager for FPGAs at QuickLogic. “It forces you to go through two or three different types of legal or licensing methodologies just to have that ability to use the software. The support structure also becomes pretty convoluted.”

The industry has failed to make FPGA programming easily accessible to the software community. “There are many more engineers graduating with computer science or data science degrees and not hardware-oriented degrees,” says Brian Faith, president and CEO for QuickLogic. “In a post-Moore era, hardware needs to be more like software, and if it can be, it will open up so many more potential uses for it. FPGA companies like the walled garden. They like users to stay on their tools and make it really difficult for open-source companies or tools to take hold. We are the first programmable logic company to openly support and embrace open-source tools for FPGAs.”

FPGA companies traditionally have protected their bitstream. Making that open means giving away device configuration information, timing and other information that was considered proprietary. “We are now providing that information to the community to include in open-source tools,” adds Faith. “You no longer have to try to reverse engineer anything in order to take a design all the way through and get a bitstream.”

Some open-source programs, such as OpenTitan, start with the RISC-V core and then add on top of it, placing the result into the community. OpenTitan is an open-source silicon root of trust. “We are open-sourcing the RTL and the design verification IP — all of the things that you would need to work with a back-end partner to tape out a chip,” says Rizzo. “It provides a set of logical security guarantees. When a machine boots up, we have a very deep, low-level check so that we know that it is booting code that we are aware of, that we control, that we have signed.”
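Conceptually, that boot-time check reduces to the control flow sketched below. This is not OpenTitan code: a real root of trust verifies an asymmetric signature (for example, RSA or ECDSA) over the next boot stage, and the hash here is a non-cryptographic stand-in so the sketch runs anywhere.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* FNV-1a: a stand-in digest so the flow is runnable. NOT for security. */
static uint64_t digest(const uint8_t *p, size_t n)
{
    uint64_t h = 0xcbf29ce484222325u;
    while (n--) { h ^= *p++; h *= 0x100000001b3u; }
    return h;
}

int main(void)
{
    const uint8_t stage[] = "next-stage firmware image";
    /* In a real device the expected value is provisioned at manufacture
     * (or recovered from a signature); it is computed here only so the
     * example is self-contained. */
    const uint64_t provisioned = digest(stage, sizeof stage);

    /* Boot only code we know, control, and have signed. */
    if (digest(stage, sizeof stage) == provisioned)
        puts("digest match: jump to next boot stage");
    else
        puts("mismatch: halt and raise an alert");
    return 0;
}
```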

Adding services
As the open-source software community expanded, new business models were spawned from it. The same is now happening with hardware. “The cost of entry is very high if I do it all by myself,” says Cadence’s Schirrmeister. “However, there’s an opportunity for services in between. You could use a tool to automatically generate it where you start with an open-source architecture and go from there, or you could call in people who offer a mixture of services and tools. This would be a company that is familiar with the architecture. They have done successful modification, so their value-add becomes the ability to work with you, understand your needs, and help you modify and verify the architecture so you don’t have to start from scratch.”

Others agree. “Customers may know that they want specific instructions, and they may ask us to do it for them,” says Wilson. “They know the algorithm, they know where the bottlenecks are, but they need help to implement it. Those people may be better off working with the processor company that also understands the implications on the physical implementation.”

And these things can be stacked. For example, Codasip offers the SweRV core that was developed by Western Digital and is curated by the CHIPS Alliance, based on the RISC-V ISA. “We have added a support package around it,” says Roddy Urquhart, senior marketing director at Codasip. “This makes it much easier to implement the RISC-V SweRV core. We not only provide the core, we include support for traditional third-party design flows, as well as the components necessary to design, implement, test, and write software.”

Costs and benefits
For companies investing in RISC-V, it may mean spending dollars, making IP donations, providing expertise in specific areas, and taking risks. “The biggest thing we had to give was the fear of losing control,” says QuickLogic’s Faith. “In some respects, that’s actually more difficult than writing a check. It’s convincing a board or a management team that we don’t want a walled garden, we don’t want to protect how we lay out the device or how the routing architecture or the channels are. We don’t want to protect the timing information. That’s actually giving up more — because it is control — than any dollar amount that we would ever spend for this. I see this as an opportunity for growth, and I’m willing to take that risk and let go of my fear of losing control.”

Some companies are willing to provide manpower. “You can help by running tests, or by providing guidance around how we are building the verification testbench,” says O’Connor. “You can have engineering participation, but members do not have to contribute. That’s how we’re creating a sustainable virtual team. But you don’t have to be a member to use the IP. Our IP is entirely open source. You can download it, put it into your device, and away you go.”

But with participation comes reward. “If you want to influence the roadmap, to decide what functionality is in the IP, you need to be a member,” adds O’Connor. “If you want to influence the priorities of the projects that we take on, you need to be a member. If you want to influence the methodology and structure of how we verify that IP, you need to be a member and participate in the task groups.”

Greg Tumbush, an engineer for EM Microelectronics, provides a vivid description of how powerful engagements can be. “Interrupt verification is a huge job. When the OpenHW Group was discussing what the interrupt structure was going to look like, I advised them to use Core Local Interrupt Controller (CLIC) or Core Local INTerruptor (CLINT) interrupt methodologies. When you are conceiving your design, the closer you can make it to the standard core the better. First, the ISS is probably modeling it correctly. Second, the verification that they’ve done applies to you. And then also, anything that you develop as far as bug fixes, is directly contributable. If you’ve got totally divergent cores, there’s no synergy between your company and OpenHW. If I verify the interrupts, then anybody who downloads that core doesn’t have to verify interrupts, and that’s a big task off of their plate. By contributing you gain leverage and with leverage you can influence how the core is evolving.”
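Part of the leverage Tumbush describes comes from the fact that a standard interrupt block is programmed the same way on every conforming core, so both the ISS model and prior verification carry over. A minimal bare-metal, machine-mode sketch of arming a CLINT machine-timer interrupt, assuming the common SiFive/QEMU 'virt' memory map, looks like this:

```c
#include <stdint.h>

/* CLINT base matches the common SiFive/QEMU 'virt' layout (0x0200_0000);
 * other platforms map the CLINT elsewhere. In this layout, mtimecmp for
 * hart 0 sits at base+0x4000 and mtime at base+0xBFF8. */
#define CLINT_BASE   0x02000000uL
#define MTIMECMP0    (*(volatile uint64_t *)(CLINT_BASE + 0x4000))
#define MTIME        (*(volatile uint64_t *)(CLINT_BASE + 0xBFF8))

#define MIE_MTIE     (1uL << 7)   /* machine-timer interrupt enable */
#define MSTATUS_MIE  (1uL << 3)   /* global machine-interrupt enable */

void arm_machine_timer(uint64_t ticks_from_now)
{
    MTIMECMP0 = MTIME + ticks_from_now;  /* fires when mtime >= mtimecmp */
    __asm__ volatile ("csrs mie, %0"     :: "r"(MIE_MTIE));
    __asm__ volatile ("csrs mstatus, %0" :: "r"(MSTATUS_MIE));
}
```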

You have to go in with open eyes. “One of those tradeoffs is you give up control, at some level, and you do your best to steer things in a healthy direction,” says Rizzo. “It can be a hard thing to accept if you’re coming from a different perspective and you’re used to having a lot of fine-grained control. You have to come at it with a positive attitude and see the advantages of open source. It will take a little bit longer and it is going to require more discussion, but it’s worth those tradeoffs. We’re good at some things, but we’re not experts in everything. And so we went out there and we worked with specific chosen partners who really get the vision.”

Conclusion
Open-source hardware has become real with the emergence of the RISC-V processor ISA. The industry now has enough momentum that it will work through the problems and find solutions.

New business models are emerging, but it is unlikely that open-source hardware will ever look like open-source software. Hardware requires more investment, because taking it to an implementation carries much greater costs and risks. But with collaboration, it is possible that better architectures will be found and better solutions created.

Related
RISC-V Gaining Traction
Experts at the Table: Extensible instruction-set architecture is drawing attention from across the industry and supply chain.
RISC-V’s Expanding Footprint
Market opportunities and technical challenges of working with the open-source ISA.
Components For Open-Source Verification
Building an open-source verification environment is not an easy or cheap task. It remains unclear who is willing to pay for it.
Open Source Hardware Risks
There’s still much work to be done to enable an open source hardware ecosystem.
RISC-V Challenges And Opportunities
Who makes money with an open-source ISA, the current state of the RISC-V ecosystem, and what differentiates one vendor from the next.



2 comments

Ami Vider says:

If RISC-V can do for AI what ARM did for mobile phones 15 years ago, they have a business model. AI computing is here and in demand, and there is no real solution yet.

Andrew Nambudripad says:

ARM made a boatload selling soft IP cores. It was/is a great business model. Companies have always loved the whole “lock you into our walled garden” approach, and ARM hits that niche perfectly. It’s the whole “are you a tool company who sells batteries or a battery company who sells tools” sorta thing. Develop a brand and a following, then make your entire battery and tool kit proprietary. Now every customer is locked into your specific form factor. Recurring revenue into perpetuity, since there’s no interop.

If you want to leave the x64 (and, to a limited extent, PPC/SPARC) world, you’re left with limited options. Even if a company *had* trillions, *and* you poached half of Intel so your company immediately acquired the ‘institutional knowledge’ to successfully tape out, *and* there was room at a fab to take your masks, you’re still looking at years of R&D to catch up to x64.

ARM also positioned itself perfectly with three lines: low-cost disposables, high-end cores that can perform somewhat on par with a standard general-purpose computing device, and the Cortex-R for ISO 26262/DO-178C/medical/mining-type stuff.

I haven’t seen anyone decap an Apple M1, but I’m sure it’s not too different from a standard ARM A-series: take an ISA and IP core, mildly alter a few things, effectively making your customers dependent on both your hardware and your software. Spilled a beer on your keyboard? Can’t replace that $7 controller, buddy; gonna have to pay AppleCare $800 to swap out the board, or hey, we have a newer model you might want to look at…

RISC-V has no business model to begin with. It’s like Berkeley SPICE or Berkeley BSD, it was built open-from-the-start. Granted, you’ll have someone like Cadence step in and profiteer off the IP…which ironically is what Apple did with OS X and Berkeley BSD. Huh.
