Making Sure RISC-V Designs Work As Expected

Open-source growth predictions are impressive, but the verification process can be harder than with commercial ISAs.


The RISC-V instruction set architecture is attracting attention across a wide swath of markets, but making sure devices based on the RISC-V ISA work as expected is proving at least as hard as it is for chips based on commercially available ISAs, and in some cases harder.

The general consensus is that open source lacks the safety net of commercially available IP and tools. Characterization tends to be generalized rather than specific to a particular application, and open-source tools are more difficult to work with and frequently less reliable. This has created a market for commercial implementations of the RISC-V ISA, as well as for tools aimed specifically at RISC-V, and it has opened the door for commercially developed tools and IP that simplify and add consistency to RISC-V implementations.

All of this is happening amid rapid growth throughout the RISC-V ecosystem. Semico Research predicts that RISC-V in the communications segment will achieve a 209% compound annual growth rate through 2025, capturing more than 6% of the CPU core business in that market. The firm also forecasts the available market for automotive will have a CAGR of 160% during that period, and that the total available market for 5G infrastructure will reach 19 million units by 2025, with RISC-V playing an important role in both markets. In total, RISC-V is forecast to grow 160% during that period in devices targeted at a broad range of performance levels.

But as with any new market segment, nothing is ever as simple as it looks on paper. Reliability is always a concern, and it is essential to have methodologies and tools that can track potential problems on a deeper level, such as formal verification and more extensive simulation.

“A crucial point in today’s complex international networks of value creation is the trustworthiness of electronic component designs,” said Roland Jancke, head of department design methodology at Fraunhofer IIS’ Engineering of Adaptive Systems Division. “Open source implementations of processor cores like RISC-V are supposed to handle this issue, assuming that a large community of developers will ensure error-free implementations. Although this seems to be a valid assumption for off-the-shelf volume products like a Linux kernel, it is likely to fail for very specialized processors. Therefore, such open-source designs need formal verification methods, too, in order to make sure that there are neither errors in the implementation nor backdoors deliberately included in the open source core.”

Joe Hupcey, verification product technologist at Mentor, a Siemens Business, agreed. “Formal verification’s strength is its ability to deliver an exhaustive analysis of a DUT’s behavior. Given the immense creativity and breadth of hardware designs that open-source hardware enables, it’s essential that formal is included in the hardware developer’s verification strategy from the very beginning of the project, all the way through to post-silicon validation or to rapidly root-cause any bug escapes.”

For example, at the very beginning of the project, where systems engineers convert whiteboard sketches into spreadsheet models of instruction behaviors and interactions, formal property checking can be employed to verify that the state transitions/handoffs between instructions will not lead to deadlock.
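To make the idea concrete, the minimal sketch below performs an exhaustive, formal-style search of a toy model of two pipeline stages handing off instructions and reports any reachable state with no way forward. The states, names, and transition rules are illustrative assumptions for this article, not a model of any real RISC-V core.

```python
# Minimal sketch: exhaustive deadlock search over a toy instruction-handoff
# model. All states and transition rules here are illustrative assumptions.

def successors(state):
    """Return the set of next states for a (producer, consumer, buffer_full) state."""
    prod, cons, full = state
    nxt = set()
    if prod == 'IDLE':
        nxt.add(('WAIT', cons, full))        # producer decides to hand off an instruction
    if prod == 'WAIT' and not full:
        nxt.add(('IDLE', cons, True))        # instruction placed in the handoff buffer
    if cons == 'READY' and full:
        nxt.add((prod, 'BUSY', False))       # consumer accepts the instruction
    if cons == 'BUSY':
        nxt.add((prod, 'READY', full))       # consumer finishes executing
    return nxt

def find_deadlocks(initial):
    """Exhaustively explore reachable states; return those with no successors."""
    seen, frontier, deadlocks = {initial}, [initial], []
    while frontier:
        state = frontier.pop()
        nxt = successors(state)
        if not nxt:
            deadlocks.append(state)
        for t in nxt - seen:
            seen.add(t)
            frontier.append(t)
    return deadlocks

if __name__ == '__main__':
    # An empty result means no reachable state is stuck, i.e. no deadlock.
    print(find_deadlocks(('IDLE', 'READY', False)))
```

A property checker in a commercial formal tool works on the RTL itself and at far larger scale, but the underlying question it answers about instruction handoffs is the same.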

“The ability to add instructions to open-source ISAs makes them a great candidate for formal, since fully verifying all the implications of the changes using simulation would be a big challenge,” Hupcey said. “The task is two-fold — verifying that the added instruction logic itself is bug-free, and that it plays well with the ‘standard’ set of instructions and interfaces. Again, an exhaustive formal analysis is essential to discover corner-case bugs that would be the source of intermittent failures, which would cause end-customers of the IP to call into question the quality of the end product.”
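One flavor of that exhaustive analysis is proving that a custom instruction’s optimized datapath matches its architectural intent for every possible operand. The sketch below assumes a hypothetical custom absolute-value instruction with a branchless implementation and uses the open-source Z3 SMT solver’s Python bindings; neither the instruction nor the code comes from the RISC-V specification.

```python
# Sketch: prove a hypothetical custom instruction's branchless datapath matches
# its architectural spec for all 2^32 operands (requires the z3-solver package).
from z3 import BitVec, If, Solver, unsat

x = BitVec('x', 32)
spec = If(x < 0, -x, x)                  # architectural intent: absolute value
impl = (x ^ (x >> 31)) - (x >> 31)       # branchless implementation (>> is arithmetic shift)

s = Solver()
s.add(spec != impl)                      # ask for any operand where they disagree
if s.check() == unsat:
    print('implementation matches the spec for every operand')
else:
    print('counterexample:', s.model())
```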

Regardless of whether the hardware spec is based on open source or a commercial vendor’s offerings, it’s not uncommon for companies to customize the interfaces between the processor core and peripherals to gain some power or area advantage.

“Whether a given interface is used without modification, or is being customized to help differentiate the end-product, integrating even the most mature IP/peripheral can produce unexpected issues,” Hupcey said. “Plus, even experienced developers can misread an IP spec, or improperly configure a handful of protocol parameters that can lead to confusing results and wasted time. Hence, formal proofs with assertion-based verification IP can be particularly important when engineers modify IP, as this enables them to exhaustively prove any customizations or extensions of the standard protocol implementation do not violate the core of the protocol and/or create unexpected corner cases.”
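As a toy illustration of that kind of proof, the sketch below models a customized sender on a simple valid/ready-style channel, where a hypothetical clock-enable has been added for power reasons, and uses the Z3 SMT solver to show that no single step of the customized logic can violate the core rule that valid stays high until the transfer is accepted. The channel, signal names, and customization are assumptions made for this example, not any real protocol or vendor VIP.

```python
# Sketch: prove one core rule of a toy valid/ready handshake still holds after a
# hypothetical low-power customization of the sender (requires z3-solver).
from z3 import Bools, And, Or, Not, Solver, unsat

valid, ready, start, clk_en, valid_n = Bools('valid ready start clk_en valid_n')

# Customized next-state logic: the added clock-enable may delay launching a new
# transfer, but an in-flight transfer is held until the receiver accepts it.
transition = valid_n == Or(And(valid, Not(ready)),   # hold until accepted
                           And(start, clk_en))       # customized launch condition

# Core protocol rule, checked as one inductive step: if valid is high and not yet
# accepted, it must still be high in the next cycle. Search for a violating step.
s = Solver()
s.add(transition, valid, Not(ready), Not(valid_n))
if s.check() == unsat:
    print('core handshake rule holds for the customized sender')
else:
    print('violation found:', s.model())
```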

Additionally, when a hardware bug appears in post-silicon lab tests, the pressure is on to fix it as soon as possible, and to make sure nothing else gets broken in the repair process. The immediate challenge is observability. But even if a certain cluster or IP is immediately suspected to be the culprit, the ability to set up particular initial test conditions, probes, and constrained-random stimuli to reach the internal areas of the DUT is a real challenge, given the SoC is in its fully realized state.

“Fortunately, there are formal-based verification workflows that can be used in tandem with the simulation or hardware-assisted verification flows to rapidly and exhaustively root-cause the bug, and to exhaustively validate that RTL fixes effectively and completely address the failures,” he said. And given the inherent novelty of open-source hardware projects, it’s inevitable that there will be unforeseen corner-case bugs in the IP and its interfaces that only an exhaustive formal analysis will be able to discover, he added.

Others agree. “Only formal can exhaustively analyze designs and prove the absence of bugs,” said Tom Anderson, a member of OneSpin Solutions’ technical marketing team. “Formal equivalence checking also verifies the implementation flow and ensures that no hardware Trojans are inserted into the design. There is strong interest in using formal to verify open-source RISC-V cores in particular due to the multiplicity of suppliers and design variations. This verification should be performed with third-party tools so that core integrators can re-run the same analysis as the core providers.”

Using every tool available
But not every tool does everything well, and as chips become more complex, using everything available is critical. Simulation, for example, still plays a critical role in understanding complex interactions and dependencies between different parts of a design, and in determining what’s legal and what will likely cause problems.

“There are open-source simulation tools out there, but they lack UVM support,” said Louie De Luna, director of marketing at Aldec. “When it comes to verification of an SoC, UVM has been really helpful. That’s a challenge, because users want to use open-source tools, as well. But at the end of the day, they still use commercial-grade simulation tools.”

Open source adds a unique twist, though, because these designs are meant to be executable from the outset. And this is why formal becomes such a critical component.

“In an open-source design, the idea is to provide something executable, with documentation, so that when somebody picks up this design and integrates it into their design, there is some contract at the interface level to say, ‘This is how this design is supposed to be used in terms of its interface and functionality,'” said Ravindra Aneja, director, application engineering, formal solutions at Synopsys. “You’re providing something that the designer can use and execute to make sure they are using this design in the intended way. In the past, engineers were provided with a minimal set of test vectors that could be used to simulate when integrating it. But now engineering teams are seeing a lot of value in formal verification because it is exhaustive and requires minimal setup, and those teams are being provided with assertions that are critical to formal verification — basically an executable spec on the legal stimulus that the design expects, and how the design responds to that stimulus. If the code is not connecting correctly, formal verification can give very quick feedback on whether it’s using this design correctly or not.”
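The “executable contract” idea can be pictured with something as small as the sketch below: a cycle-by-cycle monitor, written here in Python purely for illustration, that flags both illegal stimulus from the integrator and out-of-spec responses from the block on a simple request/acknowledge interface. The interface, rules, and latency limit are all hypothetical; real assertion IP would express them as formal properties against the RTL.

```python
# Sketch of an executable interface contract as a simple cycle-by-cycle monitor.
# The request/acknowledge rules and the latency bound are illustrative assumptions.

class HandshakeContract:
    """Checks a toy request/acknowledge interface contract every cycle."""

    def __init__(self, max_latency=4):
        self.max_latency = max_latency   # documented worst-case cycles to acknowledge
        self.pending = 0                 # cycles the current request has been waiting

    def cycle(self, req, ack):
        # Legal-stimulus rule: the environment must hold req until it is acknowledged.
        if self.pending and not req and not ack:
            raise AssertionError("illegal stimulus: req dropped before ack")
        # Response rule: the design may only acknowledge an outstanding request.
        if ack and not self.pending and not req:
            raise AssertionError("design error: ack with no request")
        if req and not ack:
            self.pending += 1
            # Response rule: the design must acknowledge within the documented latency.
            if self.pending > self.max_latency:
                raise AssertionError("design error: ack latency exceeded")
        if ack:
            self.pending = 0             # transfer complete, contract satisfied

# Example: a legal three-cycle transaction passes silently.
monitor = HandshakeContract(max_latency=2)
for req, ack in [(1, 0), (1, 0), (1, 1), (0, 0)]:
    monitor.cycle(req, ack)
```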

To some, this is all a dream come true in the context of RISC-V. “We used to dream about it, but now it’s a reality,” said Aneja. “You can go to the [core provider’s website], you can provide the instruction set that you need, and then basically they configure their CPU core based on the instruction set that you need. You don’t have to use the whole instruction set, but you can use what you need, and RTL can configure itself based on your requirement. You can imagine the people who are using that code. Everybody has their own requirement and everybody’s getting their own customized RTL on the fly, for all practical purposes. There, testability becomes even more difficult because the design can have so many iterations that you may not have considered.”

Understanding the configurability of the design and verifying it are critical to that. “Formal verification extends into that environment really well because you may have an iteration for which you are not only configuring the RTL, but also configuring the set of assertions that will go along with that RTL. When the design team is consuming that RTL, they can verify that iteration of the RTL against the set of properties relevant to that configuration,” Aneja said.
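In practice, that pairing can be as simple as deriving the assertion list from the same configuration that drives RTL generation. The sketch below is a hypothetical illustration; the extension names and check names are invented for this example and do not come from any particular core generator.

```python
# Sketch: derive the assertion set for one generated core configuration from the
# same extension list that drives RTL generation. All names are hypothetical.

BASE_CHECKS = ["pc_alignment", "illegal_opcode_trap", "reg_x0_reads_zero"]

EXTENSION_CHECKS = {
    "M": ["mul_div_result_width", "div_by_zero_semantics"],
    "C": ["compressed_decode_expansion"],
    "custom_dsp": ["mac_saturation", "mac_accumulator_clear"],
}

def assertion_set(enabled_extensions):
    """Return the checks to hand to the formal tool for this configuration."""
    checks = list(BASE_CHECKS)
    for ext in enabled_extensions:
        checks += EXTENSION_CHECKS.get(ext, [])
    return checks

# Example: a core generated with the M extension plus a custom DSP block.
print(assertion_set(["M", "custom_dsp"]))
```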

Should formal models be open?
With any discussion of open source, the question of openness extends to related areas, and with RISC-V, one of those is whether formal verification models should be open.

“With assertion IPs for various protocols, typically all vendors had them integrated because the idea was that you wanted to monetize your effort, since there’s quite a bit of effort in writing these checkers,” Aneja said. “But for the folks who are coming up with open-source ISA-based cores, whether RISC-V or otherwise, the idea is that they’re targeting a very large customer base and they probably want to expand that base. They can do that if they can make it easy to integrate into their environment. If they keep it too restricted, not many people will be able to use it. People want to integrate it as easily as possible, they want flexibility, and they want readability. So in terms of expanding the usage, there is a good argument that people should be able to see those assertions when they are using them. When people are writing these properties, not only are they writing them at the interface level, but they also are writing properties for some internal functions of the design. If you don’t have visibility into those models, it may impede the usage they are expecting, and that’s why this discussion about visibility into the formal model is coming into the picture.”

It’s up to the company that is creating that model whether and how to monetize it. “If somebody is writing the core, and they are providing the formal model or formal assertions, they can monetize it in some way,” he said. “If they’re going to sell the IP to 5,000 customers, and it’s licensed or there’s some other way to make money, that could be pennies on the dollar and they may not care. But if it is a third party that understands the architecture, and they are providing that formal checker or formal IP, they may want to get paid for it. And that may take them in a direction where they want to encrypt it. Who is writing it and who is selling it will decide whether it will be open or not. But for the community that is going to consume it, of course they would like it to be open.”

For the users of that design, the verification team will leverage that formal assertion IP because it allows them to debug faster. For example, if the instruction set is not working as expected, the designer can pull out signals and debug them. But they don’t really know how the checker has been coded if they cannot see it.

“When we call something open source, people have that expectation that they should be able to see as much as they need to,” Aneja said. “On the debug side, it definitely helps. But again, it depends. There is a technical reason for it, and there is a commercial reason for it. Technically, for the engineers consuming it, they want to see it open. But to the people who are writing it, they have a different metric that they track in terms of how they can monetize it, and that means they have to make some of it visible.”

Without that kind of visibility, any changes need to be tracked by the consumer of the open-source technology. If changes are made, they need to be understood in context, and if the technology is sold commercially after that, there needs to be some way to track those changes.

“As a community, open source is a great idea,” said Aneja. “But at the same time, somebody has to take the responsibility that when something gets changed, all information needs to be refreshed, depending on how the code is being changed. Like any code in the open source community, there has to be some shared responsibility to make sure there are no logistics problems.”

For these reasons, the open nature of an open-source design makes it more critical than ever to leverage every available verification technique, including formal verification and simulation, to make sure that nothing goes off the rails.

Conclusion
Open-source designs, including cores based on the RISC-V ISA, are just one more component that needs to be integrated into an SoC. But because it’s open source, engineers have the ability to make changes. That makes it critical to understand what changes have been made, and to use whatever tools are necessary to verify that it works in the context of wherever it is being used.

Formal models, formal verification, simulation, and whatever else can be used to ensure that everything is working to spec and within the context of a design are critical, and that includes whatever has been changed. Unlike commercial ISAs or IP, where the developer is liable for any future problems, no such safety net exists with open-source code. There are commercial tools and implementations for open source that can help, but they usually aren’t free.



