RISC-V Verification Challenges Spread

Continuous design innovation adds to verification complexity, and it is pushing verification work onto more companies than ever before.

The RISC-V ecosystem is struggling to keep pace with rapid innovation and customization, which is increasing the amount of verification work required for each design and spreading that work out across more engineers at more companies.

The historical assumption is that verification represents 60% to 80% or more of SoC project effort, in terms of both cost and time, even for a mature, mainstream processor IP core. But the processor IP business model is based on one-size-fits-all designs, which allows chip companies to amortize NRE (non-recurring engineering) across many projects. RISC-V implementations tend to be smaller and more customized, and in many cases they differ significantly from one project to the next.

“A lot of people now have to verify the processor, whereas previously they weren’t,” said Simon Davidmann, CEO of Imperas Software, during a recent panel at the RISC-V Summit. “Verifying a processor, whether it’s the functionality or the performance, is something very new. It used to be done internally in the Intels, the Arms, the Arm architecture licensees, the MIPS, and others. It was all very proprietary and very homegrown. Those companies had huge resources to do it and have done a lot of very smart stuff with internal proprietary solutions that weren’t very public.”

RISC-V, meanwhile, has opened the door for companies from around the globe to modify the source code, which adds a whole new level of verification challenges for the companies making the modifications and for those implementing the modified cores in their devices.

“There is a lot of interest in tools to architect the core using high-level languages to generate the RTL and the testbenches,” said Louie De Luna, director of marketing at Aldec. “Verification needs to ensure high coverage, which means close to 100% code coverage, as well as functional coverage. And this is just the beginning, because most of what we’ve been seeing are small 32-bit cores with bare-metal OSes. There’s not much in the way of application processors yet. For that they will need advanced emulation and hardware-software verification. So we’re recommending companies make sure their simulation testbench is emulation-ready, and today that’s a manual process.”
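
To make the coverage goal concrete, the sketch below shows the shape of a functional coverage model in plain Python, a reduced stand-in for what would normally be SystemVerilog covergroups. The instruction bins are an invented toy subset, not a complete RV32I model or Aldec’s flow.

```python
# Toy functional-coverage model: track which instruction bins a test
# stream exercises. Illustrative only -- a real flow would use
# SystemVerilog covergroups with many more bins and cross coverage.
from collections import Counter

COVERAGE_BINS = {"add", "sub", "and", "or", "lw", "sw", "beq", "jal"}  # toy subset

class CoverageModel:
    def __init__(self) -> None:
        self.hits: Counter = Counter()

    def sample(self, mnemonic: str) -> None:
        """Record one decoded instruction from the test stream."""
        if mnemonic in COVERAGE_BINS:
            self.hits[mnemonic] += 1

    def percent_covered(self) -> float:
        covered = sum(1 for b in COVERAGE_BINS if self.hits[b] > 0)
        return 100.0 * covered / len(COVERAGE_BINS)

cov = CoverageModel()
for insn in ["add", "lw", "beq", "add"]:  # stand-in for a decoded trace
    cov.sample(insn)
print(f"functional coverage: {cov.percent_covered():.1f}%")  # 37.5% here
```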

That will change, of course, as the RISC-V ecosystem matures.

“Now there is a whole verification ecosystem coming together for RISC-V, and companies are getting involved,” Davidmann said, pointing to such companies as OneSpin and Axiomise in the formal verification space. “Other companies like Cadence are coming in with verification IP, whereas Mentor and Synopsys are doing emulators and simulation. The processor DV is changing dramatically, and the ecosystem is evolving to help it. The challenge of RISC-V is inviting the EDA and verification ecosystem to come up with new and better solutions to help them, because now it’s not just one or two companies that have to do processor DV. It’s everybody.”

RISC-V also adds some new layers of verification that many engineers never needed to focus on in the past, such as performance verification.

“We do a lot of functional verification, but with RISC-V going everywhere in the ecosystem, performance verification also will become very important,” said Nasr Ullah, senior director at SiFive, during the RISC-V Summit panel. “And it gets harder, because not only do engineers have to know how it works functionally, but you also need a set of engineers who understand how long it will take and what they can do to make it better. That is very difficult to do. The second problem with performance verification — unlike functional verification, where you can focus on a lot of very hardcore low-level tests — is that you have to start worrying about application-level programs and how they interact with all the other pieces. Then it becomes a problem to do that with just a Verilog simulator, for example. You have to start looking at other options.”

The first step is to model the performance. “Then, once the RTL gets built and the complex interactions are working, you have to go to something that can mimic the entire system,” Ullah said. “Emulation tends to be what we have to use, and RISC-V has got some innovations in that. FireSim is a Chisel-based tool developed at UC Berkeley, which is a very fast emulation solution. We have been using it successfully to build our systems way before tape-out, and actually run full applications and OSes on them. It’s still an emulator, so you want to mix and match it with simulation, because you can’t run everything in emulation.”
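
As a rough illustration of what modeling the performance before RTL can look like, here is a toy analytical CPI model in Python. The instruction mix and stall penalties are invented numbers for illustration, not SiFive’s methodology.

```python
# Toy analytical performance model: estimate CPI from an instruction mix
# before any RTL exists. All numbers are invented for illustration.

# class: (fraction of dynamic instructions, base cycles, avg stall cycles)
MIX = {
    "alu":    (0.55, 1, 0.0),
    "load":   (0.20, 1, 0.6),  # cache-miss stalls amortized per load
    "store":  (0.10, 1, 0.1),
    "branch": (0.15, 1, 0.4),  # mispredict penalty amortized per branch
}

def estimated_cpi(mix: dict) -> float:
    assert abs(sum(f for f, _, _ in mix.values()) - 1.0) < 1e-9
    return sum(f * (base + stall) for f, base, stall in mix.values())

cpi = estimated_cpi(MIX)
print(f"estimated CPI: {cpi:.2f}")         # 1.19 with the mix above
print(f"MIPS at 1 GHz: {1000 / cpi:.0f}")  # quick sanity check vs. targets
```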

Complicating matters is the fact that processor cores are the most complex IP to verify.

“It is relatively easy to add a few custom instructions on top of an existing processor,” said Sven Beyer, product manager at OneSpin Solutions. “Performance and code size analysis are not easy, but in many cases one can isolate the effects of these changes and make sure they provide the desired results for the target workloads. Functional correctness and security assurance, on the other hand, may have to be reconsidered from scratch for the entire core. The big challenge is to automate not only the generation of the core tool chain for the optimized processor, but also the verification. The key here is to go beyond compliance and exhaustively verify with a largely automated flow that the micro-architecture implements the desired instruction set architecture with its custom instructions without any additional functionality or unintended side-effects.”
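
One concrete reading of “no additional functionality or unintended side-effects”: after a custom instruction retires, no architectural state other than the destination register and the PC may have changed. The Python sketch below checks that property against a golden model for a hypothetical multiply-accumulate instruction; the instruction, both models, and the state layout are illustrative assumptions, not OneSpin’s flow.

```python
# Toy side-effect check for a custom instruction: after it retires, only
# the destination register and the pc may change. The 'mac' instruction
# and both models here are hypothetical stand-ins, not a real core.

def golden_mac(state: dict, rd: int, rs1: int, rs2: int) -> dict:
    """Reference semantics: x[rd] += x[rs1] * x[rs2]; pc advances by 4."""
    out = dict(state)
    out[f"x{rd}"] = (state[f"x{rd}"] + state[f"x{rs1}"] * state[f"x{rs2}"]) & 0xFFFFFFFF
    out["pc"] = state["pc"] + 4
    return out

def check_side_effects(before: dict, after: dict, rd: int) -> None:
    """Fail if anything besides x<rd> and pc changed."""
    allowed = {f"x{rd}", "pc"}
    dirty = {k for k in before if before[k] != after[k]} - allowed
    assert not dirty, f"unintended state change: {sorted(dirty)}"

state = {f"x{i}": i for i in range(32)}  # x0 stays 0, as RISC-V requires
state["pc"] = 0x8000_0000

dut_after = golden_mac(state, rd=5, rs1=2, rs2=3)  # stand-in for DUT output
check_side_effects(state, dut_after, rd=5)
assert dut_after == golden_mac(state, 5, 2, 3), "mismatch vs. golden model"
print("custom instruction matches the model with no stray side effects")
```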

Ecosystem challenges
To a large extent, the question is how much of this work can be re-used and leveraged across multiple designs.

Mike Thompson, director of verification at OpenHW Group, said the ecosystem is just now coming to grips with this. “Several years ago, what the ecosystem was trying to do was basically make RISC-V IP available,” he said. “There’s now a collection of RISC-V IP available, and the ecosystem is struggling with how to disseminate it to a large pool of users. We get a lot of questions about how adopters can take a certain core and add in their own instructions. These are the kinds of problems that new adopters of RISC-V are starting to consider, and it’s only now that we’re starting to see the ecosystem address them. This is still early days, and it will be interesting to see how it evolves.”

At the same time, with the new design freedom afforded by RISC-V, processor verification is becoming a very big task for all SoC adopters, and it is playing out in DV teams in a variety of ways.

Steve Richmond, verification manager at Silicon Labs, noted this takes a bit of evolution across multiple tools. “A lot of these methodologies are already very much in place in our IP teams — emulation, formal verification, simulation, closing coverage,” he said. “Those particular activities don’t change all that much from that perspective; it’s really an issue of complexity. However, RISC-V does open up a lot of domain-specific programming paradigms, so we are tuning our solutions to handle particular aspects of one of our IoT SoCs, for example. It adds a layer of complexity to take in a core and understand what we need to do with it. Hopefully, by building on the tools that are available, along with OpenHW Group’s efforts, we can put together a good foundation of IP so we can start to layer on customization in whatever directions our designers and architects end up wanting to take these cores. It’s more of a delta development than starting from zero.”

SiFive’s Ullah agreed those technologies exist, and that every company has its own methodologies for using them. “But RISC-V has opened up a paradigm in which many more people have to work together. We can bring a core, somebody else brings other pieces, and now we have to start thinking about interoperability of all the tools. There are great tools, but they don’t work together very well.”

There are other major issues, too. “Verification is very hard, and actually daunting to a lot of people,” Davidmann said. “Many people don’t realize the complexities of it and the amount of verification a processor requires. If you bought a core from an Arm or whomever, they’ve already run the 10¹⁵ cycles of instruction tests for you, so all you have to do is the integration. If you’re designing your own processor and adding your own things, you’ve got to learn a lot more about processor DV. That’s a real challenge. We try to help people with training and everything like that. That’s also why it’s so important to make use of the solutions people have developed over the last 20 to 30 years around SoC verification and performance analysis, such as SystemVerilog and UVM, and apply them all to RISC-V. But it is a daunting challenge.”
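
Much of that accumulated processor-DV practice boils down to step-and-compare: run the same program on the RTL and on a reference model (an ISS such as Spike or Imperas’s simulator), then diff the retired-instruction streams. Below is a minimal sketch of that comparison in Python, assuming a simple four-field commit log; real flows (RVFI-based ones, for example) carry much more state.

```python
# Minimal step-and-compare sketch: diff an RTL commit log against a
# reference-model (ISS) log. The four-field record is an assumed format;
# real flows also compare CSRs, memory accesses, traps, and more.
from typing import Iterable, NamedTuple

class Commit(NamedTuple):
    pc: int     # address of the retired instruction
    insn: int   # raw encoding
    rd: int     # destination register (0 if none)
    value: int  # value written to rd

def compare(rtl: Iterable[Commit], iss: Iterable[Commit]) -> None:
    for n, (r, g) in enumerate(zip(rtl, iss)):
        if r != g:
            raise AssertionError(f"divergence at retirement #{n}: rtl={r} iss={g}")
    print("commit streams match")

iss_log = [
    Commit(0x8000_0000, 0x00500093, 1, 5),   # addi x1, x0, 5
    Commit(0x8000_0004, 0x00108133, 2, 10),  # add  x2, x1, x1
]
rtl_log = list(iss_log)  # stand-in for the DUT's retirement trace
compare(rtl_log, iss_log)
```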

Aldec’s De Luna pointed to another challenge, as well. “We’ve seen use cases with RISC-V that we’ve never seen before,” he said. “We’re seeing that from the design side, but we’re also seeing corner cases on the verification side. And we’re seeing new UVM use cases that need to be addressed.”

That makes it imperative to leverage existing tools wherever possible, and to extend them or create new ones where necessary. “The challenge the industry gets into is that they tend to forget verification is a solved problem,” said Thompson. “They just ignore it, and then find out later, once they’ve got problems in the lab, that they should have done some verification. This is a story as old as verification. The industry is starting to wake up to the fact that there are solutions out there, and we need to apply them to RISC-V the same way we would apply them to something like a PCI Express endpoint or a DDR4 memory controller. They really are the same tools and the same methodologies, and they will deliver the same high-quality silicon. Verification is mostly about grinding through the details, and there’s no magic bullet. This is just a natural part of the evolution. The RISC-V community has been awake for several years now to the fact that it’s a lot of work. You’ve just got to do the work.”

Additionally, Aleksandar Mijatovic, senior digital designer at Vtool, observed that RISC-V is emerging as a challenger to the throne currently held by Arm, which is still most architects’ first choice. “Without making this look too much like the Linux/Windows debate, it is unlikely that Arm will give up its position easily, but we are certainly likely to see more RISC-V usage in coming years. Mainstream, no; routine, yes. The mainstream will still use pre-verified proprietary cores and trade cash for time to market, but we will see enough RISC-V around that every verification engineer will know what to do with it.”

“Adding RISC-V to a system requires additional work for verification engineers compared to Arm-based SoCs. It requires verification of the CPU itself, which demands additional resources and affects the project schedule,” noted Olivera Stojanovic, project manager at Vtool. “Before a final decision on whether an open-source architecture will be used in a project, the team must take into account time to market, license costs, safety requirements, chip area, and power consumption, as well as potential reuse of the CPU in the next generation of the SoC.”

What can be re-used?
The big question is how much actually needs to be done for each new design — and how much of that can be offloaded to experts in the RISC-V ecosystem on a contractual basis because not everyone has the expertise to spot the holes and fix them. The challenge is to continue pushing innovation, while still keeping costs under control and preventing ecosystem fragmentation.

“Nowadays we hit issues because certain ISA extensions are missing, such as instructions for DSP processing, or because some other part of the spec is missing,” said Zdenek Prikryl, CTO of Codasip. “Another type of issue is that the RISC-V specifications are sometimes not precise enough and leave too much freedom for interpretation. That could lead to fragmentation, which is a bad thing. We have to be careful about it.”

Having established tools and methodologies can help prevent that. “One of the things OpenHW Group has been doing is developing the idea of an industrial-quality RISC-V core as open source, paired with a state-of-the-art, commercial-style verification environment that is also available as open source,” said Davidmann. “You can download it, use it, and see the verification quality. If you make changes to the core, because it’s open source, you’ve got a high-quality verification environment out of the box. That’s a really new concept, and it can make it much easier for people to adopt RISC-V and validate their processors.”

The goal is to make processor design verification mainstream and routine for all SoC teams, so that RISC-V innovation doesn’t become an excessive burden.

“Although RISC-V enables a plethora of use cases, the verification complexity increases manifold, which can then have an impact on the costs and project schedule,” said Shubhodeep Roy Choudhury, CEO of Valtrix Systems. “It looks like a difficult problem to solve, but careful planning and selection of methodologies and best practices in verification can help a lot.”

That includes more automation, using test suites that adapt to a system’s configuration to ensure that different processor configurations are stable and well-tested at all times.

“This will make processor design verification mainstream and reduce the risk of adopting a new configuration at the same time,” said Roy Choudhury. “Tools like stimulus generators, virtual platforms/simulators, and the processor tests should be designed with the configurability and extensibility of RISC-V in mind. If the implementer decides to change a certain part of the design, the verification tools, tests, and related infrastructure should be configurable enough to target the delta and its interaction with other components. There must be a mapping between the tests and each processor component they intend to verify. This allows DV engineers to prioritize tests as certain parts of the design change.”
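
The test-to-component mapping Roy Choudhury describes can start as a simple table that regression tooling queries when a configuration delta arrives. Here is a small Python sketch of the idea, with made-up test and component names:

```python
# Sketch of config-aware test selection: map tests to the processor
# components they target, then pick the tests touched by a design delta.
# All test and component names are made up for illustration.

TEST_MAP = {
    "rv32i_arith_smoke":   {"alu"},
    "mul_div_stress":      {"mdu"},
    "lsu_unaligned":       {"lsu", "bus"},
    "branch_predictor_dv": {"bpu", "fetch"},
    "custom_mac_checks":   {"custom_ext", "alu"},
}

def tests_for_delta(changed: set) -> list:
    """Return every test whose target components intersect the delta."""
    return sorted(t for t, comps in TEST_MAP.items() if comps & changed)

# e.g., the implementer reworked the ALU and added a custom extension:
print(tests_for_delta({"alu", "custom_ext"}))
# -> ['custom_mac_checks', 'rv32i_arith_smoke']
```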

Tips from the ecosystem
For system architects getting into RISC-V for the first time, Silicon Labs’ Richmond recommends taking advantage of the infrastructure and the ecosystem that is already in place. “Grab cores and throw them on FPGAs. That certainly happened at Silicon Labs long before the hardware RTL folks were ever involved. There’s a lot out there, and there’s a lot to learn from the tool chain perspective. Also, leverage what is out there. There are certain activities being done at OpenHW Group from a verification perspective that you should be able to replicate within hours if not less.”

Thompson pointed newcomers to the ISA toward “The RISC-V Reader,” along with the wealth of information available in the ecosystem.

Meanwhile, Ullah said open-source tools are worth looking into. “First, Verilator, the open-source Verilog simulator. It turns out to be very fast, and very useful. Second, FireSim, the FPGA-based emulator that comes out of UC Berkeley. I’ve been able to build my designs much earlier than I could at Motorola or Samsung. Third, Sparta, the simulation framework we put out into open source. You can do performance modeling and functional modeling, and they can work together.”

Also, as verification is not free, Stojanovic said the verification community must invest time and resources to contribute to standards and best practices for RISC-V verification. “Providing open-source solutions to the ecosystem will boost usage of RISC-V, but this still has a long way to go to be competitive in the market as the first choice of companies.”

To put it bluntly, the ecosystem will need to match the standards and features provided by the traditional competition, Mijatovic said. “That means significant support for processor verification. Design teams are already fairly well supported with documentation and configuration options, but before silicon, everything needs to be verified. Ideally, for every configurable design we will have an automatically generated verification environment and a push-button approach to checking results. Every effort by the ecosystem to provide out-of-the-box solutions for each part of verification will bring RISC-V closer to becoming a standard implementation.”

Conclusion
The ecosystem plays a significant role in the evolution of RISC-V, from clearly communicating the specifications to avoiding fragmentation of the ISA.

“There’s a design ecosystem around the IP, there’s a software ecosystem around the OSes, but there’s also a verification ecosystem,” Davidmann said. “We’re working very hard to evolve. RISC-V International has a key part to play, and there are two things it needs to focus on. One is the specifications, to ensure they’re tight and clear, and that everybody understands the same thing. The other is the compatibility and architectural compliance work, because it’s essential that we don’t get fragmentation out there; that we can make use of the tools, make use of the compilers, make use of the software. The ecosystem could do a lot, and RISC-V International and its members can do a lot, to help the vibrancy of RISC-V and to help it evolve successfully.”

Related
RISC-V: What’s Missing And Who’s Competing
The open-source ISA is gaining ground in multiple markets, but the tool suite is incomplete and the business model is uncertain.
Components For Open-Source Verification
Building an open-source verification environment is not an easy or cheap task. It remains unclear who is willing to pay for it.
Open-Source Verification
Sorting out what is meant by open-source verification is not easy, but it leaves the door open to new approaches.
RISC-V Knowledge Center


