The Increasingly Ordinary Task Of Verifying RISC-V

Integrating an open-source core into a complex SoC is looking very familiar.


As RISC-V processor development matures and its usage in SoCs and microcontrollers grows, engineering teams are starting to look beyond the challenges of the processor core itself.

So far, the majority of industry verification efforts have focused on ISA compliance to standardize the RISC-V core. Now the focus is shifting to how to handle verification as the system grows, especially as the task scales up with multiple cores and the addition of off-the-shelf peripherals and custom hardware modules. And as with any processor core, it is just as complex and time-consuming a project.

“We can see two verification challenges here,” said Zibi Zalewski, general manager at Aldec’s Hardware Products Division. “First is the complexity of the core itself and how to make sure it is correct and ISA-compliant. Second is how to test the system using the core. In both cases, transaction-level hardware emulation is the perfect choice — particularly if the emulation is based on the Accellera SCE-MI standard, which allows for reusability between different platforms and vendors. Combined with automatic design partitioning and wide debugging capabilities, this makes a complete verification platform.”

As the processor core becomes more powerful and takes on more functionality, RTL simulation alone is not enough, nor can it provide complete test coverage in a reasonable time. Emulation runs tests much faster, which in turn allows the complexity and size of the tests to increase, along with cycle accuracy, without extending the time needed to run them.


Fig. 1: A RISC-V CPU under test. It is implemented in an emulator while the RISC-V ISS is part of an advanced UVM testbench. Source: Aldec

Zalewski noted that when using emulation, the core itself can be automatically compared with a RISC-V ISS golden model to confirm the core’s accuracy and cover the ISA compliance requirements. In addition, the testbench used during simulation can be re-used for emulation, so it is worth making sure the testbench is ‘emulation ready’ even at the simulation stage. That enables a smooth switch between simulator and emulator without developing a new testbench. The strategy also pays off when custom instructions are added to the RISC-V core to accelerate algorithms in the design, because with hardware emulation these instructions can be tested and benchmarked against the target algorithms much faster than in a pure simulation environment.
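To make that comparison concrete, the following is a minimal sketch of a lock-step check between an emulated core and an ISS golden model. The RetiredInstr format and the hard-coded traces are assumptions for illustration only; in a real flow the records would stream from the ISS and from a transactor on the core’s retirement interface rather than from hand-written vectors.

```cpp
// Illustrative lock-step check of an emulated RISC-V core against an ISS
// golden model. The trace format and hard-coded records are assumptions for
// this sketch, not any vendor API.
#include <cstdint>
#include <cstdio>
#include <vector>

struct RetiredInstr {
    uint64_t pc;        // program counter of the retired instruction
    uint8_t  rd;        // destination register index (0 if none)
    uint64_t rd_value;  // value written to rd
};

bool matches(const RetiredInstr& ref, const RetiredInstr& dut) {
    return ref.pc == dut.pc && ref.rd == dut.rd && ref.rd_value == dut.rd_value;
}

int main() {
    // Reference (ISS) and DUT (emulator) traces; the last DUT record carries
    // a deliberate mismatch so the check can be seen firing.
    std::vector<RetiredInstr> iss = {{0x80000000, 1, 0x10}, {0x80000004, 2, 0x20}};
    std::vector<RetiredInstr> emu = {{0x80000000, 1, 0x10}, {0x80000004, 2, 0x21}};

    for (size_t i = 0; i < iss.size() && i < emu.size(); ++i) {
        if (!matches(iss[i], emu[i])) {
            std::printf("Mismatch at retirement %zu: pc=0x%llx rd=x%u ref=0x%llx dut=0x%llx\n",
                        i, (unsigned long long)iss[i].pc, (unsigned)iss[i].rd,
                        (unsigned long long)iss[i].rd_value,
                        (unsigned long long)emu[i].rd_value);
            return 1;
        }
    }
    std::puts("Emulated core matches the ISS over this trace");
    return 0;
}
```

The same comparison logic can sit inside a simulation scoreboard, which is part of what makes an ‘emulation ready’ testbench reusable across both engines.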

Much of this is familiar to engineers working with any embedded processing core. The challenges of integrating a RISC-V core into a larger SoC are similar to those of integrating Arm or ARC processors.

“From a prototyping perspective it’s not that different,” said Johannes Stahl, senior director, product marketing at Synopsys. “That’s the beauty of RISC-V. As a tool and IP provider, we always have the dilemma that we would really love to ship an example with the processor, but we are not shipping our ARC processor RTL source to everybody looking at our verification tools. What we have done in the last two years is use the open-source Rocket Chip, Berkeley’s RISC-V-based SoC generator, to create an example for emulation and prototyping. It explains the methodology and flow. Initially, one of my biggest concerns was that it is machine-generated language, and whether there would be issues with the compilation of that for all the platforms. We haven’t seen that, so even though it is machine-generated language — it’s ugly, it’s unreadable — our compile front end seems to be digesting it quite well.”

One area of concern is that an RTL core on an emulation or prototyping platform doesn’t do much good if a debugger can’t be connected. “The state of the industry there is still pretty nascent,” Stahl said. “Some companies are trying to make development really cheap by using open-source debuggers connected to their cores, and that doesn’t really work that well. It’s not as mature, by a long shot, compared to the traditional cores.”

Successful AI designs are happening with RISC-V-based cores, and they haven’t generated much grief in terms of support. Stahl said users appear to be working around issues with their IP vendors, so from an emulation or prototyping perspective it’s just like any other core, perhaps with slightly more robust support.

Roddy Urquhart, senior marketing director at Codasip, agreed. “Developing and verifying RISC-V designs is not fundamentally different from other processor architectures. Commonly used approaches, such as creating virtual prototypes with instruction-set simulators or using FPGAs and commercial emulators, are among the methods used with RISC-V. Ultimately, it is essential to achieve sufficiently high code and functional coverage.”

At the same time, there are ever more choices for open-source RISC-V development boards and FPGA implementations, observed Salaheddin Hetalani, field application engineer at OneSpin Solutions. “These are great for software development and performance testing of target applications, and they lower the cost of developing an idea into a prototype. But they are not a substitute for rigorous functional verification of the core, particularly when the core is extended with custom instructions. A formal verification app specific to RISC-V enables quick and affordable core verification, delivering a level of quality that would normally be available only to IP providers with deep pockets and decades of experience. This is crucial in the development phase to avoid wasting time debugging functional issues in the lab, or sinking a project by delivering low-quality hardware.”

This is especially important for domain-specific architectures, which include a higher degree of configurability. “You’d better have the right tools, including emulation and prototyping, because you want to run real workloads,” said Frank Schirrmeister, senior group director for solutions marketing at Cadence. “The tools also include formal verification, as well as significantly more demand for architecture analysis, essentially doing that analysis up front to decide what to move into software and what effect that will have on the hardware underneath.”

RISC-V is an outgrowth of the trend toward more customized, workload-specific computing, rather than toward any particular architecture. “You could argue it’s more an effect than it is the root cause for anything, because the root cause is really that customers need domain-specific/workload-specific computing, enabled by domain-specific architectures and domain-specific languages,” Schirrmeister said. “The whole notion of configurability is an outgrowth of all that and, as a result, you need to verify more because the more changes you make, the more verification needs to happen. At the end you need to run the real workload to get a specific set of coverage identified. Then, when you make those changes, you want to see the effects.”

Whole system verification
Once a processor or CPU subsystem has been checked, verification of the whole system can begin. The same techniques used to verify the other hardware elements of the SoC, including custom hardware and peripherals, apply here. And all of it can be implemented in the emulator and verified with the same UVM or SystemC testbench used during simulation, Zalewski pointed out. This methodology allows long test sequences, such as UVM constrained random, to build complicated test scenarios, and it accelerates the SoC architecture benchmarking simulations used to optimize the hardware structure and components.
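As a rough illustration of that reuse, the sketch below keeps constrained-random stimulus behind an abstract transaction-level driver, so the same test can target either a simulator binding or an emulator transactor. BusTxn, BusDriver, and the register window are invented names for this example, not part of UVM or any vendor library.

```cpp
// Sketch of reusable, transaction-level stimulus. BusTxn/BusDriver and the
// register window are assumptions for illustration; in practice the concrete
// driver would wrap a DPI call into the simulator or the emulator's host API.
#include <cstdint>
#include <cstdio>
#include <random>

struct BusTxn {
    uint64_t addr;
    uint32_t data;
    bool     is_write;
};

// Abstract transaction-level driver the test is written against.
class BusDriver {
public:
    virtual ~BusDriver() = default;
    virtual void send(const BusTxn& txn) = 0;
};

// Constrained-random style traffic: word-aligned addresses inside one
// peripheral's register window, random data, roughly 70% writes.
void run_random_traffic(BusDriver& drv, unsigned count, uint32_t seed) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<uint64_t> addr(0x40000000u, 0x40000FFCu);
    std::uniform_int_distribution<uint32_t> data;
    std::bernoulli_distribution is_write(0.7);
    for (unsigned i = 0; i < count; ++i) {
        drv.send({addr(rng) & ~uint64_t(3), data(rng), is_write(rng)});
    }
}

// Stand-in driver so the sketch runs by itself: it just logs each transaction.
class PrintDriver : public BusDriver {
public:
    void send(const BusTxn& t) override {
        std::printf("%s addr=0x%llx data=0x%08x\n", t.is_write ? "WR" : "RD",
                    (unsigned long long)t.addr, t.data);
    }
};

int main() {
    PrintDriver drv;
    run_random_traffic(drv, 8, 1);
    return 0;
}
```

Because the stimulus never touches pins or clocks directly, moving the design from simulator to emulator only means swapping the concrete driver, not rewriting the tests.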

SoC projects these days require not only hardware development, but also complicated, multilayer software code, which means software and hardware engineering teams will be working on the same project with complex verification requirements and big challenges at the software-hardware interface, he said. “Software teams usually start development in isolation, using a software ISS or virtual platforms/machines, which tend to be enough when there is no reliance on interacting with the new hardware. But when the system grows with peripherals and custom modules, the software must support not only RISC-V and its close surroundings, which can be modeled in software too, but also the rest of the hardware modules by providing operating system drivers, APIs, or high-level applications.”
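A small example of the software that has to span both worlds is sketched below: bare-metal register access for a hypothetical custom accelerator. The base address, register map, and bit fields are all invented; the point is that the same code can first be exercised against a virtual-platform model of the peripheral and later against the emulated RTL, before it is wrapped in an operating system driver.

```cpp
// Bare-metal access to a hypothetical custom accelerator; the base address,
// offsets, and bit fields are invented for the sketch. This code targets the
// SoC (or its virtual-platform / emulation model), not a host PC.
#include <cstdint>

namespace accel {
constexpr uintptr_t BASE     = 0x40010000;  // assumed memory-mapped base
constexpr uintptr_t REG_CTRL = 0x00;        // bit 0: start
constexpr uintptr_t REG_STAT = 0x04;        // bit 0: done
constexpr uintptr_t REG_SRC  = 0x08;        // source buffer address
constexpr uintptr_t REG_LEN  = 0x0C;        // transfer length in bytes

inline volatile uint32_t* reg(uintptr_t offset) {
    return reinterpret_cast<volatile uint32_t*>(BASE + offset);
}

// Program one job and busy-wait for completion.
inline void run_job(uint32_t src_addr, uint32_t length) {
    *reg(REG_SRC)  = src_addr;
    *reg(REG_LEN)  = length;
    *reg(REG_CTRL) = 0x1;                    // start
    while ((*reg(REG_STAT) & 0x1) == 0) {
        // spin; a production driver would sleep or take an interrupt
    }
}
}  // namespace accel
```

Keeping this layer identical across the virtual platform and the emulator is what allows the driver, API, and application layers above it to be debugged long before silicon.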

To make sure those two worlds work together and are synchronized while developing and testing the whole project, Zalewski pointed to transaction-level emulation. “When using a hardware emulator, we can test all RTL modules at higher speed with flexible debugging functions, but there is more. An emulator host interface API, usually based on C/C++, allows us to connect the virtual platform used by the software team to create one integrated verification environment for software and hardware domains of the project. Co-emulation strategies that use, for instance, a SCE-MI macro-based emulator API and a TLM interface in a virtual platform, allow the whole system to run at MHz speeds. That shortens the boot-up time of an operating system and allows for parallel debugging of the processor and hardware subsystems.”
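One common way to stitch the two domains together is a bridge inside the virtual platform, with a TLM-2.0 target socket on one side and the emulator’s host-interface API on the other. The sketch below uses standard SystemC/TLM constructs, but EmuLink is a hypothetical stand-in for a SCE-MI or vendor-specific message port, and the module is a structural outline rather than a complete, linkable environment.

```cpp
// Structural sketch of a TLM-to-emulator bridge. The SystemC/TLM socket and
// b_transport callback are standard; EmuLink is a hypothetical wrapper around
// a SCE-MI or vendor host-interface API and here just logs the transfer.
#include <cstdio>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>

struct EmuLink {
    void send(uint64_t addr, unsigned char* data, unsigned len, bool write) {
        std::printf("to emulator: %s addr=0x%llx len=%u\n",
                    write ? "WR" : "RD", (unsigned long long)addr, len);
        (void)data;  // a real link would marshal the payload over the host API
    }
};

struct TlmToEmuBridge : sc_core::sc_module {
    tlm_utils::simple_target_socket<TlmToEmuBridge> socket;
    EmuLink& link;

    TlmToEmuBridge(sc_core::sc_module_name name, EmuLink& l)
        : sc_module(name), socket("socket"), link(l) {
        socket.register_b_transport(this, &TlmToEmuBridge::b_transport);
    }

    // Blocking transport from the virtual platform: repack the generic
    // payload and push it to the RTL running in the emulator.
    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        const bool write = (trans.get_command() == tlm::TLM_WRITE_COMMAND);
        link.send(trans.get_address(), trans.get_data_ptr(),
                  trans.get_data_length(), write);
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
        // 'delay' could be annotated from the emulator's reported cycle count.
        (void)delay;
    }
};
```

Because the rest of the virtual platform sees only an ordinary TLM target, swapping a pure software model of a subsystem for this bridge is transparent to the software team, which is the property the hybrid setup described next relies on.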

Zalewski contends the benefit of a hybrid co-emulation platform is that the software engineers don’t have to migrate to a completely different environment when the design’s RTL code matures. Their primary development vehicle is still the same virtual platform. But due to the co-emulation, it now represents the complete SoC, including custom hardware. This way both software and hardware teams can work on the same revision of the project and verify the correctness and performance of the design without waiting on each other.

“The beauty of a hybrid solution is that you have the IP predefined and stable but it’s only available as a virtual model,” said Schirrmeister. “We see that in most hybrid engagements. You have something like a high-end CPU. It’s stable, you have the processor model, you don’t need to put it into the emulator for some of the software, and for some of the verification aspects you still need the full accuracy. From a verification perspective, how fast you get to the model is key because the whole virtual platform portion hinges on model availability, and that extends to hybrids. Again, this is all an outgrowth of the need for domain specificity and the ability to make changes. Whether it’s RISC-V, whether it’s Tensilica, whether it’s Arm or ARC, the verification needs to happen in that context.”

Conclusion
No matter what ISA a processor core is based on, the industry is rallying around the bigger trend of computing specificity: domain-specific architectures, domain-specific workloads, and workload optimization.

“You don’t need a new hammer,” said Schirrmeister. “You may change your methodologies, depending on what type of tests you create for it and so forth, but it’s still the same hammer and screwdriver.”

As a result, there may not be much that is groundbreaking about verifying RISC-V-based designs. But that is good news, because developers can hit the ground running, and do so well equipped.


