Selecting The Right RISC-V Core

Ensuring that your product contains the best RISC-V processor core is not an easy decision, and current tools are not up to the task.


With an increasing number of companies interested in devices based on the RISC-V ISA, and a growing number of cores, accelerators, and infrastructure components being made available, either commercially or in open-source form, end users face an increasingly difficult challenge in making the best choices.

Each user is likely to have a set of needs and concerns almost as varied as the RISC-V offerings themselves, extending well beyond traditional PPA metrics into safety, security, and quality considerations. That could include the adaptability of the verification collateral, which enables architectural extensions and the verification needed to go along with them.

Traditionally, three levels of prototyping have been deployed — virtual prototypes, emulation, and FPGA prototypes, including hybrids between them. Each platform is then used for a variety of purposes, including software verification, architectural validation, functional verification of the hardware, performance analysis, and more.

While the design and software ecosystems for RISC-V are becoming established, the configuration and verification ecosystems are trailing and require new technology to be built. It is the very flexibility of RISC-V that creates huge challenges for verification, above and beyond what is required for the verification of fixed processors. It also makes hardware-software co-development not only possible, but necessary.

Co-development
In the past, hardware was selected and then software developed to run on it. With RISC-V, the hardware is often driven by software. “The first thing you have to choose is what standard RISC-V options you want,” says Simon Davidmann, founder and CEO for Imperas Software. “The RISC-V feature set currently has 200 or 300 options. How do you know whether your algorithm would benefit from a floating-point unit, or SIMD, hardware multipliers, or even a vector engine? You have to work out the hardware capabilities you’re going to need, and can afford, for the type of application or the job that you’re wanting that processor to do. That itself becomes a bit of a challenge.”
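
As a concrete illustration of that first step, the exploration can begin as little more than a spreadsheet-style sweep. The following Python sketch is a hypothetical first-pass cost model, where the workload profile, candidate configurations, and cycle weights are all illustrative assumptions rather than vendor data, used to ask whether an algorithm's instruction mix would benefit from hardware multiply or a floating-point unit before any prototype exists.

```python
# Hypothetical first-pass exploration of RISC-V ISA options.
# The workload profile and cycle weights are illustrative assumptions,
# not measurements from any real core or benchmark.

# Dynamic instruction mix for the target algorithm (fractions sum to 1.0).
workload = {"int_alu": 0.55, "mul_div": 0.15, "fp": 0.20, "mem": 0.10}

# Rough cycles per operation with and without a hardware feature.
# Software emulation of multiply or floating point costs many cycles.
candidates = {
    "rv32i":   {"mul_div": 40, "fp": 80},   # everything emulated in software
    "rv32im":  {"mul_div": 4,  "fp": 80},   # hardware multiply/divide
    "rv32imf": {"mul_div": 4,  "fp": 6},    # plus single-precision FPU
}
base_cycles = {"int_alu": 1, "mem": 2}

def relative_cost(features):
    """Weighted cycles per 'average' instruction for one configuration."""
    cycles = dict(base_cycles, **features)
    return sum(frac * cycles[kind] for kind, frac in workload.items())

for name, features in candidates.items():
    print(f"{name:10s} ~{relative_cost(features):5.1f} cycles/instr (relative)")
```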

Prototypes are required to make those kinds of tradeoffs. “If the designer’s objective is to evaluate performance and fit for purpose, then virtual prototyping is the only viable choice,” says Steve Roddy, CMO at Quadric. “Building hardware prototypes is more than 10 to 50 times more time-consuming than creating a SystemC model of a subsystem or entire SoC. The SystemC virtual prototype generally runs fast enough to answer performance questions, such as how many frames per second of throughput can I get with this processor core, or what is the peak and average bandwidth requirement of function X, within an acceptable accuracy envelope.”

Getting the right accuracy can be difficult. “It’s all about accuracy and your ability to spin a model very fast,” says Frank Schirrmeister, vice president for solutions and business development at Arteris IP. “The right accuracy is defined by whatever your question demands, and generating those isn’t trivial. If you are an ASIP provider, you will be able to generate those from whatever template you have. Depending on the question you may need pipeline accuracy, you may need memory accuracy, it doesn’t need to be fully accurate, but when you have a CAD department involved, they are too afraid of answering the wrong question.”

But accuracy is a tradeoff against speed. “While some virtual prototypes are cycle accurate, these often run too slowly to be able to have the necessary software throughput,” says Imperas’ Davidmann. “The highest-performance virtual prototypes are not performance engines, because they don’t model processor pipelines. They look at it from a software point of view, where you can compile it and run it on hardware, and you can see approximate performance by looking at instruction counts or approximate timing estimates. This should be enough to make this kind of architectural decision.”
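
That instruction-count approach can be reduced to a few lines once a fast, instruction-accurate run has produced a histogram of what was executed. The sketch below, using made-up counts and per-class cycle assumptions with no modeling of pipeline, cache, or branch effects, shows how such a histogram becomes the kind of rough runtime estimate used for these architectural decisions.

```python
# Approximate-performance estimate from instruction counts, of the kind a
# fast instruction-accurate virtual prototype enables. All numbers below
# (counts, cycles per class, clock) are illustrative assumptions.

instr_counts = {      # histogram from a hypothetical simulation run
    "alu":    41_000_000,
    "load":   12_000_000,
    "store":   7_000_000,
    "branch": 10_000_000,
    "fp":      5_000_000,
}
assumed_cpi = {"alu": 1.0, "load": 1.5, "store": 1.0, "branch": 1.8, "fp": 2.0}
clock_hz = 600e6      # assumed target frequency

cycles = sum(n * assumed_cpi[k] for k, n in instr_counts.items())
print(f"~{cycles/1e6:.1f} M cycles, ~{cycles/clock_hz*1e3:.1f} ms at 600 MHz")
```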

It often takes several prototypes. “We generally prototype for two reasons,” says Venki Narayanan, senior director for software and systems engineering within Microchip Technology’s FPGA business unit. “One is for architectural validation to make sure we meet all the performance metrics and requirements and functional validation. The other reason is for embedded software and firmware development. We use different levels of prototyping techniques, with the most common being to use our own FPGAs to develop an emulation platform for both architectural and functional validation. We also use architectural models like QEMU to build virtual platforms for both performance validation and embedded software development.”
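
For QEMU-style virtual platforms, the software team's starting point is often nothing more than a scripted launch of the emulator. The minimal Python sketch below assumes qemu-system-riscv64 is installed and that firmware.elf is a hypothetical bare-metal image built for QEMU's generic 'virt' machine; the flags a real project needs will differ.

```python
# Minimal sketch: boot a bare-metal RISC-V image on QEMU's generic "virt"
# machine for early firmware bring-up. Assumes qemu-system-riscv64 is on
# PATH; firmware.elf is a hypothetical image built for this target.
import subprocess

cmd = [
    "qemu-system-riscv64",
    "-machine", "virt",        # generic virtual RISC-V platform
    "-nographic",              # route the UART to the terminal
    "-bios", "none",           # no SBI firmware; jump straight to our image
    "-kernel", "firmware.elf", # hypothetical bare-metal test image
]

# Bound the run time; subprocess raises TimeoutExpired if the image hangs.
subprocess.run(cmd, timeout=60)
```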

The number of possibilities is growing. “There are many ways that companies are prototyping with RISC-V today,” says Mark Himelstein, CTO for RISC-V International. “These range from single board computers at the maker level, to enterprise Linux-capable boards. Emulation environments (like QEMU) allow developers to progress with software before their hardware is complete, and there are off-the-shelf parts everywhere from embedded SoCs (from companies like Espressif and Telink), to FPGAs (from companies like Microsemi), to the upcoming Horse Creek board from Intel and SiFive.”

It comes back to the performance/accuracy tradeoff. “Physical prototypes take far more design effort, because you are connecting and synthesizing real RTL, but they deliver far greater accuracy and throughput,” says Quadric’s Roddy. “A physical prototype in an FPGA system, be it homegrown or from the big EDA companies, takes effort to bring-up. But it can run an order of magnitude faster than a SystemC model, and several orders of magnitude faster than full gate-level simulation. Design teams typically will pivot from C-based models during the IP selection process to physical models for both verification of the actual design after IP selection, and as a system-software development platform.”

Once you know what feature set you want in the hardware, you can look to see if someone already has created a solution that fulfills most of your needs. “The chances are that with all the vendors out there, there will be a commercial solution that will have the type of thing that you’re looking for,” says Davidmann. “But with RISC-V, you do not have to accept that solution as is. A significant part of the value with RISC-V is the freedom to change it, modify it, and add different things that you want.”

That customization is the competitive edge. “The true potential of RISC-V lies in the openness of the standard, enabling everyone to put their own unique twist to the implementation of a core,” said Rupert Baines, chief marketing officer at Codasip. “This is where software/hardware co-design is crucial. But what many RISC-V companies offer is more similar to traditional off-the-shelf offerings. Those can certainly be good enough for many designs, but the true magic to differentiate comes from taking on an approach of custom compute. This can be achieved only by gaining complete access to customizing the processor to fit the application at hand, and using automation tools to enable and speed-up these modifications. We have customers doing this today, and the performance results are extraordinary.”
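
The custom-compute approach typically starts by claiming one of the major opcodes the RISC-V specification reserves for vendor extensions (custom-0, opcode 0x0B). The Python sketch below packs an R-type encoding for a hypothetical 'dotp.acc' instruction, with invented funct values, so it can be emitted as a raw .word before toolchain support exists. Once an encoding settles, the assembler and compiler can be taught about it, which is where the automation tools mentioned above come in.

```python
# Pack an R-type RISC-V encoding into the custom-0 opcode space (0x0B),
# which the ISA reserves for vendor-defined extensions. The instruction
# itself ("dotp.acc", funct3=0, funct7=1) is a hypothetical example.

def encode_r_type(opcode, funct3, funct7, rd, rs1, rs2):
    """Standard R-type layout: funct7 | rs2 | rs1 | funct3 | rd | opcode."""
    return (funct7 << 25) | (rs2 << 20) | (rs1 << 15) | \
           (funct3 << 12) | (rd << 7) | opcode

CUSTOM0 = 0x0B  # major opcode reserved for custom instructions

# dotp.acc x10, x11, x12  (hypothetical: rd=x10, rs1=x11, rs2=x12)
word = encode_r_type(CUSTOM0, funct3=0, funct7=1, rd=10, rs1=11, rs2=12)
print(f".word 0x{word:08x}   # dotp.acc x10, x11, x12")
```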

Selecting an implementation
There are many ways to implement a set of features, such as the number of pipeline stages, or speculative execution features. Each will have a different tradeoff between power, performance and area. “The ISA flavor, be it RISC-V, Arm, Cadence’s Xtensa, Synopsys’ ARC, doesn’t really impact modeling and prototyping goals and tradeoffs,” says Roddy. “A system architect needs to answer questions about SoC design goals regardless of the brand of processor. At a technical level, the RISC-V bandwagon is really in a stable position in the market relative to modeling and performance analysis tooling support. There are numerous competing core vendors, each with different implementations and processor features. As a main system CPU, it doesn’t have the longevity of an Arm, and therefore fewer ecosystem players in the EDA world have broadly validated, ready-to-use modeling support for off-the-shelf RISC-V cores from the variety of RISC-V vendors. As a configurable, modifiable core, the RISC-V world lags in the level of instruction-set automation that Tensilica has spent 25 years building. Thus RISC-V has less modeling support as a ready-made building block and less automation to use as a platform for instruction-set experimentation.”

But that is only one aspect of an implementation that needs to be assessed. What is the quality of it? If you want to modify it, how do you revalidate it?

Performance is the easiest of these to assess. “This is no different than going to any traditional processor vendor,” says Davidmann. “They will tell you this core gives you this many Dhrystones per watt, they’ll give you the typical processor analytical data, which says this is how fast this microarchitecture runs. They have all that data, and anybody licensing a processor core will be familiar with that data and will go talk to them and get that information. They probably will have many selectable options in their data sheet, and they’ll say, ‘If you turn this option on, you get this or that.’ You can look at it on the data sheet, on the websites for the vendors.”

At this level, you probably need cycle accuracy. “I see most people pumping it into an emulator and running enough data through it to make a reasonable decision,” says Schirrmeister. “I don’t see that moving up to virtual prototypes any time soon. Some companies are talking about FPGA prototypes, where you have your own single board solution. Depending on the question you need to answer, you may decide to configure it, generate it, and then pump it into an FPGA to run more data through it, with the appropriate software routines on top of it. The industry has sufficiently fast entry ways into emulators and prototyping to make this possible. The basic problem is that you want to make this decision based on as accurate data as you can, but you may not have that accurate data at that time when you want to make that decision.”

Many of these prototypes have to include more than just the processor. “Virtual platforms provide the ability to integrate with other external physical hardware functions, such as memory and sensors operating in a real-world environment,” says Microchip’s Narayanan. “Hybrid systems can bring together virtual platforms with physical prototypes for other external functions. FPGA emulation and prototyping helps with finding timing-related bugs, such as race conditions, as this is more cycle accurate and external functions are running at speed.”

Verification
Because processor design was kept in-house for so long, there is no public verification ecosystem for building a processor, and the flexibility of RISC-V requires a far more adaptable verification solution than has ever existed in the past. That ecosystem is only now beginning to emerge.

“When it comes to verification, RISC-V has admittedly been behind, but it’s catching up fast,” said Codasip’s Baines. “Processor verification requires strategy, diligence and completeness, in tight collaboration with the design team. Verifying a processor design once it is finalized is not enough. When selecting a RISC-V core, make sure to work with a vendor that has proper verification methodologies in place.”

Getting this right relies on deep verification expertise. “There are industry metrics like Dhrystones, or CoreMark, so people can compare performance,” says Davidmann. “But how can you compare verification quality? There needs to be a level playing field so that each vendor can say, ‘This is how we do it.’ We need some quality metrics to do with verification.”

This is where the open-source movement can help. “If you look at the RISC-V ecosystem, you have a large number of very experienced processor developers,” says Schirrmeister. “There are two extremes. One is I am getting a core from a vendor and if it doesn’t work, you have a problem with them. On the other end, I have total freedom and do everything myself. An equilibrium is developing somewhere in between these two extremes. You get something where a certain amount of verification is provided by your vendor, and then the extensions are your own responsibility.”

And this is where metrics come in. “ISA compatibility is just the first rung in a ladder full of complexities that only a few companies have climbed,” says Dave Kelf, CEO for Breker Verification Systems. “Prototyping may be the only way to fully ensure reliable processor operation, but leveraging real workloads to drive these prototypes scratches the surface of real processor coverage. This is at odds with the competitive efforts of an open ISA driving accelerated development and time to market issues.”

But what are those metrics? “In the OpenHW quality group, we’re trying to work out what these metrics should be,” says Davidmann. “That includes things like functional coverage, because it’s not just simple instructions. For a high-quality processor, you need much more than that. You need to have a methodology for verification, where there’s confidence that your comparisons against a reference are covering everything. Functional coverage just shows you’ve got the test, but that has to be coupled with a methodology that compares against some form of known reference. We’re going to be adding fault injection technology so that it becomes possible to find out if your test bench actually detects problems.”
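
Stripped to its essentials, the compare-against-a-reference methodology looks something like the sketch below: step the device under test and a golden reference model one retired instruction at a time, compare architectural state, and accumulate functional coverage along the way. The trace and reference objects, and the fields compared, are placeholders for whatever RTL simulation interface and instruction-set simulator a real flow would use.

```python
# Minimal sketch of lock-step comparison against a golden reference model.
# `dut_trace` and `ref_model` stand in for a real RTL trace interface and
# an instruction-set simulator; the fields compared here are illustrative.

def lockstep_compare(dut_trace, ref_model, max_instrs=1_000_000):
    coverage = set()   # functional coverage: which opcodes were exercised
    for n, dut_step in enumerate(dut_trace):
        if n >= max_instrs:
            break
        ref_step = ref_model.step()   # retire one instruction on the reference
        coverage.add(dut_step["opcode"])
        for field in ("pc", "opcode", "rd", "rd_value"):
            if dut_step[field] != ref_step[field]:
                raise AssertionError(
                    f"mismatch at instruction {n}: {field} "
                    f"dut={dut_step[field]:#x} ref={ref_step[field]:#x}")
    return coverage

# Fault injection, also mentioned above, amounts to deliberately corrupting
# one dut_step field and checking that this compare actually flags it.
```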

Fig. 1: Defining the architecture of a RISC-V verification solution. Source: Imperas

It will take a suite of tools. “As the RISC-V ecosystem matures, commercial implementations are beginning to support defined market segments,” says Ashish Darbari, founder and CEO for Axiomise. “We see support for markets, such as automotive, that require functional safety compliance. We see support for IoT, requiring security. RISC-V vendors are investing in advanced verification techniques, including virtual prototyping for architectural modeling and performance. Tools are now available for early adoption of formal methods to prune out bugs early in the design process and avoid bug insertion as designers struggle to catch corner-case bugs with simulation on the processor-memory interface.”

One of the tools that will be necessary is the ability to generate testcases based on a feature list or set of capabilities. “The automated generation of test content to drive prototypes that take into account verification complexities in a timely fashion is key,” says Breker’s Kelf. “These generation mechanisms are now starting to emerge on the market.”
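
A toy version of that idea, generating only instruction sequences that exercise the features a core claims to implement, might look like the following Python sketch. The per-extension instruction pools and constraints are deliberately tiny and illustrative, showing the shape of the approach rather than standing in for a production generator.

```python
# Toy constrained-random generator driven by a feature list. The pools of
# instructions per extension are deliberately small and illustrative.
import random

POOLS = {
    "I": ["add  {rd}, {rs1}, {rs2}", "xor  {rd}, {rs1}, {rs2}",
          "lw   {rd}, 0({rs1})",     "beq  {rs1}, {rs2}, 1f"],
    "M": ["mul  {rd}, {rs1}, {rs2}", "divu {rd}, {rs1}, {rs2}"],
    "F": ["fadd.s f{fd}, f{fs1}, f{fs2}"],
}

def generate(features, length=20, seed=1):
    rng = random.Random(seed)
    pool = [insn for f in features for insn in POOLS[f]]
    lines = []
    for _ in range(length):
        tmpl = rng.choice(pool)
        lines.append(tmpl.format(
            rd=f"x{rng.randrange(1, 32)}", rs1=f"x{rng.randrange(1, 32)}",
            rs2=f"x{rng.randrange(1, 32)}", fd=rng.randrange(32),
            fs1=rng.randrange(32), fs2=rng.randrange(32)))
    lines.append("1:  nop")   # forward branch target for the beq template
    return "\n".join(lines)

print(generate(["I", "M"]))   # core under test claims RV32IM only
```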

Conclusion
An ecosystem is only as good as its weakest component, and for RISC-V that is the EDA toolchain. The reasons for this are twofold. First, until recently, there was no commercial market for processor verification tools. While they existed in the past, they had all either disappeared or been dissolved into the legacy processor companies. Second, the flexibility of the RISC-V ISA creates a new system-level optimization approach that requires a new set of tools. It takes time for this opportunity to be understood and for commercial tools to appear that properly address it.

Related
A Minimal RISC-V
Is there room for an even smaller version of a RISC-V processor that could replace 8-bit microcontrollers?
RISC-V Pushes Into The Mainstream
Open-source processor cores are beginning to show up in heterogeneous SoCs and packages.
Efficient Trace In RISC-V
How to work with the new RISC-V debug standard.
How Secure Are RISC-V Chips?
Open source by itself doesn’t guarantee security. It still comes down to the fundamentals of design.


