Do Necessary Tools Exist For RISC-V Verification?

Existing tools can be used for RISC-V, but they may not be the most effective or efficient. What else is needed?

Semiconductor Engineering sat down to discuss the verification of RISC-V processors with Pete Hardee, group director for product management at Cadence; Mike Eftimakis, vice president for strategy and ecosystem at Codasip; Simon Davidmann, founder and CEO of Imperas Software; Sven Beyer, program manager for processor verification at Siemens EDA; Kiran Vittal, senior director of alliances partner marketing at Synopsys; Dave Kelf, CEO of Breker Verification; and Hieu Tran, president and CTO of Viosoft Corporation. What follows are excerpts of that conversation. To view part one of this discussion, click here.

SE: What does a RISC-V verification flow look like?

Kelf: We see the verification of processors as a stack of activities, but there are lots of loopbacks in that stack. A lot of companies treat conformance to the ISA as a separate activity. They’ll do a ‘Hello World’ test as a first step to make sure things are up and running, then they will run as many conformance tests as they can. They try to match the ISA, and then they start testing the micro-architecture. A lot of people stop there. What we see happen is that as they run the conformance tests, they may appear to get conformance, and then they start writing the micro-architecture tests, find some things that are broken, and realize that conforming to an ISA is a lot more complex than just making sure the instructions run properly. As they go further up the stack, they get into, ‘Can we verify the cores as they relate to the rest of the system? Can we boot an OS on it? Does it have the necessary performance? Can we profile the design to make sure the performance is correct and start those validation activities to reveal more bugs in the verification?’ When we’re testing a regular ASIC or regular core, you can run through all the verification activities, get really good coverage, and then do the validation at the end. You often don’t have to go back to verification. With these processors, you do. You have to go backward and forward through that verification stack all the time. This really slows you down, and it slows the whole verification of that architecture. The golden reference model is becoming critically important. The Imperas models are recognized as state-of-the-art, industry-standard models by a lot of people. We’ve been working with those models. Bringing in a really solid core reference model that you can rely on is becoming a critical thing. You can test the micro-architecture, you can test some of the interaction with the rest of the system against that core model, and really get a clearer picture of what’s going on in the actual processor.
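To make the reference-model comparison concrete, below is a minimal lock-step, step-and-compare sketch in Python. The `dut` and `ref_model` objects are hypothetical wrappers (real flows connect the RTL simulator and an instruction-set simulator such as the Imperas reference model over a trace interface), but the core loop looks broadly like this:

```python
# Minimal lock-step sketch: retire one instruction on the RTL design under
# test (DUT) and one on the instruction-set reference model, then compare
# architectural state. `dut` and `ref_model` are hypothetical wrappers.

def compare_state(dut_state: dict, ref_state: dict) -> list:
    """Return the architectural fields (pc, x1..x31, CSRs) that mismatch."""
    return [k for k in ref_state if dut_state.get(k) != ref_state[k]]

def lockstep(dut, ref_model, max_instructions=100_000):
    for n in range(max_instructions):
        dut_state = dut.step()        # retire one instruction in RTL simulation
        ref_state = ref_model.step()  # execute one instruction on the ISS
        mismatches = compare_state(dut_state, ref_state)
        if mismatches:
            raise AssertionError(
                f"divergence at instruction {n}, "
                f"pc={ref_state['pc']:#x}: {mismatches}")
```

The value of a trusted reference model is exactly that the right-hand side of this comparison can be believed, so a mismatch points at the implementation (or at a legal configuration difference) rather than at the checker.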

Tran: I have some doubt as to whether the idea of a golden reference model is viable when it comes to RISC-V. If you go back to the days when Unix came on the scene, there were a handful of suppliers that had their own implementations. You had Solaris, SVR4, AIX from IBM, and so on. The idea of achieving a common executable that could run across the board, on all of these different Unix/Linux implementations, was just not possible. Every vendor is incentivized to build value-add and custom extensions that will give them differentiation versus the others. We see that here with RISC-V. Unlike with x86 and with Arm, where the majority of the implementations are under the Intel or Arm umbrella, you literally have hundreds of different institutions and organizations building their own RISC-V implementations. When you talk about things like the vector extension, where the spec is so large, many implementers have decided to implement only a subset of that extension. How would you be able to create a common golden reference model to verify execution against this variety of implementations? Second, when we talk about verification and validation of execution, you have to go further up the stack into the toolchain and the operating system. Take, for instance, the vector extensions. Every vendor that I’ve spoken to, and worked with, has their own compiler, their own LLVM implementation in support of their vector extension. And none of them is compatible with the others. So you can take the LLVM compiler from vendor A, generate code, and it wouldn’t be very efficient for the implementation from vendor B. From that perspective, I’m skeptical as to whether it’s even possible to come up with a common model that can give you a baseline against these variations.

Davidmann: I clearly disagree with that comment. RISC-V is a complete nightmare because there are so many options. This is one of the challenges for compatibility and compliance. There are so many configuration options, which are all legal, and the big question is, how do you create a reference model? But it’s worse than that. Every three months, there’s a new version of each extension. Our simulator is a complete reference. It is completely configurable for any of the independent subsets, but also for the versions. We have 11 versions of the vector extension, four of which have gone to silicon. I don’t think there’s a problem with this, as long as it’s designed and architected correctly. RISC-V offers the opportunity to do things better. We can’t live with the old way of having just one Arm or one Intel. That isn’t going to work. If the old world was that you could have a reference model that does one thing, the new world is that you have a reference model that can do 100 things. And that’s where we are going. Otherwise, RISC-V will never fulfill its destiny. We have to solve those problems.
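As a rough sketch of what that configurability might look like, assume a reference model that takes the extension set and per-extension specification versions as input rather than hard-coding one architecture. The class name and version strings below are purely illustrative, not any vendor’s actual API:

```python
# Illustrative configuration object for a hypothetical configurable
# reference model: extensions are selected independently, and each
# extension can be pinned to a specification version.

from dataclasses import dataclass, field

@dataclass
class RiscvConfig:
    xlen: int = 64                                  # RV32 or RV64
    extensions: set = field(default_factory=lambda: {"I", "M", "A", "C"})
    versions: dict = field(default_factory=dict)    # e.g. {"V": "1.0"}

    def validate(self):
        if self.xlen not in (32, 64):
            raise ValueError("xlen must be 32 or 64")
        if "V" in self.extensions and "V" not in self.versions:
            raise ValueError("vector extension selected without a spec version")

# One model, many variants: pin the vector extension to a pre-ratification
# draft, as a core that taped out early might require.
cfg = RiscvConfig(extensions={"I", "M", "C", "V"}, versions={"V": "0.9"})
cfg.validate()
```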

Hardee: We know processor implementation, and the devil is in the details. We certainly agree with you that SystemVerilog, Verilog, is much better at capturing those implementation details. But you’ve got to verify that implementation against a higher-level model that captures the intent. That is not a single reference model. It could be many, or a standardized way of creating a reference model for those many variants that we’re talking about.

Davidmann: Five or six years ago, I was part of the RISC-V International organization that looked into formal and ended up choosing SAIL as the language for building a golden reference model. What we got wrong is that SAIL is not very configurable. It is great for one architecture. For Arm, it is fantastic. They have this whole flow from a definition, all the way through correct-by-construction, all the way down, coming from a formal description, and it’s brilliant. The challenge for RISC-V is it has this infinite configurability by design. And so there is a real challenge in the modeling of that in SAIL. That’s why Imperas went for a dynamic model.

Vittal: RISC-V is being adopted by almost every company out there. Even the leading semiconductor vendors are doing RISC-V designs, as are many startups. But the key is being able to have a successful verification plan where you have very high-quality stimulus to achieve your coverage targets. Verification and debug go hand in hand. Hardware/software debug, stepping through the code to simultaneously look at the problems, is key. Going back to the flexibility of the architecture, that’s providing challenges, and that’s where the opportunity lies for all of us. Innovative solutions are being developed. There’s a lot of collaboration happening between the RISC-V vendors and EDA tool companies, as well as other EDA partners and so on.
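As a toy illustration of measuring stimulus quality against coverage targets, the sketch below builds a small hand-rolled covergroup over retired instructions. Production flows use SystemVerilog covergroups or trace-driven coverage tools; only the bookkeeping idea carries over, and all of the names here are invented for the example:

```python
# Toy functional coverage: cross each mnemonic with "is the destination x0?"
# and report how many bins the stimulus actually hit.

from collections import Counter
from itertools import product

MNEMONICS = ["ADD", "SUB", "AND", "OR", "LW", "SW", "BEQ", "JAL"]
BINS = set(product(MNEMONICS, [True, False]))   # 16 bins in this toy cross
hits = Counter()

def sample(mnemonic: str, rd: int):
    """Call once per retired instruction from the trace."""
    key = (mnemonic, rd == 0)
    if key in BINS:
        hits[key] += 1

def report():
    covered = sum(1 for b in BINS if hits[b] > 0)
    print(f"functional coverage: {covered}/{len(BINS)} bins "
          f"({100.0 * covered / len(BINS):.1f}%)")

sample("ADD", 5); sample("ADD", 0)
report()   # -> functional coverage: 2/16 bins (12.5%)
```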

Kelf: There are companies that have figured this out to a certain extent. The infinite configurability of RISC-V, all of these things are true. But at the end of the day, Arm and Intel have solved the verification of their somewhat less configurable processors. They have a flow, or a series of complex flows, and those flows include a bunch of different activities. Arm uses a lot of formal tools, a lot of different things to do it. A good place to start might be looking at what some of these companies are doing in their flows, and trying to automate some of it. You need something that can be used by all of the folks trying to do RISC-V processors, with everyone collaborating to come up with these more generalized flows, and to see if we can standardize some of this stuff. Not in the sense of a proper standard, but a de facto standard way of verifying RISC-V processors that works across the industry and creates some real compatibility, not just instruction-set compatibility.

SE: A couple of you have mentioned that we do need new tools, new flows. What is missing today? How are we going to work out what the things are that somebody needs to provide?

Kelf: There are a lot of folks doing RISC-V processors internally, and they are starting anew. They’re learning how to do processor verification. Companies like Codasip have brought in people with a lot of experience and expertise from places like Arm and Intel and others, who do understand what to do. So we see some of these companies now producing flows where they are considering issues such as, ‘Can the processor support full coherency? Does it work across the rest of the system? Can the security features, like the PMP (physical memory protection) mechanism inside RISC-V, operate correctly?’
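Checking something like PMP means comparing the implementation against the privileged-spec address-matching rules. As one hedged example, the sketch below decodes only the NAPOT address mode and ignores entry priority, lock bits, and the OFF/TOR/NA4 modes, so it is a reference-style fragment rather than a complete checker:

```python
# Simplified PMP check (NAPOT mode only). pmpaddr holds physical-address
# bits [..:2]; the number of trailing 1 bits encodes the region size as
# 2^(ones+3) bytes, per the RISC-V privileged spec.

def napot_region(pmpaddr: int):
    """Decode a NAPOT pmpaddr into (base, size) in bytes."""
    ones = 0
    a = pmpaddr
    while a & 1:
        ones += 1
        a >>= 1
    size = 1 << (ones + 3)
    base = (pmpaddr & ~((1 << (ones + 1)) - 1)) << 2
    return base, size

def pmp_allows(addr: int, perm: str, pmpaddr: int, cfg: int) -> bool:
    """perm is 'r', 'w', or 'x'; cfg packs R/W/X in bits 0..2 as in pmpcfg."""
    base, size = napot_region(pmpaddr)
    if not (base <= addr < base + size):
        return False   # no match here; a real checker falls through to the next entry
    return bool(cfg & (1 << {"r": 0, "w": 1, "x": 2}[perm]))

base, size = napot_region(0b10100111)    # three trailing ones
assert (base, size) == (640, 64)         # 64-byte region at byte address 640
```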

Davidmann: When we started with RISC-V five or six years ago, there was nothing specific for RISC-V. You had the Verilog simulator, you had some formal stuff, you could write some properties. There was GCC, and you could run it and debug it. That was it. What we’ve seen over the last five years is that people have evolved a lot of tools and technology, derived from learning how processors are verified in proprietary flows. And we’ve been trying to make that more public. We’ve been trying to understand how it’s been done at Intel and Arm, and the types of technologies that have been used. We’ve been working within OpenHW, where I look after the verification task. It is about open-source silicon with industrial quality. It is not about using open-source tools. What we’ve learned over the last few years is a lot of different techniques, a lot of different ways to do things, and we’ve evolved and built tools like this configurable reference model, like technology that does the verification for you, like functional coverage that we’ve been evolving into ways of checking how well Linux runs. People have been building test generators. Other companies have been building formal tools, like the OneSpin technology out of Siemens that is focused on RISC-V. Three or four other companies have been involved in the formal side of things. What we’re seeing is that there are some specific RISC-V technologies being built, there are verification IPs being built, and more and more, the EDA vendors are learning the methodologies people need, and they’re building the tools. But it’s early days. We are only five years into the real use of RISC-V, and probably a couple of years into the commercial part of it. There were five years of academic work before that. And companies like Codasip, and the other commercial vendors of silicon IP, are really evolving and building the technologies internally for verification. We’re trying to help build them as commercial tools, as are some of the EDA vendors. We’re at the beginning of this new age of RISC-V verification technologies.
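At their simplest, the test generators mentioned here are constrained-random instruction emitters. The toy below produces a reproducible arithmetic sequence as assembler text; real generators, such as the open-source riscv-dv, add register initialization, privilege modes, exceptions, and memory-access constraints on top of the same idea:

```python
# Toy constrained-random generator: emit a seeded, reproducible sequence of
# RV32I/RV64I register-register arithmetic instructions as assembly text.

import random

REG_POOL = [f"x{i}" for i in range(1, 32)]   # constraint: never write x0
OPS = ["add", "sub", "and", "or", "xor", "sll", "srl"]

def gen_test(n_instrs: int, seed: int = 0) -> str:
    rng = random.Random(seed)                # seeded for repeatable regressions
    lines = [".globl _start", "_start:"]
    for _ in range(n_instrs):
        op = rng.choice(OPS)
        rd, rs1, rs2 = (rng.choice(REG_POOL) for _ in range(3))
        lines.append(f"    {op} {rd}, {rs1}, {rs2}")
    lines.append("    ecall    # hand control back to the test harness")
    return "\n".join(lines)

print(gen_test(5))   # same seed, same test: run it on the DUT and the reference model
```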

Vittal: The mainstream processor developers know what they’re doing. They’ve done x86 and Arm before. They are adopting RISC-V, and they know exactly what to do. They’re also leveraging the open-source community. But RISC-V is also being adopted by mainstream designers, and that’s where a methodology is needed. That’s what is missing. Synopsys offers everything that is required to do verification and validation, both software and hardware. We have VIPs, we have formal techniques, we have data-path verification, but what is missing is a methodology. And the methodology comes with the expertise of processor verification engineers and other experts.

Eftimakis: That’s the secret sauce of IP vendors. That’s what we do internally.

Davidmann: Companies like Imperas are trying to make that more public. It might have been proprietary IP before. We give a 90-minute tutorial on a RISC-V processor verification reference flow. It lays out all the different pieces you need, which technologies are available today, and which are not. We talk about test generators based around commercial technology.

Vittal: We have something similar called a cookbook, which uses an open-source core and which our customers can download from the portal. It takes you through the whole verification process.

Beyer: It’s key to add new tooling, but we need the very configurable RISC-V reference model, and we need to make it available to the tooling and the flow. Then we can build something around it so that people without deep experience can still get to decent verification quality for their RISC-V cores.

Eftimakis: We have integrated tools into our flow, including Imperas and OneSpin. That’s one of the benefits we see of being part of the RISC-V ecosystem: we can leverage tools that have been built for the ecosystem and integrate them into our verification flow, combining comparison against models, simulation, formal verification, assertions, and so on.


