Plan for multiple complementary verification methodologies for different levels of processor integration.
With the explosive adoption of RISC-V, processor verification has become a hot topic. This is due both to the criticality of the processor IP in the SoC and to the fact that many experienced SoC verification engineers are doing their first processor verification project. While there are similarities between SoC verification and processor verification, there are also significant differences, including the complexity of the processor and the processing subsystem. In many cases, verification teams start with open source tools and models because of the low barrier to entry, then move to commercial RISC-V verification solutions once the complexity of the problem is better understood.
The RISC-V Processor Verification Disconnect occurs because the consumers of the processor IP expect a product with the quality of the major processor IP developers. While it varies from processor to processor, commercial processor IP developers have reported using 10¹⁵ cycles for verification of processor IP, the equivalent of running 10⁴ simulators continuously 24×7 for 1 year. Using hardware-assisted verification (HAV) tools such as hardware emulation and FPGA prototyping helps close the gap, but these emulation and prototyping products are typically a scarce resource in companies. The big question then is, “What is my verification plan to close this perceived gap between developers and consumers of the IP?” Or, put another way, “How am I going to make efficient use of the resources, both engineers and tools, at hand?”
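As a rough sanity check on those numbers (the per-simulator throughput below is an assumed figure of about 3,000 RTL simulation cycles per second, not one from the source):

$$
10^{4}\ \text{simulators}\times\left(3.15\times 10^{7}\ \tfrac{\text{s}}{\text{year}}\right)\times\left(\sim\!3\times 10^{3}\ \tfrac{\text{cycles}}{\text{s}}\right)\approx 10^{15}\ \text{cycles}
$$

In other words, the reported effort is self-consistent at typical RTL simulation speeds.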
Let’s start by elaborating on the axes and challenges of the processor verification solution space. Where your processor fits in this multi-dimensional space will determine the methodologies and tools to employ for its verification. The goal, of course, is to develop a comprehensive verification plan that your team can execute to deliver quality IP to your consumers.
An interesting starting point for the verification plan is looking at the level of integration for the processor IP and deciding how dynamic and formal verification tools will be used at the various steps along the integration path. An example of this is shown in Table 1:
Table 1: Dynamic and formal verification can be used at various levels of processor integration.
If we drill down to just the single-hart level (a “hart” is a hardware thread in a RISC-V processor), with no shared memory and no cache management operations, we can look at the 5 levels of verification methodology, as shown in Figure 1:
Fig. 1: 5 levels of verification methodology for single hart DV.
Everyone is familiar, and comfortable, with the post-simulation trace-compare methodology. Ease of setup and use is the great advantage of this flow. However, it has disadvantages:

- Mismatches are detected only after the simulation completes, long after the offending instruction retired, which lengthens debug turnaround.
- Asynchronous events such as interrupts and debug requests are difficult to verify, because the reference model cannot independently determine exactly when the RTL takes them.
- Long-running tests produce very large trace files that must be stored and compared.
These disadvantages limit the usefulness of this flow. Often this flow is used for simple instruction verification, with asynchronous lockstep continuous compare used to verify asynchronous event functionality and other complexities at the individual hart level.
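To make the trace-compare step concrete, here is a minimal sketch in Python. The trace format and helper names are hypothetical; real flows log full architectural state (PC, register writes, CSR updates) for every retired instruction.

```python
# Minimal sketch of the post-simulation trace-compare step.

def load_trace(path):
    """Read a trace file into a list of per-instruction state lines."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def compare_traces(rtl_trace, ref_trace):
    """Diff two traces; report the first divergence, if any."""
    for i, (rtl_line, ref_line) in enumerate(zip(rtl_trace, ref_trace)):
        if rtl_line != ref_line:
            # Found only after both runs completed (the debug-turnaround
            # disadvantage noted above).
            return f"divergence at retired instruction {i}: RTL={rtl_line!r} REF={ref_line!r}"
    if len(rtl_trace) != len(ref_trace):
        return "traces differ in length"
    return "traces match"

# Tiny demo with inline traces (normally these come from load_trace()).
rtl = ["pc=0x80000000 x1=0x0", "pc=0x80000004 x1=0x1"]
ref = ["pc=0x80000000 x1=0x0", "pc=0x80000004 x1=0x2"]
print(compare_traces(rtl, ref))  # -> divergence at retired instruction 1
```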
Asynchronous lockstep continuous compare is a co-simulation methodology, with the RTL and reference model simulations running in parallel, in lockstep, in the verification environment.
Figure 2 shows a block diagram of the Synopsys verification environment for the asynchronous lockstep continuous compare flow.
Fig. 2: Co-simulation asynchronous lockstep continuous compare flow for single hart RISC-V processor verification.
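As a toy illustration of the lockstep loop in Figure 2, here is a sketch in Python. ToyHart and ToyRefModel are hypothetical stand-ins for the RTL simulation and the reference model; they are not the Synopsys environment.

```python
# Toy sketch of asynchronous lockstep continuous compare: step the DUT
# and the reference model one retired instruction at a time and compare
# full architectural state after every step.
from dataclasses import dataclass

@dataclass
class ArchState:
    pc: int = 0
    regs: tuple = (0, 0, 0, 0)  # toy 4-register file

class ToyHart:
    """Stand-in for the RTL hart: accumulates the PC into register 0."""
    def __init__(self):
        self.state = ArchState()

    def step(self):
        regs = list(self.state.regs)
        regs[0] += self.state.pc
        self.state = ArchState(pc=self.state.pc + 4, regs=tuple(regs))
        return self.state

class ToyRefModel(ToyHart):
    """Stand-in reference model with identical toy semantics."""

def lockstep_compare(rtl, ref, n_instructions):
    for i in range(n_instructions):
        rtl_state = rtl.step()  # retire one instruction on the DUT
        ref_state = ref.step()  # step the reference model once
        # Continuous compare: divergence is flagged at the exact
        # instruction that caused it, not at the end of the run. A real
        # environment would also forward asynchronous events (interrupts,
        # debug requests) to the reference model at this point.
        assert rtl_state == ref_state, (
            f"mismatch at instruction {i}: {rtl_state} vs {ref_state}")
    print(f"{n_instructions} instructions compared in lockstep, no divergence")

lockstep_compare(ToyHart(), ToyRefModel(), 100)
```

Because the compare runs continuously, a bug is caught at the retiring instruction itself, which is what makes this flow suitable for asynchronous event verification.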
Another interesting topic is commercial versus open source tools and models. This is relevant when looking at reference models, functional coverage, and test generation. Open source tools and models are a tempting starting point for verification because they are easily available and free to use. However, there are advantages to using commercially available tools and models:

- Dedicated support, maintenance, and enhancement, rather than best-effort community help or in-house engineering time.
- Tracking of the evolving RISC-V specifications and newly ratified extensions.
- A tape-out track record: knowing how many processors have shipped using the tools and models.
To summarize, here is what is needed in the verification plan to close the verification disconnect. Most importantly, the verification plan must be driven by metrics, specifically functional coverage. The verification team needs to consider the parameters of the processor being developed and plan for multiple complementary methodologies at the different levels of processor integration. While open source tools and models are a possibility, a detailed make-versus-buy analysis should be performed to understand whether the resources required to maintain and enhance the open source code, and to support its users, will cost less than licensing commercial products; a sketch of such a comparison follows this paragraph. While this is mostly a quantitative exercise, qualitative comparisons should also be made, including understanding how many processors have been taped out using the specific open source and commercial tools.
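As a hedged illustration of the quantitative side only, a back-of-the-envelope make-versus-buy model might look like this. Every number is an invented placeholder, not real pricing or effort data.

```python
# Toy annual-cost comparison for the make-versus-buy analysis.
# All inputs are illustrative placeholders.

def make_cost(maintenance_fte, support_fte, loaded_cost_per_fte):
    """Annual cost of maintaining, enhancing, and supporting open source
    tools and models in-house (FTE = full-time engineer equivalent)."""
    return (maintenance_fte + support_fte) * loaded_cost_per_fte

def buy_cost(annual_license_fee, seats):
    """Annual cost of licensing a commercial solution."""
    return annual_license_fee * seats

make = make_cost(maintenance_fte=1.5, support_fte=0.5,
                 loaded_cost_per_fte=250_000)
buy = buy_cost(annual_license_fee=100_000, seats=2)
print(f"make: ${make:,.0f}/year  buy: ${buy:,.0f}/year")
```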