Analysis: As bottom-up verification runs out of steam, companies need to start looking at a top-down approach.
Verification traditionally has followed the path of the design team. When designers change their methodology or tooling, verification engineers follow and attempt to incorporate the change into their flow.
The few times in the past when verification has attempted to lead, it has not ended well. An example was the attempt to get design teams to use assertions. Assertions have proven valuable in formal verification (where they are called properties and are an essential aspect of static verification). Consequently, it seemed like a good idea to insert them into the design flow, where they could help locate bugs more quickly and uncover issues that might otherwise remain dormant in the design. But that was not to be. Design teams saw little value in assertions, which would have required them to learn a new and very foreign language. Most companies have stopped trying to force this change, even though those who have adopted assertions are somewhat positive about the results.
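To give a flavor of why designers saw assertions as a foreign language, here is a minimal sketch of a SystemVerilog Assertion (SVA). The module, signal names and timing window are hypothetical, not drawn from any particular design.

```systemverilog
// Minimal SVA sketch (hypothetical signals): every request must be
// granted within 1 to 4 clock cycles.
module handshake_checker (
  input logic clk,
  input logic rst_n,
  input logic req,
  input logic gnt
);
  property req_gets_gnt;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] gnt;
  endproperty

  // The same property can be checked dynamically in simulation or proven
  // exhaustively by a formal (static) verification tool.
  assert property (req_gets_gnt)
    else $error("request not granted within 4 cycles");
endmodule
```

The implication operator and cycle ranges are concise, but they bear little resemblance to the RTL coding style most designers work in every day.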
But the design flow is definitely in flux, brought about by the changing nature of the design process. Gone are the days when the entire chip was designed in-house. A chip today contains a large quantity of IP blocks, often covering 90% of the chip surface. While the area left for custom logic is only a small percentage of the chip, it still has the same level of complexity that entire chips had just a few years back. Yet the design team has an additional problem: how to integrate the pieces of the system. This translates into an even bigger problem for the verification team, which is responsible for validating system-level functionality, performance and power consumption. Making the situation worse is the fact that the EDA industry has been slow to provide tools to help either team with these tasks.
Across the semiconductor industry there is widespread recognition, driven by rising complexity and costs, that change is required at every step of the design chain. Verification is no exception. Historically, the verification approach was bottom-up because no executable models existed until the RTL implementation became available. Integration and verification of the blocks and subsystems run into capacity and performance problems in software simulators, making emulation and rapid prototyping essential tools. And by the time the entire system has been assembled, it is too late to find incompatibilities or problems with performance and power consumption. These types of issues have to be found much earlier in the design process, when changes are less disruptive.
Another change is the increasing dependence on software. The hardware is a platform that enables software to provide the necessary functionality, and software teams already are considerably larger than hardware teams. The cost associated with software is rising rapidly, and even fairly simple devices can have many millions of lines of code running on them. The software team requires access to an accurate representation of the hardware much earlier in the process.
What’s becoming clear is that a top-down flow has to augment the existing bottom-up flow. Behavioral models for hardware are becoming more common, and these can be connected to form a virtual prototype of the hardware. These virtual prototypes can run the same software as the final silicon at speeds close to real time, and they are available long before RTL coding has been completed. While there are still issues with model availability, this route is showing a lot of promise. It also enables the verification team to think about the problem in a different manner, something that was not truly possible in the past: rather than just working to eliminate bugs from the design, they can set out to prove that required functionality is present in it.
But does this mean we can abandon the bottom-up verification flow? Unfortunately not, but there is an opportunity to redefine the verification task as a combination of the two flows without it taking additional time or effort. The size of the verification team has been growing steadily over the past decade, driven primarily by attempts to perform exhaustive verification, which can be very wasteful. Jim Hogan, a noted venture capitalist for EDA and semiconductor companies, commented in a panel discussion during the Cadence Verification Summit that the goal in his companies is to perform the minimum amount of verification. “Verification builds confidence, and I want to be able to go to production as soon as I have sufficient confidence in the hardware. I can fix issues using software later.”
Bottom-up verification
Building an IP block, especially when you don’t know how the block is going to be used, means it has to be verified for all possible conditions, configurations and settings. An IP company’s credibility hinges on the IP being bug-free; users would see far less value in IP if they had to debug those blocks themselves.
Methodologies such as constrained-random stimulus generation, supported by the SystemVerilog language and encapsulated within the Universal Verification Methodology (UVM), are tuned for this kind of exhaustive verification task. Randomization exercises conditions that were not specifically anticipated, while functional coverage shows the areas that have not yet been verified, allowing the constraints to be focused on what remains.
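As a rough illustration of those mechanics, here is a small constrained-random sketch in plain SystemVerilog, kept outside the full UVM class hierarchy for brevity. The transaction fields, constraint ranges and coverage bins are all invented for the example.

```systemverilog
// Hypothetical bus transaction with constrained-random fields and
// embedded functional coverage.
class bus_txn;
  rand bit [31:0] addr;
  rand bit [7:0]  len;
  rand bit        write;

  // Constraints steer randomization toward legal, interesting stimulus.
  constraint legal_c {
    addr inside {[32'h0000_0000:32'h0000_FFFF]};
    len  inside {[1:64]};
  }

  // Functional coverage records which combinations have been exercised,
  // exposing the areas that remain unverified.
  covergroup txn_cg;
    cp_len   : coverpoint len  { bins small = {[1:8]}; bins large = {[9:64]}; }
    cp_write : coverpoint write;
    cp_cross : cross cp_len, cp_write;
  endgroup

  function new();
    txn_cg = new();
  endfunction

  function void sample();
    txn_cg.sample();
  endfunction
endclass

module tb;
  bus_txn t = new();

  initial begin
    repeat (100) begin
      if (!t.randomize()) $fatal(1, "randomize failed"); // constrained-random generation
      t.sample();                                        // accumulate functional coverage
    end
    $display("functional coverage = %0.1f%%", t.txn_cg.get_coverage());
  end
endmodule
```

The constraint solver keeps stimulus legal, while the coverage report shows which bins have never been hit, which is where further constraints or directed tests get aimed.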
But this is not reality for most IP companies. When a new IP block is developed, there usually are a few leading companies that will be the first to use it, and these early adoptions generally are treated as partnerships. The block, while having gone through extensive verification, is not yet bulletproof. It may have been verified in only a few configurations and with some limitations on the flexibility of its interface. But how does an IP company convey exactly what has been fully verified? This is a common complaint from IP users, and it also points to the limitations of the coverage models in use. There is a large disconnect between what exists in a specification and the coverpoints defined to show that the functionality supporting that specification has been observed in the design.
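One way teams try to narrow that disconnect, in the absence of better tooling, is to tie each coverpoint explicitly back to the specification clause it is meant to demonstrate. A hedged sketch follows; the spec section numbers, signals and burst lengths are invented.

```systemverilog
// Hypothetical coverage model whose coverpoints are traceable to spec clauses.
module spec_coverage_sketch (
  input logic       clk,
  input logic [4:0] burst_len,
  input logic       big_endian
);
  covergroup spec_cov @(posedge clk);
    // Spec 4.2.1: "Burst lengths of 1, 4, 8 and 16 shall be supported."
    cp_burst_len : coverpoint burst_len { bins legal[] = {1, 4, 8, 16}; }
    // Spec 4.3.3: "Both little- and big-endian transfers shall be supported."
    cp_endian    : coverpoint big_endian;
  endgroup

  spec_cov cg = new();
endmodule
```

Even this simple traceability makes it easier for an IP vendor to state which parts of the specification have actually been exercised, and in which configurations.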
Just because a block has been 100% verified does not mean it will work correctly when integrated into a larger design. There may be issues with the specification, or the specification may have been misunderstood. In addition, when the block is integrated with others, resource sharing may hamper performance, or power management and control may not work as expected. Thus verification has to be performed on the integrated block. But how much verification is necessary? It may not be possible to exercise all of the block-level functionality once integrated, so new coverage models have to be developed.
An additional problem with bottom-up verification lies in the tooling itself. The UVM has no notion of an embedded processor in a design, meaning that processors have to be removed during verification and each processor bus treated as just another interface. Many IP blocks now contain one or more processors, so this is becoming a larger issue.
Top-down verification
Top-down verification is an emerging area, with only a few tools available today to support it. However, even without tooling, many companies are putting elements of it into practice. Most of these methodologies start with the definition of use cases, or scenarios, which define what functionality must be present in a system. Hogan has stated that Apple has 40,000 or more of these scenarios to define what a phone must do. Scenarios also can define things such as performance, power and other non-functional requirements. Without automation from tools, most of these scenarios are likely to be created as directed tests: code is written for each of the processors, and interaction with a testbench is cobbled together in an ad hoc manner. Once a use case has been executed, it demonstrates at least one way in which the design supports the required functionality.
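As an illustration of how such a directed scenario often looks, here is a hedged SystemVerilog sketch of a single use case ("capture a frame and DMA it to memory"). The register map and helper tasks are invented stubs; in a real flow the processor-side portion would typically be C code cross-compiled for the embedded core, with ad hoc hooks into the testbench.

```systemverilog
// Hypothetical directed test for one use case. Register addresses and
// helper tasks are stand-ins for real bus-functional or processor code.
module use_case_sketch;
  localparam int CAM_CTRL = 'h1000, DMA_SRC  = 'h2000,
                 DMA_DST  = 'h2004, DMA_CTRL = 'h2008;

  // Stub for a real bus-functional register write.
  task automatic write_reg(int addr, int data);
    $display("WRITE 0x%0h = 0x%0h", addr, data);
  endtask

  // Stub for waiting on an interrupt from the DUT.
  task automatic wait_for_irq(string irq);
    #100 $display("IRQ %s seen", irq);
  endtask

  initial begin
    write_reg(CAM_CTRL, 1);       // enable capture
    write_reg(DMA_SRC,  'h4000);  // camera FIFO address
    write_reg(DMA_DST,  'h8000);  // frame buffer address
    write_reg(DMA_CTRL, 1);       // start transfer
    wait_for_irq("dma_done");
    // A real test would check the frame buffer contents here.
    $display("use case complete");
  end
endmodule
```

Each such test proves one path through one scenario, which is exactly why hand-writing tens of thousands of them does not scale without automation.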
The principle of constrained-random generation can be applied to this type of design, although not with the existing UVM methodology, which would need to be updated to generate code that runs on the embedded processors. It is likely, however, that much better solutions will emerge that are more aligned with the top-down approach and can target non-functional aspects such as performance and power just as easily as functionality. The emerging consensus, based on tools from Mentor Graphics, Breker Verification Systems and Vayavya Labs, is that these tools will be graph-based. Each generated test directly targets a particular scenario, so this strategy should be far more efficient than the existing constrained-random approach. Mentor has suggested that such tests may be one or two orders of magnitude more effective than UVM-based verification. When that efficiency is coupled with the execution speed of behavioral models, top-down verification shows a lot of promise, and integration issues may well become a thing of the past.
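To make the graph-based idea concrete, here is a toy sketch that does not reflect any vendor's actual implementation: scenario steps are graph nodes, legal orderings are edges, and each random walk from entry to exit yields one directed test.

```systemverilog
// Toy graph-based scenario generator. The steps and their legal successors
// are hypothetical; each random walk through the graph is one generated test.
module graph_gen_sketch;
  typedef enum {IDLE, CONFIG, CAPTURE, DMA, CHECK, DONE} step_e;

  step_e next_steps[step_e][$];   // the scenario graph: step -> legal successors

  initial begin
    step_e step;

    next_steps[IDLE]    = '{CONFIG};
    next_steps[CONFIG]  = '{CAPTURE, DMA};
    next_steps[CAPTURE] = '{DMA};
    next_steps[DMA]     = '{CHECK};
    next_steps[CHECK]   = '{DONE};

    // Generate three tests, each a random walk from IDLE to DONE.
    repeat (3) begin
      step = IDLE;
      $write("scenario:");
      while (step != DONE) begin
        $write(" %s", step.name());
        step = next_steps[step][$urandom_range(next_steps[step].size() - 1)];
      end
      $display(" DONE");
    end
  end
endmodule
```

Because every walk corresponds to a legal scenario by construction, no simulation cycles are spent on stimulus that targets nothing, which is the efficiency argument made for these tools.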
Meet in the middle
Before getting too excited about the possibilities of the top-down flow, remember that both flows will probably be around for a long time and will have to work together with the minimum amount of overlap possible. What is clear is that the bottom-up flow cannot survive on its own for much longer, and now is the time to start thinking about the overall verification strategy being deployed in your company.
In the future, we will delve into many of the issues touched on in this article. If you have attempted to use a top-down verification strategy, or if you have concerns about it, I would love to talk to you about your experiences. Please contact me here.