Start Verification Early To Avoid Pitfalls Later


While it is widely agreed that verification should start as early as possible, it often doesn't begin until simulation. That can be suboptimal for many of today's massive and complex SoCs.

It is well understood – at least from a theoretical point of view – that design verification should start as early as possible. The reality is that it doesn't always happen, for reasons ranging from enormous time-to-market pressure and an ever-growing feature list to a lack of foresight and discipline. But progress is being made.

Harry Foster, chief scientist for verification at Mentor Graphics, pointed out that functional verification must be an integral part of the entire specification and design process—not an afterthought. “Certainly the success of the entire project depends on the critical functional verification step. Therefore, there are often many stakeholders involved in the functional verification process—not just the verification team.”

That said, the functional verification process must start early in a project—at the project specification phase, he explained. “The key point here is that you should ‘never specify what you can’t verify.’ The same is true for design—it’s important that the design be verifiable. Hence, the verification teams should be actively involved in both specification and design—learning about key features that must be verified—and identifying potential architectural and design problems that might complicate the verification effort and warrant simplification.”

At design house Open-Silicon, principal SoC architect Jeff Scott noted that its design teams like to start functional verification in parallel with the RTL design, essentially as soon as there is a solid specification. “Ideally, the verification team will participate in the review of the specification and potentially point out architectural and/or design tweaks that might help verification proceed smoothly. Once the design is started, work on the verification environment can proceed. This can involve everything from the framework, scoreboarding and other utilities to models that predict functional behavior, including verification IP from the IP providers.” As a framework, Open-Silicon generally follows OVM/UVM methodology.
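The scoreboarding Scott mentions can be sketched conceptually in a few lines. The following Python model is purely illustrative (it is not Open-Silicon's environment, and the 8-bit adder reference model is a hypothetical design block): a reference model predicts expected results, and a scoreboard compares them in order against what the design under test actually produces.

```python
from collections import deque

class Scoreboard:
    """Minimal scoreboard: queue expected results from a reference
    model, then compare them in order against observed DUT outputs."""

    def __init__(self):
        self.expected = deque()
        self.mismatches = []

    def predict(self, value):
        self.expected.append(value)

    def observe(self, value):
        exp = self.expected.popleft()
        if exp != value:
            self.mismatches.append((exp, value))


def ref_adder(a, b):
    # Hypothetical reference model for an 8-bit adder block
    return (a + b) & 0xFF


sb = Scoreboard()
for a, b in [(1, 2), (250, 10), (100, 200)]:
    sb.predict(ref_adder(a, b))
    sb.observe((a + b) % 256)   # stand-in for the DUT's actual output
print(sb.mismatches)            # [] -- reference and DUT agree
```

In a real OVM/UVM testbench the same pattern appears as a `uvm_scoreboard` component fed by monitors, but the predict-and-compare structure is the same.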

The advantage of starting verification in parallel with RTL design is obviously schedule improvement. The disadvantage is that the verification environment is subject to a bit of churn as the RTL design progresses and specification changes are made. However, Scott believes the overall schedule will still be better if the work is done in parallel. “Of course, doing the design and verification work in parallel requires separate teams, but I think this helps overall design quality by having two sets of eyes interpreting the specification, and at some point they have to agree.”

Foster agreed. “The process of functional verification is really no different than design—and success depends on good verification planning. Planning originates by first reviewing the architectural and micro-architectural specifications with all stakeholders—and extracting the ‘verification objectives’ during this process. At this stage, it’s important to focus on ‘what’ needs to be verified (that is, the verification objectives), versus focusing on ‘how’ you plan to verify the design. For example, some teams make the mistake of focusing on creating their testbench infrastructure too soon (that is, the ‘how’), and in doing so, they often miss the identification of important verification objectives (that is, the ‘what’).”

Starting with verification in mind
For many designers, the approach to verification is a philosophical question. If it were up to Michael Sanie, senior director of verification marketing at Synopsys, every designer would start with verification in mind and design in a way that makes verification simpler, because there are concrete things that can be done to achieve that.

“Unfortunately this is not a reality yet because of the specialization that’s happening and it is increasingly becoming the case where designers design and verification guys verify. The design guys get started going with their own verification and then sometimes they don’t do all the great things that could make verification simpler,” he observed.

The approach also depends on the meaning of verification, Sanie said. “If you think verification is simulation – that is really something that starts a little later, and you have to have a core of the design figured out before you can start running it, or parts of it, through simulation. But there are different types of verification that start earlier, which are also done by designers.”

One of these is lint: language and structural checks that designers can run as early as they want – even when they have just a few hundred lines of code. They can start looking at a design before it is complete, he noted.
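To make the idea concrete, here is a toy structural check in the spirit of a single lint rule – flagging blocking assignments inside a clocked always block, where non-blocking assignments are normally expected. This is a deliberately simplified sketch (real lint tools parse the language properly rather than pattern-matching lines), but it shows why such checks can run on a few hundred lines of incomplete code:

```python
import re

def lint_seq_blocking(src):
    """Toy lint rule (illustration only): report line numbers where a
    blocking assignment ('=') appears inside a clocked always block,
    where non-blocking ('<=') would normally be expected."""
    issues = []
    in_seq = False
    for lineno, line in enumerate(src.splitlines(), 1):
        if re.search(r"always\s*@\s*\(\s*posedge", line):
            in_seq = True            # entering a clocked block
        elif line.strip().startswith("end"):
            in_seq = False           # crude end-of-block detection
        elif in_seq and re.search(r"[^<>=!]=[^=]", line):
            issues.append(lineno)    # '=' not part of '<=', '==', etc.
    return issues

rtl = """\
always @(posedge clk) begin
  q <= d;
  count = count + 1;
end
"""
print(lint_seq_blocking(rtl))   # [3] -- blocking assignment on line 3
```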

Another simple thing that could help with verification is assertions, but not every designer has the discipline to start with assertions in mind – they will add them afterwards. While it does slow the design process slightly, it doesn't render the designer ineffective; it just must be considered up front, Sanie said.
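"Starting with assertions in mind" means writing the checks alongside the functionality rather than bolting them on later. The sketch below uses a hypothetical FIFO reference model in Python to illustrate the principle (in RTL this would typically be done with SystemVerilog assertions): the overflow and underflow checks are part of the design from day one.

```python
class FifoModel:
    """Toy FIFO reference model with assertions written alongside the
    design itself -- the checks are not an afterthought."""

    def __init__(self, depth):
        self.depth = depth
        self.data = []

    def push(self, item):
        # Assertion embedded at the point where the rule must hold
        assert len(self.data) < self.depth, "FIFO overflow"
        self.data.append(item)

    def pop(self):
        assert self.data, "FIFO underflow"
        return self.data.pop(0)


f = FifoModel(depth=2)
f.push(1)
f.push(2)
print(f.pop())       # 1
f.push(3)
try:
    f.push(4)        # exceeds depth -- the embedded check fires
except AssertionError as e:
    print(e)         # FIFO overflow
```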

“If you were doing things right, then what you would do is start planning for verification at the architectural level so you can do performance validation – you architect in a way that later on you can do performance validation easier. You can do block-by-block verification easier; you can create boundaries that can be put into a much easier form of randomization so you can write testbenches easier. But today people very rarely do that upfront planning, mostly because of specialization – design guys are not verification savvy. Their job is to get designs out quickly,” he pointed out.
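The randomization Sanie describes is the idea behind constrained-random stimulus: generate varied inputs at a block boundary, but only within the legal space the interface allows. A minimal Python sketch (the word-aligned address and burst-length constraints are hypothetical, stand-ins for whatever a real interface specifies):

```python
import random

def random_word_write(rng):
    """Hypothetical constrained-random stimulus for a block boundary:
    a word-aligned address and a legal burst length, randomized only
    within the constraints the interface allows."""
    length = rng.choice([1, 2, 4, 8])       # legal burst lengths
    addr = rng.randrange(0, 1 << 16, 4)     # word-aligned (multiple of 4)
    return addr, length

rng = random.Random(0)                      # seeded for reproducibility
for _ in range(1000):
    addr, length = random_word_write(rng)
    # Every generated transaction stays inside the legal space
    assert addr % 4 == 0 and length in (1, 2, 4, 8)
```

Clean, well-chosen block boundaries are what make constraints like these easy to state, which is why this kind of planning pays off when it happens at the architectural level.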

One way to ease this tension is to have the verification architect be in contact with the chip architect before simulation, where the verification team would typically come into the picture.

However, Sanie said, this is a delicate balance. “There are times, to be honest, when optimizing for verification may take you away from being optimal on the design side – that's the trade-off that many people are not happy with. Then of course you have time-to-market pressure with a bunch of new features and all of that – that becomes a bottleneck.”

Progress is happening
There is some good news in all of this. “People are beginning to understand the difference between the verification obligation that integration throws up and the verification obligation which is intrinsic to the functionality that you are implementing,” offered Pranav Ashar, chief technology officer at Real Intent.

While the complexity of today's chips is quite high, the integration requirement is continually becoming better understood, which is allowing static techniques to help with the verification of these SoCs, he said. “Anybody would say that yes, it is always better to verify as early as possible, but if you want to verify early then you have to know what you are verifying. You have to understand the problem that you are tackling, and you have to have enough information at that level to make a meaningful stab at the verification. Whatever you do in terms of the verification has to carry through the refinement steps that you are going to take later on.”

This all boils down to the fact that the importance of verification planning cannot be overstated, Mentor Graphics’ Foster concluded.

Additional resources:

On verification planning: The Verification Academy is a free resource available to any engineer whose goal is to develop the skills necessary to mature their organization’s advanced functional verification process capabilities.


