What Not To Verify

Successful verification increasingly depends on being much more selective.

It is well understood that verification is all about mitigating and managing risk, and success here begins with a good verification planning process.

During the planning process, the project team creates a list of specific design functions and use cases that must be verified—and they identify the technique used to verify each specific item on the list. Those techniques can include subsystem-level simulation, system-level simulation, formal property checking, emulation, FPGA prototyping and post-silicon validation, among others. Then the list is partitioned into sets to help assess risk, separating out critical functions, secondary functions, and non-verified functions.

“The goal of this exercise is to associate the level of risk with the functionality that must be verified so that appropriate emphasis and resources are assigned to the task,” said Harry Foster, chief verification scientist at Mentor Graphics. “Items in the critical functionality set can render the chip dead if they do not work correctly at tapeout. This situation prevents the project from performing post-silicon validation of the design in the lab. Hence, to mitigate risk, the project team must have extremely high confidence that critical functions work correctly prior to tapeout.”

Items in the secondary functions set are certainly important, but they will not kill the chip in the lab, Foster said. “For example, it might be possible to enable some optional performance-related requirement on the design. In certain situations, due to schedule or resources, the project team members might limit the amount of verification they apply to secondary (non-critical) functionality.”

Items in the non-verified functions set involve functionality that the project team has identified as minimal risk to the success of the project if not verified, he noted. “For example, today’s SoCs generally integrate many internally developed and externally purchased IPs. To effectively manage verification cost, the project team often consciously decides not to verify certain functionality in these re-used or purchased blocks. However, as part of the non-verified functions set, the project team must capture the assumptions about functionality they believe the IP provider previously verified and under what configurations. Then, the project team must review these assumptions with the IP provider. This is the non-verified functionality list.”
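
Foster’s three-way split can be captured in a simple tagged plan. The sketch below is a hypothetical Python illustration (the item names, techniques, risk tiers, and IP-assumption fields are all invented for the example), not the format of any particular planning tool:

    from dataclasses import dataclass, field
    from enum import Enum

    class Risk(Enum):
        CRITICAL = "critical"          # chip is dead at tapeout if this is broken
        SECONDARY = "secondary"        # important, but silicon is still debuggable
        NON_VERIFIED = "non_verified"  # consciously left unverified

    @dataclass
    class PlanItem:
        name: str
        technique: str                 # e.g. "formal", "subsystem sim", "emulation"
        risk: Risk
        # For re-used or purchased IP: what we assume the provider already covered.
        provider_assumptions: list = field(default_factory=list)

    plan = [
        PlanItem("DDR controller read/write ordering", "formal", Risk.CRITICAL),
        PlanItem("Optional performance counters", "subsystem sim", Risk.SECONDARY),
        PlanItem("USB PHY internals (purchased IP)", "none", Risk.NON_VERIFIED,
                 provider_assumptions=["verified by provider in USB 2.0 mode",
                                       "default lane configuration only"]),
    ]

    # The non-verified set is the list that gets reviewed with each IP provider.
    for item in plan:
        if item.risk is Risk.NON_VERIFIED:
            print(item.name, "->", item.provider_assumptions)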

One of the problems with commercial IP is that developers typically have only a vague idea of how it will be used, how it will be connected in the final chip, and what else is nearby.

“It is impractical to simulate all the scenarios that the IP will be used in,” said Prasad Subramaniam, vice president of design technology at eSilicon. “Most people simulate the most commonly used scenarios. There could always be a corner case that was not simulated or verified that could cause a problem, even for familiar IP. Typically one builds in enough flexibility in the system to address these types of issues in software, so the risk is minimized. With hard IP, especially mixed signal IP, the same applies with respect to process conditions. It may not be possible to verify the IP at all PVT corners. One typically develops a robust design with enough margin and characterizes the silicon to understand where the fallout, if any, could occur.”
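
The same triage applies to the PVT problem Subramaniam describes: the full process/voltage/temperature cross-product is rarely simulated, so a subset is chosen and the rest is covered by design margin and silicon characterization. A hypothetical sketch (the corner values and simulated subset are invented):

    from itertools import product

    processes = ["ss", "tt", "ff"]
    voltages = [0.72, 0.80, 0.88]   # volts
    temps = [-40, 25, 125]          # degrees C

    all_corners = set(product(processes, voltages, temps))

    # Only a subset is actually simulated; the remainder relies on design margin
    # and post-silicon characterization.
    simulated = {("ss", 0.72, 125), ("ff", 0.88, -40), ("tt", 0.80, 25)}

    uncovered = all_corners - simulated
    print(f"{len(uncovered)} of {len(all_corners)} PVT corners rely on margin/characterization")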

And, in many cases, that’s good enough. Ganesh Venkatakrishnan, verification team lead at Open-Silicon, noted that because an SoC is mainly an integration of pre-verified IP blocks sourced either from an in-house IP team or from a third party, one area that need not come under the scope of SoC verification is re-verification of all the internals of these IPs. “We can limit our verification to connectivity tests and system-level use case testing. Inherently, some level of IP internals gets verified in this process. On coverage, as well, we can limit it to the interface level of the IPs.”
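
Connectivity testing of that kind often reduces to checking that every IP port is tied to the intended top-level net, which can be scripted without touching the IP internals. A minimal, hypothetical sketch in Python (the block names, ports, and nets stand in for data a real flow would extract from the integration spec and the top-level netlist):

    # Expected IP-to-SoC connectivity, as captured in the integration spec.
    expected = {
        ("uart0", "tx"):         "pad_uart_tx",
        ("uart0", "rx"):         "pad_uart_rx",
        ("ddr_ctrl", "axi_clk"): "clk_ddr",
    }

    # Actual connectivity, e.g. extracted from the top-level netlist.
    actual = {
        ("uart0", "tx"):         "pad_uart_tx",
        ("uart0", "rx"):         "pad_uart_rx",
        ("ddr_ctrl", "axi_clk"): "clk_sys",   # mis-hooked clock: a classic integration bug
    }

    for (block, port), net in expected.items():
        found = actual.get((block, port))
        if found != net:
            print(f"{block}.{port}: expected {net}, found {found}")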

However, he said, RTL blocks that are newly designed or that haven’t undergone rigorous verification need to be thoroughly verified at the block level. Full functional and code coverage is highly recommended. “As far as RTL simulators go, we typically run stripped-down versions of software code to test hardware-software co-verification. A full-fledged co-verification test environment is anyway feasible only on prototyping platforms such as virtual platforms, FPGAs, hardware emulators, or a combination of them. Then, gate-level verification can be restricted to multi-cycle paths and high-speed interfaces where setup/hold requirements are very stringent. Typically, only around 10 percent of the total number of test cases in RTL verification needs to be run at gate level.”
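
Selecting that gate-level subset can be as mechanical as filtering the RTL regression list by the features each test exercises. Another hypothetical sketch (the test names and tags are invented):

    rtl_regression = [
        {"name": "smoke_boot",       "tags": {"basic"}},
        {"name": "ddr_train_seq",    "tags": {"high_speed_if"}},
        {"name": "cpu_mcp_retime",   "tags": {"multicycle_path"}},
        {"name": "dma_random_burst", "tags": {"basic"}},
    ]

    # Only tests exercising multi-cycle paths or stringent setup/hold interfaces
    # are re-run on the gate-level netlist with back-annotated timing.
    gate_level_tags = {"multicycle_path", "high_speed_if"}
    gate_level = [t["name"] for t in rtl_regression if t["tags"] & gate_level_tags]

    print(f"gate-level subset: {gate_level} ({len(gate_level)}/{len(rtl_regression)} tests)")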

At the start of a project, the SoC architect and the design and verification leads work together to identify areas that cannot be compromised from a verification perspective. That requires a proven SoC verification methodology to help ensure designs tape out with a higher chance of first-pass silicon.

“In the context of a use-case definition, this is typically something users think about at the system-on-chip (SoC) level when they are integrating blocks,” said Frank Schirrmeister, group director of product management and marketing at Cadence.

Schirrmeister contends the IP itself always has to be fully verified because it has to live within different environments. “For sub-systems and SoCs, the situation becomes a bit more grey. In this instance, users specifically define the use cases and scenarios they expect the system to work in. In doing so, they are implicitly leaving out the scenarios that are not on the list to be tested. As a result, the specific use cases for which the design is intended to work are explicitly tested. If an end user deviates from this and tries a scenario that has not been tested, I would expect the risk of finding additional bugs, which then go into the errata, to be higher.”

This is where software-driven verification and the proper definition of scenarios, including the decision on how to use graphs, comes in. “The Accellera Portable Stimulus Working Group is currently dealing with this as they determine how to use a stimulus across different engines,” he added.
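
One way to picture graph-based scenario definition is as a random walk over allowed use-case transitions: anything not reachable in the graph is implicitly left untested. This is only a toy sketch with invented states, not the Portable Stimulus notation itself:

    import random

    # Allowed use-case transitions for a hypothetical camera-phone SoC.
    scenario_graph = {
        "boot":           ["idle"],
        "idle":           ["capture_photo", "play_video", "idle"],
        "capture_photo":  ["store_to_flash"],
        "play_video":     ["idle"],
        "store_to_flash": ["idle"],
    }

    def random_scenario(start="boot", steps=6, seed=None):
        """Walk the graph to produce one legal use-case sequence."""
        rng = random.Random(seed)
        state, path = start, [start]
        for _ in range(steps):
            state = rng.choice(scenario_graph[state])
            path.append(state)
        return path

    print(random_scenario(seed=1))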

Benoit de Lescure, director of application engineering at Arteris, is seeing something similar. “At the system level, the SoC team will strongly focus its validation efforts on the use cases it knows and understands, and leave out anything that is not intended to be supported. Of course, most of the value of buying external IP is in the fact that it is already validated, so the SoC team will only verify the IP integration, not the IP behavior itself. For that they rely on us—and they are very explicit about that.”

He pointed out this is particularly true for Arteris customers. “They never attempt to verify the network-on-chip IP behavior itself. Rather, they will verify only its interactions with their system (integration, performance, configuration).”

At the end of the day, when it comes to mitigating risk, Foster concluded, “Something else that must be addressed during the verification planning process, to borrow a phrase from former Secretary of Defense Donald Rumsfeld, is the ‘unknown unknowns.’ Today, we have developed verification technology to help us uncover bugs related to things we didn’t think of during the planning phase, such as constrained-random simulation and formal property checking. Hence, during the planning process it is critical that today’s SoC projects not only put in place an effective process that verifies the ‘known knowns,’ but also adopt technology that will help them identify bugs associated with the ‘unknown unknowns.’”


