Solid Verification Methodology Essential To Productivity

Functional SoC verification continues to push the limits of available resources, forcing verification teams to look for new ways to improve productivity.

Verifying SoCs from a functional perspective pushes the limits of already lean resources, driving verification teams to seek out new ways to improve the productivity of verification tasks. And because verification is a time-bounded task, the challenge is daunting.

It is well understood that consumer electronics is pushing the envelope in terms of how much technology is packed into a single SoC, while also putting huge scheduling pressure on engineering teams. They are tasked with delivering increasingly complex SoCs in the same time as their previous project, or less, while meeting low-power requirements and delivering a growing amount of software content.

RTL verification continues to be the workhorse for verifying designs in the most cost-effective and most debug-efficient way, because at RTL a design can quickly be simulated at multiple levels. “They can do it at block level, they can do it at a cluster level where there are multiple blocks, and it can be done at the SoC level. There is a lot of flexibility in terms of the hierarchy at which you can verify. You can run the simulations quickly, you can find problems, and you can do the deep debug and triaging quickly,” explained Swami Venkat, senior director of marketing for Synopsys’ verification group.

For this reason, RTL verification remains one of the most important focus areas for engineers verifying SoCs.

Directed testing and constrained random verification are two of the most common approaches used in functional verification. But Jayaram Bhasker, architect at eSilicon, observed that the latter is not used nearly as often because engineering teams are limited in time, and therefore end up doing just the directed testing as part of their verification plans.

Synopsys’ Venkat disagrees. “More and more engineers are deploying constrained random based verification. There used to be a time when an engineer had to think about the actual behavior of the design and conjure up scenarios in his head and write what used to be directed tests that are targeted at specific functionality of a particular design. One of the inherent limitations there is that it depends on the ability of an engineer to think of all the possible scenarios, and obviously because it is manual, it is going to be slow.”

In constrained random verification, the software generates various scenarios rather than the engineer creating them manually. “As long as you have multiple different stimuli and each representing different scenarios, you would be able to run them on the machine and you can have lots and lots of simulations running at any point in time,” he said.
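
As a rough sketch of the idea, and assuming a made-up bus transaction rather than any real design, the SystemVerilog class below declares random fields and constraints; each call to randomize() asks the solver, not the engineer, to pick a new legal scenario.

    // Minimal constrained-random stimulus sketch. The transaction class,
    // field names and value ranges are hypothetical.
    class bus_txn;
      typedef enum {READ, WRITE} kind_e;

      rand kind_e       kind;
      rand bit [31:0]   addr;
      rand int unsigned len;

      // Constraints steer the solver toward legal, interesting scenarios
      // instead of relying on hand-written directed values.
      constraint c_len  { len inside {[1:16]}; }
      constraint c_addr { addr[1:0] == 2'b00; addr < 32'h1000_0000; }
      constraint c_mix  { kind dist {READ := 60, WRITE := 40}; }
    endclass

    module tb;
      initial begin
        bus_txn txn = new();
        repeat (1000) begin
          if (!txn.randomize()) $fatal(1, "randomization failed");
          // A driver/BFM would push txn into the DUT here (omitted).
          $display("kind=%s addr=%h len=%0d", txn.kind.name(), txn.addr, txn.len);
        end
      end
    endmodule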

In this usage scenario, the efficiency and effectiveness of the constraint solver become extremely important, and the ability of the tool to perform coverage analysis becomes equally important because there are a lot of random tests running. “The engineer would want to know the additional coverage that has been achieved by each individual test and compare that against the spec to get some sort of measurable metric to know the progress that the environment is making or the verification engineer is making,” Venkat added.
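
A companion sketch of the coverage side of that loop, again with purely illustrative bins: a covergroup samples the same fields the generator randomizes, so each test’s incremental contribution can be reported against the plan.

    // Functional-coverage sketch: sample the same fields the random
    // generator produces so progress can be measured against the plan.
    // The bins and ranges are illustrative, not tied to a real memory map.
    module coverage_sketch;
      typedef enum {READ, WRITE} kind_e;

      kind_e       kind;
      bit [31:0]   addr;
      int unsigned len;

      covergroup bus_cov;
        cp_kind : coverpoint kind;
        cp_addr : coverpoint addr {
          bins low  = {[32'h0000_0000 : 32'h00FF_FFFF]};
          bins high = {[32'h0F00_0000 : 32'h0FFF_FFFF]};
        }
        cp_len : coverpoint len { bins short_burst = {[1:4]}; bins long_burst = {[5:16]}; }
        kind_x_len : cross cp_kind, cp_len;  // e.g. have long WRITE bursts been hit?
      endgroup

      bus_cov cov = new();

      // Call cov.sample() each time a transaction is driven; at the end of the
      // run, $get_coverage() or cov.get_inst_coverage() gives the measurable
      // metric referred to above.
    endmodule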

But eSilicon’s Bhasker said there are problems with this approach. “You never know when verification is complete, and that’s one of the challenges. You’re really time-bounded. Otherwise you can go on verifying for as long as you can. The problem is you cannot bound it, you cannot say, ‘Can you test the chip in one year?’ There’s only so much you can do in one year. I doubt if there is any chip out there that does not have a bug in it. The question is what is the probability that a customer will notice that bug.”

To achieve the highest overall verification productivity, Cadence prescribes a solution that includes emulation, acceleration, simulation and FPGA prototyping tools, as well as an ‘enterprise manager’ that helps the engineering team create a verification plan and then serves as the window into the metrics that the tools produce in order to analyze them.

“Verification is a case of doing just enough. If you try to verify everything, it is an infinite process. It’s a case of, ‘Is it good enough? Is the risk contained enough?’ That risk of, ‘Is it good enough?’ is usually measured by coverage. Enterprise manager, or approaches like that, create plans and aggregate all the coverage and other metrics from all the tools, and that is what lets the engineering manager, the one who says it’s ready to release, make that call,” said Mitch Weaver, general manager of Cadence’s advanced verification solutions group.

And this fits into the company’s Time-to-Integration strategy, which includes the following tenets:

  1. Use IP that is integration-ready. It is pre-certified and is delivered with associated software, not just the hardware IP. It is pre-verified, comes with the related testbench and is ready for integration.
  2. Employ a disciplined, mixed-language-capable re-use methodology using UVM as the base, which is a standard verification methodology that promotes reuse.
  3. Use tools that are optimized for SoC verification.

Weaver noted that verification environments often include more lines of code than the design itself, so re-use of the verification environment is a huge deal.

Methodology is the foundation
SystemVerilog-based methodologies, including VMM, OVM and now UVM, are widely adopted and provide a uniform, holistic path for different engineers to stitch together verification environments, whether they are developing verification IP, building out a verification methodology, or mixing and matching verification components from an earlier project or from a derivative project.
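
To make the reuse point concrete, here is a minimal, hypothetical UVM skeleton in which a block-level agent is written once and instantiated unchanged, twice, inside an SoC-level environment; only the standard uvm_pkg base classes are assumed, and all other class names are invented for illustration.

    // Minimal UVM reuse sketch: a block-level agent instantiated unchanged
    // inside an SoC-level environment. All class names are hypothetical.
    import uvm_pkg::*;
    `include "uvm_macros.svh"

    // Block-level agent, written once for IP verification.
    class axi_agent extends uvm_agent;
      `uvm_component_utils(axi_agent)
      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction
    endclass

    // SoC-level environment reusing the same agent in two places.
    class soc_env extends uvm_env;
      `uvm_component_utils(soc_env)
      axi_agent cpu_if_agent;
      axi_agent dma_if_agent;

      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction

      virtual function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        cpu_if_agent = axi_agent::type_id::create("cpu_if_agent", this);
        dma_if_agent = axi_agent::type_id::create("dma_if_agent", this);
      endfunction
    endclass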

The move to smaller geometries is another driver for approaching verification in a holistic manner. Fortunately, many engineering teams today do spend the time up front to create proper and thorough verification plans, Synopsys’ Venkat observed. Also, as the number of IP blocks on an SoC continues to grow, engineering teams are using more verification IP. “If you look at an SoC today compared to similar designs about 10 years back, the number of protocols on a chip has gone up so much. In order to improve their overall productivity, we see people using a lot of verification IP.”

Bernard Murphy, CTO at Atrenta, pointed out that there are some interesting ideas emerging around assertion synthesis. One involves looking at the simulations that already have been run and building assertions based on that information. That covers not just what the behaviors should be, but also what behaviors have not been tested, so that using the design in an application that strays into a never-tested area can be flagged.
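
A hedged illustration of what such a mined property might look like when expressed as SystemVerilog assertions, assuming a hypothetical req/gnt handshake: one assertion captures behavior that always held in the simulated traces, and one cover property marks a behavior that was never exercised.

    // Hypothetical req/gnt handshake signals; both properties are what a
    // mining tool might emit after analyzing existing simulation traces.
    module mined_props(input logic clk, req, gnt);
      // Behavior that held in every observed trace: a request is granted
      // within four cycles.
      a_req_gnt : assert property (@(posedge clk) req |-> ##[1:4] gnt);

      // Behavior that was never observed: back-to-back requests. Leaving it
      // as a cover property flags the untested area explicitly.
      c_back_to_back : cover property (@(posedge clk) req ##1 req);
    endmodule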

“At the integration level, one thing people think about when they think about static checking is formal verification. Can I formally verify that things are hooked up correctly? And certainly that’s one method. Another method is path tracing—just walking along paths from the pin on one IP to another IP and asking if these things are connected correctly,” Murphy said.
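
A simple sketch of the bind-based flavor of such a check, with hypothetical instance and signal names; a formal tool can attempt to prove the assertion outright, while a simulator can at least monitor it.

    // Generic connectivity checker: the source pin and its intended
    // destination must always agree.
    module conn_check(input logic clk, input logic src, input logic dst);
      a_connected : assert property (@(posedge clk) src == dst)
        else $error("connectivity mismatch");
    endmodule

    // Hypothetical binding at the SoC top: check that uart0's interrupt
    // output really reaches interrupt-controller input 3.
    bind soc_top conn_check u_uart_irq_chk (
      .clk (clk),
      .src (uart0.irq_out),
      .dst (intc.irq_in[3])
    );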

The trick in all of this is that it’s not so much how you check it, it’s how you represent what you want to check, Murphy said. But as for the best way to represent what you want to check, there isn’t a well-defined answer to that right now. “Assertions are certainly part of the answer because they are at least well defined. The problem with assertions is they are primarily functional. They don’t have any architecture content, so if I mistakenly connected an interrupt to an error state reset then without doing simulation there’s no obvious way I could tell that that was a mistake because they’re both signals, they both toggle, they do stuff, but architecturally it doesn’t make sense. You can’t really capture that in assertions as we know them today. There needs to be some new ways of representing these things, and God forbid we should talk about standards for what it is we’ve got to represent.”

Raising the bar
To develop the most effective methodology for improving functional verification, the use cases for the device must be completely understood. But the approach to verification itself is also critical.

“At the IP level there is a certain comfort in terms of how to get reasonable productivity levels from functional verification,” according to Andy Meyer, verification architect at Mentor Graphics. “The bigger challenge for our customers is when they try to integrate those IPs into an SoC. What we try to bring to the party is the productivity, the measurement of that productivity, how to measure the results in a quantitative fashion, and the effort level–so you have a way of asking if you are in fact approaching it right, and how do you know when you’re improving it.”

“We are trying to encourage people away from the [thumb-in-the-air guesstimate] based on the last project and toward something that you can actually measure. That means the use and collection of metrics and the analysis of metrics—how am I doing, how many regressions did I run, how many bugs did I find per regression hour or per simulation cycle per engineer time? These are the kinds of things that give you at least some way of measuring and getting a feel for where you are, even though you never really have an accurate understanding of what’s happening in an SoC because it’s too big and too complex,” he continued.

Mentor calls this task verification management, which includes the collection of coverage data. “It really started with the focus on being able to rank and assess how well you are progressing in your verification through your coverage progression. Over the last few years it has grown into other areas as well, because verification management is not just the coverage, it’s the results you get–you have to actually analyze the results of the simulation,” explained Steven Bailey, marketing director at Mentor.

After the results are analyzed, they must be collected over a number of products to see how results are trending. It isn’t as simple as just coverage data, he pointed out. It must include metrics on the churn rate of the design code, as well as how well teams are progressing toward their coverage goals and how many verification cycles it is taking.

This kind of tracking helps engineering teams understand the number of verification cycles they need to perform, and project the compute capabilities that will be needed. “When they start tracking that, that’s also when they start noticing they need horsepower like emulation, and they start realizing that emulation is actually cheap compared to teracycles of verification. All of this gets brought together,” Bailey added.

Meyer believes this is the beginning of a trend toward looking at where results come from and where the effort was allocated; it’s those two together that give an actual measure of productivity.

One of his favorite examples is a company that employed this methodology for an entire category of testing and was able to determine, ‘What do we mean by results?’ One of the obvious results was, ‘Did we in fact ever see an RTL error show up in the last four generations of this product based on this entire category of stimulus?’ The answer in some cases was ‘no.’

“I think that’s where you begin to see the serious productivity gain of taking this approach…the point is if you don’t measure, you don’t know,” Meyer concluded.


