It’s not enough to have a verification methodology. Productivity must be measured to stay ahead of the game.
By Ann Steffora Mutschler
In this era of mammoth SoCs, whose verification is enormously complex, it’s not enough to have a methodology. Design and verification teams also need to measure their productivity to stay ahead of the curve.
The more sophisticated customers are measuring a lot of things, explained Steve Bailey, marketing director at Mentor Graphics, “and for them it’s not just measuring what they are accomplishing in the design. They are measuring their full use of their IT infrastructure—their server farms—and then asking, ‘For running petacycles of verification, what do we have to show for that?’”
Then they compare that data against functional coverage and the number of finite state machines interacting with each other. Verification teams are essentially composing individual state machines into distributed finite state machines. Some of these are explicit, such as power management, where there are system modes of operation, and many of them are implicit because of the way the various state machines interact.
“They interact through another state machine, which is the interconnect that is becoming more and more complex,” Bailey said. “They are trying to look, especially at the chip level, at exactly what they are accomplishing with the tests they are running. Just booting Linux doesn’t tell you a whole lot. When you boot Linux you’re basically exercising a lot of memory loads and stores, but you’re not really testing the functionality.”
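To see why those implicit interactions are so hard to pin down, consider a minimal Python sketch (the block names and states are hypothetical, chosen only for illustration) that composes a handful of independent state machines into the kind of distributed state machine Bailey describes. The combined state space grows multiplicatively, which is why a single chip-level scenario such as an OS boot exercises only a thin slice of it.

```python
# Toy illustration of the "distributed finite state machine" formed when
# independent blocks interact. Block names and states are hypothetical.
from itertools import product

block_states = {
    "power_mgmt":   ["off", "sleep", "active", "turbo"],
    "interconnect": ["idle", "arbitrating", "bursting", "stalled"],
    "dma":          ["idle", "fetch", "write_back"],
    "cpu_cluster":  ["wfi", "running", "exception"],
}

# Every combination of per-block states is one state of the composed machine.
composed = list(product(*block_states.values()))
individual = sum(len(states) for states in block_states.values())
print(f"{individual} individual states compose into {len(composed)} system states")
# -> 14 individual states compose into 144 system states (4 * 4 * 3 * 3).

# A single directed scenario, such as an OS boot, walks one path through
# this space, leaving most cross-block combinations unvisited.
```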
Michael Sanie, senior director of verification marketing at Synopsys, pointed out that people have tried to put a science around this. Coverage is used to make those measurements: as more verification is performed and tests are added, teams check coverage to see the result of what they’ve done. “There’s a lot of science that’s been put into it, but still there’s a lot of art that goes into it.”
At least part of the problem is the sheer number of blocks that need to be verified. “It’s the same thing you did in the past,” Sanie said. “You’re doing blocks and you look at coverage. What has really complicated things is when you put things together at the SoC level. Now you’re looking at the interrelation between blocks, and the complexity goes up a lot. Managing that and finding metrics for that is very difficult. Coverage is still used, but again, there’s only so much coverage you can do.”
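One concrete way to picture the jump in complexity Sanie describes is the difference between per-block coverage and cross-block coverage at the SoC level. The sketch below is only an illustration under assumed names—the bins and observations are hypothetical, and a real flow would read the simulator’s coverage database rather than hard-code results—but it shows how quickly the cross space outruns the individual ones.

```python
# Sketch of block-level vs. cross-block (SoC-level) coverage bookkeeping.
# Bin names and observations are hypothetical.
from itertools import product

blocks = {
    "mem_ctrl": ["read", "write", "refresh"],
    "pcie":     ["gen1", "gen2", "gen3"],
}

seen_block = {name: set() for name in blocks}   # per-block bins hit
seen_cross = set()                              # combinations hit together

# Pretend these (mem_ctrl, pcie) pairs were observed during regressions.
observations = [("read", "gen1"), ("write", "gen1"), ("read", "gen3")]
for mem_bin, pcie_bin in observations:
    seen_block["mem_ctrl"].add(mem_bin)
    seen_block["pcie"].add(pcie_bin)
    seen_cross.add((mem_bin, pcie_bin))

for name, bins in blocks.items():
    print(f"{name}: {len(seen_block[name])}/{len(bins)} bins hit")

total_cross = len(list(product(*blocks.values())))
print(f"cross-block: {len(seen_cross)}/{total_cross} combinations hit")
# Per-block numbers look healthy (2/3 each) long before the cross space
# (3/9 here) is anywhere near closed, and real SoCs have far more blocks.
```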
It also comes down to the metrics themselves—which ones are used and how they are used. There’s a lot of philosophy built in, Sanie said.
In a real-life example that shows the two extremes in how engineering teams approach verification, he said he visited a networking company in the morning and one of its competitors in the afternoon. “In the morning, the guys were saying, ‘We’re measuring coverage. We don’t believe in FPGA prototyping or emulation to write all of these complex testbenches, and we run coverage and measure that. We run a lot of regressions and our chips are great, we are very successful, we’re selling a lot of them, etc.’ In the afternoon, I go to the other company, they have the same type of product, same segment, but a different philosophy. They say, ‘We don’t believe in coverage. We create this very expensive FPGA board, we buy emulation and we run a lot of cycles through these, lots and lots of software. And then once we have a good feel for it, then we tape out.’ They are also very successful, they make a lot of money and they have good chips. These are the extremes. Most of the world is somewhere in between, but the reason that it really shocked me was that it happened back to back. Same day, same city, maybe a few miles apart.”
A lot of what worked in the past is going to carry forward, he said. There are a lot of things that people have learned over time that come into play when a methodology is being developed, and the companies with good verification architects are really leveraging that knowledge to make better chips and do better verification.
Clearly, as design issues have moved beyond testbenches and simulation, the focus has landed squarely on verification methodology and best practices that can carry teams to the next level of productivity, according to John Brennan, product manager for verification planning and management at Cadence.
Cadence calls this approach metric-driven verification, which it defines as the planning and management layer that encapsulates and embodies the whole approach to improving productivity. It addresses such issues as: “How do you get better visibility? How do you increase the level of abstraction so you’re not looking at low-level minutiae but at something you can control and apply to significant problem areas? And how do you get to verification closure faster?” he said.
This is applied from the IP level—from initial design creation—all the way up to the system level. Brennan noted the SoC level is still a bit of the wild, wild West.
As with other aspects of the design, verification and manufacturing flows, too much data is a challenge. “You end up with a lot of touch points because there’s so much data. Customers are overwhelmed by data and sometimes they can’t get out of their own way because there’s too much data being thrown at them. Part of the productivity gains are just being able to extract out all that crap and say, ‘Here’s exactly where I’m at on a block-by-block basis or a feature-by-feature basis or a person-by-person basis. Where am I having trouble? What do I need to throw some extra help onto?’” he explained.
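A minimal sketch of that kind of roll-up, assuming the raw regression and coverage data has already been parsed into records (the field names, values, and thresholds here are hypothetical), would condense everything into a per-block status line that flags where extra help is needed:

```python
# Condense raw regression/coverage data into a per-block status report.
# Field names, values, and thresholds are hypothetical.
from collections import defaultdict

records = [  # e.g., one entry per block per nightly regression
    {"block": "usb3",    "owner": "alice", "coverage": 0.91, "fails": 2},
    {"block": "usb3",    "owner": "alice", "coverage": 0.93, "fails": 0},
    {"block": "ddr_phy", "owner": "bob",   "coverage": 0.58, "fails": 7},
    {"block": "noc",     "owner": "carol", "coverage": 0.74, "fails": 1},
]

def roll_up(records, key):
    """Aggregate best coverage and total failures per key (block, owner, ...)."""
    summary = defaultdict(lambda: {"coverage": 0.0, "fails": 0})
    for rec in records:
        entry = summary[rec[key]]
        entry["coverage"] = max(entry["coverage"], rec["coverage"])
        entry["fails"] += rec["fails"]
    return summary

for block, stats in roll_up(records, "block").items():
    flag = "NEEDS HELP" if stats["coverage"] < 0.8 or stats["fails"] > 5 else "ok"
    print(f"{block:8s} coverage={stats['coverage']:.0%} fails={stats['fails']:2d} {flag}")
# The same roll-up keyed on "owner" gives the person-by-person view.
```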
Changing role for EDA vendors
When it comes to improving verification productivity, it can seem as if there are more questions than answers. And as the verification task grows more complex, the main challenge for tool providers is changing.
“As a vendor, the game is changing,” said Sanie. “It is necessary but not sufficient to have all the tools. What we’re seeing is that the focus of vendors needs to change from selling tools to solving problems.”
And because today’s SoCs are so complex, the biggest strides in improving verification productivity are likely to come from the engineering team looking at and analyzing what happened and then figuring out how to tweak what is already there to gain even greater coverage, Mentor’s Bailey concluded.