Continuous, Connected And Concurrent Verification

At least some verification is now required at every step of design through manufacturing, but the challenge is making it all work together.

By Ed Sperling
It’s a wonder that any electronic system works as intended, or that it continues to work months or years after it is sold. The reason: SoCs have become so complex that no verification coverage model is sufficient anymore, no methodology covers every aspect of verification, and no single tool or even collection of tools can catch every bug or prevent them from being there in the first place.

What that means is everyone—from the architect to the implementers to the integrators to the official verification team, and on to the manufacturers—is involved in verification on one level or another. Even the users are involved in this process. Fixing bugs in software after a device hits the market is a common occurrence these days, including some nasty bugs that have gained global media attention and should have been caught well before the SoC ever reached tapeout.

In his keynote speech at the Synopsys User Group this week in Santa Clara, Calif., Aart de Geus sounded the alarm.

“Debugging is the theme for the next five to six years at the signal level, the transaction level and the protocol level,” de Geus said, noting that the solutions will require root-cause analysis—including the ability to go backward—along with an increasing emphasis on formal technology and techniques to increase coverage. “This has to be brought together.”

That appears to be an understatement. The statistic most commonly cited, including by both Synopsys and Cadence, is that verification accounts for 70% of the NRE in a design. Mentor Graphics has put the estimate as high as 80%. But the reality is that there is no way to really check those numbers. And because it is becoming far more difficult to separate verification from every other task, any such figure is, at best, a good guess.

Even in large companies, where verification is a highly regimented part of the design flow, the timing for when to start continues to change.

“We’re doing the implementation verification earlier and earlier to make modifications,” said Leah Clark, associate technical director at Broadcom. “At 28nm a lot more of the design is embedded in the implementation than in the past.”

Broadcom isn’t alone in seeing this shift. Chipmakers everywhere are pushing verification in every direction.

“The more advanced SoC companies are doing this with virtual platforms,” said Janick Bergeron, verification fellow at Synopsys. “They’re starting at the architectural stage, and if they don’t get it right the focus is on their TLM and application and debug. That’s also why they’re making chips as programmable as possible.”

Connected and concurrent verification
At least part of the driver behind verification at every step of the flow is that it speeds up the overall design process. The best option is getting the design right the first time. Given the current state of tools and the rising complexity of chips, with a few engineering change orders thrown in, that’s impossible. The more realistic approach is to do at least some verification work before the design progresses very far. John Brennan, product director for verification planning and management, terms it “continuous verification.”

But continuous verification is also concurrent verification, because a flow is no longer one step followed by another. It’s multiple concurrent steps, such as power modeling and transaction-level modeling, followed by other concurrent steps, such as synthesis, place and route, and software development using virtual prototypes. Inside large companies working on complex chips, this concurrent design of hardware, software and the integration of IP has an element of verification associated with each stage because it’s much faster to do it that way.
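To make that concrete, here is a minimal sketch, in Python, of what launching verification stages concurrently and collecting their results in one place might look like. The stage names and the run_stage function are hypothetical placeholders, not any vendor’s actual flow or API.

# Minimal sketch of running verification stages concurrently (hypothetical names).
from concurrent.futures import ThreadPoolExecutor

def run_stage(name):
    # Placeholder: in practice this would launch a simulator, formal tool or
    # prototype run and return pass/fail plus coverage data.
    print(f"verifying: {name}")
    return (name, "pass")

# Stages that can be verified in parallel once the architecture is fixed.
concurrent_stages = [
    "power model checks",
    "transaction-level model checks",
    "synthesis equivalence checks",
    "software-on-virtual-prototype tests",
]

with ThreadPoolExecutor() as pool:
    results = dict(pool.map(run_stage, concurrent_stages))

# Every stage reports into one place, so a change in one area is visible to the rest.
print(results)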

“What we’re seeing is multi-specialist verification,” said Brennan. “People are specializing in verifying connectivity or subsystems or embedded software at the low level or high level. But for any given piece, how you coordinate that with everything else becomes a real challenge. We’re evolving into an era where verification planning and management is becoming a problem. Today, the tools are not sufficient to handle this. They’re not robust or multi-user friendly enough. The next generation solution will have to support a broader set of engines, metrics and connectivity.”

Most verification teams aren’t there yet. They still break things into smaller chunks to be able to deal with them more effectively. Block-level verification has frequently given way to subsystem or IP or software verification, but full system-level verification is non-existent.

“What’s changed here is the design complexity,” said Pranav Ashar, CTO of Real Intent. “Designs are getting huge. They’re growing exponentially, even without the addition of ESL abstractions. So what’s happening is companies are doing bigger designs with small teams, and the way it’s being dealt with is the verification obligations are being divided up into areas like CDC, power, timing constraints, DSP and time management.”

Ashar said that verification has been able to keep pace with this complexity so far, in large part because of greater use of static techniques. But there also are new things to check that weren’t a problem in the past, such as variable proximity effects, multiple modes and the impact of turning blocks on and off (or somewhere in between) on signal integrity, as well as the impact of new techniques such as dynamic voltage and frequency scaling, body biasing and near-threshold computing. The rapid rise of third-party IP and the need to integrate what essentially are black boxes only complicates things further.
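As a rough illustration of the kind of static check Ashar describes, the following Python sketch flags unsynchronized clock-domain crossings in a toy netlist. The data structure and flop names are invented for illustration; production CDC tools operate on full RTL and are far more sophisticated.

# Toy static clock-domain-crossing (CDC) check over an invented netlist model.
# Each flop records its clock domain, its fan-in flops, and whether it sits
# behind a synchronizer; a crossing without one gets flagged.
flops = {
    "tx_data": {"clock": "clk_a", "fanin": [],          "synchronizer": False},
    "rx_raw":  {"clock": "clk_b", "fanin": ["tx_data"], "synchronizer": False},
    "rx_sync": {"clock": "clk_b", "fanin": ["tx_data"], "synchronizer": True},
}

def find_unsafe_crossings(flops):
    unsafe = []
    for name, flop in flops.items():
        for src in flop["fanin"]:
            crosses_domain = flops[src]["clock"] != flop["clock"]
            if crosses_domain and not flop["synchronizer"]:
                unsafe.append((src, name))
    return unsafe

print(find_unsafe_crossings(flops))  # [('tx_data', 'rx_raw')]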

“Years ago it was pretty simple to verify a chip,” he said. “Now, the complexity is coming from integrating a large number of blocks. The tools have to be smart enough to explore.”

Future challenges
Tools also have to keep pace with shrinking market windows. Just being able to provide sufficient coverage and confidence that a design will work is no longer enough. That coverage has to be coupled with faster time to market, and the only way to speed up this process is to spread out the verification across the design and make sure changes in one area are reflected in another.
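One way to picture changes in one area being reflected in another is a dependency map that decides which verification jobs have to be re-run when a block is edited. The sketch below is a simplified, hypothetical illustration; the block and job names are invented and do not describe any commercial tool.

# Hypothetical map from design blocks to the verification jobs that depend on them.
depends_on = {
    "pcie_ctrl":  ["pcie_block_regress", "soc_integration_smoke", "power_intent_check"],
    "ddr_phy":    ["ddr_block_regress", "soc_integration_smoke"],
    "cpu_subsys": ["cpu_block_regress", "soc_integration_smoke", "sw_boot_on_prototype"],
}

def jobs_to_rerun(changed_blocks):
    # Re-run only the jobs affected by the edited blocks, keeping a stable order.
    jobs = []
    for block in changed_blocks:
        for job in depends_on.get(block, []):
            if job not in jobs:
                jobs.append(job)
    return jobs

print(jobs_to_rerun(["ddr_phy"]))  # ['ddr_block_regress', 'soc_integration_smoke']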

“Everything now has a verification component to it,” said Frank Schirrmeister, group director for product marketing of the System Development Suite at Cadence. “And everything is moving forward. But what do you do when the verification works on three engines but not the fourth? Which ones do you trust?”

He said there has been a big effort to mix and match these engines because chipmakers want an open platform. The Standard Co-Emulation Modeling Interface (SCE-MI) and the Unified Coverage Interoperability Standard (UCIS) are major steps in that direction. But management of all of these pieces, all at the same time and from different teams—sometimes in different time zones—is a big challenge, and one that may never be fully solved. Everything has to be synchronized repeatedly, and changes have to be reflected across every aspect of the combined design and verification process.
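Schirrmeister’s question about trusting three engines but not the fourth can be pictured, very roughly, as a results roll-up like the one below. This is not UCIS itself, just a toy aggregation with invented engine names and numbers, sketched to show why disagreement between engines has to be surfaced rather than averaged away.

# Toy roll-up of one feature's results across several engines (names and numbers invented).
engine_results = {
    "simulation": {"passed": True,  "coverage": 0.92},
    "emulation":  {"passed": True,  "coverage": 0.88},
    "formal":     {"passed": True,  "coverage": 1.00},
    "prototype":  {"passed": False, "coverage": 0.75},
}

verdicts = [r["passed"] for r in engine_results.values()]
if all(verdicts):
    status = "clean on all engines"
elif any(verdicts):
    # Some engines agree and others do not: flag it instead of trusting any single engine.
    status = "engines disagree: investigate before signing off"
else:
    status = "failing everywhere"

# Take the most conservative (lowest) coverage figure rather than the most flattering one.
merged_coverage = min(r["coverage"] for r in engine_results.values())
print(status, merged_coverage)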

Programmability and software certainly help smooth out these problems. So does extra circuitry or margin. But as designs continue to grow and feature sizes continue to shrink, all of this eats into performance, power and area, which runs counter to everything that has driven progress in semiconductors. That puts even more pressure on verification and design teams to get it right faster, to root out bugs more quickly, and to communicate up and down the flow, and it greatly increases the stakes for EDA vendors to solve these issues.


