Will It Really Work?

Third-party IP, increased complexity, parasitic effects and software are making the verification challenge more difficult. Can this be fixed?


By Ed Sperling
Estimates of the effort required to verify a complex SoC still hover around 70% of total non-recurring engineering (NRE) cost, but with more unknowns and more things to verify it is becoming harder to keep that number from growing.

Verification has always been described as an unbounded problem. You can always verify more, and just knowing when to call it quits is something of an art. Moreover, with software now thrown into the mix, engineering teams have to decide what’s good enough for tapeout and what can be fixed once the chip is already in the market.

Making that decision is becoming tougher, though. The amount that has to be verified is less clear, in part because of the growing amount of outside IP now included in designs. Of the 70% to 90% of IP that is used or re-used in a complex SoC, less than half is commercially purchased; the remainder is developed internally, often for previous projects. The share of commercially developed IP is expected to rise over the next few years, though, effectively creating a series of black boxes that the companies integrating them did not build themselves.

While much of this commercial IP will be sold as pre-verified, what works in one design may not work exactly the same way in another. That's particularly true across different process technologies. A general-purpose process built for speed may cause IP to behave completely differently than a process optimized for low power. And in stacked die, two known good die may no longer work once they are packaged together.

“The new world is a broader supply chain for chips,” said Mike Gianfagna, vice president of marketing at Atrenta. “There is a need for better visibility in the supply chain, including everything from early predictions to yield to the track record of the supplier. There are multiple points of failure. For data management, planning, thermal and mechanical analysis you need fundamental enabling technologies. At the same time there is a re-invention of the industry into smaller, more niche markets.”

Knowing what to verify
Just knowing how much to verify is a challenge. Taher Madrawala, vice president of engineering at Open-Silicon, said this is not a simple decision because the files involved in verification are becoming enormous. That means what gets left out of the verification process may be as strategic as what gets included, because all of it affects time to market. Verification budgets remain tight, from both a manpower and an equipment standpoint.

“On top of that you don’t always have access to all of the functionality,” Madrawala said. “That’s especially true in 3D stacks or system-in-package. You don’t always have access to increased functionality because some things are encapsulated inside the package.”

He noted that from an NRE perspective, the percentage spent on verification has remained constant from 90nm down to 45nm. That has been helped by more standards, including modeling of IP in C or C++, increased use of emulation, and the ability to run tests on multiprocessing servers. But with compressed schedules and greater complexity, those numbers can change.
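
As a rough illustration of what modeling IP in C or C++ looks like in practice, the sketch below shows an untimed behavioral model of a hypothetical FIFO block with a quick self-check. The class, its depth and the test are invented for illustration only; in a real flow a model like this would serve as the golden reference that RTL simulation or emulation is compared against.

```cpp
// Minimal sketch of an untimed C++ model of an IP block; the FifoModel class,
// its depth and the checks below are illustrative, not from any vendor's IP.
#include <cstdint>
#include <deque>
#include <iostream>

// Behavioral model of a simple FIFO block. In a real flow a model like this
// acts as the golden reference for RTL; here it is only exercised by a
// quick directed self-check.
class FifoModel {
public:
    explicit FifoModel(std::size_t depth) : depth_(depth) {}

    bool write(uint32_t data) {            // returns false when the FIFO is full
        if (data_.size() >= depth_) return false;
        data_.push_back(data);
        return true;
    }
    bool read(uint32_t& out) {             // returns false when the FIFO is empty
        if (data_.empty()) return false;
        out = data_.front();
        data_.pop_front();
        return true;
    }

private:
    std::size_t depth_;
    std::deque<uint32_t> data_;
};

int main() {
    FifoModel model(4);
    int errors = 0;

    // Fill to the advertised depth; a fifth write must be rejected.
    for (uint32_t i = 0; i < 4; ++i) if (!model.write(i)) ++errors;
    if (model.write(99)) ++errors;

    // Drain and confirm first-in, first-out ordering.
    for (uint32_t i = 0; i < 4; ++i) {
        uint32_t v = 0;
        if (!model.read(v) || v != i) ++errors;
    }

    std::cout << "errors: " << errors << "\n";
    return errors == 0 ? 0 : 1;
}
```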

There also are differences of opinion about what works, what will continue to work, and what needs to change in the future, both from a physical and a functional standpoint. Tool vendors insist that most of the capabilities needed for verification already exist, even though they will have to be sped up through better modeling at a higher level of abstraction and a greater reliance on multiprocessing servers. They also say that verification teams need to make better use of the tools that are already available.

Chipmakers generally acknowledge the need for better training on the tools, but they say the growth in complexity will create the need for additional testbenches. In particular, there will need to be new tools for partitioning designs and verifying the results once stacked die become more mainstream.

“As complexity grows, integration will be the issue,” said Prasad Subramaniam, vice president of design technology at eSilicon. “You will need specialists for each part of the design. People’s specialties will get narrower. And then you will need people to manage more specialties. The generalists, who will be the architects and higher-level engineers, will define the problem. Once they have made the decision about what to do, then the specialists will take over. But there will also be a lot of feedback. This will be an iterative process. There will be meetings where you need to reconcile differences and make adjustments. There will be a lot of collaboration, and verification will start from the get-go.”

Verification strategies
There are two main approaches to verification. One is to verify the pieces; the other is to verify the system. Both are necessary, but the order in which they are done as part of a verification flow can vary greatly, even for derivative chips.

Samta Bansal, 3D IC lead and silicon realization digital project manager at Cadence, said that in stacked die an incremental approach will be needed to do verification. “If you analyze it all together it overcomplicates the process,” Bansal said. “For one thing, not all of the pieces will be available at the same time. A more feasible approach will be to verify each chip in a stack as part of a verification flow, then focus on the microbumps, TSVs, LVS and DRC for alignment and ultimately create a single file.”
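
For readers who want a concrete picture of that ordering, the sketch below captures it in plain C++. The stack contents and check functions are hypothetical placeholders standing in for real per-die and inter-die signoff runs, not any vendor's flow; the point is only the incremental sequence of verifying each die, then the connections between them, then rolling everything into a single result.

```cpp
// Flow-ordering sketch only: the check functions below are hypothetical
// placeholders standing in for real per-die and inter-die signoff runs.
#include <iostream>
#include <string>
#include <vector>

struct CheckResult { std::string name; bool passed; };

// Placeholder: in a real flow this would launch functional, DRC and LVS
// runs for a single die in the stack.
CheckResult verify_die(const std::string& die) {
    return {"die:" + die, true};
}

// Placeholder: inter-die checks (microbump/TSV alignment) run only after
// every die in the stack has been verified on its own.
CheckResult verify_interface(const std::string& lower, const std::string& upper) {
    return {"interface:" + lower + "<->" + upper, true};
}

int main() {
    std::vector<std::string> stack = {"logic_die", "memory_die"};  // illustrative stack
    std::vector<CheckResult> results;

    // Step 1: verify each chip in the stack independently.
    for (const auto& die : stack) results.push_back(verify_die(die));

    // Step 2: verify the connections between adjacent dies.
    for (std::size_t i = 0; i + 1 < stack.size(); ++i)
        results.push_back(verify_interface(stack[i], stack[i + 1]));

    // Step 3: merge everything into a single signoff summary ("single file").
    bool all_passed = true;
    for (const auto& r : results) {
        std::cout << r.name << ": " << (r.passed ? "pass" : "fail") << "\n";
        all_passed = all_passed && r.passed;
    }
    return all_passed ? 0 : 1;
}
```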

That’s not so simple, of course. In stacked die there are physical verification issues that can complicate functional verification, notably stress and power. And there is now software to consider in the mix, with the trend toward software making up an increasing portion of the stack.

“Functional and physical verification are both important but independent tasks,” said George Zafiropoulos, vice president of solutions marketing at Synopsys. “In both cases, verification is moving up in system complexity. We’ve gone from blocks to lots of blocks to lots of processes and I/O, and there is more stuff coming. Complex interface IP at the periphery of the chip has gone up by an order of magnitude. The design team can’t verify everything, though.”

Zafiropoulos said design teams used to think there was not enough time to do verification at the block level. He said that putting 100 blocks together increases the challenge exponentially.

“A lot of this is bottom up,” he said. “You build sub-circuits up to the chip and then in multiple chips. You can’t afford to have errors inside these blocks. But you also need to change the scope of what has to be done. In the past, one engineer could comprehend everything on a chip. Now we’ve gone from the guy who knows everything about a chip to teams that are in different companies and maybe different countries.”

The result, he said, will be a gradual change in three areas. First, more and more engineers will do verification, rather than just specific verification teams. Second, all engineers will become more software savvy. And third, new kinds of tools will be introduced, including formal approaches.
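
As a toy illustration of the difference in spirit between simulation and formal approaches, the sketch below exhaustively enumerates every input pattern of a small, hypothetical arbiter and checks a set of properties across all of them. Real formal tools prove properties over RTL state space symbolically rather than by brute force, so this only conveys the idea of covering all cases instead of sampling them.

```cpp
// Toy illustration of exhaustive property checking (the spirit of formal
// verification) versus sampled simulation. The arbiter below is hypothetical.
#include <bitset>
#include <cstdint>
#include <iostream>

// Fixed-priority arbiter: grants the lowest-numbered requester, one-hot output.
uint8_t arbiter(uint8_t req) {
    for (int i = 0; i < 4; ++i)
        if (req & (1u << i)) return static_cast<uint8_t>(1u << i);
    return 0;
}

int main() {
    int violations = 0;
    // Exhaustively enumerate all 2^4 request patterns rather than sampling them.
    for (uint8_t req = 0; req < 16; ++req) {
        uint8_t grant = arbiter(req);
        bool at_most_one = std::bitset<8>(grant).count() <= 1;   // one-hot grant
        bool only_if_requested = (grant & ~req) == 0;            // no spurious grant
        bool grant_if_any = (req == 0) || (grant != 0);          // someone is served
        if (!(at_most_one && only_if_requested && grant_if_any)) ++violations;
    }
    std::cout << "property violations: " << violations << "\n";
    return violations == 0 ? 0 : 1;
}
```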


