The interplay and interdependencies of technology are forcing companies to rethink how they design chips.
By Ed Sperling
The move toward concurrent design is escalating at advanced nodes, driven more by the need to ensure that everything works than by the efficiency and time-to-market goals behind previous efforts.
While the concept has surfaced before in limited doses—engineers and EDA companies have been talking about doing more things simultaneously for the better part of a decade—there are some interesting new twists at the 32nm/28nm process nodes. Much more needs to be done concurrently. Instead of just co-design of software and hardware, or trying to create a tighter bridge between place-and-route and synthesis, the list now includes everything from packaging to PCB layout and the functional verification of all of it.
Moreover, the interdependencies between the various pieces of the design flow are rendering concurrent design mandatory rather than optional. It can be the difference between a chip that works and one that doesn’t, and one that is competitive vs. one that offers limited or no gains in performance and power reduction. A tweak in the layout, for example, can affect power and signal integrity. Adjusting the power can affect performance. And any of these pieces can affect timing closure. Add in power islands, multiple voltage supplies and multicore processors and the tradeoffs become even more numerous.
“There are interrelationships between everything these days,” said Mike Gianfagna, vice president of marketing at Atrenta. “At 0.13 microns we were separating timing closure, routing density and package design. There were no cross-dependencies. Now, any change in routing density can mess up timing and any change in timing can mess up packaging.”
Joining forces
At the core of all of these issues are timing, power and signal integrity. Signal integrity has surfaced before, but it has been relatively quiet for the better part of a decade because of the ability to guard-band at older nodes. Adding margins into a 28nm chip means that performance might actually be lower than at 40nm and it might use more power. As a result, the issue has to be solved head-on.
“Signal integrity used to be a secondary issue,” said Sudhakar Jilla, director of marketing for place and route at Mentor Graphics. “Now it’s a primary effect. Timing and power were the most important. Now it’s timing, power and signal integrity. At 28nm there are a dozen or more corners, and each effect behaves differently at different corners.”
RTL-to-silicon power optimization is the second leg of this three-legged stool. Industry sources say Synopsys and Cadence are both working on this, while companies such as Calypto and Sequence (now part of Apache Design Solutions) have had tools in the market for years. But what used to be a nice-to-have feature will now be required, which explains Apache’s acquisition and the development work underway at the big EDA companies.
And finally, all of this has to be verified to make sure there is timing closure and fewer errors—an increasingly complex task considering that the view of the system is now well beyond just the SoC.
“From a signal standpoint, the IC guys are struggling just to provide a good signal from one component to another,” said Zhen Mu, product marketing manager for PCB/package analysis tools at Mentor Graphics. “You now have SerDes approaching 10 gigabits per second. And when you look at the package and the board, anything in the path of the signal could contribute to the degradation of the signal. So you have to make sure you model with more details. But the more you model, the more complicated the software.”
The effect on design teams
One of the most pronounced effects of these changes may be on the design teams themselves. Moving packaging design to lower-cost labor centers may make sense at 40nm, but at 32nm/28nm those savings are outweighed by the benefits of in-house communication.
“We have two major design centers, one in the United States and one in Romania, and now we have packaging in both,” said Javier DeLaCruz, semiconductor packaging director at eSilicon. “The packaging group is treated as part of the design team. Our solution has been not to use software automation for this. When you look at the big EDA players they’ve put together tools that can communicate without people talking to each other. What we’re finding is that people speaking together makes the most sense.”
It also requires more input at the front end of the design cycle, an area where packaging, functional verification, power management and software have rarely even been considered.
“When packaging input starts at the very beginning, by design it’s already downstream compatible,” DeLaCruz said. “We’ve also combined some other operations. The ball pad for the PCB implementation is held by our packaging team. Normally that’s a separate team.”
But he noted that some of these combinations and communication strategies are size-dependent. A massive IDM, for example, will need to set up a formal communications infrastructure, because the volume of back-and-forth necessary in concurrent design would otherwise overwhelm informal channels. That can be just as effective if the right people are passing along the information, but it also may limit the cross-learning that will become necessary to design chips at advanced nodes.
Conclusions
While large IDMs held a major advantage in the past, there are questions about whether that strategy will continue to offer the same kinds of advantages at advanced nodes. With all the pieces now being considered further up in the design cycle—many actually at the architectural level—all companies will have to compete on the same footing.
“With DFM everyone realized that manufacturing variability was not an afterthought,” said Gianfagna. “You used to be able to guard-band that, but now if you guard-band you have a non-competitive part. This is a huge problem, and we hear from customers that by the time they get to the back end they can’t loop there because it’s too complex. You need to balance timing closure, packaging, thermal issues.”
And the issues will become even more complex as designs move to 22nm and 3D stacks, where interrelationships between chips in a stack will begin showing effects up and down the various layers of the design. The only way to deal with this is to understand more of the pieces up front, which will create the need for far more architects, increased communication at all levels, and a lot more headaches that need to be solved on a white board even before the design process gets underway.