Rethinking Models

Power, software and packaging—separately and together—are raising some interesting questions about how to make architectural models better.

By Ed Sperling

The move to future process nodes will require more than just new materials, better layouts and higher levels of abstraction. It also will require a fundamental re-thinking of how high-level architectural models are created and what’s included in them.

While the Transaction-Level Modeling (TLM) 2.0 standard has provided significant improvements for everything from layout to verification, power is creating a whole new layer of issues that is affecting everything from signal integrity to noise in the package and on the printed circuit board. At 45nm, many of those issues were manageable. At 22nm, with heavy IP re-use, multiple voltages and increased software-hardware co-development and verification, the existing models are falling far short of what’s needed for developing future SoCs.

“The most important thing to understand about power is that it’s global,” said Cary Chin, director of technical marketing for low-power solutions at Synopsys. “We’re used to working with hierarchical designs but power affects everything. If you look at logic, one corner of a chip doesn’t affect another corner. Timing is path oriented. Congestion is local. But when you think about power, if you dissipate one microamp it will affect everything else on the chip. All that matters is the total. And that goes for the package and the board.”
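
A toy illustration of the contrast Chin draws, sketched in Python with made-up block-level numbers: worst-case timing is decided path by path, so only the slowest path matters locally, but power adds up across every block, and that total is what the shared supply, package and board actually see.

```python
# Hypothetical block-level numbers, for illustration only.
blocks = {
    "cpu":    {"path_delay_ns": 0.95, "current_mA": 450},
    "gpu":    {"path_delay_ns": 0.80, "current_mA": 900},
    "memctl": {"path_delay_ns": 1.10, "current_mA": 200},
    "io":     {"path_delay_ns": 0.60, "current_mA": 150},
}

# Timing is path-oriented: only the slowest path limits the clock.
critical_path_ns = max(b["path_delay_ns"] for b in blocks.values())

# Power is global: every block's draw adds up on the shared supply,
# and the total is what the package and board have to deliver.
total_current_A = sum(b["current_mA"] for b in blocks.values()) / 1000

vdd = 1.0         # volts (assumed)
r_shared = 0.010  # ohms of shared supply-path resistance (assumed)
ir_drop_mV = total_current_A * r_shared * 1000  # droop seen by every block

print(f"critical path : {critical_path_ns:.2f} ns (a local property)")
print(f"total current : {total_current_A:.2f} A -> {vdd * total_current_A:.2f} W (a global property)")
print(f"shared IR drop: {ir_drop_mV:.1f} mV, felt by every block on the rail")
```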

Surprise packages
For most chip architects, the package has been, at best, a nagging thought that has to be dealt with down the line. In fact, about the only real consideration has been the actual cost of the package, which can affect the overall profit equation for a new chip. That’s about to change in a big way.

Rather than just an afterthought, the package is becoming a big up-front consideration that can affect the overall functionality of a chip. That means it needs to be modeled more effectively, and it needs to be included in the initial architectural models. The cause of this change: power.

“On the package side over the last four or five years, the electrical properties of the package have not changed much,” said Dian Yang, senior vice president of product management at Apache Design Solutions. “But at advanced nodes the power of the chip is increasing. The original package impedance can handle 1 watt or even 10 watts, but now it’s 20 watts or more.”

There’s more to that equation, too. Impedance is increasing as the voltage is lowered to handle lower power, while frequencies are actually increasing from 100MHz into the gigahertz range. The result is more noise, which degrades signal integrity in a chip.
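
A rough worked example, in Python with purely illustrative numbers (not from the article's sources), of why a package that was fine at 1 watt struggles at 20: current scales as P/V, so a fixed package impedance produces far more supply droop just as the allowed ripple shrinks with the lower rail.

```python
# Illustrative numbers only: an older 1 W / 1.8 V part vs. a newer 20 W / 1.0 V
# part driven through the same package resistance. At gigahertz switching rates
# the package inductance adds still more noise on top of this resistive drop.
r_pkg = 0.005  # ohms of package resistance, assumed unchanged across generations

for label, power_W, vdd in [("old part", 1.0, 1.8), ("new part", 20.0, 1.0)]:
    current_A = power_W / vdd             # I = P / V
    droop_mV  = current_A * r_pkg * 1000  # resistive supply droop
    budget_mV = vdd * 0.05 * 1000         # a typical 5% ripple budget
    print(f"{label}: {current_A:5.1f} A, droop {droop_mV:6.1f} mV, budget {budget_mV:3.0f} mV")
```

With these assumed values the older part uses a few millivolts of a 90mV budget, while the newer one blows through a 50mV budget on resistive drop alone.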

“You need to consider the package itself up front and how many layers you’ll need for power and signals,” Yang said. “Those decisions also have to be done at the very early stages of the chip, not later on.”

Package models also need to be beefed up to deal with these issues. Yang said the current power models are crude. They only deal with low frequencies and narrow bands rather than the broadband communication now required in all designs.

That leaves a lot of guesswork in the design of the package itself. As a result, many companies overdesign, which increases margin and results in lower performance and higher cost, or they underdesign and cause functional problems in the chip.
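
To see why a single low-frequency number misleads, here is a minimal sketch in Python of a lumped package model with assumed component values: series resistance and inductance through the package, with a decoupling capacitor at the die. Swept across frequency, the impedance the die actually sees swings by nearly two orders of magnitude and peaks at the parallel resonance, exactly the kind of broadband behavior a narrowband model never captures.

```python
import math

# Assumed lumped values, for illustration only.
R_PKG   = 0.002    # ohms, package resistance
L_PKG   = 0.3e-9   # henries, package loop inductance
C_DECAP = 100e-9   # farads, die-side decoupling capacitance
ESR     = 0.010    # ohms, series resistance of the decap

def pdn_impedance(freq_hz):
    """Impedance seen from the die: package branch in parallel with the decap."""
    w = 2 * math.pi * freq_hz
    z_pkg = complex(R_PKG, w * L_PKG)           # R + jwL toward the board
    z_cap = complex(ESR, -1.0 / (w * C_DECAP))  # ESR - j/(wC) of the decap
    return (z_pkg * z_cap) / (z_pkg + z_cap)    # parallel combination

for f in [1e6, 10e6, 30e6, 100e6, 300e6, 1e9]:
    print(f"{f/1e6:6.0f} MHz: |Z| = {abs(pdn_impedance(f))*1000:7.2f} mOhm")
```

With these values the impedance sits near 3 milliohms at 1MHz, peaks around a quarter of an ohm at the roughly 30MHz resonance, and settles back to about 10 milliohms at 1GHz.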

All a-board
The package is only one point in a problem that extends even further, out to the printed circuit board. For most chip architects the board is even less of a consideration than the package, but power has changed that considerably. What happens on the chip used to be largely separate from what happens on the PCB, but re-use of IP and parts is creating havoc almost everywhere.

“When we had a 3.3-volt part and a noise tolerance of 5% voltage ripple, that was a lot of noise that could be dealt with,” said Steve McKinney, business development manager for Mentor Graphics’ Board Systems Division. “Today we have 1-volt parts with the same 5% tolerance, so the absolute ripple budget is much tighter at the same time the current has gone up. The result is we’re seeing much greater demand on the PCB and the package to provide adequate decoupling.”
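
McKinney’s arithmetic can be made concrete with the usual target-impedance rule of thumb, Ztarget = (Vdd × ripple fraction) / transient current, shown here as a short Python sketch with assumed current numbers; the allowed power-delivery impedance collapses when the rail drops from 3.3V to 1V while the current rises.

```python
def target_impedance_mohm(vdd, ripple_frac, transient_current_A):
    """Common PDN rule of thumb: allowed ripple voltage / transient current."""
    return vdd * ripple_frac / transient_current_A * 1000

# Transient current figures are assumed, for illustration only.
old = target_impedance_mohm(vdd=3.3, ripple_frac=0.05, transient_current_A=2.0)
new = target_impedance_mohm(vdd=1.0, ripple_frac=0.05, transient_current_A=10.0)

print(f"3.3 V part: ripple budget 165 mV -> Ztarget ~ {old:.1f} mOhm")
print(f"1.0 V part: ripple budget  50 mV -> Ztarget ~ {new:.1f} mOhm")
```

Even with the same 5% tolerance, the board and package have to present an impedance roughly 16 times lower, which is what drives the demand for more decoupling with less room to provide it.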

Re-use of existing components and IP only exacerbates the situation. While it makes no sense to re-design everything on a 28nm SoC, which can include hundreds of millions of transistors along with interfaces, complex logic, memory and I/O, not everything works perfectly the second or third time around.

“In the past, you had a whole plane running at 3.3 volts,” said McKinney. “Now you may have five different voltages with less copper, fewer channels for current and less opportunity for decoupling on the board. Everything on the PCB is being driven by the low-power requirements on the IC.”

Power trips
The problem with that scenario is that power isn’t being effectively modeled up front, either. Estimates are inaccurate at best, despite the need to account for power as early as possible in the design cycle.

“Power estimation has two components, characterization of the device and system activity,” said Pete Hardee, solutions marketing manager at Cadence. “With SystemC TLM models you’re abstracting data being simulated and at the transaction level you’re abstracting signals. But you have to bear in mind it’s an abstraction, and abstractions aren’t particularly accurate.”

That problem grows as IP is added into the equation. Qi Wang, senior architect at Cadence Design Systems, said that everyone agrees power needs to be estimated more accurately earlier in the cycle, but so far that isn’t possible. “The paradox is that you cannot get an accurate estimate of power. IP power models are lacking, and with complex IP you cannot use a single number to model power. Plus, at the system level you have to deal with software. Power depends on the profile of activities. A different application requires different power.”
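
A minimal sketch of the point Wang is making, in Python with invented characterization numbers: even a simple IP block needs a per-mode power model rather than a single figure, and the average power that comes out depends entirely on the activity profile the software imposes.

```python
# Invented characterization data: power of one IP block in each operating mode (mW).
ip_power_mW = {"off": 0.0, "sleep": 0.2, "idle": 5.0, "active": 120.0}

def average_power_mW(profile):
    """profile maps each mode to the fraction of time spent there, from a software run."""
    return sum(ip_power_mW[mode] * frac for mode, frac in profile.items())

# Two hypothetical applications exercising the same hardware very differently.
video_playback  = {"active": 0.70, "idle": 0.20, "sleep": 0.10}
background_sync = {"active": 0.05, "idle": 0.15, "sleep": 0.80}

print(f"video playback : {average_power_mW(video_playback):6.1f} mW")
print(f"background sync: {average_power_mW(background_sync):6.1f} mW")
```

Any single number attached to this IP would be badly wrong for one workload or the other, which is exactly why the power model has to travel with an activity profile.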

Conclusions
The need to plan for power in the architectural stage isn’t a new concept. What is new is an understanding of how many other pieces have to be built into that planning.

So far, there is no single model that can address all of these issues, from the IP, the packaging and the physical SoC, to the software that runs on it and the board across which all of this works. TLM 2.0 was a major step forward in raising the abstraction level of SoC architectures, which can help all the way through to the verification stage, but the next phase of development will need to incorporate more than just transactions.

This is complex stuff, for sure, and it only grows more complex at each node with re-use and as chips are stacked or connected with interposers. All complex designs begin with a good blueprint, but at future nodes that blueprint will need to be re-thought, revamped and expanded to include factors that didn’t need to be considered in earlier versions.


