‘What If’ In 3D

Stacking die increases the number of variables, the risk, and the benefit of understanding tradeoffs early in the design cycle.


By Ed Sperling
‘What if’ questions have become standard across multiple pieces of the design chain for any SoC, but the number of those questions is multiplying at each new process node.

When the industry begins moving to 2.5D and 3D stacking over the next couple of years, the number of tradeoffs will likely go from overwhelming to unmanageable. That prospect is setting a number of new efforts in motion across semiconductor design.

So what’s behind these changes? One factor is simply the sheer number of tradeoffs that stem from SoCs with hundreds of millions of gates, I/O both on and off the chip, a growing amount of re-usable and commercially available IP, and an overall effort by chipmakers to eke more performance out of a design while using less power.

“There is no killer app anymore,” said Fred Cohen, director of the OMAP wireless ecosystem at Texas Instruments. “There are 20 of them. It’s impossible for an SoC vendor or an OEM to master all the complexity and innovation. So you need to engage with all these companies, and you need to do it almost on a daily basis.”

In the design space, that also means better modeling and more granularity for exploration. If there is an explosion of ‘what if’ dependencies, more of them need to be resolved at the architectural level, and they need to be resolved more effectively.

“You synthesize from C or C++ down to RTL because you can simulate so many alternatives at a high level that the chances for stumbling on the optimal solution are higher,” said Wally Rhines, chairman and CEO of Mentor Graphics.
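
To make that concrete, the snippet below is a tool-neutral sketch of the sort of C function high-level synthesis starts from. The function, its tap count and its data widths are hypothetical, and the hardware alternatives described in the comments are illustrative rather than the output of any particular tool.

```c
#include <stdint.h>

#define TAPS 8  /* hypothetical filter size, chosen only for illustration */

/* An HLS tool can turn this one loop into many different RTL implementations --
 * fully unrolled with TAPS multipliers, pipelined, or folded onto a single
 * shared multiplier -- and the designer compares the area, power and latency
 * of each alternative before committing to RTL. */
int32_t fir_sample(const int16_t coeff[TAPS], const int16_t window[TAPS])
{
    int32_t acc = 0;
    for (int i = 0; i < TAPS; i++)
        acc += (int32_t)coeff[i] * window[i];
    return acc;
}
```

Because the source stays the same while the implementation varies, each ‘what if’ becomes a new synthesis run rather than a hand rewrite, which is what makes it practical to sweep many alternatives at a high level.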

That’s been a driver behind the recent uptick in high-level synthesis and ESL modeling, but so far the tools are incomplete. All of the major EDA players say they are now working on solutions in this area.

“Models are becoming more refined,” said Frank Schirrmeister, director of product marketing for system-level solutions at Synopsys. “From there you need appropriate characterization of those models. Right now people make assumptions and then they refine those models once the data is available. The big question is yet another ‘What if.’ What happens if we move to a smaller technology node or 3D? Does it all become unmanageable?”

Lots of questions, not so many answers
Similar questions are being asked all over the semiconductor industry. In the FPGA arena, where this kind of advanced modeling has largely been absent, companies are beginning to look at the possibility of similar design methodologies for 2.5D and 3D stacks.

“This kind of performance analysis and estimation and power analysis and estimation has never been seen in the FPGA world,” said Ivo Bolsens, chief technology officer at Xilinx. “You know the platform and understand the delay and the power consumption from building a component because you have the silicon with an FPGA, but the tools are not there to do the optimization.”

What’s important here is the ability to do architectural exploration with enough granularity to be useful. Power modeling, for example, tends to rely on high-level estimates up front, but that data is notoriously inaccurate and can cause problems further downstream in the design flow. Exploratory or “pathfinding” tools have been talked about for years, but the amount of complexity has grown at advanced nodes to the point where there is now a real market need.
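
As a rough illustration of what early pathfinding looks like in practice, the sketch below weighs a handful of candidate implementations against spreadsheet-level estimates. The candidate names, the power, latency and cost figures, and the scoring weights are all hypothetical placeholders rather than characterized data.

```c
#include <stdio.h>

/* Early architectural "what if" exploration: coarse estimates per candidate. */
struct candidate {
    const char *name;
    double avg_power_mw;   /* estimated average power       */
    double mem_latency_ns; /* estimated memory access delay */
    double unit_cost;      /* normalized unit cost          */
};

/* Lower is better; the weights encode which use case matters most
 * and are assumptions picked purely for this example. */
static double score(const struct candidate *c)
{
    return 1.0 * c->avg_power_mw + 2.0 * c->mem_latency_ns + 500.0 * c->unit_cost;
}

int main(void)
{
    const struct candidate candidates[] = {
        {"2D baseline",     850.0, 38.0, 1.00},
        {"2.5D, wide I/O",  700.0, 22.0, 1.25},
        {"3D, DRAM on top", 640.0, 12.0, 1.45},
    };
    for (int i = 0; i < 3; i++)
        printf("%-16s score %.0f (lower is better)\n",
               candidates[i].name, score(&candidates[i]));
    return 0;
}
```

The point is not the specific numbers but the structure: when the early estimates are roughly trustworthy, an architect can rank options in seconds; when they are not, the errors propagate downstream exactly as described above.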

Fig. 1: A 2.5D design. Source: Xilinx

Location, location, location
One of the interesting things about stacking die is a fundamental shift in what gets placed where—not so much from a thermal standpoint but from a connectivity standpoint.

This is more than a layout issue. It’s a complex series of tradeoffs between power and performance, taking into account proximity effects, heat, electromagnetic interference and even electrostatic discharge. In 3D, understanding which blocks talk with which blocks, when, and by what means isn’t a simple decision. It’s filled with tradeoffs. A processor that used to communicate over a standard bus through a general-purpose operating system may now include a dozen heterogeneous cores, each with a specific function and running at different voltages. Two or more may be working at the same time, or there may be only one important processor core running with an accelerator.

“In 3D the prime real estate will be in the center of the chip,” said Kurt Shuler, director of marketing at Arteris. “In 2D the most efficient way to route signals was at the edge of the chip. That creates a different problem set and a different set of tradeoffs. Models now have to be topographically aware, too.”

There is increasing attention being paid to network-on-chip approaches as a way to deal with these tradeoffs more easily, for example by substituting one piece of IP for another and reconfiguring the network around it. “A company’s first chip might be 50% new IP and the derivatives might be 20% new IP. Within the system, the only lever for change is the interconnect. In 3D that’s a similar situation. To accommodate different subsystems you need a quick way to do that.”

Business, but not as usual
Two known good die stacked on top of each other may still produce two bad die. That kind of risk cannot be overstated with stacked die, because the cost of getting it wrong is far too high, both in terms of NRE and missed market windows. And the risk compounds with the number of layers being stacked together.
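
The arithmetic behind that compounding risk is simple: every die in the stack must be genuinely good and every bonding step must succeed, so the yields multiply. The sketch below assumes a 99% effective known-good-die yield and a 98% per-bond yield purely for illustration.

```c
#include <stdio.h>
#include <math.h>

/* Stack yield: every die must be good and every bonding step must succeed.
 * The yield figures passed in are illustrative assumptions. */
static double stack_yield(int layers, double effective_die_yield, double bond_yield)
{
    return pow(effective_die_yield, layers) * pow(bond_yield, layers - 1);
}

int main(void)
{
    const int layer_counts[] = {2, 3, 4, 8};
    for (int i = 0; i < 4; i++) {
        int n = layer_counts[i];
        printf("%d layers: %.1f%% of stacks good\n",
               n, 100.0 * stack_yield(n, 0.99, 0.98));
    }
    return 0;
}
```

Even with those optimistic assumptions, an eight-layer stack throws away roughly one finished stack in five.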

“What 3D stacking adds is more ways of hurting or helping ourselves,” said Drew Wingard, chief technology officer at Sonics. “Once you winnow the choices based on economics, then you have to figure out what are the high-level user benefits. You may get more features, save power or energy and optimize on cost. You can see that in Apple’s approach to SoCs. Some of their products have been done with components that are behind the competition, but their focus on the user is so strong that they always hit it right.”

Those kinds of tradeoffs occur in power and performance, as well. “In general-purpose computing, you make everything run as fast as possible,” Wingard said. “It’s all about not having a choke point. SoCs are usually a set of primary use cases. They’re much more complex, and 3D will become even more complex.”

Conclusion
What emerges from discussions with engineers across the IC design space is a recognition that more has to be done up front, with greater accuracy and with greater ease. Rather than reducing the role of design engineers to the mechanics of place and route, this shift points to a much more complex set of tradeoffs that can be explored with an automated set of tools.

“This is all about re-thinking the ‘what-if,’” said Mike Gianfagna, vice president of marketing at Atrenta. “It’s a bridge between what you propose for an on-chip interconnect, for example, and then you try out ideas. The analysis will last a few hours or, worst case, overnight. Do you need a wide TSV or a narrow TSV? Which is the best answer? What’s the projected cost, and what are the parasitic routing delays?”
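
To give a sense of what one of those overnight runs has to weigh, the sketch below makes a first-order, Elmore-style delay comparison between a hypothetical wide and narrow TSV. The driver resistance, TSV resistance and capacitance, load, and relative cost figures are all illustrative assumptions, not extracted silicon data.

```c
#include <stdio.h>

/* First-order Elmore estimate of a driver-through-TSV path.
 * All values are illustrative assumptions, not extracted silicon data. */
static double tsv_path_delay_ps(double r_driver_ohm, double r_tsv_ohm,
                                double c_tsv_ff, double c_load_ff)
{
    double c_tsv  = c_tsv_ff  * 1e-15;
    double c_load = c_load_ff * 1e-15;
    /* The driver charges all downstream capacitance; the TSV resistance
     * only sees the load on the far die. */
    double delay_s = 0.69 * (r_driver_ohm * (c_tsv + c_load) + r_tsv_ohm * c_load);
    return delay_s * 1e12;
}

int main(void)
{
    /* A wider via has lower resistance but more sidewall capacitance and
     * consumes more routing area, hence the higher assumed cost factor. */
    printf("wide TSV:   %.1f ps, relative cost 1.3\n",
           tsv_path_delay_ps(1000.0, 0.05, 80.0, 10.0));
    printf("narrow TSV: %.1f ps, relative cost 1.0\n",
           tsv_path_delay_ps(1000.0, 0.25, 35.0, 10.0));
    return 0;
}
```

With these particular numbers the narrow TSV is actually faster, because the driver resistance dominates and the wider via mostly adds capacitance, which is exactly why the answer has to be computed case by case rather than assumed.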

Gianfagna noted that the goal is an optimized configuration, which may mean cheaper and lower performance for some uses, and more expensive and higher performance for others. But all of this will need to be tried out and tried again on a vast scale, potentially opening new opportunities for tools, for a new type of expertise, and for far more interesting IC designs.


