Dealing With New Bottlenecks

Thinner wires, routing congestion and cautious tool development are making advanced designs more difficult, while hardware security is emerging as a new concern.


By Ed Sperling
While the number of options for improving efficiency and performance in designs continues to increase, the number of challenges in getting chips at advanced process nodes out the door is increasing, too.

Thinner wires, routing congestion, more power domains, IP integration and lithography issues are conspiring to make design much more difficult than in the past. So why aren’t there more tools? The answer is, there are. EDA vendors continue to make huge investments as a percentage of revenue, and the number of available options and avenues for solving problems has never been greater. But there is a limit to what can be done, even by the largest R&D groups, and they are faced with the reality that not all of these approaches will pan out. Which materials, processes and architectures ultimately will succeed is, at best, a very expensive gamble.

Routing issues
The first pain point that most designers encounter is in the mass of wires bundled around SoC memories. Those wires are getting thinner at each successive process node—and in each new IP block or subsystem they buy—and the number and length of the wires are increasing to the point where the problems are now becoming critical.

“We have long talked about wire delay dominating cell delay as you move to lower geometries,” said Ashwini Mulgaonkar, product marketing director for physical design at Synopsys. “At emerging nodes (20nm and below) the problem is only amplified. At each successive process node, the scaling of wire width and thickness results in a continuous reduction of cross sectional area. At 14nm, we see as much as 60X difference in resistance compared to the 65nm technology node.”

In fact, resistance becomes such a dominant factor at those nodes that it requires new methods to achieve design closure.

“At these nodes, the emergence of telescoping metal stacks exhibits a 20X to 50X difference in wire resistance between the lower layers of metal and the upper layers,” said Mulgaonkar. “Such heterogeneous layer resistance profiles demand intelligent routing control. Routing critical nets on the upper layers to take advantage of the reduced resistance, and adding non-default rules such as double- or triple-width wire widening or spacing from pre-route through post-route optimization, are needed techniques for timing closure. Route-based pre-route optimization is another enabler, where you can benefit from a congestion-aware global route when doing high fan-out net synthesis and identifying the critical nets that are routed using upper layers of metal. Resistance-based route and post-route optimization, timing-driven and crosstalk-driven routing are other key enablers to timing closure.”
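
The payoff from promoting a critical net to a thick upper layer can be sketched with the distributed-RC (Elmore) delay approximation. The per-mm resistance and capacitance values below are assumed ballpark numbers, not foundry data:

```python
# Sketch: why promoting a critical net to thick upper-layer metal helps
# timing. The distributed-RC (Elmore) delay of a wire is roughly
# 0.5 * R_wire * C_wire, so a ~20X drop in per-mm resistance cuts wire
# delay almost proportionally. All per-mm values are assumed ballparks.

def elmore_wire_delay(r_per_mm, c_per_mm, length_mm):
    """Approximate distributed-RC delay of a uniform wire, in seconds."""
    r_total = r_per_mm * length_mm
    c_total = c_per_mm * length_mm
    return 0.5 * r_total * c_total

C_PER_MM = 0.2e-12  # ~0.2 pF per mm of routed wire, a common ballpark

# Same 1mm net on a thin lower layer vs. a thick upper layer (assumed R values)
lower = elmore_wire_delay(r_per_mm=8000.0, c_per_mm=C_PER_MM, length_mm=1.0)
upper = elmore_wire_delay(r_per_mm=400.0, c_per_mm=C_PER_MM, length_mm=1.0)

print(f"lower-layer wire delay: {lower * 1e12:.0f} ps")  # 800 ps
print(f"upper-layer wire delay: {upper * 1e12:.0f} ps")  # 40 ps
```

A non-default rule that doubles the wire width works on the same principle, roughly halving per-mm resistance at the cost of some added capacitance and routing track usage.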

The bottom line: Because more of the delay is determined by wires, it’s essential to utilize intelligent wire control. It isn’t just about wire resistance, either. It’s also drive resistance, which increases almost in proportion to the scaling factor. That in turn leads to weaker drive strengths and less robust circuitry.
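
The resistance trend behind all of this follows from R = ρL/A. A back-of-the-envelope sketch, using assumed round-number wire dimensions rather than actual design rules, shows how shrinking both width and thickness inflates resistance quadratically:

```python
# Why wire resistance explodes as geometries shrink: R = rho * L / (w * t),
# so shrinking width and thickness together inflates resistance per unit
# length quadratically. Dimensions below are assumed round numbers for
# illustration, not actual foundry design-rule values.

RHO_CU = 1.7e-8  # bulk copper resistivity, ohm-meters (ignores liners/scattering)

def wire_resistance_per_mm(width_nm, thickness_nm):
    """Resistance of 1mm of wire with the given cross-section, in ohms."""
    area_m2 = (width_nm * 1e-9) * (thickness_nm * 1e-9)
    return RHO_CU * 1e-3 / area_m2

r_65nm = wire_resistance_per_mm(width_nm=100.0, thickness_nm=200.0)  # "65nm-class"
r_14nm = wire_resistance_per_mm(width_nm=32.0, thickness_nm=60.0)    # "14nm-class"

print(f"65nm-class wire: {r_65nm:.0f} ohm/mm")   # 850 ohm/mm
print(f"14nm-class wire: {r_14nm:.0f} ohm/mm")   # 8854 ohm/mm
print(f"ratio: {r_14nm / r_65nm:.1f}X")          # 10.4X
```

Geometry alone accounts for roughly an order of magnitude here; the larger figures quoted for real processes additionally reflect resistivity increases from surface scattering and barrier layers in very narrow copper lines.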

More unknowns
And it’s not just about routing. The complexity in advanced chips, driven in part by power concerns and the need to conserve every last milliwatt to extend battery life, has evolved into one of the biggest technology races in history. Dark silicon, once considered an exotic future problem, has become the norm. Multiple power domains are a given. But so are multiple cores, complex software, and characterization gaps in IP.

“From the point of view of RTL and verification, there is more integration of functionality on a chip,” said Oz Levia, vice president of marketing and business development at Jasper Design Automation. “The convergence of things at the system level now matters at the SoC level. So low power is not just about clock gating, anymore. We’re seeing triple-digit numbers of domains with power sequencing. The tools are struggling to keep up. We see that with simulation and even emulation.”

While the top-line battle is for power and performance, the complexity required to improve energy efficiency and maintain or improve performance also has created some unforeseen ripples. One of those areas is hardware security—a problem that has been exacerbated by the need to add more features, lower power and improve performance of those features.

“There are no specific questions that can be asked for security and no guarantee that someone can’t override access,” said Levia. “And it’s not just a functional point of view. There’s also overclocking and overheating. This is also a byproduct of integrating more on a chip. And it’s happening at older nodes, where companies are making more aggressive use of their own and others’ IP. From the block level it looks as if nothing has changed, but under the hood there are a lot of masters and slaves, power optimization and security issues. And then there is the added complexity of more and more software.”

Caution on the tools side
From a tools perspective, this should seem like an obvious opportunity. The problem is that complexity has created way too many opportunities, and not all of them will pan out. While the most advanced customers are demanding solutions, the cost of developing new tools is daunting—and the timing is uncertain.

“There are a lot of new ideas that are promising, but they’re expensive,” said Qi Wang, technical marketing director at Cadence. “Everyone always tries to squeeze the last bit out of older technologies, so you cannot move too early. Test chips involve a huge investment. The result is that you cannot get fully ready for new things on your own. In my opinion, once the industry is closer to the end of a technology, you see more and more collaboration of the ecosystem so they can share the cost.”

This is complicated by the fact that some of the new development work is now happening at older process nodes. Techniques such as back biasing are being used in new ways. STMicroelectronics, for example, has used it in conjunction with fully depleted silicon-on-insulator at 28nm, even though the technique had largely been abandoned at older nodes. The same thing is happening with mixed-signal designs as mainstream work moves to 65nm and 40nm and power becomes more of a concern. Lowering Vdd to save power works at older nodes, for instance, but not at the most advanced ones. This is particularly true in the automotive market, where high operating temperatures make power savings essential.

“This isn’t always forward-looking anymore,” said Wang. “You have to exploit every angle, particularly as the lines between analog and digital get blurred.”

Mixing and matching
Nowhere is this more true than in discussions about stacked die. While the semiconductor industry is still waiting for a big chip company to do for stacked die what Intel has done for finFETs, stacking remains a serious contender for solving many of the problems in chip design today. It simplifies routing congestion with either TSVs or interposers, overcomes wire resistance by shortening distances and increasing the size of the pipes, and it allows chipmakers to mix and match IP—particularly analog IP—from a variety of process nodes.

“The primary motivation is memory access,” said Bernard Murphy, CTO at Atrenta. “There is not as much need to put logic on logic, or logic by logic. But the real challenge is a standard way of integrating all of this stuff. The standardization of the technology and acceptance is still challenging.”

The timetable remains frustrating to many proponents of this approach. But that same frustration is evident at the most advanced nodes, as well. So far, the only company commercially producing finFETs is Intel at 22nm, and that process doesn’t use double patterning. Combining finFETs with multi-patterning remains a challenge, from both a technology and a business perspective.

“It is not all a free ride,” said Synopsys’ Mulgaonkar. “FinFETs can deliver faster timing and lower leakage power, but these devices are generally stronger with higher gate capacitances, typically 2X to 3X that of their planar equivalents, and this can lead to higher dynamic power. A key challenge of the three-dimensional FinFET structure is self-heating effects. In planar transistors, the heat is more easily able to flow out of the active regions through the substrate and thus be efficiently dissipated. With the fins in FinFETs being condensed regions, heat has a more difficult time being dissipated, and this localized additional heat can result in an increase in the leakage current of the device. Also, there is some concern that due to negative bias temperature instability (NBTI) over time these devices can get slower, impacting the performance rating. An additional issue is that the localized heat within the transistor structure can increase the temperature of the overlying metal routing structures.”
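
The dynamic-power side of that tradeoff follows directly from P_dyn = αCV²f. A minimal sketch with assumed, illustrative values (not measured silicon data) shows how a 2.5X gate-capacitance increase can outweigh a modest supply-voltage reduction:

```python
# The dynamic-power side of the finFET tradeoff: P_dyn = alpha * C * V^2 * f.
# Even with a slightly lower supply voltage, a 2.5X capacitance increase
# can nearly double dynamic power. All values are assumed and illustrative.

def dynamic_power(alpha, c_switched, vdd, freq):
    """Dynamic switching power in watts (alpha = activity factor)."""
    return alpha * c_switched * vdd**2 * freq

# Hypothetical planar-like vs. finFET-like designs at 1GHz
planar = dynamic_power(alpha=0.1, c_switched=1.0e-9, vdd=0.9, freq=1.0e9)
finfet = dynamic_power(alpha=0.1, c_switched=2.5e-9, vdd=0.8, freq=1.0e9)

print(f"planar-like : {planar * 1e3:.0f} mW")  # 81 mW
print(f"finFET-like : {finfet * 1e3:.0f} mW")  # 160 mW
```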

That can significantly increase electromigration effects within wires. So for each step forward, there are consequences and new problems to solve. Complexity causes more complexity, and while solutions do exist, all of them carry a price tag of additional complexity, cost or both.
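
The temperature sensitivity behind that electromigration concern is commonly modeled with Black's equation, MTTF = A·J⁻ⁿ·exp(Ea/kT). A minimal sketch, with assumed textbook-style values for Ea and n (neither is from the article):

```python
import math

# Black's equation for electromigration lifetime:
#   MTTF = A * J**(-n) * exp(Ea / (k * T))
# Taking the ratio against a reference condition cancels the constant A.
# Ea (activation energy) and n (current-density exponent) are assumed
# textbook-style values, not figures from the article.

K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def relative_mttf(t_kelvin, t_ref_kelvin, ea_ev=0.9, j_ratio=1.0, n=2.0):
    """EM lifetime at (T, J) relative to the reference condition."""
    thermal = math.exp(ea_ev / K_B_EV * (1.0 / t_kelvin - 1.0 / t_ref_kelvin))
    return (j_ratio ** -n) * thermal

# 20 degrees of localized self-heating above a 105C (378K) reference
# cuts the expected electromigration lifetime to roughly a quarter:
print(f"{relative_mttf(398.0, 378.0):.2f}")
```

The exponential temperature term is why a few tens of degrees of localized heating in the wires over a finFET can matter far more than the same change in current density.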