Moving to the next process nodes is creating new opportunities and a whole new set of challenges.
SoCs using 16nm and 14nm finFETs are expected to begin rolling out next year using a 20nm back-end-of-line process. While the initial performance and power numbers are looking very promising, the challenges of designing and building these complex chips are daunting—and there are more problems on the way.
First, the good news. Initial results from foundries show a 150% improvement in performance from 28nm to 14nm finFETs, in roughly half the area. FinFETs also allow companies to reduce the supply voltage, and because leakage is contained, clock frequency can be boosted with less thermal impact, so performance can be cranked up further. Moreover, industry sources say the initial numbers at 10nm show another 2X density improvement with a 35% to 40% power improvement.
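Taken at face value, those figures compound quickly. Here is a back-of-envelope sketch in Python, using only the numbers cited above; the actual gains vary by foundry, library and design.

```python
# Back-of-envelope compounding of the scaling figures cited above.
# Illustrative numbers only; actual results vary by foundry and design.

area_28nm = 1.00                 # normalize 28nm die area to 1.0
area_14nm = area_28nm * 0.5      # 28nm -> 14nm: "roughly half the area"
area_10nm = area_14nm / 2.0      # 14nm -> 10nm: "another 2X density improvement"

power_10nm_low, power_10nm_high = 1 - 0.40, 1 - 0.35   # "35% to 40% power improvement"

print(f"Relative area: 28nm = {area_28nm:.2f}, 14nm = {area_14nm:.2f}, 10nm = {area_10nm:.2f}")
print(f"10nm power relative to 14nm: {power_10nm_low:.2f}x to {power_10nm_high:.2f}x")
```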
But increased power density, increased complexity and a proliferation of data carry their own price tag. The actual production costs don’t increase significantly, and EDA tools have been updated to automate many of the new steps—everything from double patterning to the handling of parasitics and unknowns (Xs). Still, there is much more to consider for each new design. Effects that used to stand alone, such as electromigration in a wire, now need to be considered in the context of the interconnect and other components of an SoC.
“In the past, you didn’t need to know the electrical impact of layout and place and route,” said David White, engineering group director at Cadence. “The old notion was that if you were concerned about electromigration, you could just widen the wires. But electromigration rules are not specific to part of the wire because it’s not just the wire. It’s what surrounds it. So you can’t wait until the layout is done to electrically verify it. You have to verify the electrical decisions as you make them. You have to optimize and tune as you go. You make changes to the routing, hit re-simulation, tune and then optimize for performance.”
This is a big shift from 28nm to 20/16/14nm, said White. “We’re seeing more and more designers doing early layout and supervising the layout.”
EDA vendors have different terminology for how they deal with these issues—in-design and in-circuit, for example—but they are addressing the same challenges. Because of the increased density of power, wires and other components, everything has to be considered in relation to everything else. That is a big challenge with complex SoCs, and while it has always been done to some extent, it has to be done much more accurately at the new process nodes. Making it even more difficult, it’s hard to set rules or establish consistent methodologies with so much variation between designs.
“People are still getting tapeouts done, and we’ve seen several test chips already,” said Mary Ann White, product marketing director for the Galaxy Implementation Platform at Synopsys. “They need a little more handholding than usual, but the marketplace is adapting.”
What’s changed, she said, is the recognition that “everything is surrounded by neighbors.” The number of design rules has more than doubled.
Reliability issues
This explosion in data, coupled with more rules and the need to understand everything in context, inevitably leads to discussions about reliability. From the verification side, the hot topic is coverage. On the design side, it’s physical effects. And for chipmakers, it’s all of the above and more.
“Electromigration is one of the really big issues we’re dealing with, and we’re seeing for the first time that customers are mandating electromigration as a signoff metric,” said Arvind Shanmugavel, director of application engineering at Apache Design Inc. “In the past you could get by with average current for this. Now, the foundries are mandating peak current checks. There has been a big change from 40 to 28nm. On the other side, you’ve got the signal. Electromigration is bidirectional. The way to deal with that is to simulate with vectors in a true transient approach. For that you need accurate numbers. You can’t estimate it anymore.”
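The shift Shanmugavel describes, from average-current to peak-current signoff, is easy to illustrate: a signal that spends most of its time near zero can pass an average-current check while its transient spikes blow through a peak limit. The sketch below is a toy example in Python; the waveform and both limits are hypothetical placeholders, not foundry rules.

```python
# Toy example: an average-current EM check can pass while a peak-current check fails.
# The waveform and both limits are hypothetical placeholders, not foundry rules.

import math

def signal_current(t_ns):
    """Toy transient current (mA) on a signal net: short bidirectional spikes."""
    return 2.5 * math.sin(2 * math.pi * t_ns / 10.0) ** 9   # sharp, mostly near-zero pulses

samples = [signal_current(t * 0.1) for t in range(1000)]    # 100 ns at 0.1 ns steps

i_avg = sum(abs(i) for i in samples) / len(samples)         # average current magnitude
i_peak = max(abs(i) for i in samples)                       # worst-case instantaneous current

AVG_LIMIT_MA = 1.0     # hypothetical average-current limit
PEAK_LIMIT_MA = 2.0    # hypothetical peak-current limit

print(f"average = {i_avg:.2f} mA -> {'PASS' if i_avg <= AVG_LIMIT_MA else 'FAIL'} against the average limit")
print(f"peak    = {i_peak:.2f} mA -> {'PASS' if i_peak <= PEAK_LIMIT_MA else 'FAIL'} against the peak limit")
```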
This need for granularity and better accuracy is echoing across the entire design flow—but with the ability to choose exactly where to apply it. Reliability in a complex SoC is a relative term. Because not all functions are critical, not everything has to be fully optimized, and not every use case can be understood on the design side. But the flip side is that it also can’t be overdesigned, because at advanced nodes any excess circuitry can affect the overall power and performance of a chip. That makes for some tricky tradeoffs.
“You need accurate table lookup and geometry information,” said Shanmugavel. “So for power grid electromigration rules, that requires both. You also need to track electrostatic discharge, which can shut down an entire chip, and you need to be able to keep electromagnetic interference in a particular zone. That all needs to be simulated.”
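One way to picture the “table lookup and geometry information” requirement is an EM limit table keyed by wire width and temperature, with the checker picking a conservative entry for each segment. The sketch below is purely illustrative; the widths, temperatures and current limits are invented, not taken from any foundry deck.

```python
# Minimal sketch of a geometry- and temperature-dependent EM limit lookup.
# All widths, temperatures and current limits below are invented placeholders.

# (wire width in um, junction temperature in C) -> max allowed current in mA
EM_LIMIT_TABLE = {
    (0.05, 105): 0.8,
    (0.05, 125): 0.6,
    (0.10, 105): 1.7,
    (0.10, 125): 1.3,
}

def em_limit(width_um, temp_c):
    """Use the largest tabulated width <= the wire width and the smallest
    tabulated temperature >= the operating temperature (conservative on both axes)."""
    widths = sorted({w for w, _ in EM_LIMIT_TABLE})
    temps = sorted({t for _, t in EM_LIMIT_TABLE})
    w_key = max((w for w in widths if w <= width_um), default=None)
    t_key = min((t for t in temps if t >= temp_c), default=None)
    if w_key is None or t_key is None:
        raise ValueError("geometry/temperature outside table range")
    return EM_LIMIT_TABLE[(w_key, t_key)]

def check_segment(width_um, temp_c, current_ma):
    limit = em_limit(width_um, temp_c)
    status = "PASS" if current_ma <= limit else "FAIL"
    return f"{width_um:.2f}um wire at {temp_c}C carrying {current_ma:.2f} mA: {status} (limit {limit} mA)"

print(check_segment(0.10, 110, 1.0))   # wider wire passes
print(check_segment(0.05, 120, 1.0))   # narrower, hotter wire fails the same current
```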
Wires and interconnects
So what’s the verdict on advanced nodes? In some respects, the tools and the migration to finFETs and 16/14nm are much simpler and cleaner than expected. In other respects, it runs squarely into the laws of physics. The resistance in wires and interconnects hasn’t changed, quantum effects are still looming over the industry—particularly after 10nm—and III-V materials to improve the mobility of electrons are still in the research phase, along with carbon nanotubes, graphene, silicon photonics and new types of memory. The big question is when they will emerge from that research phase with enough predictability and low enough cost to be useful—a balance that EUV lithography was supposed to have worked out three process nodes ago, which is why engineers now have to rely on multipatterning.
“Electromigration, power delivery and resistance all go together,” said Greg Yeric, senior principal engineer at ARM. “Then you add in vias, and it’s getting painful with via resistance at 16nm. At 10nm it gets worse. Wide I/O and 3D-IC are among the biggest knobs we have to turn to deal with this. Hopefully the ecosystem will be ready for it by 10nm.”
The alternative, said Yeric, is a cost that isn’t apparent—adding more metal to cells or limiting structures on an SoC.
“There is a lot of opportunity in managing electromigration at the physical level,” he noted. “This is in the state that statistical design was in eight years ago. There’s a lot of overdesign and guard-banding.”
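Yeric’s underlying arithmetic is the familiar R = ρL/(W·T): as width and thickness shrink, wire resistance climbs, and via resistance stacks on top of it. The toy calculation below uses made-up dimensions, resistivity and via resistances simply to show how the same route gets electrically worse at a smaller geometry.

```python
# Toy illustration of how wire and via resistance compound as dimensions shrink.
# Dimensions, resistivity and via resistances are made-up placeholders, not PDK data.

RHO_CU = 1.9e-8     # ohm*m; effective copper resistivity (in reality it rises in narrow wires)

def wire_resistance(length_um, width_nm, thickness_nm, rho=RHO_CU):
    """R = rho * L / (W * T) for a rectangular wire cross-section."""
    area_m2 = (width_nm * 1e-9) * (thickness_nm * 1e-9)
    return rho * (length_um * 1e-6) / area_m2

# The same 10 um route drawn at two hypothetical metal geometries, with two vias each.
geometries = {
    "16nm-class metal": (32, 60, 15.0, 2),   # width nm, thickness nm, ohm per via, via count
    "10nm-class metal": (22, 40, 30.0, 2),
}

for node, (w_nm, t_nm, r_via, n_vias) in geometries.items():
    r_wire = wire_resistance(10.0, w_nm, t_nm)
    r_total = r_wire + n_vias * r_via
    ir_drop_mv = r_total * 0.1e-3 * 1e3      # voltage drop at 0.1 mA, in millivolts
    print(f"{node}: wire {r_wire:.0f} ohm + vias {n_vias * r_via:.0f} ohm "
          f"-> {ir_drop_mv:.1f} mV drop at 0.1 mA")
```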
Change of direction
Guard-banding and thicker wires have been common answers to electromigration, too.
“Previously, you would say that if you have any space you would just double the vias and put in DFM, first for redundancy and second to improve your electromigration, reduce the resistance, or both,” said Carey Robertson, director of product marketing at Mentor Graphics. “It’s the same thing in terms of wire spreading or wire fattening. You would do this as a post-processing step. Now the design implementation tools are putting in wider wires. To alleviate electromigration you either reduce the current or widen the wires.”
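The “reduce the current or widen the wires” tradeoff follows directly from current density, J = I/(W·T): for a fixed current, a wider wire lowers J. A minimal sketch with invented numbers:

```python
# Minimal sketch: widening a wire lowers current density J = I / (W * T).
# The current, geometry and the limit below are invented placeholders.

J_LIMIT_MA_PER_UM2 = 15.0        # hypothetical EM current-density limit

def current_density(i_ma, width_um, thickness_um):
    return i_ma / (width_um * thickness_um)

i_ma, thickness_um = 0.05, 0.06  # 50 uA through a 60nm-thick wire

for width_um in (0.04, 0.08, 0.12):
    j = current_density(i_ma, width_um, thickness_um)
    verdict = "OK" if j <= J_LIMIT_MA_PER_UM2 else "EM violation"
    print(f"W = {width_um:.2f} um: J = {j:.1f} mA/um^2 -> {verdict}")
```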
Because simply widening wires is no longer always an option, more data needs to be processed, and solutions need to be more accurate—and also more flexible. In the end, design teams will need to rethink many parts of their internal flows and make adjustments that won’t necessarily migrate easily from one process node to the next. Complexity is now reaching beyond the tools and into the way they’re being used, and the impact of that has yet to be assessed.