DFM Challenges Abound Below 20nm

With added complexity and increasing cycle time, the challenges are mounting for SoC designs below 20nm. These issues are also driving the need for more highly skilled engineers.


By Ann Steffora Mutschler
As semiconductor design teams struggle to wring the last few percentage points of die shrink from a technology node, much of their ability to do so rests on the EDA tools.

From place and route through DFM checks—essentially, everything that happens before the design is sent to the fab or foundry—it all must be tightly integrated with the manufacturing process so it correctly reflects what the process will print.

“We’ve pushed the limits to keep it manufacturable and that’s about it, and we want to get the very best die size from a technology point of view,” said Subi Kengeri, vice president of advanced technology architecture in the office of the CTO at GlobalFoundries. “But if the place and route and the EDA tools are going to be so inefficient that all the technology value does not get translated to the SoC level value then you’ve failed. Who cares how capable your technology is by itself, if it does not get translated to SoC-level product value?”

At 20nm, the most obvious impact of the required double patterning on DFM is cycle time. Design teams must build in additional cycle time because some foundries (e.g., TSMC) perform a gray-level check, which means they don’t require designers to decompose the layout themselves, explained Manoj Chacko, product marketing director at Cadence. Other foundries require their customers to decompose the layout and then run the DFM checks.

“Decomposition is not a simple thing, meaning there are a lot of complexities here—the cycle time at the signoff DFM checks, the runtimes are going to take longer,” Chacko said. “When we talk specifically about DFM, like for example, a litho verification check is going to be more complex. Even today at 20nm with the litho verification check, the layout has to be decomposed, then the litho checks have to be run, and it takes quite a lot of time. When we talk about triple and quadruple splitting there’s definitely a direct impact there. The most obvious one is if we propagate this up into the design chain, it’s really getting more complex because the routers have to do more than double—three colors, four colors. Then you add one more level of complexity to that.”

To make matters worse, there are two flavors of double patterning. One is litho-etch litho-etch (LELE); the other is self-aligned double patterning (SADP) or spacer patterning.

“What if, say, at 10nm the foundry says, ‘for these two layers, we’re going to use litho-etch, litho-etch, for another two layers we’re going to use self-aligned double patterning.’ Imagine the complexity now. These kinds of things are within the realms of possibility,” Chacko said.

David Abercrombie, advanced physical verification methodology program manager at Mentor Graphics, confirmed that for triple and quadruple patterning there is already work being done. “We already have people using code and testing it out.”

He noted that the term DFM means ‘design for manufacturing,’ and in essence that’s all multi-patterning is. “You have to design it in a way that it can be manufactured. In particular there’s this new constraint that in order to recreate the shapes that you want on the mask at the dimensions we’re talking about you have to constrain the layout in a way that the shapes can be placed on separate masks, so it inherently is a DFM requirement because it’s a design constraint driven by a manufacturing requirement making it able to be manufactured. Whether it’s double, triple or SADP just depends on the manufacturing tricks, so to speak.”
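The decomposition constraint Abercrombie describes is commonly modeled as graph coloring: shapes spaced closer than the same-mask minimum must land on different masks. As a rough illustration (not any foundry's or vendor's actual algorithm), a minimal sketch of LELE decomposition as two-coloring might look like this, where an odd cycle in the conflict graph means no legal two-mask split exists:

```python
# Hypothetical sketch: LELE decomposition as graph 2-coloring.
# Shapes whose spacing is below the same-mask minimum "conflict" and
# must go on different masks; an odd cycle in the conflict graph
# means no legal 2-coloring exists (a double-patterning violation).
from collections import deque

def decompose(num_shapes, conflicts):
    """conflicts: pairs (a, b) of shapes too close for one mask.
    Returns a list of mask assignments (0/1), or None if uncolorable."""
    adj = {i: [] for i in range(num_shapes)}
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)
    color = [None] * num_shapes
    for start in range(num_shapes):
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:  # BFS, alternating colors across each conflict edge
            node = queue.popleft()
            for nbr in adj[node]:
                if color[nbr] is None:
                    color[nbr] = 1 - color[node]
                    queue.append(nbr)
                elif color[nbr] == color[node]:
                    return None  # odd cycle: not decomposable onto two masks
    return color

# Three mutually conflicting shapes cannot be split onto two masks:
print(decompose(3, [(0, 1), (1, 2), (0, 2)]))  # None
# A simple chain of conflicts can (one valid answer is [0, 1, 0]):
print(decompose(3, [(0, 1), (1, 2)]))
```

Triple or quadruple patterning generalizes this to three or four colors, where (unlike the two-color case) even deciding colorability becomes much harder, which is one reason the routers themselves must become multi-patterning aware.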

Those approaches affect other DFM technologies, though, such as hotspot protection.

“Now it’s not as simple as it was, because for a given layer you’re not printing once, you’re printing twice,” Abercrombie said. “Before double patterning, you would take the layer, simulate the contours of the lithographic image, then basically do spacing checks or width checks on the contours as printed. The question now is, will it actually be robust?”

Simply put, once the lithographic process is simulated, the result is measured. With double patterning, the litho simulation has to be done on the two masks separately, and then those contours must be overlaid on top of each other to see what the final contour will look like.
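The “corners” Abercrombie mentions below are the worst-case overlay shifts between the two masks. As a hedged sketch only, with printed features simplified to axis-aligned rectangles rather than simulated litho contours, a corner-based spacing check might look like this:

```python
# Hypothetical sketch: checking printed spacing between two decomposed
# masks at the four worst-case overlay ("corner") misalignments.
# Features are simplified to axis-aligned rectangles (x0, y0, x1, y1);
# real signoff flows measure simulated lithographic contours instead.

def rect_gap(a, b):
    """Minimum edge-to-edge distance between two rectangles (0 if they touch/overlap)."""
    dx = max(b[0] - a[2], a[0] - b[2], 0.0)
    dy = max(b[1] - a[3], a[1] - b[3], 0.0)
    return (dx * dx + dy * dy) ** 0.5

def shift(rect, dx, dy):
    x0, y0, x1, y1 = rect
    return (x0 + dx, y0 + dy, x1 + dx, y1 + dy)

def worst_case_spacing(mask_a, mask_b, overlay):
    """Smallest A-to-B spacing over the four overlay corners (+/- overlay in x and y)."""
    worst = float("inf")
    for sx in (-overlay, overlay):
        for sy in (-overlay, overlay):
            for ra in mask_a:
                for rb in mask_b:
                    worst = min(worst, rect_gap(ra, shift(rb, sx, sy)))
    return worst

mask_a = [(0.0, 0.0, 40.0, 10.0)]   # nm
mask_b = [(0.0, 18.0, 40.0, 28.0)]
# Nominal 8nm gap shrinks to 5nm under a 3nm worst-case overlay shift:
print(worst_case_spacing(mask_a, mask_b, overlay=3.0))  # 5.0
```

The point of the sketch is that a single nominal spacing check is no longer enough: every inter-mask spacing has to hold at all four overlay corners.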

“Then because the two masks can misalign compared to each other, you’ve got to then do corners,” he said. “The same thing impacts fill—the fill data has to be colored too. Now you’re not only balancing the density of a mask, you’re balancing the density of two. Not only does the sum of the two masks, including the fill, have to meet some level of density, but each mask has to be uniformly dense unto itself for the etch process associated with printing and etching them. So fill becomes one of the tools to do that, not only to fill it so that you get a total amount of data uniformly but then biasing the colors of the fill shapes. Imagine that in the surrounding real shapes (the shapes that make up the circuit), for some reason there was more of one color than the other. Then when you put in the fill, you’d want to bias the coloring of the fill the opposite way to even out the color percentage of each in that region. You have to start looking at doing things like that.”
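The fill-biasing idea Abercrombie describes can be sketched as a simple greedy balance. This is an illustrative toy, not a production fill algorithm: real tools also handle density windows, spacing rules, and coloring legality, but the core bias-the-opposite-way logic looks roughly like this:

```python
# Hypothetical sketch: biasing fill-shape colors to balance per-mask
# density in a region. If the real circuit shapes lean toward one
# color (mask), each new fill shape is greedily assigned to whichever
# mask is currently less dense, evening out the two masks' areas.

def area(rect):
    x0, y0, x1, y1 = rect
    return (x1 - x0) * (y1 - y0)

def color_fill(real_shapes, fill_shapes):
    """real_shapes: list of (rect, color) pairs, color 0 or 1 (the two masks).
    fill_shapes: list of rects. Returns (fill colors, per-mask total areas)."""
    mask_area = [0.0, 0.0]
    for rect, c in real_shapes:
        mask_area[c] += area(rect)
    colors = [None] * len(fill_shapes)
    # Greedy: place each fill shape (largest first) on the less-dense mask,
    # biasing the fill coloring opposite to any imbalance in the real shapes.
    for idx, rect in sorted(enumerate(fill_shapes),
                            key=lambda p: area(p[1]), reverse=True):
        c = 0 if mask_area[0] <= mask_area[1] else 1
        colors[idx] = c
        mask_area[c] += area(rect)
    return colors, mask_area

# Real shapes skew toward mask 0 (area 400 vs. 100), so the fill skews to mask 1:
real = [((0, 0, 10, 10), 0), ((0, 0, 30, 10), 0), ((0, 0, 10, 10), 1)]
fills = [(0, 0, 10, 10), (0, 0, 10, 10), (0, 0, 20, 10)]
colors, totals = color_fill(real, fills)
print(colors, totals)  # [1, 0, 1] [500.0, 400.0]
```

Even in this toy, the fill coloring moves the mask areas from a 4:1 imbalance to nearly even, which is the effect the etch process needs.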

Complexity drives collaboration
Given the challenges at 20nm, there is a lot of collaboration that happens early on in the EDA-foundry ecosystem. “Essentially the motivation and the goal of the foundry partner and the EDA partner have been to reduce the impact for the design community,” Chacko said. “Think about 20nm right now. Two years ago there was a lot of anxiety about whether designers, place and route people, could do a decomposition. You look at it now and the rules are all there—there are double-patterning rules, there’s decomposition. In the end, that’s what’s going to happen. The EDA tools will develop and whatever has to be done will happen. Essentially the impact is designers will have to budget more verification time. Signoff definitely is getting more intense as you think of the next nodes, because there’s more complexity, more checks to be done with not just one mask but two masks, three masks, and too many interactions, and therefore the critical problem is the predictability of the yield. That’s what really becomes key.”

The risk for the foundry is going up, and to mitigate the risk they require more verification in the design and the signoff flows. That means DFM verification will become more complex.

“It’s not that they’re trying to make things more difficult, but they have good consistency with their process and the infrastructure that they are delivering, which are the tech files and the rule decks and so on. So if you give a foundry the design, they do this decomposition, they do OPC, they make a mask. The key thing is that they are enforcing a nice tie-in with what they are using in manufacturing, and not having too much imbalance between their manufacturing flow and the signoff flow,” he added.

Help wanted
All of this complexity also translates to a need for additional engineering talent, Abercrombie said. “The designs are bigger, there’s more complexity not just with double patterning but with all the other various rules. You just look at the number of rules in the design rule deck. It continues to increase almost exponentially from node to node to node. It gets more complex; there’s more to account for. You’re doing more layouts that are harder layouts.”

What this boils down to is that semiconductor companies must hire additional skilled engineers, as well as purchase additional CAD tools. “Not only do they need more tools, but they need a tool that now can do double patterning or pattern matching or all of these other things. They need to run on more CPUs because the designs are so huge that you just need more CPU power to crunch it.”
