DFM And Multipatterning

Experts at the table, part 3: Getting patterning right the first time; shifting methodologies; plus, the growing challenge of designing while trying to stabilize tools and processes.

Semiconductor Engineering sat down to discuss DFM at advanced nodes with Kuang-Kuo Lin, director of foundry design enablement at Samsung Electronics; Jongwook Kye, lithography modeling and architecture fellow at GlobalFoundries; David Abercrombie, advanced physical verification methodology program manager at Mentor Graphics; Ya-Chieh Lai, engineering director for DFM/CLS silicon signoff and verification at Cadence; and Soo Han Choi, a member of the senior staff for foundry R&D in Synopsys’ SoC design group. What follows are excerpts of that conversation.

SE: What are the big concerns with multipatterning? And how do you make sure it’s right the first time?

Abercrombie: We used to be able to do it in isolation. We could own performance and keep making it scale. Not with triple patterning. Now you’re working on problems that are NP-complete, and they’re inherently performance-unfriendly by definition. The problem is technically unsolvable unless there is a combined effort from the designers and the foundry to say, ‘Here’s how we’re going to constrain the problem so that it will run with reasonable performance.’ It used to be that you could hand an EDA solution to somebody and, no matter what they did with it, it would give them performance. That’s no longer the case. We have to agree how it’s going to be implemented and tuned, and how it’s going to be used, and with those restrictions we can make it work fast. Go randomly playing outside that scenario and you’re up the creek.

Lin: It’s a three-way agreement. We have EDA, foundries and customers. The customers have their own mindset in terms of how they want to run their design flow.

Choi: Generally, foundries have virtual lithography simulation environments and patterning-margin-checking test vehicles for the next node. Based on the simulation results and the wafer results from those test vehicles, foundries can write the proper design rules to ensure that the patterning is right the first time.

SE: In the past, chipmakers would create designs using EDA tools and then the foundries would fix them. Is that even possible anymore?

Choi: Foundries try to write complex design rules to prevent process hot spots and mistakes in designs. In addition, foundries have started to use pattern-matching technology, instead of conventional design rule descriptions, to catch mistakes in designs.

Abercrombie: It’s certainly still the case where you get a base tool out there, you pound on it, your customers pound on it, we learn that didn’t work or something else needs to happen and we iterate. That’s always existed. The fixing it has become three-tier, at least. It used to be fix the tool. Now it’s fix the tool, fix the deck, fix the methodology you’re using on that deck with this tool. Only when you do all of that do you get an answer that’s going to work. Everybody has to iterate and fix, not just the tool vendors. We’re investing a lot more manpower in deck development than ever before.

Lai: That’s why there’s a push to get more coloring information to designers early. You can get to the point where a design is taped out, it’s at the foundry, but it’s fundamentally not manufacturable. You can’t decompose it. Something bad has happened. You need to get that information to designers early, when the chances of fixing it are very high.

Lin: The traditional flow model does not work. EDA cannot throw a tool over to the foundry, and the foundry cannot throw a deck over to the customer. Now it’s more of an iterative model where the three of us collaborate and discuss what should be there: what are the proper tools and decks, and what is the proper methodology to make all of this work together. That’s what we need to strive for.

Kye: We have been collaborating. The only difference today is that the methodology is changing. We’re going to add more content. We’ve been shrinking and shrinking and shrinking, but with 14nm we now have double patterning, and at 10nm we will have triple patterning. It’s a different methodology, but it’s a continued evolution.

SE: Does it make it harder to produce derivative chips with multipatterning?

Kye: Analog is especially hard to shrink. We’re investigating whether this is a real showstopper. There is so much work to do with analog for a single node. We may have more innovation possible for analog. For I/O, it is probably not scalable, but we may be able to do more by adding restrictions. We haven’t seen the limit yet, but it may be limited by multiple derivatives.

Lin: People have to be cognizant of the coloring context with derivative chips. You have to be aware of the surrounding color environment.

Lai: It’s also dependent on whether the derivative chip is within the same process. If it is, control gets better. But if you’re going from one node to another and you’re thinking you’re going to shrink it, you can’t change the double patterning.

Abercrombie: You can’t port from one foundry to another, either. That’s getting more and more difficult.

Choi: Because of overlay errors between two masks, it’s very hard to produce derivative chips.

SE: This is a different slice of Moore’s Law. If the fundamentals change for derivatives, then the whole equation changes, right?

Kye: Yes. Analog parts already have started changing. Typically we talk about a 0.7 scale factor per node, which is true for logic, memory and SRAM. With analog you’re lucky to get 0.9. But now we’re talking about 0.6 and 0.5 for analog. That has to happen to continue this scaling.

Abercrombie: The analog guys have always been the artists. They create handcrafted stuff. That’s why it won’t scale. When you go to a more regular, digital version, it scales. Everyone has to get on the bandwagon of regularity.

Kye: Our customers are looking at that, but I don’t know if it’s going to happen or how sensitive it is or how long it takes. We definitely have to do something, though.

SE: What’s the biggest challenge in moving to 14nm and beyond?

Lai: Collaboration will drive a lot of decisions, and increasingly we need to know exactly what the foundries are doing as early as possible.

Abercrombie: But increasingly they don’t know. It’s been odd designing software. It used to be you wrote to the spec and then you were done. That’s not the case anymore. ‘It’s not working, so how about trying this?’ And next week, ‘Let’s try this.’ The economic impact on us is that it requires more development staff and more development time, because we’re iterating on what we’re building.

Lin: There’s a lot of exploration.

Abercrombie: There was one case in particular where we went down a path and spent six months designing a tool and we threw it out the door and went off in a completely different direction.

SE: It didn’t use to happen, though, did it?

Abercrombie: Not nearly as often. These are new multi-patterning techniques, like DPT coloring. ‘How is that going to work? We think it’s going to work like this…’

Choi: We are seeing the same problem. There needs to be more collaboration between the people creating the solution and the designers.

Abercrombie: We’re all experimenting together. Odd cycles are illegal, even cycles are legal, but most odd and even cycles are clean in practice. Combinations of things are not easily described. What kind of error marker are we going to produce?
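The odd-cycle rule Abercrombie describes comes from modeling double patterning as graph two-coloring: shapes spaced too closely to print on one mask become nodes joined by conflict edges, and a layout decomposes onto two masks exactly when that conflict graph has no odd cycle, i.e. is bipartite. A minimal sketch of that check in Python (the graphs and names here are hypothetical, not from any EDA tool):

```python
from collections import deque

def two_color(conflict_graph):
    """Try to 2-color a conflict graph given as an adjacency dict.
    Returns a {shape: mask} assignment, or None when an odd cycle
    makes double-patterning decomposition impossible."""
    colors = {}
    for start in conflict_graph:
        if start in colors:
            continue
        colors[start] = 0
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for neighbor in conflict_graph[node]:
                if neighbor not in colors:
                    # Conflicting shapes go on opposite masks.
                    colors[neighbor] = 1 - colors[node]
                    queue.append(neighbor)
                elif colors[neighbor] == colors[node]:
                    # Odd cycle: two conflicting shapes forced onto one mask.
                    return None
    return colors

# Even cycle (4 shapes in a ring): legal, decomposes onto two masks.
even = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "a"]}
# Odd cycle (3 mutually conflicting shapes): illegal for double patterning.
odd = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
print(two_color(even) is not None)  # True
print(two_color(odd))               # None
```

Real decomposition checkers work on polygon geometry and spacing rules rather than a prebuilt graph, but the odd-cycle criterion is the same.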

Lin: This is all moving so fast that it’s like cooking and eating at the same time. We don’t have the time to really sit down and think all of this through for the early adopters.

Lai: We have successfully gone into double patterning. That’s a very big deal. Things are progressing, and people have figured out how to deal with odd cycle violations. Now we have to figure out how to deal with triple patterning violations and how to feed that back to the designer in a useful way.
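Triple patterning is qualitatively harder than double patterning: deciding whether a conflict graph can be split across three masks is graph 3-coloring, which is NP-complete in general. That is why the panelists stress agreeing on layout restrictions up front. A brute-force sketch, fine only for tiny hypothetical examples:

```python
from itertools import product

def three_colorable(nodes, edges):
    """Brute-force 3-coloring check for a tiny conflict graph.
    Exponential in the node count; real triple-patterning flows
    rely on design restrictions to keep decomposition tractable."""
    for assignment in product(range(3), repeat=len(nodes)):
        color = dict(zip(nodes, assignment))
        # Legal if every pair of conflicting shapes lands on different masks.
        if all(color[u] != color[v] for u, v in edges):
            return True
    return False

# An odd cycle of 5 shapes: illegal for double patterning,
# but fine once a third mask is available.
nodes = ["a", "b", "c", "d", "e"]
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"), ("e", "a")]
print(three_colorable(nodes, edges))  # True

# Four mutually conflicting shapes (K4) would need a fourth mask.
k4_nodes = ["a", "b", "c", "d"]
k4_edges = [(u, v) for i, u in enumerate(k4_nodes) for v in k4_nodes[i + 1:]]
print(three_colorable(k4_nodes, k4_edges))  # False
```

The contrast with the two-mask case explains the feedback problem Lai raises: a double-patterning violation is always a specific odd cycle that can be shown to the designer, while a triple-patterning failure has no equally simple, local signature.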


