Different Ways To Boost Yield

Physical design and diagnostics are hardly new techniques, but they are being used differently, and by different kinds of companies.


By Ann Steffora Mutschler
In the race to get products to market amid shortening product cycles, steepening the ramp to yield is critical. The introductory phase of a product is the point at which margins are highest and market share can be most easily gained.

This is no surprise to chipmakers. What is surprising is just how much more difficult it has become to achieve acceptable yield quickly, particularly at advanced process nodes. And that’s true even with restrictive design rules, the most advanced tools and the best methodologies and engineers. Still, for most large chipmakers, not meeting early deadlines with sufficient yield isn’t an option.

“That’s the sweet spot for profitability,” said Joe Swenton, architect for Encounter Diagnostics at Cadence Design Systems. “How soon you can ramp to volume is becoming more than just a matter of profitability. In some cases it’s becoming a business imperative. You’re not going to stay in business unless you learn how to do this.”

As a result, IDMs already have integrated diagnosis-driven yield analysis—also referred to as volume diagnostics—while fabless companies are in an earlier stage of adoption. The primary usage of volume diagnostics is for yield ramp: accelerating yield learning to decrease the time required to achieve nominal yields for high-volume devices. In that environment, users may be running thousands of failing die through diagnostics per day to identify systematic yield limiters that are becoming more prevalent in the smaller geometries.

Traditionally, what people have done to address yield is look at a lot of data from a lot of different sources, primarily from the manufacturing end of the process, using technology such as in-line inspection, where pictures are taken of the devices at different stages of production, explained Geir Eide, product manager for yield analysis at Mentor Graphics.

After that, high-level test results are used to determine, for instance, whether the logic passed but more of the memory is failing than normal. “When there seems to be more of a particular type of failure, people would normally go into a process of failure analysis, where you basically look at some individual parts, say, a handful of parts. And depending on what you think the problem is, you would use different types of microscopes and equipment, or very often, physically slice and dice the chip to de-layer it in order to get to the source of the problem, then take a picture of it,” he said.

The problem is that as chips get bigger, line widths get smaller, and everything gets faster, the failure analysis process faces real challenges. “You definitely want to find the defect, but even if you find a defect for the part you are looking at, do you know whether that particular part represents a problem you can solve? In other words, especially when you are ramping up, there are usually many different problems happening at the same time,” Eide pointed out.

Diagnosis-driven yield analysis is about finding a better way to identify systematic issues before failure analysis, and to select better devices for failure analysis.

Specifically, this is done by using diagnosis, which is a way to analyze production test results to find out what’s going on with each failing device. From there, the test team tries to learn from the test results where the defect is located and what type of defect it is: whether it behaves like a bridge between two nets or like an open, and whether it is a defect in an interconnect or internal to one of the logic cells in the design.
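As a minimal illustrative sketch (the record fields and names here are hypothetical, not any vendor's diagnosis format), a single diagnosis result can be thought of as a ranked list of "suspect" records, each carrying a location in the design and a suspected defect class:

```python
from dataclasses import dataclass

# Hypothetical record for one diagnosis "suspect" on a failing die.
@dataclass(frozen=True)
class Suspect:
    die_id: str        # which failing die this came from
    net: str           # suspect net or cell pin in the design
    defect_type: str   # e.g. "bridge", "open", "cell-internal"
    layer: str         # physical layer, e.g. "M3", "V2"

# One failing die can yield several candidate suspects; diagnosis
# ranks them by how well they explain the observed test failures.
candidates = [
    Suspect("die_017", "clk_buf_12/Z", "open", "M3"),
    Suspect("die_017", "n4532", "bridge", "M2"),
]

# A bridge shorts two nets together; an open breaks a single net.
bridges = [s for s in candidates if s.defect_type == "bridge"]
print(len(bridges))  # 1
```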

Not new technology
This is hardly a new concept, of course. “People have been doing it since the beginning of our industry,” said Sagar Kekare, group manager for product marketing of manufacturing yield management at Synopsys. “As the complexity of yield-limiting failure mechanisms kept increasing, at some point the tests that give you insight into what yields and what doesn’t had to become more complex, and you couldn’t use just the test data to say this is where the failure is. That’s where people started thinking about test slightly differently. They went from functional tests to structural tests: DFT and ATPG. Diagnosis is basically the key benefit of the DFT and ATPG approach on the yield front, alongside its other benefits, including coverage and the quality of test.”

From a yield analysts’ perspective, being able to diagnose the test data and triangulate the failure to a spot or a few candidate spots was a big advantage of a DFT/ATPG structural test approach. “People have been looking at diagnostics to do this job in yield learning for a few years now. It’s not something very new,” Kekare continued.

Diagnosis can reveal what happened to a single failing device, while yield is a statistical distribution. What has happened in the last two-and-a-half to three years is that test teams have needed to go beyond understanding why one chip failed to determining why they were getting only 60% yield, for example. Going from diagnosing a single failed device to treating it as an entry point into statistical analysis is new.

Kekare believes the tipping point occurred when the 90nm/65nm nodes started getting more production usage. He pointed out that sub-100nm nodes have a very strange behavior in terms of manifesting design process interaction issues. “These design process interaction issues really are for yield analysis people an unhappy marriage of what was in the design to begin with, a marginal element or a marginal path, with whatever is the process marginality in the fab.”

These issues only increase in the smaller nanometer node technologies, but there is no clear indicator of where they are. This was the impetus for test engineers to start bringing data from wafer inspections together with diagnosis.

Cadence’s Swenton agreed. “Volume diagnostics has been a useful tool for a number of years but it’s really catching on as more of a requirement due in part to the shrinking geometries and the fact that traditional in-line inspection methods are becoming less useful because of the non-visual nature of defects. It’s critical to be able to run as many failing die as you can afford to. Obviously it needs to be a statistically significant sample size, and this is done for both logic diagnostics and chain diagnostics used in volume.”

Added Mentor’s Eide: “You know that of 100 defective devices you see, 90 of them are diagnosed to have a defect in the same net. Then you know, just from that location, that those 90 most likely have something in common.”

But if every device is diagnosed in a different location, adding layout-aware diagnosis can provide not only the location but also a better idea of the type of defect and what kinds of physical features are part of the net segment where the defect is believed to be. If a particular type of via or logic gate is associated with each of these defects, even if they are in different locations, then you know what they have in common. That part is new. Also new is the second step in the process, which is to provide a more sophisticated way of analyzing these diagnosis results.

The analysis is done by combining a few things. Given that engineers by nature like to stick with what they know, it is about representing the more detailed information in the way they are used to seeing it. As such, wafer maps and Pareto diagrams are used, but with more data.
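The aggregation step described above can be sketched in a few lines: given per-die diagnosis callouts, count how often each suspect physical feature appears across failing die and rank them as a Pareto. The feature names and counts below are invented purely for illustration:

```python
from collections import Counter

# Invented per-die diagnosis callouts: (failing die, suspect feature).
callouts = [
    ("die_001", "via_V2_double_cut"),
    ("die_002", "via_V2_double_cut"),
    ("die_003", "net_n4532"),
    ("die_004", "via_V2_double_cut"),
    ("die_005", "cell_NAND2X1"),
]

# Pareto: rank suspect features by how many failing die implicate them.
pareto = Counter(feature for _, feature in callouts).most_common()
for feature, count in pareto:
    print(f"{feature}: {count} failing die")
# A feature that tops the list across many die, even at different
# locations, points to a systematic yield limiter rather than a
# random defect.
```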

In a recent presentation, GlobalFoundries and Mentor showed a few case studies from some of their test chips:

Source: “Optimizing Yield and Performance in a Nanometer World”, Thomas Herrmann, GLOBALFOUNDRIES and Geir Eide, Mentor Graphics.

Fabless adoption starting to rise
While adoption for diagnosis-driven yield analysis has been strong in the IDM space, fabless companies are just starting to put together yield-learning teams.

“The adoption rate among IDMs I would say is near 100%, and among the fabless entities, just like any other new thing that we ask fabless teams to do, it starts with some of the top-level, marquee accounts—graphics processor guys, cell phone guys—and then it starts percolating down to others. We’ve seen the same kind of trend here in the adoption of volume diagnostics. Most of the IDMs already are doing volume diagnostics one way or another, either with a commercial solution or with something they’ve built in-house, held together by scripts and things like that. But they are all doing it in one form or another. The big fabless companies are doing it, but the medium and smaller fabless are just beginning to hear about it and possibly trying it out in one-off situations,” according to Synopsys’ Kekare.

Yield hasn’t been a core competency of fabless companies because it was always something that the foundry would handle. The idea that the design team has to get more intimately involved in managing and improving yields is still a bit new.

This is why the technology is also referred to as design-centric yield analysis or design-centric volume diagnostics: design teams need to get involved today in a different way than they did in the past. Today, design teams need to know whether any strong effects limiting either the yield or the performance of a newly taped-out device are coming from design steps. Such effects can originate in the use of design elements such as standard cell libraries or via structures developed by the foundries, in the use of the foundries’ recommended rules, or in different teams designing different blocks of the design in slightly different styles.

“These are issues foundries cannot answer, and that’s why design teams are seeing more and more need to get involved, even though there is hesitation because of this history of foundries taking care of yield issues. But there is also the realization that this requires the use of a lot of design data, and as a fabless company, I’m not comfortable giving all this data to the foundry,” he said.

Test approach needs adjustment
At the same time, there is no free lunch, Kekare reminded. “If you want to be able to diagnose the dies, you need good data coming out of the testers to use for diagnosis, and most often the approach to testing has been stop-on-fail.”

But this approach is flawed, he argued. “What happens is that you save time on the tester…but what you save in tester utilization cost, you lose in your ability to trace the sources of the failure and figure it out quickly.”

The yield teams and the test teams need to agree that, at least when ramping up the product, they will abandon the stop-on-fail approach and instead use a continue-on-fail approach so that all the tests are completed. This means that even after the first test fails, the second and successive tests will still be run. “All this data is really important to do the triangulation and localization of the failure. So what you lose in test cost, you are going to gain back multiple fold in the ability to figure out where the failures are and solve them quickly. This is a conceptual decision that needs to be made by the teams early on,” Kekare added.
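The difference between the two modes can be sketched as a simple test loop. The pattern names and pass/fail results here are invented for illustration; real tester flows are far more involved:

```python
def run_tests(patterns, stop_on_fail):
    """Run test patterns on one die; return the observed failures.

    In stop-on-fail mode the tester aborts at the first failing
    pattern, so diagnosis sees only one data point. In continue-on-fail
    mode every pattern runs, giving diagnosis the full failure
    signature needed to triangulate the defect.
    """
    failures = []
    for name, passed in patterns:
        if not passed:
            failures.append(name)
            if stop_on_fail:
                break  # tester time saved, diagnosis data lost
    return failures

# Invented results for one failing die: three of five patterns fail.
patterns = [("p1", True), ("p2", False), ("p3", True),
            ("p4", False), ("p5", False)]

print(run_tests(patterns, stop_on_fail=True))   # ['p2']
print(run_tests(patterns, stop_on_fail=False))  # ['p2', 'p4', 'p5']
```

The trade-off Kekare describes is visible directly: stop-on-fail yields a single failing pattern, while continue-on-fail captures the complete failure signature at the cost of extra tester time.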

Leveraging existing structures, patterns
Interestingly, the diagnosis-driven yield analysis process primarily leverages design for test (DFT) structures and the production test patterns that are already there and which serve as the foundation for this flow, Mentor’s Eide noted. “The basic principles are already there, but when it comes to practical implementation it’s mostly about data management—basically making sure that the actual files you need to run diagnosis are actually available when you want to do diagnosis.”

In theory, pretty much everybody has the data, he said, and it’s more about how to use it.

For instance, diagnosis happens when devices are actually manufactured and part of what is used for the process is the actual design description. “Part of the power here is that when we go through this process we have all the design information so we can find design-specific problems more so than, let’s say, the traditional methods that are more fab-centric. But then again, in the old days, once you had sent your design files to the foundry and it got manufactured, you could basically forget about the design files and put them on a tape somewhere. Now we need those files six months later,” Eide said.

It comes down to having the data available from design, and the test results from the tester as the basic infrastructure to enable it. In terms of the actual information it is pretty much already there; it’s about managing it.
