Test Challenges Grow

Experts at the table, part 2: Balancing rising complexity with driving down the cost of test; consolidation in the test industry; hierarchical, protocol-aware and wafer-level test.


Semiconductor Engineering sat down to discuss current and future test challenges with Dave Armstrong, director of business development at Advantest; Steve Pateras, product marketing director for Silicon Test Solutions at Mentor Graphics; Robert Ruiz, senior product marketing manager at Synopsys; Mike Slessor, president of FormFactor; and Dan Glotter, chief executive of Optimal+.

SE: In our last discussion, we were talking about test costs. Are there any more issues with test costs?

Slessor: If we were all looking at the same test problem, it would be a simple matter to drive test costs down. But you are constantly chasing an increasing number of complex requirements. As suppliers, we clearly need to make investments to meet those technical requirements. So you are chasing a moving target on the technical front, while being asked to drive down whatever the marginal cost of that is. That could be the cost per die or cost per transistor. In addition, the industry’s scaling continues to be challenged by classical front-end Moore’s Law. And many of the technical requirements we’re being asked to meet are accelerating along the trajectory that front-end lithography is on. It’s an interesting problem in that we have a certain R&D budget and a certain set of engineers. On which of these problems should we really focus to drive down the cost of test, while meeting these accelerating technical requirements? So one of our fundamental challenges is the economics of this whole thing.

Glotter: I’d like to offer a different approach to the cost of test. What we’ve seen in the last two or three years is that there is a huge penalty on quality and reliability. At the end of the day, test was originally ‘go or no go.’ But today, we know that customers, or potential customers, lost business to known suppliers because of their inability to maintain the right level of quality or reliability. Today, that is easily measured by RMAs. We’ve seen public companies that lost whole product lines, and by that, they lost everything. They might have tried to shrink the cost of test to this or that, but this becomes a very dangerous game. This brings me to a definition of what the cost of test is. Usually, it’s counted as the cost of a tester, the prober, the probe card and some other consumables. Today, none of our customers would dare do testing with only that. Now, they are bringing in software tools in order to obtain data from wafer sort to final test. So I think it’s going in a completely new direction. I think we need to look at the cost of test very differently. Test, in relation to quality and reliability, is the name of the game. This is a good era for test.
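To put that traditional accounting into concrete terms, here is a minimal, hypothetical sketch of a per-good-die cost-of-test calculation. The parameter names and values are illustrative assumptions, not figures cited by the panelists, and the model deliberately includes only the classic inputs Glotter lists.

# Hypothetical cost-of-test sketch (all names and numbers are assumptions).
# It covers only the classic inputs: tester and prober depreciation, probe card
# and consumables, test time, parallelism and yield.

def cost_per_good_die(capital_cost_per_hour, consumables_per_hour,
                      test_time_s, sites, yield_fraction):
    """Amortized test cost for each die that passes."""
    dies_per_hour = sites * 3600.0 / test_time_s
    good_dies_per_hour = dies_per_hour * yield_fraction
    return (capital_cost_per_hour + consumables_per_hour) / good_dies_per_hour

# Assumed example values:
c = cost_per_good_die(capital_cost_per_hour=60.0,   # tester + prober depreciation
                      consumables_per_hour=5.0,      # probe card wear, etc.
                      test_time_s=2.0, sites=8, yield_fraction=0.9)
print(f"cost per good die: ${c:.4f}")

# An escape that triggers an RMA or the loss of a product line carries a cost many
# orders of magnitude larger than this per-die figure, which is Glotter's point.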

SE: We’ve seen a major wave of consolidation in the ATE industry over the years. Is this good or bad for the industry?

Glotter: Remember, we are a software company. Five to 10 years ago, we were integrating our technology with a dozen ATE vendors. Today, that has been reduced to two or three. It makes our life much easier. It also makes the life of our customers much easier in the sense that they can spend R&D money more appropriately. It doesn’t matter if it’s EDA, probe cards or testers. There are expectations that customers are setting. And you do not have an endless amount of R&D money that can be spread all over.

Slessor: Consolidation has been a response to a fundamental problem: the need to focus a certain amount of investment on solving a problem across a wide variety of companies. It doesn’t make a lot of sense if you have a lot of redundancy there. Now, if you have only one supplier for anything, most of our customers don’t like that very much. They won’t let that happen. As long as you have various tool vendors, and an efficient use of R&D and resources, the industry is in a reasonable equilibrium position, given where the overall semiconductor market is going.

Armstrong: Certainly, the merger of Advantest and Verigy has been healthy for both companies, in my opinion. From a customer perspective, I am happy to say that we’re getting excellent grades. Customers who were once leery about the merger used to say: ‘I am a dual-vendor ATE shop.’ And now, they are happy to say: ‘We are an Advantest shop.’

Ruiz: I will interpret the question as being about ATE consolidation and its impact on EDA. Briefly, there hasn’t been much impact in terms of what our customers are saying. That’s typically because customers today target multiple testers to run their test programs, and through the use of standards, the test programs themselves are portable.

SE: How do we test the next system-on-a-chip (SoC) or giga-gate devices?

Pateras: A lot of this has to do with reuse. People are leveraging IP for reuse. Test has to go along with that. So you need things like hierarchical test strategies, where you re-use patterns, and you must re-use patterns for DFT. You also require better access to this IP from within the SoC. So you have the new IEEE P1687 IJTAG standard, for example. It hasn’t been ratified yet, but we are seeing a lot of interest in it. It now provides a standardized way of talking to all of these various pieces of IP you are designing in, for debug. It also comes down to my time-to-market argument. You need to do this quickly and efficiently. Let’s say you have multiple pieces of IP. You might have high-speed I/Os, clock generators, PLLs and various instruments. You have to initialize these things, get them tested and debug them in a timely fashion. A methodology where you can deal with a slew of third-party IP in an efficient way is critical. Standardization, and the automation to support that standardization, is critical.

Ruiz: Hierarchical test is definitely something that is needed. In some respects, you can think of it as divide and conquer, particularly for the largest designs and SoCs. Again, I will go back to my previous statement about integrating test with design. In order to enable those technologies, it’s really a matter of how you standardize that. The larger companies have the resources to go off, build their own methodologies, and put in hierarchical test. In fact, that’s what’s happening today. The question is for those that can’t afford those resources. How do they go about developing a methodology? We see that the best path is using standards. For example, there’s the IEEE 1500 standard. Standards help enable designs for a broader set of customers. But it also has to be tied back to the design, so as not to break the performance, power and area goals of the design.
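As a rough illustration of that divide-and-conquer idea, the sketch below re-uses block-level pattern sets by retargeting them to top-level access ports instead of regenerating patterns for the full design. The classes, pin names and pattern format are hypothetical, and the sketch is not tied to any particular EDA tool or to the IEEE 1500/P1687 wrappers discussed above.

# Hypothetical sketch of hierarchical pattern retargeting (divide and conquer).
# Each core or IP block is signed off with its own pattern set; at the SoC level
# those patterns are re-used by mapping block pins to top-level access ports.

from dataclasses import dataclass, field

@dataclass
class BlockPatterns:
    name: str
    patterns: list            # block-level test patterns (opaque here)
    pin_map: dict             # block pin -> top-level access port

@dataclass
class SocTestPlan:
    blocks: list = field(default_factory=list)

    def retarget(self):
        """Re-express every block's patterns in terms of top-level ports."""
        top_level = []
        for blk in self.blocks:
            for pat in blk.patterns:
                top_level.append({blk.pin_map.get(pin, pin): val
                                  for pin, val in pat.items()})
        return top_level

plan = SocTestPlan(blocks=[
    BlockPatterns("cpu_core", [{"scan_in": 1, "scan_en": 1}],
                  {"scan_in": "TOP_SI0", "scan_en": "TOP_SE"}),
    BlockPatterns("pll", [{"bypass": 0}], {"bypass": "TOP_PLL_BYP"}),
])
print(plan.retarget())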

Glotter: Most of what we are trying to do today is to develop technologies that are faster, better and cheaper. We are also trying to do something that has not been done until today. This is the integration of everything together in the test flow. For example, in the ITRS, there is a buzzword called data feed-forward. As I said before, I think most of our customers today would not dare do test without data feed-forward or data feed-backwards. Meanwhile, if you talk about standards, it is becoming quite cumbersome. When you look at a multi-chip package or whatever, you have one company that has enough money to integrate that standard. Then, you have another company that would put something in a design without the right standard. But the problem is that everything needs to speak together.
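To make the data feed-forward idea concrete, here is a minimal, hypothetical sketch in which wafer-sort measurements ride along with each die and adjust the final-test flow. The field names, limits and flow names are assumptions for illustration only, not any vendor's actual implementation.

# Hypothetical data feed-forward sketch: wafer-sort results travel downstream with
# each die and influence the final-test recipe (e.g., extra burn-in for marginal parts).
# All field names and limits are illustrative assumptions.

def wafer_sort(die_id, leakage_ua, vmin_mv):
    """Collect parametric data at sort and pass it downstream with the die."""
    return {"die": die_id, "leakage_ua": leakage_ua, "vmin_mv": vmin_mv,
            "sort_pass": leakage_ua < 50 and vmin_mv < 850}

def final_test_recipe(sort_record):
    """Choose the final-test flow based on data fed forward from sort."""
    if not sort_record["sort_pass"]:
        return "reject"                      # never packaged in the first place
    if sort_record["leakage_ua"] > 40:       # marginal at sort, so screen harder
        return "extended_burn_in"
    return "standard_flow"

for rec in (wafer_sort("W1-D17", 12.0, 790.0), wafer_sort("W1-D42", 46.0, 810.0)):
    print(rec["die"], "->", final_test_recipe(rec))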

Ruiz: I want to clarify something. There are larger companies that actually have the money and resources. They have big enough teams that they can do ad hoc methods, whereas standards actually enable the technology for a broader set of customers.

Glotter: That’s correct. But at times, it’s not the same game. Here’s a simple example. In a multi-chip package, you see certain parts from Intel, Toshiba or whatever. They have a certain level of quality. Then, there are devices made by others. And then you need to take everything into consideration and do something in which the devices speak a certain language.

Armstrong: I would like to follow up on that. I agree with these digital-centric comments. And certainly, I agree there are some big trends we need to follow. But in the SoC space, there is one thing that we particularly need to worry about: power. Power in today’s devices is king. As geometries shrink, our noise margins and timing margins get smaller. And random faults can crop up anywhere and anytime. The challenge is that everyone wants their cell phone to last for a very, very long time. Certainly, there is an analog or RF element in that. Today, the power aspect is one of the key trends I would focus on, in addition to the digital trends.

SE: I hear about the need to use protocol-aware test techniques for SoCs. So in other words, do we need to go back to functional test in order to test complex SoCs?

Armstrong: Protocol-aware is a technology that really helps with the time-to-market question. It helps the design and test engineers speak the same language. There are a lot of other buzzwords that are meant to make the test more efficient. But for SoCs, the tests are changing. We as an industry have to figure out how to confront that. In addition, ‘More than Moore’ is real and the devices are also changing. For example, if a part has a complicated MEMS sensor on it, we’ve got to handle that as well.

SE: Can we just test SoCs and other complex chips at the wafer level?

Slessor: The obvious answer is no. There are ways of doing more at the wafer level, largely in an evolutionary way. It may also take some investment in technologies like adaptive test. Still, wafer test is done today so that, to a first order, you are not packaging bad parts. There are still a bunch of failure modes associated with packaging those parts, whether packaging them individually or in multi-chip modules. So that model of being able to figure everything out, or even most things, at the wafer level is actually diverging from reality. You try to test more things at the wafer level, but then you bring the devices together and see how they interact. So wafer test is still important, because you don’t want to kill your microprocessor with a 50-cent DRAM. But it’s not the be-all and end-all.

Glotter: We are going back to the golden era of wafer test. Quite a few years ago, there was a trend to move from wafer test toward more final test. But today, it doesn’t matter if it is an MCP or a wafer-level package. The more you can test at the wafer sort level, the better.

To read part one of this roundtable, click here.
To read part three of this roundtable, click here.


