Test Challenges Grow

Experts at the table, first of three parts: New techniques for reducing cost; limitations of existing approaches; what’s changed and future challenges.


Semiconductor Engineering sat down to discuss current and future test challenges with Dave Armstrong, director of business development at Advantest; Steve Pateras, product marketing director for Silicon Test Solutions at Mentor Graphics; Robert Ruiz, senior product marketing manager at Synopsys; Mike Slessor, president of FormFactor; and Dan Glotter, chief executive of Optimal+.

SE: What are the biggest challenges in test from your perspective?

Ruiz: I would say three areas. One is improving the quality of the test program so that it captures more defects and more defective parts. The challenge grows because at more advanced process nodes the defects become more difficult to detect using standard or previously used techniques. The second area is dealing with the cost of test. This is something that has gone on year after year. At some point, the industry introduced compression technology to help deal with that. There are other techniques, such as multi-site testing, which is becoming popular as a way to lower the cost of test. There is always pressure to bring those costs down. And the third is maintaining designer productivity. For a lot of design teams, test is not a big value add. It certainly adds value to the design. But from the perspective of a designer who is trying to get out a 2 GHz design, test is something he has to deal with. So maintaining design schedules, while putting in test that will deliver a cost-effective, high-quality test program, is fairly important. In part, that happens by bringing down the walls between test and design and making sure that's one process.
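A rough way to see why multi-site testing lowers the cost of test is to amortize the tester's hourly rate over the number of dice tested per insertion. The sketch below is a minimal model; the test time, tester rate, and multi-site efficiency figures are illustrative assumptions, not numbers from the panel.

```python
# Illustrative cost-of-test model: multi-site testing amortizes tester
# time across dice. All numbers are assumptions for the sketch.

def cost_per_die(test_time_s, tester_rate_per_hr, sites, multisite_efficiency=0.95):
    """Approximate cost to test one die.

    sites: number of dice tested in parallel.
    multisite_efficiency: fraction of ideal parallelism actually achieved
    (serial overheads such as indexing and per-site retest reduce it).
    """
    eff = 1.0 if sites == 1 else multisite_efficiency
    effective_throughput = sites * eff          # dice per insertion
    insertion_time_hr = test_time_s / 3600.0
    return tester_rate_per_hr * insertion_time_hr / effective_throughput

for sites in (1, 4, 16, 64):
    c = cost_per_die(test_time_s=2.0, tester_rate_per_hr=400.0, sites=sites)
    print(f"{sites:3d} sites -> ${c:.4f} per die")
```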

Armstrong: To me, the re-use of IP blocks and the integration of them into silicon from potentially multiple vendors, and potentially multiple pieces of silicon from multiple vendors, brings with it a plethora of opportunities for the test group. Obviously, the question is whether those IP blocks come with test content. Can that be effectively integrated? Can that test content be multiplexed and sequenced in a way that makes sense to streamline the test and reduce the cost of test? How can we integrate other aspects of the fab in order to optimize the test? There are a lot of aspects to this. We've got to help the industry figure out how to integrate that effectively.

Slessor: The biggest challenges for the test supply chain are the technical cost of test as well as the business model. There is a tremendous amount of innovation, customer requirements and a need for continued R&D spending. There are also cost pressures. And, in fact, the test industry itself has not been a bastion of profitability in the last decade. Understanding what our business model is and what the economics are going forward is probably our biggest and overarching challenge.

Pateras: There are myriad challenges. If I have to choose one that keeps me awake at night, then it's probably scalability. We are seeing designs now in the giga-gate range, and it's breaking a number of things. It's breaking memory footprints and resulting in weeks of test generation time. All of these things are not scaling. So our goal is to look for the next generation of scalability from a test generation perspective. There are a number of things we are doing there. One of the main ones is to take a divide-and-conquer approach, taking advantage of hierarchy. There is IP re-use as well. So if you're doing test for IP, you want to be able to re-use that test data at the next level of hierarchy and combine things. It's all about re-combining and re-using test.
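The divide-and-conquer idea is easiest to picture as generating patterns once per core, then reusing them for every instance of that core at the chip level instead of re-running test generation on the flat design. The sketch below models only the bookkeeping side of that reuse; the data structures and names are hypothetical, not any vendor's actual flow.

```python
# Hypothetical sketch of hierarchical test reuse: core-level patterns are
# generated once, then all instances of the same core share that pattern
# set at chip level rather than regenerating it flat.

from dataclasses import dataclass, field

@dataclass
class CoreTestSet:
    core_name: str
    patterns: list              # core-level scan patterns, generated once

@dataclass
class ChipPlan:
    instances: dict = field(default_factory=dict)  # instance -> core name
    library: dict = field(default_factory=dict)    # core name -> CoreTestSet

    def retarget(self):
        """Map each core's patterns onto every instance of that core."""
        plan = {}
        for inst, core in self.instances.items():
            plan[inst] = self.library[core].patterns  # reuse, no regeneration
        return plan

library = {"cpu": CoreTestSet("cpu", ["p0", "p1"]),
           "gpu": CoreTestSet("gpu", ["q0"])}
chip = ChipPlan(instances={"cpu0": "cpu", "cpu1": "cpu", "gpu0": "gpu"},
                library=library)
print(chip.retarget())   # cpu patterns generated once, applied to cpu0 and cpu1
```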

Glotter: My belief is that test can only do so much. A test program can be good, but it only looks at one chip. From the EDA point of view, it looks at one chip. When you test it on a tester, it looks at one chip. No one can do a perfect job without looking at the entire arena and assessing what's going on. Sometimes there is a problem with the tester. Sometimes there is a fault with the prober, the probe card or the test program. Why is this becoming important? For example, you need to qualify a multi-chip package. You have multiple sources of test schemes, as well as probe cards, EDA, testers and so on. Then you need to decide if it's good or bad. There are lots of issues, and they continue to grow. It has nothing to do with geometries. It has to do with how to take multiple things and put them together. This has given rise to a whole new attempt to provide answers that are not there today.
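One way to picture the point is a per-die decision that folds in results from several insertions and equipment sources rather than a single test program. The merge rule and field names below are hypothetical, purely to illustrate the idea of cross-checking data from multiple steps.

```python
# Hypothetical sketch: combine per-die results from multiple insertions
# (wafer sort, final test) and flag dice whose verdicts disagree, which can
# point at the tester, prober, or probe card rather than the silicon itself.

def classify(results):
    """results: dict of insertion name -> 'pass' or 'fail'."""
    verdicts = set(results.values())
    if verdicts == {"pass"}:
        return "good"
    if verdicts == {"fail"}:
        return "bad"
    return "review"   # conflicting data: suspect equipment or test program

history = {
    "die_001": {"wafer_sort": "pass", "final_test": "pass"},
    "die_002": {"wafer_sort": "fail", "final_test": "fail"},
    "die_003": {"wafer_sort": "pass", "final_test": "fail"},  # escape or setup issue?
}
for die, res in history.items():
    print(die, "->", classify(res))
```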

SE: How has test changed over the years and where is it going in the future?

Pateras: One thing is that test has gone from something you had to force the designer to do, to something that is part of the job. DFT is a requirement now. It's no longer debated. Without it, you don't have a manufacturable or an economically viable part. But beyond that, it's morphing into additional value add. For example, test data now can be used to analyze and improve yield. You see test being used not only for manufacturing, but for systems test and reliability. And in the automotive arena, for example, you absolutely need on-chip circuitry for monitoring, reliability improvement and redundancy. In addition, if you look into the future with the Internet of Things, reliability becomes key. The ability to diagnose and gather data also requires on-chip IP and monitoring capabilities. These are things we will see from a manufacturing test perspective.

Slessor: It seems to me that test has evolved in response to one metric that our customers continue to hammer on, and that's how many pennies it costs to test a die. Certainly, companies have done well in leading the charge, where they invest in some R&D to enable higher degrees of multi-chip test, faster speeds or some other aspect. That supplier innovation allows the customer to reduce the cost of test, and you get compensated for it. But it seems to me that all of the different moves in the industry, whether they are innovation or structural moves from a company consolidation perspective, have all been driven by the need to drive down how many pennies it takes to test a die.

Armstrong: I am proud of what the ATE industry has done. If you look at the evolution of ATE today, the level of performance that's put into these systems is so much more than anyone could have envisioned back then. Looking forward, to get to very low costs, I see a movement toward higher site-count testers. But that's only useful for the high-volume areas like consumer parts. For the low-volume products, we still need to do a lot of refining on how to handle them. On the other hand, we've done an incredibly good job of compressing patterns, focusing the patterns on the most probable faults and, in some cases, deciding where we don't need to test at all. Still, the complexities of the chips are going to keep growing. I am pretty bullish that we have not topped out as an industry. The challenge is how are we going to use those transistors? Then, if we have all of those transistors, how do we best test them? One trend is redundancy and repair. If you have adequate redundancy and repair, I can envision a place where test will no longer be needed. For now, however, you can't do without test. You still need yield feedback and other technologies.
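Redundancy and repair is most familiar from embedded memories, where spare rows or columns are swapped in for failing ones. The sketch below is a minimal, assumption-laden model of spare-row allocation, not any vendor's repair algorithm.

```python
# Minimal sketch of memory repair via spare rows: each failing row consumes
# one spare; the array is repairable only if failures fit the spare budget.

def allocate_spare_rows(failing_rows, spare_rows):
    """Return a row -> spare remap if repairable, else None."""
    if len(failing_rows) > spare_rows:
        return None                       # not enough redundancy: scrap the die
    return {row: f"spare_{i}" for i, row in enumerate(sorted(failing_rows))}

print(allocate_spare_rows({17, 503}, spare_rows=4))         # repairable
print(allocate_spare_rows({1, 2, 3, 4, 5}, spare_rows=4))   # None: unrepairable
```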

Ruiz: There is definitely a trend of re-using portions of the DFT logic for more than just test. One example is taking diagnostic information out of the field to improve yield. Another trend is re-using that circuitry to help with functional debug. In addition, a couple of years ago, customers were not interested in the IEEE 1500 standard, which introduces additional gates on the design to improve the testability around the cores. But it seems those concerns are going away. So we will continue to see this trend, where test is considered important enough to actually put more and more test IP, or DFT, onto the design. That will put more pressure on the design flows and technologies, such as synthesis, to be more interactive with test. We will see those types of trends in the technology going forward.
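For readers unfamiliar with IEEE 1500, it wraps each core in boundary cells that can isolate the core and apply or capture test data independent of the surrounding logic. The sketch below is a heavily simplified behavioral model of a wrapper boundary register, assuming only serial shift and a capture step; real 1500 hardware also includes an instruction register, bypass, and several cell types.

```python
# Simplified behavioral model of an IEEE 1500-style wrapper boundary
# register: in test mode, serial data shifts through boundary cells and is
# applied to the core, isolating it from surrounding logic. Real wrappers
# also have a WIR (instruction register), bypass path, and more cell types.

class WrapperBoundaryRegister:
    def __init__(self, n_cells):
        self.cells = [0] * n_cells

    def shift(self, bits):
        """Shift a serial bit stream through the boundary cells (WSI -> WSO)."""
        out = []
        for b in bits:
            out.append(self.cells[-1])          # bit leaving on WSO
            self.cells = [b] + self.cells[:-1]  # stream enters on WSI
        return out

    def apply_to_core(self, core_fn):
        """Drive the core's inputs from the cells and capture its outputs."""
        self.cells = core_fn(self.cells)

wbr = WrapperBoundaryRegister(4)
wbr.shift([1, 0, 1, 1])                          # load a test stimulus
wbr.apply_to_core(lambda v: [b ^ 1 for b in v])  # toy core: invert each bit
print(wbr.cells)                                 # captured response: [0, 0, 1, 0]
```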

SE: What about test costs?

Ruiz: There are well-known techniques regarding the cost of test for specific parts, namely the digital part. However, the trend is to integrate more analog components on a design. And from our customers' feedback, that's where the bulk of the test cost is coming from. It's the amount of time it takes to test analog. It's the nature of analog. That will be a concern, especially for design teams that in the past were not looking at mixed-signal designs.

Armstrong: If you were to take a technology node and look at its cost of test over time, the cost of test has continued to go down. The challenge is that everybody looks at the leading-edge technology node, which brings with it leading-edge prices. Clearly, the cost of test is also driven by transistors. On the other hand, we've predicted that scan chain test is going to double in the next three years. And that's a significant factor for test times in general. Obviously, there are some things the EDA companies are doing to help with that. The bottom line, in my opinion, is that the cost of test will continue to track and will come down in terms of test cost per transistor. Going forward, we will need to work together on how to handle those IP blocks, such as testing them in parallel.
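The impact of scan test doubling can be made concrete with the basic scan-time relation: shift time scales with pattern count times chain length, divided by the number of parallel chains and the shift clock. The figures below are illustrative assumptions, not panel data.

```python
# Back-of-the-envelope scan shift time: patterns * (chain_length / chains)
# shift cycles at the given shift frequency. All figures are assumptions.

def scan_time_s(patterns, chain_length, shift_freq_hz, chains):
    """Approximate shift time with `chains` balanced parallel scan chains."""
    cycles = patterns * (chain_length / chains)
    return cycles / shift_freq_hz

base = scan_time_s(patterns=20_000, chain_length=2_000_000,
                   shift_freq_hz=50e6, chains=400)
doubled = scan_time_s(patterns=20_000, chain_length=4_000_000,
                      shift_freq_hz=50e6, chains=400)
print(f"today: {base:.1f} s  ->  2x scan cells: {doubled:.1f} s")
```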

Pateras: Fifteen years ago, the cost stopped tracking. The cost to test a transistor was not decreasing as quickly as the cost of manufacturing a transistor. Then we got into a whole new paradigm of compression and on-chip DFT, which brought the curve back down. So the cost-per-transistor curve is dropping exponentially again. We will continue to see that for a couple more generations. It does require a focus on new techniques like compression, on-chip DFT and more parallelism. The other factor we are concerned about is time to market. Design windows are very short. The effort to generate tests, in some cases, is not tracking. So one of our focuses now is to work on minimizing the time to generate and verify tests. That is something that is not decreasing exponentially per transistor. There is a lot of innovation required there in terms of how to parallelize that effort and how you re-use existing tests.
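Compression works because ATPG patterns are dominated by don't-care bits, so an on-chip decompressor can expand a small tester payload into full scan data. The toy illustration below encodes only the specified bits to show where the leverage comes from; commercial schemes, such as LFSR-based embedded deterministic test, are far more sophisticated and achieve much higher ratios.

```python
# Toy illustration of why scan compression works: ATPG patterns are mostly
# don't-care ('X') bits, so storing only the care bits (position, value)
# shrinks tester data. Real schemes use on-chip decompressors, not this
# literal encoding, and reach far higher compression ratios.

import random

random.seed(0)
CHAIN_BITS = 10_000
CARE_FRACTION = 0.02     # typical patterns specify only a few percent of bits

pattern = ["X"] * CHAIN_BITS
for pos in random.sample(range(CHAIN_BITS), int(CHAIN_BITS * CARE_FRACTION)):
    pattern[pos] = random.choice("01")

care_bits = [(i, b) for i, b in enumerate(pattern) if b != "X"]
flat_bits = CHAIN_BITS                      # uncompressed: one bit per cell
encoded_bits = len(care_bits) * (14 + 1)    # ~14 bits position + 1 bit value
print(f"flat: {flat_bits} bits, care-bit encoding: {encoded_bits} bits, "
      f"ratio: {flat_bits / encoded_bits:.1f}x")
```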



