Experts at the table, part 3: Analog/mixed-signal test is still only partly baked; what happens with stacked die; the return of wafer probe.
Semiconductor Engineering sat down to discuss current and future test challenges with Dave Armstrong, director of business development at Advantest; Steve Pateras, product marketing director for Silicon Test Solutions at Mentor Graphics; Robert Ruiz, senior product marketing manager at Synopsys; Mike Slessor, president of FormFactor; and Dan Glotter, chief executive of Optimal+.
SE: In our last discussion, we talked about testing SoCs. How do we test the new wave of analog/mixed-signal chips in the market?
Ruiz: There are challenges for designers regarding test for mixed-signal IP. Our philosophy has been to provide test along with the IP, so the test ships with the part itself. As for SoCs, they are more than just digital today; there is mixed-signal involved as well. For each of those SoCs, we have a solution. Regarding mixed-signal test, it’s something that is automated. Not only is the functionality complete, but designers can drop it into a design, load up the instructions and fire off the BIST engine. However, I would like to go back to an earlier statement about meeting schedules. You need to meet turnaround times within shrinking market windows, and that requires automation. So test needs to be provided with those parts. For parts that are custom, there are ways to help automate portions of that, particularly the digital portions.
Pateras: Analog and mixed-signal test have been challenging for many, many years. Analog test is where digital test was probably 25 to 30 years ago. It is like the Wild Wild West in many respects. It’s a much harder problem. Part of it is the lack of structured approaches to testing analog. It’s very much a hands-on, manual and creative kind of process. One of the fundamental things we don’t have is the ability to measure how well we are doing. Digital test was transformed 25 years ago, when we came up with the concept of fault models and fault simulation. That gave us a metric to tell us how well we were doing, and a way to understand how good or bad our functional patterns were. From there, we moved to more structural approaches. Now, we’re trying to develop the same thing for analog, which is analog fault simulation. How do we standardize on describing fault or defect effects on analog circuits? How do we measure them in some efficient way? At Mentor, we’ve developed an analog fault simulator and worked with many customers to determine how useful or efficient that is. We are also trying to get some standardization in place. Once we have that, we can start working on making those tests more efficient. We are starting to see some progress, but it’s early. Certainly, there are high-speed I/Os and PLLs, and we have solutions like BIST, which can test these things quite well. But if you look at more general mixed-signal components, such as DACs, ADCs and RF, then it gets trickier.
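As a rough illustration of the coverage metric Pateras refers to, digital fault coverage is simply the fraction of modeled faults that a pattern set detects. The sketch below uses made-up fault names and pattern results; it is not Mentor’s tooling, just the arithmetic behind the metric.

```python
# Minimal sketch of the fault-coverage metric used in digital test:
# coverage = detected faults / total modeled faults.
# Fault names and detection results here are illustrative only.

def fault_coverage(modeled_faults, detected_faults):
    """Return fault coverage as the fraction of modeled faults detected."""
    detected = set(detected_faults) & set(modeled_faults)
    return len(detected) / len(modeled_faults)

# Toy stuck-at fault list for a two-gate netlist (hypothetical).
modeled = {"U1.A:SA0", "U1.A:SA1", "U1.Y:SA0", "U1.Y:SA1",
           "U2.B:SA0", "U2.B:SA1"}

# Faults a given pattern set happened to detect (hypothetical).
detected = {"U1.A:SA0", "U1.Y:SA1", "U2.B:SA0", "U2.B:SA1"}

print(f"Stuck-at coverage: {fault_coverage(modeled, detected):.0%}")  # 67%
```

Analog fault simulation, as described above, is the attempt to build an equivalent metric once analog defect effects can be described and simulated in a standardized way.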
Armstrong: I agree that it’s largely the Wild Wild West. Analog test tends to be parametric in nature. I’ve seen analog continuing to migrate toward digital. The RF-to-bits architectures are new and are pushing us further toward digital, but you still need very good control over the analog sources and the instruments that measure them. Another trend here is system-level testing, which is filling in around the edges in many ways. People are looking to use it more, and unfortunately, there are no fault models that I know of for how system-level testing feeds into the rest of it. Everybody boots their parts before they ship them. Some people are doing it at wafer probe. The challenge is how we assimilate the system-level test aspects and perhaps scale back traditional scan test where those faults are already covered.
SE: Where are we in the stacked die or 2.5D/3D era? How do you test these chips?
Pateras: If you are familiar with Gartner’s technology adoption curve, there is the hype phase and there is the adoption phase. We’ve passed the hype part in 3D. It’s a technology that certainly will get adopted, because there is a lot of data out there and a lot of experience with this. We definitely see memory stacking, whether that’s a memory stack on a 2.5D interposer or, in some cases, a memory stack on a logic die. That test problem is well understood. Generally, we have solutions such as memory BIST, for example. It could even be a Wide I/O-based memory, but there are solutions in place to test stacked memories. When you talk about an interposer, I don’t think there is much of a problem. It can be tested and dealt with by traditional methodologies. But when you are stacking logic die with TSVs, you see more challenges. The biggest challenge is test access. From there, it’s a question of getting die from different vendors using different methodologies. How do you get test access? Standards are critical at that point. There is an IEEE working group, P1838, working on a standard way of accessing dies and TSVs in a stack. It’s going painfully slowly; they have been working on it for four years. But that’s going to be a necessity unless you do everything with one vendor, which could be the case in the short term. There are other factors that come into play, as well. Again, it comes back to the divide-and-conquer, or hierarchical, approach. If you stack multiple die, you want to test the die individually and then test them in the stack. This means you need to re-use those tests; you don’t want to re-generate a test for the stack. The third component is that wafer sort becomes much more critical. You don’t want to be stacking bad die, so the whole known-good-die issue comes back into play. You need to do more and more testing at the wafer level, and by that I mean mixed-signal and I/O. Things like high-end I/O become critical at wafer sort. We are already seeing that happening in terms of pushing more things to wafer sort. That will drive more embedded test and more on-chip DFT resources to get around the constraints that exist in probing at wafer sort.
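A back-of-the-envelope yield calculation shows why stacking bad die is so costly: the probability that a whole stack is good is the product of the per-die probabilities. The per-die numbers below are illustrative assumptions, not figures from the panel.

```python
# Illustrative yield arithmetic for stacked die (all numbers are assumptions).
# Any bad die that escapes wafer sort kills the entire assembled stack.

def stack_yield(die_yields):
    """Probability that every die in the stack is good."""
    y = 1.0
    for dy in die_yields:
        y *= dy
    return y

# Example: a logic die plus a four-high memory stack, each 99% "known good"
# after a thorough wafer sort, versus 95% with a lighter wafer-level screen.
print(f"Stack yield at 99% per die: {stack_yield([0.99] * 5):.1%}")  # ~95.1%
print(f"Stack yield at 95% per die: {stack_yield([0.95] * 5):.1%}")  # ~77.4%
```

Even a few percent of test escapes per die compounds quickly across a stack, which is the economic driver behind pushing more test content to wafer sort.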
Armstrong: I do think people are making good progress, and some are actually shipping products in this space. We need to go back to the fault models and see what happens to these designs, because you can have problems that propagate across the die in a stack. But the real challenge for all of us is to figure out where the test insertion points are. Chip-on-wafer is one approach, but there are others. The other issue comes down to binning, and it comes back to system-level test. That might allow me to know whether I have a bin one, two or three. I want to make sure I have the bin ones on the same interposer and ship that product. I don’t want a bin three in there and have to throw away some bin one parts. We simply can’t throw out an expensive ASIC because of a 50-cent DRAM. There are a lot of challenges here, and that’s why I believe wafer probe is seeing a resurgence.
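Armstrong’s bin-matching point can be sketched with toy numbers. The bin prices and bin distribution below are hypothetical; the only point is that a module sells at the speed of its slowest die, so mixing speed bins on one interposer wastes the premium of the fast die.

```python
# Hypothetical sketch of bin matching before stacking.
# Prices and bin mix are made-up assumptions, not industry data.
import random

BIN_VALUE = {1: 100.0, 2: 80.0, 3: 50.0}  # assumed sale price by speed bin

def module_value(bins):
    """A stacked module is limited by its slowest (highest-numbered) bin."""
    return BIN_VALUE[max(bins)]

random.seed(0)
lot = [random.choices([1, 2, 3], weights=[5, 3, 2])[0] for _ in range(1000)]

# Pair die at random versus sorting the lot so like bins are stacked together.
random_pairs = list(zip(lot[0::2], lot[1::2]))
matched_pairs = list(zip(sorted(lot)[0::2], sorted(lot)[1::2]))

print(sum(module_value(p) for p in random_pairs))   # typically lower total value
print(sum(module_value(p) for p in matched_pairs))  # typically higher total value
```

Knowing each die’s bin before assembly, which requires richer wafer-level and system-level test data, is what makes the matched assignment possible.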
Glotter: From our point of view, we don’t care if it’s 2.5D or 3D. Generally, we call them MCPs. By the way, 3D already exists in memory; one example is the Hybrid Memory Cube. There are other innovative companies developing MCPs, like Xilinx and others. To the point, this whole issue won’t be solved without a new concept called a clearinghouse. Let’s take Xilinx, for example. You are sourcing parts from multiple companies, such as memories, analog and SoCs. Today, there is only the notion of good bins. But when you put them in a stack, you may be taking one device from the edge of a wafer from one company, another device that has been tested at only one step, and a third device with some issues. If you stack them together, you may get a scrambled egg, and that scrambled egg may come back as an RMA. What we are doing for our customers is taking care of the databases of those RMAs. You see issues because it’s a scrambled egg. It could be a performance issue, or it could be a reliability issue. We are trying with our customers to create a consortium. It’s not exactly a consortium, but it’s a group that will say: ‘This is the clearinghouse. This is the clearinghouse methodology. And in this methodology, this is how one company, Toshiba, needs to speak with Intel, Qualcomm or Broadcom.’ It’s a whole new concept for how you deal with this MCP arena.
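One way to picture the clearinghouse idea is as a shared store of per-die test records, keyed by die ID, that every supplier in the stack contributes to and that can be queried when a module is assembled or when an RMA comes back. The record fields, supplier names and audit rule below are hypothetical, included only to illustrate the concept.

```python
# Hypothetical sketch of a "clearinghouse" of per-die test records.
# All field names, IDs, suppliers and rules are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DieRecord:
    die_id: str
    supplier: str
    wafer_edge: bool        # die came from the edge of the wafer
    tests_passed: list      # test insertions this die has been through

clearinghouse = {
    "LOGIC-0017": DieRecord("LOGIC-0017", "SupplierA", False,
                            ["scan", "memory_bist", "hsio"]),
    "DRAM-0042":  DieRecord("DRAM-0042", "SupplierB", True,
                            ["wafer_sort_lite"]),
}

def audit_module(die_ids):
    """Flag risky die combinations before (or after) they become an RMA."""
    for die_id in die_ids:
        rec = clearinghouse[die_id]
        if rec.wafer_edge or "scan" not in rec.tests_passed:
            print(f"{die_id} from {rec.supplier}: review test history")

audit_module(["LOGIC-0017", "DRAM-0042"])
```

The technical substance is the agreed methodology for sharing such records across companies; the data structure itself is simple once the parties agree on what to exchange.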
Ruiz: We’ve spoken to quite a few customers about all of the 3D-IC implementations. They are looking at various challenges, such as thermal analysis and routing technologies. The biggest challenges have been more on the design side, and the part of least interest is actually test. But test still needs to be done. For customers, it’s the least interesting part because they can use existing technologies. There are board-level test techniques that can be applied to 3D test. Standards like P1838 will help with productivity. But I think it’s really the methodology and the cost of test that are the biggest drivers or concerns for the implementation of 3D-ICs. Regarding implementing test for 3D-ICs, there are well-known techniques. Perhaps we can look at additional fault models for 3D-ICs.
Slessor: The challenge is not how to do it. It’s how to do it and meet the cost targets. There are tradeoffs. The push toward knowing more about the individual die that you will package together is certainly placing a bigger premium on wafer test. This includes the fidelity of wafer test, how clean your probe card is, and how many different vectors you are going to run to find different fault modes. This is key, especially early in a product life cycle, when you are still building up the learning on how a die is going to fail and where you will have yield problems. The industry has at least partially solved this problem with multi-chip modules. But figuring out how to distribute whatever the test budget is for this product across different parts that have different costs and complexities is primarily an economic consideration and much less a technical problem. It is figuring out how to solve these problems in a way where the part delivers the performance the end user wants at a cost point that enables a successful business. That’s what we are wrestling with. How much are you going to test known good die? And how much are you going to do known good partial stacks and all of the different test insertions you can envision? You can’t do them all; otherwise, you will lose money.
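Slessor’s test-budget tradeoff can be made concrete with a simple cost model. Every number below (die costs, test costs, escape rates) is an assumed value for illustration only; the point is that spending more at wafer sort raises the test cost per die but avoids throwing away an expensive package and a good companion die.

```python
# Minimal cost model for choosing test insertions (all numbers are assumptions).
# A bad die that escapes wafer sort scraps the whole assembled module.

def cost_per_good_module(die_cost, partner_cost, assembly_cost,
                         wafer_test_cost, escape_rate):
    """Expected cost of shipping one good two-die module."""
    module_cost = die_cost + partner_cost + assembly_cost + wafer_test_cost
    good_fraction = 1.0 - escape_rate   # modules not killed by an escaped bad die
    return module_cost / good_fraction

# Light wafer sort: cheap test, but 5% of bad die escape into assembly.
light = cost_per_good_module(20.0, 0.50, 3.0, wafer_test_cost=0.20,
                             escape_rate=0.05)
# Thorough wafer sort: pricier test, only 0.5% escapes.
thorough = cost_per_good_module(20.0, 0.50, 3.0, wafer_test_cost=1.00,
                                escape_rate=0.005)

print(f"Light wafer sort:    ${light:.2f} per good module")     # ~$24.95
print(f"Thorough wafer sort: ${thorough:.2f} per good module")  # ~$24.62
```

With different cost and yield assumptions the answer flips, which is exactly why the choice of test insertions is an economic decision rather than a purely technical one.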
To read part two of this roundtable, click here.
To read part one of this roundtable, click here.