Pressure is mounting to reduce test costs, while the automotive industry is demanding that circuits be better able to test themselves. Could this unsettle existing design-for-test solutions?
Mention test, and Design for Test (DFT) and scan chains come to mind. But there is much more to it than that, and the rules of the game are changing.
New application areas such as automotive may breathe new life into built-in self-test (BIST) solutions, which could also be used for manufacturing test. So could DFT as we know it be a thing of the past? Or will it continue to have a role to play?
Test is one aspect of design that engineers wish they didn’t have to worry about. “It means they have to add circuitry that has nothing to do with the functional spec,” says Joe Sawicki, VP and general manager of the Design-to-Silicon Division of Mentor Graphics. “They have to deal with the congestion issues. They have to generate the test patterns for the tester. They have to do a lot of work.”
However, without the data that comes back from the tester, there is no way to control fabrication quality or to perform the diagnostics that can affect yield.
It all comes down to economics. “There is a huge increase in the throughput that is required for test per unit of time,” says Rob Knoth, product management director at Cadence. “Fewer test contacts, huge downward pressure on time, and huge pressure in reducing the data volume—that all adds up to make test a huge concern. If there is any inefficiency in your process, be it with compression ratios or pattern count, it multiplies itself and it socks you right in the bottom line. This cannot be hidden by clever design somewhere else. It directly affects profit and loss.”
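To see how quickly those multipliers compound, a back-of-the-envelope model helps. Every number below (pattern count, flop count, compression ratio, shift frequency, tester cost) is an invented illustration, not data from any of the companies quoted.

```python
# Illustrative back-of-the-envelope scan test cost model (all numbers assumed).
patterns      = 20_000        # ATPG pattern count
scan_flops    = 2_000_000     # scan flip-flops in the design
compression   = 100           # effective scan compression ratio
shift_mhz     = 100           # scan shift clock frequency
tester_cost_s = 0.03          # assumed tester cost in dollars per second per site

shift_cycles_per_pattern = scan_flops / compression        # roughly the longest internal chain
test_seconds = patterns * shift_cycles_per_pattern / (shift_mhz * 1e6)
test_cost    = test_seconds * tester_cost_s

print(f"test time per die: {test_seconds:.2f} s, cost: ${test_cost:.4f}")

# Halving the compression ratio (or doubling pattern count) scales the cost
# linearly, so any inefficiency multiplies straight into the bottom line.
```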
That equation is directly affected by complexity, as well. “The abstraction of test cases has to go up,” says George Zafiropoulos, vice president of solutions marketing at National Instruments. “There is so much deeply buried state space because so many processes have to create test at the software level, not just at the pins. In the old days, you could do vector-level test. Now, you need to run more sophisticated software on the processor. The I/O is highly serialized. And rather than just running a scan chain, you have more functional test coming in through a serial interface, whether that’s low-level or RF or even video data. The physical interface is changing.”
DFT was viewed initially as a point tool, but increasingly it is integrated into the flow. “We need to close the loop to provide the tools with access to high-volume manufacturing data,” says David Park, vice president of worldwide marketing at Optimal Plus. “Rather than the ad-hoc information that comes back from the fab or the test engineers today, we need full-time data to be available from high-volume manufacturing. Both foundries and IDMs can bring data back all the way to the EDA tools to feed data into the DFT environment. There is exponential growth in the amount of data being collected. This is because they want to do more than just pass/fail—anything that can help with quality and yield improvements. Manufacturing data is increasingly valuable.”
DFT is thus a delicate balance between cost, quality and yield, and it requires continuous learning through diagnostics to improve all three.
Anne Gattiker, principal research staff member at IBM, adds another aspect to the list: reliability. “Chips are operating in increasingly demanding scenarios, using lower voltages and higher temperatures, and doing more work more often. We need to be able to detect subtler defects, and this includes reliability. To detect these subtler defects, we will need to increase the amount of analog measurements.”
Everyone has to work hard to ensure cost and quality are controlled. “We need good fault models so that we can be sure we catch the defects,” says Sawicki. “We need to ensure we are doing things that lower the amount of time on the tester, so that you don’t spend more on test than on the die itself, but in addition there are new things in the area of diagnostics that can be more effective.”
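The single stuck-at model Sawicki refers to is simple enough to sketch in a few lines: force one node to 0 or 1, re-evaluate the logic, and count the fault as detected if any pattern exposes a difference at an output. The toy netlist and patterns below are invented for illustration.

```python
# Toy single stuck-at fault grading on a two-level cone: z = (a & b) | c
def evaluate(a, b, c, stuck=None):
    """Evaluate the cone; 'stuck' optionally forces one net to a fixed value."""
    nets = {"a": a, "b": b, "c": c}
    def val(name):
        return stuck[1] if stuck and stuck[0] == name else nets[name]
    nets["n1"] = val("a") & val("b")
    nets["z"] = val("n1") | val("c")
    return nets["z"]

faults = [(net, v) for net in ("a", "b", "c", "n1") for v in (0, 1)]
patterns = [(1, 1, 0), (0, 1, 0), (1, 0, 0), (0, 0, 1)]   # assumed ATPG patterns

detected = set()
for pat in patterns:
    good = evaluate(*pat)                       # fault-free reference response
    for f in faults:
        if evaluate(*pat, stuck=f) != good:     # fault visible at the output?
            detected.add(f)

print(f"stuck-at coverage: {len(detected)}/{len(faults)} = {len(detected)/len(faults):.0%}")
```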
It was increasing test times and costs that fundamentally changed the test philosophy a couple of decades ago. Until the advent of scan chains, most testing used functional vectors, and the length of those tests grew directly with sequential depth. Adding scan chains essentially turned a sequential problem into a combinational one.
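Conceptually, scan replaces "drive the design through long functional sequences" with "shift in a state, apply one capture clock, shift out the response." A minimal behavioural sketch, with the register width and the toy next-state function invented:

```python
# Behavioural sketch of scan shift/capture on a 4-bit toy state machine.
def next_state(state):
    """Toy combinational next-state logic (stands in for the real logic cone)."""
    return [state[3] ^ state[0], state[0], state[1] & state[2], state[2]]

def scan_test(pattern):
    """Shift a pattern into the chain, pulse one capture clock, return the capture."""
    chain = [0, 0, 0, 0]
    for bit in pattern:                 # shift mode: the chain acts as a shift register
        chain = [bit] + chain[:-1]
    chain = next_state(chain)           # capture mode: one functional clock
    return chain                        # shifted out while the next pattern shifts in

# Any internal state is now controllable and observable in len(chain) cycles,
# instead of needing a long functional sequence to reach it.
print(scan_test([1, 0, 1, 1]))
```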
FinFETs add complications
DFT is one of the areas of design that, thankfully, does not inherently get more complicated with each new node. “Each node is not really adding new requirements beyond the increasing complexity,” says Steve Pateras, product marketing director at Mentor. “FinFETs are the one caveat, because they increased concerns about testing those transistors more thoroughly. That was a slight discontinuity.”
Others agree. “It is a matter of looking at what is new in a process node,” says Robert Ruiz, senior director of marketing at Synopsys. “What types of new faults are going to be manifested? With finFETs it is very clear – it is the fin that is new. We did some analysis, and a lot of the defects manifested in the fin will result in the device being slowed down. Thus we have seen a rise in interest for at-speed test. This is a transition-fault test that is guided toward the longer paths through the design. It is often referred to as a slack-based transition test.”
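A simplified illustration of that slack-based idea: rank candidate paths by timing slack and aim the transition-fault patterns at the longest, least-slack paths, where a fin-induced slowdown is most likely to become a failure. Path names, delays and the clock period below are invented.

```python
# Illustrative slack-based path selection for transition-fault test.
clock_period_ns = 1.0

candidate_paths = {            # path -> nominal delay in ns (as if reported by STA)
    "regA -> alu -> regB": 0.93,
    "regC -> mux -> regD": 0.55,
    "regB -> shifter -> regE": 0.88,
    "regD -> decoder -> regF": 0.40,
}

# Slack = clock period minus path delay; small slack means a small finFET-induced
# slowdown can push the path past the cycle boundary.
ranked = sorted(candidate_paths.items(), key=lambda kv: clock_period_ns - kv[1])

budget = 2                     # how many paths we can afford to target with patterns
for path, delay in ranked[:budget]:
    print(f"target {path}: slack = {clock_period_ns - delay:.2f} ns")
```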
This required some new fault models. “A device with delays caused by variations in the fins of finFETs may still be functional, but it may not work as expected,” says Cadence’s Knoth. “It requires the integration of several tools to be able to analyze many of the issues. From a diagnostics point of view it is very compelling, because you have to be creating fault models of the cells, applying those, getting good correlation to what happens on the tester, and then diving back into the layout and into the cells to say what actually happened.”
What is changing with each new node is more transistors and added complexity. “There will be new challenges coming from new materials and new cells, but as you go to a new node the size and complexity of the design itself is going to increase, and that will put pressure on test times,” Knoth adds. “At a lower process node you may attempt to increase the compression ratio, but that puts huge pressure on congestion and wire lengths. How can a DFT solution handle the physical aspects of compression and not just the logical aspects? With each new node it gets harder.”
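The physical side of that pressure shows up even in simple arithmetic: raising the compression ratio means more, shorter internal chains, which cuts shift cycles per pattern but multiplies the chain endpoints the decompressor and compactor must route to across the die. The figures below are illustrative assumptions.

```python
# Illustrative trade-off: compression ratio vs. shift cycles vs. routing endpoints.
scan_flops  = 2_000_000
tester_pins = 8               # scan-in/scan-out pin pairs available at the tester

for compression in (50, 100, 200, 400):
    internal_chains = tester_pins * compression       # chains fed by the decompressor
    chain_length    = scan_flops // internal_chains   # shift cycles per pattern
    print(f"compression {compression:>3}x: {internal_chains:>5} chains, "
          f"{chain_length:>5} shift cycles/pattern")

# Shift time drops with compression, but every extra chain is another route to and
# from the codec. That routing is the congestion cost that must be managed physically.
```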
Power is the limiter
Given unlimited power available on chip, the problem would be simpler, but there are tradeoffs. “This is a zero-sum game,” says Pateras. “Time against power management. You want to do as much as you can as quickly as you can, and so parallelization of test is something that you want to increase to manage test costs. But it has to be managed against power.”
And that is where difficulties start. “Toggle activity is inherently higher during test because the whole point of test is to do it as quickly as possible and as broadly as possible,” continues Pateras. “So you want to toggle as much activity as you can, and this is counter to what you want for low-power design.”
Cost enters into this picture, as well. “You cannot just build a beefier product so that test can be better because that is wasting money,” points out Knoth. “There has to be awareness of the physical aspects of DFT to make sure that we are intelligently partitioning the scan lines, that we are doing placement-aware scheduling so that not everything in one corner of the die is being tested at the same time, instead spreading it out, and doing more intelligent clock gating to manage ATPG power.”
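A toy version of the placement-aware scheduling Knoth describes might greedily pack block tests into parallel sessions under a power budget while keeping two blocks from the same die region out of the same session. Block names, power numbers and regions are invented.

```python
# Greedy placement- and power-aware test scheduling sketch (all figures invented).
blocks = [  # (name, test power in mW, die region)
    ("cpu0", 300, "NW"), ("cpu1", 300, "NW"),
    ("gpu",  450, "SE"), ("dsp",  200, "SW"),
    ("modem", 250, "NE"), ("isp", 150, "SE"),
]
POWER_BUDGET = 700   # mW allowed to toggle simultaneously

sessions = []
for name, power, region in sorted(blocks, key=lambda b: -b[1]):
    for s in sessions:
        used_power = sum(p for _, p, _ in s)
        used_regions = {r for _, _, r in s}
        # Fit the block only if the session stays under budget and in a new region.
        if used_power + power <= POWER_BUDGET and region not in used_regions:
            s.append((name, power, region))
            break
    else:
        sessions.append([(name, power, region)])   # open a new parallel session

for i, s in enumerate(sessions, 1):
    print(f"session {i}: {[n for n, _, _ in s]}  ({sum(p for _, p, _ in s)} mW)")
```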
Low power design adds another set of challenges. “Tools have long been able to handle multi-voltage designs,” says Ruiz. “As you implement DFT, the tool has to be aware of power islands, voltage islands, know about level shifters, etc. In terms of optimization, the tool should minimize the crossing between voltage domains so that the scan chains do not have to span those boundaries. Retention cells have to be dealt with and tested. But most of that is taken care of automatically in the synthesis tools.”
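The domain-aware stitching Ruiz mentions can be pictured as grouping scan flops by power domain before chains are built, so that no chain crosses a voltage boundary and forces level shifters or isolation onto the scan path. The flop names and domain assignments below are invented.

```python
from collections import defaultdict

# Sketch: stitch scan chains per voltage domain so chains never cross a boundary.
scan_flops = [
    ("u_cpu/r0", "VDD_CPU"), ("u_cpu/r1", "VDD_CPU"), ("u_cpu/r2", "VDD_CPU"),
    ("u_per/r0", "VDD_PER"), ("u_per/r1", "VDD_PER"),
    ("u_aon/r0", "VDD_AON"),
]
MAX_CHAIN_LEN = 2

by_domain = defaultdict(list)
for flop, domain in scan_flops:
    by_domain[domain].append(flop)          # group flops by their power domain

chains = []
for domain, flops in by_domain.items():
    for i in range(0, len(flops), MAX_CHAIN_LEN):
        chains.append((domain, flops[i:i + MAX_CHAIN_LEN]))

for domain, chain in chains:
    print(f"{domain}: {' -> '.join(chain)}")
```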
Automotive
Mentor’s Sawicki said design engineers dislike test because it has nothing to do with the functional spec. But that is changing for some markets, he notes. “In automotive applications, test becomes part of power-on self-test, which is required by ISO 26262, especially within safety-critical applications. Reliability monitoring is another area for extensibility.”
Anyone designing a part for automotive has seen test costs grow. “The ISO 26262 standard is mandating higher quality and reliability for these parts, and so people have to reassess the amount of test and the amount of DFT they are introducing into the parts,” says Pateras.
Sawicki suggests that logic BIST (LBIST) can satisfy some of the needs of ISO 26262, but it has some problems. “Fault-tolerant design was conceived many years ago, and we do need to get closer to that. Even with redundant systems that are self-repairing, we still need yield so that we can have fewer wafers. People are trying to figure out ways to do this.”
While ISO 26262 does not mandate quality today, many see that as inevitable in the future. “ISO 26262 is about functional safety,” says Ruiz, “but it is typical to have automotive companies looking for quality levels at less than 1 defect per billion parts. So certain segments are adding more patterns to improve quality.”
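The arithmetic behind adding patterns is captured by the widely used Williams-Brown approximation, which relates defect level to process yield and fault coverage as DL = 1 - Y^(1-T). The yield and coverage figures below are purely illustrative.

```python
# Williams-Brown approximation: defect level DL = 1 - Y**(1 - T)
# where Y is process yield and T is fault coverage (both as fractions).
yield_Y = 0.90                       # assumed process yield

for coverage in (0.95, 0.99, 0.999, 0.9999):
    dl = 1 - yield_Y ** (1 - coverage)
    print(f"coverage {coverage:.2%}: ~{dl * 1e6:,.0f} DPPM escape rate")

# Even 99.99% coverage at an assumed 90% yield still leaves roughly 10 DPPM escaping,
# a long way from parts per billion, which is why pattern counts keep growing.
```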
“Automotive companies are talking directly to semiconductor providers, because quality starts at the semiconductors,” says Park. “They want to be able to feed data back to them so that the robustness of the final systems can be assured.”
While standards such as ISO 26262 may talk about defect rates, they do not prescribe the tools or methods to be used. There are automotive standards that relate to digital test, and there could be analog ones in the future.
Analog
“Mixed-signal is the white elephant in the room,” says Pateras. “We started with digital test, and mixed-signal has been relegated to the sides as if it were black magic. More designs are becoming mixed-signal, and if you look at cost, the test cost of mixed-signal is becoming a bigger proportion of the overall test cost. It is often more than half. In automotive, tier 1 suppliers are mandating fault metrics for the mixed-signal parts of the chips. These are new problems, and there is no real automation in existence to help with this. There are no fault coverage metrics or pattern generation for mixed-signal.”
Knoth agrees: “The bulk of innovation in ATPG has been on the digital side. It is a more tractable problem. Analog is ripe for a lot more innovation. Traditionally, analog testing uses more heavy-handed ways to solve this. The criticality of the mixed-signal parts is rising given the increasing numbers of sensors, more control logic, etc. The need for analog test is increasing.”
There is no equivalent of scan for analog. “For standard mixed-signal IP such as a PHY, BIST is the standard technique,” says Ruiz. “It is usual to build this capability into the IP.” He also sees a need for more general solutions in the future. “Early progress for analog test is in the area of measuring the effectiveness of functional test for analog. Performing analog simulation, and being able to do fault injection into the analog portions of the design so that changes in the stimulus/response can be seen, is coming.”
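A simplified picture of the fault-injection flow Ruiz describes: perturb one component in a behavioural model, rerun the functional measurement, and count the fault as detected if the result lands outside the test limits. The filter model, component values, fault list and limits are all invented.

```python
import math

# Behavioural model of a first-order RC low-pass filter: measure its -3 dB corner.
def corner_freq_hz(r_ohm, c_farad):
    return 1.0 / (2 * math.pi * r_ohm * c_farad)

NOMINAL_R, NOMINAL_C = 10e3, 1.59e-9          # ~10 kHz corner (invented values)
TEST_LIMITS = (9_000.0, 11_000.0)             # production test limits in Hz

# Parametric "faults": one component shifted at a time.
faults = {
    "R +50%": (NOMINAL_R * 1.5, NOMINAL_C),
    "R -50%": (NOMINAL_R * 0.5, NOMINAL_C),
    "C +20%": (NOMINAL_R, NOMINAL_C * 1.2),
    "C +5%":  (NOMINAL_R, NOMINAL_C * 1.05),  # subtle fault: may escape this test
}

detected = 0
for name, (r, c) in faults.items():
    f = corner_freq_hz(r, c)
    caught = not (TEST_LIMITS[0] <= f <= TEST_LIMITS[1])
    detected += caught
    print(f"{name}: corner {f:,.0f} Hz -> {'detected' if caught else 'escapes'}")

print(f"analog fault coverage of this measurement: {detected}/{len(faults)}")
```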
At least some of this can be done using components already on the chip, though, according to NI’s Zafiropoulos. “If you’re dealing with complex power distribution and multiple voltages, how do you know they all come up at the right time? You can add ADCs and monitor the voltage rails. But if those resources already exist on the die, they can be repurposed. So if you have an ADC already there, you can sample parts of the chip.”
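In very schematic form, repurposing an on-die ADC as Zafiropoulos suggests might amount to sampling each rail during power-up and checking that it settles within a deadline and in the right order. The rail names, targets and sample data below are invented.

```python
# Sketch: check power-rail ramp order and settling from (invented) ADC samples.
# samples[rail] = list of (time_us, volts) captured by a repurposed on-die ADC.
samples = {
    "VDD_CORE": [(0, 0.0), (50, 0.4), (100, 0.78), (150, 0.80)],
    "VDD_IO":   [(0, 0.0), (100, 0.9), (200, 1.75), (250, 1.80)],
}
targets = {"VDD_CORE": (0.80, 200), "VDD_IO": (1.80, 300)}   # (volts, deadline_us)
required_order = ["VDD_CORE", "VDD_IO"]                      # core must come up first

def time_reached(trace, volts, tol=0.02):
    """Return the first sample time at which the rail is within 'tol' of target."""
    for t, v in trace:
        if v >= volts - tol:
            return t
    return None

ramp_times = {}
for rail, (volts, deadline) in targets.items():
    t = time_reached(samples[rail], volts)
    ok = t is not None and t <= deadline
    ramp_times[rail] = t
    print(f"{rail}: reached {volts} V at {t} us -> {'ok' if ok else 'FAIL'}")

order_ok = ramp_times[required_order[0]] < ramp_times[required_order[1]]
print("sequencing:", "ok" if order_ok else "FAIL")
```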
BIST
The combination of automotive and analog is causing the industry to take a closer look at built-in self-test. Jeff Rearick, senior fellow at Advanced Micro Devices, believes this is the right path for the future. “DFT is expensive in terms of area, it is expensive in terms of test time and it has undesirable physics. What is the alternative to structured DFT? D is for Design. It is not for EDA. EDA wants to insert scan chains.”
Instead, Rearick contends that the solution lies in design. “If you gave the designer a challenge and said that you wanted a design capable of testing itself, they would find ways to do that. They have lots of transistors and lots of processors. They should be thinking more like memory BIST, which looks at operations, and not logic BIST, which just uses scan chains.”
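The "operations, not scan chains" distinction is easiest to see in how memory BIST works: a small engine marches through the address space with a fixed sequence of writes and read-verifies. A minimal MATS+-style sketch against a software memory model, with the array size and injected fault invented:

```python
# Minimal march-test sketch in the spirit of memory BIST (MATS+):
#   any order (w0); ascending (r0, w1); descending (r1, w0)
SIZE = 16
memory = [0] * SIZE
memory_fault_at = 5          # invented stuck-at-0 cell to demonstrate a detection

def write(addr, val):
    memory[addr] = 0 if addr == memory_fault_at else val   # model the defect

def read(addr):
    return memory[addr]

def march_test():
    for a in range(SIZE):                    # (w0): initialize all cells to 0
        write(a, 0)
    for a in range(SIZE):                    # ascending (r0, w1)
        if read(a) != 0:
            return f"fail at {a}"
        write(a, 1)
    for a in reversed(range(SIZE)):          # descending (r1, w0)
        if read(a) != 1:
            return f"fail at {a}"
        write(a, 0)
    return "pass"

print(march_test())   # -> fail at 5
```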
Again it comes back to the economics. “If you can show that what you are doing will save money – you win,” continues Rearick. “If you get a 1% increase in yield for a 1% or 5% increase in chip area, you may get a payback. Of course, if you increase the space too much…”
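Rearick's payback argument is easy to put into numbers. The sketch below compares good dice per wafer before and after adding test area; the wafer size, die area, yield figures and percentages are all invented, and the gross-die estimate is deliberately crude.

```python
import math

# Back-of-the-envelope: does +X% die area for +Y% yield pay back? (numbers invented)
WAFER_DIAMETER_MM = 300
BASE_DIE_AREA_MM2 = 80.0
BASE_YIELD = 0.85

def good_dice(die_area_mm2, yield_frac):
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    gross = int(wafer_area / die_area_mm2)        # crude gross-die estimate
    return int(gross * yield_frac)

baseline = good_dice(BASE_DIE_AREA_MM2, BASE_YIELD)
for area_up, yield_up in [(0.01, 0.01), (0.05, 0.01)]:
    candidate = good_dice(BASE_DIE_AREA_MM2 * (1 + area_up), BASE_YIELD + yield_up)
    delta = candidate - baseline
    print(f"+{area_up:.0%} area, +{yield_up:.0%} yield: "
          f"{candidate} vs {baseline} good dice ({delta:+d})")

# With these assumptions, 1% area for 1% yield barely breaks even; 5% area does not.
```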
But EDA has yet to be convinced. “For efficiency reasons and cost reduction you will always want to use some off-chip resources for test when you can,” says Pateras. “LBIST will improve over time, but you will always have better efficiency using off-chip resources.”
Ruiz points out that LBIST has long been touted as a panacea for test. “The reality is that there are multiple issues, and there has been no commercial success. There is interest in self-test, not for manufacturing test, but within automotive, where they have safety requirements. Logic BIST is orthogonal to manufacturing test. There are some people who will run LBIST during manufacturing test, but it is not the primary aspect of test.”
The equation may change in the future. “LBIST was a clever idea, but it requires a certain level of sophistication and a certain overhead,” says Knoth. “Most people just did not see the value.” However, he is inspired when he hears people like Elon Musk lay out their grand plans. “Musk talks about the pervasive use of autonomous driving, which is much safer than human piloting. He sees it as an ethical imperative to get autonomous driving out there. It is a transformational force for humanity, and it dovetails very much with things such as LBIST.”
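For context, what LBIST actually adds on chip is a pseudo-random pattern generator (an LFSR) feeding the existing scan chains and a response compactor (a MISR) producing a signature, so a part can grade itself at power-on without a tester. In the heavily simplified sketch below, the polynomial taps, widths and stand-in logic are all invented.

```python
# Minimal LBIST-style flow: LFSR-generated patterns, a stand-in circuit under test,
# and a MISR signature compared against a precomputed golden value.
def lfsr(state, taps=(7, 5, 4, 3), width=8):
    """One step of a Fibonacci LFSR (taps and width chosen arbitrarily)."""
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    return ((state << 1) | fb) & ((1 << width) - 1)

def misr(sig, response, width=8):
    """Fold one captured response word into the signature register."""
    return (lfsr(sig) ^ response) & ((1 << width) - 1)

def circuit_under_test(pattern):
    """Stand-in for the scanned logic's captured response."""
    return (pattern ^ (pattern >> 1)) & 0xFF

def run_lbist(num_patterns=64, seed=0x5A):
    state, sig = seed, 0
    for _ in range(num_patterns):
        state = lfsr(state)                       # generate a pseudo-random pattern
        sig = misr(sig, circuit_under_test(state))  # compact the response
    return sig

GOLDEN = run_lbist()                      # in practice computed once, pre-silicon
print("pass" if run_lbist() == GOLDEN else "fail")
```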
An added complication for many automotive and IoT devices is that they need to be cheap, and in many cases they are severely pin-constrained. That cheapness also limits how much area can be added for test, which in turn points to a back-to-the-future approach in which functional test may see some resurgence.
Conclusion
Test, just like many aspects of the design flow today, is seeing a divergence in the demands being placed on it. Those who continue along the path of aggressive node scaling probably have no time to reassess the situation, but for those who plan to remain on older nodes, test may be one of the areas in which the economics change, or in which changing requirements force their hand.
In part two of this article, new DFT techniques will be examined. It will also look at new requirements that come from technologies such as 2.5D and 3D integration.
Related Stories
Tech Talk: ISO 26262
What can go wrong in designing to this automotive standard.
Are Chips Getting More Reliable?
Maybe, but metrics are murky for new designs and new technology, and there are more unknowns than ever.
Power Management Heats Up
Thermal effects are now a critical part of design, but how to deal with them isn’t always obvious or straightforward.