Safety-critical markets add new challenges for testing methodology, which can affect functionality, reliability and yield.
With increasing focus on safety-critical semiconductors—driven by ADAS, IoT, and security—functional safety concerns are going through the roof. Engineering teams are scrambling to determine how to conduct better in-field or online testing, because test can no longer be an afterthought.
This has been a common theme across the automotive ecosystem for the past few years, and as the automotive electronics market continues to ramp, it has taken on a new sense of urgency. Along with that, the methodologies for conducting those tests, and the power those tests consume, are coming under increased scrutiny because of their impact on yield, quality, and the long-term reliability of systems.
“Test engineers and functional designers are raising these issues today as they grapple with how they fit into this,” observed Rob Knoth, product management director for the Digital and Signoff group at Cadence. “The common thread is that test is becoming part of the functional system, so it’s more like another operating mode. That means test has to have very tight connections both to the RTL for functional simulation and emulation, and to the power analysis of the system, because it’s going to create incredible hotspots.”
Luke Schreier, director of automated test marketing at National Instruments, agreed. “With the convergence of different technology blocks into systems-on-chip or systems-in-package, engineering teams combine not only processors but elements of designs that traditionally may have been handled with DFT or scan or other techniques, right next to RF or power or sensors or other things that enable the kind of connectivity that is becoming prevalent in the IoT space. As we get into automotive radar and driver assist, among other things, there are parts that, because of the nature of the chip, are going to be safety-critical. Whether or not you have a good BiST (built-in self-test) methodology for verifying the functionality is critical. There may also be regulatory standards that require testing it in a certain way.”
Particularly as it relates to sensors or ADAS, there is also the need to carefully consider the software algorithms that have to work in conjunction with the sensor data that’s coming from these chips. There is a lot of calibration required to get this right.
“As a result, these devices are always recalibrating themselves,” said Schreier. “Every part of that analog has to be verified and sometimes trimmed, and then verified with other algorithms that are supposed to compensate for the process irregularities, or to get maximum efficiency out of the device as it was built. You can’t replace that with any of the prevalent DFT technologies, so you end up having to do that test.”
But the actual test modes can be so complex that testing in situ, or doing a full system emulation, is difficult. So test modes are put into a chip to take advantage of whatever internal BiST is possible, such as a subsystem test that does not require the full protocol or algorithm stack to verify correct operation.
“This has been happening in cellular and wireless designs for years, wherein to actually make a call on some of these devices was not economical,” he said. “The RF front ends or power amplifiers would all have test modes that could be put in the chips for the purpose of validating some of the structures internally, and getting a pretty good sense that it would work with the communication protocols on top of it, just as a way to get the volumes without suffering the test times. All of this is coming into play with test currently.”
Additional power challenges from test mode
There is some overhead associated with test, however. Compared with functional mode, for example, test mode adds power challenges.
“A much higher number of devices are made to switch all at the same time in the test mode,” said Preeti Gupta, head of PowerArtist product management at Ansys. “Even if the test mode runs at a slower clock, a large amount of current gets drawn over very little time right after the clock edge. As this high current couples with on-die and package inductance, Ldi/dt, it results in significant voltage drop and wrong logic states in turn. The result is manufacturing yield loss—parts that are good to work in the functional mode but produce bad results in test mode. Shrinking geometries that allow packing more devices in the same area and multi-core designs further exacerbate the amount of concurrent switching and subsequent average, peak, di/dt and thermal issues.”
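The droop Gupta describes follows directly from the L·di/dt relationship. A minimal numeric sketch, with all values invented for illustration, shows why the same power delivery network that is fine in functional mode can fail in test mode:

```python
# Hypothetical numbers only: estimate inductive voltage droop (V = L * di/dt)
# in functional mode vs. test mode, where far more flops toggle on one edge.

L_loop = 20e-12       # 20 pH effective on-die + package loop inductance (assumed)
dt = 0.5e-9           # 500 ps current ramp after the clock edge (assumed)
i_functional = 1.0    # A of switching current in functional mode (assumed)
i_test = 4.0          # A in test mode: many more devices switch at once (assumed)

droop_functional = L_loop * i_functional / dt   # 0.04 V
droop_test = L_loop * i_test / dt               # 0.16 V

budget = 0.10 * 0.8   # 10% droop budget on a 0.8 V supply (assumed)
print(droop_functional <= budget, droop_test <= budget)   # True False
```

With these made-up numbers the functional-mode droop fits a 10% budget, while the test-mode droop blows through it, which is exactly the "good in functional mode, bad in test mode" yield-loss scenario described above.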
From a power delivery network perspective, there have been several instances of scan mode failure of chips due to excessive voltage drop, and ensuring power grid integrity during test mode has become a mandatory sign off requirement, Gupta noted. “Design teams are rapidly analyzing long test patterns generated by emulators to identify power-critical peak and di/dt windows for subsequent transient analysis of the power delivery network. In fact, power integrity should not be an afterthought. Power delivery network design must account for test needs early.”
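Finding the "power-critical peak and di/dt windows" Gupta mentions amounts to a sliding-window search over a long current trace. A sketch, assuming (hypothetically) that emulation activity data has already been converted into a per-cycle current estimate:

```python
# Sketch: locate the power-critical window in a long test-pattern current trace,
# so only that window needs detailed transient PDN analysis.
# The trace below is synthetic; a real flow would derive per-cycle current
# estimates from emulator activity data.

def worst_di_dt_window(current, dt, win):
    """Return (start_index, peak_di_dt) of the window of `win` cycles
    containing the largest cycle-to-cycle current ramp."""
    best_start, best = 0, 0.0
    for start in range(len(current) - win):
        ramp = max(abs(current[i + 1] - current[i]) / dt
                   for i in range(start, start + win))
        if ramp > best:
            best_start, best = start, ramp
    return best_start, best

trace = [1.0, 1.1, 1.0, 1.2, 3.9, 1.3, 1.1, 1.0, 1.05, 1.1]  # amps/cycle (synthetic)
start, ramp = worst_di_dt_window(trace, dt=1e-9, win=3)
print(start, ramp)   # the window starting at cycle 1 contains the 2.7 A jump
```

Only the cycles around the identified window would then be handed to the transient power-integrity tool, rather than the full pattern set.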
Fortunately, multiple techniques exist to control test-mode current, including low-power techniques such as clock gating. Clock gating traditionally was turned off in test mode for better observability, but it now can be applied in a controlled manner, Gupta said. Additionally, power grid design and test strategies can be optimized up front for maximum utilization of expensive tester resources while also ensuring robustness.
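One way to apply gating "in a controlled manner" is to stagger which scan chains shift on a given cycle, capping concurrent switching at the cost of longer shift time. A sketch of such a schedule, with chain names and group count invented:

```python
# Sketch: stagger scan-chain clock-gate enables so only a fraction of the
# chains shift on any one cycle, capping peak switching current.
# Chain names and the group count are invented for illustration.

chains = [f"chain_{i}" for i in range(8)]
groups = 4   # only 1/4 of the chains shift per cycle (assumed power budget)

# Assign each chain to a group; group g shifts on cycles where cycle % groups == g.
schedule = {c: i % groups for i, c in enumerate(chains)}

def active_on_cycle(cycle):
    """Chains whose clock gate is enabled on this shift cycle."""
    return [c for c, g in schedule.items() if g == cycle % groups]

print(active_on_cycle(0))   # ['chain_0', 'chain_4']
```

The trade-off is explicit: with four groups, shifting the same data takes four times as many cycles, so in practice the schedule is tuned against tester-time cost as well as the current budget.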
Along these lines, Chris Allsup, senior staff technical marketing manager for the Design Group at Synopsys, pointed out that a variety of DFT optimizations, ATPG algorithms, testing techniques, and verification methods have been developed to manage manufacturing test power issues.
One of these is power-aware ATPG, he said. Because the power rails in a device are designed for functional operation, the increased switching activity caused by scan testing can produce additional, potentially excessive IR drop and false failures during manufacturing test. Specialized ATPG algorithms can be deployed to manage the switching activity caused by the patterns they generate. If shift power needs to be reduced further, tools make it possible to apply hardware-assisted shift power-reduction techniques, such as automatically shifting zeros into entire scan chains during scan insertion, and gating logic on the functional outputs of scan cells. Power-aware optimizations gate only those chains or scan cells with the largest contribution to functional logic switching activity.
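The shift-power effect of ATPG fill choices can be seen with the common weighted-transition-count (WTC) approximation, where a transition that enters the chain early travels past more scan cells and so is weighted more heavily. A sketch with a made-up test cube (X marks a don't-care bit the tool is free to fill):

```python
# Sketch: compare shift power of zero-fill vs. random-fill for ATPG don't-cares,
# using the weighted transition count (WTC) as a shift-power proxy.
import random

random.seed(0)  # reproducible random fill for the demo

def wtc(bits):
    """Weighted transition count: transitions nearer the scan input
    (lower index) shift through more cells, so they weigh more."""
    n = len(bits)
    return sum(n - 1 - i for i in range(n - 1) if bits[i] != bits[i + 1])

def fill(cube, mode):
    """Replace X don't-care bits with 0s ('zero') or random values ('random')."""
    return "".join(b if b in "01"
                   else ("0" if mode == "zero" else random.choice("01"))
                   for b in cube)

cube = "1XX0XXXX1X"             # ATPG care bits plus don't-cares (made up)
zero_fill = fill(cube, "zero")  # "1000000010"
rand_fill = fill(cube, "random")
print(zero_fill, wtc(zero_fill))   # 1000000010 12
print(rand_fill, wtc(rand_fill))   # random fill usually scores higher
```

Zero-fill keeps long runs of identical values in the chain, so it typically yields far fewer weighted transitions than random fill, which is the intuition behind the "shifting zeros into entire scan chains" technique above.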
Another technique for managing manufacturing test power issues is reducing DFT power in mission mode, Allsup said. “It is important that any DFT circuitry not increase dynamic power consumption when the design is running in its mission mode (i.e., in the functional state). Some libraries contain scan cells with a dedicated scan output pin, usually a buffered version of the functional output.”
Scan cells that support it also can gate this dedicated scan output during functional mode to minimize switching activity on separate scan chain nets. A second source of unnecessary power consumption is spurious switching activity created within the compression logic. To avoid this, blocking logic can be added to suppress all switching activity from the functional logic at the compressor inputs, except when scan chains are shifting, he said.
Also, for designs in which all power domains are always on, fault models should allow high test coverage of defects on level shifters. However, for designs with powered-down states and possibly retention cells, it is necessary to test the device in multiple power states to achieve high test coverage of low power design elements because there are specific defects that may not be covered by always-on testing. Testing a sufficient number of power states, in which each switchable power domain is set to its inactive state at least once, will allow these additional defects to be sensitized. Multiple techniques are then used to observe defects unique to low power design elements such as isolation cells, retention cells, power management, and power switches, Allsup explained.
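Allsup's condition can be phrased as a simple coverage check: every switchable power domain must be inactive in at least one tested power state. A sketch, with all domain and state names invented:

```python
# Sketch: verify that a planned set of test power states powers down every
# switchable domain at least once, so defects in the isolation, retention,
# and power-switch logic around each domain can be sensitized.
# Domain and state names are invented for illustration.

switchable = {"cpu_cluster1", "gpu", "modem", "dsp"}

# Each planned test power state lists the domains that are OFF in that state.
test_states = {
    "state_A": {"gpu"},
    "state_B": {"modem", "dsp"},
}

covered = set().union(*test_states.values())
missing = switchable - covered
print(sorted(missing))   # ['cpu_cluster1'] -> at least one more state is needed
```

A real low-power signoff flow would derive the domain list from the design's power intent description rather than a hand-written set, but the coverage condition is the same.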
Finally, it is important to perform specific design and simulation checks to verify testing on low-power designs.
Test always has been the bogeyman, with the worst pathological situations created on the tester.
“Even more interesting isn’t scan, but it’s things like LBiST and MBiST where it’s not sitting on the tester but now it’s in the product,” Knoth said. “It’s actually moving. Understanding the power impact of what test is putting in there creates a huge interplay with emulation.”
To be sure, with so much at stake in applications such as ADAS, test becomes another operating mode, and the devices here are being battle-tested for a reason.
“The volume of workloads that they have to prove that this thing is functionally safe require them to move to emulators,” he said. “And now you’ve got this test component. If you rewind the clock, a lot of test was done at the gate level—scan insertion, BiST insertion. But we can’t do that anymore. We have to prove the functionality of it, and we have to look at the power concerns.”