Existing test concepts are being leveraged in new ways to meet stringent automotive requirements.
Test concepts and methods that have been used for many years in traditional semiconductor and SoC design are now being leveraged for automotive chips, but they need to be adapted and upgraded to enable monitoring of advanced automotive systems during operation of a vehicle.
Automotive and safety-critical designs have very high quality, reliability, and safety requirements, which pair naturally with design for test (DFT). As a result, the automotive electronics industry has expanded the use of DFT technology well beyond the design flow, into the overall system architecture and functional validation plan.
“Here, test can’t be an afterthought,” said Rob Knoth, product management director at Cadence. “Whether it’s safety mechanism insertion or ATPG testing for high quality before manufacturing, the test plan has to be part of the system architecture. At the same time, test must be as lightweight and as transparent as possible so that it doesn’t become a burden.”
Neil Stroud, senior director of technology strategy for the Automotive & IoT Line of Business at Arm, agreed. “Built-in self-test (BiST) is not a new technology in itself, but it is becoming more prevalent in automotive SoCs—especially when it is part of a safety-critical or high-reliability system such as braking or steering control. These self-tests are implemented within the SoC hardware, and they do a very good job of diagnosing any faults within the logic (LBiST) and memory (MBiST).”
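The memory side of this can be illustrated with a March-style algorithm, the family of read/write sequences that MBiST controllers typically implement in hardware. Below is a minimal software sketch of a March C- run; the one-bit-per-word memory model and the injected stuck-at fault are hypothetical, for illustration only.

```python
# Illustrative sketch of a March C- style test, the class of algorithm an
# MBiST controller runs in hardware. The memory model and the injected
# stuck-at-0 fault are hypothetical, for demonstration only.

def march_c_minus(mem, n):
    """Run March C- over an n-word, 1-bit-per-word memory model.
    Returns the addresses where a read mismatched its expected value."""
    fails = []

    def read_expect(addr, expect):
        if mem.read(addr) != expect:
            fails.append(addr)

    for a in range(n):                 # any order (w0): write 0 everywhere
        mem.write(a, 0)
    for a in range(n):                 # ascending (r0, w1)
        read_expect(a, 0); mem.write(a, 1)
    for a in range(n):                 # ascending (r1, w0)
        read_expect(a, 1); mem.write(a, 0)
    for a in reversed(range(n)):       # descending (r0, w1)
        read_expect(a, 0); mem.write(a, 1)
    for a in reversed(range(n)):       # descending (r1, w0)
        read_expect(a, 1); mem.write(a, 0)
    for a in range(n):                 # any order (r0)
        read_expect(a, 0)
    return fails


class FaultyMemory:
    """1-bit-per-word memory with one optional stuck-at-0 cell."""
    def __init__(self, n, stuck_at_zero=None):
        self.cells = [0] * n
        self.stuck = stuck_at_zero

    def write(self, addr, val):
        self.cells[addr] = 0 if addr == self.stuck else val

    def read(self, addr):
        return self.cells[addr]


if __name__ == "__main__":
    print(march_c_minus(FaultyMemory(16), 16))                   # []  -- clean part passes
    print(march_c_minus(FaultyMemory(16, stuck_at_zero=5), 16))  # [5, 5] -- fault caught twice
```

Each read-then-write "march element" guarantees that every cell is both read and overwritten in both address orders, which is what lets this class of test catch stuck-at, transition, and many coupling faults.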
However, Stroud noted that a system will not rely solely on BiST mechanisms, for a couple of reasons. “First, today the function under test must be taken offline to test it. It cannot be used for the intended application at that time. Online BiST will become more prevalent in the future. Therefore, today ‘BiSTing’ usually occurs at power-up or power-down.”
On top of that, there is always a tradeoff to be made around the operating frequency of the testing due to dissipated power. “When considering any safety critical design, a ‘divide-and-conquer’ approach must be taken where BiST capability is a key part of the test strategy. But it should be complemented by other detection mechanisms and capabilities, such as error correction codes (ECC), parity detection, software test libraries, lock stepping, and so on.”
New approaches for existing technology
A key part of this involves gaining access to those portions of a system that need to be tested, and that has become problematic because there are not enough test pins to bring in test data. One solution is to utilize the USB or PCI Express technology that is already built into a device, said Steve Pateras, senior director of marketing for Test Automation at Synopsys.
While this may seem like an obvious choice, Pateras said it wasn’t so easy at first to convince ATE companies to support these interfaces. But once they understood the problem, they jumped on board. Advantest, for one, is adding capabilities into its testers as well as new hardware so the equipment can be used to directly drive USB and PCI Express interfaces.
With the hardware piece in place, the question then became how to generate test data to be applied through USB or PCI Express interfaces, and how to utilize that data on-chip.
“We needed two other components,” Pateras said. “On-chip, we needed to have the ability to interface to these high-speed interfaces, and we needed IP circuitry on-chip that could take this high-speed packet data and then translate that into standard DFT-based data, because we couldn’t change all of our infrastructure. We needed some sort of interface to link between these very high-speed packet forms of data transmission and our traditional scan-based, low-speed data.”
That IP was developed and integrated into the chip and verified from the USB side. It also is able to extract failure and diagnostic data.
The third piece of the puzzle was determining how the tester would get the test data in USB format, which needed to be resolved because test data, when generated, usually is in basic scan form. Now it needed to be generated by software that could both create USB packet data and accept incoming packet data from the chip, which would be the test results or test-fail data. It also had to be able to depacketize that data, interpret it, and ultimately create failure log files, Pateras said. This active software runs on the tester and performs all the translation needed to make this work.
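The translation layer described above can be sketched in miniature: scan vectors are framed into packets on the tester side and unpacked back into scan-chain bitstreams on the chip side. The 4-byte header format here is invented for the example; real USB or PCI Express framing, and the vendor's actual DFT interface IP, are far more involved.

```python
# Hypothetical illustration of the scan-to-packet translation layer.
# The header layout (chain id, flags, bit count) is invented for this
# example and does not reflect any real USB/PCIe or DFT IP format.

import struct

def packetize(scan_bits: str, chain_id: int) -> bytes:
    """Frame a scan-chain bitstream into a length-prefixed packet."""
    payload = int(scan_bits, 2).to_bytes((len(scan_bits) + 7) // 8, "big")
    header = struct.pack(">BBH", chain_id, 0, len(scan_bits))  # id, flags, bit count
    return header + payload

def depacketize(packet: bytes):
    """Recover (chain_id, scan_bits) from a packet."""
    chain_id, _flags, nbits = struct.unpack(">BBH", packet[:4])
    value = int.from_bytes(packet[4:], "big")
    return chain_id, format(value, f"0{nbits}b")

pkt = packetize("1011001110001", chain_id=3)
print(depacketize(pkt))  # (3, '1011001110001')
```

The bit count in the header matters because scan-chain lengths are rarely byte multiples; without it, leading zero bits would be lost on the return trip.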
Because existing functional high-speed interfaces are used, these interfaces are accessible throughout the life of the product, Pateras continued. “These are not dedicated pins for test, which ultimately go away, as in they are not part of the ultimate chip package, they are not accessible. These are functional interfaces that are there on the chip, on the board, on the module. And they provide continuous access to, let’s say, a link from a USB into our DFT structure, where we have access to our DFT structures through that USB even in the system, even in the car. It means we can apply various tests and diagnostics from the car’s software system through existing portals, which are USB or PCI Express interfaces. This provides a direct interface into the DFT on the various chips so that virtually any test can be applied or any diagnostics can be run without any additional infrastructure needed. This allows any chip from any manufacturer, basically, that has USB for us to then interface to.”
As a result, it’s not just static patterns being created and applied to the chip. It’s intelligent software that learns over time and can improve the tests through the diagnosis as it learns. And because it’s part of the system, it can talk through the cloud and send information it’s gaining from a distributed system to a centralized location for general learning about how different things are failing on different cars. “Once that learning is achieved, and new tests or new diagnostics are derived, that information can be sent back to all the cars to improve the tests over time. And all of this is controlled and applied through existing functional infrastructure of USB, and PCI Express interfaces.”
Farzad Zarrinfar, managing director of the IP Division at Mentor, a Siemens Business, echoed the recognition that machine learning will play a major role in automotive test going forward.
“When we were dealing with older nodes like 65nm or 90nm, there was not much demand for Monte Carlo analysis, but demand for Monte Carlo analysis and variability analysis is dramatically increasing,” Zarrinfar said. “It’s a manifestation of process geometries continuing to shrink. Machine learning is going to have to play a major role in looking at test parameters and looking at test data. By looking at test data in this way, we’ll be able to very intelligently identify the parameters that need to be changed to enhance the quality of the products. Here, machine learning and AI will play major roles in IP, and especially memory technology. We are seeing that in AI technology from an architectural point of view. Near-memory computing is becoming prevalent. It’s not like the old days, where you would have a RISC processor and the RISC processor would do all the jobs. The memory architectures that lend themselves to AI, and the computations that lend themselves to AI, are dramatically different from a traditional RISC architecture, and the multiply-accumulate functions coupled with memory can play a major role in low-latency, power-efficient computations.”
Tracking test
Another significant requirement of playing in the automotive ecosystem is traceability and documentation for every part of the chip. These are essential parts of compliance with various standards, including ISO 26262 for functional safety. Even for deliverables that will not go into the final chip design, most companies in and around the automotive ecosystem choose to become certified in ISO 26262 in order to assure customers they are intimately aware of the safety requirements.
For any automated system, it becomes very important to know why a certain decision was made, said Ranjit Adhikary, vice president of marketing at ClioSoft. “For example, for an automotive IP, it is important to know what experiences people have had with it, because the lifespan of the IP is 10-plus years. So you need to know if an IP is no longer in production or otherwise unavailable, and what equivalent IPs could be used as a substitute. You also need to know who the customers are. For example, the companies you partner with—what are the variations, what issues have they faced? This means it is very important to have traceability. Further, from a documentation standpoint, you may have USB 3.0, for example. This is a specification that has been used for a lot of automotive IPs. The standards keep changing, so the traceability aspect comes in here because you need to see if a particular version has been used, and what other IPs have been used. Verification documents must be tracked as well.”
Improving LBiST
In automotive applications, particularly in autonomous vehicles, tests must be run when the car is operating. But even before fully autonomous vehicles hit the road, there is increasing demand to be able to test devices continuously, or at least periodically, for liability reasons.
“The only real way to do this for testing the logic at least in all these chips is with logic built-in self-test (LBiST), which is a pseudo-random or random-pattern-based approach where you scan in random data, you apply the data, and you then compress all the results into a signature,” Synopsys’ Pateras explained. “And then you look at whether the signature is correct or not to know if all the tests pass.”
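The flow Pateras describes can be sketched in a few lines: an LFSR supplies pseudo-random scan stimulus, the circuit under test responds, and a multiple-input signature register (MISR) compacts every response into a single signature that is compared against a golden value computed in simulation. The polynomials, register widths, and the stand-in "circuit" below are all illustrative toys.

```python
# Toy sketch of the LBiST loop: LFSR stimulus -> circuit -> MISR signature.
# Polynomials, widths, and the stand-in circuit are illustrative only.

def lfsr_stream(seed: int, taps: int = 0b10111):
    """5-bit Galois LFSR producing pseudo-random scan patterns."""
    state = seed
    while True:
        yield state
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= taps

def misr_step(sig: int, response: int, taps: int = 0b1100000, width: int = 7):
    """Fold one response word into a multiple-input signature register."""
    feedback = taps if sig >> (width - 1) else 0
    return ((sig << 1) & ((1 << width) - 1)) ^ feedback ^ response

def run_lbist(circuit, n_patterns: int = 64, seed: int = 0b00101) -> int:
    sig = 0
    gen = lfsr_stream(seed)
    for _ in range(n_patterns):
        sig = misr_step(sig, circuit(next(gen)))
    return sig

good = lambda x: (x * 3 + 1) & 0x7F   # stand-in for the fault-free logic
bad = lambda x: good(x) ^ 0x10        # stand-in for a fault flipping one response bit

golden = run_lbist(good)              # computed in simulation at design time
print(run_lbist(good) == golden)      # True  -- signature matches, test passes
print(run_lbist(bad) == golden)       # False -- corrupted responses skew the signature
```

The appeal in-system is the tiny result footprint: thousands of response bits reduce to one signature-register compare, so the pass/fail decision costs almost nothing at run time.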
Even though LBiST has been used for automotive parts for years, there are some inherent problems with this approach.
“Everything gets compressed into a signature, and you cannot have any unknown or unexpected test results,” Pateras said. “During the design phase, you’re actually running simulations, you’re calculating the signature, and that’s what you look for again to test in hardware. The problem is that the simulation is based on expected responses of the circuit, and a simulator may not always get the right responses. There are things that happen in hardware that may not be properly addressed through simulation, such as timing marginalities and false paths—things that, if they are not properly dealt with, can end up being X’s in simulation. These can be dealt with if they are known, but often these can be timing marginalities that occur post-tapeout. If these unknown states are not caught by simulation, then the signatures are incorrect and the LBiST will not run.”
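Why a single X state is fatal to signature-based test can be shown directly: the signature register folds in every response bit, so one bit whose value simulation cannot predict makes the final signature itself unpredictable, and no single golden value can exist. The MISR and response values here are illustrative toys.

```python
# Sketch of why one X state breaks signature-based LBiST: a single
# response bit the simulator cannot predict (e.g., a timing-marginal
# path) yields two possible hardware signatures, so no one golden
# signature is correct. MISR parameters and responses are toy values.

def misr(responses, taps=0b1100000, width=7):
    sig = 0
    for r in responses:
        feedback = taps if sig >> (width - 1) else 0
        sig = ((sig << 1) & ((1 << width) - 1)) ^ feedback ^ r
    return sig

known = [0x2A, 0x11, 0x5C, 0x03]   # fully deterministic responses
golden = misr(known)               # what simulation would predict

# Suppose bit 4 of the second response is an X: hardware may return either value.
x_as_0 = [0x2A, 0x11 & ~0x10, 0x5C, 0x03]
x_as_1 = [0x2A, 0x11 | 0x10, 0x5C, 0x03]

print(misr(x_as_0) == misr(x_as_1))  # False -- the two outcomes diverge,
                                     # so a good part can "fail" its own signature
```

This is why X-masking or X-bounding logic is designed in up front: once the silicon is back, the only remedies are abandoning in-system LBiST or an ECO respin, as noted below.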
There are only two ways of circumventing this, Pateras said. “One is to not use LBiST, which leaves no way of doing a test in system. The other way is to do an ECO or design respin to deal with it, which is extremely costly.”
Finding a solution to this issue is critical because most automotive applications have very small windows for applying tests—somewhere on the order of 10 milliseconds. As such, reducing test time is significant.
Conclusion
While existing BiST methods are being applied to new and increasingly challenging problems within the automotive ecosystem, there is more work to be done to achieve the level of functional safety that autonomous vehicles will require in the coming decade. The good news is that all hands are on deck to leverage what is currently well understood to solve tomorrow’s automotive test and functional safety requirements.