Changes Ahead For Test

Testing microprocessors, microcontrollers, application processors, and other system-on-a-chip devices grows more complicated.


Testing microprocessors is becoming more difficult and more time consuming as these devices are designed to take on more complex tasks, such as accelerating artificial intelligence computing, enabling automated driving, and supporting deep neural networks.

This is not just limited to microprocessors, either. Graphics processing units are grabbing market share in supercomputing and other areas. And microcontrollers, often considered the less sophisticated cousins of the mighty MPU, are gaining in complexity as they are pressed into action for the IoT and other new applications.

Currently, the merchant microprocessor test equipment market is surprisingly small, estimated to be worth $65 million a year, according to Risto Puhakka, president of VLSI Research. A key reason is the move by Intel, the leading microprocessor manufacturer, several years ago to design and make its own MPU testers. Those testers are made available to Intel Custom Foundry customers, but to no one else.

Intel and its foundry clients use the proprietary High Density Modular Test (HDMT) technology platform. Some other big chipmakers also have turned to internally developed testers. Meanwhile, other IC manufacturers use a mix of commercially available automatic test equipment and their internal test systems, while still others have resorted to DIY test setups, enabled with PXI-based instruments, such as the various modules offered by National Instruments and other vendors.


Fig. 1: Intel’s HDMT. Source: Intel

But the merchant microprocessor test equipment market is starting to change as microprocessors become more difficult to test.

“The DFT (design-for-test) tools seem to be running out of steam at various levels,” Puhakka noted. “So what happens then is you have to keep the parts on the tester for a longer time for more complex test runs by the ATE system. And this is not a microprocessor alone. This is also other types of complex SoC devices. That seems to be the primary trend we see happening on the non-memory test front at the various levels. That’s basically the primary problem that people are tackling, which is good news for ATE suppliers because the test time starts to get longer and you’re going to need more testers than before.”

Today, mobile processors account for more than half of total SoC test, although the market for microcontroller testers is growing. VLSI Research doesn’t have a specific market figure for MCUs, but it is believed to be in the hundreds of millions of dollars per year. Microcontrollers are going into specialty automotive electronics (along with power devices), consumer electronics, wireless gadgets, and wireline systems.

Complexity is causing challenges for DFT software tools, as well. Microcontrollers are more responsive to design-for-test technology, but microprocessors are a tougher nut to crack.

“Parallel testing is also a big part of the cost-of-test equation,” Puhakka said. “It doesn’t bring the benefits anymore. If you double the number of devices you test, you don’t get a 2x productivity gain. You get a fraction of it. And once you go far enough, the benefit is a diminishing return. The productivity gains from that are slowing, which again points to more testers being needed for more complex devices. It’s the wireless smartphone that’s been driving the business for the last several years very, very strongly. The significance of the high-end digital business has definitely been reduced over time and it’s moved into the smartphone arena. It’s interesting to see how that changes as AI and some new data center devices emerge. The competition is heating up on the digital side a little bit more. That could change the dynamics from smartphone-centric back to computing-centric.”
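
To make the diminishing returns concrete, here is a minimal sketch of a multi-site throughput model, assuming each touchdown has a portion that parallelizes fully across sites and a portion that is repeated per site. All numbers are illustrative, not figures from VLSI Research.

```python
# Illustrative multi-site test model (hypothetical numbers, not from the article).
# Part of each touchdown runs in parallel across all sites; part (e.g., serialized
# analog measurements, indexing overhead) repeats per site, so doubling the site
# count never doubles throughput.

def throughput_gain(sites, t_parallel=10.0, t_serial_per_site=1.0):
    """Throughput relative to a single-site test."""
    t_single_site = t_parallel + t_serial_per_site
    t_touchdown = t_parallel + sites * t_serial_per_site
    return sites * t_single_site / t_touchdown

for n in (1, 2, 4, 8, 16):
    print(f"{n:>2} sites -> {throughput_gain(n):.2f}x throughput")
# 2 sites gives ~1.83x and 16 sites only ~6.77x under these assumptions --
# the diminishing returns that push chipmakers toward buying more testers.
```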

Steve Pateras, product marketing director at Mentor, a Siemens Business, notes that Mentor has built up its DFT product line over the years with the 2009 acquisition of LogicVision, adding built-in self-test (BIST) tools for logic and memory chips, and the 2016 purchase of Galaxy Semiconductor, a provider of test data analysis and defect reduction software. The Galaxy products are now represented in Mentor’s Quantix Semiconductor Intelligence Suite – specifically, Examinator-Pro, Yield-Man, and PAT-Man.

“We are the market leader in DFT,” Pateras asserts. He oversees Calibre, Mentor’s physical verification line; Tessent, which includes the former LogicVision products; and Quantix.

Mentor’s DFT focus falls into three key areas. The first is gigascale design, addressing the complexity of large SoCs through hierarchical design and compression. The second is Tessent’s silicon learning products, which improve productivity during the silicon validation and yield ramp phases. Automotive is third.

“Clearly, automotive is just exploding, automotive electronics is exploding. Electrification, automobiles coupled with ADAS, autonomous driving,” Pateras says. “Autonomous and ADAS function is resulting in strong growth and the need for better quality and reliability in those electronics. We’re referring to this as a perfect storm.”

MCUs vs. MPUs
There are big differences between testing a microprocessor and a microcontroller.

“MPU is really driving technology in terms of smaller and smaller process geometries,” said Derek Floyd, director of business development at Advantest, who specializes in power, analog, and microcontroller test. “MCU is typically using mature technology nodes. MPU is driving transistor counts into the billions, which are typically used for providing parallel processing capabilities (multiple cores). Because of these homogeneous cores, MPU testing can benefit from running the same test pattern simulations on identical cores.”

MCUs are targeted at different markets, and that is reflected in the testing approaches. “MCU devices don’t share this luxury because they are typically single-core,” said Floyd. “And there is a wide variety of peripheral circuit components integrated into their design—including RF connectivity, A-to-D and D-to-A converters, counters, power management ICs, and FET switches. Those require different types of tester instrumentation, such as the DC Scale AVI64 on the V93000, which can cover analog, precision mixed-signal, power, and high-voltage digital I/O in one card.”

Still, there are some similarities in MPU and MCU testing. “Many test basics exist,” he noted. “Almost all digital parts of devices are tested using scan testing to obtain a very high level of fault coverage. In addition, parametric tests such as leakage, input/output levels, and power supply profiling are common across multiple chips. Memory and flash testing are similar, and BIST techniques also can be leveraged.”
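
As a rough illustration of the parametric tests Floyd mentions, the sketch below applies simple limit checks to leakage and I/O-level readings. The limits and the measure() stub are hypothetical placeholders, not any real tester API or device spec.

```python
# Generic parametric pass/fail check of the kind described above (leakage,
# input/output levels, supply current). Limits and the measure() stub are
# hypothetical placeholders rather than a real tester interface.

PARAMETRIC_LIMITS = {
    "input_leakage_uA": (-1.0, 1.0),   # assumed spec limits
    "vol_max_V":        (0.0, 0.4),
    "voh_min_V":        (2.4, 3.6),
    "idd_standby_mA":   (0.0, 0.05),
}

def measure(test_name):
    """Stand-in for an instrument reading; returns canned values here."""
    readings = {"input_leakage_uA": 0.2, "vol_max_V": 0.35,
                "voh_min_V": 2.9, "idd_standby_mA": 0.01}
    return readings[test_name]

def run_parametric_tests():
    """Measure each parameter and compare it against its limits."""
    results = {}
    for name, (lo, hi) in PARAMETRIC_LIMITS.items():
        value = measure(name)
        results[name] = (value, lo <= value <= hi)
    return results

for name, (value, passed) in run_parametric_tests().items():
    print(f"{name:18s} {value:8.3f}  {'PASS' if passed else 'FAIL'}")
```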

Toward system-level test
Two trends have dominated test in these areas. One is fault coverage. The other is cost reduction for testing.

Fault coverage is becoming much more problematic for two reasons. First, systems are getting more complex and heterogeneous, so an MPU or an MCU may be only one of multiple processing elements in a system, and its behavior in terms of power, performance, or electrical characteristics may vary greatly depending upon what else is included in that system. Second, more of these devices are being used in safety-critical markets such as automotive, industrial IoT, and medical, so they sometimes have to function under harsher conditions and frequently have to remain functional for longer periods of time—sometimes 10 or 15 years, versus 2 or 3 for a consumer device.

“A key trend driven by higher integration is that devices are operating with lower voltages and with more functional diversity,” said Floyd. “What we see is the requirement for more power supplies and more power. The market is demanding more and more processing power from MPUs and MCUs, which are now ubiquitous. Good examples of processing power include ADAS systems for vehicles and AR/VR solutions for gaming. MCUs are the core of IoT solutions. The impact on test as product costs drop means that COT [cost of test] needs to be carefully monitored and minimized.”

Cost reduction, meanwhile, requires increasing the level of multi-site test for high-volume devices. In effect, this moves testing much farther left in the production process, a trend that has been underway for numerous steps in semiconductor design all the way through to manufacturing. How that plays out with system-level test (SLT) remains to be seen, but SLT is becoming much more of a focus for test equipment vendors these days.
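
A back-of-the-envelope cost-of-test model shows why multi-site efficiency and test time dominate this calculation; the tester rate, test time, and yield below are assumptions chosen only for illustration.

```python
# Simple cost-of-test estimate (all inputs are illustrative assumptions).
# Cost per good device rises with test time and falls with effective
# parallelism (site count x multi-site efficiency) and yield.

def cost_of_test_per_device(tester_cost_per_hour, test_time_s,
                            sites, multisite_efficiency, yield_rate):
    effective_parallelism = sites * multisite_efficiency
    good_devices_per_hour = (3600.0 / test_time_s) * effective_parallelism * yield_rate
    return tester_cost_per_hour / good_devices_per_hour

# Assumed: a $300/hour tester, 20 s of test time, 8 sites at 80% efficiency,
# 95% yield -> roughly $0.27 per good device. Longer test times or lower
# multi-site efficiency push this number up quickly.
print(f"${cost_of_test_per_device(300, 20, 8, 0.80, 0.95):.2f} per good device")
```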

“In the flow, from pre-silicon or pre-PCB to post-silicon or post-PCB, every stage of the process is siloed,” said George Zafiropoulos, vice president of solutions marketing at National Instruments. “Each silo has a different answer, and vendors service a pre-silicon silo differently than post-silicon. Testing is also siloed. With a board or chip, the early stage of test is an abstract behavioral system model.”

By building test that spans what are today business or functional silos within chip companies, a more cohesive system-level approach can be developed. This is resonating across a number of different areas.

“We see a few key market dynamics where system-level test is emerging as an important test methodology,” said Anil Bhalla, senior manager of marketing and sales at Astronics Test Systems. “MPUs continue to move toward smaller process nodes, and we see them going sub-10nm in the next wave. At the 10nm node, with current test methodologies and a typical 85% at-speed ATPG coverage, there can be billions of transistors that are not tested. As MPUs continue to move into the automotive segment, safety becomes increasingly important. We expect that the electronics in our automobiles will work in all types of temperatures, so thermal testing over a wide range of temperatures is important.”
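
The arithmetic behind that claim is straightforward. Treating coverage as a rough proxy for exercised logic, and assuming a hypothetical 10-billion-transistor design:

```python
# Rough arithmetic only; the transistor count is a hypothetical example and
# ATPG coverage is treated as a crude proxy for exercised logic.
transistor_count = 10_000_000_000   # assumed 10B-transistor MPU
at_speed_coverage = 0.85            # the "typical 85%" cited above
untested = transistor_count * (1 - at_speed_coverage)
print(f"~{untested:,.0f} transistors never exercised at speed")  # ~1,500,000,000
```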


Fig. 2: SLT. Source: Astronics

A key factor here is not just how individual components work, but how they work together in context with other parts of a system. That also affects test coverage.

“The number of concurrent activities that our cars will need to do is dramatically increasing, and we want to make sure that all the different scenarios are being tested by what we are calling ‘concurrent scenario testing,’” Bhalla said. “For example, assume that you are in your car with your navigation system directing you, your satellite radio playing, your kids are streaming video on Wi-Fi, and an incoming call comes in on your Bluetooth. All of a sudden, another car swerves into your lane. What takes priority? From a test perspective, this can be a very difficult set of patterns to write in a typical ATE program, but with a system-level test approach it is much simpler. We expect to continue to see the same pressures to reduce the cost of test while at the same time test devices are continually increasing in complexity.”
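
A toy sketch of the idea follows: several background workloads run concurrently while a higher-priority safety event must preempt them. The scenario names and preemption mechanism are illustrative stand-ins, not an actual system-level test program.

```python
# Illustrative "concurrent scenario" harness: background workloads run until a
# higher-priority event preempts them. The workloads are placeholders for real
# SLT stimulus (navigation, streaming, Bluetooth), not an actual test program.

import threading
import time

def workload(name, stop_event):
    while not stop_event.is_set():
        time.sleep(0.01)           # stand-in for ongoing traffic/processing
    print(f"{name}: yielded to higher-priority event")

stop = threading.Event()
scenarios = ["navigation", "satellite_radio", "wifi_video", "bluetooth_call"]
threads = [threading.Thread(target=workload, args=(s, stop)) for s in scenarios]
for t in threads:
    t.start()

time.sleep(0.1)                    # concurrent load is running...
print("EVENT: collision-avoidance triggered -- must take priority")
stop.set()                         # the system-level check: does the critical path win?
for t in threads:
    t.join()
```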

There are ways to refine that even further. Not all tests are of equal value, which plays into the silo concept. “Testing everything the same is wasteful,” said David Park, vice president of marketing at Optimal+. “So chips are getting more complex, packages are getting bigger, and system-level tests are taking longer. But rather than catch everything at the system level, there are some things you can catch sooner to determine the goodness of a device earlier. The goal here is to be more intelligent about how you apply a fixed budget for test. That saves you money, your tests are more targeted, and you save on test time, which is a huge cost.”

In addition, not all definitions of a system are the same. For example, system-level testing can apply to microprocessors, boards, or even storage systems.

“Storage SLT is still in its infancy, much as memory test was three decades ago,” said Scott West, SLT product marketing manager at Advantest. “However, an SSD’s state machine has virtually an infinite number of combinations that must be tested effectively without an infinite amount of time available to perform brute-force iterations. While ICs have a set number of vectors or memory patterns to be run, SSDs feature a huge number of constantly changing states – factor in unplanned events (such as power-cycling), and the result is a huge number of cases that are difficult to cover. SSD providers can’t address every eventuality – but they must be able to ship product to their customers with absolute confidence it will work.”
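
A toy example of why exhaustive coverage is out of reach: randomized command sequences with occasional injected power cycles explore far more device states than any enumerated pattern set could. The command set and session model below are hypothetical, not a real SSD test flow.

```python
# Hypothetical randomized SSD exercise: mix commands and inject occasional
# power cycles. A real harness would issue each command to the drive and
# verify data integrity afterward; this sketch only models the sequencing.

import random

COMMANDS = ["read", "write", "trim", "flush"]

def random_session(seed, steps=1000, power_cycle_prob=0.01):
    """Drive a random command mix and count injected power-cycle events."""
    rng = random.Random(seed)
    power_cycles = 0
    for _ in range(steps):
        cmd = rng.choice(COMMANDS)   # command the harness would issue
        if rng.random() < power_cycle_prob:
            power_cycles += 1        # unplanned event the drive must survive
    return power_cycles

for seed in range(3):
    print(f"session {seed}: {random_session(seed)} injected power cycles")
```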

Conclusion
As microprocessors, microcontrollers and other advanced SoC devices become more complex, they are presenting new challenges to automatic test equipment and related gear, such as handlers. Parallel site testing is not yielding the same productivity boosts that were seen in the recent past, spawning a number of ideas that are just beginning to play out in the test market.

Test strategies need to be redefined in a system context, but that raises questions about whether the market will bear the cost. The question being asked privately is whether costs can be reined in to keep test at current percentages of the total chip budget, or whether they will rise and subsequently need to be amortized against the total system cost rather than just a discrete part such as an MCU or an MPU. So far, there are no answers. What is clear is the old approach is running out of steam. How these new approaches will play out remains to be seen, but change is definitely coming to this sector.

—Ed Sperling contributed to this report.

Related Stories
How Testing MEMS, Sensors Is Different
These devices require more than an electrical input and output.
Time For Massively Parallel Testing
Increasing demand for system-level testing brings changes.
2.5D Adds Test Challenges
Advanced packaging issues in testing interposers, TSVs.


