Integration Challenges For ATE Data

Collecting data to boost reliability and yield is happening today, but sharing it across multiple tools and vendors is a tough sell.

Tighter integration of automatic test equipment (ATE) into semiconductor manufacturing, so that data from one process can be seamlessly leveraged by another, holds significant promise to boost manufacturing efficiency and yield. The challenge is selling this concept to fabs, packaging houses, and their customers.

Data involving yield parameters, process variations, and intricate details about every process layer has long been collected and used inside of fabs. But much more can be done with that data than in the past, particularly when it links different processes and systems. That is becoming especially important in sectors such as automotive and aerospace, where reliability is a pressing issue.

The focus is now on leveraging ATE data, not just for immediate yield optimization, but also to predict long-term reliability and performance. By assimilating more granular data during design, engineers can craft solutions grounded in real-time measurements. But as fabs work to combine established legacy systems with newer equipment, technical compatibility issues can arise.

“Data management and connecting different vendor systems is always a big issue in the industry,” says Rich Dumene, principal test architect for automotive solutions at Renesas. “Modular ATE — using instrumentation from multiple vendors in a single test cell — historically has not been very successful due to the tight integration needed to make an efficient test solution and the lack of a robust way to communicate between different instrument vendors’ control software in a short enough time to make benefits of integration worthwhile.”

The challenge involves combining different vendor systems. Not all of the equipment is compatible, and some of it was purchased decades ago.

“We have on the order of a couple thousand testers worldwide, divided among 15 or 20 different types,” says Dale Ohmart, test engineering manager for Texas Instruments. “We still have testers in production that were first used in 1978, amazingly enough, and we haven’t found a need to get rid of those things.”

Fully amortized equipment typically is viewed as a competitive edge, both because of the cost of replacing it and the cost of training engineers on new equipment. “There’s legacy equipment that is 20 or 25 years old in some factories,” adds Rich Lathrop, senior director of business development and technical sales for Advantest. “Certainly, you see in our fleets an incredible amount of re-used equipment from the entire install base.”

These machines are a testament to the durability and reliability of earlier innovations. However, their continued use also presents some challenges. As semiconductor technology advances, newer systems are introduced with more sophisticated features and capabilities. This mix of old and new creates a diverse technological landscape. Integrating these systems, each with its own specifications and operational nuances, requires a deep understanding of both past and present technologies. It’s a delicate balancing act.

“Most of our customers have been around a long time working in this space,” says Andy Galpin, marketing manager for services at Teradyne. “They’re having to manage these integrations over periods of years, along with iterations and evolutions of equipment and information and data structures and things like that. They all have their own unique systems and ways of doing that.”

One solution to bridging both the variety of systems from different vendors, as well as legacy and modern equipment, is the adoption of a common data standard and format. By implementing a unified data protocol, discrepancies arising from diverse system outputs can be minimized. This streamlines the integration process, and it ensures consistency in data interpretation across the industry. With a common data standard in place, scalability becomes more feasible, allowing for easier system upgrades and the inclusion of future technological advancements without disrupting the existing infrastructure. But getting everyone on board with a data standard for fleet management and integration isn’t going to be easy.

“You’ve got such disparate positions by the customers,” says Eli Roth, smart manufacturing product manager at Teradyne. “The move to universal standards across customers and technology is the challenge, because there’s a lot of legacy costs that are carried and a lot of legacy systems that work. So pushing to a standard when you’ve got existing systems in place with large customers is a challenge.”

Like most things related to semiconductor manufacturing and quality, the emphasis is on what is proven to work over time. “It’s remarkable how old some of these systems are, and there’s still a lot of old-school test engineers that either have disconnected systems, or systems on isolated networks,” says Josh Prewitt, chief product manager for SystemLink at NI. “But there’s just a tremendous amount of value in getting your systems connected and getting a secure way of publishing the data to a common place. It’s becoming more important than ever for all those machines to play well together.”

Data management
One of the major issues with integrating a larger number of ATE machines from different eras is the massive amount of data that has to be collected and organized in ways that make it useful for both test engineers and fab managers. This data often varies in format, granularity, and structure, and it poses a significant challenge in terms of consolidation and interpretation.

“We are generating more and more data,” says Mark Roos, CEO of Roos Instruments. “And all the focus is on yield improvement, which comes from either adjusting limits, or maybe going back to the designers and adjusting the design — or going to the customer and adjusting the expectations — but it’s all about the data. A typical tester might generate 20 to 30 gigabytes a day, and you might have 5,000 of those testers. We can’t use all that data, and one of the reasons is there are no standards for collecting this data.”

There’s also no standardized way to utilize that data. “It’s really important to model the data accurately,” adds NI’s Prewitt. “If it comes in unstructured, it’s basically just a mountain of trash. It has to be transformed into a common data model in a certain format. That process is critical for generating useful data.”
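
As a rough illustration of that kind of normalization, the sketch below maps raw per-tester rows into a single common record shape before they are loaded into a shared store. The field names and the two “vendor” layouts are hypothetical, for illustration only, and not any real vendor’s output format.

```python
# Minimal sketch of normalizing raw tester output into one common record shape.
# The field names and the two "vendor" layouts are hypothetical, for illustration only.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TestRecord:
    lot_id: str
    wafer_id: str
    test_name: str
    value: float
    units: str
    passed: bool
    timestamp: datetime

def from_vendor_a(row: dict) -> TestRecord:
    # Vendor A (hypothetical) reports flat key/value rows with epoch timestamps.
    return TestRecord(
        lot_id=row["lot"],
        wafer_id=row["wafer"],
        test_name=row["test"],
        value=float(row["meas"]),
        units=row.get("unit", ""),
        passed=row["flag"] == "P",
        timestamp=datetime.fromtimestamp(row["ts"], tz=timezone.utc),
    )

def from_vendor_b(row: dict) -> TestRecord:
    # Vendor B (hypothetical) nests identifiers and reports ISO-8601 timestamps.
    ids = row["ids"]
    return TestRecord(
        lot_id=ids["lot_id"],
        wafer_id=ids["wafer_id"],
        test_name=row["name"],
        value=float(row["result"]["value"]),
        units=row["result"].get("units", ""),
        passed=bool(row["result"]["pass"]),
        timestamp=datetime.fromisoformat(row["when"]),
    )
```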

But developing richer and more robust data streams from multiple vendors’ equipment is only part of the challenge for creating a more holistic view of the process. Equally crucial is the establishment of a unified data management system that can seamlessly integrate these diverse data streams.

“For data output, testers have been dominated by STDF (Standard Test Data Format) for many years,” says Advantest’s Lathrop. “We’re also trying to provide more flexibility for what data the fab can access, and when. That is evolving. The data output is becoming richer in content than in the past.”

STDF remains a popular and widely used format due to its long history and well-established adoption by the semiconductor industry, but it does have limitations. As chip designs have evolved and become more complex, STDF files have grown in size, leading to storage and processing challenges. In addition, STDF’s structure can be rigid, making it less adaptable to newer testing methodologies and data types. And the lack of comprehensive metadata capture also can limit the breadth and depth of analysis.

“Most ATE vendors have aligned on supporting the venerable STDF file format for data logs, but the format is relatively limited in what it can handle and is an out stream only,” notes Renesas’ Dumene. “The test description field in the STDF is limited to 255 characters, and is left up to clever engineers to make naming standards to capture the conditions and information about the test.”
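
A common workaround is exactly that kind of naming convention: pack the test conditions into the label itself with a delimiter, then parse them back out on the analysis side. The sketch below shows one hypothetical convention; the delimiter and field order are illustrative and not part of the STDF specification.

```python
# Hypothetical naming convention that packs test conditions into an STDF test label,
# plus a parser that recovers them downstream. Delimiter and fields are illustrative.
MAX_LABEL_LEN = 255  # the limit Dumene describes for the test description field

def encode_label(block: str, test: str, conditions: dict) -> str:
    parts = [block, test] + [f"{k}={v}" for k, v in sorted(conditions.items())]
    label = ";".join(parts)
    if len(label) > MAX_LABEL_LEN:
        raise ValueError(f"label exceeds {MAX_LABEL_LEN} characters: {label!r}")
    return label

def decode_label(label: str) -> dict:
    block, test, *cond = label.split(";")
    return {"block": block, "test": test,
            "conditions": dict(p.split("=", 1) for p in cond)}

# Example: "PLL;lock_time;temp=125C;vdd=0.75V" round-trips through both functions.
```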

Dumene points to a recently balloted standard, SEMI E183, Specification for Rich Interactive Test Database (RITdb), which seeks to address this need with larger and more flexible data sets, as well as bi-directional machine-to-machine communication.

Machine learning
For many years, a primary goal of testing was to get more data, and the right kind of data, to help test engineers understand what is going on in a chip. But equipment technology has advanced, architectures have shrunk, and multi-die solutions have proliferated, while fabs have grown larger and added more testers. The result is extremely rapid growth in data that is difficult to manage.

“I think as a general statement, that there is more data than we know what to do with right now,” says Lathrop. “We are getting used to how to manage that, and I think machine learning is the best way of thinking about this. Machine learning and data processing will become core competencies required in the industry, but are we able today to utilize all the data we’re collecting? I’m not so sure we’re there yet.”

While machine learning (ML) can provide the tools and techniques to efficiently process and analyze vast data sets, it is still in its relative infancy. The quality of a model’s predictions depends on the quality and quantity of the data used to train it, and the model produces poor results when faced with an unexpected problem that lies outside its training data. Actual deployment of ML models in real-world scenarios so far has been hit and miss.

“A big challenge for ML is identifying the valuable data that needs to be collected into a large model,” says Teradyne’s Galpin. “Generally, the machine learning models are happening in big data structures where that data science is happening. Most of the problem is getting access to data and getting the data formatted in a way that your data science can work. Why does an ML model not get into production? Because it feels like you’re spending all this time massaging and getting data available and not getting to what seems like value. And that’s the trouble with ML as a production solution.”

“We’re all having an explosion of data,” adds Prewitt. “Now more than ever, we’re really wanting to get all the data to a central location where we can automatically run reports and automatically do just a simple analysis. You know, a lot of times people want to talk about AI and ML and stuff like that, but I think we have a long way to go just with basic statistics and getting, again, getting all the data off the test system. Let’s make sure we can get the stuff to a common location and a common data format where we can search and filter and sort and query and stuff like that.”
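
Even before AI or ML enter the picture, landing results in a common location and format enables the kind of simple analysis Prewitt describes. A minimal sketch, assuming per-test measurements and spec limits have already been pulled into one structure (the data shapes here are hypothetical), might compute fleet-wide process capability:

```python
# Minimal sketch of the "basic statistics" step: per-test process capability (Cpk)
# computed across results gathered into one place. Data shapes are hypothetical.
from statistics import mean, stdev

def cpk(values: list, lsl: float, usl: float) -> float:
    """Process capability index for one test across many devices and testers."""
    mu, sigma = mean(values), stdev(values)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Example: flag tests that look marginal across the whole fleet.
# results = {"vref_mv": ([...measurements...], 1180.0, 1220.0), ...}
# marginal = {t for t, (vals, lsl, usl) in results.items() if cpk(vals, lsl, usl) < 1.33}
```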

Feed forward and feed back
The goal of feed forward and feed back is to leverage the test data derived throughout the test process to improve visibility into a chip’s functionality and yield. Consider a scenario where a product undergoes three test insertions at the wafer sort phase and two more during final test in its packaged form. Data from a wafer sort insertion can be fed forward to influence the test content in a subsequent package test insertion. The objective isn’t merely to reduce the test duration; it’s also about tailoring the test content for optimization without exceeding the allotted test time.

Conversely, if specific failures emerge later in the product’s lifecycle, it becomes imperative to channel this information back to the earlier test stages. Essentially, it’s like having foresight—identifying potential issues and preemptively adjusting the tests. If, for instance, there’s excessive fallout later on, especially post-packaging (which entails added costs due to the packaging process), it’s beneficial to integrate this insight into earlier test programs.
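
A toy sketch of those two decisions might look like the following. The test names, guard-band rule, and fallout threshold are invented for illustration; production flows would typically key off statistical models rather than fixed limits.

```python
# Toy sketch of feed-forward / feed-back decisions. Test names, thresholds,
# and the guard-band rule are hypothetical and purely illustrative.

def feed_forward(wafer_sort_results: dict, guard_band: float = 0.10) -> list:
    """Select extra package-test content for parameters that passed wafer sort
    but landed within guard_band of a spec limit."""
    extra_tests = []
    for test, (value, low, high) in wafer_sort_results.items():
        margin = min(value - low, high - value) / (high - low)
        if 0 <= margin < guard_band:
            extra_tests.append(f"{test}_extended")  # marginal pass -> test harder later
    return extra_tests

def feed_back(package_fallout: dict, max_fallout: float = 0.005) -> list:
    """Flag failure modes seen at package test that should be screened earlier,
    at wafer sort, to avoid packaging die that will fall out later."""
    return [test for test, rate in package_fallout.items() if rate > max_fallout]

# Example:
# feed_forward({"vdd_min": (0.71, 0.70, 0.90)})  -> ["vdd_min_extended"]
# feed_back({"ddr_io_leakage": 0.012})           -> ["ddr_io_leakage"]
```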

“The goal is being able to compare simulation to validation to production data and see where their gaps are, or see where things deviated from the simulation,” says NI’s Prewitt. “The validation and production told us something different, so how do we go back and then help improve our simulations to be more accurate? Because the more accurate you can get your simulations, the shorter your validation time and the faster you can get to production, because you basically have a better understanding of what you’re going to get.”

This entire process of feed back and feed forward is fundamentally about harnessing the vast amounts of data produced and utilizing it seamlessly. It not only improves early yield, but also enriches the intelligence of test content in subsequent stages.

Security
That raises questions about security, as well, because data frequently needs to move between companies.

“Security keeps coming up continuously regarding data availability and data hoarding, and what data can be made available,” says Teradyne’s Roth. “If you’re a fabless design shop and you’re sourcing your device into the same fab as your competitor, where’s your competitive advantage? It’s not coming from the node you’re on or from the process. You’ve got to have some competitive advantage coming from somewhere, and a lot of people are seeing that in their data.”

Data is vulnerable at many points, but it’s particularly vulnerable when it moves from one place to another. When that data includes valuable IP, the risk increases. One way that fabs help reduce this threat is through air gaps, but even those are not completely safe.

“Many companies are thinking air gap is a way to save the day,” adds Roth. “The security trends in manufacturing and operations are less about cloud attacks or network attacks. It’s about some bad actor within the company bubble who is able to penetrate the air gaps.”

This makes companies very skittish, and it has led to some interesting solutions. “What customers are looking for is to ensure that all of their data stays only with them, and it’s not shared,” says Prewitt. “We might get some telemetry about how many times they click some button or something like that, but our customers’ data is our customers’ data. We don’t have access to it. That way, if we somehow do get compromised, the data is not compromised.”

End-to-end encryption also plays a key role in securing proprietary test data in fully integrated ATE systems. Encrypting the data at both ends means that even if a system is compromised, the data is useless without the corresponding keys.

“All the data going from the tester to system is all encrypted,” adds Prewitt. “Actually, every test system gets its own unique API key. So if at any point in time someone thinks that some test system may be compromised — let’s say somebody pulled the security keys off of it — they can revoke that key and all the other test systems can be working without any issues whatsoever. Revocability is important.”
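
A stripped-down sketch of that per-tester key model, with hypothetical names and none of the surrounding infrastructure (transport encryption, secrets management, audit logging), might look like this:

```python
# Stripped-down sketch of per-tester API keys with revocation. Names are hypothetical;
# a real deployment would use a proper secrets manager and encrypted transport.
import secrets

class TesterKeyRegistry:
    def __init__(self):
        self._keys = {}        # tester_id -> api_key
        self._revoked = set()  # revoked api_keys

    def issue(self, tester_id: str) -> str:
        """Give each test system its own unique key."""
        key = secrets.token_urlsafe(32)
        self._keys[tester_id] = key
        return key

    def revoke(self, tester_id: str) -> None:
        """Revoke one tester's key without affecting any other system."""
        if tester_id in self._keys:
            self._revoked.add(self._keys.pop(tester_id))

    def is_valid(self, api_key: str) -> bool:
        return api_key in self._keys.values() and api_key not in self._revoked
```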

Much of this is modeled after the U.S. Department of Defense’s zero trust approach for suppliers. “Zero trust security is a big trend,” says Galpin. “Even the military is pushing towards zero trust security models. So how do you protect your IP? How do you protect the data that’s coming in and out of your IP? That data is not going to just be passed around to share, because it’s a competitive advantage. Where and how are you going to share it within a customer space? It’s a big question that everybody’s wrestling with. There’s not a universal answer.”

Cost management
Even with the potential for improved yield and efficiency offered by a fully integrated ATE environment and data management standard, it can still be difficult to secure the resources needed for its implementation.

“There’s always a push and pull between cost and coverage,” says Lathrop. “The struggle is where do we add the coverage, and how do we add the coverage? How does that impact manufacturing costs?”

Test operations run on the thinnest of margins, where a difference of even a few pennies per wafer can have a significant impact when scaled up to a full production run. This makes the industry particularly sensitive to any changes or investments that might affect these costs.

“You do have to know, contextually, who you are talking with, to know what they care about,” says Teradyne’s Galpin. “If you’re talking to an operations-focused customer, or a general manager-focused customer, or a design engineering-focused customer, what they each care about is a little different.”

And given that each group has its own budget, there can be some push-pull even within the same organization. “Engineers are worried about how to spend their engineering budget to get a product to market faster and increase yield,” says Roth. “Operations guys are worried about their operational expense. Procurement people are pounding us on what the cost of a piece of CapEx should be. It’s an entire circle, but there are very few who look at that entire piece. It’s like squeezing a balloon constantly left or right. It’s a balance, but you are always squeezing the balloon trying to get to your target values.”

Conclusion
Technology for ATE environments leans heavily toward fully integrated data collection and analysis systems, but the path to get there is rocky. Legacy and modern systems must be linked, data management systems and machine learning models need to be constructed and standardized with feed-forward and feed-back efficiencies, security must be ensured — and all of this needs to happen while keeping costs in line. The rewards, however, are potentially significant, even if they’re spread thinly over a myriad of systems.

“How do you justify to a VP or a CEO that it’s worth spending money on this? You have to convince them that there’s a return on investment on this,” adds Roos. “If I say I want $5,000 a tester per year, I’d better come up with some way that it improves profits by $5,000 a tester per year. It’s very challenging. We know it’s the right thing to do. We know that we have all this data and we need to do something with it. That’s clear. And we think we’ve solved the issue of at least making it easy to generate and low cost to distribute. But we’re still at that issue of how do you justify it if you can’t point to a clear financial return number?”
