How the new test data standard can make the test floor more accessible in real-time.
Data is critical for a variety of processes inside the fab. The challenge is getting enough consistent data from different equipment and then plugging it back into the design, manufacturing, and test flows to quickly improve the process and uncover hard-to-find defective die.
Progress is being made. The inspection and test industry is on the cusp of having more dynamic ways to access the data coming from equipment and software. New open-source standards are available to help improve the data that can be pulled from test machines, along with proprietary tools that data analytics companies offer.
Makers of semiconductor equipment and fabs have been working overtime to improve data consistency, which goes well beyond just creating a standard data format. The Standard Test Data Format (STDF) has been used for decades, but it is not extensible or flexible enough. On the test floor, equipment of different ages and from multiple vendors generates data in a variety of formats. The data gathered from one machine can even vary depending on the person who collects it. To get useful analytics out of the data, it has to be scrubbed and often converted.
“One of the biggest problems we see is that with STDF data almost every customer uses the fields differently,” said Keith Arnold, senior director of analytics solutions at PDF Solutions. “There are certain fields that are very difficult because there’s really no way to validate them. Even though it’s a standard, nothing restricts users from putting in pretty much whatever they want to put in there. And it can be as simple as someone just misspelling something because the operator had to enter it by hand.”
The manufacturing process simply cannot progress to adaptive test without clean data. “The biggest problem people have with doing any kind of machine learning is, first of all, collecting the data, cleaning the data, and getting all the associations correct,” said Arnold, a former member of SEMI’s CAST working group.
SEMI’s Collaborative Alliance for Semiconductor Test (CAST) working group was created to develop standards for test data. This year, two standards — TEMS and RITdb — were approved by CAST to improve on STDF. (The third branch of CAST’s standards work is chip ID and traceability, which is beyond the scope of this article.)
Open source
Early in the last decade, both Advantest and Teradyne lobbied for an open-source proxy to handle test data. Teradyne, which created the STDF standard, is now spearheading the TEMS effort.
The TEMS and RITdb standards came out of the desire to collect clean data that can be used to improve chip quality and production speed. The difference between them is that TEMS uses a client/server approach based on HTTP and JSON (JavaScript Object Notation). It is not intended for two-way, real-time control. Instead, it is more of a collection tool: a reporting interface that streams upward from the tester to the client/server. Control information does not flow back from the client/server to the tester. TEMS is a one-way street.
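To make the one-way flow concrete, the sketch below shows what a JSON-over-HTTP event push from a test cell to a collection server could look like. It is a minimal illustration only; the endpoint, field names, and message shape are assumptions made for the example, not the TEMS schema itself.

```python
import json
import urllib.request

# Hypothetical collection endpoint; a real deployment would define its own server URL.
DAS_URL = "http://das.example.local/tems/events"

def post_event(event: dict) -> None:
    """Push one JSON event from the test cell to the server (one-way; no control comes back)."""
    body = json.dumps(event).encode("utf-8")
    req = urllib.request.Request(
        DAS_URL, data=body, headers={"Content-Type": "application/json"}, method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()  # acknowledgement only; nothing flows back to change the tester

# Illustrative event emitted as a test runs; all field names are invented for the example.
post_event({
    "cell_id": "CELL-07",
    "event": "test_result",
    "device_id": "LOT42-W03-D0117",
    "test_name": "vdd_leakage",
    "value": 1.3e-6,
    "units": "A",
    "timestamp": "2022-05-10T14:03:22Z",
})
```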
RITdb, in contrast, has a wider scope. “RITdb is intended where some AI or machine learning thing is watching the data and goes back and changes what the tester is doing, or changes the assembly process. It’s intended to be a reactive organism,” said Mark Roos, CEO of Roos Instruments and co-chair of SEMI’s CAST RITdb Taskforce. RITdb is like a data lake on the test floor.
“You can make arguments that we can use one or the other [TEMS vs RITdb], but at the time we felt that we needed something that was clean. So that’s TEMS,” said Roos. “RITdb, on the other hand, started as a way to solve this issue of unknown data on the floor — defining the data. And then, as we get more into adaptive tests, we moved to more of a real-time streaming control thing where the data would come out and the responses would come back.”
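By way of contrast, a reactive flow closes the loop: something watches the streamed results and sends decisions back toward the test cell. The toy sketch below illustrates that idea under invented names; the queues, message fields, and re-test command are hypothetical and do not reflect RITdb's actual transport or schema.

```python
from queue import Queue

# Invented message queues standing in for whatever transport a real deployment uses.
results_from_tester: Queue = Queue()   # tester -> analysis
commands_to_tester: Queue = Queue()    # analysis -> tester (the "reactive" direction)

def watch_and_react(limit: float) -> None:
    """Toy adaptive-test loop: if a streamed measurement drifts past a limit,
    send a command back that changes what the tester does next."""
    while True:
        msg = results_from_tester.get()
        if msg is None:                      # sentinel: stream ended
            break
        if msg["value"] > limit:
            commands_to_tester.put({
                "action": "add_retest",      # hypothetical command name
                "device_id": msg["device_id"],
                "reason": f"{msg['test_name']} above {limit}",
            })
```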
Fig. 1: The goal of CAST’s Tester Event Messaging for Semiconductors (TEMS) task force was to develop a standardized ATE data messaging system, based on standard internet communication protocols, between a test cell host and a server. The diagram shows a test cell and what is in scope and out of scope for TEMS. Source: SEMI
Benefits
TEMS is unique in the way it collects and routes data. It gathers data from the tester, as well as from the software and hardware connected to it, which together make up a designated test cell. That data then can be sent to multiple tools that need access to it, where previously only one-to-one data transfers were possible. TEMS also does this in real-time, so the engineering team can look into the testing process as it runs.
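One way to picture the one-to-multiple routing is a simple fan-out, where each event from the test cell is forwarded to every tool that has registered an interest. This is a generic illustration of the pattern, not the mechanism the standard defines.

```python
from typing import Callable, Dict, List

Event = Dict[str, object]
Consumer = Callable[[Event], None]

class CellEventFanout:
    """Forward each test-cell event to every subscribed tool
    (yield dashboard, SPC monitor, data-lake loader, ...). Purely illustrative."""
    def __init__(self) -> None:
        self._consumers: List[Consumer] = []

    def subscribe(self, consumer: Consumer) -> None:
        self._consumers.append(consumer)

    def publish(self, event: Event) -> None:
        for consumer in self._consumers:
            consumer(event)          # one event in, many tools notified

fanout = CellEventFanout()
fanout.subscribe(lambda e: print("dashboard:", e["test_name"], e["value"]))
fanout.subscribe(lambda e: print("archive:", e))
fanout.publish({"test_name": "vdd_leakage", "value": 1.3e-6, "device_id": "LOT42-W03-D0117"})
```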
“Let’s say you are a company using an OSAT, but you want to know how many devices you have,” said Laurent Bonneval, who works on TEMS at Teradyne and chairs SEMI’s TEMS Task Force. “Instead of having to wait for a report from the OSAT, you can see in real-time the number of devices and which testers they ran on.”
That, in turn, can trigger messages to whoever needs to be notified immediately of a test issue.
“What TEMS and RITdb both bring to the table is that they’re more extensible,” said Ken Butler, strategic business creation manager at Advantest, who is working on development and deployment of data-analytics based solutions in the Advantest Cloud Solutions ecosystem. “TEMS and RITdb anticipate other data types, such as other data that you would want to communicate, with more opportunities to provide metadata in and around, besides just the raw test responses. But it does that in terms of cell performance, cell health, and that kind of thing, which is available in those formats. STDF did not anticipate that back when it was developed decades ago.”
Providing data in real-time has been a key goal with TEMS and RITdb. “We believe that the need is for this real-time collaboration,” said Roos. “Not every application might need that, in which case they can use TEMS.”
TEMS messages, while they only go in one direction, deliver data in real-time to anyone who needs it. “With real-time, we have to be careful with this term because it means everything and nothing at the same time, depending on the scale,” said Bonneval. “TEMS streams data continuously during the tests, which means the data is available as the test runs.”
STDF, in contrast, is static. Data is not available until a machine finishes all the tests, and then the customer still has to wait for the OSAT to send the data. That makes real-time analysis impossible.
TEMS is moving more data to more tools, something STDF can’t do. “With TEMS, we are spreading all these results to different tools to digest, to analyze the test results in real-time. We switch from one-to-one direction to one-to-multiple directions, which is Industry 4.0 — where the data is not only one to one, but to multiple sources,” said Bonneval.
The advantage of TEMS is that you can choose which data you want. “TEMS will capture the data and send it to a data application server (DAS),” said Bonneval. From there, it depends on who is sitting in front of the desktop. “If you’re a test engineer, when you develop a test program you want to do some characterizations, you want to see the test results. TEMS will send the data to any software, by Teradyne or some other company, where it’s going to display the test result in real-time. You can ensure that what you have created will be stable after you compute a statistical value in real-time. Now, if you are a manager, you are using a different toolset and you don’t know exactly what is going on at the OSAT. With TEMS, now you can focus on your discharge issues. Is it used in the correct way? Is it testing all the time, or is it testing only 10 hours per day? You also can ensure that the number of devices this company plans to deliver — based on the captured data and after some analysis — will be possible. TEMS will provide the right data in order to do it.”
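As an example of the kind of statistic a test engineer might watch while results stream in, the sketch below keeps a running mean and standard deviation (Welford's method) so stability can be judged without waiting for the lot to finish. It is a generic illustration, not part of TEMS itself.

```python
import math

class RunningStats:
    """Incrementally track the mean and standard deviation of streamed test values
    using Welford's algorithm, so no raw history needs to be stored."""
    def __init__(self) -> None:
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0

    def update(self, value: float) -> None:
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)

    @property
    def stddev(self) -> float:
        return math.sqrt(self.m2 / (self.n - 1)) if self.n > 1 else 0.0

stats = RunningStats()
for measurement in (1.21e-6, 1.30e-6, 1.27e-6):   # values arriving as devices are tested
    stats.update(measurement)
    print(f"n={stats.n} mean={stats.mean:.3e} stddev={stats.stddev:.3e}")
```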
The data shows the tester as well as everything in the test cell. “The idea will be to capture the state of your tester. Is it testing? Which kind of system are we testing? Is someone connected? Which kind of configuration — hardware, software — because being sure that you have good quality testing is not only a question of the quality of the tester itself, but also of the environment where it’s located,” Bonneval said. “But if something changes, TEMS will automatically tell you in real-time, ‘Okay, this item has been changed, this event occurs on your tester.’”
If you build it, will they come?
Some test and data analytics firms that are deep into their own solutions may consider the open-source TEMS and RITdb a solution looking for a problem. Because it is still very early days for the TEMS standard, customers are not clamoring for it yet. Semiconductor design and engineering teams and OSATs also may prefer the systems they have now. Those systems work well enough or meet their needs, and they took time to create and test. Supporting a new data standard just because it is a standard isn’t a primary goal if customers don’t ask for it.
Besides Roos Instruments, “I’m not aware of a lot of commercial support for anything other than STDF out there in the marketplace right now,” said Advantest’s Butler. “People may be developing their own internal flows around this capability in order to be able to use it.”
The IDM or OSAT customers always determine what level of data they want. “It will depend on the customer. Some customers want to output all the data. ‘All the data’ means the images, the measurement data that is derived from the image, and then maybe post computation, so post processing data,” said Ben Miehack, product manager for Onto Innovation. “Others don’t do that. They allow the tool to do the processing, and then the image is discarded once it’s processed. That means you have all this image data that could be from 10 to 50 gigabytes or more per wafer, but it’s literally consumed and discarded in a matter of microseconds. So it’s collected, processed, the anomaly is tabulated in a list, and then the image content is discarded. That way you only get defect or metrology data. It’s a customer-by-customer basis and policy. An IDM might collect more of that data than an OSAT, where an OSAT is just providing summarized data.”
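The two retention policies Miehack describes boil down to a single switch in the inspection loop: keep the raw images alongside the derived data, or discard each image as soon as its defects are tabulated. The sketch below is a generic illustration of that choice, not Onto Innovation's software; the function and parameter names are invented.

```python
def inspect_wafer(images, detect_anomalies, keep_raw_images: bool):
    """Toy version of the two policies: either retain raw images alongside derived
    defect/metrology data, or process each image and discard it immediately."""
    defect_records = []
    retained_images = []
    for image in images:
        defect_records.extend(detect_anomalies(image))   # derived defect/metrology data
        if keep_raw_images:
            retained_images.append(image)                # keep-everything policy
        # otherwise the image simply goes out of scope here and is discarded
    return defect_records, retained_images
```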
A specific use case may be all a customer is looking for. “They want to interact with the handler, live, based on some data that’s coming off of the cell, and perhaps even the handler itself,” said Keith Schaub, vice president of technology and strategy at Advantest America. “For example, they’ve measured temperature on the handler, and they may want to combine that with some data that’s coming off the tester and then rebuild the device lot. You want this sort of capability, at least in the early stage, because you need to re-bin the device during the test. So the STDF doesn’t necessarily go away. You need a mechanism to support that, and both TEMS and RITdb do that. It’s a way to manage the data flow, and bring data that’s useful into what we call the integrated workflow.”
Part of the challenge is the nature of data in test and manufacturing. Standardization has to be flexible. “There are two ends of the standardized data spectrum,” said David Fried, vice president of computational products at Lam Research. “At one end of the spectrum, every piece of data is standardized. All the data looks the same. It has the same structure and can be loaded into a flat file. At the other end of the spectrum, the data doesn’t have any standardized structure whatsoever. In this case, every record has its own data format and data type. Unfortunately, neither end of the standardized data spectrum works well for data analytics. Both ends of the spectrum are completely unacceptable.”
Various reasons make either extreme unworkable, said Fried. “On the overly standardized end, you have to accept there are different types of data. For example, on a piece of process equipment, there’s a process recipe. That recipe is static data in a fixed format that is loaded onto the tool. However, as the tool is being operated using that recipe, it generates time-dependent sensor data as the tool executes that process recipe. The sensor data is time series data, not fixed data like a recipe. You’re not going to standardize time series data and static data into the same format. You can’t. They have different axes and different dimensionality. In real life, you’re never going to get to the point where all data is standardized.”
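Fried's point about dimensionality can be seen in two toy records: a static recipe is a flat set of named parameters, while the sensor trace generated as that recipe runs is a time-indexed series. The field names below are invented for illustration.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class ProcessRecipe:
    """Static data: fixed parameters loaded onto the tool before the run."""
    name: str
    parameters: Dict[str, float]           # e.g. {"chamber_pressure_mtorr": 35.0}

@dataclass
class SensorTrace:
    """Time series data: (timestamp_s, value) samples emitted while the recipe executes."""
    sensor: str
    samples: List[Tuple[float, float]]

recipe = ProcessRecipe("etch_step_3", {"chamber_pressure_mtorr": 35.0, "rf_power_w": 600.0})
trace = SensorTrace("chamber_pressure_mtorr", [(0.0, 34.8), (0.5, 35.1), (1.0, 35.0)])
# The two records have different axes: one is keyed by parameter name, the other by time,
# so forcing both into one flat standardized format loses structure.
```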
There’s also an economic reason for standardizing some data. “If no data is standardized, it makes it difficult to cost-effectively perform data analytics. The complexity, effort and investment in joining data, combining data, and referencing data from different data types can far exceed the value of that data and any expected return on your investment. The trick here is to figure out the best data standardization effort that is in the middle of the spectrum. How much work do you put into standardization? How much do you standardize? How much effort do you put into developing the same type of data structure, the same axes, the same type of flat file? Alternatively, how much do you accept different data types, different data styles, and the complexity of joining and combining that data? The further that you move toward data standardization, obviously the easier it is to perform data analytics. Unfortunately, it is much, much harder and requires more effort to move towards the highly standardized side of the spectrum.”
The data knot
One persistent challenge is that companies are highly protective of their data. Complaints about sharing of data date back decades, and the fact that data can be shared more easily using TEMS and RITdb doesn’t mean it actually will be shared among everyone who can benefit from that data. The shift both left and right for various processes in the design-for-manufacturing flow provides useful opportunities for improving more than just test processes, but only if the data being shared is narrow enough not to give away any competitive secrets.
This is complicated by the fact that for some components, test is no longer a fixed step in the flow. It happens in real-time in the field, as well, particularly as devices are used for longer periods of time. If defects show up in multiple products, that data needs to be sent back to the fab to identify and correct those errors. And if the failures are occurring early enough in the product lifecycle, that data needs to be shared much more quickly than in the past among all of the relevant stakeholders in the flow.
“The more data points you have, the more accurate you can be moving forward,” said Uzi Baruch, chief strategy officer at proteanTecs. “What you’re seeing in production is what we call ‘deep data.’ It’s the chip sending telemetry data on its performance and behavior to the outside world. That can be activated during test, when you’re still in manufacturing, but also in mission mode. Data by itself is great, but it also can be combined with algorithms that take advantage and understand what’s going on and connect all the dots for what is coming out of the data. And it can be combined with other data sources that are important to them.”
To make all of this work together also requires some consistency in the data, or at least a way of aggregating it and making it available. That costs money and it takes time, because in many cases fabs and OSATs will run equipment that has been fully amortized for years in order to remain competitive on wafer/chip pricing.
Expecting a big, drastic change-out of data collection systems is likely out of the question on a test floor. “You have stuff that works and that you do every day. The test floors are constant. They’re chaotic. You can’t shut down a test floor for five or six hours to reboot everything. And so it’s very challenging,” said Roos. “We see the opportunity, and everybody wants to be in smart manufacturing. Everybody wants to do this. But when you look at the effort involved, it takes money, and there’s not a really big market for the data and the control side of the industry.”
Implementing TEMS on equipment is not overly cumbersome, but in some situations — such as with 20-year-old equipment — it will take longer.
“The equipment itself is not going to change, but if we have to layer in some software in order for them to be able to generate TEMS files — for example, to be able to move inside their systems to communicate information from one point to another — then we’re facilitating that kind of capability,” said Advantest’s Butler.
Everyone is moving in the same direction toward using test data in real-time, having more visibility into the test process, and training an adaptive test system sometime in the future. Someone had to make a standard. Build the standard and eventually the customers will come.
“We have now three customers focusing on using it,” said Bonneval, speaking about TEMS.
“What we heard from talking to various groups is that there are a very small number of customers who are interested in moving in this direction,” said Butler. “And so, where they want to replace their internal capabilities or existing, default capabilities that are typically based off of STDF and move to TEMS, we are ready to work with them. And in those cases, they come to us and we’re working with them to facilitate whatever their requests are so that they can use the system in the way they intend. So we’ve got a small number of engagements like that but it’s more of a limited customer support type situation as opposed to a big strategic change as we have characterized.”
Waiting for adaptive test
“As people get more and more into adaptive test flows — even though the industry has been talking about that for a while, it’s still something that doesn’t have a huge amount of traction,” Butler said. “It’s not that everybody is using adaptive test in all test applications. It’s in selected places where people do that. But it’s one example of the type of thing where RITdb and TEMS anticipate those kinds of capabilities, and they will service those capabilities as they gain a foothold. Then more and more people are going to want to move in that direction. There’s a certain suite of newer capabilities that are going to drive adoption across the industry, and we’re still kind of early in those phases.”
Adaptive test is only going to be used more often as the industry creates more ways to achieve it. Lam Research, for example, is working on capturing the power of data coming from the process. “This is an important issue at Lam, as we’re really driving a digital transformation in how process development is accomplished,” said Fried. “We’re looking at time series data, static recipe data, sensor data, and process result data. We’re looking at all of that data and trying to ingest it from our laboratories and multiple data sites, and then performing advanced analytics on that data to drive future process development and hardware design. We also want to unleash the power of AI on finding trends and correlations, and on automating our advanced analytics as much as possible.”
Advantest is still on the CAST and standards committees. “We have members that are on both, and supporting both. Because it’s so early, we’re monitoring rather than adopting and driving. It’s still too early,” said Schaub. “There is traction, and it is showing us the future of where things need to go, but by no means is it, ‘Okay, let’s do this versus that.’”
— Ed Sperling contributed to this report.