Where And When End-to-End Analytics Works

Improving yield, reliability, and cost by leveraging more data.

With data exploding across all manufacturing steps, the promise of leveraging it from fab to field is beginning to pay off.

Engineers are beginning to connect device data across manufacturing and test steps, making it possible to more easily achieve yield and quality goals at lower cost. The key is knowing which process knob will increase yield, which failures can be detected earlier, and what new process/design interactions most impact yield. But to achieve that, the connectivity between different data sources needs to be in place.

Yield management and lifecycle management analytics platforms promise two key things. First, teams can increase data quality and integrity, ensuring confidence in the everyday analysis that supports yield and reliability goals. Second, effective platforms provide insight into relationships among all the data sources, from design simulation forecasts of on-die monitor values to inspection images and system data.

During semiconductor design and manufacturing, data is generated both pre- and post-silicon. For years, this data was siloed in different places. Engineers worked with data at a given process step, for instance, often without taking advantage of potential learning beyond that step, primarily because the data was not readily accessible.

This has slowly been changing over the past 15 years. First, large IDMs began adopting feed-forward applications in test manufacturing. Within the foundry/fabless environment, similar applications have become possible thanks to data analytics platforms, the decreasing cost of storage, and easier access to big-data computing environments. Together, these facilitate the end-to-end connection of data, along with the generation of new and actionable insights.

The problem is that not everyone’s end-to-end is the same. Engineers have different perspectives, given their use cases and responsibilities. As a result, the questions they are most interested in answering can differ.

End-to-end directions
Facilitating end-to-end analytics is not simply a matter of having all the data in one data lake or repository. Expertise in data content and in use cases plays a key role in fulfilling the promise of end-to-end analytics. The particular use case dictates the direction the analysis takes.

Typically, when engineers discuss semiconductor manufacturing data, their framework is chronological. “Upstream” means earlier in production, and “downstream” means later. As wafers move through the fabrication process, data is generated in the form of equipment monitors, metrology measurements, and inspection images. The assembly process generates the same kinds of data, but now for units (or parts), or sets of units.

Fig. 1: Wafers and devices flow chronologically from multiple fabs through the device assembly and packaging processes. Source: Onto Innovation

With on-die circuit monitors, this chronology can now extend into system and in-field data, providing a useful connecting thread of measurements.

At each stop along the manufacturing line, there is data per module. One also can consider all the data generated, together with its associated metadata, as end-to-end data. In a sense, that represents an encapsulation of all the data for a wafer, a lot of wafers, a unit, or a tray of units. This data often is siloed, with device measurement data, equipment monitor data, and factory operations data kept as distinct data groups. Typically, these groups are stored in different data management systems, each with its own data format quirks.


Fig. 2: Encapsulated data per process step. Source: A. Meixner/Semiconductor Engineering
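
To make the encapsulation idea concrete, below is a minimal sketch in Python of how one stop’s device measurements, equipment monitors, and factory operations data might be grouped with their metadata under a single record. All class and field names here are illustrative assumptions, not any vendor’s schema.

```python
# Minimal sketch of per-step data encapsulation (all names are illustrative).
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class StepRecord:
    """All data captured at one manufacturing stop, kept with its metadata."""
    step_name: str          # e.g., "etch", "wafer_sort", "final_test"
    entity_id: str          # wafer, lot, unit, or tray identifier
    timestamp: datetime
    measurements: dict = field(default_factory=dict)        # device measurement data
    equipment_monitors: dict = field(default_factory=dict)  # tool/chamber sensors
    factory_ops: dict = field(default_factory=dict)         # operator, carrier, recipe

# One encapsulated record for a wafer at an etch step
record = StepRecord(
    step_name="etch",
    entity_id="LOT123-W07",
    timestamp=datetime(2023, 5, 1, 14, 30),
    measurements={"cd_nm": 28.4},
    equipment_monitors={"chamber_pressure_mtorr": 5.2, "gas_flow_sccm": 41.0},
    factory_ops={"tool": "ETCH-14", "operator": "OP-221", "foup": "F0934"},
)
```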

Similarly, for on-chip data, a broader context of a measurement with respect to the device’s thermal, power, and computing workload can be relevant when setting up a data analytics platform. But to have an effective data analysis platform, there needs to be an understanding of the relationship among the data gathered at each stop in the flow, as well as a means to connect data within each stop.

“To be able to properly put data relationships into the right construct you have to be able to understand the use cases at the end for all that data. For this you need content experts who can create good questions about possible changes in data,” said Mike McIntyre, director of software product management at Onto Innovation.

Fig. 3: Data produced during wafer fab manufacturing and assessment of cross-data interactions. Source: Onto Innovation

Others agree on the value of domain expertise in connecting upstream to downstream data. The longer the timespan between connections, the more important this expertise becomes. A team of experts is especially necessary when connecting manufacturing data to in-field, system-level data.

“If you do some analytics on the wafer, there may be use cases with in-field data that the engineer doesn’t think of because they only look at wafers,” said Paul Simon, group director of silicon lifecycle analytics at Synopsys. “In general, it may be that some data that gets generated upstream is necessary downstream. Engineers who work on the upstream part may or may not know the value of that data. That’s why you need to bring that domain knowledge together. So on my team, there are in-field people, test people, assembly people, manufacturing people, and design people.”

Where end-to-end works today
In manufacturing, engineers commonly use a feed-forward method of data analysis. Product test engineers were early adopters, successfully using wafer-level test data to determine the best subsequent test or assembly choices, such as skipping a long burn-in test.
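
As a rough illustration of feed-forward in action, the sketch below routes a die to burn-in only when its wafer-sort indicators suggest latent risk. The indicators and thresholds (IDDQ, Vmin, neighboring-die failures) are invented for illustration; real adaptive-test rules are product-specific.

```python
# Sketch of a feed-forward burn-in decision (thresholds are invented).
def needs_burn_in(iddq_ua: float, vmin_v: float, neighbor_fails: int) -> bool:
    """Route a die to burn-in only if wafer-sort indicators suggest latent risk."""
    return iddq_ua > 50.0 or vmin_v > 0.75 or neighbor_fails >= 3

dies = [
    {"die": "D001", "iddq_ua": 12.0, "vmin_v": 0.62, "neighbor_fails": 0},
    {"die": "D002", "iddq_ua": 55.0, "vmin_v": 0.64, "neighbor_fails": 1},
]
for d in dies:
    route = "burn-in" if needs_burn_in(d["iddq_ua"], d["vmin_v"], d["neighbor_fails"]) else "final test"
    print(d["die"], "->", route)  # D001 skips burn-in; D002 does not
```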

“We’ve been working with Advantest and our OSAT partners about putting our data exchange network out on test floors, so that information from upstream processing (e-test, wafer probe) can be brought down to testing at burn-in, final test, or a system-level test,” said John Kibarian, CEO of PDF Solutions. “Once you know the context of which market that chip is going to, you can take all the previous information and make a decision regarding the next test manufacturing insertion — specifically, whether it’s suitable for that application or not. And that’s really all around enabling our customers to be as agile as possible with the way production choices are made.”

Adaptive flows are not just for test. With improved data analytics, fab process engineers can respond to multivariate changes, thereby reducing the risk of yield and quality excursions.

“I get into discussions with people all the time and ask, ‘Is this data important to my yield?’ In a normal operating factory, you’re not going to see anything related to your yield, because your yield should be normal,” said McIntyre. “If a single tool moves out of that normality, it still may not show up in your yield. Why? Because everything else that’s suppressing your yield is still larger than that one signal. The only time a tool, an operation, or a metrology reading should impact your yield is when it is the most significant of all of the other effects that go into your yield calculation.”

To control a wafer factory operation, engineering teams rely on statistical process control (SPC) charts for process equipment and inspection, each chart representing a single parameter (i.e., univariate). With the complexities of some processes, interactions between multiple parameters (i.e., multivariate) can result in yield excursions. This is when engineers leverage data to make decisions on subsequent fab or metrology steps to improve yield and quality.

“When we look at fab data today, we’re doing that same type of adaptive learning,” McIntyre said. “If I start seeing things that don’t fit my expected behavior, they could still be okay by univariate control, but they don’t fit my model in a multi-variate sense. I’ll work toward understanding that new combination. For instance, in a specific equipment my pump down pressure is high, but my gas flow is low and my chamber is cold, relatively speaking, and all (parameters) individually are in spec. But I’ve never seen that condition before, so I need to determine if this new set of process conditions has an impact. I send that material to my metrology station. Now, if that inline metrology data is smack in the center, I can probably disregard the signal.”
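
McIntyre’s example, in which every parameter is within its own limits but the combination has never been seen before, is essentially a multivariate outlier test. The sketch below, using synthetic numbers, shows a reading that passes 3-sigma univariate limits yet sits far from the historical operating region by Mahalanobis distance, the kind of signal that would justify sending material to metrology.

```python
# Sketch: a reading can pass univariate SPC limits yet be a multivariate
# outlier. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(0)
# Simulated in-spec history: pump-down pressure, gas flow, chamber temperature
mean = np.array([5.0, 40.0, 65.0])
cov = np.array([[0.04, 0.05, 0.10],
                [0.05, 1.00, 1.20],
                [0.10, 1.20, 4.00]])   # positively correlated parameters
history = rng.multivariate_normal(mean, cov, size=500)

mu = history.mean(axis=0)
sigma = history.std(axis=0)
cov_inv = np.linalg.inv(np.cov(history, rowvar=False))

# New reading: pressure high, flow low, chamber cold -- each about 2.5 sigma,
# so within univariate limits, but against the usual correlation structure.
x = np.array([5.5, 37.5, 60.0])
d = x - mu
print("passes univariate SPC:", bool(np.all(np.abs(d) < 3 * sigma)))
print("Mahalanobis distance^2:", round(float(d @ cov_inv @ d), 1))  # ~20, flag it
```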

With designers embracing a greater number of on-die circuit monitors, these too can be considered data sources. Using on-die circuit monitors to provide data during the test process and during in-field usage is a new end-to-end data trend. Engineers look for significant changes in data values from one manufacturing test step to the next, as these may indicate a failure or a risk of failing later. With these same monitors, engineers can observe changes during a product’s lifetime and use the data to predict early life failures or to better comprehend the interactions between actual workloads and system performance. To effectively leverage this opportunity, engineers must understand their product learning objectives, and where and how on-die circuit monitors can best support those objectives.
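
A minimal sketch of that step-to-step comparison, with synthetic values: compute the shift in an on-die monitor reading (here a normalized ring-oscillator frequency) between two test insertions, and flag any device whose shift is an outlier relative to the population. Names and thresholds are assumptions for illustration.

```python
# Sketch: flag devices whose on-die monitor reading shifts abnormally
# between two test insertions (toy data, illustrative threshold).
import numpy as np

device_ids = [f"D{i:03d}" for i in range(8)]
wafer_sort = {d: 1.00 + 0.01 * i for i, d in enumerate(device_ids)}  # normalized freq
final_test = dict(wafer_sort)
final_test["D005"] -= 0.08  # one device degrades between insertions

deltas = np.array([final_test[d] - wafer_sort[d] for d in device_ids])
# Robust limits from the population of deltas (median +/- 3 * MAD-based sigma)
med = np.median(deltas)
mad_sigma = 1.4826 * np.median(np.abs(deltas - med))
limit = 3 * max(mad_sigma, 1e-6)  # floor avoids a zero limit on toy data
flagged = [d for d, dv in zip(device_ids, deltas) if abs(dv - med) > limit]
print("suspect devices:", flagged)  # -> ['D005']
```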

“The data depends on what the customer’s specific concerns are going to be and that will depend on the design and end system,” said Aileen Ryan, senior director of portfolio strategy for Tessent silicon lifecycle solutions at Siemens EDA. “If they’re designing a base station versus if they’re designing a data system for a car, they probably have different concerns. We work with them in advance to design the in-system circuitry to support their specific needs.”

Once the designer chooses the circuitry, extensive simulation is typically performed.

“We provide many benefits from our platform-based approach, which takes input from every stage from design, validation, testing and in-field thanks to data generated from our Universal Chip Telemetry technology,” said Marc Hutner, senior director of marketing at proteanTecs. “We connect design information, like Monte Carlo simulations, to enable our customers to compare simulated vs actual performance. Machine learning applied to both the pre-silicon and post-silicon data sources enables a clearer understanding of device operation. The insights from one stage then can be used to inform the understanding of other stages — for instance, an offset of voltage causing a performance shift of the end product. Our common data language enables applications that previously were not possible.”
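
This is not proteanTecs’ actual pipeline, but a generic sketch of the simulated-versus-measured comparison described above: a synthetic Monte Carlo forecast of a monitor value is compared against equally synthetic silicon readings, surfacing a consistent offset.

```python
# Generic sketch: compare a pre-silicon Monte Carlo forecast of a monitor
# value against measured silicon (all numbers are synthetic).
import numpy as np

rng = np.random.default_rng(1)
simulated = rng.normal(loc=1.00, scale=0.03, size=10_000)  # pre-silicon forecast
measured = rng.normal(loc=0.97, scale=0.03, size=500)      # post-silicon readings

offset = measured.mean() - simulated.mean()
std_err = measured.std(ddof=1) / np.sqrt(len(measured))    # uncertainty of the offset
print(f"mean offset: {offset:+.3f} ({offset / std_err:+.1f} standard errors)")
# A consistent offset like this could reflect, e.g., a supply-voltage shift
# changing end-product performance.
```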

Connecting end-to-end data
Success at using data across any end-to-end direction requires a common name, tag, or index.

Data tags connect the different data sources generated at a manufacturing step. With acquisitions of fabs, it’s not uncommon to run into different naming conventions, such as wafer lot names. These differences can impede comparisons between factory equipment and overall factory operation performance metrics. Having the same terminology facilitates determining commonalities from failing die, units, or customer returns, which in turn provides insights into possible causes for failure.
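
A first practical step is often normalizing such names to one canonical form. A toy sketch, with invented lot-name formats:

```python
# Sketch: normalize lot names from acquired fabs to one canonical form.
# The formats below are invented for illustration.
import re

def canonical_lot(raw: str) -> str:
    """Map site-specific names like 'FAB2_123456' or 'L123456.A' to 'LOT123456'."""
    m = re.search(r"(\d{6})", raw)
    if not m:
        raise ValueError(f"unrecognized lot name: {raw!r}")
    return f"LOT{m.group(1)}"

assert canonical_lot("FAB2_123456") == "LOT123456"
assert canonical_lot("L123456.A") == "LOT123456"
```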

The equipment history (aka equipment genealogy) can be quite illuminating, but seeing patterns across the supply chain requires traceability — a way to track an individual device throughout the supply chain. In tracing a wafer, device, or packaged unit through a manufacturing operation, any specific step can involve 20 different etching machines, each with 3 to 4 different operators, and 200 different FOUP carriers. Similarly, the utilization of a complex SoC in an automobile will differ with the driver and driving environment over its lifetime.

Joining this data together requires a common identifier, or ID.
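
Once the IDs line up, siloed tables can be joined into a single traceability thread. A minimal pandas sketch, with hypothetical tables and column names:

```python
# Sketch: a shared device ID joins siloed data sources for traceability.
# Table and column names are hypothetical.
import pandas as pd

wafer_test = pd.DataFrame({
    "device_id": ["D001", "D002", "D003"],
    "wafer_id": ["LOT123456-W07"] * 3,
    "vmin_v": [0.62, 0.65, 0.71],
})
assembly = pd.DataFrame({
    "device_id": ["D001", "D002", "D003"],
    "bond_tool": ["WB-02", "WB-02", "WB-05"],
})
field_returns = pd.DataFrame({"device_id": ["D003"], "failure_mode": ["early-life"]})

# One thread: field return -> assembly tool -> wafer test history
thread = (field_returns
          .merge(assembly, on="device_id", how="left")
          .merge(wafer_test, on="device_id", how="left"))
print(thread)
```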

“In some cases, we need all the raw data and in other cases we need aggregated data. How it’s aggregated then depends on a lot of things, and you need traceability,” said Simon. “If you have a car failing in the field, you want to be able to search the data for what parts went through what equipment. Which wafers did they come from, and what kind of test history?”

Others note that without traceability, you can’t deliver on end-to-end analytics.

“When you’ve got all this test data, you can go back and analyze it and do some more engineering on that particular failing part — as long as you’ve got the identity thread,” said Dave Huntley, business development director at PDF Solutions. “By the way, that’s not always the case. But to connect data you must have that traceability thread. Otherwise, you’re wasting your time.”

The span of time needed to make connections varies. From chip design to placement in a system can easily take 2 years. Depending upon the system, the product’s lifetime can be 4 years (a data center) or 15 years (a vehicle). That stretches out the in-field feedback loop needed to validate predictive models.

However, caution is needed. “Connecting data over a long time span when looking at history is a function of accurate accounting and traceability of data,” said Onto’s McIntyre. “When connecting data or trying to connect data in a forward-looking manner, such as a forecast, the ability to represent the future — or more specifically, the uncertainty of a future in that forecast — is quite often not accounted for.”

Conclusion
The Holy Grail of end-to-end analytics is now within tantalizing reach of semiconductor engineers. The promise of end-to-end analytics is predicated on data integrity, use cases, and connectivity between the various data sources. Connectivity has a number of directions — multiple data sources and their associated databases for manufacturing, on-chip data monitors reporting out at test, and device data and history throughout the manufacturing supply chain.

While pre-determined use cases guide the data relationships of interest, engineers continue to find new questions they can now ask. Ultimately, two questions remain the most important: Does the semiconductor device meet the end customer’s needs, and is the semiconductor supplier making a profit?

Related Stories
Testing More To Boost Profits
With performance binning, chipmakers profit more from test.

Using Analytics To Reduce Burn-In
Data-driven approach can significantly reduce manufacturing costs and time, but it’s not perfect.

Finding And Applying Domain Expertise In IC Analytics
It takes a team of experts to set up and effectively use analytics.

Big Payback For Combining Different Types Of Fab Data
But technical, physical and business barriers remain for fully leveraging this data.

Enablers And Barriers For Connecting Diverse Data
Integrating multiple types of data is possible in some cases, but it’s still not easy.

Too Much Fab And Test Data, Low Utilization
For now, growth of data collected has outstripped engineers’ ability to analyze it all.

Data Issues Mount In Chip Manufacturing
Master data practices enable product engineers and factory IT engineers to deal with variety of data types and quality.


