Digital Twins Find Their Footing In IC Manufacturing

Technology will speed time to yield and add efficiency, but standards are needed for it to live up to its potential.


Momentum is building for digital twins in semiconductor manufacturing, tying together the various processes and steps to improve efficiency and quality, and to enable more flexibility in the fab and assembly house.

The movement toward digital twins opens up a slew of opportunities, from building and equipping new fabs faster to speeding yield ramps by reducing the number of silicon-based tests needed to qualify a device. Using digital twins, yield excursions potentially can be isolated and rectified more quickly, reducing scrap and improving equipment utilization rates.

However, several challenges must be overcome before digital twins proliferate in semiconductor manufacturing. The chip industry still needs to develop a common digital twin framework and a standard interface format. In addition, a secure way of sharing data between different vendors needs to be built into different processes. In response, multiple parties throughout the semiconductor ecosystem are collaborating to address these needs and take today’s digital twin solutions from the tool level to the fab level, and from there up to the operational enterprise level.

One of the most compelling arguments for digital twins is its potential to replace costly wafer runs with software runs. “With digital twins, the reason you’re using a software model instead of the actual physical device or system is because you can get your response to particular real-time information faster and hopefully at a lower cost,” said Anjaneya Thakar, executive director of product and manufacturing solutions at Synopsys. “So the question is, ‘How can you use real-time data coming out of the fab to predict what’s going to happen, for instance, to gain better process control?’ Digital twin is a technology solution that enables faster and cheaper decisions anywhere.”

Others agree. “Having digital twins as part of an interconnected, physical-virtual semiconductor ecosystem has the potential to improve the speed-to-solution and reduce the cost of innovation,” said Joseph Ervin, senior director for Semiverse Solutions at Lam Research. “With virtual materials and equipment, experimentation can be drastically faster, less expensive, more accessible, and richer in data.”

Same song, different beat
The semiconductor industry already is using digital twins for specific applications, most notably run-to-run control and lot scheduling and dispatching. “Digital twins are not new,” said James Moyne, an associate research scientist at the University of Michigan and consultant to the Global Services Group of Applied Materials. “Way back, before we called them digital twins, we were doing digital twin work. For instance, I started a company that did run-to-run control in the 1990s, predicting final film thickness and uniformity in an etch process.”
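Run-to-run control of the kind Moyne describes is commonly built around an exponentially weighted moving average (EWMA) estimate of the process disturbance. The sketch below is a minimal, generic illustration of that idea, not any vendor's implementation; the linear process model, gain, and numbers are invented for the example.

```python
# Minimal sketch of an EWMA run-to-run controller that keeps an etch (or
# deposition) output such as film thickness on target. All process names,
# gains, and numbers are illustrative, not any vendor's actual model.

class EwmaRunToRunController:
    def __init__(self, target, gain, lam=0.3):
        self.target = target      # desired output, e.g. thickness in nm
        self.gain = gain          # assumed nm per second of process time
        self.lam = lam            # EWMA weight on the newest measurement
        self.offset = 0.0         # estimated tool/process disturbance

    def next_recipe(self):
        # Solve the simple linear process model  y = gain * u + offset  for u.
        return (self.target - self.offset) / self.gain

    def update(self, recipe, measured):
        # Blend the newest observed disturbance into the running estimate.
        observed_offset = measured - self.gain * recipe
        self.offset = self.lam * observed_offset + (1 - self.lam) * self.offset


ctrl = EwmaRunToRunController(target=100.0, gain=2.0)
for lot in range(5):
    u = ctrl.next_recipe()     # process time to use for this lot
    y = 2.0 * u + 3.0          # stand-in for the real measurement (tool drifts +3 nm)
    ctrl.update(u, y)
    print(f"lot {lot}: recipe {u:.2f}s, measured {y:.1f}nm")
```

Each lot's measurement nudges the disturbance estimate, so the recipe for the next lot compensates for slow drift in the chamber.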

What has changed is the ability to use digital twins to map interplay between different electrical and physical characteristics.

“A digital twin is a purpose-driven, dynamic digital replica of a physical asset, product, or process,” said Moyne. “One purpose could be predicting when a component is going to fail, for instance. Or it might involve tracking the movement of wafers through a fab in a schedule/dispatch system to ensure on-time delivery of product. The dynamic nature of a digital twin is captured in its real-time data updates from the physical process, and the use of real-time data makes the model’s predictions as accurate as possible. With digital twins, it’s possible to run multiple what-if scenarios at a much lower cost than that associated with running wafer-level design of experiments for each what-if scenario.”

“Customers want to create these digital twins because once customers have a digital twin, they’re better able to track the past and get deeper insights. But most importantly, they can predict future behavior,” said Sameer Kher, senior director of product development for systems and digital twins at Ansys. He emphasized the need for both intelligent algorithms and physics-based modeling. “A great model for digital twins combines physics and simulation with data and machine learning. Using just data-based modeling or just physics-based modeling alone will not achieve the best results.”
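As a rough illustration of the hybrid approach Kher describes, the sketch below pairs a toy physics-based etch-rate expression with a data-driven residual correction fitted to synthetic measurements. The formula, coefficients, and data are all assumptions made up for the example, not Ansys models.

```python
# Sketch of a hybrid digital-twin model: a first-principles estimate plus a
# data-driven correction learned from fab measurements. The "physics" below
# is a toy Arrhenius-style etch-rate expression with invented coefficients.

import numpy as np

def physics_etch_rate(temp_k, rf_power_w):
    # Simplified physics-based estimate (nm/min) -- placeholder model.
    return 50.0 * np.exp(-2000.0 / temp_k) * (rf_power_w / 300.0) ** 0.5

# Historical inputs and measured etch rates (synthetic data for illustration).
temps = np.array([320.0, 340.0, 360.0, 380.0, 400.0])
powers = np.array([250.0, 300.0, 350.0, 300.0, 280.0])
measured = np.array([0.115, 0.150, 0.185, 0.205, 0.230])

# Learn the residual between physics and measurement as a linear function of temperature.
residual = measured - physics_etch_rate(temps, powers)
coeffs = np.polyfit(temps, residual, deg=1)

def hybrid_etch_rate(temp_k, rf_power_w):
    # Physics prediction corrected by the learned, data-driven residual term.
    return physics_etch_rate(temp_k, rf_power_w) + np.polyval(coeffs, temp_k)

print(hybrid_etch_rate(365.0, 320.0))
```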

Most experts in the digital twin arena suggest starting out small and gradually stepping up to higher levels of abstraction and greater levels of process complexity. “We follow these basic principles of design, build, operate, and optimize, and this essentially is a loop that keeps on repeating (see figure 1),” said Indranil Sircar, global CTO for Manufacturing & Mobility Industry at Microsoft. “You know that you can’t solve all the problems at once, so you basically start by doing one small thing and then build from there.”

Fig. 1: Digital twins steps from data acquisition to predictions to what-if exploration and optimization. Source: Microsoft

Sircar noted there are lessons to be learned from other industries that are more mature in their implementation of digital twins. “We are looking at bringing together this whole construct of the metaverse, and the metaverse is nothing more than essentially how we start to visualize the process of digitally building things before you build them physically. You can create a digital twin of a system. For example, Boeing is talking about building their next aircraft completely digitally first, and being able to visualize and interact with that twin,” said Sircar. “This enables you to take IoT data and overlay it on top of the digital twin to visualize the impact of it, and then run probability scenarios using AI or ML and even be able to achieve certain autonomous capabilities, resetting the processes or even changing the recipes as certain things happen. But there is a maturity curve, and certain industries are moving faster than others mainly because of complexities within the data and the models.”

Unfortunately, the digital twin solutions in fabs today tend to be developed as point solutions. “You’ve got your run-to-run process control team. You’ve got your health monitoring team. You’ve got your predictive maintenance team. They’re all developing and deploying solutions using their own algorithms, their own interfaces, and their own verification and validation techniques. So today’s solutions are really developed in silos, which creates a bunch of difficulties for rolling out new solutions, maintaining the solutions, and making them interoperable,” said Moyne.

A very familiar digital twin
One way of viewing the larger digital twin picture is to look at a digital twin that many people use every day — GPS apps on mobile phones (see figure 2). In these applications, the data is anonymized (maps don’t reveal the occupants at an address), and the digital map accurately represents real-world road lengths, intersections, and traffic levels. The GPS map is continuously fed updates of traffic congestion and alternate routes in real time to be able to deliver accurate predictions of arrival times, which are also informed by actual arrival data from other users.

Fig. 2: GPS is a digital twin made available to different parties using different applications, much as a fab or OSAT uses a digital twin for specific applications. Source: PDF Solutions

In a similar manner, a digital twin in a fab or OSAT factory has applications whose value relies on accurate models of tools, components, and/or process steps, which are tied to high-quality underlying data from the physical tools, components, and processes.

“The basic data that is fed into the system can be accessed by multiple users, whether it goes into a design, a process, or equipment — or all three of them at one time. So that has huge benefits to an enterprise,” said Ranjan Chatterjee, vice president of Smart Factory Solutions at PDF Solutions. “A key part is you need to have the software designed with clear KPIs (key performance indicators), and the standards should be implemented in a consistent manner, so that different digital twins work together and they can process data in real-time. Then you know the digital twin actually captures what you want it to capture.”

The first step in creating a digital twin involves determining its purpose, such as predicting when a component will require replacement. The engineering team then gathers the relevant data set. Next comes data modeling and putting it into context with metadata, followed by creation of the dashboard, either in the cloud or on-premises, with assignment of out-of-bound alerts. Next, the data is analyzed and machine learning algorithms are brought in to predict an outcome and prescribe a response. Finally, the step is optimized, giving a go/no-go signal to proceed to the next step.
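For the component-replacement purpose mentioned above, those alerting and prediction steps might look like the minimal sketch below. The sensor, thresholds, and linear trend extrapolation are illustrative assumptions standing in for a production-grade machine learning model.

```python
# Sketch of the alerting/prediction steps for one purpose: predicting when a
# component (e.g. a vacuum pump) needs replacement. Thresholds, the sensor,
# and the linear trend fit are illustrative only.

import numpy as np

VIBRATION_LIMIT = 4.0   # assumed out-of-bound alert threshold (mm/s RMS)
REPLACE_LIMIT = 6.0     # assumed level at which the pump must be replaced

def check_alert(latest_reading):
    # Dashboard-style out-of-bound alert on the newest sensor value.
    return latest_reading > VIBRATION_LIMIT

def hours_to_replacement(hours, vibration):
    # Fit a linear trend to the vibration history and extrapolate to the
    # replacement limit -- a stand-in for a trained degradation model.
    slope, intercept = np.polyfit(hours, vibration, deg=1)
    if slope <= 0:
        return None  # no degradation trend detected
    return (REPLACE_LIMIT - intercept) / slope - hours[-1]

hours = np.array([0, 100, 200, 300, 400], dtype=float)
vibration = np.array([2.1, 2.4, 2.9, 3.3, 3.8])

print("alert:", check_alert(vibration[-1]))
print("estimated hours to replacement:", hours_to_replacement(hours, vibration))
```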

Synopsys’ Thakar illustrates how the best-known method (BKM) of a process can be arrived at more expeditiously using a digital twin model relative to the traditional process improvement methodology engineers follow (see figure 3). For example, a trench etch might be followed by a CD measurement, a cycle that is repeated as the process knobs on the etcher (e.g., pressure, temperature, RF power) are adjusted. “The whole concept of the digital twin is that it’s a trusted representation of the actual specifics of what’s happening inside equipment. There is an exhaustive physics-based model that is controlled by some abstracted knobs on the tool that the user can manipulate to alter the etch profile. Using a digital twin gives you faster feedback, which translates into cost savings.”

Fig. 3: A digital twin can significantly reduce the number of build and test cycles needed to qualify a process. Source: Synopsys
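In code, that build-and-test loop becomes a virtual design of experiments: knob combinations are swept against the twin rather than run on wafers. The response-surface function, target CD, and knob ranges below are invented placeholders, not Synopsys’ physics model.

```python
# Sketch of a "virtual DOE": sweeping etcher knob settings against a
# digital-twin model instead of running and measuring wafers for every
# combination. The CD model below is a made-up surrogate.

import itertools

def twin_predict_cd(pressure_mtorr, temp_c, rf_power_w):
    # Placeholder digital-twin response surface for trench CD (nm).
    return 45.0 - 0.02 * (rf_power_w - 500) + 0.05 * (pressure_mtorr - 30) - 0.03 * (temp_c - 60)

TARGET_CD = 44.0  # nm

pressures = [20, 30, 40]     # mTorr
temps = [50, 60, 70]         # deg C
powers = [450, 500, 550]     # W

# Pick the knob combination whose predicted CD lands closest to target.
best = min(
    itertools.product(pressures, temps, powers),
    key=lambda knobs: abs(twin_predict_cd(*knobs) - TARGET_CD),
)
print("best knobs (pressure, temp, power):", best,
      "predicted CD:", round(twin_predict_cd(*best), 2), "nm")
```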

One of the areas of low-hanging fruit for digital twins involves workforce training. There are substantial benefits associated with learning how to operate complex, specialized tools using a digital twin interface prior to operating, for example, the actual lithography scanners, etchers, or CMP systems. In fact, workforce training is one discipline being funded under the CHIPS for America Program’s $285 million investment to build the CHIPS Manufacturing Institute, a public-private partnership focused on developing, applying, and using digital twins in semiconductor manufacturing, advanced packaging, assembly, and testing processes.

Such funding is well timed. “There is a significant start-up cost associated with digital twins because there is an infrastructure layer that needs to be set up. So even if you have point infrastructure solutions, it needs to be federated and that tends to be quite expensive,” said Ansys’ Kher.

Calls for standardization
While the first substantial funding dedicated to digital twins in semiconductors marks a starting point, some companies have had a roadmap in place for years. “Lam’s vision for the future is a full representation, or digital twin, of all semiconductor manufacturing systems and processes,” said Ervin. “We are doing some of this today, embarking on creating digital twins for devices, processes, plasma reactors, and our equipment.”

Ervin cited four layers that are needed to generate a digital twin for a process tool:

  • A chamber-level twin, which requires plasma simulation, like that provided by Lam’s VizGlow
  • A process-level digital twin, which requires the feature-level simulation available in Lam’s SEMulator3D modeling platform
  • Equipment-level digital twins that rely on sensors and in-line metrology on the wafer fab equipment
  • Facility-level digital twins for the facility in which the equipment is placed

Interoperability is needed to pull all these layers together, so a standard interface is necessary for the industry to get the greatest benefit out of digital-twin technology. DTs within the same class, and from one class to another, must be able to share data. This is where ontology comes in.

“People see that interoperability is becoming critical, and there are ways of reducing the friction involved in sharing data between multiple partners and vendors,” said Microsoft’s Sircar. “For instance, we developed an ontology-based approach called MDS, manufacturing data services, where providers can send the data through multiple MES software domains, and we are working with the Digital Twins Consortium on this. It is getting to the point where you can actually do this much faster than you could a few years ago. But the question is still contextualization of the data, or the ontology associated with it, because everybody has their own way of defining equipment, for instance.”
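The contextualization problem Sircar raises can be pictured as a mapping layer: each vendor's native field names are translated into one shared, ontology-defined schema. The field names and the tiny schema below are hypothetical illustrations, not MDS or a Digital Twins Consortium ontology.

```python
# Sketch of contextualization via a shared ontology: two vendors describe the
# same etcher with different field names, and a small mapping turns both into
# one schema. All names here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Equipment:              # shared, ontology-defined view of a tool
    equipment_id: str
    equipment_type: str
    chamber_count: int

# Per-vendor mapping from native field names to the shared schema.
VENDOR_MAPPINGS = {
    "vendor_a": {"tool_id": "equipment_id", "tool_class": "equipment_type", "chambers": "chamber_count"},
    "vendor_b": {"asset": "equipment_id", "category": "equipment_type", "pm_count": "chamber_count"},
}

def to_ontology(vendor, record):
    mapping = VENDOR_MAPPINGS[vendor]
    return Equipment(**{mapping[k]: v for k, v in record.items() if k in mapping})

print(to_ontology("vendor_a", {"tool_id": "ETCH-07", "tool_class": "etch", "chambers": 4}))
print(to_ontology("vendor_b", {"asset": "CMP-02", "category": "cmp", "pm_count": 2}))
```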

The call to standardize the digital twin framework speaks to how to organize all the various components and programs that a digital twin uses, including software and hardware — both in the cloud and on-premises. For instance, various software programs already are in use on the factory floor, such as those for advanced process control (APC), fault detection and classification (FDC), and supply chain management. In addition, there are various software tools that use mathematical models for transfer learning, federated learning, physical modeling, and data-driven modeling on platforms such as Python or MATLAB. To promote a culture of reuse, extensibility, and interoperability, these tools must share a common architecture or framework.

And because the semiconductor supply chain consists of a range of companies in various geographic locations — including the design community, original device manufacturers, fabless semiconductor companies, foundries, assembly and testing facilities, and end customers — there is a need to standardize with respect to a common digital twin framework, likely under SEMI and/or NIST, that will enable interoperability and reusability.

Another pressing issue appears to be settling on a common interface language so that different applications and layers of digital twins can communicate. “Then we can increase the scope of digital twin applications so we can expand beyond our four walls into areas like inventory management, customer profiling, and just-in-time manufacturing,” said Moyne. Connecting digital twins together leads to further benefits. “A good example might be, ‘I’m going to take this run-to-run controller, which is optimizing my process, and I’m going to hook that to my scheduling and dispatch. Now I can send my most important wafers to the processes that are producing the best wafers.’ We call this aggregation of digital twins.”
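A minimal sketch of that aggregation idea: a dispatch twin reads quality scores published by per-chamber run-to-run twins and routes the highest-priority lots to the best-performing chambers. The scores and lot names are invented for illustration.

```python
# Sketch of "aggregation of digital twins": a dispatch twin consumes quality
# scores from per-chamber run-to-run twins and sends the highest-priority lots
# to the best-performing chambers. All values are illustrative.

# Quality score per chamber, e.g. recent Cpk reported by each run-to-run twin.
chamber_quality = {"ETCH-A": 1.55, "ETCH-B": 1.80, "ETCH-C": 1.30}

# Lots waiting for the etch step, with higher number = higher priority.
lot_priority = {"LOT-101": 3, "LOT-102": 1, "LOT-103": 2}

# Match highest-priority lots to highest-quality chambers.
chambers_best_first = sorted(chamber_quality, key=chamber_quality.get, reverse=True)
lots_priority_first = sorted(lot_priority, key=lot_priority.get, reverse=True)

dispatch = dict(zip(lots_priority_first, chambers_best_first))
print(dispatch)   # {'LOT-101': 'ETCH-B', 'LOT-103': 'ETCH-A', 'LOT-102': 'ETCH-C'}
```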

Making this scheme work will require standards. “The first standards that are going to have to come out are going to be a definition of what a digital twin is, and what a digital twin framework is, and what are the basic components of it,” said Moyne. “So even at that stage, if you read three different papers, you’ll see three largely similar definitions. But there’s no single definition. So we need to get there first, and that’s something SEMI is working on. There’s government funding from the CHIPS Act that could be used to do that. The next thing beyond the definition is an interface specification. No matter what type of digital twin it is, it always has one or more models. It always reports some kind of prediction as well as the prediction’s accuracy. So what are the required ins and outs to doing that? And then finally, with respect to the supply chain, we need to integrate with upstream and downstream standards organizations — especially in regard to the security aspect, because everybody’s got their own vision of what security means in their piece of the manufacturing supply chain. So data must be conveyed without data leakages.”
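A standardized interface of the kind Moyne describes might, at minimum, require every twin to accept new data and to return a prediction together with an accuracy estimate. The sketch below shows one hypothetical shape for such an interface; the class and method names are not from any SEMI specification.

```python
# Hypothetical minimal digital-twin interface: ingest real-time data, and
# report a prediction together with an accuracy estimate. Names and the toy
# pump-lifetime twin are illustrative only.

from abc import ABC, abstractmethod
from typing import Any

class DigitalTwin(ABC):
    @abstractmethod
    def ingest(self, sample: dict[str, Any]) -> None:
        """Update the twin's internal model(s) with new real-time data."""

    @abstractmethod
    def predict(self, query: dict[str, Any]) -> tuple[Any, float]:
        """Return (prediction, estimated accuracy) for the given query."""

class PumpLifetimeTwin(DigitalTwin):
    def __init__(self):
        self.readings = []

    def ingest(self, sample):
        self.readings.append(sample["vibration"])

    def predict(self, query):
        # Toy prediction: remaining hours plus a fixed confidence figure.
        remaining = max(0.0, 500.0 - 50.0 * len(self.readings))
        return remaining, 0.8

twin = PumpLifetimeTwin()
twin.ingest({"vibration": 2.3})
print(twin.predict({"component": "pump"}))   # (450.0, 0.8)
```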

Preventing leakage is a widespread concern. “Leakage happens when you don’t have a very consistent approach,” said PDF Solutions’ Chatterjee. “We have to figure out how to implement these solutions together. There are evolving standards that are going on in the area of security, and interoperability needs to happen in a consistent manner.”

This is why standards are so essential. “We need to look at all the existing standards,” said Microsoft’s Sircar. “For example, automotive OEMs have similar challenges in that they are integrating parts from many suppliers, and there’s a lot of IP and lots of software, and they’ve come up with good solutions. So perhaps that can be inspirational.”

Conclusion
Digital twins are poised to play an important and growing role in reducing waste and making the most of limited human capital as the industry heads into another upturn and more facilities are built, equipped, and brought online. But the new digital twins will have a much broader impact than those in the past, helping to improve semiconductor manufacturing at every level by raising the abstraction level across a variety of processes to connect various parts of the manufacturing ecosystem.

Related Reading
Digital Twins Gaining Traction In Complex Designs
Improvements are still needed in integration of data and tools, but faster multi-physics simulations are becoming essential for context-aware optimization and reliability.
When And Where To Implement AI/ML In Fabs
Smarter tools can improve process control, identify the causes of excursions, and accelerate recipe development.


