Getting The Biggest ROI On Your Digital Twin

Creating a successful digital twin relies on asking the right questions and knowing what problem you’re trying to solve.


In the semiconductor industry, digital twins are the focus of a lot of attention, with substantial investments from industry players and governments alike. This year, the European Union and the United States have pledged hundreds of millions of dollars in grants and funding opportunities, including for the new CHIPS Digital Twin Manufacturing USA Institute. Ultimately, many people see great value in innovating, commercializing and scaling digital twin technology.

As with many trends, digital twins are the subject of speculation and fervor. Unfortunately, this enthusiasm can drive well-intentioned users and organizations to choose solutions they don’t need – or to spend too much time and money before arriving at reliable ones.

Getting back to basics, a digital twin is a virtual representation or model that serves as the real-time digital counterpart of a physical object or process. The concept is not new to the semiconductor industry, but digital twins are experiencing a resurgence of interest, alongside other industry efforts to use data to make decisions across the entire value chain. Digital twins are being applied in manufacturing execution systems (MES) to optimize and model complex manufacturing environments, in VR models that visualize tools in real time, and in physical models that simulate processes and predict results, among other applications. Across all of these, AI is a driving force, as many manufacturers seek to inject advanced AI into their digital twins to enable more dynamic modeling.

No matter the industrial environment, manufacturers have numerous ways to approach digital twins. Selecting the best path forward comes down to finding the simplest way to address the business problem, clearing the noise, and ensuring consistently trustworthy results. Developing digital twins in an industrial setting means asking the right questions and continually confirming that you are solving the right problem – and that you are not venturing too far from that solution.

First, you need to determine the problem at hand and the outcome you seek. For example, are you trying to understand how a tool or process area is performing, or are you trying to predict the outcome of a process?

Next, ask yourself whether you are trying to create a digital twin of reality as it is, or of reality as it should be. If the two conflict, how can they be reconciled? In addition, are you trying to predict how a process should perform based on physical inputs, or to predict results statistically? And what happens if the two do not match?
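To make that last distinction concrete, here is a minimal sketch (in Python, with invented names and illustrative coefficients, not real process physics) of a physics-based estimate for a hypothetical etch step alongside a statistical fit of historical data, plus a check that flags when the two diverge:

```python
import numpy as np

# Hypothetical physics-based twin: etch depth grows linearly with time
# at a rate set by RF power (illustrative coefficients, not real physics).
def physical_etch_depth(time_s: float, rf_power_w: float) -> float:
    rate_nm_per_s = 0.8 + 0.002 * rf_power_w
    return rate_nm_per_s * time_s

# Statistical twin: a least-squares fit to historical (time, power, depth) data.
def fit_statistical_model(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    # X columns: [time_s, rf_power_w, 1.0]; returns fitted coefficients.
    return np.linalg.lstsq(X, y, rcond=None)[0]

def check_mismatch(time_s: float, rf_power_w: float,
                   coeffs: np.ndarray, tol_nm: float = 5.0) -> float:
    physical = physical_etch_depth(time_s, rf_power_w)
    statistical = float(coeffs @ np.array([time_s, rf_power_w, 1.0]))
    residual = statistical - physical
    if abs(residual) > tol_nm:
        # The two twins disagree -- investigate before trusting either.
        print(f"Physical/statistical mismatch: {residual:.1f} nm")
    return residual
```

The disagreement itself is informative: a persistent residual suggests that either the physical assumptions or the historical data no longer reflect how the process actually behaves.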

Another important step: clearing the noise and not buying or building components you don’t need. Ask yourself: do you need a model that outputs results to drive decisions, or do you need to visualize something? Are you using technology that can visualize the right things? It is easy to get caught up in the latest tech and advanced visualization capabilities, but many applications don’t need all the bells and whistles!

In addition, you need to determine whether you have the right infrastructure and supporting systems to obtain, organize and distribute data across your processes. When organizing data, spend appropriate time up front on data cleanliness and organization, and make sure data is programmatically stored so it can continually feed the models. Then there is model security and traceability: once a model is developed, how can you be sure which version is in use? Can you trace a poorly made estimate or decision back to a specific model?
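On the traceability question, one lightweight pattern is to stamp every estimate with an identifier derived from the deployed model artifact itself, so a questionable decision can be traced back to the exact model version that produced it. A minimal sketch, assuming a serialized model file and hypothetical field names:

```python
import hashlib
import json
from datetime import datetime, timezone

def model_fingerprint(model_bytes: bytes) -> str:
    """Content hash of the serialized model -- changes whenever the model does."""
    return hashlib.sha256(model_bytes).hexdigest()[:12]

def record_prediction(model_id: str, inputs: dict, output: float) -> str:
    """Stamp every estimate with the model that produced it, for later audit."""
    return json.dumps({
        "model_id": model_id,  # ties this decision to one exact model version
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,
        "output": output,
    })

# Usage: fingerprint the deployed artifact once, then tag each prediction.
with open("etch_model.pkl", "rb") as f:          # hypothetical model file
    model_id = model_fingerprint(f.read())
audit_line = record_prediction(model_id, {"tool": "ETCH-07", "recipe": "R42"}, 41.8)
# In practice, append audit_line to a durable log or database.
```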

Solving these digital twin problems is especially important in semiconductor processing. After all, the process generates immense volumes of data, so it is crucial that data context is understood and captured upfront. With the stakes high for a model making critical decisions in a fab, you need to make sure the data is clean, trustworthy and consistently available.

There are even more questions to ask and matters to consider, but these are a good place to start.

All of which brings us to the elephant in the room: data.

It is the true lifeblood of any viable model or digital twin, whether it uses AI, machine learning, or traditional statistical or rules-based estimation. The problem is that in many cases data is unorganized, impure and not logically stored to maximize its value, requiring data scientists to devote significant time to collecting, cleaning and organizing it. That work matters, but the true value of your team members comes from their expertise in model development and data mining, not rote data cleaning.
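One way to reclaim that time is to push the routine checks into code that runs on every incoming batch, rather than leaving them to ad hoc notebook work. A minimal sketch using pandas, with hypothetical column names and limits:

```python
import pandas as pd

# Hypothetical expectations for an incoming batch of process data.
REQUIRED_COLUMNS = {"lot_id", "tool_id", "timestamp", "chamber_pressure_mtorr"}
PRESSURE_RANGE = (1.0, 200.0)  # assumed sane operating range, in mTorr

def validate_batch(df: pd.DataFrame) -> pd.DataFrame:
    """Reject malformed records up front so models see only clean, contextual data."""
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"Batch is missing required context columns: {missing}")

    df = df.dropna(subset=list(REQUIRED_COLUMNS))            # no partial records
    df = df.drop_duplicates(subset=["lot_id", "timestamp"])  # no double ingestion
    in_range = df["chamber_pressure_mtorr"].between(*PRESSURE_RANGE)
    return df[in_range]  # out-of-range rows go to engineering review, not the model
```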

To cut down on one of the biggest expenses of building any reliable digital twin, select a solution that handles this challenge outright, preferably one with a proven track record of storing critical data securely and in context and enabling rapid insights. The platform should be able to ingest data continuously from any process area, serving as a true single source of truth for the process data that feeds a digital twin.

Getting digital twin technology right is a must for the semiconductor industry, and momentum is building behind that vision. But developing digital twins or AI models for semiconductor processes means overcoming many challenges and resisting many distractions along the way.

Solving the data problem is not the end-all, but it is a substantial task that can eat away at critical timelines during development. And if data is not well organized and stewarded throughout production, it can erode the investment over time. It will take expert collaboration across process, data and operations teams to truly get the most return out of digital twins.


