It’s easy to see a digital twin as nothing more than a simulation model, but that would ignore a very important difference.
Ever since Siemens acquired Mentor Graphics in 2016, a new phrase has become more common in the semiconductor industry – the digital twin. Exactly what that is, and what impact it will have on the semiconductor industry, is less clear.
In fact, many in the industry are scratching their heads over the term. The initial reaction is that the industry has been creating what are now termed digital twins for the past 30 years. In some ways, they’re correct.
Suitable definitions can be found in papers from the aeronautics, aerospace and defense industries.
Within the context of the semiconductor industry, Frank Schirrmeister, senior group director for product management and marketing at Cadence, defines the term as it applies to an emulation product. “A digital twin is a digital representation of a product or system under development representing a functionally correct, predictable and reproducible representation of the product or system at the appropriate level of fidelity to perform verification, performance analysis and system validation tasks.”
Many add another important distinction. “Conceptually, it also integrates data from the actual operation of the system in the field,” says Roland Jancke, head of design methodology for Fraunhofer IIS/EAS. “Thereby the models are improved, and operation strategies are adapted. The digital twin learns throughout the whole lifecycle and hands that knowledge over to its real-world twin.”
In addition, some things are better in the virtual world. “Using a combination of data and simulation you can improve monitoring by adding virtual sensors,” adds Sameer Kher, director for systems and digital twins at ANSYS. “For example, this enables you to add temperature probes into an IC where there is no physical way to measure it.”
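As a rough illustration of what such a virtual sensor could look like, the Python sketch below estimates junction temperature from a sampled power trace using a first-order lumped thermal model. Every constant and name in it is an illustrative assumption, not data for any real device or vendor product.

```python
# A "virtual sensor": estimating die junction temperature from a measured
# power trace with a first-order RC thermal model. All constants below are
# illustrative assumptions, not values from any real device datasheet.

def simulate_junction_temp(power_trace_w, t_ambient_c=25.0,
                           r_th_c_per_w=2.0, c_th_j_per_c=0.5, dt_s=0.01):
    """Return the junction-temperature trace for a sampled power trace."""
    t_junction = t_ambient_c
    trace = []
    for p in power_trace_w:
        # Heat flows in from dissipated power, out through R_th to ambient.
        d_temp = (p * r_th_c_per_w - (t_junction - t_ambient_c)) \
                 / (r_th_c_per_w * c_th_j_per_c) * dt_s
        t_junction += d_temp
        trace.append(t_junction)
    return trace

# A 10W burst of activity followed by near-idle: the virtual probe reports
# a thermal transient that no physical sensor inside the die could.
power = [10.0] * 500 + [1.0] * 500
temps = simulate_junction_temp(power)
print(f"peak virtual-probe reading: {max(temps):.1f} C")
```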
Much of this may sound very familiar to people in the semiconductor space. “You take the behavior of a chip, be it function, thermal, mechanical, CFD—all of the various processes of the chip—that is the entire purpose of the EDA industry,” says Joe Sawicki, executive vice president, Mentor IC EDA at Mentor, a Siemens Business. “Providing digital twins for behavior is done so that it can be simulated, validated, and you can predict the yield of the device.”
Others agree. “Are you trying to design the electronic system, or are you trying to develop software for that system, or are you trying to validate that software with real-world connections in a real environment, perhaps using an FPGA prototype?” asks Marc Serughetti, senior director of business development and product marketing for automotive solutions at Synopsys. “The key is that a digital twin is the means to an end.”
Digital twin scope
The concept of a digital twin has been used in chip design almost since the first integrated circuits. “Plan and develop a model instead of the actual circuit, and make sure both show the same behavior when stimulated with the same signals,” says Fraunhofer’s Jancke. “You may even develop the software using the model and trust that it will run on the hardware as well. These principles are indispensable in today’s multi-billion-transistor designs.”
The reason why we are now hearing the term digital twin is because of the increasing scope of the EDA and semiconductor industries. “At the highest level, consider the airplane as the scope,” says Schirrmeister. “The digital twin is a digital version of the design, the system, to which I can apply real-life data and do some meaningful analysis.”
Automotive has been using digital twins for many years. “They have been doing it in the mechanical space and now they are looking at how they do it in the electronics space,” explains Serughetti. “Ultimately, the electronic system is not independent from the mechanical system because you are trying to control something. You have to bring the mechanical twin together with the electronic twin together with the software twin—all together as you build the system.”
The automotive industry should take the semiconductor industry as a guide. “They need to build a similar approach to what the semiconductor industry has done,” adds Serughetti. “They want to establish a virtual development process and representation, in this case at the SystemC level.”
Getting the right abstraction can be tricky. “It all depends upon fidelity,” says Schirrmeister. “If the system is fully modeled with all of the detail, which you can never have, you get exactly the same results as the physical system. If not, you can still do some meaningful analysis at the system level looking at what the real data would mean to the digital twin.”
“We often talk about hybrid emulation, which brings together RTL running on the emulation box working in parallel with a virtual prototype and that has a very different level of abstraction,” says Serughetti. “What you put in the virtual prototype is not necessarily what you are trying to verify. If you take the RTL model and use it for vehicle simulation, it will not work because it is too slow. You need a different model for that.”
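The cost gap behind that observation can be made concrete with a toy sketch. Neither class below reflects a real emulator or virtual prototyping API; the point is only that a transaction-level model pays one function call per transaction, while a cycle-level model, like the RTL in an emulator, pays one evaluation per clock.

```python
# Toy illustration of mixed abstraction in a hybrid setup. Neither class
# reflects a real emulator API; the point is the cost difference between
# the two levels of fidelity.

class TransactionLevelCPU:
    """Fast system model: one call per whole memory transaction."""
    def read(self, addr):
        return addr & 0xFF  # placeholder functional behavior

class CycleLevelDMA:
    """Detailed model: one evaluation per clock, as emulated RTL would be."""
    def __init__(self):
        self.cycles = 0
    def transfer(self, n_words):
        for _ in range(n_words * 4):  # assume 4 clocks per word
            self.cycles += 1          # every cycle is individually evaluated

cpu, dma = TransactionLevelCPU(), CycleLevelDMA()
cpu.read(0x1000)   # system traffic stays at the fast transaction level
dma.transfer(256)  # only the block under verification pays cycle cost
print(f"detailed model consumed {dma.cycles} cycles for one transfer")
```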
This trend is reviving a term that fell out of favor a few years ago. “For the chip, it means electronic system level (ESL) models are needed that are abstract enough and capable of being integrated into a larger system context simulation,” says Jancke. “Then the overall concept can be validated, control algorithms and operation strategies can be developed, and even failure scenarios run prior to real silicon.”
Digital twins can be very abstract. “I can build a digital twin of the full product, which is something like the iPhone SDK,” says Schirrmeister. “That is a digital twin. I can do software development on it, and I can apply the data from a real design to see whether, if I get a phone call while playing Angry Birds just as a calendar invite arrives, it will actually do something wrong.”
Obtaining value
Rarely does significant value come from improving a capability that already exists. Only when a completely new capability is offered does it become really interesting to the industry.
“If you are doing a networking chip, you can run packets through it all day long,” says Sawicki. “It is easy to do within your digital verification environment. Same thing for a CPU. I can boot an operating system, run applications and that is sufficient. But when you are talking about something running against a LiDAR array, some imaging, some other sensors running alongside a braking system—that is where people became very interested in grabbing our digital twin for the processing element so that they can do more significant verification. Does this start to find issues that you could not find without it? There is an awful lot of engineering being bet on that.”
A digital twin manages all that data. “The amount of data that you have to look at becomes so big that you cannot make sense of it with a physical system,” says Serughetti. “This is why you need the digital twin. In a digital environment you have more facilities to aggregate data to do simulation, etc.”
Schirrmeister agrees. “We are increasing the scope to a level where no human being can understand it in their heads. It comes down to the specific tasks that they will get used for. What I am building is a specific subset and a very specific fidelity with specific interfaces into the real world and doing it in a way that is much easier than in the real world.”
The business value needs to be obvious. “We use it for failure prediction,” says ANSYS’ Kher. “Using simulation, we can predict steady state temperature or behavior and what that would mean in terms of failures. It also enables ‘what if’ analysis that can be used to perform optimization. Given the current set of operating conditions, using the offline digital twin we can predict and optimize behavior of the physical equipment.”
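One standard way to turn a simulated steady-state temperature into a failure prediction is the Arrhenius model, sketched below in Python. The activation energy and baseline lifetime are illustrative assumptions, and nothing here is specific to any vendor’s tooling.

```python
# Sketch of temperature-based failure prediction using the Arrhenius model,
# which relates operating temperature to expected lifetime. The activation
# energy and baseline MTTF are illustrative, not data for any real device.

import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(t_use_c, t_stress_c, ea_ev=0.7):
    """Arrhenius acceleration factor between two operating temperatures."""
    t_use_k, t_stress_k = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV)
                    * (1.0 / t_use_k - 1.0 / t_stress_k))

# "What if" analysis: how much lifetime is lost if the simulated
# steady-state junction temperature rises from 85 C to 105 C?
baseline_mttf_h = 100_000.0  # assumed lifetime at 85 C
af = acceleration_factor(85.0, 105.0)
print(f"acceleration factor: {af:.2f}")
print(f"predicted MTTF at 105 C: {baseline_mttf_h / af:,.0f} hours")
```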
Not just simulation
A simulation model is one form of digital twin, but data can also be a digital twin. “Simulation is when you want to see the behavior,” says Serughetti. “Imagine that I am someone who is trying to aggregate data associated with wiring in the car. My digital twin does not require mechanical or behavioral information, it may only require knowing how things are connected with each other. So there is the concept of a digital twin to meet a certain objective.”
It also may be limited to a specific part of the design flow. “You may be satisfied drawing correlations from data, where you do not need deeper insight from simulation,” says Kher. “It may be because the problem is straightforward. Pure data may get you a 60% level of accuracy in terms of predictions, which may be appropriate for some cases. But when you do need more insight and you need physics, that is when simulation-based digital twins come in.”
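A purely data-driven twin of the kind Kher describes can be as simple as a least-squares fit over logged operating data, as in the sketch below. The logged values are invented; the takeaway is that the fit captures correlation within the range it has seen but embodies no physics, which is exactly where simulation-based twins take over.

```python
# A purely data-driven "twin": a least-squares line over logged
# (load, temperature) pairs. The values are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

load = [0.2, 0.4, 0.5, 0.7, 0.8]        # observed utilization
temp = [38.0, 46.0, 50.5, 59.0, 63.5]   # logged temperature, deg C
m, b = fit_line(load, temp)
print(f"predicted temp at 60% load:  {m * 0.6 + b:.1f} C")   # interpolation: fine
print(f"predicted temp at 150% load: {m * 1.5 + b:.1f} C")   # extrapolation: no physics to lean on
```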
Bills of material (BoM) represent another type of digital twin. “The F-35 is integrating 200,000 parts from 1,600 suppliers, using 3,500 integrated circuits and 200 unique chips with more than 20 million lines of software,” says Schirrmeister. “There are tools for automotive where you can look at the part numbers. From the VIN, I want to identify whether cars with this particular chip in them, for which there are three suppliers, fail more often than others. That is an analysis I can do in my digital twin by applying the real data.”
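At its core, that analysis is a join-and-aggregate over fleet data. A hypothetical sketch using pandas, with invented column names and records:

```python
# Sketch of the BoM-style analysis described above: given warranty records
# keyed by VIN and joined to a parts database, compare field failure rates
# across the (hypothetical) three suppliers of one chip. All column names
# and data are invented for illustration.

import pandas as pd

records = pd.DataFrame({
    "vin":           ["V1", "V2", "V3", "V4", "V5", "V6"],
    "chip_supplier": ["A", "A", "B", "B", "C", "C"],
    "field_failure": [0, 1, 0, 0, 1, 1],
})

# Field failure rate per supplier of the chip in question.
rates = records.groupby("chip_supplier")["field_failure"].mean()
print(rates.sort_values(ascending=False))
```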
Extending into production
Some of the concepts are being utilized closer to home. “You also can think about a digital twin in the manufacturing process,” says Serughetti. “The twin plays a role where the problem is how to optimize the manufacturing process.”
This was a focus of the DVCon keynote speech given by Fram Akiki, vice president of electronics industry strategy at Siemens PLM Software. “When we consider the concept of first time right for a design, imagine the importance of having an equivalent first time right for a semiconductor production facility. When looking at a 7nm, 300mm facility that costs upwards of $15B, before you actually want to physically implement that you’d better have a really good virtual, digital model of how this facility will be built and optimized.”
Fig 1: Extending the concept of the digital twin into production. Source: Siemens
Akiki pointed out that the boundaries between the stages are blurring and dissolving. He believes design has to become more sensitive to production capabilities and the potential costs associated with using particular capabilities.
In many cases, equipment within a production setting also can benefit from a digital twin. “We have built a digital twin for plasma-enhanced chemical vapor deposition (PECVD),” says Kher. “It needs a detailed computational fluid dynamics (CFD)-based thermal analysis of the equipment, augmented with some of the controls. When new vapor is injected, the temperature can change at the surface of the wafer, so it is important to regulate and monitor that. This is an application where a digital twin that essentially focuses on the surface temperature of the wafer has to model some of the external inputs around it.”
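A heavily simplified stand-in for that control problem is sketched below: a PI loop holding wafer surface temperature at a setpoint while a vapor-injection event cools it. A real PECVD twin would couple full CFD to the controls; here a lumped thermal model with invented constants takes its place.

```python
# Toy version of the control problem described above: hold wafer surface
# temperature steady while vapor injection disturbs it. A lumped thermal
# model and a PI loop stand in for the real CFD-plus-controls twin.
# All constants are illustrative assumptions.

SETPOINT_C, DT_S = 400.0, 0.1
KP, KI = 5.0, 1.0            # PI gains (hypothetical)
temp_c, integral = 380.0, 0.0

for step in range(600):
    disturbance = -8.0 if 200 <= step < 260 else 0.0  # cold vapor injected
    error = SETPOINT_C - temp_c
    integral += error * DT_S
    heater_w = max(0.0, KP * error + KI * integral)   # PI control action
    # Lumped model: heater input, vapor cooling, loss to the chamber walls.
    temp_c += DT_S * (0.5 * heater_w + disturbance - 0.2 * (temp_c - 25.0))

print(f"wafer surface temperature after run: {temp_c:.1f} C")
```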
Building the digital twin
Most models built for chip design are directly in the development path of the product. As an industry, we understand the problems associated with integrating disparate models, especially when they employ different abstractions or physics. “There is no one homogeneous platform or environment,” points out Kher. “There is a heterogeneity of solutions and systems. Simulation-based digital twins have to integrate into whatever operational platform exists. For a factory, that may be a manufacturing execution system (MES), etc. It can be tricky to figure out which one to integrate into. There is some integration work required.”
It is made more difficult when models have to come from multiple sources. “Several models from different suppliers in multiple languages using diverse simulation principles have to be integrated into a single efficient simulation model,” adds Jancke. “This poses tough requirements on the interfaces between the individual parts and the overall framework.”
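The shape of that framework can be sketched as a fixed-step co-simulation master that advances each supplier’s model behind one shared interface and exchanges signals between steps. The Model protocol below is invented for illustration; in practice, standards such as FMI define the real interface contract.

```python
# Minimal sketch of a fixed-step co-simulation "master" that advances
# heterogeneous models behind one common interface, exchanging signals
# at each step. The interface is invented for illustration.

from typing import Protocol

class Model(Protocol):
    def step(self, inputs: dict, dt: float) -> dict: ...

class ThermalModel:
    def __init__(self):
        self.temp = 25.0
    def step(self, inputs, dt):
        # First-order response to the power commanded by the controller.
        self.temp += dt * (inputs.get("power", 0.0) - 0.1 * (self.temp - 25.0))
        return {"temp": self.temp}

class ControllerModel:
    def step(self, inputs, dt):
        # Crude on/off control based on the thermal model's last output.
        return {"power": 0.0 if inputs.get("temp", 25.0) > 80.0 else 10.0}

def run(models, wiring, dt=0.1, steps=100):
    signals = {}
    for _ in range(steps):
        for name, model in models.items():
            inputs = {sig: signals.get(src)
                      for sig, src in wiring[name].items()
                      if signals.get(src) is not None}
            for key, val in model.step(inputs, dt).items():
                signals[f"{name}.{key}"] = val
    return signals

models = {"thermal": ThermalModel(), "ctrl": ControllerModel()}
wiring = {"thermal": {"power": "ctrl.power"}, "ctrl": {"temp": "thermal.temp"}}
print(run(models, wiring))
```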
Some companies help with the creation of digital twins. “We do create virtual prototypes for several semiconductor companies,” says Serughetti. “Then we go to their customers and they have a different definition for the system, which is the SoC, a microcontroller and board components. They want tools that enable them to create it independently. But it does take a lot of expertise to bring those models and simulation together.”
Model availability is a problem. “The problem with virtual prototypes is timing,” says Sawicki. “This is not about clock cycles; it is about when you can actually get the model. By the time you have put together a model that has sufficient accuracy to have value in that overall system run, it is so far down the process that it no longer helps you. RTL may not be the natural behavioral level to be simulating, but it is available when you need to be simulating it. Emulation allows you to get a meaningful number of simulations through the system.”
Model maintenance is another issue. “Until we have a flow where there is a golden entry model—where everything flows automatically out of it—I will have some nasty effects with the early representations,” says Schirrmeister. “I will never update the virtual platform to keep in sync with the actual implementation. I need to automatically create the actual twin so that everything in the digital twin manifests itself in the actual twin. The digital twin may not have the right fidelity, it may not be in sync functionality-wise. If I updated some registers, my digital twin might break because the software may not run on it. So I need to go to higher-level modeling. The challenging problems still exist.”
This may not be a problem for some of the industries most interested in the digital twin. “They may use model-based systems engineering (MBSE), which is used to map the requirements through the engineering process into the actual simulation artifacts and then all the way back into the system,” says Kher. “How do we make sure that any changes are propagated back to the requirements so that the generated models are valid? I don’t think you have to solve the entire problem in order to get digital twins. You need to build these system-level models as part of the engineering and validation phase, where you are combining the chip with software, with thermal models—whatever the pieces of information you have at the system level—in order to meet the original requirements.”
Kher does have some good news on the model front. “Gradually, as models start to flow through the chain, you will see more adoption, and eventually [models] may become requirements. Bigger companies may require their suppliers to generate twinable models. It is coming, but the initial successes tend to be with equipment manufacturers that can extend their business model to add services for their customers.”
Conclusion
The EDA industry can approach this in one of two ways. It can assume it has all of the answers and try to push existing solutions onto the rest of the industry, or it can listen to those industries, learn about their specific needs, and perhaps create some better model development flows.
“We are taking many of the same techniques that have been used on the IC side and applying them upstream into a system context,” said Akiki. “It is not just about linking digital twins and digital threads into a digital fabric, although being able to take that expertise and deploy it upstream into a system is proving to be powerful. There are some reverse techniques that also are happening, where we have looked at certain issues from a system perspective in a behavioral model, that have application in SoC development—particularly as it relates to things such as functional safety.”
We all need to learn. “Some principles still need to be introduced into the world of IC design, such as improving simulation models from field data,” points out Jancke. “We also see value in Portable Stimulus from concept level through circuit design to test equipment, to name only a few.”
The key to understanding digital twins is the application of real data. “Each problem may have a specific set of techniques and capabilities that have been built over the years that we can draw on,” says Kher. “We need to be able to capture real data and use it to validate quickly.”