Properly defining what digital twins are is an important part of determining their usefulness.
These days, it seems we could play business bingo while watching conference presentations, checking off the buzzwords as they are mentioned. Hitting AI, ML, IoT, 5G, and edge computing all together almost guarantees that your presentation will be a hit. In recent years, the term “digital twin” has gotten a lot of attention as well. Recent discussions with Brian Bailey and a paper I wrote for GOMAC on this topic have made me think: What exactly are digital twins? Are they worth it? What’s the return on investment?
My slight cynicism above aside, as I write this from Albuquerque during GOMACTECH, the term “digital twin” really has created some waves recently. And it has some real merits, once properly defined. My paper for GOMAC is called “System Emulation and Digital Twins in Aerospace Applications.” As I had written previously in “Emulating Systems of Systems,” system emulation has the potential to improve pre-production quality with first-time success of designs, allows teams to collapse the serial development pipeline, speeds up verification runtimes, and enables easier and earlier distribution of platforms for software development, not to mention the increased safety during testing, since a crash in a simulator is much better than putting real lives at risk. System emulation also has significant overlap with what the industry calls a digital twin: a virtual representation of the system to which the same stimulus can be applied.
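The "same stimulus" idea can be made concrete with a small sketch. Everything here is hypothetical and purely illustrative: `twin_model` stands in for whatever virtual platform or emulation model is available, and the "physical log" stands in for measurements from the real system.

```python
# Conceptual sketch (all names hypothetical): a digital twin is a virtual
# model driven by the same stimulus as the physical system, so that the
# two responses can be compared step by step.

def twin_model(stimulus):
    # Stand-in for a virtual platform / emulation model; here just a
    # trivial transfer function for illustration.
    return 2 * stimulus + 1

def compare(stimuli, physical_log, tolerance=0.1):
    """Replay recorded stimuli through the twin and flag divergence."""
    mismatches = []
    for step, (s, measured) in enumerate(zip(stimuli, physical_log)):
        predicted = twin_model(s)
        if abs(predicted - measured) > tolerance:
            mismatches.append((step, predicted, measured))
    return mismatches

# Toy example: the physical measurements diverge from the twin at step 2.
stimuli = [0, 1, 2, 3]
physical_log = [1, 3, 9, 7]
print(compare(stimuli, physical_log))  # [(2, 5, 9)]
```

The point is not the arithmetic but the structure: twin and physical system share one stimulus stream, and the comparison is what makes the twin useful.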
I found a suggested definition for digital twins in “Establishing a Fully-Functional Digital Twin or Digital Thread in Aviation, Aerospace, and Defense:”
“Current state” makes me think of the different stages of development in my world of chip design. Therefore, I took the liberty of making up my own definition in the context of system emulation.
In the 12 to 18 months or more that it takes to develop a complex chip, various representations of the design will exist: pure virtual platforms, RTL simulation, emulation hybrids, prototyping, and, finally, prototype silicon. All of them can be considered digital twins during development. I tried to overlay them onto the project flow I have used before to illustrate typical hardware/software development flows, shown in the graph below.
Our users often use the different early representations to confirm issues they find in the actual silicon. For post-silicon debug, they reproduce the scenarios on the earlier representations, which are essentially the digital twins of the silicon. Hopefully, all issues can be worked around in software or fixed, but from time to time an issue, for instance around performance, can be found and reproduced but no longer fixed, since the silicon is already done. That is the moment when teams regret not having done enough early analysis using virtual platforms, or architecture analysis with accurate interconnect and memory modeling, run on hardware engines fast enough to execute system traces.
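The replay-based triage described above can be sketched in a few lines. This is a hypothetical illustration, not a real tool: the representations are toy callables, and the "bug" is an invented performance cliff that only appears in models detailed enough to capture it.

```python
# Hypothetical sketch of post-silicon debug triage: replay a failing
# scenario on the earlier design representations (the digital twins of
# the silicon) to see which of them reproduce the issue.

def triage(representations, scenario):
    """Return names of representations on which the scenario fails."""
    return [name for name, model in representations
            if not model(scenario)]

# Toy models: the invented bug (a failure for large payloads) shows up
# only in representations detailed enough to model the interconnect.
virtual_platform = lambda s: True                 # too abstract, passes
rtl_emulation    = lambda s: s["payload"] < 1024  # reproduces the bug
silicon          = lambda s: s["payload"] < 1024  # fails in the lab

reps = [("virtual platform", virtual_platform),
        ("RTL emulation", rtl_emulation),
        ("silicon", silicon)]
print(triage(reps, {"payload": 4096}))
# ['RTL emulation', 'silicon']
```

Here the virtual platform is too abstract to reproduce the failure, which is exactly the situation in which teams wish they had invested in more accurate early analysis.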
The interesting observation that Brian Bailey and I arrived at when discussing digital twins is that the chip development loop is actually almost closed. Each successive model is derived, at least in part, from the previous one, sometimes even automatically, as with high-level synthesis.
In contrast, when representing the bigger system in which the chip resides, there is, at least today, little to no direct connection from the system model to the chip implementation. It’s an open loop. A fully “executable specification” of the system and all its components is desirable, and some progress has been made. However, its application at the mission level has limitations due to the abstraction in model-based descriptions.
Similarly, moving back to lower levels of abstraction, an array of hardware-based emulators executing a full car or airplane as a complete digital twin is an intriguing concept, but it faces many challenges. For instance, simply having all the components available for emulation at the right time is a challenge: some components will already be available in silicon, some will still be conceptual and representable only as C-models, and others may be available as RTL for emulation.
So, bottom line: while the digital twins during chip design are created as part of the natural development flow, for bigger systems such as a complete airplane, car, or industrial installation, as I described in “Embedded World 2018: Security, Safety, And Digital Twins” with applications for the Microsoft HoloLens, digital twins may have to be created in a dedicated fashion, representing exactly the portion of the system into which data collected from the physical twin has to be fed back.
And then the next step is to verify and validate digital twins for correctness. We may be closer to difficult issues than I realized when writing “The Revenge Of The Digital Twins.” We had better figure out the verification aspects; otherwise, the times ahead may be excitingly scary.