
The Revenge Of The Digital Twins

Verifying that AI behaves as intended will become an important safety issue.


How do we verify artificial intelligence? Even before “smart digital twins” get as advanced as the ones portrayed in science fiction, making sure they are “on our side” and don’t “go rogue” will become a true verification problem. There are some immediate tasks the industry is already working on, like functional safety and security, but new verification challenges loom on the horizon. As in processor verification, in AI applications the AI itself must be verified without fully knowing the functionality it will eventually perform. And there are ethics lessons for engineers to learn.

During workouts and flights, I tend to watch Netflix. One of the most intriguing shows I have watched recently is “Black Mirror,” which explores very thought-provoking themes about a not-too-distant future. Two episodes were especially interesting in the context of digital twins: “White Christmas” and “USS Callister.”

“White Christmas,” starring “Mad Men’s” Jon Hamm, is set in a world that blends virtual and actual reality. Everybody has been outfitted with “Z-Eyes” that literally allow you to block out people you don’t want to see. The digital twin portion happens in a sub-storyline. A “cookie” is inserted into an affluent person’s head to learn her behavior, preferences and memories. The cookie is a digital twin trained with machine learning on her real brain. However, once extracted, the cookie, the “virtual you,” is so close to reality that it is not too happy to realize that it is not the “real you.” Becoming the controller for its owner’s home automation is not exactly a dream job. But as it is the perfect digital twin, it “knows” what the “real you” wants.

In “USS Callister,” “Westworld’s” Jimmi Simpson plays the flamboyant CEO of Callister, Inc., a virtual reality gaming company. His brilliant CTO, who has issues with living in his shadow, keeps virtual versions of some of the employees in a “modded” version of their game Infinity, in a virtual world skinned to represent his favorite show, Space Fleet. In stark contrast to the real world, here he is the brave captain and hero. Virtual versions of the employees are brought in as punishment for calling him out for staring, for not smiling enough, or for resetting admin permissions on a test build of the software for 14 minutes. Oh well. Needless to say, the digital twins are not at all happy with their virtual “existence” and end up plotting an escape plan.

So how do we avoid digital twins who go rogue?



For EDA, digital twinning comes quite naturally, as we are constantly preparing different representations of a design for uses like verification and software development, as I previously wrote in the context of Embedded World and in the context of system emulation.

First, functional safety is key. We need to make sure that a system always reacts in a safe fashion and, more importantly, returns to a safe state. For a car, that may mean coming to a stop; that doesn’t quite work for a plane at 30,000 ft. In “USS Callister,” the system is clearly not functionally safe, as somebody ends up getting stuck in the virtual world; the safe state would have been to exit the game. EDA has come very far in this area. While fault simulation tools have been around for decades, they have now evolved into complex flows connected to failure modes, effects and diagnostics analysis (FMEDA). Visit us at the upcoming Arm TechCon to discuss this; we will be in the automotive pavilion. In addition, Alessandra Nardi will present on “Meeting Functional Safety Requirements for the Next Generation of Automotive Applications,” and yours truly will present together with Arm’s Jason Andrews on “Optimizing Hardware/Software Development for Arm-based Embedded Designs.”
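
To make this a bit more concrete, here is a minimal sketch, in Python, of what a fault simulator fundamentally does: inject single stuck-at faults into a netlist, replay test vectors, and report the detection ratio that FMEDA-style analysis then rolls up into diagnostic coverage. The two-gate circuit, the net names and the vectors are made up purely for illustration and do not represent any actual Cadence tool flow.

# Minimal stuck-at fault simulation sketch (illustrative only; real fault
# simulators and FMEDA flows work on full netlists and failure-mode data).

from itertools import product

# Hypothetical two-gate circuit: n1 = a AND b, out = n1 OR c.
GATES = [("n1", "and", ("a", "b")),
         ("out", "or", ("n1", "c"))]
NETS = ["a", "b", "c", "n1", "out"]

def simulate(stimulus, fault=None):
    """Evaluate the netlist; 'fault' = (net, value) forces a stuck-at fault."""
    nets = dict(stimulus)
    def force(name):
        if fault is not None and fault[0] == name:
            nets[name] = fault[1]
    for name in ("a", "b", "c"):
        force(name)                              # faults on primary inputs
    for out, op, ins in GATES:
        a, b = (nets[i] for i in ins)
        nets[out] = (a & b) if op == "and" else (a | b)
        force(out)                               # faults on internal nets
    return nets["out"]

# Exhaustive test vectors (the checks a safety mechanism or test would apply).
vectors = [dict(zip("abc", bits)) for bits in product([0, 1], repeat=3)]

# A fault counts as detected if some vector makes the faulty output differ
# from the fault-free output.
faults = [(net, value) for net in NETS for value in (0, 1)]
detected = sum(
    any(simulate(vec, fault) != simulate(vec) for vec in vectors)
    for fault in faults
)
print(f"detected {detected} of {len(faults)} stuck-at faults "
      f"({100.0 * detected / len(faults):.0f}% coverage)")

In a real flow, the fault list would of course come from the actual netlist and failure-mode database, and the resulting coverage numbers would feed the FMEDA metrics required by the applicable safety standard.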

Second, verification of security is critical: the digital twin should not be hackable. Executing a chip design early in simulation, emulation and prototyping lends itself very well to early analysis of hardware/software security effects. For instance, we are working closely with Tortuga Logic to optimize the execution on emulation of the security rules expressed in their Sentinel rule set. A good example was shown at the Design Automation Conference in “Enhancing Your Existing Verification Suite to Perform System-Level Security Verification”; the verification piece starts at about the 7:30 mark.
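
As an illustration of the kind of rule such flows check, here is a toy information-flow check in Python. The rule (“data derived from the key register must never reach the debug port”), the block names and the transaction trace are all hypothetical, and this is not Tortuga Logic’s Sentinel syntax; it only sketches the taint-propagation idea behind hardware security rule checking.

# Toy information-flow check: "data derived from the key register must never
# reach the debug port." (Hypothetical blocks and trace; not Sentinel syntax.)

# Trace of observed transfers captured from simulation/emulation:
# each entry is (source_block, destination_block).
trace = [
    ("key_reg",  "aes_core"),
    ("aes_core", "dma"),
    ("dma",      "ddr_ctrl"),
    ("uart",     "debug_port"),
    ("dma",      "debug_port"),   # suspicious hop: tainted data escapes
]

def flows(trace, source, sink):
    """True if data can propagate from 'source' to 'sink' along the trace."""
    tainted = {source}
    changed = True
    while changed:                 # propagate taint to a fixed point
        changed = False
        for src, dst in trace:
            if src in tainted and dst not in tainted:
                tainted.add(dst)
                changed = True
    return sink in tainted

# The security rule under verification.
if flows(trace, source="key_reg", sink="debug_port"):
    print("RULE VIOLATED: key material can reach the debug port")
else:
    print("rule holds on this trace")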

Beyond safety and security, there are new verification aspects looming on the horizon, too. Today, advanced verification of systems on chips (SoCs) uses a flow that starts with explicitly expressing the functionality to be verified. For instance, specific scenarios can be described using the results of the Accellera Portable Stimulus working group. These scenarios represent specific functions and allow users to efficiently verify use cases like: “Take a video buffer and convert it to MPEG4 format with medium resolution using any available graphics processor. Then transmit the result through the modem via any available communications processor, and in parallel decode it using any available graphics processor and display the video stream on any of the SoC displays supporting the resulting resolution.”
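
Portable Stimulus defines its own dedicated language for such scenarios, so the following is only a rough Python sketch of the underlying idea: an abstract scenario that refers to “any available” resource is solved into concrete, legal test bindings. The SoC resource model and the constraint that the parallel decode uses a different GPU instance are assumptions made purely for illustration.

# Rough sketch of turning an abstract scenario ("any available graphics
# processor", "any display supporting the resolution") into concrete, legal
# tests. Illustrative only; the Portable Stimulus standard defines its own
# language for this, and the SoC model below is made up.

import random

# Hypothetical SoC resource model.
soc = {
    "gpu":     ["gpu0", "gpu1"],
    "comms":   ["modem0", "wifi0"],
    "display": [("disp0", 720), ("disp1", 1080)],   # name, max height
}

def solve_scenario(required_height=720, seed=None):
    """Pick one legal binding of abstract roles to concrete resources."""
    rng = random.Random(seed)
    encoder = rng.choice(soc["gpu"])
    # Assumed constraint: the parallel decode uses a different GPU instance.
    decoder = rng.choice([g for g in soc["gpu"] if g != encoder])
    comms   = rng.choice(soc["comms"])
    display = rng.choice([name for name, height in soc["display"]
                          if height >= required_height])
    return [
        f"{encoder}: convert video buffer to MPEG4, medium resolution",
        f"{comms}: transmit the encoded stream",
        f"{decoder}: decode the stream in parallel",
        f"{display}: show the decoded video stream",
    ]

# Each seed yields a different but legal concrete test from the same scenario.
for seed in range(2):
    print(solve_scenario(seed=seed))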

While for SoCs the functions are largely known and can be described and verified as above, for AI, much like for processors today, the end function is variable and not known at design time. I heard Professor John Hennessy, chairman of Alphabet, mention in a talk that at Google the engine code for language translation was reduced from 500,000 lines of code to 500 (see also this tweet here). The actual function moves into the training of the underlying convolutional neural network (CNN). What used to be done as functional verification of the 500,000 lines of software code turns into conceivably much less verification that the processing in the CNN itself works correctly, plus verification of the datasets that train it. It is that verification of the dataset that is critical to make sure all cases are covered and the CNN does not get a chance to do unpredictable or wrong things. Extending from CNNs to what the industry calls machine learning (learning without being explicitly programmed), and beyond that to artificial intelligence, verifying that it cannot go rogue will present a completely new set of verification challenges.
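
Here is a small sketch of what coverage-driven thinking could look like when applied to a training dataset instead of RTL stimulus. The object classes, condition bins and sample counts are invented for illustration; a real flow would derive them from the dataset’s labels and metadata.

# Sketch of coverage-driven checking applied to a training dataset rather
# than to RTL stimulus. All categories and counts below are made up.

from itertools import product
from collections import Counter

# Coverage model: every (object class, lighting, weather) combination
# should be represented by at least MIN_SAMPLES training images.
CLASSES  = ["pedestrian", "cyclist", "car"]
LIGHTING = ["day", "night"]
WEATHER  = ["clear", "rain"]
MIN_SAMPLES = 100

# Hypothetical per-bin sample counts extracted from dataset labels.
counts = Counter({
    ("pedestrian", "day",   "clear"): 5400,
    ("pedestrian", "night", "rain"):  12,     # dangerously thin bin
    ("cyclist",    "day",   "clear"): 870,
    ("car",        "day",   "rain"):  2300,
    # bins missing entirely are simply absent from the labels
})

holes = [bin_ for bin_ in product(CLASSES, LIGHTING, WEATHER)
         if counts[bin_] < MIN_SAMPLES]

total_bins = len(CLASSES) * len(LIGHTING) * len(WEATHER)
print(f"dataset coverage: {total_bins - len(holes)}/{total_bins} bins closed")
for bin_ in holes:
    print("  under-covered:", bin_, "->", counts[bin_], "samples")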

Going back to the “Black Mirror” episodes, the ethical questions of what to do and what not to do with technological progress become very apparent. Unfortunately, technology can be used for good and for evil, so we engineers carry more and more responsibility in the brave new world ahead. How far away are we? Well, we are still a bit away. The human brain features 10^14 neurons, and SoCs are now getting into the range of being able to run 10^8 neurons. However, in the last 40 years the number of transistors per chip has grown by 50,000X, and the remaining gap of roughly 10^6 is only about 20 times that growth.
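
A quick back-of-the-envelope check of those numbers, using only the figures quoted above:

# Back-of-the-envelope check of the figures above (orders of magnitude only).
brain_neurons = 1e14     # neurons the human brain features (figure as quoted)
soc_neurons   = 1e8      # neurons an SoC can run today (figure as quoted)
growth_40yr   = 5e4      # ~50,000X transistor growth over the last 40 years

gap = brain_neurons / soc_neurons            # about 1e6 still to go
print(f"remaining gap: {gap:.0e}X, i.e. roughly {gap / growth_40yr:.0f} times "
      f"the growth of the last 40 years")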

Given the accelerated pace of growth, we may not be as far off as we think. Let’s use the time wisely!


