Verification In The Era Of Autonomous Driving, Artificial Intelligence And Machine Learning

The importance of data is changing traditional value creation in electronics and forcing recalculations of return on investment.

The last couple of weeks have been busy, with me participating in three panels that dealt with AI and machine learning in the contexts of automotive and aerospace/defense, in San Jose, Berlin and Detroit. The common theme? Data is indeed the new oil, and it upends traditional value creation in electronics. Requirements for system design and verification are changing, too, and there are completely new, blank-sheet opportunities for verifying and confirming what AI and ML actually do.

In the context of AI/ML, I have been using the movie quote a lot that “I think I am 90% excited and 10% scared. Oh wait, perhaps I am 10% excited and 90% scared.” The recent panel discussions that I was part of did not help much.

First, I was part of a panel called “Collaborating to Realize an Autonomous Future” at Arm TechCon in San Jose. The panel was organized by Arm’s Andrew Moore, and my co-panelists were Robert Day (Arm), Phil Magney (VSI Labs) and Hao Liu (AutoX Inc.). Given the technical audience, questions centered on how to break down hardware development for autonomous systems, how the autonomous software stack could be divided between developers, whether compute will end up centralized or decentralized, and what the security and safety implications of such broad collaboration would be, basically boiling things down to a changing industry structure with new dynamics of control in the design chain.

For the second panel I was in Berlin, which was gearing up to celebrate the 30th anniversary of the fall of the Berlin Wall. The panel title could roughly be translated as “If training data is the oil for digitalization enabled by artificial intelligence, how can the available oil best be used?” The panel was organized by Wolfgang Ecker (Infineon), and my co-panelists were Erich Biermann (Bosch), Raik Brinkmann (OneSpin), Matthias Kästner (Microchip), Stefan Mengel (BMBF) and Herbert Taucher (Siemens). Discussion points centered on ownership of data, whether users would be willing to share data with tool vendors, and whether that data could be trusted to be complete enough in the first place.

The third panel, from which I just returned, took place in Detroit at the Association of the United States Army (AUSA) Autonomy and AI Symposium. The panel was moderated by Major Amber Walker, and my co-panelists were Margaret Amori (NVIDIA), BG Ross Coffman (United States Army Futures Command) and Ryan Close (United States Army Futures Command, C5ISR Center). Questions here centered on lessons learned from civilian autonomous vehicles and how civilian and Army customization needs differ. We discussed advances in hardware and how ready developers are for new sensors and compute; resilience, trust and the new vulnerabilities to cyber-attacks that AI introduces; and design for customization, that is, how the best of both worlds, custom and adaptable, can be achieved.

Discussions and opinions were diverse, to say the least. Two big takeaways stick with me.

First, data really is the new oil! It needs protection: security and resilience are crucial in an Army context, in which data in the enemy’s hands could have catastrophic consequences, and privacy is crucial in civilian applications as well. Data also changes the value chain in electronics. As I have written before in the context of the IoT, value really has to come from the overall system perspective and cannot be assigned to individual components alone. In a system value chain of sensors, network, storage and data, one may decide to give away the tracker hardware if the data it creates allows value creation through advertisement. Calculating return on investment is becoming much more complicated.
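To make that concrete, here is a back-of-the-envelope sketch with purely hypothetical numbers (the unit cost, data revenue and device lifetime below are all assumptions for illustration). Viewed component by component, giving away the tracker looks like a guaranteed loss; viewed at the system level, the data it generates can still make the overall investment pay off.

```python
# Back-of-the-envelope system-level ROI. All numbers are hypothetical
# assumptions for illustration, not figures from any real product.
hw_cost_per_unit = 25.0              # assumed cost to build one tracker
hw_price_per_unit = 0.0              # the tracker is given away for free
data_revenue_per_user_year = 12.0    # assumed ad/data revenue per active user
active_years = 3                     # assumed average device lifetime
units = 100_000

hw_margin = (hw_price_per_unit - hw_cost_per_unit) * units      # negative
data_value = data_revenue_per_user_year * active_years * units  # positive
invested = hw_cost_per_unit * units

print(f"component-only ROI: {hw_margin / invested:.0%}")                 # -100%
print(f"system-level ROI:   {(hw_margin + data_value) / invested:.0%}")  # 44%
```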

Second, verification of what these neural networks actually do (and do not do) is becoming critical. I had mused in the past about a potential “Revenge of the Digital Twins,” but these recent panel discussions have emphasized to me that the “confirmability” of what a CNN/DNN in an AI does is indeed seen as critical by many. In both the automotive and Army contexts, the safety of the vehicle and the human lives involved is at risk if we cannot really confirm that the AI cannot be “tricked.” Examples that demonstrate how easy it is to trick self-driving cars by defacing street signs make me worry quite a bit here.
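For readers who have not seen such an attack up close, below is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest ways a classifier can be “tricked.” The toy model, random input image and epsilon value are all illustrative assumptions, not anything from the panels; real attacks on street-sign classifiers work on the same principle.

```python
# Minimal FGSM sketch: nudge each pixel slightly in the direction that
# increases the classifier's loss, often flipping its prediction even
# though the change is nearly invisible to a human.
import torch
import torch.nn as nn

def fgsm_attack(model, x, label, epsilon=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()   # small, structured perturbation
    return x_adv.clamp(0.0, 1.0).detach()

# Toy stand-in for a street-sign classifier (untrained, for illustration).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)   # a hypothetical 32x32 RGB "sign" image
label = torch.tensor([3])      # its assumed true class
x_adv = fgsm_attack(model, x, label)
print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())
```

The same mechanics scale up to physical-world attacks, which is why a few stickers on a stop sign can be enough to mislead a perception stack.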

That said, from challenges come opportunities. Verification of CNN/DNNs and their associated data sets will likely become an interesting new market in itself. I am definitely watching this space.


