What’s missing from the engineering ecosystem.
To enable truly self-driving cars — the ones without a gearshift or a steering wheel — there must be a confluence of technologies, a refinement of business models, and a resolution of regulatory, safety, and insurance concerns.
So how close is the automotive ecosystem to reaching the goal of truly autonomous driving? That depends on your vantage point.
As far as where automakers are today with some of the key technological systems, Professor Edward Lee from UC Berkeley summed it up rather critically: “We have no idea what we are doing.” He believes a big part of the problem is a lack of good models for cyber-physical systems, which combine digital behaviors and software with physical dynamics. What’s missing is the know-how to construct physical realizations that are faithful to those models.
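As a concrete, if drastically simplified, illustration of what such a model looks like, the sketch below pairs a discrete software controller, which only sees the world at its sample instants, with continuous vehicle dynamics that evolve in between. All names, constants, and dynamics here are invented for illustration; they are not drawn from any real vehicle or toolchain.

```python
# A drastically simplified cyber-physical model: a discrete-time cruise
# controller (the "cyber" part) coupled to continuous longitudinal vehicle
# dynamics (the "physical" part). All names and constants are illustrative.

DT_PHYSICS = 0.001   # physical dynamics integrated at 1 kHz
DT_CONTROL = 0.02    # software controller samples the world at 50 Hz

def plant_step(speed, force, dt, mass=1500.0, drag=0.4):
    """Continuous dynamics, advanced with simple forward-Euler integration."""
    accel = (force - drag * speed * speed) / mass
    return speed + accel * dt

def controller(speed, target, kp=900.0):
    """Discrete proportional controller with actuator limits (in newtons)."""
    return max(min(kp * (target - speed), 4000.0), -8000.0)

def simulate(target=30.0, duration=20.0):
    speed, force, t, next_sample = 0.0, 0.0, 0.0, 0.0
    while t < duration:
        if t >= next_sample:                 # the software only acts at sample instants
            force = controller(speed, target)
            next_sample += DT_CONTROL
        speed = plant_step(speed, force, DT_PHYSICS)  # the physics never stops evolving
        t += DT_PHYSICS
    return speed

if __name__ == "__main__":
    print(f"speed after 20 s: {simulate():.2f} m/s")
```

The point of such a model is that anything proved about the controller only holds if the physical realization stays faithful to the plant model, which is exactly the fidelity Lee says is missing.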
Circuit design, in contrast, is now so well established that a system can be designed as a bunch of logic gates and latches. It can be fabricated with very high probability that the physical realization will behave just like the model. That isn’t the case with self-driving cars, so the engineers instead are prototyping and testing. This is a highly iterative trial-and-error process where engineers try a design, see if it works, drive the self-driving car for millions of miles, and then analyze the data. Lee considers this a bad way to engineer systems, but right now it’s the only way to get the job done.
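In practice, the "drive millions of miles, then analyze the data" step often takes the form of replaying logged scenarios against each new software build and flagging where behavior diverges. The following is a minimal sketch of such a harness; the JSON record format and the plan() stub are hypothetical placeholders, not any vendor's actual pipeline.

```python
import json
from pathlib import Path

def plan(scenario):
    """Placeholder for the planning stack under test; returns a chosen action."""
    return "brake" if scenario.get("obstacle_distance_m", 1e9) < 30.0 else "cruise"

def replay_logs(log_dir):
    """Replay every logged scenario and flag cases where the new build disagrees
    with the action that was actually recorded on the road."""
    regressions = []
    for path in sorted(Path(log_dir).glob("*.json")):
        scenario = json.loads(path.read_text())
        expected = scenario["recorded_action"]   # what the car (or safety driver) did
        actual = plan(scenario)                  # what the new software would do
        if actual != expected:
            regressions.append((path.name, expected, actual))
    return regressions

if __name__ == "__main__":
    for name, expected, actual in replay_logs("drive_logs"):
        print(f"{name}: expected {expected}, new build chose {actual}")
```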
Chris Rowen, a Cadence fellow and CTO of the company’s IP Group, takes a more upbeat view. He said that even if we don’t know what we’re doing, “it doesn’t make it different from any other thing that we’re doing with electronics. I don’t know if that is comforting or disturbing.”
Most outsiders would consider it disturbing. For engineers, this is part of the process. “It certainly is the case that when any problem gets sufficiently complex, you really cannot hope to have any sort of a provable model,” said Rowen. “You’re going to rely on trying things and seeing what works. This is especially true in vision systems, because vision is inherently very noisy so there are a lot of things that are going on. You get a lot of bits and you have to figure out what’s probably going on. There are no absolutes, and that’s what makes it an interesting problem. It also makes it a hard problem, and it means that we will start out not knowing what we are doing, and by the time we get really good at it, we will only know very barely what we are doing.”
The reference model for many of these systems is the human body. “And humans are also very noisy and easily distracted and don’t notice things, depending on what their blood sugar is, what the kids are doing in the back seat, what music is playing on the radio, and who they are in the middle of a text conversation with. So, it’s not like we have to be perfect. We only have to be better than humans. That’s actually not so hard when it comes to visual systems, particularly if you’re saying what kind of reliability, what kind of absolute guarantees can you provide? The evidence so far is overwhelmingly that self-driving cars are significantly safer, significantly more dependable than the average human driver. It’s far from being a solved problem, but it’s not far from being better than humans.”
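Rowen’s point that “you get a lot of bits and you have to figure out what’s probably going on” is typically handled by accumulating evidence across frames rather than trusting any single noisy detection. Below is a minimal sketch of one common approach, a recursive Bayesian update of the probability that an obstacle is present, with assumed (illustrative) detection and false-alarm rates.

```python
def update_belief(prior, detected, p_detect=0.9, p_false_alarm=0.1):
    """One Bayesian update of P(obstacle present) from a single noisy frame.
    p_detect and p_false_alarm are assumed sensor characteristics."""
    if detected:
        likelihood_present, likelihood_absent = p_detect, p_false_alarm
    else:
        likelihood_present, likelihood_absent = 1.0 - p_detect, 1.0 - p_false_alarm
    numerator = likelihood_present * prior
    return numerator / (numerator + likelihood_absent * (1.0 - prior))

# A run of noisy frames: mostly detections, with one dropout in the middle.
belief = 0.5
for frame_detected in [True, True, False, True, True]:
    belief = update_belief(belief, frame_detected)
    print(f"P(obstacle) = {belief:.3f}")
```

A single missed frame barely dents the accumulated belief, which is the practical meaning of working with probabilities rather than absolutes.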
The good news is that the industry still has time to work out these issues. Andy Macleod, director of automotive marketing at Mentor Graphics, observed that creating autonomous vehicles is not the leading challenge facing the auto industry today. He pointed to a recent study that identifies the biggest concerns for auto execs this year as connectivity and digitalization of the vehicle. If that holds, the logical consequence is that R&D dollars will go there first, at least for now.
Coming into play here are consumer demand, the technology barriers, and the ability of automotive OEMs to make money. From the technology perspective, Macleod sees a number of areas that need to converge to make self-driving cars a reality.
First are radar, camera, machine vision, ultrasonic sensors, and the usual advanced driver-assist technologies, which already are available on the Tesla. Collectively these technologies are a big step toward autonomous driving. “Consumers like it and will pay more for ADAS functionality. It’s viewed as a co-pilot, and the value proposition is that you’ve got a car that basically can’t get into an accident, or you have to work quite hard to get it into an accident,” said Macleod.
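As a toy illustration of how these sensors can work together in an ADAS feature such as automatic emergency braking: radar supplies range and closing speed, the camera confirms what the object is, and the system intervenes only when the estimated time-to-collision falls below a threshold. The thresholds and data structures below are assumptions for the sketch, not a production algorithm.

```python
from dataclasses import dataclass

@dataclass
class RadarTrack:
    range_m: float            # distance to the tracked object
    closing_speed_mps: float  # positive when we are approaching it

@dataclass
class CameraDetection:
    label: str
    confidence: float

def should_emergency_brake(radar: RadarTrack, camera: CameraDetection,
                           ttc_threshold_s: float = 1.5,
                           min_confidence: float = 0.7) -> bool:
    """Brake only when both sensors agree: radar says a collision is imminent
    and the camera confirms a relevant object. Thresholds are illustrative."""
    if radar.closing_speed_mps <= 0:
        return False                      # not closing on anything
    time_to_collision = radar.range_m / radar.closing_speed_mps
    camera_confirms = (camera.label in {"vehicle", "pedestrian"}
                       and camera.confidence >= min_confidence)
    return time_to_collision < ttc_threshold_s and camera_confirms

# Example: a stopped car 18 m ahead while closing at 15 m/s (TTC = 1.2 s).
print(should_emergency_brake(RadarTrack(18.0, 15.0),
                             CameraDetection("vehicle", 0.92)))   # -> True
```

Requiring agreement between two independent sensing modalities is one simple way ADAS designers trade a few missed interventions for far fewer false alarms.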
However, there is an increasing tension between the co-pilot technology and truly autonomous driving technologies. Here, the industry seems to be going in two different directions. “If it’s true autonomy, where you can sit in the back seat and do e-mail, will people pay a premium for that? Are consumers ready for that? I’m not sure that they are. It’s more likely to be tied to the idea of mobility on demand, where you can program a vehicle via a smartphone and it will take you somewhere, likely in some kind of highly urban area,” he said. When viewed from the perspective of different use cases for ADAS versus fairly autonomous driving, the monetization, business model, and consumer needs are very different.
The second and third areas of technology that must develop further are the mapping data that self-driving cars will need and automotive artificial intelligence (AI) — deep learning that allows cars to detect and interpret their environment. To this end, Time Magazine published in its March 7 issue a graphical continuum of prominent scientists, inventors, entrepreneurs and futurists who have been vocal about AI. Ranging from those who believe AI will benefit humankind to those who believe it will doom us all, the spectrum includes Ray Kurzweil, Sam Altman, Michio Kaku, Bill Gates, Stephen Hawking, Nick Bostrom and Elon Musk.
(Source: Time Magazine)
Given that Musk is CEO of Tesla, the first company to put self-driving features into commercial vehicles, it does seem ironic that he has called AI ‘our biggest existential threat’ and likened it to ‘summoning the demon.’
There is a great divide of opinion here, and a full spectrum of views on how this technology can be applied. “The only kind of AI that we are sure will do a good job in the short- to medium-term is exactly that kind of visual system replacement where you’re really just trying to replace quite a bit of lower brain function, not really trying to replace human judgment in any of the philosophical senses,” said Rowen. “It is true that there is a continuum, and that by perfecting this kind of cognitive computing, we will take ourselves a step closer to doing higher kinds of cognition that Kurzweil might be talking about. But it’s pretty clear that we can do a decent job on this as measured by that mediocre benchmark of humans and make our systems considerably safer within strict bounds. We will not want to put these in challenging positions where there are moral judgments such as: ‘I’m going so fast I’ve got to turn left or turn right and hit this crowd of people or that crowd of people. Which crowd of people is more worthy?’ Those are relatively classic philosophical questions that I don’t think we are prepared for, and are fundamentally unlikely, but which will raise some interesting questions in the future. In particular, everybody sort of agrees that we will have some accidents, and we will find the system to blame, and that all sorts of interesting liability issues and philosophical questions will come out of that.”
One of these parallel lines of questioning involves insurance and regulation around self-driving cars, which automakers and insurance providers already are grappling with.
“There are a lot of issues in terms of what the state departments of motor vehicles are doing in terms of requiring licensing and certification, what the U.S. Department of Transportation is doing, and what the insurers are getting ready for as this becomes a piece of what they are going to face, because self-driving cars will be out there. But the software, in a sense, will be responsible. They have to figure out how the liability gets passed around. The carmakers are pretty concerned that all liability will shift from drivers to them as they are the providers of the software, weighed against the fact that the total number of accidents and the total amount of, in particular, injuries and deaths, should be meaningfully impacted by this. So in the grand scheme of things you say, ‘Everyone should love this because fewer people will die.’ But it’s certainly not hard to imagine that the manufacturers will be reluctant to take on the implied liability.”
Another essential development to making self-driving cars a reality is vehicle-to-vehicle and vehicle-to-infrastructure communications.
“That’s going to be the last thing to get you to that last 0.1% of true autonomous driving — meeting all of the regulations and having all of the safety requirements needed,” said Mentor’s Macleod. “We need to have cars talking to each other, and talking to the infrastructure. That’s going to be mandatory. But this will require huge investments in new infrastructure and investment in car connectivity, as well.”
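Production V2V stacks use standardized formats such as SAE J2735 basic safety messages carried over DSRC or cellular V2X. The sketch below is only a schematic stand-in that broadcasts a position/speed/heading message as JSON over UDP, to show the shape of the data exchange rather than the real protocol; the port number and message fields are assumptions.

```python
import json
import socket
import time

BROADCAST_ADDR = ("255.255.255.255", 47000)   # illustrative port, not a real V2V assignment

def basic_safety_message(vehicle_id, lat, lon, speed_mps, heading_deg):
    """A schematic stand-in for a V2V 'here I am' message."""
    return {
        "id": vehicle_id,
        "timestamp": time.time(),
        "lat": lat,
        "lon": lon,
        "speed_mps": speed_mps,
        "heading_deg": heading_deg,
    }

def broadcast(message):
    """Broadcast the message to nearby listeners on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(json.dumps(message).encode("utf-8"), BROADCAST_ADDR)

if __name__ == "__main__":
    broadcast(basic_safety_message("veh-001", 43.1566, -77.6088, 13.4, 92.0))
```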
It also will require some significant changes in engineering curricula. “This is a whole new skill set for the automotive companies, which come more from the physical side, and it’s a whole new skill set for the software people, who come from the cyber side,” said UC Berkeley’s Lee. “They don’t understand the dynamics of the physical world, or they don’t know how to integrate that with their software. These kinds of problems make it a wonderful area to be doing research, but it doesn’t make it such a great area to be a practitioner in industry because you’re working with very immature tools.”
The good news is that all hands are on deck. The automotive space is an area where major companies see a big opportunity and are investing heavily in areas such as cognitive computing elements, neural networks and radar systems. They also are more involved than ever in helping companies across the automotive ecosystem build these platforms, including the analog pieces, the high-speed interfaces, the memories and computing elements, and even some of the novel packaging needed to make things work within the form factors and thermal conditions of an automobile.
Most experts don’t question whether autonomous vehicles will begin showing up on roadways around the world. At least for now, the die seems to be cast. But how aggressively this will be rolled out, how consumers will react once these vehicles reach the market, and how quickly the massive automotive ecosystem will adapt to these shifts will remain fuzzy for some time.
From D & C, Rochester, NY:
“Now we have self-driving cars, with the promise that they will bring real benefits to society. Recently, USA Today carried a full-page report on self-driving cars. The personal automobile (motorized transportation in general) is one of the few technology products that has deeply touched day-to-day living for decades and continues to do so. The ‘self-driving car,’ ‘driverless car’ or ‘autonomous car’ is no longer fiction (remember the television show Knight Rider?). Google is committed to making self-driving cars a reality. There are also reports of self-driving trucks in road tests.
With so much activity, some questions come to me as a decades-long, observant user of technology products.
1. What is the need/motivation for this “breakthrough” product? Convenience or performance improvement?
2. Is it possible to entirely replace the human driver? The human drives many kinds of vehicles; in that sense, the human is the universal driver. Is there a universal electronic driver in the wings?
3. Can this product be trusted to be a good personal family transportation vehicle — as good as the present, or better?
4. Are the design, technology, and infrastructure — even if acceptable on test tracks — sustainable, reliable, and safe?
5. What are the methodologies, procedures and acceptance criteria used in identifying and testing exceptions?
6. Can the self-driving car companies put documentation of the design requirements in the public domain? Can the general public give input on “rubber meets the road” issues? The challenge is the convergence of technologies, behaviors and real-time ground uncertainties.
7. Does this product/system demand replacing natural intelligence with electronic intelligence?
8. We experience a driverless or driving-less ride even today when we take rides on buses, trains, aircraft, etc. But with an artificial driver, will the experience be different, and how? Many scenarios are unpredictable in advance, but humanly addressable in real time.
9. Does the realization challenge of a self-driving car product/system belong in the class of “problems of constrained optimization”? If so, will there be an irreducible minimum set of unaddressed critical constraints? Can we afford to ignore this set?
10. Last but not least, are there any projections on cost of ownership?
Some questions are very general and some are somewhat specific. Nevertheless, they are relevant. The impact footprint of the self-driving car is incredibly large and dynamic — both spatially and temporally.”