A first-hand experience with autonomous driving and where it falls short.
Every so often, I undertake a project that reminds me why I love working in semiconductor marketing. Back in August, I hopped behind the wheel of a Tesla Model X to film a video for OneSpin about how formal verification can help designers to satisfy the ISO 26262 automotive safety standard. If you haven’t yet seen the video, you can watch it here: http://bit.ly/2ycK5Yp
The Model X itself was the epitome of luxury—Elon and company really did think of everything when they appointed these vehicles, from stylish instrumentation to soft-as-butter leather upholstery. What really made the day memorable was the chance to try Autopilot 2.0 and let the Tesla drive me around. This was every bit as cool as I had imagined it would be, but I have to admit that it was also fairly nerve-wracking.
We were fortunate that filming proceeded largely without incident; for the most part, Autopilot worked like a dream and enabled us to get some really neat shots for the video. I had three experiences while driving the Tesla, however, that crystallized exactly how important safety will continue to be as autonomous vehicles make the inevitable transition from novelty to ubiquity:
1. Near-death experience… for an unsuspecting rodent. A squirrel dashed out into the road in front of me while we were filming. Now, I’ve heard that Tesla’s automatic braking system engages substantially later than the average human feels that it should, and I want to believe that the vehicle would have stopped in time. But, as it was showing no sign whatsoever of slowing down, this particular human chose not to wait and see if it would kick in. I intervened and was able to take control of the car in time to brake hard, swerve slightly, and avoid squishing the squirrel.
Verification issues raised: How small is too small when it comes to automatic braking being triggered? Would the car stop for someone’s cat? Could it tell the difference between a real animal and a stuffed toy? Would it brake for already-dead roadkill?
2. Time to re-stripe this road—or else. There was a moment when I was driving on Autopilot in the leftmost lane of a street with less-than-perfect lines. Recent roadwork had left many sections patched over and unstriped. As we approached an intersection, the car struggled to recognize where it was supposed to be and diverted us rather sharply into the left turn lane when it should have continued straight. Per the Autopilot instructions, the vehicle should only move into a turn lane after I apply my left turn signal; instead, it decided on its own. Though no harm was done, it was certainly unnerving.
Verification issues raised: How do we prepare self-driving vehicles for real road conditions, since we so seldom encounter ideal ones?
3. That’s NOT the speed limit! The most disconcerting situation occurred as we drove along a rather long road that, as it enters a residential area, transitions from two lanes in each direction with a 45 MPH speed limit to a single lane each way with a drastically reduced limit. We engaged Autopilot in the slower zone, but the Tesla failed to recognize the lower speed limit for that stretch of road and accelerated to 45.
Verification issues raised: How often is often enough to update the vehicle’s system concerning road conditions, varying speed limits, school zones, etc.? How can we ensure, for example, that it will recognize and respond appropriately to a temporary construction zone based on caution signs posted by crews at the scene?
On the whole, I was impressed with the Tesla Model X and Autopilot 2.0, and the whole experience left me looking forward to seeing where self-driving technology will take us (pun intended). How apropos that the process of making a video about safety-critical verification underscored to me, in real ways, just how critical such verification is.