Sensors, Sensors Everywhere

Tackling the data fusion (and media overload) challenge in autonomous driving.


Here’s a statement that will surprise no one: autonomous driving was once again a big theme last month at SAE World Congress in Detroit. This is the case at nearly any automotive or tech show these days. There were dozens of displays related to self-driving tech on the exhibit floor, a fact captured in the various social media feeds and news coverage of the event.

Mentor was part of this story with the announcement of the DRS 360 autonomous driving platform. I was in the Mentor booth for a few days and, along with several others on our technical staff, spent many hours answering questions and running through the platform’s main benefits. (In case you missed the news, those highlights include centralized raw data fusion that meets the low-latency and high-accuracy requirements for full Level 5 autonomous driving. Pretty cool.)

Given the frothy media coverage and marketing campaigns related to self-driving cars these days, it can be difficult to parse and identify the megatrends. Here’s my take on three autonomy-related challenges and how raw data sensor fusion can play a role in solving them.

V2X. There is an ongoing debate over the best standard for reliable, real-time communication between autonomous vehicles, as well as between vehicles and roadside infrastructure. There is no doubt that more information enables better decisions, but it remains an open question which protocol will prevail (DSRC or 5G). Broadly, V2X can be thought of as one more type of sensor data that compute platforms on autonomous vehicles will need to integrate, alongside data from LiDAR, vision and other onboard sensors. In the case of the DRS360, we see V2X as another sensor carrying information that could improve the quality of driving decisions. Accordingly, our plan is to eventually integrate this information into our platform and environmental model, regardless of which protocol wins out.
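To make the "V2X as just another sensor" idea concrete, here is a minimal sketch in Python. All class and field names are invented for illustration and do not reflect the actual DRS360 API; the point is only that V2X messages and onboard sensor detections can flow into one environmental model through the same interface, independent of the delivery protocol.

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    """A single timestamped detection, regardless of where it came from."""
    source: str       # e.g. "lidar", "camera", "v2x"
    timestamp: float  # seconds
    position: tuple   # (x, y) in a shared vehicle-centric frame

@dataclass
class EnvironmentalModel:
    """Toy fused model: every source feeds the same observation list."""
    observations: list = field(default_factory=list)

    def ingest(self, obs: Observation) -> None:
        # A V2X message is treated exactly like onboard sensor data;
        # whichever protocol delivered it (DSRC, 5G) is irrelevant here.
        self.observations.append(obs)

    def objects_near(self, x: float, y: float, radius: float) -> list:
        """Query the fused picture of the environment around a point."""
        return [o for o in self.observations
                if (o.position[0] - x) ** 2 + (o.position[1] - y) ** 2 <= radius ** 2]

model = EnvironmentalModel()
model.ingest(Observation("lidar", 0.10, (12.0, 3.0)))
model.ingest(Observation("v2x", 0.11, (55.0, -2.0)))  # e.g. from roadside infrastructure
nearby = model.objects_near(10.0, 0.0, 10.0)
print([o.source for o in nearby])  # only the lidar detection is within range
```

A real platform would of course add coordinate transforms, time alignment and object tracking on top of this, but the single-ingest-path shape is the idea.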
Deep learning. Rules-based approaches to perception may not be fully scalable. An alternative – deep learning algorithms deployed as neural networks – might better approximate how humans see and interpret their environment. While autonomous vehicles use various types of sensors, today neural networks are used mostly for processing vision sensor data. We believe there is an opportunity to improve performance by applying neural networks to fused data from multiple sensor modalities. Among the technical objectives: ensuring that these algorithms can support safety standards like ISO 26262, and developing procedures for verification and validation of the overall systems built around them.
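As a toy illustration of the fused-data idea (not Mentor's implementation – the feature sizes and layer shape here are invented), a network can take a single input vector built by concatenating features from several sensor modalities, so one model sees all modalities at once instead of vision alone:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented feature sizes for three modalities.
camera = rng.standard_normal(64)  # e.g. image-patch features
lidar = rng.standard_normal(32)   # e.g. local point-cloud statistics
radar = rng.standard_normal(8)    # e.g. range/velocity returns

# Fusion in the simplest possible sense: one joint input vector.
fused = np.concatenate([camera, lidar, radar])  # shape (104,)

# A single dense layer with a ReLU stands in for a real perception network.
W = rng.standard_normal((16, fused.size)) * 0.1
b = np.zeros(16)
hidden = np.maximum(0.0, W @ fused + b)

print(fused.shape, hidden.shape)
```

Real systems fuse far richer raw data streams, but the contrast with a vision-only pipeline is already visible: the learned weights span every modality jointly rather than being trained per sensor.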
Cost. Some of the most popular sensor technologies in today’s autonomous vehicle designs – especially LiDAR deployments – are excessively expensive. This presents a barrier to the rapid, near-term adoption of autonomous vehicles. Sensor costs need to come down, and the best way to do that is to create solid-state sensors with unnecessary processing stripped away. Another way to drive down cost is to offer a fused raw data environmental map that improves algorithm efficiency, further reducing processing requirements in the central compute platform.

Surely the hype cycle is in full bloom when it comes to self-driving cars. Despite the deluge of media coverage, there is still much room in the market, largely because the challenges – technical, regulatory, societal – are so great. Accordingly, we need more people and companies working in this space.

That’s the opinion of Reilly Brennan, executive director of Stanford’s Revs automotive research program, who gave an excellent interview last month on the Autonocast podcast. (The Autonocast guys gave a nice plug to DRS 360 at the end of an earlier episode.)

Brennan’s summary is that “there is a lot of enthusiasm with not a lot of knowledge.” In the case of Mentor I’d say this is only half right.

Yes, we’re just as excited as everyone else about autonomous vehicles. But unlike many of the startups Brennan covers in his influential “Future of Transportation” newsletter and invites to his Stanford class, Mentor has many decades of experience working on complex automotive E/E systems, from wire harness design to connected, embedded software to thermal analysis and simulation of IGBTs and LEDs.

“The tax you have to pay (to being in this market) is that there’s an article every day, every hour with some kind of sweaty-tooth reaction to whatever’s happened in the world,” said Brennan. “We just have to deal with that. The people doing the hard work will continue to do the hard work.”

Mentor belongs on any hard-work, heads-down list in the evolving automotive supply chain. And kudos to Ed and the rest of the Semiconductor Engineering editorial staff for keeping sweaty-tooth coverage of the industry to a minimum. Check out the introduction to the DRS 360 autonomous driving platform here.