Sensor technologies are still evolving, and capabilities are being debated.
With the cost of sensors ranging from $15 to $1,000, carmakers are beginning to question how many sensors are needed for vehicles to operate fully autonomously at least part of the time.
Those sensors, which include image, lidar, radar, ultrasonic, and thermal devices, collect data about the surrounding environment. No single sensor type is sufficient, because each has its limitations. That’s a key driving force behind sensor fusion, which combines multiple types of sensors to achieve safe autonomous driving.
All vehicles at Level 2 or higher depend on sensors to “see” their surroundings and perform tasks such as lane centering, adaptive cruise control, emergency braking, and blind-spot warning. So far, OEMs are taking very different design and deployment approaches.
In May 2022, Mercedes-Benz introduced the first vehicle capable of Level 3 autonomous driving in Germany. Level 3 autonomy is offered as an option on the S-Class and the EQS, with U.S. introduction planned for 2024. According to the company, DRIVE PILOT builds on the existing driving assistance package (radar and camera) and adds new sensors, including lidar, an advanced stereo camera in the front window, and a multi-purpose camera in the rear window. Microphones (notably for detecting emergency vehicles) and a moisture sensor in the front wheelhouse also have been added. In total, 30 sensors are installed to capture the data needed for safe autonomous driving.
Tesla is taking a different path. In 2021, the company announced that its camera-only Tesla Vision approach to autonomous driving would be implemented on the Model 3 and Model Y, followed by the Model S and Model X in 2022. Tesla also has decided to remove the ultrasonic sensors.
Sensor limitations
Among the challenges facing autonomous driving design today are the limitations of different sensors. To achieve safe autonomous driving, sensor fusion may be needed. The key questions are not only how many sensors, what types, and where to deploy them, but also how AI/ML technology should interact with sensors to analyze data for optimal driving decision making.
Fig. 1: To overcome sensor limitations, sensor fusion may be needed to combine multiple sensors for autonomous driving to achieve optimal performance and safety. Source: Cadence
“Autonomous driving extensively uses AI technologies,” said Thierry Kouthon, technical product manager for security IP at Rambus. “Autonomous driving, and even entry-level ADAS functions, require a vehicle to exhibit a level of environmental awareness equivalent to, or better than, a human driver. First, the vehicle must recognize other cars, pedestrians, and roadside infrastructure and identify their correct position. This requires pattern recognition capabilities that AI deep learning techniques address well. Visual pattern recognition is an advanced deep learning domain that vehicles use intensively. Also, the vehicle must be able to compute at all times its optimal trajectory and speed. This requires route planning capabilities that AI also addresses well. With that, lidar and radar provide distance information essential for properly reconstructing the vehicular environment.”
Sensor fusion, which combines the information from different sensors to better understand the vehicle environment, continues to be an active area of research.
“Each type of sensor has limitations,” Kouthon said. “Cameras are excellent for object recognition but provide poor distance information, and image processing requires substantial computing resources. In contrast, lidar and radar provide excellent distance information but poor definition. Additionally, lidar does not work well in poor weather conditions.”
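To make the complementary-strengths point concrete, the sketch below pairs a camera detection (which supplies the object class) with a radar return (which supplies the range) by matching bearings. It is a minimal, object-level illustration only; the class names, fields, and matching threshold are hypothetical assumptions, not any OEM's actual fusion pipeline.

```python
# Minimal illustrative sketch of late (object-level) sensor fusion:
# the camera contributes the object class, the radar contributes range.
# All class names, fields, and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class CameraDetection:
    label: str          # e.g. "pedestrian", "car"
    bearing_deg: float  # angle to the object, derived from image geometry

@dataclass
class RadarReturn:
    bearing_deg: float
    range_m: float

@dataclass
class FusedObject:
    label: str
    bearing_deg: float
    range_m: float

def fuse(cameras: list[CameraDetection],
         radars: list[RadarReturn],
         max_bearing_gap_deg: float = 2.0) -> list[FusedObject]:
    """Associate each camera detection with the nearest radar return in bearing."""
    fused = []
    for cam in cameras:
        best = min(radars,
                   key=lambda r: abs(r.bearing_deg - cam.bearing_deg),
                   default=None)
        if best and abs(best.bearing_deg - cam.bearing_deg) <= max_bearing_gap_deg:
            fused.append(FusedObject(cam.label, cam.bearing_deg, best.range_m))
    return fused

# Example: a pedestrian seen by the camera at 10.5 degrees is paired with
# the radar return at 10.2 degrees, giving both class and distance.
objects = fuse([CameraDetection("pedestrian", 10.5)],
               [RadarReturn(10.2, 23.7), RadarReturn(-5.0, 41.0)])
print(objects)
```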
How many sensors do we really need?
There is no simple answer to the question of how many sensors an autonomous driving system needs, and OEMs are still trying to figure that out. The use case matters as well. Trucks navigating open highways and robo-taxis operating in dense cities, for example, have very different needs.
“This is a hard calculation, as each automotive OEM has their own architecture for securing the vehicle by providing better spatial positioning, longer range with high visibility, and the capability to identify and classify objects and then differentiate between various objects,” said Amit Kumar, Cadence’s director of product management and marketing for Tensilica Vision, radar and lidar DSPs. “It also depends on what levels of autonomy a vehicle manufacturer decides to enable (for example, to provide breadth). In short, to enable partial autonomy, a minimum number of sensors could be 4 to 8 of various types. For full autonomy, 12+ sensors are used today.”
Kumar noted that in the case of Tesla, there are 20 sensors (8 camera sensors plus 12 ultrasonic sensors for Level 3 or below), with no lidar or radar. “The company strongly believes in computer vision, and its sensor suite works well for L3 autonomy. The media has reported that Tesla might be bringing in radar to improve self-driving.”
Zoox has implemented four lidar sensors, plus a combination of camera and radar sensors. The vehicle is fully autonomous, with no driver inside, and is targeted at well-mapped, well-understood routes. Commercial deployment has not yet begun, but when it does, it will be limited to a restricted use case (not as broad as a passenger vehicle).
Nuro’s self-driving delivery vehicle, for which aesthetics are less important, uses a four-sensor, 360-degree camera system, a 360-degree lidar sensor, four radar sensors, and ultrasonic sensors.
There is no simple formula for implementing these systems.
“The number of sensors you need is the number of sensors that is an acceptable level of risk for the organization, and is also application-dependent,” said Chris Clark, senior manager for Automotive Software & Security in Synopsys’ Automotive Group. “If you are developing robo-taxis, not only do they need the sensors for road safety, but they also need sensors inside the vehicle to monitor what that passenger is doing within the vehicle for passenger safety. In this scenario, we’re going to be in a high-population and high-urban-density area that has rather unique characteristics as opposed to a vehicle that’s intended for highway driving, where you have much longer distances and more room to react. On the highway, there is less possibility of intrusions into the roadway. I don’t think there is a set rule that you must have, say, three different types of sensors and three different cameras to cover different angles for all autonomous vehicles.”
Still, the number of sensors will depend on the use case the vehicle is intended to address.
“In the example of the robo-taxi, lidar and regular cameras, as well as ultrasonics or radar, will have to be used, as there’s too much density to deal with,” Clark said. “Additionally, we would need to include a sensor for V2X, where data flowing into the vehicle will align with what the vehicle is seeing in the surroundings. In a highway trucking solution, different types of sensors will be used. Ultrasonic isn’t as beneficial at highway speeds, unless we’re doing something like teaming, but that wouldn’t be a forward-looking sensor. Instead, it would be potentially forward- and rear-looking sensors so that we have connectivity to all of the team assets. But lidar and radar become much more important because of the distances and the range that truck has to take into account when traveling at highway speeds.”
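Clark's point that the suite is dictated by the use case can be summarized in a simple configuration sketch. The sensor counts below are purely illustrative assumptions chosen to reflect the trade-offs he describes (dense urban robo-taxi versus highway trucking), not recommendations or any vendor's actual bill of materials.

```python
# Illustrative only: hypothetical sensor suites per use case, reflecting the
# trade-offs described above (dense urban robo-taxi vs. highway trucking).
SENSOR_SUITES = {
    "urban_robotaxi": {
        "camera": 8,       # 360-degree coverage plus interior passenger monitoring
        "lidar": 2,        # dense scenes, short reaction distances
        "radar": 4,
        "ultrasonic": 12,  # low-speed maneuvering in traffic
        "v2x_modem": 1,    # align infrastructure data with what the vehicle sees
    },
    "highway_truck": {
        "camera": 6,
        "lidar": 1,        # long-range forward sensing
        "radar": 4,        # range matters most at highway speeds
        "ultrasonic": 0,   # little benefit at highway speeds unless platooning/teaming
        "v2x_modem": 1,
    },
}

def total_sensors(use_case: str) -> int:
    return sum(SENSOR_SUITES[use_case].values())

for use_case in SENSOR_SUITES:
    print(use_case, total_sensors(use_case))
```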
Another consideration is the level of analysis required. “With so much data to process, we must decide how much of that data is important,” he said. “This is where it becomes interesting regarding the sensors’ type and capability. For example, if the lidar sensors can do local analysis early in the cycle, this decreases the amount of data streamed back to sensor fusion for additional analysis. Reducing the amount of data in turn lowers the overall amount of computational power and cost of system design. Otherwise, additional processing would be required in the vehicle either in the form of a consolidated compute environment or a dedicated ECU focused on sensor meshing and analysis.”
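As a rough illustration of the local-analysis idea Clark describes, the sketch below reduces a raw lidar point cloud to per-cell centroids at the sensor, so that only compact summaries travel back to the fusion stage. The grid-based reduction and all parameters are illustrative assumptions, not a description of any particular lidar's on-board processing.

```python
# Illustrative sketch of local (at-the-sensor) analysis: instead of streaming
# every lidar point to the central fusion stage, the sensor condenses points
# into per-cell summaries and sends only those. Names and values are hypothetical.
import numpy as np

def summarize_point_cloud(points: np.ndarray, cell_size: float = 2.0) -> np.ndarray:
    """Reduce an (N, 3) point cloud to one centroid per occupied 2D grid cell."""
    cells = np.floor(points[:, :2] / cell_size).astype(int)     # grid index per point
    _, inverse = np.unique(cells, axis=0, return_inverse=True)  # group points by cell
    inverse = inverse.ravel()
    centroids = np.array([points[inverse == i].mean(axis=0)
                          for i in range(inverse.max() + 1)])
    return centroids

# A frame of 20,000 random points collapses to roughly 2,500 cell centroids here;
# real object-level summaries reduce the downstream data stream far more.
frame = np.random.uniform(-50, 50, size=(20_000, 3))
print(frame.shape, "->", summarize_point_cloud(frame).shape)
```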
Cost always an issue
Sensor fusion can be expensive. In the early days, a lidar system consisting of multiple units could cost as much as $80,000, with the price driven by the mechanical parts in the unit. Today the cost is much lower, and some manufacturers project that it eventually could fall to $200 to $300 per unit. New and emerging thermal sensors will be in the range of a few thousand dollars each. Overall, there will be continued pressure on OEMs to reduce the total cost of sensor deployment, and using more cameras in place of lidar is one way to reduce manufacturing costs.
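A back-of-the-envelope tally shows why camera-heavy suites are attractive on cost grounds. The lidar and thermal prices below come from the projections cited above; the camera and radar unit prices and all sensor counts are assumptions for illustration only.

```python
# Back-of-the-envelope comparison using the per-unit prices cited above
# ($200-$300 projected for lidar, a few thousand dollars for thermal) plus
# assumed camera and radar prices; all sensor counts are purely illustrative.
CAMERA_UNIT = 60          # assumed
LIDAR_UNIT = 250          # midpoint of the $200-$300 projection
THERMAL_UNIT = 2_500      # "a few thousand dollars"
RADAR_UNIT = 100          # assumed

lidar_heavy  = 6 * CAMERA_UNIT + 3 * LIDAR_UNIT + 1 * THERMAL_UNIT + 4 * RADAR_UNIT
camera_heavy = 10 * CAMERA_UNIT + 5 * RADAR_UNIT

print(f"lidar-heavy suite:  ${lidar_heavy:,}")   # $4,010
print(f"camera-heavy suite: ${camera_heavy:,}")  # $1,100
```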
“In an urban environment, the basic definition of safety is the elimination of all avoidable collisions,” said David Fritz, vice president of hybrid and virtual systems at Siemens Digital Industries Software. “The minimum number of sensors required is use-case dependent. Some believe that in the future, smart city infrastructures will be sophisticated and ubiquitous, reducing the need for onboard sensing in urban environments.”
Vehicle-to-vehicle communication may have an impact on sensors, as well.
“Here, the number of onboard sensors may be reduced, but we’re not there yet,” Fritz observed. “Additionally, there will always be situations where the AV will have to assume that all external information becomes unavailable due to power failure or some other outage. So some set of sensors will always need to be on board the vehicle, not just in urban areas but in rural areas as well. A lot of the designs we’ve been working on require eight cameras on the outside of the vehicle and a couple of cameras inside. With two properly calibrated cameras in the front, we can achieve low-latency, high-resolution stereo vision providing the depth of an object, thereby reducing the need for radar. We do that on the front, back, and both sides of the vehicle for a full 360° perspective.”
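The stereo approach Fritz describes rests on the standard depth-from-disparity relation: depth equals focal length times baseline divided by disparity. The sketch below applies it with illustrative numbers; the focal length, baseline, and disparity values are assumptions, not figures from any production system.

```python
# Depth from a calibrated stereo pair: Z = f * B / d, where f is the focal length
# in pixels, B the baseline between the two cameras in meters, and d the disparity
# in pixels of the same object in the left and right images. Values are illustrative.
def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    if disparity_px <= 0:
        raise ValueError("object at infinity or mismatched correspondence")
    return focal_px * baseline_m / disparity_px

# Example: 1400 px focal length, 30 cm baseline, 12 px disparity -> ~35 m range,
# the kind of depth estimate that reduces the need for a separate radar.
print(round(stereo_depth_m(1400, 0.30, 12), 1))
```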
With all cameras performing object detection and classification, critical information will be passed into the central compute system to make control decisions.
“If infrastructure or other vehicle information is available, it is fused with information from the onboard sensors to generate a more holistic 3D view, enabling better decision making,” Fritz said. “In the interior, additional cameras serve the purpose of driver monitoring, and also detecting occupancy conditions like objects left behind. Potentially adding a low-cost radar to handle bad weather cases, such as foggy or rainy conditions, is a premium addition to the sensor suite. We’re not seeing a great deal of lidar being used recently. In some cases, lidar performance is impacted by echoes and reflections. Initially, autonomous driving prototypes relied heavily on GPU processing of lidar data, but recently smarter architectures have been trending more toward high-resolution, high-FPS cameras with distributed architectures that are better optimized to the flow of data across the system.”
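A simplified sketch of that fusion step is shown below: externally reported objects (from infrastructure or other vehicles) are transformed into the vehicle's frame and merged with onboard tracks unless an onboard track already covers the same position. The data structures, frame conversion, and matching radius are illustrative assumptions, not an actual V2X stack.

```python
# Illustrative sketch of fusing infrastructure (V2X) object reports with onboard
# detections: external objects are converted into the vehicle frame and added
# unless an onboard track already covers that position. Names are hypothetical.
import math
from dataclasses import dataclass

@dataclass
class TrackedObject:
    label: str
    x_m: float   # position, forward axis
    y_m: float   # position, left axis

def to_vehicle_frame(obj: TrackedObject, ego_x: float, ego_y: float,
                     ego_heading_rad: float) -> TrackedObject:
    """Convert a map-frame V2X report into the vehicle's local frame."""
    dx, dy = obj.x_m - ego_x, obj.y_m - ego_y
    c, s = math.cos(-ego_heading_rad), math.sin(-ego_heading_rad)
    return TrackedObject(obj.label, c * dx - s * dy, s * dx + c * dy)

def merge(onboard: list[TrackedObject], external: list[TrackedObject],
          match_radius_m: float = 2.0) -> list[TrackedObject]:
    """Keep onboard tracks; add external objects not already seen onboard."""
    merged = list(onboard)
    for ext in external:
        if all(math.hypot(ext.x_m - t.x_m, ext.y_m - t.y_m) > match_radius_m
               for t in onboard):
            merged.append(ext)
    return merged

# Example: a pedestrian reported by a roadside unit at map position (105, 42)
# is merged with an onboard car track after conversion into the vehicle frame.
onboard = [TrackedObject("car", 12.0, -1.5)]
external = [to_vehicle_frame(TrackedObject("pedestrian", 105.0, 42.0),
                             ego_x=100.0, ego_y=40.0, ego_heading_rad=0.0)]
print(merge(onboard, external))
```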
Optimizing sensor fusion can be complicated. How do you know which combination delivers the best performance? Besides functional testing, OEMs rely on companies such as Ansys and Siemens for modeling and simulation solutions that test the outcomes of various sensor combinations to achieve optimal performance.
Augmenting technologies impact future sensor design
Augmenting technologies such as V2X, 5G, advanced digital mapping, and GPS in smart infrastructure will enable autonomous driving with fewer sensors on board. But for these technologies to improve autonomous driving, they will need support from the automotive industry as a whole, as well as smart city development.
“Various augmenting technologies serve different purposes,” noted Frank Schirrmeister, VP of Solutions and Business Development at Arteris IP. “Developers often combine several to create safe and convenient user experiences. For instance, digital twins of map info for path planning can create safer experiences in conditions with limited visibility to augment in-car, local decisions based on sensor information. V2V and V2X information can supplement information available locally within the car to make safety decisions, adding redundancy and creating more data points to base safe decisions on.”
Further, vehicle-to-everything promises real-time collaboration between vehicles and roadside infrastructure, which requires technologies such as Ultra Reliable Low Latency Communications (URLLC).
“These requirements result in the applications of various AI technologies for traffic prediction, 5G resource allocation, congestion control, etc.,” said Kouthon. “In other words, AI can optimize and reduce the heavy toll that autonomous driving will have on the network infrastructure. We expect OEMs to build autonomous vehicles using software-defined vehicle architectures, where ECUs are virtualized and are updated over the air. Digital twin technologies will be essential to test software and updates on a cloud simulation of the vehicle that is very close to the real vehicle.”
Conclusion
When finally implemented, Level 3 autonomous driving may require 30+ sensors or a dozen cameras, depending on an OEM’s architecture. But the jury is still out on which approach is safer, and on whether autonomous driving sensor systems will provide the same level of safety in an urban environment as on the freeway.
As sensor costs come down over the next few years, the door could open to new sensor types being added to the mix to increase safety in bad weather. But it may be a long time before OEMs standardize on a set of sensors considered sufficient to ensure safety under all conditions and corner cases.