Enabling “Triple Vision” – LiDAR Technology For Safe Driving

Modern cars need a sophisticated system of vehicle sensors and related processing to support new ADAS features.


Cars are becoming safer, thanks to Advanced Driver Assistance Systems (ADAS) features such as automatic emergency braking (AEB) and driver monitoring systems.

These features are becoming ever more sophisticated, making automated driving more robust. For instance, AEB began by merely watching the cars ahead; now it also detects pedestrians, cyclists, weaving traffic, and objects in the road. Recognizing the importance of AI in assisting drivers, 20 automakers have agreed to equip most new passenger vehicles with low-speed AEB and forward-collision warning by September 2022.

To make that happen, cars need a sophisticated system of vehicle sensors and related processing.

Beyond camera and radar, a third sensor – light detection and ranging (LiDAR) – is gaining popularity. Like radar, LiDAR measures how far away an object is, but it does so with laser light rather than radio waves. And, as with camera and radar images, objects in the road can be detected in LiDAR data using convolutional neural networks (CNNs).
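To give a feel for the basic ranging math, here is a minimal sketch – not any vendor's actual pipeline – of how a single time-of-flight LiDAR return can be turned into a range and a 3D point. The function and field names are illustrative only.

```cpp
#include <cmath>
#include <cstdio>

// Illustrative only: convert one time-of-flight LiDAR return into a 3D point.
constexpr double SPEED_OF_LIGHT = 299792458.0; // m/s

struct Point3D {
    double x, y, z;   // meters, sensor frame
    double intensity; // return strength (arbitrary units)
};

// Range from round-trip time: the pulse travels out and back, hence the /2.
double rangeFromTimeOfFlight(double roundTripSeconds) {
    return SPEED_OF_LIGHT * roundTripSeconds / 2.0;
}

// Spherical (range, azimuth, elevation) to Cartesian coordinates.
Point3D toPoint(double range_m, double azimuth_rad, double elevation_rad, double intensity) {
    const double horiz = range_m * std::cos(elevation_rad);
    return { horiz * std::cos(azimuth_rad),
             horiz * std::sin(azimuth_rad),
             range_m * std::sin(elevation_rad),
             intensity };
}

int main() {
    // A return arriving 400 ns after the pulse left corresponds to roughly 60 m.
    const double range = rangeFromTimeOfFlight(400e-9);
    const Point3D p = toPoint(range, 0.1, 0.02, 35.0);
    std::printf("range = %.2f m, point = (%.2f, %.2f, %.2f)\n", range, p.x, p.y, p.z);
    return 0;
}
```

Repeating this conversion for every laser return in a scan is what produces the dense point cloud described below.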

The difference is that a LiDAR sensor generates a 3D point cloud (a set of data points in space) containing thousands of points per frame, so its measurements of an object's position and shape are far more precise. LiDAR also adds redundancy to the overall ADAS/automated driving (AD) system. For instance, where a camera might miss an object because of sun glare or oncoming headlights, LiDAR is unaffected by that glare and can still detect a person in the middle of the road.

But LiDAR presents two big challenges:

  1. High compute need: processing the rich LiDAR data makes LiDAR technology much more expensive than its counterparts, camera and radar, which have been in the automotive industry significantly longer.
  2. Varying and evolving designs: there are many types of LiDAR, including solid-state scanning, solid-state flash, rotating MEMS, FMCW, and more.

Xilinx is uniquely positioned to address both challenges. Our powerful DSP capabilities, coupled with flexible I/O configurations and programmable logic, are a good match for the high compute needs of many LiDAR makers. In addition, our devices contain programmable hardware (HW) that can adapt to any LiDAR sensor configuration, making them ideal for varying and evolving designs. Because LiDAR technology is relatively new and the ADAS/AD market has not converged on a common approach, there is no clear ASSP/ASIC device architecture for it.

In addition to meeting LiDAR's needs for high compute and evolving designs, Xilinx solutions also address cost and power. FPGAs enable true HW-based processing pipelines for multiple sensor RX channels, allowing the channels to be processed simultaneously and independently, each with its own objectives. They also enable integrated HW acceleration for post-detection processing – e.g., point cloud generation and grid mapping – and an ideal partitioning between sensor software (SW) and the associated HW acceleration functions, using the high-bandwidth connectivity between the processing system and the programmable logic.
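As a rough illustration of the kind of post-detection step mentioned above, here is a minimal sketch – in plain C++, not HLS, and not any vendor's actual design – that bins a point cloud into a 2D occupancy grid. The grid size, cell size, and thresholds are made-up parameters; on an FPGA, each RX channel could feed its own instance of a pipeline like this in parallel.

```cpp
#include <vector>
#include <cstdio>

// Illustrative only: bin LiDAR points into a 2D occupancy grid.
struct Point3D { double x, y, z; };

class OccupancyGrid {
public:
    // Grid covers [-range_m, +range_m] in x and y at the given cell size.
    OccupancyGrid(double range_m, double cell_m)
        : range_(range_m), cell_(cell_m),
          dim_(static_cast<int>(2.0 * range_m / cell_m)),
          hits_(dim_ * dim_, 0) {}

    void addPoint(const Point3D& p) {
        // Ignore near-ground returns and points outside the grid (made-up thresholds).
        if (p.z < 0.2) return;
        const int ix = static_cast<int>((p.x + range_) / cell_);
        const int iy = static_cast<int>((p.y + range_) / cell_);
        if (ix < 0 || ix >= dim_ || iy < 0 || iy >= dim_) return;
        ++hits_[iy * dim_ + ix];
    }

    // A cell counts as "occupied" once enough points have landed in it.
    bool occupied(int ix, int iy, int threshold = 3) const {
        return hits_[iy * dim_ + ix] >= threshold;
    }

private:
    double range_, cell_;
    int dim_;
    std::vector<int> hits_;
};

int main() {
    OccupancyGrid grid(50.0 /*m*/, 0.5 /*m per cell*/);
    // A small cluster of points about 10 m ahead, as if from an object in the road.
    for (int i = 0; i < 5; ++i)
        grid.addPoint({10.0 + 0.1 * i, 0.0, 0.8});
    std::printf("cell ahead occupied: %s\n",
                grid.occupied(120, 100) ? "yes" : "no");
    return 0;
}
```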

The integrated solution that FPGAs enable helps drive cost down. In addition, parallel HW processing reduces the clock speed required, which reduces power. The integrated solution also provides a unique opportunity to update not only the sensor SW but, thanks to re-programmability, the HW as well.

Customers

ZVISION, a startup that develops solid-state LiDAR technologies, chose Xilinx as the HW processing platform for both LiDAR signal processing and point cloud-based AI algorithms. Our devices met their requirements for a high degree of customization, continuously evolving signal-processing algorithms, and parallel computing power for AI processing.

RoboSense, a Chinese startup, chose a Xilinx device enabled with point-cloud AI object recognition over a mature NVIDIA/Jetson TX2-based solution. They valued our throughput and latency advantages, as well as cost efficiency. What’s more, their RS-LiDAR-M1 (with point-cloud object recognition using the Xilinx DPU) won the CES 2020 Innovation Award!

Xilinx solutions are positioned to address high compute needs, evolving designs for LiDAR, and cost and power issues. Major automakers are just as invested in LiDAR as we are in powering this unique and powerful technology.

Clearly, the proof is in the adoption of our devices for LiDAR. In addition to RoboSense and ZVISION, Xilinx’s technology is used in LiDAR solutions developed by Baraja, Benewake, Blickfeld, Hesai, Innovusion, Opsys, OURS, Ouster, Phantom Intelligence, Pointcloud, SureStar, and many others. And these solutions are deployed in many vehicles – perhaps even yours.


