
Data Fusion Scheme for Object Detection & Trajectory Prediction for Autonomous Driving


A new research paper titled “Multi-View Fusion of Sensor Data for Improved Perception and Prediction in Autonomous Driving” was published by researchers at Uber.

Abstract

“We present an end-to-end method for object detection and trajectory prediction utilizing multi-view representations of LiDAR returns. Our method builds on a state-of-the-art Bird’s-Eye View (BEV) network that fuses voxelized features from a sequence of historical LiDAR data as well as a rasterized high-definition map to perform detection and prediction tasks. We extend the BEV network with additional LiDAR Range-View (RV) features that use the raw LiDAR information in its native, non-quantized representation. The RV feature map is projected into BEV and fused with the BEV features computed from the LiDAR and high-definition map. The fused features are then further processed to output the final detections and trajectories, within a single end-to-end trainable network. In addition, the RV fusion of LiDAR and camera is performed in a straightforward and computationally efficient manner using this framework. The proposed approach improves the state-of-the-art on proprietary large-scale real-world data collected by a fleet of self-driving vehicles, as well as on the public nuScenes data set.”
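The central operation described in the abstract, projecting per-point Range-View (RV) features into the BEV grid and fusing them with the existing BEV features, can be illustrated with a short sketch. The snippet below is a minimal PyTorch illustration, not the authors' implementation; the grid resolution, channel counts, averaging of points that land in the same cell, and fusion by concatenation followed by a 3x3 convolution are all assumptions made for this example.

```python
# Illustrative sketch (not the paper's code): scatter per-point RV features
# into a BEV grid, then fuse with BEV features. Shapes and the fusion op
# (concatenation + 3x3 conv) are assumptions for this example.
import torch
import torch.nn as nn


def project_rv_to_bev(rv_features, points, bev_shape, bev_range):
    """Scatter per-point RV features into a BEV grid.

    rv_features: (N, C) one feature vector per LiDAR return (from the RV map).
    points:      (N, 3) LiDAR returns in the ego frame (x forward, y left, z up).
    bev_shape:   (H, W) output grid size in cells.
    bev_range:   (x_min, x_max, y_min, y_max) metric extent of the grid.
    """
    H, W = bev_shape
    x_min, x_max, y_min, y_max = bev_range
    C = rv_features.shape[1]

    # Map metric coordinates to integer grid cells, keep in-range points.
    ix = ((points[:, 0] - x_min) / (x_max - x_min) * H).long()
    iy = ((points[:, 1] - y_min) / (y_max - y_min) * W).long()
    valid = (ix >= 0) & (ix < H) & (iy >= 0) & (iy < W)
    ix, iy, feats = ix[valid], iy[valid], rv_features[valid]

    # Average the features of points that fall into the same BEV cell.
    bev = torch.zeros(C, H * W)
    count = torch.zeros(1, H * W)
    flat = ix * W + iy
    bev.index_add_(1, flat, feats.t())
    count.index_add_(1, flat, torch.ones(1, flat.numel()))
    return (bev / count.clamp(min=1.0)).view(C, H, W)


class BEVFusion(nn.Module):
    """Fuse projected RV features with BEV features (assumed: concat + conv)."""

    def __init__(self, bev_channels, rv_channels):
        super().__init__()
        self.mix = nn.Conv2d(bev_channels + rv_channels, bev_channels,
                             kernel_size=3, padding=1)

    def forward(self, bev_features, rv_in_bev):
        # Channel-wise concatenation, then mix back down to BEV width.
        return self.mix(torch.cat([bev_features, rv_in_bev], dim=1))


# Example usage with random stand-in data:
points = torch.rand(5000, 3) * 100.0 - 50.0       # hypothetical returns, ±50 m
rv_feats = torch.rand(5000, 16)                   # per-point RV-branch features
rv_bev = project_rv_to_bev(rv_feats, points, (200, 200), (-50.0, 50.0, -50.0, 50.0))

fusion = BEVFusion(bev_channels=64, rv_channels=16)
bev_feats = torch.rand(1, 64, 200, 200)           # BEV-branch features
out = fusion(bev_feats, rv_bev.unsqueeze(0))      # (1, 64, 200, 200)
```

In the paper the fused features then feed the detection and prediction heads of a single end-to-end trainable network; the averaging and concatenation choices above are only one plausible way to realize the projection-and-fusion step it describes.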

Find the open access paper here. It was accepted for publication at the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2022.

Sudeep Fadadu, Shreyash Pandey, Darshan Hegde, Yi Shi, Fang-Chieh Chou, Nemanja Djuric, Carlos Vallespi-Gonzalez; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2022, pp. 2349-2357.

Visit Semiconductor Engineering’s Technical Paper library here and discover many more academic papers from the chip industry.


