3D Object Detection with Temporal LiDAR data for Autonomous Driving

ABOUT THIS PROJECT

At a glance

A crucial task in autonomous vehicle applications is the ability to accurately detect and infer details about surrounding objects in a timely fashion. Among the sensors available for autonomous driving, LiDAR offers the highest distance accuracy and works under low-light conditions such as nighttime. The availability of labeled temporal LiDAR datasets from Waymo and nuTonomy as of mid-2019 has opened up exciting research opportunities for developing temporal LiDAR algorithms for 3D object detection.

In this project, we develop novel 3D object detection algorithms that exploit the inherent temporal nature of LiDAR data. In doing so, we pay particular attention to pedestrian detection, since pedestrians are the most important class to detect and yet the most challenging for existing methods. We consider two approaches to designing deep learning object detection systems that utilize temporal LiDAR data. The first is to modify an existing model, for example by inserting recurrent networks such as a ConvLSTM or a fully connected LSTM. The second is to start from scratch and systematically design new temporally inspired deep learning architectures matched to the unique characteristics of temporal LiDAR data. Either way, parameters such as the grouping of LiDAR frames and the number of past frames used during inference must be carefully optimized. We will also investigate architectures for sensor fusion with temporal LiDAR data, as well as platform-aware model optimizations. Our preliminary results on a modification of the PointPillars architecture show an 8% improvement in pedestrian detection.
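To make the first approach concrete, the minimal sketch below (written in PyTorch as one plausible framework choice; the project does not specify an implementation) inserts a ConvLSTM cell between a PointPillars-style pillar feature net, which turns each LiDAR sweep into a bird's-eye-view pseudo-image, and the detection head. Names such as ConvLSTMCell and fuse_temporal_features, and the 64-channel, 128x128 feature shape, are illustrative assumptions rather than details taken from the project.

    # Sketch only: assumes BEV pseudo-image features of shape [batch, channels, H, W]
    # per LiDAR frame, as produced by a PointPillars-style pillar feature net.
    import torch
    import torch.nn as nn

    class ConvLSTMCell(nn.Module):
        """Single ConvLSTM cell operating on 2D BEV feature maps."""
        def __init__(self, in_channels, hidden_channels, kernel_size=3):
            super().__init__()
            self.hidden_channels = hidden_channels
            # One convolution produces all four LSTM gates at once.
            self.gates = nn.Conv2d(in_channels + hidden_channels,
                                   4 * hidden_channels,
                                   kernel_size,
                                   padding=kernel_size // 2)

        def forward(self, x, state):
            h, c = state
            i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
            i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
            c = f * c + i * torch.tanh(g)   # update cell memory with the current frame
            h = o * torch.tanh(c)           # spatial hidden state fed to the detection head
            return h, c

    def fuse_temporal_features(frame_features, cell):
        """Run the ConvLSTM over per-frame BEV features, oldest to newest,
        and return the fused feature map for the most recent frame."""
        b, _, height, width = frame_features[0].shape
        h = torch.zeros(b, cell.hidden_channels, height, width)
        c = torch.zeros_like(h)
        for x in frame_features:            # e.g. the last N LiDAR sweeps
            h, c = cell(x, (h, c))
        return h

    # Usage: fuse three consecutive frames of 64-channel, 128x128 pseudo-images.
    cell = ConvLSTMCell(in_channels=64, hidden_channels=64)
    frames = [torch.randn(2, 64, 128, 128) for _ in range(3)]
    fused = fuse_temporal_features(frames, cell)   # shape: [2, 64, 128, 128]

In a sketch like this, the number of past sweeps passed to fuse_temporal_features and how they are grouped correspond directly to the parameters the project proposes to optimize.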

Principal investigators

Avideh Zakhor

Themes

LiDAR, 3D object detection, pedestrian detection, recurrent networks, fusion