Domain Adaptation for Realistic LiDAR Data Synthesis
ABOUT THE PROJECT
At a glance
Recently, Deep Neural Networks (DNNs) have achieved remarkable progress on many machine learning and computer vision challenges. However, training DNNs requires large labeled datasets, which are expensive and time-consuming to obtain. Training on synthetic data with automatically generated annotations, rather than on real data, obviates the need for manual labeling. However, due to dataset bias, or domain shift [Tzeng2015], models learned from synthetic data do not reliably generalize to real data [Shrivastava2017]. For example, the overall per-pixel label accuracy of a state-of-the-art semantic segmentation model drops from 93% (when trained on real imagery) to 54% (when trained only on synthetic data) [Hoffman2017]. How to effectively adapt from the synthetic domain to the real domain remains an open problem.
In this project, we focus on applying domain adaptation techniques to the LiDAR data synthesis problem. LiDAR is a reliable sensor commonly used in autonomous driving applications. Recent work has begun to explore using DNNs to perform perception tasks on LiDAR point clouds [Wu2017]. However, due to the cost of LiDAR sensors and the particular difficulty of labeling 3D bounding boxes in LiDAR point clouds, LiDAR datasets are far less available than image-based datasets. In [Wu2017], we utilized GTA-V as a preliminary LiDAR simulator to synthesize point clouds for training neural networks, but its efficacy was limited by several issues: a) unrealistic physical models, b) the absence of noise, and c) the difficulty of simulating variable beam intensity. As with photorealistic rendering in computer graphics, developing a sophisticated, physically based LiDAR simulator that addresses the above issues is very difficult. Instead, we propose to apply domain adaptation techniques to transfer GTA-synthesized data toward real-world data, with intensities that match the real-world distribution, and ultimately to improve DNN performance on real-world datasets.
Research Agenda and Challenges:
Given a large volume of synthetic training data, some natural questions arise: How can models well trained on synthetic data be adapted or generalized to real-world data? How should different modalities of synthetic data be combined or fused, and what is the best fusion strategy? To answer these questions, we envision a specific scenario in which: a) large-scale LiDAR and/or other synthetic data are continuously gathered from autonomous vehicles, with ground truth labels generated automatically; and b) novel modalities of synthetic data may become available as new sensors are developed. We will focus on three aspects of domain adaptation:
Effective enhancement techniques over the state of the art: Current domain adaptation methods are sensitive to artifacts and are unstable during training. Effectively overcoming these limitations will be necessary before these methods can be applied to LiDAR adaptation with good performance. We plan to develop novel techniques by designing appropriate loss functions and training strategies, such as a self-regularization term and a local adversarial loss, to enable existing methods to adapt well to the LiDAR domain.
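To make these two ingredients concrete, the sketch below (a rough illustration in PyTorch, not the project's actual implementation; all network shapes and the weight `lam` are hypothetical) shows a patch-based discriminator, which scores local regions of a LiDAR range image rather than the whole scan, together with a refiner objective that combines the resulting local adversarial loss with an L1 self-regularization term keeping the refined output close to its synthetic input, so that the automatically generated labels remain valid:

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Scores overlapping local patches instead of the whole input,
    producing a grid of real/fake logits (a 'local' adversarial loss)."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 3, padding=1),  # one logit per local patch
        )

    def forward(self, x):
        return self.net(x)

def refiner_loss(refined, synthetic, patch_logits, lam=0.1):
    """Local adversarial loss (try to fool every patch) plus L1
    self-regularization (stay close to the synthetic input)."""
    adv = nn.functional.binary_cross_entropy_with_logits(
        patch_logits, torch.ones_like(patch_logits))
    self_reg = (refined - synthetic).abs().mean()
    return adv + lam * self_reg
```

Because every spatial location of the discriminator output is a separate real/fake decision, the adversarial signal is localized, which is one way to reduce the large-scale artifacts mentioned above.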
Different levels of domain adaptation: Besides feature-level consistency, which aligns the features extracted from the source and target domains, other consistencies are important in domain adaptation but have not yet been considered, such as semantic consistency and low-level appearance consistency. We plan to design effective and efficient losses at the feature, pixel, and semantic levels, respectively. Further, we will perform adaptation at all of these levels jointly by enforcing cycle-consistency and leveraging different task losses.
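One way such a joint objective could be assembled is sketched below in PyTorch. The generators `G_s2r`/`G_r2s`, feature extractor `feat`, semantic head `task_head`, and the loss weights are all illustrative placeholders, not the project's actual networks; the point is only how a pixel-level cycle-consistency term, a feature-level alignment term, and a semantic task loss can be summed and optimized jointly:

```python
import torch
import torch.nn.functional as F

def multi_level_loss(x_syn, G_s2r, G_r2s, feat, task_head, labels,
                     w_cyc=10.0, w_feat=1.0, w_task=1.0):
    """Joint pixel-, feature-, and semantic-level adaptation objective.

    G_s2r / G_r2s : synthetic->real and real->synthetic generators
    feat          : shared feature extractor
    task_head     : semantic head (e.g., per-point classification)
    All module names are hypothetical placeholders.
    """
    x_fake_real = G_s2r(x_syn)        # translate to "real" style
    x_cycled = G_r2s(x_fake_real)     # translate back

    # Pixel level: cycle-consistency keeps the content intact.
    l_cyc = F.l1_loss(x_cycled, x_syn)

    # Feature level: translated data should retain source features.
    l_feat = F.mse_loss(feat(x_fake_real), feat(x_syn))

    # Semantic level: synthetic labels must survive translation.
    l_task = F.cross_entropy(task_head(feat(x_fake_real)), labels)

    return w_cyc * l_cyc + w_feat * l_feat + w_task * l_task
```

In practice this term would be added to the adversarial losses of both generators; the cycle term is what prevents the translation from changing the scene semantics while the appearance is adapted.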
Data fusion strategies for multiple kinds of synthetic data: Synthetic data generated by simulators (e.g., CARLA and GTA-V) may come in different modalities, such as LiDAR and radar. Similar to feature fusion in image classification and retrieval, we believe that jointly combining and fusing different modalities to exploit their complementarity may boost domain adaptation performance. We plan to design and implement effective fusion strategies, such as decision-level fusion. We further plan to handle generalization of the adaptation to new modalities via incremental adaptation methods.
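As a minimal sketch of decision-level (late) fusion, assuming each modality branch (e.g., one network for LiDAR, one for radar) independently produces class logits, the per-modality predictions can be fused by a weighted average of their softmax scores; the weights here are hypothetical and would in practice be tuned or learned:

```python
import torch

def decision_level_fusion(logit_list, weights=None):
    """Late fusion: each modality branch emits its own class logits;
    fuse by a weighted average of the per-modality softmax scores.
    With no weights given, modalities are weighted equally."""
    probs = [torch.softmax(l, dim=-1) for l in logit_list]
    if weights is None:
        weights = [1.0 / len(probs)] * len(probs)
    fused = sum(w * p for w, p in zip(weights, probs))
    return fused  # fused class distribution per sample
```

Because fusion happens only at the decision stage, a new modality can be added by training its branch alone and extending `logit_list`, which is what makes this style of fusion compatible with the incremental adaptation setting described above.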
Kurt Keutzer: Domain adaptation, LiDAR simulation