Fast Inverse Vehicle Dynamics Adaptation for Driving Policy Transfer via Meta Learning

ABOUT THE PROJECT

At a glance

Deep reinforcement learning (RL) can solve many complex continuous control problems and therefore holds great promise for autonomous driving. However, the sample efficiency of model-free RL is so low that training directly in real environments is impractical. Moreover, a policy trained in simulation often fails on real vehicles because of the "reality gap" between simulated experiments and real-world driving scenarios. The dynamics discrepancy between different vehicles likewise makes it challenging to rapidly transfer a policy from one vehicle to another. Our goal in this project is therefore to narrow both the reality gap and the inter-vehicle dynamics discrepancy, so that policies trained in simulators transfer well to real vehicles, and control laws valid for one vehicle can be rapidly adapted to other vehicles.
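
To make the fast-adaptation idea concrete, the sketch below illustrates the inner-loop (adaptation) step that meta-learning methods such as MAML use: a meta-trained inverse vehicle dynamics model is fine-tuned on a handful of transitions collected from a new target vehicle. The network architecture, state/action dimensions, and synthetic data are illustrative assumptions, not the project's actual implementation, and the meta-training outer loop is omitted.

    # Hypothetical sketch: fast adaptation of an inverse dynamics model
    # on a few transitions from a new vehicle (MAML-style inner loop).
    # Dimensions, model, and data below are illustrative assumptions.
    import torch
    import torch.nn as nn

    STATE_DIM, ACTION_DIM = 6, 2   # assumed vehicle state/action sizes

    class InverseDynamics(nn.Module):
        """Maps (current state, desired next state) -> control action."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(2 * STATE_DIM, 64), nn.Tanh(),
                nn.Linear(64, ACTION_DIM),
            )

        def forward(self, s, s_next):
            return self.net(torch.cat([s, s_next], dim=-1))

    def adapt(meta_model, transitions, lr=1e-2, steps=5):
        """Fine-tune a copy of the meta-trained model on a small batch of
        transitions gathered from the target vehicle."""
        adapted = InverseDynamics()
        adapted.load_state_dict(meta_model.state_dict())
        opt = torch.optim.SGD(adapted.parameters(), lr=lr)
        s, s_next, a = transitions
        for _ in range(steps):
            loss = nn.functional.mse_loss(adapted(s, s_next), a)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return adapted

    if __name__ == "__main__":
        meta_model = InverseDynamics()       # stands in for a meta-trained prior
        # A few synthetic transitions from the "target" vehicle (illustrative only).
        s, s_next = torch.randn(16, STATE_DIM), torch.randn(16, STATE_DIM)
        a = torch.randn(16, ACTION_DIM)
        fast_model = adapt(meta_model, (s, s_next, a))
        print(fast_model(s[:1], s_next[:1]))

In practice, the meta-training stage would optimize the initial parameters so that this few-step adaptation yields accurate inverse dynamics across a distribution of vehicles; the snippet only shows the adaptation step itself.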

PRINCIPAL INVESTIGATORS

Masayoshi Tomizuka

RESEARCHERS

Zhuo Xu and Chen Tang

THEMES

Policy transfer, meta-learning, Youla parameterization