AutoPylot: An Open Platform for Autonomous Vehicles

ABOUT THE PROJECT

At a glance

Figure 1: AutoPylot Architecture: The AutoPylot platform consists of a Common IO Abstraction Layer between simulated and real-world vehicles, a Car Platform that schedules and executes complex task graphs on heterogeneous parallel hardware, and a Cloud Platform that manages observation fusion across vehicles.

Problem Setting: Autonomous vehicles present a new set of systems challenges around how we deliver robust real-time predictions and compose multiple complex prediction pipelines. For example, the perception and control systems in an autonomous vehicle must process data from multiple high-bandwidth sensors and make real-time predictions about the environment. Often, several computer vision pipelines must run in parallel and at different rates (e.g., processing high-speed video alongside lower-frequency LiDAR). The outputs of the perception pipelines are then fused and processed in planning and control stages to compute optimal driving actions. Finally, these actions must be communicated to a range of control systems.
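To make this concrete, the sketch below shows two perception pipelines running at different rates whose freshest outputs are fused in a single planning loop. It is plain Python with illustrative stand-ins for the sensors and models, not a fragment of the proposed system:

```python
import queue
import random
import threading
import time

camera_out = queue.Queue()
lidar_out = queue.Queue()

def camera_pipeline(hz=30):
    """High-rate pipeline, e.g. frame-based object detection."""
    while True:
        frame = [random.random()] * 8            # stand-in camera frame
        camera_out.put({"stamp": time.time(), "detections": frame})
        time.sleep(1.0 / hz)

def lidar_pipeline(hz=10):
    """Lower-rate pipeline, e.g. LiDAR point-cloud segmentation."""
    while True:
        cloud = [random.random()] * 8            # stand-in point cloud
        lidar_out.put({"stamp": time.time(), "segments": cloud})
        time.sleep(1.0 / hz)

def planning_loop(steps=5):
    """Fuse the freshest output of each pipeline into one world view."""
    latest_lidar = None
    for _ in range(steps):
        detections = camera_out.get()            # block on the fast stream
        while not lidar_out.empty():             # drain to the newest scan
            latest_lidar = lidar_out.get()
        world = (detections, latest_lidar)
        print("fused view:", world[0]["stamp"], world[1] is not None)

for fn in (camera_pipeline, lidar_pipeline):
    threading.Thread(target=fn, daemon=True).start()
planning_loop()
```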

An autonomous vehicle platform must adapt to changing environments and collect data to enable future analysis and research. As new observations arrive, they must be integrated into the vehicle's internal state (e.g., obstacle grids) and published to other vehicles through the cloud (e.g., to support fleet driving). Finally, it is critical to monitor and log computation at each stage to identify failures and to enable future training and analysis.
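A toy illustration of this state-integration loop follows; the grid representation, update rule, and cloud-publish hook are all hypothetical placeholders rather than the proposed design:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("autopylot")

GRID_SIZE = 8
# Local occupancy grid: each cell holds an occupancy estimate in [0, 1].
grid = [[0.0] * GRID_SIZE for _ in range(GRID_SIZE)]

def integrate_observation(obs):
    """Fold a new obstacle observation into the local occupancy grid."""
    x, y, confidence = obs
    # Simple exponential update toward the newly observed confidence.
    grid[y][x] = 0.7 * grid[y][x] + 0.3 * confidence
    log.info("cell (%d, %d) -> %.2f", x, y, grid[y][x])  # per-stage logging
    return {"cell": (x, y), "occupancy": grid[y][x], "stamp": time.time()}

def publish_to_cloud(update):
    """Stand-in for publishing a state delta to a fleet-wide cloud store."""
    log.info("published %s", update)

for obs in [(1, 2, 0.9), (1, 2, 0.8), (4, 5, 0.3)]:
    publish_to_cloud(integrate_observation(obs))
```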

A single autonomous vehicle is a complex distributed system. There is a wide range of sensors with different sampling frequencies and bandwidth requirements. The compute infrastructure often consists of multiple networked heterogeneous processors with a range of SIMD and MIMD parallelism (e.g., a pair of NVIDIA Drive PX2s). Finally, unlike many distributed cloud platforms, autonomous vehicles have stringent power and latency requirements, and system failures can result in loss of life.

Related Work: Today there are no open-source real-time platforms that support research in autonomous vehicles spanning simulation to real-world driving. Perhaps the closest systems to the research proposed here are Baidu's Project Apollo [1] and NVIDIA DriveWorks [4]. The more mature, closed-source NVIDIA DriveWorks platform consists of a collection of hardware abstraction layers and composition tools written in C++ that are optimized to run on a patched Ubuntu kernel with real-time extensions. Project Apollo is a much less mature open-source platform based on real-time modifications to ROS [5] along with basic C++ hardware drivers and composition tools. While both of these systems can be used to control an autonomous vehicle, their low-level design and limited support for simulation have hindered their use in research settings. We plan to continue to follow and reuse parts of Project Apollo as it develops.

Proposed Research: We propose the design and implementation of an open-source platform (Figure 1) to support perception, planning, and control in autonomous vehicles, spanning simulation and on-vehicle driving. Our goal is to design a system that facilitates the composition of state-of-the-art perception pipelines with planning and control to drive both simulated and real-world vehicles. A key requirement in the design of this platform is to enable rapid prototyping of and experimentation with perception, planning, and control modules. The systems research effort will therefore focus on high-level Python API design as well as low-level parallel placement and acceleration of complex task graphs with latency constraints.
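To give a flavor of the kind of high-level Python API we have in mind, consider the toy operator graph below with per-stage rate and deadline annotations. Every class and method name here is hypothetical; the eventual API may look quite different:

```python
class Operator:
    """A node in the task graph: consumes messages, emits one output."""
    def __init__(self, fn, rate_hz=None, deadline_ms=None):
        self.fn = fn
        self.rate_hz = rate_hz          # desired execution rate
        self.deadline_ms = deadline_ms  # per-stage latency budget
        self.downstream = []

    def connect(self, other):
        """Wire this operator's output to a downstream operator."""
        self.downstream.append(other)
        return other

    def push(self, msg):
        """Run this stage and propagate the result through the graph."""
        out = self.fn(msg)
        for op in self.downstream:
            op.push(out)

# Compose a toy perception -> planning -> control chain.
detect = Operator(lambda frame: {"objects": frame}, rate_hz=30, deadline_ms=40)
plan = Operator(lambda world: {"trajectory": world["objects"]}, deadline_ms=20)
control = Operator(lambda traj: print("actuate:", traj), deadline_ms=10)

detect.connect(plan).connect(control)
detect.push([0.1, 0.2])   # drive one frame through the graph
```

Annotating each operator with a rate and a deadline is one way the runtime could place and schedule the graph across heterogeneous processors while enforcing end-to-end latency constraints.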

This work will build on the TensorFlow open-source deep learning framework as well as the new Ray [3] parallel Python execution framework being developed in the UC Berkeley RISE Lab. We will leverage the efficient low-level execution of static task graphs afforded by TensorFlow to deliver high-throughput predictions from a wide range of deep neural networks. Meanwhile, we will use Ray's support for dynamic task graph execution to scale the execution of complex pipelines while controlling tail latencies.
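As a minimal sketch of the dynamic-task-graph side, the example below uses Ray's public remote-task API to fan out two perception tasks and fuse their results as soon as both finish; the pipeline functions themselves are stand-ins, not components of the proposed system (requires `pip install ray`):

```python
import ray

ray.init()

@ray.remote
def run_detector(frame):
    # Stand-in for a TensorFlow model serving high-throughput predictions.
    return {"objects": len(frame)}

@ray.remote
def segment_lidar(cloud):
    return {"segments": len(cloud)}

@ray.remote
def fuse(detections, segments):
    # Scheduled dynamically: runs once both upstream tasks complete,
    # rather than being compiled into a static graph ahead of time.
    return {**detections, **segments}

frame, cloud = [0.0] * 640, [0.0] * 10_000
fused = fuse.remote(run_detector.remote(frame), segment_lidar.remote(cloud))
print(ray.get(fused))
```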

References
[1] Baidu. Project Apollo. http://apollo.auto, 2018.
[2] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun. CARLA: An open urban driving simulator. In Proceedings of the 1st Annual Conference on Robot Learning, volume 78 of Proceedings of Machine Learning Research, pages 1–16. PMLR, 13–15 Nov 2017.
[3] E. Liang, R. Liaw, R. Nishihara, P. Moritz, R. Fox, J. Gonzalez, K. Goldberg, and I. Stoica. Ray RLlib: A composable and scalable reinforcement learning library. CoRR, abs/1712.09381, 2017.
[4] NVIDIA. NVIDIA DriveWorks. https://developer.nvidia.com/driveworks, 2018.
[5] M. Quigley, K. Conley, B. P. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng. ROS: An open-source robot operating system. In ICRA Workshop on Open Source Software, 2009.

Principal Investigators: Joseph E. Gonzalez and Ion Stoica
Researchers: Fisher Yu
Themes: Platform, Systems, Software Infrastructure, Python