Fundamental Tradeoffs in Learning and Control


At a glance

Closed control loops are pervasive in modern automobile systems, but they often rely on intricate, explicit models of the underlying physical phenomena to be successful. Here we propose using recent advances from machine learning to achieve control even when the underlying physical processes are poorly understood or poorly modeled.

Classical control theory and machine learning have similar goals: take in new data about the world, make a prediction, and use that prediction to positively impact the world. However, their approaches are frequently at odds. Control theory designs complex actions from well-specified models, while machine learning makes intricate, model-free predictions from data alone. For autonomous driving systems, some sort of hybrid is essential in order to fuse and process the vast amounts of sensor data recorded by vehicles into timely, agile, and safe decisions. While substantial progress has been made in designing data-driven control systems with deep neural networks, critical fundamental questions remain poorly understood.

In this project, we propose methods that operate at a midpoint between the precise physical models of classical control and the model-free but performance-uncertain approaches taken by recent successes in reinforcement learning. We hypothesize that many physical systems can be identified only very coarsely and still be controlled to high accuracy. Indeed, most of the models used in robust control and learning today are arguably unnecessarily complicated: among control practitioners and engineers, it is a truism that many real-world systems can be adequately controlled with fairly simple strategies such as proportional-integral-derivative (PID) control.
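As a toy illustration of this truism, the sketch below runs a minimal discrete-time PID loop against a first-order plant. The gains, plant, and setpoint are illustrative assumptions for this example only, not results from the project; the point is that the controller regulates the system while never seeing its dynamics explicitly.

```python
# Minimal discrete-time PID controller (illustrative sketch; the plant
# and gains below are hypothetical, chosen only for demonstration).

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, error):
        # Accumulate the integral term and approximate the derivative
        # with a first difference of the tracking error.
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Drive an assumed first-order plant x' = -a*x + u toward a setpoint.
# The controller only observes the error, never the value of a.
dt, setpoint, x = 0.01, 1.0, 0.0
pid = PID(kp=5.0, ki=2.0, kd=0.1, dt=dt)
for _ in range(2000):
    u = pid.update(setpoint - x)
    x += dt * (-0.5 * x + u)  # true dynamics, unknown to the controller
```

After 20 simulated seconds the integral action has driven the state essentially to the setpoint, despite the controller having no model of the plant.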

We will attempt to fuse control design with model identification. Building on the PID analogy, we will provide concrete bounds on the number of samples required to achieve robust controllability. Rather than pursuing exact model identification, we will build surrogate constraints that suffice to guarantee that a model can be stabilized by a given control policy.
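One concrete instantiation of this idea, sketched below under assumed linear dynamics (the system, noise level, and candidate policy are all hypothetical placeholders, not the project's method), is to fit a coarse model by least squares from excited rollouts and then certify that a fixed policy stabilizes the estimated model:

```python
# Sketch: coarse least-squares identification of linear dynamics
# x_{k+1} = A x_k + B u_k from randomly excited rollouts, followed by a
# surrogate stability check: does the fixed policy u = K x stabilize the
# *estimated* model? All constants here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[1.01, 0.10],   # slightly unstable; unknown to the learner
                   [0.00, 0.98]])
B_true = np.array([[0.0], [1.0]])

# Collect (x, u, x_next) triples from short, randomly excited rollouts.
X, U, Xn = [], [], []
for _ in range(50):
    x = rng.normal(size=2)
    for _ in range(10):
        u = rng.normal(size=1)
        x_next = A_true @ x + B_true @ u + 0.01 * rng.normal(size=2)
        X.append(x); U.append(u); Xn.append(x_next)
        x = x_next

# Fit [A_hat, B_hat] jointly by least squares on regressors [x, u].
Z = np.hstack([np.array(X), np.array(U)])
theta, *_ = np.linalg.lstsq(Z, np.array(Xn), rcond=None)
A_hat, B_hat = theta.T[:, :2], theta.T[:, 2:]

# Candidate linear policy; stability of the estimated closed loop is the
# surrogate constraint (spectral radius below one).
K = np.array([[-0.5, -1.0]])
rho = max(abs(np.linalg.eigvals(A_hat + B_hat @ K)))
```

The check on `rho` plays the role of a surrogate constraint: it certifies stabilizability of the identified model without requiring that the model match the true dynamics exactly.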
Benjamin Recht
Ross Boczar
Sarah Dean
Horia Mania
Stephen Tu
Adaptive Control
Reinforcement Learning
Machine Learning
Theoretical Foundations