Fundamental Tradeoffs in Learning and Control
ABOUT THE PROJECT
At a glance
Closed control loops are pervasive in modern automotive systems, but they often rely on intricate, explicit models of the underlying physical phenomena to succeed. Here we propose using recent advances in machine learning to demonstrate controllability even when the underlying physical processes are poorly understood or poorly modeled.
Classical control theory and machine learning have similar goals: take in new data about the world, make a prediction, and use that prediction to positively impact the world. However, the approaches they use are frequently at odds. Control theory designs complex actions from well-specified models, while machine learning makes intricate, model-free predictions from data alone. For autonomous driving systems, some sort of hybrid is essential to fuse and process the vast amounts of sensor data recorded by vehicles into timely, agile, and safe decisions. While substantial progress has been made in designing data-driven control systems with deep neural networks, critical fundamental questions remain poorly understood.
We will fuse control design with model identification. Using the analogy to PID control, we aim to provide concrete bounds on the number of samples required to achieve robust controllability. Rather than identifying the model exactly, we will build surrogate constraints that suffice to guarantee that the model can be stabilized by a given control policy.
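To make this concrete, the sketch below illustrates one way data-driven control design can sidestep exact model identification for a linear system: fit a coarse model by least squares from random rollouts, design a nominal controller on the estimate, and then check a stability certificate (the spectral radius of the closed loop) rather than demanding a perfect model. This is a minimal illustration under assumed dynamics, dimensions, and constants, not the project's actual methodology.

```python
import numpy as np

# Illustrative "true" plant, used only to simulate data; 2 states, 1 input.
A_true = np.array([[1.01, 0.1], [0.0, 1.01]])
B_true = np.array([[0.0], [1.0]])
noise_std = 0.1
rng = np.random.default_rng(0)

def collect_rollouts(num_rollouts=200, horizon=10):
    """Excite the plant with random inputs and record (x_t, u_t, x_{t+1}) triples."""
    X, U, Xnext = [], [], []
    for _ in range(num_rollouts):
        x = np.zeros(2)
        for _ in range(horizon):
            u = rng.normal(size=1)
            x_next = A_true @ x + B_true @ u + noise_std * rng.normal(size=2)
            X.append(x); U.append(u); Xnext.append(x_next)
            x = x_next
    return np.array(X), np.array(U), np.array(Xnext)

X, U, Xnext = collect_rollouts()

# Coarse model identification: least squares for [A B] from the recorded triples.
Z = np.hstack([X, U])
theta, *_ = np.linalg.lstsq(Z, Xnext, rcond=None)
A_hat, B_hat = theta[:2].T, theta[2:].T

# Nominal LQR design on the identified model via Riccati value iteration.
Q, R = np.eye(2), np.eye(1)
P = Q
for _ in range(500):
    K = np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)
    P = Q + A_hat.T @ P @ (A_hat - B_hat @ K)

# Stability check on the true plant: the controller stabilizes it if the
# spectral radius of the true closed loop is below 1.
rho = max(abs(np.linalg.eigvals(A_true - B_true @ K)))
print(f"model error ||A_hat - A_true|| = {np.linalg.norm(A_hat - A_true):.3f}")
print(f"closed-loop spectral radius on true plant = {rho:.3f} (stable if < 1)")
```

The point of the sketch is that the controller only needs the estimation error to be small enough for the stability certificate to hold; how much data that requires is exactly the kind of sample bound the project seeks.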
PRINCIPAL INVESTIGATORS | RESEARCHERS | THEMES
---|---|---
Benjamin Recht | Ross Boczar, Sarah Dean, Horia Mania, Stephen Tu | Adaptive Control, Reinforcement Learning, Machine Learning, Theoretical Foundations