Operationalized Active Learning

ABOUT THE PROJECT

At a glance

Deciding which data to acquire is a common challenge in machine learning, and especially in autonomous driving. All existing methods for autonomous navigation depend on comprehensive datasets, often acquired at great expense. Moreover, some of the most pressing safety demands require acquiring data on extremely rare events. We propose adaptive data collection, also known as active learning, as a way to mitigate these resource demands, leveraging already-collected data to guide future measurements in a closed loop.
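As a concrete (if simplistic) illustration of the closed loop, the sketch below runs pool-based active learning with margin-based uncertainty sampling as the acquisition rule. The synthetic data, the logistic-regression learner, and the acquisition heuristic are illustrative assumptions, not the algorithms this project proposes.

```python
# Illustrative sketch only: pool-based active learning with margin-based
# uncertainty sampling. Data, model, and acquisition rule are assumptions
# for exposition, not this project's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy unlabeled pool with a hidden labeling rule standing in for an expensive oracle.
X_pool = rng.normal(size=(1000, 2))
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)

labeled = [int(np.argmax(y_pool)), int(np.argmin(y_pool))]  # one seed label per class
unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

model = LogisticRegression()
for _ in range(50):                                  # fixed labeling budget
    model.fit(X_pool[labeled], y_pool[labeled])      # learn from data collected so far
    probs = model.predict_proba(X_pool[unlabeled])   # model uncertainty on the pool
    margins = np.abs(probs[:, 1] - 0.5)              # small margin = most uncertain
    pick = unlabeled[int(np.argmin(margins))]        # choose the next measurement
    labeled.append(pick)                             # query the oracle for its label
    unlabeled.remove(pick)

print("pool accuracy:", model.score(X_pool, y_pool))
```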

Though active learning has a rich literature and much promise, the community has provided few practical methods, and it is exceptionally difficult to predict the behavior and verify the reliability of even the simplest heuristics. Indeed, there are not even reasonable heuristics for active regression and classification. What are the appropriate analysis tools for understanding the tradeoffs between the aggressiveness and reliability of adaptive learning algorithms? How can the risks and benefits be made transparent and exploitable by autonomous driving systems?

The goal of this project is to design useful algorithms for structured active learning problems. To do so, we will attempt to construct a new paradigm for analyzing adaptive sampling allocations. How many active measurements are necessary to accurately model a system? This sample-complexity question is currently addressed by only two families of techniques. On the one hand, Fano's inequality and packing arguments have proven invaluable for understanding the sample complexity that arises from hypothesis structure in non-adaptive sampling (e.g., empirical risk minimization), but they lead to coarse, worst- or average-case lower bounds for adaptive problems because they rely on constructions that are either artificially symmetric or highly pessimistic. On the other hand, change-of-measure techniques control the precise, problem-dependent sample complexity of adaptive problems, but they grossly underestimate the difficulty of distinguishing among large and richly structured hypothesis classes, the very difficulty that Fano's inequality captures for non-adaptive sampling.
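For orientation, the two families of bounds can be stated informally as follows; these are standard forms from the information-theory and bandit literature (in our own notation), not results of this project:

\[
\text{(Fano)}\qquad \Pr\big[\hat V \neq V\big] \;\ge\; 1 - \frac{I(V; X_1,\dots,X_n) + \log 2}{\log M},
\]
\[
\text{(change of measure)}\qquad \sum_{a} \mathbb{E}_{\nu}\big[N_a\big]\,\mathrm{KL}\big(\nu_a,\nu'_a\big) \;\ge\; \log\frac{1}{2.4\,\delta},
\]

where \(V\) is a hypothesis drawn uniformly from \(M\) well-separated alternatives and \(\hat V\) is its estimate from \(n\) non-adaptive samples \(X_1,\dots,X_n\); in the second bound, \(N_a\) is the number of times an adaptive algorithm samples action \(a\) before stopping, \(\nu\) and \(\nu'\) are two problem instances with different correct answers, and \(\delta\) is the allowed failure probability.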

We propose an approach that differs from the existing methods by considering not how much information can be gathered by any fixed sampling strategy, but how difficult it is to distinguish a good sampling strategy from a bad one given the limited amount of data collected up to any given time.

PRINCIPAL INVESTIGATORS: Benjamin Recht
RESEARCHERS: Kevin Jamieson, Max Simchowitz
THEMES: Active Learning, Bandits, Experiment Design, Theoretical Foundations