Self-supervised Representation Learning for Autonomous Driving


At a glance

In this project, we propose better ways of training models that force them to generalize from the start, rather than memorizing shortcuts in the data. The idea is to make the learning algorithm work harder at training time: self-supervision pushes it to discover the regularities in the data instead of letting it memorize the training set. This is akin to a student who tries to solve a math problem before checking the answer in the back of the textbook, rather than simply memorizing a list of problem/answer pairs.
In 2019, we propose to use self-supervision and cycle-consistency to build better models of temporal visual data, i.e., videos. The focus will be on learning visual correspondence from the cycle-consistency of time.
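To make the cycle-consistency-of-time idea concrete, here is a toy sketch in plain NumPy (the function names and the simple nearest-neighbor matcher are illustrative assumptions, not the project's actual method, which learns the features end-to-end): a patch is tracked forward through a clip by feature matching and then tracked backward, and tracking is cycle-consistent when it returns to the patch it started from. In training, the deviation of this round trip from the starting point can serve as a self-supervised loss.

```python
import numpy as np

def match(query, keys):
    """Return the index of the key feature with highest cosine
    similarity to the query feature (nearest-neighbor matching)."""
    keys_n = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    q_n = query / np.linalg.norm(query)
    return int(np.argmax(keys_n @ q_n))

def cycle_consistency_error(frames, start_idx):
    """Track patch `start_idx` of frames[0] forward through the clip
    by feature matching, then backward to frame 0. A return error of
    0 means the track is cycle-consistent. `frames` is a list of
    (num_patches, feature_dim) arrays, one per video frame."""
    idx = start_idx
    # forward pass: frame 0 -> 1 -> ... -> T-1
    for t in range(1, len(frames)):
        idx = match(frames[t - 1][idx], frames[t])
    # backward pass: frame T-1 -> ... -> 0
    for t in range(len(frames) - 2, -1, -1):
        idx = match(frames[t + 1][idx], frames[t])
    # distance between where we started and where the cycle ended
    return abs(idx - start_idx)
```

For example, with three frames whose patch features are distinct and persist over time (identity matrices as a stand-in for learned features), tracking any patch forward and back returns it to its starting index, so the error is 0. The learned-representation version replaces the hard `argmax` with a differentiable soft matcher so the error can be backpropagated into the feature extractor.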

Principal investigators: Alexei (Alyosha) Efros

Themes: unsupervised learning, self-supervision, correspondence, video