Automatic Semantics Extraction and Representation for High-Definition Map Construction and Scene Understanding
ABOUT THIS PROJECT
At a glance
Principal investigators: Masayoshi Tomizuka, Wei Zhan
Keywords: semantics, HD map, scene understanding
Autonomous vehicles need to understand scenes through semantics such as drivable areas, lane centers/boundaries/connections, virtual lanes without markings, and traffic rules implied by the corresponding signs/lights. Some of these semantics can be incorporated into a high-definition (HD) map, mainly by humans. However, constructing such HD maps with full semantics at large scale by hand is extremely labor-intensive. Moreover, autonomous vehicles need to extract the semantics online when no HD map is available. Automatic semantics extraction is therefore crucial to enabling full autonomy at scale.
Furthermore, traffic rules and drivable areas may be represented only implicitly, without any traffic signs or lane markings. Also, the preferred reference paths of human drivers may not strictly follow the marked lane centers; such paths can instead be obtained from the motion data of other road users. We therefore need novel representations of the implicit semantics extracted either offline (for HD maps) or online (for scene understanding). In this project, we aim to automatically extract semantics from onboard sensors such as LiDARs and cameras, and to propose novel representations of implicit semantics obtained via human priors and motion data.
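As a minimal illustration of the idea of recovering a preferred reference path from the motion data of other road users, the hypothetical sketch below resamples each recorded trajectory to a fixed number of points by arc length and averages them point-wise. The function names and the simple averaging scheme are illustrative assumptions, not the project's actual method.

```python
# Hypothetical sketch: estimate a preferred reference path from recorded
# trajectories of other road users. Each trajectory is assumed to be a
# list of (x, y) positions sampled along the road.

def resample(traj, n=20):
    """Resample a polyline to n points evenly spaced by arc length."""
    # Cumulative arc length at each vertex of the polyline.
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(traj, traj[1:]):
        dists.append(dists[-1] + ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5)
    total = dists[-1]
    out = []
    j = 0
    for i in range(n):
        target = total * i / (n - 1)
        # Advance to the segment containing the target arc length.
        while j < len(dists) - 2 and dists[j + 1] < target:
            j += 1
        seg = dists[j + 1] - dists[j]
        t = 0.0 if seg == 0 else (target - dists[j]) / seg
        (x0, y0), (x1, y1) = traj[j], traj[j + 1]
        # Linear interpolation within the segment.
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def reference_path(trajectories, n=20):
    """Average resampled trajectories point-wise into one reference path."""
    resampled = [resample(t, n) for t in trajectories]
    return [
        (sum(p[i][0] for p in resampled) / len(resampled),
         sum(p[i][1] for p in resampled) / len(resampled))
        for i in range(n)
    ]
```

In practice, one would also need to cluster trajectories by maneuver (e.g., through vs. turning traffic) before averaging, since averaging across different maneuvers would produce a meaningless path.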