Learning to drive anywhere

Files
2309.12295v2.pdf (28.78 MB)
First author draft
Date
2023-11-01
Authors
Saligrama, Venkatesh
Ohn-Bar, Eshed
Zhu, Ruizhao
Huang, Peng
Version
First author draft
Citation
V. Saligrama, E. Ohn-Bar, R. Zhu, P. Huang. 2023. "Learning to Drive Anywhere." 7th Annual Conference on Robot Learning (CoRL 2023). https://openreview.net/group?id=robot-learning.org/CoRL/2023/Conference
Abstract
Human drivers can seamlessly adapt their driving decisions across geographical locations with diverse conditions and rules of the road, e.g., left- vs. right-hand traffic. In contrast, existing models for autonomous driving have thus far been deployed only within restricted operational domains, i.e., without accounting for varying driving behaviors across locations or for model scalability. In this work, we propose AnyD, a single geographically-aware conditional imitation learning (CIL) model that can efficiently learn from heterogeneous, globally distributed data with dynamic environmental, traffic, and social characteristics. Our key insight is to introduce a high-capacity, geo-location-based channel attention mechanism that effectively adapts to local nuances while also flexibly modeling similarities among regions in a data-driven manner. By optimizing a contrastive imitation objective, our proposed approach can efficiently scale across inherently imbalanced data distributions and location-dependent events. We demonstrate the benefits of our AnyD agent across multiple datasets, cities, and scalable deployment paradigms, i.e., centralized, semi-supervised, and distributed agent training. Specifically, AnyD outperforms CIL baselines by over 14% in open-loop evaluation and 30% in closed-loop testing on CARLA.
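The geo-location-based channel attention mentioned in the abstract can be sketched roughly as follows: a learned embedding for each region is projected to per-channel gates that rescale the perception features. This is a minimal illustrative sketch, not the paper's actual implementation; all names, dimensions, and the random initialization are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_REGIONS = 3   # hypothetical number of geographic regions
EMBED_DIM = 16    # hypothetical region-embedding size
CHANNELS = 8      # feature channels to gate

# Per-region embeddings and a projection to channel gates.
# Randomly initialized here; in the real model these would be
# trained end-to-end with the driving policy.
region_embed = rng.normal(size=(NUM_REGIONS, EMBED_DIM))
proj = rng.normal(size=(EMBED_DIM, CHANNELS))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def geo_channel_attention(features, region_id):
    """Rescale each feature channel by a gate derived from the region.

    features: array of shape (CHANNELS, H, W)
    region_id: integer index of the driving region
    """
    gates = sigmoid(region_embed[region_id] @ proj)  # shape (CHANNELS,)
    return features * gates[:, None, None]           # broadcast over H, W

feats = rng.normal(size=(CHANNELS, 4, 4))
out = geo_channel_attention(feats, region_id=1)
print(out.shape)  # (8, 4, 4)
```

Because regions share the same projection matrix, similar regions can learn similar embeddings and hence similar gating patterns, which loosely mirrors the abstract's point about modeling cross-region similarities in a data-driven manner.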