Talk2BEV: Language-Enhanced Bird's Eye View (BEV) Maps

ICRA 2024

* Equal Contribution \( ^\dagger \) Equal Advising
We build Language-enhanced Bird's-Eye View (BEV) maps using (a) BEV representations constructed from vehicle sensors (multi-view images, lidar), and (b) aligned vision-language features for each object, which can be directly used as context within large vision-language models (LVLMs) to query and talk to the objects in the scene. These maps embed knowledge about object semantics, material properties, affordances, and spatial concepts, and can be queried for visual reasoning, spatial understanding, and making decisions about potential future scenarios, all critical for autonomous driving applications.

Abstract

We introduce Talk2BEV, a large vision-language model (LVLM) interface for bird’s-eye view (BEV) maps commonly used in autonomous driving.

While existing perception systems for autonomous driving scenarios have largely focused on a pre-defined (closed) set of object categories and driving scenarios, Talk2BEV eliminates the need for BEV-specific training, relying instead on performant pre-trained LVLMs. This enables a single system to cater to a variety of autonomous driving tasks encompassing visual and spatial reasoning, predicting the intents of traffic actors, and decision-making based on visual cues.

We extensively evaluate Talk2BEV on a large number of scene understanding tasks that rely both on the ability to interpret free-form natural language queries and on grounding these queries to the visual context embedded in the language-enhanced BEV map. To enable further research in LVLMs for autonomous driving scenarios, we develop and release Talk2BEV-Bench, a benchmark encompassing 1000 human-annotated BEV scenarios, with more than 20,000 questions and ground-truth responses from the NuScenes dataset.

Approach

  1. We input the perspective images \( \mathcal{I} \) and Lidar data \( \mathcal{X} \).
  2. Using a BEV Network, we generate Bird's Eye View (BEV) predictions \( \mathcal{O} \)
  3. For each object \( o_i \in \mathcal{O} \) in the BEV prediction, we perform BEV-image projection and extract its crop proposal \( r_i \) via the LiDAR-camera re-projection pipeline, then obtain a caption for the crop using Large Vision-Language Models (LVLMs). Each object's entry in the map \( \mathbf{L} \) stores its BEV information, including geometric cues such as area and centroid, and object descriptions such as crop captions.
  4. We construct the Language-Enhanced map \( \mathbf{L}(\mathcal{O}) \) by augmenting the generated BEV with aligned image-language features for each object from large vision-language models (LVLMs). These features can directly be used as context to LVLMs for answering object and scene-specific queries. The comprehensive Language Enhanced Map representation encodes objects in the scene along with their semantic descriptions and geometric cues from BEV.
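The construction of the language-enhanced map described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the field names, the `BEVObject` container, and the dictionary shapes of the inputs are assumptions; the paper only specifies that each object carries geometric cues (area, centroid) and an LVLM-generated crop caption.

```python
from dataclasses import dataclass

@dataclass
class BEVObject:
    """One entry in the language-enhanced map L(O).
    Field names are illustrative: geometric cues (area, centroid)
    plus an LVLM-generated caption of the object's image crop."""
    object_id: int
    centroid: tuple   # (x, y) in BEV coordinates
    area: float       # footprint area in the BEV grid
    caption: str      # crop description produced by an LVLM

def build_language_enhanced_map(bev_objects, crop_captions):
    """Pair each BEV detection with its LVLM caption.

    bev_objects: list of dicts with 'id', 'centroid', 'area'
    crop_captions: dict mapping object id -> caption string
    (both input shapes are assumed, not the authors' exact API).
    """
    lmap = []
    for obj in bev_objects:
        lmap.append(BEVObject(
            object_id=obj["id"],
            centroid=obj["centroid"],
            area=obj["area"],
            caption=crop_captions.get(obj["id"], ""),
        ))
    return lmap
```

The resulting list of objects, serialized as text, can then be passed as context to an LVLM to answer object- and scene-specific queries.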

Spatial Operators

For queries that require spatial reasoning, Talk2BEV maps the natural-language query to spatial operators evaluated over the language-enhanced map. For example, the response to the query "Find distance between ..." is the operator find_dist(., .), and a query such as "Nearest 2 vehicles in front ..." resolves to a nearest-neighbour lookup restricted to objects ahead of the ego vehicle.
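The two example queries above can be sketched as simple functions over BEV centroids. Only the name find_dist appears on this page; the concrete implementation, the helper `k_nearest_in_front`, and the convention that "in front" means a positive longitudinal offset are assumptions for illustration.

```python
import math

def find_dist(obj_a, obj_b):
    """Euclidean distance between two objects' BEV centroids.
    Mirrors the find_dist(., .) operator named on the page;
    the body here is an assumed sketch."""
    (xa, ya), (xb, yb) = obj_a["centroid"], obj_b["centroid"]
    return math.hypot(xb - xa, yb - ya)

def k_nearest_in_front(ego, objects, k=2):
    """Return the k objects nearest to the ego vehicle among those
    ahead of it (positive y offset, by our assumed convention)."""
    in_front = [o for o in objects
                if o["centroid"][1] > ego["centroid"][1]]
    return sorted(in_front, key=lambda o: find_dist(ego, o))[:k]
```

Composing such operators lets the LVLM answer spatial queries with exact geometry from the BEV rather than estimating distances from text alone.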

Related Links

Several previous works bind language and vision for autonomous driving, such as Talk2Car and ReferKITTI.

Concurrent to our work is NuScenesQA, which attempts VQA in autonomous driving scenarios on top of BEV networks.

Works like NuPrompt attempt language-image grounding with large language models.

BibTeX

@inproceedings{talk2bev,
      title = {Talk2BEV: Language-enhanced Bird’s-eye View Maps for Autonomous Driving},
      author = {Dewangan, Vikrant and Choudhary, Tushar and Chandhok, Shivam and Priyadarshan, Shubham and Jain, Anushka and Singh, Arun and Srivastava, Siddharth and Jatavallabhula, {Krishna Murthy} and Krishna, Madhava},
      year = {2024},
      booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
}