The World's First Perception and Reasoning Benchmark for Scene Structure in Autonomous Driving.
English | 中文
![](imgs/poster.gif)
*: The dataset, OpenLane-V2 at Shanghai AI Lab, is named RoadGenome at Huawei and publicly OpenLane-Huawei.
- Highlight
- Task
- News
- Data
- Devkit
- Get Started
- Train a Model
- Benchmark and Leaderboard
- Citation
- License
Previous datasets annotate lanes on images in the perspective view. Such a type of 2D annotation is insufficient to fulfill real-world requirements. Following the OpenLane dataset, we annotate lanes in 3D space to reflect their properties in the real world.
Preventing collisions is essential, but so is facilitating efficiency. Vehicles follow predefined traffic rules to discipline themselves and cooperate with others, ensuring a safe and efficient traffic system. Traffic elements on the roads, such as traffic lights and road signs, provide practical and real-time information.
A traffic element is only valid for its corresponding lanes. Following the wrong signals would be catastrophic. Also, lanes have their predecessors and successors to build the map. Autonomous vehicles are required to reason about the topology relationships to drive in the right way. In this dataset, we hope to shed light on the task of scene structure perception and reasoning.
Experience gained on a sunny day does not apply to dancing snowflakes. For machine learning, data is the must-have fuel. We provide annotations on data collected in various cities, from Austin to Singapore and from Boston to Miami. The diversity of the data enables models to generalize across different weather conditions and landscapes.
The primary task of the dataset is scene structure perception and reasoning, which requires the model to recognize the dynamic drivable states of lanes in the surrounding environment. The challenge of this dataset includes not only detecting lane centerlines and traffic elements but also recognizing the attributes of traffic elements and the topology relationships among detected objects. We define the OpenLane-V2 Score (OLS), which is the average of various metrics covering different aspects of the primary task:
$$ \text{OLS} = \frac{1}{4} \bigg[ \text{DET}_{l} + \text{DET}_{t} + f(\text{TOP}_{ll}) + f(\text{TOP}_{lt}) \bigg]. $$
The metrics of different subtasks are described below.
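For a concrete reading of the formula, the sketch below computes OLS from the four sub-metrics. The scale function f applied to the topology scores is assumed here to be the square root; please check the devkit for the exact definition.

```python
import math

def ols(det_l, det_t, top_ll, top_lt, f=math.sqrt):
    """OpenLane-V2 Score: the average of the two detection metrics and the
    scaled topology metrics. All inputs are assumed to lie in [0, 1], and the
    scale function f (square root here) is an assumption."""
    return (det_l + det_t + f(top_ll) + f(top_lt)) / 4

# Example with made-up scores:
print(f"OLS = {ols(det_l=0.30, det_t=0.45, top_ll=0.04, top_lt=0.09):.4f}")
```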
The OpenLane dataset, which is the first real-world and, to date, the largest-scale 3D lane dataset, provides lane line annotations in 3D space.
Similarly, we annotate 3D lane centerlines and include the F-Score for evaluating predicted results of undirected lane centerlines.
Furthermore, we define the subtask of 3D lane detection as detecting directed 3D lane centerlines from the given multi-view images covering the whole horizontal FOV.
The instance-level evaluation metric of average precision, denoted as $\text{DET}_{l}$ in the OLS above, is adopted for this subtask.
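For intuition only, here is a minimal sketch of such an instance-level average precision for 3D centerlines. It uses a simple Chamfer-style distance and greedy matching as stand-ins for the matching criterion of the official evaluation; the distance function and thresholds below are illustrative assumptions, not the devkit's definition.

```python
import numpy as np

def curve_distance(pred, gt):
    # Symmetric Chamfer distance between two polylines given as (N, 3) and
    # (M, 3) arrays; a stand-in for the curve distance of the official metric.
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def average_precision(preds, scores, gts, threshold):
    # Greedily match predictions (highest confidence first) to the nearest
    # unmatched ground-truth centerline within the distance threshold.
    order = np.argsort(-np.asarray(scores))
    matched, hits = set(), []
    for i in order:
        best_j, best_d = -1, float("inf")
        for j, gt in enumerate(gts):
            if j in matched:
                continue
            d = curve_distance(preds[i], gt)
            if d < best_d:
                best_j, best_d = j, d
        if best_j >= 0 and best_d < threshold:
            matched.add(best_j)
            hits.append(1)
        else:
            hits.append(0)
    if not hits:
        return 0.0
    tp = np.cumsum(hits)
    recall = tp / max(len(gts), 1)
    precision = tp / np.arange(1, len(tp) + 1)
    # Area under the precision-recall curve, kept simple for illustration.
    return float(np.trapz(precision, recall))

def lane_detection_score(preds, scores, gts, thresholds=(1.0, 2.0, 3.0)):
    # Average the AP over several matching thresholds (values are illustrative).
    return sum(average_precision(preds, scores, gts, t) for t in thresholds) / len(thresholds)
```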
Traffic elements and their attributes provide crucial information for autonomous vehicles.
The attribute represents the semantic meaning of a traffic element, such as the red color of a traffic light.
In this subtask, given the image in the front view, the locations of traffic elements (traffic lights and road signs) and their attributes need to be perceived simultaneously.
Compared to typical 2D detection datasets, the challenge is that traffic elements are tiny relative to the large scale of outdoor environments.
Similar to the typical 2D detection task, average precision is adopted as the metric; taking attributes into account, it is denoted as $\text{DET}_{t}$ in the OLS above.
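As a rough sketch of how attributes can enter the metric, predictions and ground truths may be grouped per attribute and a standard 2D detection AP averaged over the groups; the grouping below is an assumption for illustration, not the official definition.

```python
from collections import defaultdict

def attribute_averaged_ap(preds, gts, ap_fn):
    """Group 2D detections by attribute and average a user-supplied AP
    function `ap_fn(preds, gts)` over the attributes present in the ground truth.
    `preds` and `gts` are dicts carrying at least an "attribute" field."""
    preds_by_attr, gts_by_attr = defaultdict(list), defaultdict(list)
    for p in preds:
        preds_by_attr[p["attribute"]].append(p)
    for g in gts:
        gts_by_attr[g["attribute"]].append(g)
    attributes = sorted(gts_by_attr)
    if not attributes:
        return 0.0
    return sum(ap_fn(preds_by_attr[a], gts_by_attr[a]) for a in attributes) / len(attributes)
```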
We first define the task of recognizing topology relationships in the field of autonomous driving.
Given multi-view images, the model learns to recognize the topology relationships among lane centerlines and between lane centerlines and traffic elements.
The most similar task is link prediction on graphs, in which the vertices are given and only the edges are predicted by models.
In our case, both vertices and edges are unknown for the model.
Thus, lane centerlines and traffic elements need to be detected first, and then the topology relationships are built.
Adapted from the task of link prediction, the metrics $\text{TOP}_{ll}$ and $\text{TOP}_{lt}$ in the OLS above evaluate the predicted topology among lane centerlines and between lane centerlines and traffic elements, respectively.
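To make the difference from plain link prediction concrete, the simplified stand-in below first projects predicted edges onto ground-truth vertices through the detection matching and then scores the edges. The official $\text{TOP}$ metrics are defined differently in the devkit, so treat this only as a sketch of the structure of the problem; all names and the threshold are assumptions.

```python
import numpy as np

def topology_f1(pred_adj, gt_adj, pred_to_gt, threshold=0.5):
    """Simplified stand-in for topology evaluation.

    pred_adj:   (P, P) predicted connectivity confidences between detections.
    gt_adj:     (G, G) binary ground-truth connectivity.
    pred_to_gt: dict mapping a prediction index to its matched ground-truth
                index (output of the detection matching); unmatched predictions
                are simply absent, so their edges are dropped.
    """
    pred_adj = np.asarray(pred_adj, dtype=float)
    gt_adj = np.asarray(gt_adj, dtype=bool)
    projected = np.zeros_like(gt_adj)
    # Project predicted edges onto ground-truth vertices via the matching.
    for i, gi in pred_to_gt.items():
        for j, gj in pred_to_gt.items():
            if pred_adj[i, j] >= threshold:
                projected[gi, gj] = True
    tp = int(np.logical_and(projected, gt_adj).sum())
    fp = int(np.logical_and(projected, ~gt_adj).sum())
    fn = int(np.logical_and(~projected, gt_adj).sum())
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    return 2 * precision * recall / max(precision + recall, 1e-9)
```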
- [2023/02]
  - Dataset `v1.0`: Data of `subset_A` released.
  - Baseline model released.
- [2023/01]
  - Dataset `v0.1`: Initial OpenLane-V2 dataset sample released.
  - Devkit `v0.1.0`: Initial OpenLane-V2 devkit released.
The OpenLane-V2 dataset is a large-scale dataset for scene structure perception and reasoning in the field of autonomous driving. Following OpenLane, the first 3D lane dataset, we provide lane annotations in 3D space. The difference is that instead of lane lines, we annotate lane centerlines, which can serve as trajectories for autonomous vehicles. Besides, we provide annotations on traffic elements (traffic lights and road signs) and their attributes, as well as the topology relationships among lane centerlines and between lane centerlines and traffic elements.
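To picture what these annotations contain, one frame can be sketched roughly as below; the field names and layout are an assumed illustration, not the exact data format, which is specified in the data documentation.

```python
# Assumed illustration only: approximate shape of one frame's annotations.
frame_annotation = {
    "lane_centerline": [
        # directed 3D centerlines, each an ordered list of (x, y, z) points
        {"id": 0, "points": [[5.0, -1.8, 0.0], [15.0, -1.7, 0.1], [25.0, -1.6, 0.1]]},
        {"id": 1, "points": [[25.0, -1.6, 0.1], [35.0, -1.5, 0.2]]},
    ],
    "traffic_element": [
        # 2D boxes on the front-view image with a category and an attribute
        {"id": 0, "category": "traffic_light", "attribute": "red",
         "points": [[980.0, 320.0], [1010.0, 390.0]]},
    ],
    # connectivity given as adjacency matrices over the instances above:
    # topology_lclc[i][j] = 1 if centerline i precedes centerline j
    "topology_lclc": [[0, 1],
                      [0, 0]],
    # topology_lcte[i][k] = 1 if centerline i corresponds to traffic element k
    "topology_lcte": [[1],
                      [0]],
}
```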
The dataset is divided into two subsets.
The `subset_A` serves as the primary subset and is utilized for the coming challenges and leaderboard, in which no external data, including the other subset, is allowed. The `subset_B` can be used to test the generalization ability of the model.
For more details, please refer to the corresponding pages: use of data, notes of annotation, and dataset statistics.
Download now to discover our dataset!
We provide a devkit for easy access to the OpenLane-V2 dataset.
After installing the package, operations on the dataset, such as loading images, loading meta data, and evaluating results, can be performed through the API of `openlanev2`.
For more details on the API, please refer to devkit.
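As orientation only, a typical session might look like the sketch below. The import paths, argument names, and call signatures are assumptions and may differ between devkit versions; rely on the devkit documentation and the tutorial for the authoritative API.

```python
# Hypothetical usage sketch: names and signatures are assumptions, not the
# confirmed openlanev2 API; consult the devkit documentation for details.
from openlanev2.dataset import Collection      # assumed import path
from openlanev2.evaluation import evaluate     # assumed import path

# Load a collection of annotated frames from the extracted data.
collection = Collection(
    data_root='data/OpenLane-V2',
    meta_root='data/OpenLane-V2',
    collection='data_dict_sample_train',       # assumed collection name
)

# Evaluate a prediction file dumped by a model against the ground truth.
metrics = evaluate(ground_truth='data/OpenLane-V2/data_dict_sample_val.pkl',
                   predictions='results.pkl')
print(metrics)                                  # expected to include OLS and sub-metrics
```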
Please follow the steps below to get familiar with the OpenLane-V2 dataset.
- Run the following commands to install the environment for setting up the dataset:

  ```sh
  git clone https://github.com/OpenDriveLab/OpenLane-V2.git
  cd OpenLane-V2
  conda create -n openlanev2 python=3.8 -y
  conda activate openlanev2
  pip install -r requirements.txt
  python setup.py develop
  ```
- Use the provided links to download the data manually. Then put the files into the `data/OpenLane-V2/` folder and unzip them. The resulting folder hierarchy is described in the data documentation. Or use the following commands to download example data for a quick glance at the dataset:

  ```sh
  cd data/OpenLane-V2
  wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1Ni-L6u1MGKJRAfUXm39PdBIxdk_ntdc6' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1Ni-L6u1MGKJRAfUXm39PdBIxdk_ntdc6" -O OpenLane-V2_sample.tar
  md5sum -c openlanev2.md5
  tar -xvf *.tar
  cd ../..
  ```
- Run the tutorial on a Jupyter notebook to get familiar with the dataset and devkit.
Plug-ins for prevailing deep learning frameworks are provided so that you can start training models on the OpenLane-V2 dataset. We appreciate your valuable feedback and contributions to plug-ins for different frameworks.
The plug-in for MMDetection3D is built on top of mmdet3d v1.0.0rc6 and tested under:
- Python 3.8.15
- PyTorch 1.9.1
- CUDA 11.1
- GCC 5.4.0
- mmcv-full==1.5.2
- mmdet==2.26.0
- mmsegmentation==0.29.1
Please follow the instructions to install mmdet3d.
Assuming OpenLane-V2 is installed under `OpenLane-V2/` and mmdet3d is built under `mmdetection3d/`, create a soft link to the plug-in file:

```
mmdetection3d
└── projects
    ├── example_project
    └── openlanev2 -> OpenLane-V2/plugin/mmdet3d
```
Then you can train or evaluate a model using the config `mmdetection3d/projects/openlanev2/configs/baseline.py`, replacing the config path in the usual mmdet3d training and testing commands accordingly.
Options can be passed to enable supported functions during evaluation, such as `--eval-options dump=True dump_dir=/PATH/TO/DUMP` to save a pickle file for submission and `--eval-options visualization=True visualization_dir=/PATH/TO/VIS` for visualization.
We will provide an initial benchmark on the OpenLane-V2 dataset, and you are welcome to add your work here! Please stay tuned for the release of the benchmark.
| Method | OLS (main metric) | DET_l | DET_t | TOP_ll | TOP_lt | F-Score* |
|---|---|---|---|---|---|---|
| Baseline | 0.29 | 0.08 | 0.31 | 0.00 | 0.01 | 8.56 |
* The F-Score for lane detection is not taken into consideration in the challenge or the leaderboard.
Please use the following citation when referencing OpenLane-V2:
@misc{ openlanev2_dataset,
author = {{OpenLane-V2 Dataset Contributors}},
title = {{OpenLane-V2: The World's First Perception and Reasoning Benchmark for Scene Structure in Autonomous Driving}},
url = {https://github.com/OpenDriveLab/OpenLane-V2},
license = {Apache-2.0},
year = {2023}
}
Our dataset is built on top of the nuScenes and Argoverse 2 datasets. Please also cite:
@article{ nuscenes2019,
author = {Holger Caesar and Varun Bankiti and Alex H. Lang and Sourabh Vora and Venice Erin Liong and Qiang Xu and Anush Krishnan and Yu Pan and Giancarlo Baldan and Oscar Beijbom},
title = {nuScenes: A multimodal dataset for autonomous driving},
journal = {arXiv preprint arXiv:1903.11027},
year = {2019}
}
@inproceedings{ Argoverse2,
author = {Benjamin Wilson and William Qi and Tanmay Agarwal and John Lambert and Jagjeet Singh and Siddhesh Khandelwal and Bowen Pan and Ratnesh Kumar and Andrew Hartnett and Jhony Kaesemodel Pontes and Deva Ramanan and Peter Carr and James Hays},
title = {Argoverse 2: Next Generation Datasets for Self-driving Perception and Forecasting},
booktitle = {Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks (NeurIPS Datasets and Benchmarks 2021)},
year = {2021}
}
Before using the OpenLane-V2 dataset, you should register on the website and agree to the terms of use of the nuScenes and Argoverse 2 datasets. All code within this repository is under Apache License 2.0.