
UAV3D: A Large-scale 3D Perception Benchmark for Unmanned Aerial Vehicles

News

  • (2024/9) The paper was accepted at NeurIPS 2024.
  • (2024/9) UAV3D V1.0-mini is released.
  • (2024/6) Source code and pre-trained models are released.
  • (2024/6) UAV3D V1.0 is released.

Installation

Dataset Download

Please check Google Drive to download the dataset.

After downloading the data, organize it in the following structure:

├── data
│   ├── UAV3D
│   │   ├── maps
│   │   ├── samples
│   │   ├── sweeps
│   │   ├── v1.0-test
│   │   ├── v1.0-trainval
│   │   ├── uav3d_infos_train.pkl
│   │   ├── uav3d_infos_val.pkl
│   │   ├── uav3d_infos_test.pkl
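
The `uav3d_infos_*.pkl` files follow the nuScenes-style info format used by mmdetection3d-based codebases. A minimal sketch for inspecting one is shown below; the exact keys (`infos`, `metadata`, `token`, `cams`) are an assumption based on that convention and may differ in UAV3D, and the dummy record written here is purely illustrative:

```python
import pickle
from pathlib import Path

# Hypothetical minimal info record, mirroring the nuScenes-style
# layout commonly produced by mmdetection3d data-preparation scripts.
dummy_infos = {
    "infos": [
        {"token": "sample_0001", "cams": {}, "timestamp": 0},
    ],
    "metadata": {"version": "v1.0-trainval"},
}

# Write a stand-in file so this sketch is self-contained; in practice,
# point `path` at data/UAV3D/uav3d_infos_train.pkl instead.
path = Path("uav3d_infos_train.pkl")
path.write_bytes(pickle.dumps(dummy_infos))

# Load the info file and inspect its contents.
with path.open("rb") as f:
    data = pickle.load(f)

print(len(data["infos"]), data["metadata"]["version"])
```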

Training & Inference

Single UAV 3D object detection

Baseline PETR

cd perception/PETR

Training:

tools/dist_train.sh projects/configs/petr/petr_r50dcn_gridmask_p4.py 4 --work-dir work_dirs/petr_r50dcn_gridmask_p4/

Evaluation:

tools/dist_test.sh projects/configs/petr/petr_r50dcn_gridmask_p4.py work_dirs/petr_r50dcn_gridmask_p4/latest.pth 8 --eval bbox

Baseline BEVFusion

cd perception/bevfusion

Training:

torchpack dist-run -np 4 python tools/train.py configs/nuscenes/det/centerhead/lssfpn/camera/256x704/resnet/default.yaml --run-dir runs/resnet50

Evaluation:

torchpack dist-run -np 4 python tools/test.py configs/nuscenes/det/centerhead/lssfpn/camera/256x704/resnet/default.yaml runs/resnet50/epoch_24.pth --eval bbox

Baseline DETR3D

cd perception/detr3d

Training:

tools/dist_train.sh projects/configs/detr3d/detr3d_res50_gridmask.py 4 --work-dir work_dirs/detr3d_res50_gridmask/

Evaluation:

tools/dist_test.sh projects/configs/detr3d/detr3d_res50_gridmask.py work_dirs/detr3d_res50_gridmask/epoch_24.pth 4 --eval bbox

Collaborative UAVs 3D Object Detection

cd collaborative_perception/bevfusion

Training (lowerbound / upperbound / v2vnet / when2com / who2com / disconet; the when2com example is shown):

torchpack dist-run -np 4 python tools/train.py configs/nuscenes/det/centerhead/lssfpn/camera/256x704/swint/when2com/default.yaml --model.encoders.camera.backbone.init_cfg.checkpoint pretrained/swint-nuimages-pretrained.pth --run-dir runs/when2com

Evaluation (lowerbound / upperbound / v2vnet / when2com / who2com / disconet; the when2com example is shown):

torchpack dist-run -np 4 python tools/test.py configs/nuscenes/det/centerhead/lssfpn/camera/256x704/swint/when2com/default.yaml runs/when2com/epoch_24.pth --eval bbox

Object Tracking

cd tracking/CenterPoint

Put the results_nusc.json file in the CenterPoint/input folder.

Evaluation:

python pub_test.py --work_dir /{dir}/UAV3D/tracking/CenterPoint/output --checkpoint /{dir}/UAV3D/tracking/CenterPoint/input/results_nusc.json
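
The `results_nusc.json` input is a detection result file in the nuScenes submission style, which CenterPoint-based tracking scripts typically consume. A minimal sketch of that layout follows; the field names (`meta`, `results`, `translation`, `detection_name`, etc.) come from the nuScenes format, and the sample token and box values here are placeholders, not UAV3D data:

```python
import json

# Hypothetical single-detection result file in the nuScenes
# submission format: one list of box dicts per sample token.
results = {
    "meta": {"use_camera": True, "use_lidar": False},
    "results": {
        "sample_token_0": [
            {
                "sample_token": "sample_token_0",
                "translation": [10.0, 5.0, 1.0],   # box center (x, y, z), meters
                "size": [1.8, 4.5, 1.6],           # width, length, height
                "rotation": [1.0, 0.0, 0.0, 0.0],  # quaternion (w, x, y, z)
                "velocity": [0.0, 0.0],
                "detection_name": "car",
                "detection_score": 0.9,
                "attribute_name": "",
            }
        ]
    },
}

with open("results_nusc.json", "w") as f:
    json.dump(results, f)

# Sanity check that the file round-trips.
with open("results_nusc.json") as f:
    loaded = json.load(f)
print(len(loaded["results"]["sample_token_0"]))
```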

Main Results

3D Object Detection (UAV3D val)

Model Backbone Size mAP↑ NDS↑ mATE↓ mASE↓ mAOE↓ Checkpoint Log
PETR Res-50 704×256 0.512 0.571 0.741 0.173 0.072 link link
BEVFusion Res-50 704×256 0.487 0.458 0.615 0.152 1.000 link link
DETR3D Res-50 704×256 0.430 0.509 0.791 0.187 0.100 link link
PETR Res-50 800×450 0.581 0.632 0.625 0.160 0.064 link link
BEVFusion Res-101 800×450 0.536 0.582 0.521 0.154 0.343 link link
DETR3D Res-101 800×450 0.618 0.671 0.494 0.158 0.070 link link

3D Object Tracking (UAV3D val)

Model Backbone Size AMOTA↑ AMOTP↓ MOTA↑ MOTP↓ TID↓ LGD↓ det_result Log
PETR Res-50 704×256 0.199 1.294 0.195 0.794 1.280 2.970 link link
BEVFusion Res-50 704×256 0.566 1.137 0.501 0.695 0.790 1.600 link link
DETR3D Res-50 704×256 0.089 1.382 0.121 0.800 1.540 3.530 link link
PETR Res-50 800×450 0.291 1.156 0.256 0.677 1.090 2.550 link link
BEVFusion Res-101 800×450 0.606 1.006 0.540 0.627 0.700 1.390 link link
DETR3D Res-101 800×450 0.262 1.123 0.238 0.561 1.140 2.720 link link

Collaborative 3D Object Detection (UAV3D val)

Model mAP↑ NDS↑ mATE↓ mASE↓ mAOE↓ AP@IoU=0.5↑ AP@IoU=0.7↑ Checkpoint Log
Lower-bound 0.544 0.556 0.540 0.147 0.578 0.457 0.140 link link
When2com 0.550 0.507 0.534 0.156 0.679 0.461 0.166 link link
Who2com 0.546 0.597 0.541 0.150 0.263 0.453 0.141 link link
V2VNet 0.647 0.628 0.508 0.167 0.533 0.545 0.141 link link
DiscoNet 0.700 0.689 0.423 0.143 0.422 0.649 0.247 link link
Upper-bound 0.720 0.748 0.391 0.106 0.117 0.673 0.316 link link

Collaborative 3D Object Tracking (UAV3D val)

Model AMOTA↑ AMOTP↓ MOTA↑ MOTP↓ TID↓ LGD↓ det_result Log
Lower-bound 0.644 1.018 0.593 0.611 0.620 1.280 link link
When2com 0.646 1.012 0.595 0.618 0.590 1.200 link link
Who2com 0.648 1.012 0.602 0.623 0.580 1.200 link link
V2VNet 0.782 0.803 0.735 0.587 0.360 0.710 link link
DiscoNet 0.809 0.703 0.766 0.516 0.300 0.590 link link
Upper-bound 0.812 0.672 0.781 0.476 0.300 0.570 link link

Citation

If you find this repository useful, please consider giving it a star ⭐ and a citation 📘:

@inproceedings{uav3d2024,
  title={UAV3D: A Large-scale 3D Perception Benchmark for Unmanned Aerial Vehicles},
  author={Hui Ye and Raj Sunderraman and Shihao Ji},
  booktitle={The 38th Conference on Neural Information Processing Systems (NeurIPS)},
  year={2024}
}

Acknowledgement

In collecting UAV3D, we received valuable help and suggestions from the authors of CoPerception-UAV and Where2comm.

For the 3D object detection task, our implementation is based on PETR, BEVFusion, and DETR3D.

For the collaborative 3D object detection task, our implementation is based on BEVFusion and CoPerception.

For the object tracking task, our implementation is based on CenterPoint.

The software and data were created by Georgia State University Research Foundation under Army Research Laboratory (ARL) Award Numbers W911NF-22-2-0025 and W911NF-23-2-0224. ARL, as the Federal awarding agency, reserves a royalty-free, nonexclusive and irrevocable right to reproduce, publish, or otherwise use this software for Federal purposes, and to authorize others to do so in accordance with 2 CFR 200.315(b).
