[Feature] Support BEVFusion in projects/ (open-mmlab#2236)
* add bevfusion models

* refactor

* build successfully

* update ImageAug3D

* support inference

* update the format of final bboxes

* add new loading func

* align test precision

* polish docstring

* refactor transformer decoder

* polish code

* fix table in readme

* fix table in readme

* fix table in readme

* update pre-commit-config

* minor changes

* revert the changes of file_client_args in LoadAnnotation3D

* remove unnecessary functions in BEVFusion

* fix loading bug

* fix docstring
JingweiZhang12 authored Jan 30, 2023
1 parent c6a8eb1 commit 4d77b4c
Showing 31 changed files with 5,015 additions and 24 deletions.
13 changes: 9 additions & 4 deletions mmdet3d/datasets/transforms/loading.py
@@ -193,10 +193,10 @@ def transform(self, results: dict) -> Optional[dict]:
         # unravel to list, see `DefaultFormatBundle` in formating.py
         # which will transpose each image separately and then stack into array
         results['img'] = [img[..., i] for i in range(img.shape[-1])]
-        results['img_shape'] = img.shape
-        results['ori_shape'] = img.shape
+        results['img_shape'] = img.shape[:2]
+        results['ori_shape'] = img.shape[:2]
         # Set initial values for default meta_keys
-        results['pad_shape'] = img.shape
+        results['pad_shape'] = img.shape[:2]
         if self.set_default_scale:
             results['scale_factor'] = 1.0
         num_channels = 1 if len(img.shape) < 3 else img.shape[2]
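
For context, the meta keys above are truncated because multi-view images are stacked along a trailing view axis, so the full `img.shape` is no longer a plain (H, W) pair. A minimal sketch of the difference (the array sizes here are hypothetical):

```python
import numpy as np

# Multi-view input stacked as (H, W, C, num_views); sizes are hypothetical.
img = np.zeros((256, 704, 3, 6), dtype=np.uint8)

print(img.shape)      # (256, 704, 3, 6) -- ambiguous as an "image shape"
print(img.shape[:2])  # (256, 704)       -- what results['img_shape'] now stores
```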
@@ -297,9 +297,13 @@ def __init__(self,
                  test_mode: bool = False) -> None:
         self.load_dim = load_dim
         self.sweeps_num = sweeps_num
+        if isinstance(use_dim, int):
+            use_dim = list(range(use_dim))
+        assert max(use_dim) < load_dim, \
+            f'Expect all used dimensions < {load_dim}, got {use_dim}'
         self.use_dim = use_dim
         self.file_client_args = file_client_args.copy()
-        self.file_client = None
+        self.file_client = mmengine.FileClient(**self.file_client_args)
         self.pad_empty_sweeps = pad_empty_sweeps
         self.remove_close = remove_close
         self.test_mode = test_mode
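
The new `use_dim` handling lets configs pass either an explicit list of indices or a single int meaning "the first N dimensions". A minimal sketch of the behavior, assuming the common nuScenes layout of `load_dim=5`:

```python
load_dim = 5  # e.g. x, y, z, intensity, timestamp
use_dim = 4   # an int is shorthand for "keep the first 4 dims"

if isinstance(use_dim, int):
    use_dim = list(range(use_dim))  # -> [0, 1, 2, 3]
assert max(use_dim) < load_dim, \
    f'Expect all used dimensions < {load_dim}, got {use_dim}'
```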
@@ -761,6 +765,7 @@ def __init__(
         self.with_mask_3d = with_mask_3d
         self.with_seg_3d = with_seg_3d
         self.seg_3d_dtype = seg_3d_dtype
+        self.file_client = None

     def _load_bboxes_3d(self, results: dict) -> dict:
         """Private function to move the 3D bounding box annotation from
5 changes: 3 additions & 2 deletions mmdet3d/utils/__init__.py
@@ -6,11 +6,12 @@
 from .setup_env import register_all_modules, setup_multi_processes
 from .typing_utils import (ConfigType, InstanceList, MultiConfig,
                            OptConfigType, OptInstanceList, OptMultiConfig,
-                           OptSamplingResultList)
+                           OptSampleList, OptSamplingResultList)

 __all__ = [
     'collect_env', 'setup_multi_processes', 'compat_cfg',
     'register_all_modules', 'array_converter', 'ArrayConverter', 'ConfigType',
     'OptConfigType', 'MultiConfig', 'OptMultiConfig', 'InstanceList',
-    'OptInstanceList', 'OptSamplingResultList', 'replace_ceph_backend'
+    'OptInstanceList', 'OptSamplingResultList', 'replace_ceph_backend',
+    'OptSampleList'
 ]
1 change: 1 addition & 0 deletions mmdet3d/utils/typing_utils.py
@@ -23,3 +23,4 @@

 OptSamplingResultList = Optional[SamplingResultList]
 SampleList = List[Det3DDataSample]
+OptSampleList = Optional[SampleList]
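
For context, `OptSampleList` is the natural type for model entry points that may run without annotated data samples attached. The signature below is illustrative only, not part of this commit:

```python
from typing import List, Optional

from mmdet3d.structures import Det3DDataSample

SampleList = List[Det3DDataSample]
OptSampleList = Optional[SampleList]


def _forward(batch_inputs: dict,
             batch_data_samples: OptSampleList = None) -> tuple:
    """Illustrative signature: data samples are optional here, e.g. when
    the network is traced for deployment without annotations."""
    ...
```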
126 changes: 126 additions & 0 deletions projects/BEVFusion/README.md
@@ -0,0 +1,126 @@
# BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation

> [BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation](https://arxiv.org/abs/2205.13542)

<!-- [ALGORITHM] -->

## Abstract

Multi-sensor fusion is essential for an accurate and reliable autonomous driving system. Recent approaches are based on point-level fusion: augmenting the LiDAR point cloud with camera features. However, the camera-to-LiDAR projection throws away the semantic density of camera features, hindering the effectiveness of such methods, especially for semantic-oriented tasks (such as 3D scene segmentation). In this paper, we break this deeply-rooted convention with BEVFusion, an efficient and generic multi-task multi-sensor fusion framework. It unifies multi-modal features in the shared bird's-eye view (BEV) representation space, which nicely preserves both geometric and semantic information. To achieve this, we diagnose and lift key efficiency bottlenecks in the view transformation with optimized BEV pooling, reducing latency by more than 40x. BEVFusion is fundamentally task-agnostic and seamlessly supports different 3D perception tasks with almost no architectural changes. It establishes the new state of the art on nuScenes, achieving 1.3% higher mAP and NDS on 3D object detection and 13.6% higher mIoU on BEV map segmentation, with 1.9x lower computation cost. Code to reproduce our results is available at https://github.com/mit-han-lab/bevfusion.

<div align=center>
<img src="https://user-images.githubusercontent.com/34888372/215313913-4b43f8a1-e2e2-49ba-b631-992155351922.png" width="800"/>
</div>

## Introduction

We implement BEVFusion and provide the results and pretrained checkpoints on the NuScenes dataset.

## Usage

<!-- For a typical model, this section should contain the commands for training and testing. You are also suggested to dump your environment specification to env.yml by `conda env export > env.yml`. -->

### Compiling operations on CUDA

**Note** that the voxelization OP in the original implementation of `BEVFusion` is different from the implementation in MMCV. If you want to use the original pretrained model [here](https://github.com/mit-han-lab/bevfusion/blob/main/README.md), you need to use the original implementation of the voxelization OP.

```bash
python projects/BEVFusion/setup.py develop
```
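
One quick sanity check after compiling is to import the project package from MMDetection3D's root directory; this is a smoke test under the assumption that the compiled ops are imported along with the package:

```bash
python -c "import projects.BEVFusion.bevfusion; print('BEVFusion imported OK')"
```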

### Training commands

In MMDetection3D's root directory, run the following command to train the model:

```bash
python tools/train.py projects/BEVFusion/configs/bevfusion_voxel0075_second_secfpn_8xb4-cyclic-20e_nus-3d.py
```

For multi-GPU training, run:

```bash
python -m torch.distributed.launch --nnodes=1 --node_rank=0 --nproc_per_node=${NUM_GPUS} --master_port=29506 --master_addr="127.0.0.1" tools/train.py projects/BEVFusion/configs/bevfusion_voxel0075_second_secfpn_8xb4-cyclic-20e_nus-3d.py
```
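
Alternatively, the repository's wrapper script `tools/dist_train.sh` should launch an equivalent distributed job (a sketch; replace `8` with your GPU count):

```bash
bash tools/dist_train.sh projects/BEVFusion/configs/bevfusion_voxel0075_second_secfpn_8xb4-cyclic-20e_nus-3d.py 8
```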

### Testing commands

In MMDetection3D's root directory, run the following command to test the model:

```bash
python tools/test.py projects/BEVFusion/configs/bevfusion_voxel0075_second_secfpn_8xb4-cyclic-20e_nus-3d.py ${CHECKPOINT_PATH}
```

## Results and models

### NuScenes

| Backbone | Voxel type (voxel size) | NMS | Mem (GB) | Inf time (fps) | NDS | mAP | Download |
| :-----------------------------------------------------------------------------: | :---------------------: | :-: | :------: | :------------: | :---: | :---: | :------------------------------------------------------------------------------------------------------: |
| [SECFPN](./configs/bevfusion_voxel0075_second_secfpn_8xb4-cyclic-20e_nus-3d.py) | voxel (0.075) | × | - | - | 71.62 | 68.77 | [converted_model](https://drive.google.com/file/d/1QkvbYDk4G2d6SZoeJqish13qSyXA4lp3/view?usp=share_link) |

## Citation

```latex
@inproceedings{liu2022bevfusion,
title={BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation},
author={Liu, Zhijian and Tang, Haotian and Amini, Alexander and Yang, Xingyu and Mao, Huizi and Rus, Daniela and Han, Song},
booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
year={2023}
}
```

## Checklist

<!-- Here is a checklist illustrating a usual development workflow of a successful project, and also serves as an overview of this project's progress. The PIC (person in charge) or contributors of this project should check all the items that they believe have been finished, which will further be verified by codebase maintainers via a PR.
OpenMMLab's maintainer will review the code to ensure the project's quality. Reaching the first milestone means that this project meets the minimum requirement of being merged into 'projects/'. But this project is only eligible to become a part of the core package upon attaining the last milestone.
Note that keeping this section up-to-date is crucial not only for this project's developers but also for the entire community, since there might be some other contributors joining this project and deciding their starting point from this list. It also helps maintainers accurately estimate time and effort on further code polishing, if needed.
A project does not necessarily have to be finished in a single PR, but it's essential for the project to at least reach the first milestone in its very first PR. -->

- [x] Milestone 1: PR-ready, and acceptable to be one of the `projects/`.

- [x] Finish the code

<!-- The code's design shall follow existing interfaces and convention. For example, each model component should be registered into `mmdet3d.registry.MODELS` and configurable via a config file. -->

- [x] Basic docstrings & proper citation

<!-- Each major object should contain a docstring, describing its functionality and arguments. If you have adapted the code from other open-source projects, don't forget to cite the source project in docstring and make sure your behavior is not against its license. Typically, we do not accept any code snippet under GPL license. [A Short Guide to Open Source Licenses](https://medium.com/nationwide-technology/a-short-guide-to-open-source-licenses-cf5b1c329edd) -->

- [x] Test-time correctness

<!-- If you are reproducing the result from a paper, make sure your model's inference-time performance matches that in the original paper. The weights usually could be obtained by simply renaming the keys in the official pre-trained weights. This test could be skipped though, if you are able to prove the training-time correctness and check the second milestone. -->

- [x] A full README

<!-- As this template does. -->

- [ ] Milestone 2: Indicates a successful model implementation.

- [ ] Training-time correctness

<!-- If you are reproducing the result from a paper, checking this item means that you should have trained your model from scratch based on the original paper's specification and verified that the final result matches the report within a minor error range. -->

- [ ] Milestone 3: Good to be a part of our core package!

- [ ] Type hints and docstrings

<!-- Ideally *all* the methods should have [type hints](https://www.pythontutorial.net/python-basics/python-type-hints/) and [docstrings](https://google.github.io/styleguide/pyguide.html#381-docstrings). [Example](https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/mmdet3d/models/detectors/fcos_mono3d.py) -->

- [ ] Unit tests

<!-- Unit tests for each module are required. [Example](https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/tests/test_models/test_dense_heads/test_fcos_mono3d_head.py) -->

- [ ] Code polishing

<!-- Refactor your code according to reviewer's comment. -->

- [ ] Metafile.yml

<!-- It will be parsed by MIM and Inferencer. [Example](https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/configs/fcos3d/metafile.yml) -->

- [ ] Move your modules into the core package following the codebase's file hierarchy structure.

<!-- In particular, you may have to refactor this README into a standard one. [Example](/configs/textdet/dbnet/README.md) -->

18 changes: 18 additions & 0 deletions projects/BEVFusion/bevfusion/__init__.py
@@ -0,0 +1,18 @@
from .bevfusion import BEVFusion
from .bevfusion_necks import GeneralizedLSSFPN
from .depth_lss import DepthLSSTransform
from .loading import BEVLoadMultiViewImageFromFiles
from .sparse_encoder import BEVFusionSparseEncoder
from .transformer import TransformerDecoderLayer
from .transforms_3d import GridMask, ImageAug3D
from .transfusion_head import ConvFuser, TransFusionHead
from .utils import (BBoxBEVL1Cost, HeuristicAssigner3D, HungarianAssigner3D,
                    IoU3DCost)

__all__ = [
    'BEVFusion', 'TransFusionHead', 'ConvFuser', 'ImageAug3D', 'GridMask',
    'GeneralizedLSSFPN', 'HungarianAssigner3D', 'BBoxBEVL1Cost', 'IoU3DCost',
    'HeuristicAssigner3D', 'DepthLSSTransform',
    'BEVLoadMultiViewImageFromFiles', 'BEVFusionSparseEncoder',
    'TransformerDecoderLayer'
]
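
For reference, project-local modules like these are usually activated from a config file via MMEngine's `custom_imports`, which runs the `register_module()` decorators before the model is built. A minimal sketch, assuming the config is executed from MMDetection3D's root directory:

```python
# In a config file: import the project package so its components get
# registered in the MODELS / TRANSFORMS registries before objects are built.
custom_imports = dict(
    imports=['projects.BEVFusion.bevfusion'], allow_failed_imports=False)
```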