Mask DINO


By Feng Li*, Hao Zhang*, Huaizhe Xu, Shilong Liu, Lei Zhang, Lionel M. Ni, and Heung-Yeung Shum.

This repository is an official detrex implementation of Mask DINO: Towards A Unified Transformer-based Framework for Object Detection and Segmentation (DINO is pronounced `daɪnoʊ`, as in dinosaur). The original source code is available at MaskDINO.

Features

  • A unified architecture for object detection and panoptic, instance, and semantic segmentation.
  • Achieves task and data cooperation between detection and segmentation.
  • State-of-the-art performance under the same setting.
  • Supports major detection and segmentation datasets: COCO, ADE20K, and Cityscapes.

Code Updates

  • [2022/11] Our code is available! It achieves 51.7 and 59.0 box AP on COCO with ResNet-50 and Swin-L backbones, respectively, without extra detection data, surpassing DINO's detection performance under the same setting!

  • [2022/6] We release Mask DINO, a unified detection and segmentation model that achieves the best results on all three segmentation tasks (54.7 mask AP on the COCO instance leaderboard, 59.5 PQ on the COCO panoptic leaderboard, and 60.8 mIoU on the ADE20K semantic leaderboard)!


Installation

See installation instructions.

Getting Started

See Results.

See Preparing Datasets for MaskDINO.

See More Usage.

Results

Released Models

COCO Instance Segmentation and Object Detection

In this part, we follow DINO and use a hidden dimension of 2048 in the encoder by default. We also use the mask-enhanced box initialization proposed in our paper by default. For completeness, we also list models trained with a hidden dimension of 1024 (hid 1024) and without mask-enhanced initialization (no mask enhance) in these tables.

| Name                | Backbone | Epochs | Mask AP | Box AP | Params | GFLOPs | Download |
|---------------------|----------|--------|---------|--------|--------|--------|----------|
| MaskDINO (hid 1024) | R50      | 50     | 46.1    | 51.5   | 47M    | 226    | model    |
| MaskDINO \| config  | R50      | 50     | 46.3    | 51.7   | 52M    | 286    | model    |

COCO Panoptic Segmentation

| Name               | Backbone | Epochs | PQ   | Mask AP | Box AP | mIoU | Download |
|--------------------|----------|--------|------|---------|--------|------|----------|
| MaskDINO \| config | R50      | 50     | 53.0 | 48.8    | 44.3   | 60.6 | model    |

To Release

These models can be found and tested in the MaskDINO source code and will be available in detrex soon.

COCO Instance Segmentation and Object Detection

As above, these models use a hidden dimension of 2048 in the encoder and mask-enhanced box initialization by default; a variant trained without mask-enhanced initialization (no mask enhance) is also listed.

| Name                                 | Backbone       | Epochs | Mask AP | Box AP | Params | GFLOPs | Download |
|--------------------------------------|----------------|--------|---------|--------|--------|--------|----------|
| MaskDINO (no mask enhance) \| config | Swin-L (IN21k) | 50     | 52.1    | 58.3   | 223M   | 1326   | model    |
| MaskDINO \| config                   | Swin-L (IN21k) | 50     | 52.3    | 59.0   | 223M   | 1326   | model    |

COCO Panoptic Segmentation

| Name               | Backbone       | Epochs | PQ   | Mask AP | Box AP | mIoU | Download |
|--------------------|----------------|--------|------|---------|--------|------|----------|
| MaskDINO \| config | Swin-L (IN21k) | 50     | 58.3 | 50.6    | 56.2   | 67.5 | model    |

Semantic Segmentation (ADE20K and Cityscapes)

| Name               | Dataset    | Backbone | Iterations | mIoU | Download |
|--------------------|------------|----------|------------|------|----------|
| MaskDINO \| config | ADE20K     | R50      | 160k       | 48.7 | model    |
| MaskDINO \| config | Cityscapes | R50      | 90k        | 79.8 | model    |

All models were trained with 4 NVIDIA A100 GPUs (ResNet-50 based models) or 8 NVIDIA A100 GPUs (Swin-L based models).

More Usage

Mask-enhanced box initialization

We provide two ways to convert predicted masks to boxes for initializing the decoder boxes. You can set `MODEL.MaskDINO.INITIALIZE_BOX_TYPE` to one of the following:

  • `no`: do not use mask-enhanced box initialization.
  • `mask2box`: a fast conversion.
  • `bitmask`: the conversion provided by detectron2; slower but more accurate.

The two conversions do not affect the final performance much, so you can choose either.
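
For intuition, here is a minimal sketch of the kind of mask-to-box conversion that `mask2box` performs. It is illustrative only; the repository's implementation may differ, and torchvision's `torchvision.ops.masks_to_boxes` offers a ready-made equivalent:

```python
import torch

def mask2box(masks: torch.Tensor) -> torch.Tensor:
    """Convert binary masks of shape (N, H, W) to xyxy boxes of shape (N, 4).

    Uses an exclusive max convention (x_max = last foreground column + 1).
    Empty masks yield all-zero boxes.
    """
    boxes = torch.zeros(masks.shape[0], 4, dtype=torch.float32)
    for i, mask in enumerate(masks):
        ys, xs = torch.where(mask > 0.5)  # coordinates of foreground pixels
        if xs.numel() > 0:
            boxes[i] = torch.stack(
                [xs.min(), ys.min(), xs.max() + 1, ys.max() + 1]
            ).float()
    return boxes

# Example: a 3x4 foreground block starting at (row 2, col 3).
masks = torch.zeros(1, 8, 8)
masks[0, 2:5, 3:7] = 1.0
print(mask2box(masks))  # tensor([[3., 2., 7., 5.]])
```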

In addition, if you have already trained a model for 50 epochs without mask-enhanced box initialization, you can plug the method in and simply finetune the model for the last few epochs (i.e., load the model trained for 32k iterations and finetune it). This achieves performance similar to training from scratch while being more flexible.
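
A hedged sketch of that finetuning setup, assuming the detectron2-style (yacs) config of the MaskDINO source code; `add_maskdino_config`, the config file path, and the checkpoint path are assumptions to verify against the repository:

```python
from detectron2.config import get_cfg
from maskdino import add_maskdino_config  # assumed helper that registers MaskDINO config keys

cfg = get_cfg()
add_maskdino_config(cfg)
cfg.merge_from_file("configs/coco/instance-segmentation/maskdino_r50_50ep.yaml")  # hypothetical path

# Load the checkpoint trained without mask-enhanced initialization (~32k iterations),
# switch the initialization on, and train out the remainder of the original schedule.
cfg.MODEL.WEIGHTS = "output/model_0031999.pth"  # hypothetical checkpoint path
cfg.MODEL.MaskDINO.INITIALIZE_BOX_TYPE = "mask2box"
```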

Model components

MaskDINO consists of three components: a backbone, a pixel decoder and a Transformer decoder. You can easily replace each of these three components with your own implementation.

  • backbone: Define and register your backbone under maskdino/modeling/backbone. You can follow the Swin Transformer as an example; see the registration sketch after this list.

  • pixel decoder: the pixel decoder is actually the multi-scale encoder of DINO and Deformable DETR; we follow Mask2Former in calling it a pixel decoder. It is defined in maskdino/modeling/pixel_decoder, where you can swap in your own multi-scale encoder. Its returned values include:

    1. mask_features: per-pixel embeddings at 1/4 of the original image resolution, obtained by fusing the backbone's 1/4 features with the multi-scale encoder's 1/8 features. These are used to produce the binary masks.
    2. multi_scale_features: the multi-scale inputs to the Transformer decoder. For 4-scale ResNet-50 models, we use resolutions 1/8, 1/16, and 1/32 (arbitrary resolutions work here) and, following DINO, additionally downsample the 1/32 features to obtain a fourth scale at 1/64 resolution. For 5-scale Swin-L models, we additionally use the 1/4-resolution features, as in DINO.
  • transformer decoder: it mainly follows the DINO decoder to perform the detection and segmentation tasks. It is defined in maskdino/modeling/transformer_decoder.
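
As an illustration of the backbone component, here is a minimal sketch of registering a custom backbone with detectron2's BACKBONE_REGISTRY, the registry detectron2-based backbones use. `ToyBackbone` is a hypothetical stand-in, not part of the repository:

```python
import torch.nn as nn
from detectron2.layers import ShapeSpec
from detectron2.modeling import BACKBONE_REGISTRY, Backbone

@BACKBONE_REGISTRY.register()
class ToyBackbone(Backbone):
    """A stride-4, single-stage backbone producing one feature map named "res2"."""

    def __init__(self, cfg, input_shape: ShapeSpec):
        super().__init__()
        self.stem = nn.Conv2d(input_shape.channels, 64, kernel_size=7, stride=4, padding=3)

    def forward(self, x):
        return {"res2": self.stem(x)}

    def output_shape(self):
        # Downstream components (e.g., the pixel decoder) read this contract.
        return {"res2": ShapeSpec(channels=64, stride=4)}
```

Once registered, the backbone can be selected by name in a yacs config via MODEL.BACKBONE.NAME.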

License

Mask DINO is released under the Apache 2.0 license. Please see the LICENSE file for more information.

Copyright (c) IDEA. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use these files except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Citing Mask DINO

If you find our work helpful for your research, please consider citing the following BibTeX entry.

@misc{li2022mask,
      title={Mask DINO: Towards A Unified Transformer-based Framework for Object Detection and Segmentation}, 
      author={Feng Li and Hao Zhang and Huaizhe Xu and Shilong Liu and Lei Zhang and Lionel M. Ni and Heung-Yeung Shum},
      year={2022},
      eprint={2206.02777},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

If you find the code useful, please also consider the following BibTeX entries.

@misc{zhang2022dino,
      title={DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection}, 
      author={Hao Zhang and Feng Li and Shilong Liu and Lei Zhang and Hang Su and Jun Zhu and Lionel M. Ni and Heung-Yeung Shum},
      year={2022},
      eprint={2203.03605},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@inproceedings{li2022dn,
      title={{DN}-{DETR}: Accelerate {DETR} Training by Introducing Query Denoising},
      author={Li, Feng and Zhang, Hao and Liu, Shilong and Guo, Jian and Ni, Lionel M and Zhang, Lei},
      booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
      pages={13619--13627},
      year={2022}
}

@inproceedings{liu2022dabdetr,
      title={{DAB}-{DETR}: Dynamic Anchor Boxes are Better Queries for {DETR}},
      author={Shilong Liu and Feng Li and Hao Zhang and Xiao Yang and Xianbiao Qi and Hang Su and Jun Zhu and Lei Zhang},
      booktitle={International Conference on Learning Representations},
      year={2022},
      url={https://openreview.net/forum?id=oMI9PjOb9Jl}
}

Acknowledgement

Many thanks to these excellent open-source projects: