arXiv, 2024

Xiangtai Li · Haobo Yuan · Wei Li · Henghui Ding · Size Wu · Wenwei Zhang · Yining Li · Kai Chen · Chen Change Loy*
In this work, we address various segmentation tasks, each traditionally tackled by distinct or partially unified models. We propose OMG-Seg, One Model that is Good enough to efficiently and effectively handle all the segmentation tasks, including image semantic, instance, and panoptic segmentation, as well as their video counterparts, open-vocabulary settings, prompt-driven interactive segmentation (as in SAM), and video object segmentation. To our knowledge, this is the first model to handle all these tasks in one model and achieve good enough performance.
We show that OMG-Seg, a transformer-based encoder-decoder architecture with task-specific queries and outputs, can support over ten distinct segmentation tasks while significantly reducing computational and parameter overhead across various tasks and datasets. We rigorously evaluate the inter-task influences and correlations during co-training. Both the code and models will be publicly available.
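To make the shared-decoder idea concrete, here is a minimal, hypothetical sketch of a query-based decoder in which all tasks reuse one transformer decoder and differ only in the queries they feed in. The module name `SharedMaskDecoder`, the query counts, and the head layout are illustrative assumptions, not the released OMG-Seg implementation.

```python
# Minimal, hypothetical sketch of a shared query-based decoder (not the released OMG-Seg code).
import torch
import torch.nn as nn


class SharedMaskDecoder(nn.Module):
    """One transformer decoder reused by all tasks; tasks differ only in their queries."""

    def __init__(self, embed_dim=256, num_heads=8, num_layers=6, num_object_queries=300):
        super().__init__()
        layer = nn.TransformerDecoderLayer(embed_dim, num_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        # Learned object queries for (open-vocabulary) image/video segmentation.
        self.object_queries = nn.Embedding(num_object_queries, embed_dim)
        # Shared heads: class embeddings (matched against text embeddings) and mask embeddings.
        self.cls_head = nn.Linear(embed_dim, embed_dim)
        self.mask_head = nn.Linear(embed_dim, embed_dim)

    def forward(self, pixel_feats, prompt_queries=None):
        # pixel_feats: (B, HW, C) flattened features from a frozen backbone.
        b = pixel_feats.size(0)
        queries = self.object_queries.weight.unsqueeze(0).expand(b, -1, -1)
        if prompt_queries is not None:
            # Interactive (SAM-like) prompts are appended as extra queries.
            queries = torch.cat([queries, prompt_queries], dim=1)
        decoded = self.decoder(queries, pixel_feats)
        cls_emb = self.cls_head(decoded)
        # Masks are dot products between mask embeddings and per-pixel features.
        masks = torch.einsum("bqc,bpc->bqp", self.mask_head(decoded), pixel_feats)
        return cls_emb, masks
```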
- Test code and models are released!
- Release training code.
- Release CKPTs.
- Support HuggingFace.
- The first universal model that supports image segmentation, video segmentation, open-vocabulary segmentation, multi-dataset segmentation, and interactive segmentation.
- A new unified view for solving multiple segmentation tasks with one model.
See DATASET.md
Our codebase is built on MMDetection 3.0.
See INSTALL.md
See the configs under seg/configs/m2ov_val.
For example, to test on the COCO dataset:
./tools/dist.sh test seg/configs/m2ov_val/eval_m2_convl_300q_ov_coco.py model_path 4
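Since the codebase follows MMDetection 3.x conventions, single-image inference can also be sketched with the standard mmdet high-level API. The checkpoint path and test image below are placeholders; treat this as an untested illustration under that assumption, not documented usage of this repo.

```python
# Hypothetical single-image inference sketch using the standard MMDetection 3.x API.
# Checkpoint and image paths are placeholders; configs live under seg/configs/m2ov_val.
from mmdet.apis import init_detector, inference_detector

config_file = 'seg/configs/m2ov_val/eval_m2_convl_300q_ov_coco.py'  # example config
checkpoint_file = 'path/to/checkpoint.pth'                          # placeholder checkpoint

model = init_detector(config_file, checkpoint_file, device='cuda:0')
result = inference_detector(model, 'demo.jpg')  # returns a DetDataSample with predictions
print(result)
```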
ConvNeXt-Large backbone. model
ConvNeXt-XXLarge backbone. model
If you find the OMG-Seg codebase useful for your research, please consider citing us:
@article{omgseg,
title={OMG-Seg: Is One Model Good Enough For All Segmentation?},
author={Li, Xiangtai and Yuan, Haobo and Li, Wei and Ding, Henghui and Wu, Size and Zhang, Wenwei and Li, Yining and Chen, Kai and Loy, Chen Change},
journal={arXiv},
year={2024}
}
This project is released under the S-Lab License.