Alexander Kirillov, Yuxin Wu, Kaiming He, Ross Girshick
In this repository, we release code for PointRend in Detectron2. PointRend can be flexibly applied on top of existing state-of-the-art models for both instance segmentation and semantic segmentation (coming soon).
Install Detectron2 following INSTALL.md. You are ready to go!
This Colab Notebook tutorial contains examples of PointRend usage and visualizations of its point sampling stages.
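The point sampling visualized in the notebook is driven by a simple uncertainty measure: for a binary mask, the least confident points are those whose logit is closest to zero (probability near 0.5), and PointRend concentrates its refinement on them. The sketch below (a minimal NumPy illustration, not the repository's implementation, which lives in the point head modules) shows how the most uncertain pixels of a coarse mask prediction can be selected:

```python
import numpy as np

def uncertainty(logits):
    # For a binary mask, a logit near 0 means probability near 0.5,
    # i.e. maximal uncertainty; score points by the negative absolute logit.
    return -np.abs(logits)

def topk_uncertain_points(logits, num_points):
    """Return (row, col) indices of the num_points most uncertain pixels."""
    scores = uncertainty(logits).ravel()
    idx = np.argsort(scores)[::-1][:num_points]  # highest uncertainty first
    return np.stack(np.unravel_index(idx, logits.shape), axis=1)

# Toy 4x4 grid of coarse mask logits: positive = foreground, negative = background.
logits = np.array([[ 5.0,  4.0,  0.1, -3.0],
                   [ 4.0,  0.2, -0.1, -4.0],
                   [ 0.3, -0.2, -5.0, -6.0],
                   [-1.0, -4.0, -6.0, -7.0]])
pts = topk_uncertain_points(logits, 3)
print(pts)  # the three pixels whose logits lie closest to the decision boundary
```

In the actual model these selected points are re-classified by a small point head using fine-grained features, rather than simply thresholded.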
To train a model with 8 GPUs, run:

```bash
cd /path/to/detectron2/projects/PointRend
python train_net.py --config-file configs/InstanceSegmentation/pointrend_rcnn_R_50_FPN_1x_coco.yaml --num-gpus 8
```
Model evaluation can be done similarly:

```bash
cd /path/to/detectron2/projects/PointRend
python train_net.py --config-file configs/InstanceSegmentation/pointrend_rcnn_R_50_FPN_1x_coco.yaml --eval-only MODEL.WEIGHTS /path/to/model_checkpoint
```
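Both commands use Detectron2's standard `train_net.py` interface, so config options can be overridden on the command line after the flags. For example (the values below are illustrative, not tuned settings), to train on a single GPU you would shrink the batch size and scale the learning rate down accordingly, following the linear scaling rule:

```shell
cd /path/to/detectron2/projects/PointRend
python train_net.py \
  --config-file configs/InstanceSegmentation/pointrend_rcnn_R_50_FPN_1x_coco.yaml \
  --num-gpus 1 \
  SOLVER.IMS_PER_BATCH 2 SOLVER.BASE_LR 0.0025
```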
| Mask head | Backbone | lr sched | Output resolution | mask AP | mask AP* | model id | download |
|---|---|---|---|---|---|---|---|
| PointRend | R50-FPN | 1× | 224×224 | 36.2 | 39.7 | 164254221 | model \| metrics |
| PointRend | R50-FPN | 3× | 224×224 | 38.3 | 41.6 | 164955410 | model \| metrics |
AP* is COCO mask AP evaluated against the higher-quality LVIS annotations; see the paper for details. Run `python detectron2/datasets/prepare_cocofied_lvis.py` to prepare GT files for AP* evaluation. Since LVIS annotations are not exhaustive, lvis-api, and not cocoapi, should be used to evaluate AP*.
The Cityscapes model is trained with ImageNet pretraining.
| Mask head | Backbone | lr sched | Output resolution | mask AP | model id | download |
|---|---|---|---|---|---|---|
| PointRend | R50-FPN | 1× | 224×224 | 35.9 | 164255101 | model \| metrics |
[coming soon]
If you use PointRend, please use the following BibTeX entry:

```
@article{kirillov2019pointrend,
  title={{PointRend}: Image Segmentation as Rendering},
  author={Alexander Kirillov and Yuxin Wu and Kaiming He and Ross Girshick},
  journal={arXiv:1912.08193},
  year={2019}
}
```