Official PyTorch implementation of the following paper:
Part-aware Personalized Segment Anything Model for Patient-Specific Segmentation
Chenhui Zhao and Liyue Shen
University of Michigan
We propose P2SAM, a training-free method that adapts a promptable segmentation model to new patients using only one-shot patient-specific data. P2SAM incorporates a part-aware prompt mechanism and a distribution-based retrieval approach to enhance generalization across:
- Tasks: P2SAM enhances generalization across different patient-specific segmentation tasks.
- Models: P2SAM can be plugged into various promptable segmentation models, such as SAM, fine-tuned SAM, and SAM 2.
- Domains: P2SAM performs effectively in both medical and natural image domains.
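To make the two components above concrete, here is a minimal NumPy sketch of the idea: the part-aware prompt mechanism clusters the reference foreground features into parts and matches each part centroid to a target pixel to form point prompts, and the distribution-based retrieval picks the number of parts whose prompted target features stay closest to the reference distribution. All function names, the plain k-means, and the mean-feature distance are illustrative stand-ins for the paper's exact formulation, not the repository's implementation.

```python
# Illustrative sketch only: features are random stand-ins for SAM image
# embeddings; the clustering and distance choices are assumptions.
import numpy as np

def part_prompts(ref_feats, ref_mask, tgt_feats, n_parts, seed=0):
    """Cluster reference foreground features into n_parts; for each part
    centroid, return the best-matching target pixel as a point prompt."""
    fg = ref_feats[ref_mask.astype(bool)]              # (N, C) foreground feats
    rng = np.random.default_rng(seed)
    centroids = fg[rng.choice(len(fg), n_parts, replace=False)]
    for _ in range(10):                                # plain k-means iterations
        d = np.linalg.norm(fg[:, None] - centroids[None], axis=-1)
        assign = d.argmin(1)
        centroids = np.stack([fg[assign == k].mean(0) if (assign == k).any()
                              else centroids[k] for k in range(n_parts)])
    H, W, C = tgt_feats.shape
    sims = tgt_feats.reshape(-1, C) @ centroids.T      # similarity per part
    idx = sims.argmax(0)                               # best target pixel per part
    return np.stack([np.unravel_index(i, (H, W)) for i in idx])  # (n_parts, 2)

def retrieve_n_parts(ref_feats, ref_mask, tgt_feats, candidates=(1, 2, 4)):
    """Distribution-based retrieval (sketch): choose the candidate n_parts
    whose prompted target features lie closest, by mean-feature distance,
    to the reference foreground distribution."""
    ref_mean = ref_feats[ref_mask.astype(bool)].mean(0)
    best_n, best_d = None, np.inf
    for n in candidates:
        pts = part_prompts(ref_feats, ref_mask, tgt_feats, n)
        tgt_mean = tgt_feats[pts[:, 0], pts[:, 1]].mean(0)
        d = np.linalg.norm(tgt_mean - ref_mean)
        if d < best_d:
            best_n, best_d = n, d
    return best_n
```

The selected prompts would then be passed to any promptable segmenter (SAM, a fine-tuned SAM, or SAM 2) in place of a manual click.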
- (2025-01) Release P2SAM's SAM fine-tuning code and fine-tuned models.
- (2025-01) Release P2SAM code for adaptive NSCLC segmentation.
- (2025-01) Release P2SAM code for endoscopy video segmentation.
- (2024-07) Release P2SAM code for personalized segmentation.
Create a Python environment with:
conda env create -f environment.yaml
Prepare datasets with:
Prepare pre-trained and fine-tuned models with:
Fine-tune SAM on custom datasets (using SAM's iterative training strategy) with:
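For intuition, SAM's iterative training strategy simulates interactive use: after each prediction, the next click is sampled from the error region (a foreground click inside false negatives, a background click inside false positives) and the model re-predicts with the accumulated prompts. The sketch below illustrates only this prompt-sampling loop with a stand-in `predict` function; it is not the repository's training code, and a real loop would also compute the loss and backpropagate each round.

```python
# Hypothetical sketch of iterative click sampling; `predict` is a
# placeholder for the real SAM forward pass.
import numpy as np

def sample_next_click(gt, pred, rng):
    """Return ((y, x), label) for the next simulated user click,
    drawn from the larger of the two error regions."""
    fn = gt & ~pred          # missed foreground -> positive click (label 1)
    fp = pred & ~gt          # spurious foreground -> negative click (label 0)
    region, label = (fn, 1) if fn.sum() >= fp.sum() else (fp, 0)
    if region.sum() == 0:    # prediction already matches the ground truth
        return None
    ys, xs = np.nonzero(region)
    i = rng.integers(len(ys))
    return (int(ys[i]), int(xs[i])), label

def iterative_rounds(gt, predict, n_rounds=3, seed=0):
    """Run n_rounds of click sampling and re-prediction, accumulating
    the click list that a training step would feed to the model."""
    rng = np.random.default_rng(seed)
    clicks, pred = [], np.zeros_like(gt, dtype=bool)
    for _ in range(n_rounds):
        nxt = sample_next_click(gt, pred, rng)
        if nxt is None:      # stop early once the mask is error-free
            break
        clicks.append(nxt)
        pred = predict(clicks)
    return clicks, pred
```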
Test P2SAM on out-of-distribution datasets with:
This repository is built upon the DeiT, SAM, and PerSAM repositories.
If you find this repository helpful, please consider citing:
@article{zhao2024part,
  title={Part-aware Personalized Segment Anything Model for Patient-Specific Segmentation},
  author={Zhao, Chenhui and Shen, Liyue},
  journal={arXiv preprint arXiv:2403.05433},
  year={2024}
}