ConceptCLIP: Towards Trustworthy Medical AI via Concept-Enhanced Contrastive Language-Image Pre-training
ConceptCLIP is a large-scale pre-trained vision-language model designed for diverse medical image modalities.
It enhances language-image pre-training with medical concepts, enabling a single model to handle a wide range of image modalities and tasks.
Authors: Yuxiang Nie*, Sunan He*, Yequan Bie*, Yihui Wang, Zhixuan Chen, Shu Yang, Hao Chen** (*Equal Contribution, **Corresponding author)
- [01/25] 🔥 We released ConceptCLIP, a concept-enhanced pre-trained model for medical vision-language tasks. Explore our paper and model.
ConceptCLIP is built on OpenCLIP. Set up your environment with the dependencies listed in requirements.txt (e.g., pip install -r requirements.txt).
ConceptCLIP is integrated with Hugging Face, making it easy to load and use in Python.
from transformers import AutoModel, AutoProcessor
import torch
from PIL import Image

# Load the ConceptCLIP model and its processor from Hugging Face.
model = AutoModel.from_pretrained('JerrryNie/ConceptCLIP', trust_remote_code=True)
processor = AutoProcessor.from_pretrained('JerrryNie/ConceptCLIP', trust_remote_code=True)

# Zero-shot classification: compare one image against text prompts for candidate labels.
image = Image.open('example_data/chest_X-ray.jpg').convert('RGB')
labels = ['chest X-ray', 'brain MRI', 'skin lesion']
texts = [f'a medical image of {label}' for label in labels]

inputs = processor(
    images=image,
    text=texts,
    return_tensors='pt',
    padding=True,
    truncation=True
).to(model.device)

with torch.no_grad():
    outputs = model(**inputs)

# Scale image-text similarities and convert them to per-label probabilities.
logits = (outputs['logit_scale'] * outputs['image_features'] @ outputs['text_features'].t()).softmax(dim=-1)[0]
print({label: f"{prob:.2%}" for label, prob in zip(labels, logits)})
For more detailed usage, refer to usage.py.
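The outputs dictionary above also exposes the raw image and text embeddings, which can be reused for other tasks such as retrieval. Below is a minimal sketch of image-to-image similarity, assuming the model and processor are loaded as above; the second image path and the placeholder text prompt are hypothetical, and the embeddings are re-normalized defensively in case they are not already unit-norm.

import torch
from PIL import Image

# Hypothetical file names; replace with your own images.
paths = ['example_data/chest_X-ray.jpg', 'example_data/another_scan.jpg']
images = [Image.open(p).convert('RGB') for p in paths]

# A placeholder text prompt is passed because the snippet above always provides both modalities.
inputs = processor(
    images=images,
    text=['a medical image'],
    return_tensors='pt',
    padding=True,
    truncation=True
).to(model.device)

with torch.no_grad():
    feats = model(**inputs)['image_features']  # one embedding per input image

# Re-normalize so the dot product equals cosine similarity (may already be unit-norm).
feats = feats / feats.norm(dim=-1, keepdim=True)
similarity = (feats[0] @ feats[1]).item()
print(f'image-image cosine similarity: {similarity:.3f}')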
This project is based on OpenCLIP. We thank the authors for their open-source contribution and encourage users to cite their work when applicable.
If you use this code for your research or project, please cite:
@article{nie2025conceptclip,
title={{ConceptCLIP: Towards Trustworthy Medical AI via Concept-Enhanced Contrastive Language-Image Pre-training}},
author={Nie, Yuxiang and He, Sunan and Bie, Yequan and Wang, Yihui and Chen, Zhixuan and Yang, Shu and Chen, Hao},
journal={arXiv preprint arXiv:2501.15579},
year={2025}
}
For any questions, please contact Yuxiang Nie at [email protected].