This is the code for Label-free multiplex microscopic imaging by image-to-image translation overcoming the trade-off between pixel- and image-level similarity. This project is carried out in Funahashi Lab. at Keio University.
Our model performs image-to-image translation that converts z-stacked bright-field microscopy images with 3 channels into images of multiple subcellular components with 5 channels.
The architecture of our model combines the I2SB [1] framework, which directly learns probabilistic transformations between images, with the cWGAN-GP [2] framework, which minimizes the Wasserstein distance between the output and target distributions.
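As a minimal sketch of the cWGAN-GP side of this objective (our own illustration in PyTorch, not code from this repository), the gradient penalty that pushes the critic toward the 1-Lipschitz constraint underlying the Wasserstein distance can be written as:

```python
import torch

def gradient_penalty(critic, real, fake):
    """WGAN-GP penalty: drive the critic's gradient norm toward 1
    on random interpolates between real and generated images."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(
        outputs=critic(x_hat).sum(), inputs=x_hat, create_graph=True
    )[0]
    return ((grad.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()

# Toy critic for illustration only; the real critic is a conditional CNN.
critic = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(5 * 8 * 8, 1))
real = torch.rand(4, 5, 8, 8)  # e.g. 5 fluorescence channels
fake = torch.rand(4, 5, 8, 8)
gp = gradient_penalty(critic, real, fake)
# The critic loss would then be:
# -(critic(real).mean() - critic(fake).mean()) + lambda_gp * gp
```

Here `lambda_gp` is the usual penalty weight (10 in the original WGAN-GP paper); see the configuration files in this repository for the values we actually use.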
Detailed information on this code is given in our paper, "Label-free multiplex microscopic imaging by image-to-image translation overcoming the trade-off between pixel- and image-level similarity."
Input bright-field images were captured as three slices spaced at ±4 μm along the z-axis. The right panel shows the output images of different models: the ground truth, Palette [3], guided-I2I [4], I2SB [1], and our model. Subcellular components (names of the channels that captured them): mitochondria (Mito); Golgi, plasma membrane, and actin cytoskeleton (AGP); nucleoli and cytoplasmic RNA (RNA); endoplasmic reticulum (ER); and nucleus (DNA). The models and channel names are described in detail in the Methods section of the paper.
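For concreteness, the input/output layout described above can be sketched as follows (a hypothetical illustration: the channel abbreviations follow the caption above, but the 256×256 spatial size is a placeholder, not the resolution used in the paper):

```python
import numpy as np

# 3 bright-field z-slices in, 5 fluorescence channels out.
INPUT_Z_OFFSETS_UM = (-4, 0, +4)                       # z-stack spacing in μm
OUTPUT_CHANNELS = ("Mito", "AGP", "RNA", "ER", "DNA")  # predicted components

bright_field = np.zeros((len(INPUT_Z_OFFSETS_UM), 256, 256), np.float32)  # model input
multiplexed = np.zeros((len(OUTPUT_CHANNELS), 256, 256), np.float32)      # model output
print(bright_field.shape, "->", multiplexed.shape)
```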
Note: We used the dataset cpg0000-jump-pilot [5], available from the Cell Painting Gallery [6] in the Registry of Open Data on AWS.
## Requirements

We have confirmed that our code works correctly on Ubuntu 18.04, 20.04, and 22.04.

- Python 3.10.11
- PyTorch 2.0.0
- Matplotlib 3.7.1
- Seaborn 0.13.2
- NumPy 1.24.2
- scikit-image 0.20.0
- SciPy 1.9.1
- Pandas 2.0.0
- scikit-learn 1.2.2
- opencv-python 4.8.1.78
- torch_ema 0.3
- prefetch_generator 1.0.3

See `requirements.txt` for details.
## QuickStart

- Download this repository with `git clone`.

  ```sh
  % git clone git@github.com:funalab/I2IWSB.git
  ```

- Install requirements.

  ```sh
  % cd I2IWSB
  % python -m venv venv
  % source ./venv/bin/activate
  % pip install --upgrade pip
  % pip install -r requirements.txt
  ```
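After installing, you can optionally check that PyTorch sees your GPU before running the demo (a generic sanity check of our own, not a script shipped in this repository; the resulting device string is what the `--device` flags below expect):

```python
import torch

# Prefer cuda:1 when a second GPU exists (as in the example commands
# below), otherwise fall back to cuda:0 or CPU.
if torch.cuda.device_count() > 1:
    device = "cuda:1"
elif torch.cuda.is_available():
    device = "cuda:0"
else:
    device = "cpu"
print(f"PyTorch {torch.__version__}, using device: {device}")
```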
- Download the learned model and a part of the datasets.
  This is the minimum dataset required to run the demo.
  NOTE: To download the entire dataset, see `datasets/JUMP/README.md`. The full dataset requires about 960 GB of storage, so please check in advance whether you have enough space.

  ```sh
  % mkdir models
  ## Download and extract the learned model
  % wget -O models/i2iwsb.tar.gz "https://drive.usercontent.google.com/download?id=1klNecJvscby4DybfYEJeg8omuaRHQIeT&confirm=xxx"
  % tar zxvf models/i2iwsb.tar.gz -C models
  ## Download and extract the test data
  % wget -O datasets/demo/data.tar.gz "https://drive.usercontent.google.com/download?id=1xXsuKHGft_OpZxGzrthUIZUhYq20JYQW&confirm=xxx"
  % tar zxvf datasets/demo/data.tar.gz -C datasets/demo
  ```
- Run the model.

  On GPU (specify the GPU device name):

  ```sh
  % python src/tools/custom/i2iwsb/test.py --conf_file confs/demo/test.cfg --device cuda:1 --model_dir models/i2iwsb --save_dir results/demo/i2iwsb/test
  ```

  On CPU:

  ```sh
  % python src/tools/custom/i2iwsb/test.py --conf_file confs/demo/test.cfg --device cpu --model_dir models/i2iwsb --save_dir results/demo/i2iwsb/test
  ```

  The processing time of the above example is about 50 sec on GPU (NVIDIA A100) and about 3,000 sec on CPU.
## How to train and run the model

- Train the model with the demo dataset.

  ```sh
  % python src/tools/custom/i2iwsb/train.py --conf_file confs/demo/trial/train_fold1.cfg --device cuda:1
  ```

- Run the trained model for inference.

  ```sh
  % python src/tools/custom/i2iwsb/test.py --conf_file confs/demo/trial/test.cfg --device cuda:1
  ```

- (If needed) Perform the additional evaluations of model performance described in our paper.

  ```sh
  % python src/tools/template/evaluate_outputs.py --conf_file confs/demo/trial/test.cfg --device cuda:1
  ```
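Among the pixel-level metrics compared in our paper are PSNR and SSIM. As a self-contained illustration of how to compute them with scikit-image (the synthetic arrays below stand in for one predicted channel and its ground truth; in practice you would load these from the `--save_dir` used above):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Synthetic stand-ins: "prediction" = ground truth plus mild Gaussian noise.
rng = np.random.default_rng(0)
gt = rng.random((64, 64)).astype(np.float32)
pred = np.clip(gt + 0.05 * rng.standard_normal((64, 64)).astype(np.float32), 0, 1)

# data_range must match the image intensity range (here [0, 1]).
psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
ssim = structural_similarity(gt, pred, data_range=1.0)
print(f"PSNR={psnr:.2f} dB, SSIM={ssim:.3f}")
```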
## Acknowledgements

This research was funded by JST CREST, Japan (Grant Number JPMJCR2331) to Akira Funahashi.
## References

- [1] Liu, Guan-Horng, et al. "I2SB: Image-to-Image Schrödinger Bridge." arXiv, arXiv:2302.05872 (2023).
- [2] Cross-Zamirski, Jan Oscar, et al. "Label-free prediction of cell painting from brightfield images." Scientific Reports, 12, 10001 (2022).
- [3] Saharia, Chitwan, et al. "Palette: Image-to-Image Diffusion Models." In ACM SIGGRAPH 2022 Conference Proceedings, 1-10 (2022).
- [4] Cross-Zamirski, Jan Oscar, et al. "Class-Guided Image-to-Image Diffusion: Cell Painting from Brightfield Images with Class Labels." In Proceedings of the IEEE/CVF International Conference on Computer Vision, 3800-3809 (2023).
- [5] Chandrasekaran, Srinivas Niranj, et al. "Three million images and morphological profiles of cells treated with matched chemical and genetic perturbations." Nature Methods, 21, 1114-1121 (2024).
- [6] Weisbart, Erin, et al. "Cell Painting Gallery: an open resource for image-based profiling." Nature Methods, 21, 1775-1777 (2024).