I2IWSB: Image-To-Image Wasserstein Schrödinger Bridge


This repository contains the code for "Label-free multiplex microscopic imaging by image-to-image translation overcoming the trade-off between pixel- and image-level similarity". This project was carried out in the Funahashi Lab at Keio University.

Overview

Our model performs image-to-image translation that converts three-channel z-stacked bright-field microscopy images into five-channel images of multiple subcellular components.

[Figure: image-to-image translation task]
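As a minimal sketch of the model's input/output interface, the translation maps a (3, H, W) bright-field stack to a (5, H, W) multi-channel stack. The function below is an illustrative stand-in, not the repository's actual API; it only makes the tensor shapes concrete:

```python
import numpy as np

def translate_brightfield(stack: np.ndarray) -> np.ndarray:
    """Illustrative stand-in for the trained network: maps a (3, H, W)
    z-stacked bright-field input to a (5, H, W) output whose channels
    correspond to Mito, AGP, RNA, ER, and DNA. A real model would run
    inference; here we broadcast a placeholder so shapes are explicit."""
    assert stack.ndim == 3 and stack.shape[0] == 3
    # Placeholder: average the three z-slices, then tile to 5 channels.
    mean_slice = stack.mean(axis=0, keepdims=True)  # (1, H, W)
    return np.repeat(mean_slice, 5, axis=0)         # (5, H, W)

out = translate_brightfield(np.zeros((3, 256, 256), dtype=np.float32))
print(out.shape)  # (5, 256, 256)
```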

The architecture of our model combines the I2SB [1] framework, which directly learns probabilistic transformations between images, with the cWGAN-GP [2] framework, which minimizes the Wasserstein distance between distributions.

[Figure: proposed model architecture]
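For intuition about the quantity cWGAN-GP minimizes: in one dimension the 1-Wasserstein (earth mover's) distance between two equal-size empirical samples has a closed form, the mean absolute difference of their sorted values. A small pure-Python illustration (not code from this repository):

```python
def wasserstein1_empirical(xs, ys):
    """1-Wasserstein distance between two equal-size 1-D empirical
    distributions: sort both samples and average the pairwise absolute
    differences of the order statistics."""
    assert len(xs) == len(ys) and len(xs) > 0
    xs_sorted, ys_sorted = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs_sorted, ys_sorted)) / len(xs)

# Shifting a distribution by a constant c gives a distance of |c|.
print(wasserstein1_empirical([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # 1.0
```

In higher dimensions no such closed form exists, which is why WGAN-style methods estimate the distance with a learned critic instead.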

Detailed information on this code is given in our paper, "Label-free multiplex microscopic imaging by image-to-image translation overcoming the trade-off between pixel- and image-level similarity".

Performance

Input bright-field images were captured as three slices spaced at ±4 μm along the z-axis. The right panels show the ground truth and the output images of the different models: Palette [3], guided-I2I [4], I2SB [1], and our model. Subcellular components (names of the channels that captured them): mitochondria (Mito); Golgi, plasma membrane, and actin cytoskeleton (AGP); nucleoli and cytoplasmic RNA (RNA); endoplasmic reticulum (ER); and nucleus (DNA). The models and channel names are described in detail in the Methods section of the paper.

[Figure: representative output images]

Note: We used the dataset cpg0000-jump-pilot [5], available from the Cell Painting Gallery [6] in the Registry of Open Data on AWS.

Requirements

We have confirmed that our code works correctly on Ubuntu 18.04, 20.04, and 22.04.

See requirements.txt for details.

QuickStart

  1. Download this repository by git clone.
% git clone [email protected]:funalab/I2IWSB.git
  2. Install requirements.
% cd I2IWSB
% python -m venv venv
% source ./venv/bin/activate
% pip install --upgrade pip
% pip install -r requirements.txt
  3. Download the learned model and part of the datasets.

    • This dataset is the minimum dataset required to run the demo.

    • NOTE: To download the entire dataset, see datasets/JUMP/README.md. The full dataset requires about 960 GB of storage, so please check in advance that you have enough space.

      % mkdir models
      ## Download and extract the learned model
      % wget -O models/i2iwsb.tar.gz "https://drive.usercontent.google.com/download?id=1klNecJvscby4DybfYEJeg8omuaRHQIeT&confirm=xxx"
      % tar zxvf models/i2iwsb.tar.gz -C models
      ## Download and extract the test data
      % wget -O datasets/demo/data.tar.gz "https://drive.usercontent.google.com/download?id=1xXsuKHGft_OpZxGzrthUIZUhYq20JYQW&confirm=xxx"
      % tar zxvf datasets/demo/data.tar.gz -C datasets/demo
  4. Run the model.

    • On GPU (specify the GPU device name):

      % python src/tools/custom/i2iwsb/test.py --conf_file confs/demo/test.cfg --device cuda:1 --model_dir models/i2iwsb --save_dir results/demo/i2iwsb/test
    • On CPU:

      % python src/tools/custom/i2iwsb/test.py --conf_file confs/demo/test.cfg --device cpu --model_dir models/i2iwsb --save_dir results/demo/i2iwsb/test

    The processing time of the above example is about 50 seconds on GPU (NVIDIA A100) and about 3,000 seconds on CPU.

How to train and run the model

  1. Train the model with the demo dataset.

    % python src/tools/custom/i2iwsb/train.py --conf_file confs/demo/trial/train_fold1.cfg  --device cuda:1
  2. Run inference with the trained model.

    % python src/tools/custom/i2iwsb/test.py --conf_file confs/demo/trial/test.cfg --device cuda:1
  3. (If needed) Perform the additional evaluations of model performance described in our paper.

    % python src/tools/template/evaluate_outputs.py --conf_file confs/demo/trial/test.cfg --device cuda:1

Acknowledgement

This research was funded by JST CREST, Japan (Grant Number JPMJCR2331) to Akira Funahashi.

References
