SAM-dPCR: Accurate and Generalist Biological Sample Quantification Leveraging the Zero-Shot Segment Anything Model
SAM-dPCR is a novel, open-source, self-supervised bioanalysis paradigm that integrates the zero-shot segment anything model (SAM) with digital PCR technology. This advanced system swiftly and accurately quantifies encapsulated biological samples, achieving over 97.7% accuracy in under 4 seconds per image. SAM-dPCR is adaptable to standard lab fluorescence microscopes and various experimental conditions, including droplet-based, microwell-based, and agarose-based microreactors, demonstrating broad application scope within molecular biology. This method enhances dPCR data visualization and analysis, offering high throughput, accuracy, and generalizability, thereby addressing bioanalytical needs in resource-limited settings. Below is a structured guide to implementing SAM-dPCR on a standard desktop computer to analyze dPCR images, speeding up the absolute quantification of biological samples.
The system requires no non-standard hardware and runs on a standard desktop computer. It can operate without a GPU, although using one reduces the expected run time.
This package is supported on macOS, Windows, and Linux. It has been tested on the systems listed below:
- macOS: Ventura 13.0 & Sonoma 14.2.1
- Windows: 11 Home 22H2
SAM-dPCR depends on the following Python environment:
- python>=3.8
- pytorch>=1.7
- torchvision>=0.8
- OpenCV-Python
- Pycocotools
- Matplotlib
- ONNX Runtime
- ONNX
Find more details in the Environment Setup for SAM-dPCR section, /samdpcr/samdpcr_requirements.txt, and /samdpcr/samdpcr.yml.
Expected run time:
- Without GPU: approximately 45 s per image
- With GPU: <4 s per image
You can create a conda environment for SAM-dPCR by following the steps below:
- Locate the yml file: Find the conda samdpcr.yml file in the project directory. This file contains the necessary dependencies for the environment.
- Create the environment: Use the following command in your terminal to create a new conda environment:
  conda env create -f samdpcr.yml
- Activate the environment: Once the environment is created, you can activate it using:
  conda activate samdpcr
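After activating the environment, a quick import check can confirm that the core dependencies listed above are available (a minimal sketch; it simply imports the packages and prints versions):

```python
# Sanity check: import the dependencies listed above and print their versions
import torch
import torchvision
import cv2
import matplotlib
import onnx
import onnxruntime
import pycocotools

print("PyTorch:", torch.__version__)
print("Torchvision:", torchvision.__version__)
print("OpenCV:", cv2.__version__)
print("ONNX Runtime:", onnxruntime.__version__)
print("CUDA available:", torch.cuda.is_available())
```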
Follow the steps below to install the requirements:
- Locate the requirements file: Find the samdpcr_requirements.txt file in the project directory. This file contains the necessary packages for the project.
- Install the requirements: Use the following command in your terminal to install the requirements:
  pip install -r samdpcr_requirements.txt
- Download the SAM checkpoint: Make sure you have downloaded sam_vit_h_4b8939.pth and placed it under the /samdpcr folder. You can download it from here. If you want to use your own checkpoint, modify the checkpoint path in main.py on line 8 and change the corresponding model_type on line 9 (see the sketch after this list).
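If you do swap in your own checkpoint, it helps to know how the checkpoint path, model type, and device typically feed the official segment_anything API. The block below is a minimal sketch under that assumption, not the exact contents of main.py; the variable names are illustrative:

```python
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Illustrative configuration mirroring the settings described above
checkpoint_path = "sam_vit_h_4b8939.pth"  # line 8: path to the downloaded SAM checkpoint
model_type = "vit_h"                      # line 9: must match the checkpoint variant
device = "cpu"                            # line 10: change to "cuda" for GPU acceleration

# Load the zero-shot SAM model and build an automatic mask generator
sam = sam_model_registry[model_type](checkpoint=checkpoint_path)
sam.to(device=device)
mask_generator = SamAutomaticMaskGenerator(sam)
```

The model_type must correspond to the checkpoint you downloaded (for example, vit_h for sam_vit_h_4b8939.pth).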
Please ensure that you have the correct permissions to install packages on your system. If you encounter any issues, you may need to use sudo or consult your system's documentation.
Follow the steps below to set up CUDA to run the code faster:
- Check your system compatibility: Ensure that your system has a CUDA-compatible GPU. You can check this on the NVIDIA website.
- Install PyTorch for CUDA: If you wish to use GPU acceleration, install the appropriate version of PyTorch for CUDA, following the instructions on the official website.
- Modify the code: In main.py, change line 10 from device = "cpu" to device = "cuda".
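If you prefer not to hard-code the device, a common alternative (a sketch, not the project's shipped code) is to select it automatically so the same script runs on machines with or without a GPU:

```python
import torch

# Hypothetical replacement for the hard-coded device on line 10 of main.py:
# use the GPU when CUDA is available, otherwise fall back to the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"
```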
Please note that using CUDA is optional for this project. If you do not have a CUDA-compatible GPU or do not wish to use GPU acceleration, you can use the CPU version of our project.
Follow the steps below to run the code:
- Locate the Python file: Find the main.py file in the project directory:
  cd samdpcr
- Modify the input and output directories: In main.py, replace the values of inputDirectory and outputDirectory on lines 16 and 17 with your desired input and output paths. The pre-set values are set to a demo path.
- Run the code: Use the following command in your terminal to run the code:
  python main.py
- Special note: When running samples of different concentrations, using the S (saturation) channel is preferred over the V (value) channel. To do this, change line 81 in visualize.py from:
  imgSatu = hsv_targetImage[:, :, 2]
  to:
  imgSatu = hsv_targetImage[:, :, 1]
  In all other cases, the default setting imgSatu = hsv_targetImage[:, :, 2] works better (see the sketch after this note).
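For context, this setting selects a single channel of an HSV-converted image: index 1 is saturation (S) and index 2 is value (V). The sketch below shows how such a selection typically looks with OpenCV; the image path is hypothetical and this is not the full visualize.py:

```python
import cv2

# Load a fluorescence image (hypothetical path) and convert it from BGR to HSV
targetImage = cv2.imread("data/test/files/example.png")
hsv_targetImage = cv2.cvtColor(targetImage, cv2.COLOR_BGR2HSV)

# Default setting: index 2 selects the V (value) channel
imgSatu = hsv_targetImage[:, :, 2]

# Alternative for samples of different concentrations: index 1 selects the S (saturation) channel
imgSatu = hsv_targetImage[:, :, 1]
```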
To compare against SAM-dPCR's performance, we also provide installation and operation instructions for Deep-qGFP.
You can create a conda environment for Deep-qGFP by following the steps below:
- Locate the yml file: Find the conda deepqgfp.yml file in the project directory. This file contains the necessary dependencies for the environment.
- Create the environment: Use the following command in your terminal to create a new conda environment:
  conda env create -f deepqgfp.yml
- Activate the environment: Once the environment is created, you can activate it using:
  conda activate deepqgfp
Follow the steps below to install the requirements:
- Locate the requirements file: Find the deepqgfp_requirements.txt file in the project directory. This file contains the necessary packages for the project.
- Install the requirements: Use the following command in your terminal to install the requirements:
  pip install -r deepqgfp_requirements.txt
Please ensure that you have the correct permissions to install packages on your system. If you encounter any issues, you may need to use sudo or consult your system's documentation.
Follow the steps below to set up CUDA:
- Check your system compatibility: Ensure that your system has a CUDA-compatible GPU. You can check this on the NVIDIA website.
- Install PyTorch for CUDA: If you wish to use GPU acceleration, install the appropriate version of PyTorch for CUDA, following the instructions on the official website.
- Modify the code: Change the device setting on line 64 of detect_LabelsOutput.py (cuda device, i.e. 0 or 0,1,2,3, or cpu), or add --device 'cuda device, i.e. 0 or 0,1,2,3 or cpu' to the command line.
Please note that using CUDA is optional for this project. If you do not have a CUDA-compatible GPU or do not wish to use GPU acceleration, you can use the CPU version of our project.
Follow the steps below to run the code:
- Locate the Python file: Find the detect_LabelsOutput.py file in the project directory:
  cd deepqgfp
- (Optional) Modify the input directories: In detect_LabelsOutput.py, replace the values of source and default on lines 58 and 248 with your desired input path. The pre-set values are set to a demo path. The default output path is /runs/detect/exp(number); the number is automatically incremented to avoid name conflicts. If you want to change the output path, replace the values of project, name, and default on lines 75, 76, 265, and 266 with your desired output path (project = output path, name = output folder under the output path; results are saved to project/name; see the sketch after this list).
- Run the code: If you followed step 2, use the following command in your terminal to run the code:
  python detect_LabelsOutput.py
  Otherwise, specify the options on the command line:
  python detect_LabelsOutput.py --source input_directory --project output_path --name output_folder_name --device 'cuda device, i.e. 0 or 0,1,2,3 or cpu'
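For reference, the source, project, name, and device settings edited above are typically exposed as argparse defaults in detection scripts of this kind, which is also why the same names work as command-line flags. The block below is a minimal sketch under that assumption, not the actual contents of detect_LabelsOutput.py; the default values are illustrative:

```python
import argparse

# Illustrative argument definitions mirroring the options described above
parser = argparse.ArgumentParser()
parser.add_argument('--source', type=str, default='data/test/files',
                    help='input image directory')
parser.add_argument('--project', type=str, default='runs/detect',
                    help='output path; results are saved to project/name')
parser.add_argument('--name', type=str, default='exp',
                    help='output folder under the output path')
parser.add_argument('--device', type=str, default='',
                    help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
opt = parser.parse_args()
print(opt)
```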
- For SAM-dPCR: Demo images have been placed in samdpcr/data/test/files. After installation, simply run python main.py; the output files will be written to both samdpcr/data/test/files and samdpcr/data/test/outputfiles.
- For Deep-qGFP: The same demo images have been placed in deepqgfp/data/test/files. After installation, simply run python detect_LabelsOutput.py; the output files will be written to deepqgfp/runs/detect/exp.
- Code developer: Shanhang Luo & Changran Xu
- File developer: Changran Xu & Yingqi Fu