This repository has been modified to apply Tracktor (tracking without bells and whistles) to cell image data.
Please refer to here for environment setup. We tested under Python 3.7.10, PyTorch 1.7.1, and torchvision 0.8.2.
- Clone and enter this repository:
  `git clone https://github.com/Hideo-Matsuda/LeukoTrack`
  `cd LeukoTrack`
- Install the Python 3.7 packages in a virtualenv:
  `pip install -r requirements.txt`
- Install PyTorch 1.7 and torchvision 0.8 from here.
- Install Tracktor:
  `pip install -e .`
- Training data should be placed in `src/data/train/`.
- Test data should be placed in `src/data/test/`.
The following is a description of the data format. Basically, it follows the data format of the MOT (multiple object tracking) challenge.
The internal structure of the data directory is as follows. `group_name` is the name of the group (e.g. group1, group2) into which the dataset was split for cross-validation, and `sample_name` is the sequence name (e.g. sample01, sample02).
train or test
├── group_name (e.g. group1)
│ ├── sample_name (e.g. sample01)
│ │ ├── det
│ │ │ └── det.txt
│ │ ├── img1
│ │ │ ├── 000001.png
│ │ │ └── 000002.png
│ │ ├── gt
│ │ │ └── gt.txt
│ │ └── seqinfo.ini
│ ├── sample_name (e.g. sample02)
├── group_name (e.g. group2)
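As a quick sanity check of this layout, a sketch like the following (the root paths are assumptions based on the directories named above) can verify that every sequence directory contains the expected files:

```python
# Minimal sketch: check that each group_name/sample_name directory under the
# data roots follows the layout above. Paths and the gt requirement are
# assumptions; test sequences may legitimately lack gt/gt.txt.
from pathlib import Path

def check_sequences(root):
    for seq in sorted(Path(root).glob("*/*")):  # group_name/sample_name
        if not seq.is_dir():
            continue
        expected = ["det/det.txt", "img1", "gt/gt.txt", "seqinfo.ini"]
        missing = [p for p in expected if not (seq / p).exists()]
        print(seq, "OK" if not missing else f"missing: {missing}")

check_sequences("src/data/train")
check_sequences("src/data/test")
```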
The common format for the detection file (`det.txt`), the ground-truth file (`gt.txt`), and the output file of the tracking results is as follows. One line corresponds to one bounding box (= detection) per frame and per target. The fields of each line are separated by commas and list the following information.
- frame
  - int
- target id
  - int
  - In the detection file, all values are -1 (since the correspondence of targets across frames is not taken into account).
- left x-coordinate of the bounding box
  - int or float
- top y-coordinate of the bounding box
  - int or float
- width of the bounding box
  - int or float
- height of the bounding box
  - int or float
- detection confidence
  - float
- class no.
  - In the detection file, all values are -1.
- visibility
  - how "visible" the target is in the tracking results file; for example, lower values are used for targets that are hard to see because of occlusion or because they lie at the edge of the image.
  - In the detection file, all values are -1.
For details, refer to Section 3.3 of the document linked here.
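For illustration, the comment lines in the sketch below show hypothetical example entries (all values are made up), and the function reads such a comma-separated file into Python dictionaries:

```python
# Minimal sketch (hypothetical values and path) for reading a MOT-style file
# such as det/det.txt or gt/gt.txt in the nine-column format described above.
# Example gt.txt line:  1,3,102.5,87.0,25.0,25.0,1.0,1,1.0
# Example det.txt line: 1,-1,102.5,87.0,25.0,25.0,0.97,-1,-1
import csv

def read_mot_file(path):
    """Return a list of dicts, one per bounding box."""
    boxes = []
    with open(path) as f:
        for row in csv.reader(f):
            boxes.append({
                "frame": int(row[0]),
                "target_id": int(float(row[1])),
                "x": float(row[2]), "y": float(row[3]),
                "w": float(row[4]), "h": float(row[5]),
                "confidence": float(row[6]),
                "class_no": int(float(row[7])),
                "visibility": float(row[8]),
            })
    return boxes

boxes = read_mot_file("src/data/train/group1/sample01/gt/gt.txt")  # hypothetical path
print(len(boxes), "bounding boxes")
```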
`seqinfo.ini` contains a total of seven pieces of information: `name`, `imgDir`, `frameRate`, `seqLength`, `imWidth`, `imHeight`, and `imExt`. The `name` is the sample name (e.g. sample01, sample02, ...), the `imgDir` is `img1`, and the `frameRate` is given in seconds per frame.
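As an illustration, a `seqinfo.ini` might look like the sketch below; the `[Sequence]` section header follows the usual MOTChallenge convention, and all values are made-up examples rather than settings taken from this repository:

```ini
[Sequence]
; hypothetical example values; frameRate is in seconds per frame as described above
name=sample01
imgDir=img1
frameRate=10
seqLength=600
imWidth=512
imHeight=512
imExt=.png
```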
- Create config file
  Create a yaml file that describes the training setup, such as `train_cfgs/sample.yaml`.
- Execution command
  If you want to use your own dataset with a different data-name structure, please modify `det/crest/crest_det_dataset`.
  `python src/det/train_detector.py -c train_cfgs/sample.yaml`
  - `-c`, `--cfg_file`: path to the config file
After training, the model is saved as `{epoch}.pth` in the directory specified in the configuration file (`train_results/...`). If the directory already exists, the model is saved in a new directory with the execution date and time appended to its name, to distinguish it from the existing one.
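As a sketch of how such a checkpoint could be loaded later for inspection, assuming the detector is torchvision's Faster R-CNN with a ResNet-50 FPN backbone as in the original Tracktor (the actual architecture, number of classes, and checkpoint contents are defined by this repository's training code):

```python
# Sketch only: load a saved detector checkpoint such as
# train_results/sample_results/30.pth. The architecture and num_classes
# are assumptions and may not match this repository's trainer.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(num_classes=2)  # background + cell (assumed)
checkpoint = torch.load("train_results/sample_results/30.pth", map_location="cpu")
# The trainer may save a raw state_dict or wrap it under a key such as "model".
state_dict = checkpoint.get("model", checkpoint)
model.load_state_dict(state_dict)
model.eval()
```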
Before testing (tracking), the targets are detected in all frames of the test sequences.
- Create config file
  Create a yaml file that lists the test sequences to be executed, such as `test_group/sample.yaml`.
- Execution command
  `python det/detect.py -d data/test -g test_group/sample.yaml -m train_results/sample_results/30.pth -o det_results/sample_results`
  - `-d`, `--dataroot`: path to the test data directory
  - `-g`, `--group`: config file with the test sequences (if omitted, all sequences in the `dataroot` are processed)
  - `-m`, `--model_path`: path to the trained model (`.pth`)
  - `-o`, `--outdir`: path to the directory where the detection results will be saved
After the detection, the detection results in MOT format are saved in the directory specified by `outdir`. If the directory already exists, a directory with the execution date and time appended to its name is used as the destination instead, to distinguish it from the existing one.
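To spot-check the saved detections, a small sketch like the following can draw the MOT-format boxes of one frame onto the corresponding image; both file paths are hypothetical examples and should be adjusted to the actual `outdir` layout:

```python
# Sketch: overlay the MOT-format detections of frame 1 on the matching img1 image.
# Both paths below are hypothetical examples.
import csv
from PIL import Image, ImageDraw

det_file = "det_results/sample_results/sample01.txt"         # hypothetical
frame_img = "src/data/test/group1/sample01/img1/000001.png"  # hypothetical

image = Image.open(frame_img).convert("RGB")
draw = ImageDraw.Draw(image)
with open(det_file) as f:
    for row in csv.reader(f):
        if int(row[0]) != 1:  # keep only frame 1
            continue
        x, y, w, h = map(float, row[2:6])
        draw.rectangle([x, y, x + w, y + h], outline=(255, 0, 0), width=2)
image.save("frame_000001_detections.png")
```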
- Download ReID model
  The ReID model trained on generic objects is in `src/reid_model/model-last.pth.tar`. This model was downloaded from here as per the instructions in Tracktor. If you want to use another ReID model, put it in `src/reid_model/`.
- Data preparation
  Copy the detection results for each sequence in `det/det_results/{group_name}` to `det/det.txt` in the data directory (`data/test/{group_name}/{sample_name}`); a sketch of this copy step is shown after this list.
- Create config file
  Create a yaml file that describes the test setup, such as `test_cfgs/sample.yaml`. The `dataset` entry in this config file (`test_cfgs/sample.yaml`) specifies the dataset to track.
  The format is `biodata_{group}`, where `group` is one of the following:
  - `group1`, `group2`, ... etc.: a group name (only the test sequences in that group)
  - `all`: the test sequences in the group plus the train sequences in the group
  If you want to apply another structure format, modify the code by imitating lines 50-52 in `src/track/tracktor/datasets/factory.py`.
- Specify dataset to track
  Change the test data names by group (see `class BiodataWrapper` in `src/track/tracktor/datasets/mot_wrapper.py`) so that they correspond to the `dataset` in the config file.
- Execution command
  If you want to use your own dataset with a different data-name structure, please modify `det/crest/crest_det_dataset`.
  `python src/scripts/test_tracktor.py -c test_cfgs/sample.yaml`
  - `-c`, `--cfg_file`: path to the config file
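The copy step from the Data preparation item above can be scripted, for example as in the sketch below; the per-sequence file layout under `det/det_results/{group_name}` is an assumption, so adjust the glob pattern to match the real output names:

```python
# Sketch: copy per-sequence detection results into each sequence's det/det.txt.
# Assumes det/det_results/{group_name}/{sample_name}.txt as the source layout,
# which may differ from the actual output of det/detect.py.
import shutil
from pathlib import Path

src_root = Path("det/det_results")
dst_root = Path("data/test")

for det_file in src_root.glob("*/*.txt"):  # {group_name}/{sample_name}.txt
    group, sample = det_file.parent.name, det_file.stem
    target = dst_root / group / sample / "det" / "det.txt"
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy(det_file, target)
    print(f"copied {det_file} -> {target}")
```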
After the tracking, the tracking results are saved in the directory specified by `output/tracker/{module_name}/{name}`. `{module_name}` and `{name}` are set in the config file.
If the directory already exists, a directory with the execution date and time appended to its name is used as the destination instead, to distinguish it from the existing one.
- Change the parameters in `src/data_augmentation_by_image_processing.py` and run:
  `python src/data_augmentation_by_image_processing.py`
- Change the parameters in `src/create_pseudo_labels.py` and run:
  `python src/create_pseudo_labels.py`