.
├── CNN
│ ├── ConfusionMiatrix.png
│ ├── code
│ ├── training_log
│ ├── model.h5
│ └── model.tflite
├── Object_detection
│ ├── annotation
│ ├── obj_results.xlsx
│ └── scripts
├── test_results.xlsx
└── README.md
We used two different approaches in the deep-learning part: a convolutional neural network (CNN), which is simpler and faster, and an object-detection model based on ResNet, which provides more information.
For CNN
- Install a Python environment (Anaconda is recommended)
- Install TensorFlow
pip install --upgrade tensorflow
For Object Detection
- Install TensorFlow Object Detection API
git clone https://github.com/tensorflow/models.git
- Install Protobuf from the protoc repository
- Install COCO API
pip install cython
pip install git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI
On Windows, the Visual C++ 2015 build tools must be installed and on your PATH.
- Install the Object Detection API
cp object_detection/packages/tf2/setup.py .
python -m pip install .
- Install LabelImg
pip install labelImg
CNN
- Preparing the dataset.
The dataset for this study is placed in the "Image/TrainingImages" directory. It is divided into a training set and a validation set, and within each subset the images are grouped into separate folders by label. You can obtain a subset of our dataset from our data repository.
- Run the Python code for training.
The code for training is "/code/main.py". You need to change the "Paths" in the Python code to the directories on your computer. This step takes approximately 60 minutes (depending on your hardware and settings such as the number of steps and the input size). More information can be found on the official TensorFlow website. You can use TensorBoard to monitor the training process:
Run the command:
tensorboard --logdir=your_log_dir
Then open the web app at http://localhost:6006 in your browser.
The TensorBoard callback is added at line 158 of main.py:
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
Object Detection
You can download our dataset and trained model from our data repository.
- Preparing the dataset.
- Label the images with the LabelImg software.
- Run partition_dataset.py in the "/scripts/preprocess/" folder to split the dataset into a training set and a test set:
python partition_dataset.py -x -i [PATH_TO_IMAGES_FOLDER] -r 0.1
The dataset used in this study is placed in the "TrainingImage" directory.
- Create a label map. A label map is a file that lists every label in your dataset together with its id, e.g.:
item {
    id: 1
    name: 'cat'
}
item {
    id: 2
    name: 'dog'
}
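Because the format is this simple, a label map can be read without pulling in the Object Detection API. The regex-based reader below is a stdlib-only sketch that assumes exactly the flat item { id: ... name: '...' } shape shown above:

```python
import re

def parse_label_map(text):
    """Parse a label_map.pbtxt-style string into a {name: id} dict.
    Handles only the flat item { id: N name: '...' } form."""
    mapping = {}
    for m in re.finditer(r"item\s*\{[^}]*\}", text):
        item = m.group(0)
        id_m = re.search(r"id\s*:\s*(\d+)", item)
        name_m = re.search(r"name\s*:\s*'([^']+)'", item)
        if id_m and name_m:
            mapping[name_m.group(1)] = int(id_m.group(1))
    return mapping
```

For full protobuf parsing, the API's own `label_map_util` module is the canonical tool.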
The label map for this study is "/annotation/label_map.pbtxt".
- Create TensorFlow Records
# Create train data:
python generate_tfrecord.py -x [PATH_TO_IMAGES_FOLDER]/train -l [PATH_TO_ANNOTATIONS_FOLDER]/label_map.pbtxt -o [PATH_TO_ANNOTATIONS_FOLDER]/train.record
# Create test data:
python generate_tfrecord.py -x [PATH_TO_IMAGES_FOLDER]/test -l [PATH_TO_ANNOTATIONS_FOLDER]/label_map.pbtxt -o [PATH_TO_ANNOTATIONS_FOLDER]/test.record
You can use the script "/scripts/preprocess/generate_tfrecord.py" with the commands above to do this.
- Start Training
- Create a new folder for the model, then copy and paste the "pipeline.config" file from a pre-trained model into the new folder.
- Edit the config file.
- Run "scripts/model_main_tf2.py" with the following command:
# This is an example of starting the training process
python model_main_tf2.py --model_dir=models/my_ssd_resnet50_v1_fpn --pipeline_config_path=models/my_ssd_resnet50_v1_fpn/pipeline.config
This step takes approximately 10 hours (depending on your hardware and settings such as the number of steps and the model you choose). TensorFlow 2 provides many pre-trained models in the TensorFlow 2 Detection Model Zoo.
More details about training a custom object-detection model with TensorFlow can be found here. To monitor your training process, run TensorBoard in your model directory.
Testing
You can easily test the performance of your model with the scripts provided in this repo. The only thing you need to do is edit the "PATH" variables in the code, following the comments. The test results we used for publication are in the test_results.xlsx file.
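If you want to tally per-class results from your own test runs (as in the confusion matrix stored in the CNN folder), a few lines of plain Python suffice; the function and labels below are placeholders, not the repo's test code:

```python
from collections import defaultdict

def confusion_counts(y_true, y_pred):
    """Count (true_label, predicted_label) pairs and compute accuracy."""
    counts = defaultdict(int)
    correct = 0
    for t, p in zip(y_true, y_pred):
        counts[(t, p)] += 1
        if t == p:
            correct += 1
    accuracy = correct / len(y_true) if y_true else 0.0
    return dict(counts), accuracy
```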
CNN
The test code for the CNN model is "/CNN/code/test.py".
Object detection
The test code can be found in "/Object_detection/script/testmodel.py".