This repository provides the code for the skin segmentation methods investigated in [1]: Mask R-CNN, U-Net, a fully connected network, and a MATLAB thresholding script. The algorithms were primarily developed to perform abdominal skin segmentation on RGB images of trauma patients, as part of ongoing research toward an autonomous robot for trauma assessment [2][3].
Robotic abdominal ultrasound system with the camera view of the abdominal region, and the corresponding segmented skin mask.
The dataset consists of 1,400 abdomen images retrieved from Google Images and subsequently segmented by hand. The images were selected to preserve the diversity of different ethnic groups, preventing indirect racial biases in segmentation algorithms: 700 images depict darker-skinned people, including African, Indian, and Hispanic groups, and 700 images depict lighter-skinned people, such as Caucasian and Asian groups. A total of 400 images, split equally between the light and dark categories, depict people with higher body mass indices. Variations between individuals, such as hair and tattoo coverage, as well as external variations like shadows, were also accounted for during dataset preparation. All images are 227×227 pixels. Skin pixels make up 66% of the entire pixel data, with a mean of 54.42% per image and a corresponding standard deviation of 15%.
Sample pictures of the abdominal dataset depicting different skin color intensities and body complexions.
Improved U-Net segmentation results when training with our proposed abdominal dataset. From left to right in columns: original image, ground truth, segmentation with the abdominal dataset, and segmentation without the abdominal dataset.
The complete skin datasets, containing the original images along with their masks (HGR, TDSD, Schmugge, Pratheepan, VDM, SFA, FSD, and our abdominal dataset), can be downloaded from the following link. All datasets have been sorted into the same format and can be used directly with the code. If you are only interested in the abdominal dataset, you can download it from here. You can also download and unzip the datasets from the terminal:
$ pip install gdown
$ gdown "https://drive.google.com/uc?id=1xzYn4Rat4z2LA5zQW7JTfvA1bosz7oM-"
$ tar -xzvf All_Skin_Datasets.tar.gz
If you want to download the abdominal dataset separately:
$ gdown "https://drive.google.com/uc?id=1MnBW_OJqrTmzwc23YI5NK_y_l4zk9JGJ"
$ tar -xzvf Abdomen_Only_Dataset.tar.gz
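Once downloaded and unzipped, you can sanity-check the dataset statistics quoted earlier directly from the masks. Below is a minimal sketch, assuming binary masks where white pixels mark skin; the path is illustrative, so point it at any skin_masks folder:

```python
import numpy as np
from PIL import Image
from pathlib import Path

# Illustrative path; adjust to wherever you unzipped the dataset.
mask_dir = Path("All_Skin_Datasets/Dataset8_Abdomen/train/skin_masks")

# Fraction of skin (white) pixels in each mask.
fractions = np.array([
    (np.array(Image.open(p).convert("L")) > 127).mean()
    for p in sorted(mask_dir.glob("*.png"))
])

print(f"images: {len(fractions)}")
print(f"mean skin fraction per image: {fractions.mean():.2%}")
print(f"std of skin fraction: {fractions.std():.2%}")
```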
The folders are organized as follows:
/All_Skin_Datasets/
├── Dataset1_HGR/
│   ├── original_images/
│   │   ├── <uniqueName1>.jpg
│   │   .
│   │   └── <uniqueNameK>.jpg
│   └── skin_masks/
│       ├── <uniqueName1>.png
│       .
│       └── <uniqueNameK>.png
├── Dataset2_TDSD/
├── Dataset3_Schmugge/
.
.
└── Dataset8_Abdomen/
    ├── test/
    │   ├── original_images/
    │   │   ├── <uniqueName1>.jpg
    │   │   .
    │   │   └── <uniqueNameK>.jpg
    │   └── skin_masks/
    │       ├── <uniqueName1>.png
    │       .
    │       └── <uniqueNameK>.png
    └── train/
        ├── original_images/
        │   ├── <uniqueName1>.jpg
        │   .
        │   └── <uniqueNameK>.jpg
        └── skin_masks/
            ├── <uniqueName1>.png
            .
            └── <uniqueNameK>.png
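Because every dataset follows this same layout, pairing images with their masks takes only a few lines. Here is a minimal sketch, assuming (as the tree shows) that each image and its mask share the same filename stem:

```python
from pathlib import Path
from PIL import Image

def load_pairs(dataset_dir):
    """Yield (image, mask) PIL pairs from one dataset folder."""
    dataset_dir = Path(dataset_dir)
    for img_path in sorted((dataset_dir / "original_images").glob("*.jpg")):
        mask_path = dataset_dir / "skin_masks" / (img_path.stem + ".png")
        yield Image.open(img_path), Image.open(mask_path)

# Example: iterate over the abdominal training split.
for image, mask in load_pairs("All_Skin_Datasets/Dataset8_Abdomen/train"):
    assert image.size == mask.size
```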
The code requires Python 3 to run. To install it:
$ sudo apt-get update
$ sudo apt-get install python3.6
The U-Net and fully connected network are implemented as Jupyter notebooks, so to run them you need Jupyter installed:
$ pip install jupyterlab
Next, you need to install TensorFlow and Keras (we recommend the GPU build of TensorFlow; training on CPU can take days). The following steps install the needed dependencies:
$ pip install --upgrade tensorflow
$ pip install numpy scipy
$ pip install scikit-learn
$ pip install Pillow
$ pip install h5py
$ pip install keras
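Once TensorFlow is installed, a quick way to confirm that your GPU is actually visible (this uses the TensorFlow 2 API; TensorFlow 1.x exposes tf.test.is_gpu_available() instead):

```python
import tensorflow as tf

print(tf.__version__)
# An empty list here means TensorFlow only sees the CPU.
print("GPUs:", tf.config.list_physical_devices("GPU"))
```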
Some other dependencies needed to run the code, which are not part of the Python standard library:
$ pip install six matplotlib scikit-image opencv-python imageio Shapely
$ pip install imgaug
$ pip install talos
$ pip install tqdm
$ pip install Cython
$ pip install more-itertools
Finally clone the repository:
$ git clone --recursive https://github.com/MRE-Lab-UMD/abd-skin-segmentation.git
The `Mask_RCNN` directory contains a README file that provides ample explanations and examples to guide you through the code. It covers running the different parts of the code and extending the algorithm to other applications. Before training or running this network, make sure the images are named according to the COCO format. To help you with that, we provide the MATLAB script `coco_data_generate.m` in the `miscellaneous_files` directory. If you want to augment your data, you can use the `augment.m` script in the same directory.
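For intuition, the sketch below shows the idea behind COCO-style naming (zero-padded 12-digit numeric IDs) in Python; it is only an illustration of the convention with hypothetical folder names, not a substitute for `coco_data_generate.m`:

```python
from pathlib import Path
import shutil

src = Path("my_images")       # hypothetical input folder
dst = Path("my_images_coco")  # hypothetical output folder
dst.mkdir(exist_ok=True)

# COCO images use zero-padded 12-digit numeric filenames, e.g. 000000000001.jpg
for i, img in enumerate(sorted(src.glob("*.jpg")), start=1):
    shutil.copy(img, dst / f"{i:012d}.jpg")
```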
The U-Net notebook in the `UNET and Features` folder provides clear instructions and comments for each section and subsection. Just follow the guidelines to train your own network, and make sure you replace our paths with yours. The code automatically saves your model as a .h5 file, which you can later load for further use. The `U-Net - Parameter Optimization` notebook contains the same code as U-Net, but trains the network over a set of hyperparameters to find the optimal ones.
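Reloading the saved model later is straightforward with Keras. Below is a minimal inference sketch; the filename `unet_skin.h5`, the image name, and the 227×227 input size are placeholders, so match whatever your notebook actually saved and trained on:

```python
import numpy as np
from PIL import Image
from keras.models import load_model

model = load_model("unet_skin.h5")  # placeholder name for your saved model

# Preprocess one image; the 227x227 input size is an assumption.
img = np.array(Image.open("abdomen.jpg").resize((227, 227))) / 255.0

pred = model.predict(img[np.newaxis, ...])[0]           # add, then drop batch dim
mask = ((pred.squeeze() > 0.5) * 255).astype(np.uint8)  # binarize the prediction
Image.fromarray(mask).save("predicted_mask.png")
```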
The Features notebook, also in the `UNET and Features` folder, follows the same structure: clear instructions and comments on each section and subsection, with your paths substituted for ours and the trained model automatically saved as a .h5 file for later use. We recommend reading the entire set of instructions once before running any section, as some sections take a while to complete; that way you only run the parts you actually need.
The MATLAB thresholding script is in the `Thresholding` directory. It is a function, so you can write your own script that calls it as needed. It takes an RGB image as input and returns a thresholded (black-and-white) image.
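For a sense of what such a function does, here is a rough Python/OpenCV analogue that thresholds in HSV space; the bounds below are illustrative placeholders, not the values used in the MATLAB script:

```python
import cv2
import numpy as np

def threshold_skin(bgr):
    """Return a binary skin mask from an HSV range check (illustrative bounds)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)     # placeholder lower bound
    upper = np.array([25, 255, 255], dtype=np.uint8)  # placeholder upper bound
    return cv2.inRange(hsv, lower, upper)

mask = threshold_skin(cv2.imread("abdomen.jpg"))
cv2.imwrite("thresholded.png", mask)
```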
To run real-time segmentation with a trained U-Net model, go to the `Real Time Skin Segmentation` directory in your terminal and type:
$ python UNET-live.py
Make sure you have set the path to the trained model correctly in the code and installed all required dependencies. Press the ESC key to stop the code and close the camera window.
Video of Anirudh demonstrating the real-time skin segmentation using U-Net. The algorithm works with multiple people in the same view, too.
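Under the hood, the script follows the standard OpenCV capture loop. Here is a stripped-down sketch of the pattern; the model path, input size, and preprocessing are assumptions, so see `UNET-live.py` for the actual pipeline:

```python
import cv2
import numpy as np
from keras.models import load_model

model = load_model("unet_skin.h5")  # placeholder path to your trained model
cap = cv2.VideoCapture(0)           # default camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    small = cv2.resize(frame, (227, 227)) / 255.0   # assumed input size
    pred = model.predict(small[np.newaxis, ...])[0]
    mask = ((pred.squeeze() > 0.5) * 255).astype(np.uint8)
    cv2.imshow("skin mask", cv2.resize(mask, frame.shape[1::-1]))
    if cv2.waitKey(1) == 27:  # ESC stops the loop, as in UNET-live.py
        break

cap.release()
cv2.destroyAllWindows()
```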
Our trained models are provided in the `Models` directory, which contains the U-Net and Features models. The thresholding "model" simply consists of the threshold values defined in the aforementioned MATLAB script. The Mask R-CNN model is too large to upload to this repository, so you can download it from your terminal:
$ gdown "https://drive.google.com/uc?id=1ovteKdgCMAuu-N1-C9pyQyvj_al9s_k3"
If you have used the abdominal dataset, or any of our trained models, kindly cite the associated paper:
@inproceedings{topiwala2019bibe,
  author    = {A. Topiwala and L. Al-Zogbi and T. Fleiter and A. Krieger},
  title     = {{Adaptation and Evaluation of Deep Learning Techniques for Skin Segmentation on Novel Abdominal Dataset}},
  booktitle = {BIBE 2019; International Conference on Biological Information and Biomedical Engineering},
  pages     = {752--759},
  year      = {2019}
}
We can't guarantee that the code will run perfectly on your machine (it should, but you never know). If you have any problems, questions, or suggestions, please feel free to contact the authors by email; we are pretty responsive and friendly.
- Anirudh Topiwala: [email protected]
- Lydia Zoghbi: [email protected]
We hope to bring the best to the community! Cheers ❤️🍻!