- Prepare training data
  - Download the CelebAMask-HQ dataset
- Preprocessing: stack all of a person's per-part mask images into a single label map
  - python prepropess_data.py
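The stacking step can be sketched roughly as below. The part names, file layout, and label ordering here are illustrative assumptions; the actual logic lives in prepropess_data.py.

```python
import numpy as np

# Hypothetical part list; CelebAMask-HQ ships one binary mask per facial
# part, and the real preprocessing script defines its own ordering.
PARTS = ["skin", "nose", "l_eye", "r_eye"]  # truncated for illustration

def stack_masks(part_masks):
    """Merge per-part binary masks (H, W) into one integer label map.

    Label 0 is background; part i gets label i + 1.  Later parts
    overwrite earlier ones where the masks overlap.
    """
    h, w = next(iter(part_masks.values())).shape
    label_map = np.zeros((h, w), dtype=np.uint8)
    for idx, name in enumerate(PARTS):
        mask = part_masks.get(name)
        if mask is not None:
            label_map[mask > 0] = idx + 1
    return label_map

# Toy example: a 2x2 image where the nose mask overlaps the skin mask.
masks = {"skin": np.array([[1, 1], [0, 0]]),
         "nose": np.array([[0, 1], [0, 0]])}
print(stack_masks(masks))  # [[1 2] [0 0]]
```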
- Train
  - python train.py
- Current model: U-Net
- Loss function: $Loss = \text{cross entropy} + \text{dice loss}$
- Evaluation metric: mean IoU
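The loss and the metric can be sketched in plain numpy as follows. This is an illustrative sketch, not the repo's implementation (the actual training code is in train.py and presumably uses PyTorch tensors); the smoothing constants are assumptions.

```python
import numpy as np

def cross_entropy(probs, target):
    """Mean negative log-likelihood of the true class; probs is (N, C)."""
    return float(-np.mean(np.log(probs[np.arange(len(target)), target] + 1e-8)))

def dice_loss(probs, target, num_classes):
    """Soft dice loss: one minus the dice coefficient, averaged over classes."""
    one_hot = np.eye(num_classes)[target]              # (N, C)
    inter = (probs * one_hot).sum(axis=0)
    denom = probs.sum(axis=0) + one_hot.sum(axis=0)
    dice = (2 * inter + 1e-8) / (denom + 1e-8)
    return float(1 - dice.mean())

def combined_loss(probs, target, num_classes):
    """Loss = cross entropy + dice loss, as stated above."""
    return cross_entropy(probs, target) + dice_loss(probs, target, num_classes)

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union over classes present in pred or target."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (target == c))
        union = np.sum((pred == c) | (target == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```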
- Hyperparameters and global variables are in configs.py
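A config file of this kind typically looks like the sketch below. All values here are illustrative assumptions, not the repo's actual settings, apart from the dataset facts noted in the comments.

```python
# configs.py -- illustrative values only; the real file defines its own.
NUM_CLASSES = 19        # CelebAMask-HQ annotates 19 classes (incl. background)
IMAGE_SIZE = 512        # CelebAMask-HQ masks are 512 x 512
BATCH_SIZE = 8          # assumed
LEARNING_RATE = 1e-3    # assumed
NUM_EPOCHS = 50         # assumed
DATA_ROOT = "CelebAMask-HQ"  # assumed dataset path
```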
- Be aware of the arguments passed to the scheduler: unexpected results are produced if the scheduler is initialized inappropriately
- A Jupyter notebook is also provided for convenient visualization.
- Test
  - python test.py
  - Generates 60 comparison results for better visualization of the model's performance
- The model weights are too large to push to GitHub
- The segmentation_models.pytorch library might be useful for building different model architectures and applying pretrained weights.
  - This reference given by the TAs shows many performance results obtained with different model architectures
  - This library includes more architectures, but seems quite difficult to use
- 10 samples from the unseen dataset
- FaceSynthetics dataset (optional)