This repository contains the code for the blog post on image matting with the state-of-the-art method “F, B, Alpha Matting”.
Please follow the instructions below to launch the demonstration script:

- Download the model weights;
- Install the requirements with `pip3 install -r requirements.txt`;
- Launch `python3 demo.py` to use the default arguments. Use `python3 demo.py -h` for details. The results will be saved to `./examples/predictions` by default.
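
Once you have a prediction, the alpha matte can be used to composite the foreground onto a new background with the standard matting equation `I = alpha * F + (1 - alpha) * B`. Below is a minimal sketch of that step, assuming the predicted alpha is saved as a single-channel image; the file names are hypothetical examples, not the script's actual output names:

```python
# Minimal compositing sketch (file names are hypothetical examples).
import cv2
import numpy as np

# Load the original image and the predicted alpha matte (assumed single-channel).
image = cv2.imread("examples/images/cat.png").astype(np.float32) / 255.0
alpha = cv2.imread("examples/predictions/cat_alpha.png", cv2.IMREAD_GRAYSCALE)
alpha = alpha.astype(np.float32)[..., None] / 255.0

# Composite onto a plain green background of the same size.
background = np.zeros_like(image)
background[..., 1] = 1.0  # green channel (images are BGR in OpenCV)

composite = alpha * image + (1.0 - alpha) * background
cv2.imwrite("composite.png", (composite * 255).astype(np.uint8))
```

Note that F, B, Alpha Matting also predicts the foreground image; compositing with the predicted foreground instead of the original image usually gives cleaner edges.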
If you want to run the matting network on your own images, you first need to generate the corresponding trimaps. Doing this manually is burdensome, so follow the instructions below to generate trimaps automatically with a semantic segmentation algorithm:
- Generate the trimaps. This repo uses a PyTorch implementation of DeepLabV3 for that purpose. Select the class of your foreground object with the `--target_class` option (a conceptual sketch of trimap construction follows this list):
  `python generate_trimaps.py -i /path/to/your/images --target_class cat` (the `-i` argument must be a directory; consult `--help` for other options);
- The trimaps will be stored in `path/to/your/images/trimaps`;
- Then launch `python demo.py --image_dir path/to/your/images --trimap_dir path/to/your/images/trimaps --output_dir path/to/save` to get the predictions;
- Note that the results may be imprecise due to rough trimap generation. You can try adjusting the `--conf_threshold` option to fix that.
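
For intuition, a trimap can be derived from a binary segmentation mask by eroding the mask to get the definite foreground and dilating it to get the definite background, leaving an "unknown" band in between. The sketch below illustrates this idea only; it is a simplified stand-in for what `generate_trimaps.py` does, and the file paths, function name, and band width are arbitrary examples:

```python
# Illustrative trimap construction from a binary segmentation mask.
# Pixel convention: 255 = foreground, 0 = background, 128 = unknown band.
import cv2
import numpy as np

def mask_to_trimap(mask: np.ndarray, band_width: int = 10) -> np.ndarray:
    """Build a trimap from a binary mask with values 0 or 255."""
    kernel = np.ones((band_width, band_width), np.uint8)
    # Definite foreground: the mask eroded inward.
    fg = cv2.erode(mask, kernel, iterations=1)
    # Possible foreground: the mask dilated outward.
    dilated = cv2.dilate(mask, kernel, iterations=1)
    # Everything between the eroded and dilated masks is unknown.
    trimap = np.zeros_like(mask)
    trimap[dilated > 0] = 128
    trimap[fg > 0] = 255
    return trimap

# Example usage with a hypothetical segmentation mask file.
mask = cv2.imread("path/to/your/images/cat_mask.png", cv2.IMREAD_GRAYSCALE)
trimap = mask_to_trimap(mask, band_width=20)
cv2.imwrite("path/to/your/images/trimaps/cat.png", trimap)
```

A wider unknown band gives the matting network more room to refine edges, at the cost of more ambiguous pixels to resolve.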
Want to become an expert in AI? AI Courses by OpenCV is a great place to start.