DeepLPF: Deep Local Parametric Filters for Image Enhancement (CVPR 2020)

Paper: here

Teaser

Datasets

Dataset Preprocessing

  • Adobe-DPE (5000 images, RGB, RGB pairs): this dataset can be downloaded here. After downloading this dataset you will need to use Lightroom to pre-process the images according to the procedure outlined in the DeepPhotoEnhancer (DPE) paper. Please see the issue here for instructions. Artist C retouching is used as the groundtruth/target. Feel free to raise a GitHub issue if you need assistance with this (or indeed with the Adobe-UPE dataset below). You can also find the training, validation and testing dataset splits for Adobe-DPE in the following file.

  • Adobe-UPE (5000 images, RGB, RGB pairs): this dataset can be downloaded here. As above, you will need to use Lightroom to pre-process the images according to the procedure outlined in the Underexposed Photo Enhancement Using Deep Illumination Estimation (DeepUPE) paper and detailed in the issue here. Artist C retouching is used as the groundtruth/target. You can find the test images for the Adobe-UPE dataset at this link.
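After preprocessing, it is worth checking that every input image has a matching Artist C ground-truth/target image before training. The snippet below is a minimal sketch of such a check; the directory names are hypothetical placeholders for wherever you store the input and retouched images.

import os

input_dir = "adobe5k/input"         # hypothetical directory of input images
target_dir = "adobe5k/groundtruth"  # hypothetical directory of Artist C retouched images

# Compare image names (without extensions) across the two directories.
input_names = {os.path.splitext(f)[0] for f in os.listdir(input_dir)}
target_names = {os.path.splitext(f)[0] for f in os.listdir(target_dir)}

print("Inputs without a ground truth:", sorted(input_names - target_names))
print("Ground truths without an input:", sorted(target_names - input_names))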

Training

python main.py --valid_every=25 --num_epoch=10000 

valid_every: interval, in epochs, at which the validation and test dataset metrics are computed and logged 
num_epoch: total number of training epochs 

After 25 epochs of training DeepLPF on the Adobe5k_DPE dataset, you should obtain approximately the following results:

Validation dataset PSNR: 22.57 dB

Test dataset PSNR: 22.48 dB
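As a sanity check, these PSNR figures can be reproduced offline by comparing the enhanced images against their ground-truth counterparts. The snippet below is a minimal sketch of the standard PSNR computation using NumPy and Pillow; the file paths are hypothetical and 8-bit RGB images are assumed.

import numpy as np
from PIL import Image

def psnr(enhanced_path, target_path, max_value=255.0):
    # Load both images as float arrays in the [0, 255] range.
    enhanced = np.asarray(Image.open(enhanced_path).convert("RGB"), dtype=np.float64)
    target = np.asarray(Image.open(target_path).convert("RGB"), dtype=np.float64)
    mse = np.mean((enhanced - target) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((max_value ** 2) / mse)

# Hypothetical paths: one enhanced output and its Artist C ground truth.
print(psnr("enhanced/a5000.tif", "groundtruth/a5000.tif"))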

Output is written to a timestamped subdirectory of the data directory, e.g.:

/Adobe5k/log_<timestamp>/

Inference

For inference, create a directory, e.g. inference_imgs, containing two sub-directories called "input" and "output":

/inference_imgs/input 

/inference_imgs/output 

Place the input images into the input directory (i.e. the images you wish to run inference on) and put the groundtruth images in the output directory.

In the inference_imgs directory create a text file called "images_inference.txt" and list the names of the images to be inferenced, one per line, without any path or file extension. For example, if the image is a5000.tif you would create a file containing a single line with the entry:

a5000 
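The directory layout and images_inference.txt can also be generated automatically. The following is a minimal sketch, assuming the directory and file names used above and that the input images have already been copied into inference_imgs/input; adjust the paths and extensions as needed.

import os

inference_dir = "inference_imgs"  # example path; change to your own location
input_dir = os.path.join(inference_dir, "input")
output_dir = os.path.join(inference_dir, "output")

# Create the expected sub-directories if they do not already exist.
os.makedirs(input_dir, exist_ok=True)
os.makedirs(output_dir, exist_ok=True)

# List each input image name, one per line, without path or file extension.
with open(os.path.join(inference_dir, "images_inference.txt"), "w") as f:
    for filename in sorted(os.listdir(input_dir)):
        name, ext = os.path.splitext(filename)
        if ext.lower() in (".tif", ".tiff", ".png", ".jpg", ".jpeg"):
            f.write(name + "\n")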

To run inference use the following command:

python main.py --checkpoint_filepath= --inference_img_dirpath= 

checkpoint_filepath: location of checkpoint file 
inference_img_dirpath: location of image directory 

For example:

python main.py  --inference_img_dirpath="/aiml/data/inference_imgs/" --checkpoint_filepath="/aiml/data/deeplpf_validpsnr_23.512562908388936_validloss_0.03257064148783684_testpsnr_23.772689725002834_testloss_0.03129354119300842_epoch_399_model.pt" 

Output is written to a timestamped subdirectory of the data directory, e.g.:

/Adobe5k/log_<timestamp>/