This is the official repository for the Compression Artifact Tracing Network (CAT-Net). Given a possibly manipulated image, the network outputs a probability map indicating, for each pixel, how likely it is to have been manipulated.
Keywords: CAT-Net, Image forensics, Multimedia forensics, Image manipulation detection, Image manipulation localization, Image processing
- v1 (WACV2021) [link to the paper]
Myung-Joon Kwon, In-Jae Yu, Seung-Hun Nam, and Heung-Kyu Lee, “CAT-Net: Compression Artifact Tracing Network for Detection and Localization of Image Splicing”, Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 375–384
- v2 (arXiv, under review) [link to the paper]
Myung-Joon Kwon, Seung-Hun Nam, In-Jae Yu, Heung-Kyu Lee, and Changick Kim, “Learning JPEG Compression Artifacts for Image Manipulation Detection and Localization”, arXiv:2108.12947 [cs, eess], Aug. 2021
2. Download the weights from: [Google Drive Link] or [Baiduyun Link] (extract code: ycft).
CAT-Net
├── pretrained_models (pretrained weights for each stream)
│   ├── DCT_djpeg.pth.tar
│   └── hrnetv2_w48_imagenet_pretrained.pth
├── output (trained weights for CAT-Net)
│   └── splicing_dataset
│       ├── CAT_DCT_only
│       │   └── DCT_only_v2.pth.tar
│       └── CAT_full
│           ├── CAT_full_v1.pth.tar
│           └── CAT_full_v2.pth.tar
If you only want to run inference, you need just CAT_full_v1.pth.tar or CAT_full_v2.pth.tar.
v1 is the WACV model and v2 is the journal model. Both share the same architecture, but the trained weights differ: v1 targets only splicing, while v2 also targets copy-move forgery. If you plan to train from scratch, you can skip downloading these weights.
conda create -n cat python=3.6
conda activate cat
conda install pytorch==1.1.0 torchvision==0.3.0 cudatoolkit=10.0 -c pytorch
pip install -r requirements.txt
You need to install JpegIO manually from here; installing it via pip is not supported. After cloning the JpegIO repository, go to its directory and run:
python setup.py install
Set paths properly in 'project_config.py'.
Set the settings properly in 'experiments/CAT_full.yaml'. If you are using a single GPU, set GPU=(0,), not (0); without the trailing comma, Python parses (0) as the integer 0 rather than a one-element tuple.
Put input images in the 'input' directory. Use English (ASCII) file names.
Choose between the full CAT-Net and the DCT stream by commenting/uncommenting lines 65-66 and 75-76 in tools/infer.py. Also choose between v1 and v2 on lines 65-66 by modifying the weight-file strings.
At the root of this repo, run:
python tools/infer.py
The predictions are saved in the 'output_pred' directory as heatmaps.
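Since the heatmaps encode per-pixel manipulation probabilities, a common follow-up step is thresholding them into a binary localization mask. A minimal sketch (the threshold value and array shapes here are illustrative, not part of the repo):

```python
import numpy as np

def binarize(prob_map, threshold=0.5):
    """Turn a per-pixel probability map (floats in [0, 1]) into a 0/1 mask."""
    return (prob_map >= threshold).astype(np.uint8)

# synthetic example: top half of a 4x4 map is "manipulated"
demo = np.zeros((4, 4), dtype=np.float32)
demo[:2, :] = 0.9
mask = binarize(demo)  # flags the 8 pixels in the top two rows
```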
Obtain the datasets you want to use for training.
You can download the tampCOCO and compRAISE datasets from [Google Drive Link].
Note that tampCOCO consists of four datasets: cm_COCO, sp_COCO, bcm_COCO (=CM RAISE), bcmc_COCO (=CM-JPEG RAISE).
Also note that compRAISE is an alias of JPEG RAISE in the journal paper.
You may use the datasets for research purposes only.
Set training and validation set configuration in Splicing/data/data_core.py.
CAT-Net accepts only JPEG images for training, so any non-JPEG images in each dataset must be JPEG-compressed (quality 100, no chroma subsampling) before you start training. You may run each dataset file (e.g., Splicing/data/dataset_IMD2020.py) to perform this compression automatically.
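The required re-compression can also be done standalone. A minimal sketch using Pillow (the function name is ours; the repo's dataset files do the equivalent automatically when run directly):

```python
from PIL import Image

def to_cat_jpeg(src_path, dst_path):
    """Re-save any image as a JPEG with quality 100 and 4:4:4 chroma
    (subsampling=0 disables chroma subsampling), the format CAT-Net
    expects for its training data."""
    img = Image.open(src_path).convert('RGB')
    img.save(dst_path, 'JPEG', quality=100, subsampling=0)
```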
If you wish to add additional datasets, you should create dataset class files similar to the existing ones.
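As a rough illustration of what such a file needs to do, here is a hypothetical class that pairs tampered JPEGs with their ground-truth masks by filename stem. The class name, method names, and directory layout are all assumptions; follow the repo's existing dataset files (e.g., Splicing/data/dataset_IMD2020.py) for the actual interface.

```python
import os

class DatasetMyData:
    """Hypothetical sketch; real dataset classes in Splicing/data/
    follow the repo's abstract dataset interface."""

    def __init__(self, tamp_root, mask_root):
        self.tamp_root = tamp_root
        self.mask_root = mask_root
        # pair each tampered JPEG with its ground-truth mask by filename stem
        self.tamp_list = []
        for name in sorted(os.listdir(tamp_root)):
            stem, ext = os.path.splitext(name)
            if ext.lower() in ('.jpg', '.jpeg'):
                mask = os.path.join(mask_root, stem + '.png')
                if os.path.exists(mask):
                    self.tamp_list.append((os.path.join(tamp_root, name), mask))

    def __len__(self):
        return len(self.tamp_list)

    def get_tamp(self, index):
        # a real implementation would load the JPEG here (including its
        # DCT coefficients via jpegio) along with the binary mask
        return self.tamp_list[index]
```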
At the root of this repo, run:
python tools/train.py
Training starts from the pretrained weights if you place them properly.
This code is built on top of HRNet. You need to follow their license.
For CAT-Net, you may freely use it for research purposes.
Commercial use is strictly prohibited.
If you use resources provided by this repository, please cite these papers.
- v1 (WACV2021)
@inproceedings{kwon2021cat,
title={CAT-Net: Compression Artifact Tracing Network for Detection and Localization of Image Splicing},
author={Kwon, Myung-Joon and Yu, In-Jae and Nam, Seung-Hun and Lee, Heung-Kyu},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
pages={375--384},
year={2021}
}
- v2 (arXiv, under review)
@article{kwon2021learning,
title={Learning {JPEG} Compression Artifacts for Image Manipulation Detection and Localization},
author={Kwon, Myung-Joon and Nam, Seung-Hun and Yu, In-Jae and Lee, Heung-Kyu and Kim, Changick},
journal={arXiv preprint arXiv:2108.12947},
year={2021}
}