High-Efficiency Lossy Image Coding Through Adaptive Neighborhood Information Aggregation

PyTorch implementation of our paper "High-Efficiency Lossy Image Coding Through Adaptive Neighborhood Information Aggregation" [arXiv].

More details can be found at the homepage.

News

  • [22.10.27] The latest version of TinyLIC is released, with a more efficient network architecture in both the transform and entropy coding modules. More details can be found in the paper.

Installation

To get started locally and install the development version of our work, run the following commands (a Docker environment is recommended):

git clone https://github.com/lumingzzz/TinyLIC.git
cd TinyLIC
pip install -U pip && pip install -e .
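
To verify the editable install from Python, the quick check below can be used; the compressai.models.tinylic module path is an assumption based on the description later in this README, so adjust the name if it differs.

# Sanity check for the editable install. The compressai.models.tinylic
# module path is an assumption; adjust it if the module is named differently.
import importlib.util
import compressai

print("CompressAI version:", getattr(compressai, "__version__", "unknown"))
print("TinyLIC module found:",
      importlib.util.find_spec("compressai.models.tinylic") is not None)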

Usage

Train

We use the Flicker2W dataset for training; the accompanying script is used for preprocessing.

Run the script for a simple training pipeline:

python examples/train.py -m tinylic -d /path/to/my/image/dataset/ --epochs 400 -lr 1e-4 --batch-size 8 --cuda --save

The training checkpoints will be generated in the "checkpoints" folder in the current directory. You can change the default folder by modifying the function "init()" in "./examples/train.py".
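
If you drive training from your own loop instead of examples/train.py, the sketch below shows a data pipeline in the style the example expects. It assumes CompressAI's ImageFolder convention of train/ and test/ subfolders under the dataset root and a 256x256 random-crop transform; check examples/train.py for the exact settings.

# Minimal data-loading sketch, assuming CompressAI's ImageFolder convention
# (images under <root>/train and <root>/test). Patch size and batch size are
# assumed values; match them to the options passed to examples/train.py.
from torch.utils.data import DataLoader
from torchvision import transforms
from compressai.datasets import ImageFolder

train_transforms = transforms.Compose(
    [transforms.RandomCrop(256), transforms.ToTensor()]
)

train_dataset = ImageFolder(
    "/path/to/my/image/dataset/", split="train", transform=train_transforms
)

train_loader = DataLoader(
    train_dataset, batch_size=8, shuffle=True, num_workers=4, pin_memory=True
)

for batch in train_loader:
    # each batch is a (B, 3, 256, 256) float tensor in [0, 1];
    # forward it through the model and optimize the rate-distortion loss here
    break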

Evaluation

Pre-trained models can be downloaded from NJU Box. The MSE-optimized R-D results on three popular datasets can be found in /results for reference.

An example of evaluating a model:

python -m compressai.utils.eval_model checkpoint path/to/eval/data/ -a tinylic -p path/to/pretrained/model --cuda
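
For programmatic evaluation on a single image, a minimal sketch is given below. The TinyLIC class name, its default constructor, and the "state_dict" checkpoint key are assumptions based on CompressAI conventions; verify them against compressai/models/tinylic.py and examples/train.py.

# Single-image compress/decompress sketch. The TinyLIC class name and the
# "state_dict" checkpoint key are assumptions; verify against
# compressai/models/tinylic.py and examples/train.py.
import torch
from PIL import Image
from torchvision import transforms
from compressai.models.tinylic import TinyLIC  # assumed class name

device = "cuda" if torch.cuda.is_available() else "cpu"

model = TinyLIC()  # assumed default constructor
checkpoint = torch.load("path/to/pretrained/model", map_location=device)
model.load_state_dict(checkpoint.get("state_dict", checkpoint))
model.update(force=True)  # build entropy-coder CDF tables before compress()
model = model.to(device).eval()

# Note: H and W may need to be multiples of the model's total downsampling
# factor (e.g. 64); pad the input if necessary.
x = transforms.ToTensor()(Image.open("kodim01.png").convert("RGB"))
x = x.unsqueeze(0).to(device)  # (1, 3, H, W), values in [0, 1]

with torch.no_grad():
    out_enc = model.compress(x)
    out_dec = model.decompress(out_enc["strings"], out_enc["shape"])

num_pixels = x.size(2) * x.size(3)
bpp = sum(len(s[0]) for s in out_enc["strings"]) * 8.0 / num_pixels
mse = torch.mean((x - out_dec["x_hat"].clamp(0, 1)) ** 2)
psnr = -10 * torch.log10(mse)
print(f"bpp: {bpp:.4f}, PSNR: {psnr.item():.2f} dB")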

Citation

If you find this work useful for your research, please cite:

@article{lu2022high,
  title={High-Efficiency Lossy Image Coding Through Adaptive Neighborhood Information Aggregation},
  author={Lu, Ming and Ma, Zhan},
  journal={arXiv preprint arXiv:2204.11448},
  year={2022}
}

Acknowledgement

This framework is based on CompressAI; our modifications are mainly in compressai.models.tinylic and compressai.layers. You can refer to the paper to understand the modified parts.

The TinyLIC model is partially built upon the Neighborhood Attention Transformer and the open-source unofficial implementation of the Checkerboard Shaped Context Model. We thank the authors for sharing their code.

Contact

If you have any questions, please contact me at [email protected].
