Welcome to BackdoorBox (Under Development)

Python 3.8 | PyTorch 1.8.0 | torchvision 0.9.0 | CUDA 11.1 | License: GPL

BackdoorBox is a Python toolbox for backdoor attacks and defenses.

This project is still under development, so there is no user manual yet. Please refer to the 'tests' sub-folder for examples of how to use the implemented methods.

Current Status

Developed Methods

Backdoor Attacks

| Method | Source | Key Properties | Note |
|---|---|---|---|
| BadNets | IEEE ACCESS, 2019 | poison-only | first backdoor attack |
| Blended Attack | arXiv, 2017 | poison-only, invisible | first invisible attack |
| Refool (simplified version) | ECCV, 2020 | poison-only, sample-specific | first stealthy attack with visible yet natural trigger |
| Label-consistent Attack | arXiv, 2019 | poison-only, invisible, clean-label | first clean-label backdoor attack |
| ISSBA | ICCV, 2021 | poison-only, sample-specific, physical | first poison-only sample-specific attack |
| WaNet | ICLR, 2021 | poison-only, invisible, sample-specific | |
| Blind Backdoor (blended-based) | USENIX Security, 2021 | training-controlled | first training-controlled attack targeting loss computation |
| Input-aware Dynamic Attack | NeurIPS, 2020 | training-controlled, optimized, sample-specific | first training-controlled sample-specific attack |
| Physical Attack | ICLR Workshop, 2021 | training-controlled, physical | first physical backdoor attack |
| LIRA | ICCV, 2021 | training-controlled, invisible, optimized, sample-specific | |

Note: For the convenience of users, all our implemented attacks support obtaining the poisoned dataset (via .get_poisoned_dataset()), obtaining the infected model (via .get_model()), and training with your own local samples (loaded via torchvision.datasets.DatasetFolder). Please refer to base.py and each attack's code for more details. A usage sketch is given below.
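
The sketch below is a minimal illustration of these interfaces, not a definitive recipe. It assumes the toolbox is importable as `core` (as in the 'tests' sub-folder) and uses BadNets as the example attack; the constructor arguments, dataset paths, and the exact return value of .get_poisoned_dataset() are illustrative assumptions, so check base.py and the attack's own code for the actual signatures.

```python
import torch.nn as nn
import torchvision
from PIL import Image
from torchvision import transforms
from torchvision.datasets import DatasetFolder

import core  # assumption: the BackdoorBox package, imported as in the 'tests' sub-folder


def pil_loader(path):
    """Load one local sample; any loader accepted by DatasetFolder works."""
    return Image.open(path).convert("RGB")


# Your own local samples, loaded via torchvision.datasets.DatasetFolder.
# Paths, extensions, and transforms are placeholders.
transform = transforms.Compose([transforms.ToTensor()])
trainset = DatasetFolder(root="./data/train", loader=pil_loader,
                         extensions=(".png",), transform=transform)
testset = DatasetFolder(root="./data/test", loader=pil_loader,
                        extensions=(".png",), transform=transform)

# Constructor arguments are illustrative assumptions; the exact signature is in
# base.py and the attack's own code.
attack = core.BadNets(
    train_dataset=trainset,
    test_dataset=testset,
    model=torchvision.models.resnet18(num_classes=10),
    loss=nn.CrossEntropyLoss(),
    y_target=0,
    poisoned_rate=0.05,
)

# The two interfaces described in the note above. The (train, test) pair returned
# by .get_poisoned_dataset() is assumed; verify against base.py.
poisoned_trainset, poisoned_testset = attack.get_poisoned_dataset()
infected_model = attack.get_model()  # the backdoored model, once the attack has run
```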

Backdoor Defenses

  • ShrinkPad (Key Properties: Pre-processing-based Defense)
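
ShrinkPad is a pre-processing defense: roughly, the input image is shrunk and then padded back to its original size at a random location, so a trigger no longer appears where the backdoor expects it. The following is a minimal sketch of that shrink-and-pad idea in plain PyTorch; it is not BackdoorBox's own API, and the function name and `pad` parameter are assumptions for illustration only.

```python
import random

import torch
import torch.nn.functional as F


def shrink_pad(image: torch.Tensor, pad: int = 4) -> torch.Tensor:
    """Shrink a CxHxW image by `pad` pixels in each spatial dimension, then zero-pad
    it back to the original size at a random offset (illustrative sketch only)."""
    c, h, w = image.shape
    # Downscale so the shrunken image plus the padding fills the original size.
    shrunk = F.interpolate(image.unsqueeze(0), size=(h - pad, w - pad),
                           mode="bilinear", align_corners=False).squeeze(0)
    # Place the shrunken image at a random offset inside a zero canvas.
    top, left = random.randint(0, pad), random.randint(0, pad)
    padded = torch.zeros_like(image)
    padded[:, top:top + h - pad, left:left + w - pad] = shrunk
    return padded
```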

Methods Under Development

  • TUAP (basic version)
  • Sleeper Agent
  • NAD
  • Fine-tuning
  • Pruning
  • MCR
  • DBD
  • SS
  • ABL

Contributors

| Organization | Contributors |
|---|---|
| Tsinghua University | Yiming Li, Mengxi Ya, Guanhao Gan, Kuofeng Gao, Xin Yan, Jia Xu, Tong Xu, Sheng Yang, Linghui Zhu, Yang Bai |

Citation

If our toolbox is useful for your research, please cite our paper(s) as follows:

@article{li2022backdoorbox,
  title={{BackdoorBox}: A Python Toolbox for Backdoor Learning},
  author={Li, Yiming and Ya, Mengxi and Bai, Yang and Jiang, Yong and Xia, Shu-Tao},
  year={2022}
}
@article{li2020backdoor,
  title={Backdoor Learning: A Survey},
  author={Li, Yiming and Jiang, Yong and Li, Zhifeng and Xia, Shu-Tao},
  journal={arXiv preprint arXiv:2007.08745},
  year={2020}
}
