Update README.md
THUYimingLi authored Jun 21, 2022
1 parent 1349639 commit 5b256ce
Showing 1 changed file with 2 additions and 2 deletions.
README.md: 4 changes (2 additions & 2 deletions)
@@ -41,8 +41,8 @@ Currently, this toolbox is still under development (but the attack parts are alm
  |:---------------------------------------------------------------------------------------------------------------:|:---------------------:|------------------------------------------------------------|-------------------------------------------------------------|
  | [ShrinkPad](https://github.com/THUYimingLi/BackdoorBox/blob/main/core/defenses/ShrinkPad.py) | [Backdoor Attack in the Physical World](https://arxiv.org/pdf/2104.02361.pdf). ICLR Workshop, 2021. | Sample Pre-processing | efficient defense |
  | [FineTuning](https://github.com/THUYimingLi/BackdoorBox/blob/main/core/defenses/FineTuning.py) | [Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks](https://arxiv.org/pdf/1805.12185.pdf). RAID, 2018. | Model Repairing | first defense based on model repairing |
- | [MCR](https://github.com/THUYimingLi/BackdoorBox/blob/main/core/defenses/MCR.py) | [Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness](https://arxiv.org/pdf/2005.00060.pdf). ICLR, 2020 | Model Repairing | |
- | [NAD](https://github.com/THUYimingLi/BackdoorBox/blob/main/core/defenses/NAD.py) | [Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks](https://openreview.net/pdf?id=9l0K4OM-oXE). ICLR, 2021 | Model Repairing | first distillation-based defense |
+ | [MCR](https://github.com/THUYimingLi/BackdoorBox/blob/main/core/defenses/MCR.py) | [Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness](https://arxiv.org/pdf/2005.00060.pdf). ICLR, 2020. | Model Repairing | |
+ | [NAD](https://github.com/THUYimingLi/BackdoorBox/blob/main/core/defenses/NAD.py) | [Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks](https://openreview.net/pdf?id=9l0K4OM-oXE). ICLR, 2021. | Model Repairing | first distillation-based defense |


