From 5b256ce63cf132b135be197620c606653a2bfe8b Mon Sep 17 00:00:00 2001
From: Yiming Li <46520010+THUYimingLi@users.noreply.github.com>
Date: Tue, 21 Jun 2022 14:14:23 +0800
Subject: [PATCH] Update README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index e21ea0d..f057ec9 100644
--- a/README.md
+++ b/README.md
@@ -41,8 +41,8 @@ Currently, this toolbox is still under development (but the attack parts are alm
 |:---------------------------------------------------------------------------------------------------------------:|:---------------------:|------------------------------------------------------------|-------------------------------------------------------------|
 | [ShrinkPad](https://github.com/THUYimingLi/BackdoorBox/blob/main/core/defenses/ShrinkPad.py) | [Backdoor Attack in the Physical World](https://arxiv.org/pdf/2104.02361.pdf). ICLR Workshop, 2021. | Sample Pre-processing | efficient defense |
 | [FineTuning](https://github.com/THUYimingLi/BackdoorBox/blob/main/core/defenses/FineTuning.py) | [Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks](https://arxiv.org/pdf/1805.12185.pdf). RAID, 2018. | Model Repairing | first defense based on model repairing |
-| [MCR](https://github.com/THUYimingLi/BackdoorBox/blob/main/core/defenses/MCR.py) | [Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness](https://arxiv.org/pdf/2005.00060.pdf). ICLR, 2020 | Model Repairing | |
-| [NAD](https://github.com/THUYimingLi/BackdoorBox/blob/main/core/defenses/NAD.py) | [Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks](https://openreview.net/pdf?id=9l0K4OM-oXE). ICLR, 2021 | Model Repairing | first distillation-based defense |
+| [MCR](https://github.com/THUYimingLi/BackdoorBox/blob/main/core/defenses/MCR.py) | [Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness](https://arxiv.org/pdf/2005.00060.pdf). ICLR, 2020. | Model Repairing | |
+| [NAD](https://github.com/THUYimingLi/BackdoorBox/blob/main/core/defenses/NAD.py) | [Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks](https://openreview.net/pdf?id=9l0K4OM-oXE). ICLR, 2021. | Model Repairing | first distillation-based defense |