- Python: 3.7
- PyTorch: 1.5.0
We use the benchmark dataset MiniImageNet, which can be downloaded here and here. CIFARFS and Omniglot can be found in the torchmeta package here
We use a four-layer convolutional neural network model; a minimal sketch of this kind of backbone is shown below.
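A minimal sketch of a standard four-layer convolutional backbone for few-shot learning, assuming 84x84 MiniImageNet inputs and 64 hidden channels (the exact architecture used here lives in learner.py):

```python
import torch.nn as nn

class Conv4(nn.Module):
    """Four conv blocks (conv3x3 -> BN -> ReLU -> maxpool) plus a linear head.
    Layer sizes are assumptions for illustration; see learner.py for the
    repo's actual definition."""
    def __init__(self, n_way=5, in_channels=3, hidden=64):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            )
        self.features = nn.Sequential(
            block(in_channels, hidden),
            block(hidden, hidden),
            block(hidden, hidden),
            block(hidden, hidden),
        )
        # For 84x84 inputs, four 2x2 poolings leave 5x5 feature maps.
        self.classifier = nn.Linear(hidden * 5 * 5, n_way)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```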
Run MAML_TrainStd.ipynb; associated files include MAMLMeta.py, attack.py, and learner.py.
- The attack power level (perturbation budget) has to be set in MAMLMeta.py; a sketch of where it enters the attack follows this list.
- The device used in MAML_TrainStd.ipynb and attack.py should be set to the same one (the same applies to the adversarial training below).
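A minimal sketch of an L-infinity PGD attack, only to show where the attack power `eps` and the device enter; `eps`, `alpha`, and `steps` here are illustrative assumptions, and the repo's actual attack is implemented in attack.py with `eps` set inside MAMLMeta.py:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=2/255, alpha=0.5/255, steps=10, device='cuda'):
    # Model and data must live on the same device, and this device must
    # match the one chosen in the training notebook.
    model, x, y = model.to(device), x.to(device), y.to(device)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```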
Run trainfgsmrs.ipynb; associated files include metafgsm.py, attack.py, and learner.py. To also incorporate adversarial training in the inner loop, replace metafgsm.py with metafgsminout.py.
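A minimal sketch of FGSM with random start (FGSM-RS), the cheap one-step attack behind this fast adversarial training; the `eps`/`alpha` values are illustrative assumptions, and metafgsm.py / metafgsminout.py contain the repo's actual versions:

```python
import torch
import torch.nn.functional as F

def fgsm_rs(model, x, y, eps=2/255, alpha=2.5/255):
    # Random initialization inside the eps-ball, then one signed-gradient step.
    delta = torch.empty_like(x).uniform_(-eps, eps)
    delta.requires_grad_(True)
    loss = F.cross_entropy(model(x + delta), y)
    grad = torch.autograd.grad(loss, delta)[0]
    # Step, clip back to the eps-ball, and clamp to the valid image range.
    delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
    return (x + delta).clamp(0, 1)
```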
Run train_trade.ipynb; associated files include MetaFT.py and LoadUnlableData.py. The unlabeled data can be downloaded from here.
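A minimal sketch of the TRADES objective this fine-tuning is based on: cross-entropy on clean data plus a KL term pulling adversarial outputs toward clean outputs. Since the KL term needs no labels, it can also be evaluated on the unlabeled data loaded by LoadUnlableData.py. The `beta` value and the adversarial-example generator are assumptions; see MetaFT.py for the repo's actual setup.

```python
import torch.nn.functional as F

def trades_loss(model, x, y, x_adv, beta=6.0):
    logits = model(x)
    clean_loss = F.cross_entropy(logits, y)           # accuracy term (labeled)
    robust_loss = F.kl_div(                           # robustness term (label-free)
        F.log_softmax(model(x_adv), dim=1),
        F.softmax(logits, dim=1),
        reduction='batchmean')
    return clean_loss + beta * robust_loss
```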
Run StandardTransNew.ipynb; associated files include LoadDataST.py and StandardTrans.py. StandardTransAdv.ipynb adds adversarial training to the model-training process.
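A minimal sketch of a transfer-learning baseline of this kind: freeze a pretrained feature extractor and fine-tune only a new linear head on the few-shot task. The `features`/`classifier` attributes assume the Conv4 sketch above, and the hyperparameters are assumptions; see StandardTrans.py for the repo's actual procedure.

```python
import torch
import torch.nn.functional as F

def finetune_head(model, loader, n_way=5, epochs=10, lr=1e-2):
    for p in model.features.parameters():
        p.requires_grad = False                      # freeze the backbone
    model.classifier = torch.nn.Linear(model.classifier.in_features, n_way)
    opt = torch.optim.SGD(model.classifier.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()  # only the head updates
            opt.step()
    return model
```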
Run figureselection.ipynb.
Run robust_vis_neuron.ipynb; associated files include Visualization.py, vis_tool.py, and MODELMETA.py.
- Maximizing a neuron's output with respect to a perturbation of the input reveals a recognizable feature in the input under a robust model, whereas only "random noise" appears under a standard MAML model (see the sketch after this list).
- The fine-tuned model exhibits a similar feature to the original model at the same neuron. This suggests that robustness is preserved in the fine-tuned model even without adding adversarial training during fine-tuning.
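A minimal sketch of this neuron-visualization idea: gradient-ascend a bounded perturbation of the input so as to maximize one neuron's activation, then inspect the perturbation. The hook placement, `eps`, and step sizes are assumptions for illustration; see Visualization.py and vis_tool.py for the repo's implementation.

```python
import torch

def visualize_neuron(model, x, layer, neuron_idx, eps=8/255, steps=100, lr=0.01):
    activation = {}
    handle = layer.register_forward_hook(
        lambda m, inp, out: activation.update(out=out))
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        model(x + delta)                              # hook records the layer output
        # Maximize the mean activation of the chosen channel/neuron.
        obj = activation['out'][:, neuron_idx].mean()
        grad = torch.autograd.grad(obj, delta)[0]
        delta = (delta + lr * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    handle.remove()
    # Feature-like under a robust model; noise-like under standard MAML.
    return delta.detach()
```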
Run the .ipynb files in the two folders "CIFARFS" and "Omniglot".
If you use this code, please cite the following reference:
@inproceedings{wangfast,
  title={On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning},
  author={Wang, Ren and Xu, Kaidi and Liu, Sijia and Chen, Pin-Yu and Weng, Tsui-Wei and Gan, Chuang and Wang, Meng},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2021}
}