# Adversarial Machine Learning

This repository contains an implementation of Projected Gradient Descent (PGD; Madry et al., 2018), which builds on the Fast Gradient Sign Method (FGSM; Goodfellow et al., 2015), to attack neural networks trained on CIFAR-10.
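PGD iterates an FGSM-style signed-gradient step and then projects the result back into an ℓ∞ ball of radius ε around the original input. A minimal pure-Python sketch of that loop (the toy quadratic loss, analytic gradient, and all names here are illustrative assumptions, not this repository's code, where the gradient would come from backpropagation through the network):

```python
# PGD sketch: repeat { step by alpha in the sign of the loss gradient,
# then clip the perturbation to [-eps, eps] per coordinate }.

def pgd_attack(x0, grad_fn, eps, alpha, steps):
    """Maximize the loss by iterated sign-gradient ascent,
    projecting back into the L-inf ball of radius eps around x0."""
    x = list(x0)
    for _ in range(steps):
        g = grad_fn(x)
        # FGSM-style step: move each coordinate by alpha in the gradient's sign
        x = [xi + alpha * (1 if gi > 0 else -1 if gi < 0 else 0)
             for xi, gi in zip(x, g)]
        # Projection: keep the perturbation within [-eps, eps] of the original
        x = [min(max(xi, x0i - eps), x0i + eps) for xi, x0i in zip(x, x0)]
    return x

# Toy loss: sum_i (w_i * x_i - y_i)^2, with analytic gradient in x
w, y = [1.0, -2.0], [0.5, 0.5]
grad = lambda x: [2 * wi * (wi * xi - yi) for wi, xi, yi in zip(w, x, y)]

x_adv = pgd_attack([0.0, 0.0], grad, eps=0.1, alpha=0.02, steps=10)
```

In the real attack, each coordinate of the image would additionally be clipped to the valid pixel range after projection.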

The network is defended by performing adversarial training.

## Installation

Use conda to create a virtual environment and install the dependencies:

```sh
conda create -n robust
conda activate robust
conda install --file requirements.txt
```

## Attack

Download the model checkpoint. To run the PGD attack on the model using CIFAR-10 images, run:

```sh
cd src
python attack.py --ckpt {path_to_ckpt}
```

Some examples of adversarial images produced by the attack:

## Defense

Use adversarial training as the defense mechanism:

```sh
cd src
python train.py
```
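Adversarial training replaces clean inputs with adversarially perturbed ones inside the training loop: an inner maximization crafts a worst-case input within an ε-ball, and the outer minimization updates the model on the loss at that input. A pure-Python sketch on a toy scalar model (the linear model, quadratic loss, and all names are illustrative assumptions, not this repository's training code, which would use a network and a full PGD inner loop):

```python
# Adversarial training sketch on a toy model: prediction w*x, loss (w*x - y)^2.
# Inner step: one FGSM-style signed-gradient step on the input.
# Outer step: gradient descent on the parameter at the adversarial input.

def adv_train(data, w, eps, lr, epochs):
    for _ in range(epochs):
        for x, y in data:
            # Inner maximization: perturb x by eps in the sign of d(loss)/dx
            gx = 2 * w * (w * x - y)
            x_adv = x + eps * (1 if gx > 0 else -1 if gx < 0 else 0)
            # Outer minimization: gradient step on w using the adversarial input
            gw = 2 * x_adv * (w * x_adv - y)
            w -= lr * gw
    return w

data = [(1.0, 2.0), (2.0, 4.0)]  # targets consistent with w = 2
w_robust = adv_train(data, w=0.0, eps=0.1, lr=0.05, epochs=50)
```

Because every parameter update is computed at a perturbed input, the learned model trades a little clean accuracy for robustness within the ε-ball.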

## References

- Goodfellow et al., "Explaining and Harnessing Adversarial Examples", ICLR 2015.
- Madry et al., "Towards Deep Learning Models Resistant to Adversarial Attacks", ICLR 2018.
