Algorithms and resources for adversarial attacks in RL
NIPS 2017 adversarial competition code: https://github.com/sangxia/nips-2017-adversarial
Risk and uncertainty: https://paperswithcode.com/paper/estimating-risk-and-uncertainty-in-deep
FGSM (a minimal sketch follows these links):
https://github.com/rodgzilla/machine_learning_adversarial_examples
https://github.com/soumyac1999/FGSM-Keras
https://github.com/ferjad/FGSM_pytorch
https://github.com/yangdechuan/fast-gradient-sign-method-tensorflow
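As a quick reference for the repos above, here is a minimal untargeted FGSM sketch in PyTorch; the classifier, the [0, 1] input range, and the cross-entropy loss are assumptions, not taken from any one repo:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon):
    """One-step L-inf attack (Goodfellow et al., 2014).

    x: input batch in [0, 1], y: true labels, epsilon: perturbation budget.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

For a targeted variant (the flavor used by the policy induction attacks linked below), compute the loss against an adversary-chosen label and subtract the gradient sign instead.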
Adversarial policies (Adam Gleave): https://github.com/HumanCompatibleAI/adversarial-policies
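The core idea in Gleave et al. is to train an adversarial policy against a frozen victim in a two-player environment, so the victim effectively becomes part of the environment dynamics. A minimal sketch of that reduction, assuming a hypothetical `two_player_env` / `victim_policy` interface (this is not the repo's actual API):

```python
import gym

class FrozenVictimEnv(gym.Env):
    """Wrap a two-player env so the adversary sees a single-agent problem.

    `two_player_env` and `victim_policy` are hypothetical stand-ins: the env's
    reset/step are assumed to return and accept (adversary, victim) pairs.
    """

    def __init__(self, two_player_env, victim_policy):
        self.env = two_player_env
        self.victim = victim_policy  # pretrained and never updated
        self.observation_space = two_player_env.observation_space
        self.action_space = two_player_env.action_space
        self._victim_obs = None

    def reset(self):
        adv_obs, self._victim_obs = self.env.reset()
        return adv_obs

    def step(self, adv_action):
        # The frozen victim acts as part of the environment dynamics.
        victim_action = self.victim(self._victim_obs)
        (adv_obs, self._victim_obs), rewards, done, info = self.env.step(
            (adv_action, victim_action))
        return adv_obs, rewards[0], done, info  # adversary's reward only
```

Any off-the-shelf single-agent algorithm (the paper uses PPO) can then train the adversary on the wrapped environment.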
iLQR MuJoCo: https://github.com/anassinator/ilqr
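For orientation, a generic NumPy sketch of the iLQR backward pass (the value-function recursion); this follows the standard formulation, not the repo's API, and the regularization scheme is a simplification:

```python
import numpy as np

def ilqr_backward(f_x, f_u, l_x, l_u, l_xx, l_ux, l_uu, l_x_T, l_xx_T, reg=1e-6):
    """One iLQR backward pass over a length-T trajectory.

    f_x[t], f_u[t]: dynamics Jacobians; l_*: running-cost derivatives;
    l_x_T, l_xx_T: terminal-cost derivatives. Returns feedforward terms k[t]
    and feedback gains K[t] for the update u[t] += alpha*k[t] + K[t] @ dx[t].
    """
    T = len(f_x)
    V_x, V_xx = l_x_T, l_xx_T
    k, K = [None] * T, [None] * T
    for t in reversed(range(T)):
        Q_x = l_x[t] + f_x[t].T @ V_x
        Q_u = l_u[t] + f_u[t].T @ V_x
        Q_xx = l_xx[t] + f_x[t].T @ V_xx @ f_x[t]
        Q_ux = l_ux[t] + f_u[t].T @ V_xx @ f_x[t]
        Q_uu = l_uu[t] + f_u[t].T @ V_xx @ f_u[t]
        # Regularize so Q_uu stays invertible (Levenberg-Marquardt style).
        Q_uu_inv = np.linalg.inv(Q_uu + reg * np.eye(Q_uu.shape[0]))
        k[t], K[t] = -Q_uu_inv @ Q_u, -Q_uu_inv @ Q_ux
        V_x = Q_x + K[t].T @ Q_uu @ k[t] + K[t].T @ Q_u + Q_ux.T @ k[t]
        V_xx = Q_xx + K[t].T @ Q_uu @ K[t] + K[t].T @ Q_ux + Q_ux.T @ K[t]
    return k, K
```

The forward pass then rolls out the updated controls with a line search on alpha, and the two passes alternate until convergence.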
Deep Reinforcement Learning Hands-On (2nd edition) code: https://github.com/PacktPublishing/Deep-Reinforcement-Learning-Hands-On-Second-Edition
Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks: https://link.springer.com/chapter/10.1007/978-3-319-62416-7_19
Why is this the same: https://openreview.net/attachment?id=Hkxbz1HKvr&name=original_pdf
Survey: https://arxiv.org/pdf/2001.09684.pdf
Attack methods: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7958570