NeuralSec


Pinned

  1. certified-data-learnability (Public)

    [NDSS'25] "Provably Unlearnable Data Examples"

    Python · 3 stars · 1 fork

  2. camp-robust-rl (Public)

    [USENIX Security'25] "CAMP in the Odyssey: Provably Robust Reinforcement Learning with Certified Radius Maximization"

    Python · 2 stars · 1 fork

  3. Daedalus-attack (Public)

    The code of our paper "Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples", implemented in TensorFlow.

    Python · 52 stars · 8 forks

  4. Daedalus-physical (Public)

    Crafting physical Daedalus examples (a complementary repository to https://github.com/NeuralSec/Daedalus-attack).

    Python · 2 stars · 1 fork

  5. advVAE (Public)

    A VAE for generating adversarial examples in man-in-the-middle attack scenarios.

    Python · 2 stars · 1 fork