For educational purposes only, exhaustive samples of 500+ classic/modern trojan builders including screenshots.
Updated Sep 10, 2025
Image payload creation/injection tools
An open-source Python toolbox for backdoor attacks and defenses.
A framework for cybersecurity simulation and red-team operations: Windows auditing for newer vulnerabilities, misconfigurations, and privilege-escalation attacks, replicating the tactics and techniques of an advanced adversary in a network.
Hide your payload in a .jpg file.
A backdoors framework for deep learning and federated learning; a lightweight tool for conducting research on backdoors.
Code implementation of the paper "Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks", at IEEE Security and Privacy 2019.
A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them (no longer maintained)
A curated list of papers & resources on backdoor attacks and defenses in deep learning.
An open-source toolkit for textual backdoor attack and defense (NeurIPS 2022 D&B, Spotlight)
Experimental tools to backdoor large language models by rewriting their system prompts at the raw parameter level. This can potentially enable offline remote code execution without running any actual code on the victim's machine, or thwart LLM-based fraud/moderation systems.
WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021)
This is an implementation demo of the ICLR 2021 paper [Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks](https://openreview.net/pdf?id=9l0K4OM-oXE) in PyTorch.
Persistent Powershell backdoor tool {😈}
BackdoorSim: an educational introduction to Remote Administration Tools
You should never use malware to infiltrate a target system. With skill in writing and exploiting code, penetration can be done properly; the goal is to test and improve the security of open-source code.
ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" https://arxiv.org/abs/2206.10341
LoVerst is a backdoor generator and a set of backdoor-generating tools.
[ICCV 2023] Source code for our paper "Rickrolling the Artist: Injecting Invisible Backdoors into Text-Guided Image Generation Models".
[ICLR 2023, Best Paper Award at ECCV’22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning