Stars
Run Myo on Apple M1+ Macs using a Windows virtual machine, Max, and OSC.
Refine high-quality datasets and visual AI models
[IEEE S&P 2024] Exploring the Orthogonality and Linearity of Backdoor Attacks
Official repo for Detecting, Explaining, and Mitigating Memorization in Diffusion Models (ICLR 2024)
Official PyTorch implementation of the paper "An Edit Friendly DDPM Noise Space: Inversion and Manipulations" (CVPR 2024).
Python Fire is a library for automatically generating command line interfaces (CLIs) from absolutely any Python object.
[ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks?
A watermarking tool to protect artworks from AIGC-driven style mimicry (e.g. LoRA)
High-Resolution Image Synthesis with Latent Diffusion Models
Your browser's reference manager: automatic paper detection (Arxiv, OpenReview & more), publication venue matching and code repository discovery! Also enhances ArXiv: BibTex citation, Markdown link…
Match command-line arguments to their help text.
Code Repo for the NeurIPS 2023 paper "VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models"
🛡️[ICLR 2024] Toward effective protection against diffusion-based mimicry through score distillation, a.k.a. SDS-Attack
Anti-DreamBooth: Protecting users from personalized text-to-image synthesis (ICCV 2023)
Raising the Cost of Malicious AI-Powered Image Editing
Outpainting with Stable Diffusion on an infinite canvas
Diffusion attentive attribution maps for interpreting Stable Diffusion.
Using low-rank adaptation (LoRA) to quickly fine-tune diffusion models.
[NeurIPS 2024] Invisible Image Watermarks Are Provably Removable Using Generative AI
High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.