brian6091/Dreambooth

One script to rule them all

Fine-tune Stable Diffusion models using Dreambooth, Textual Inversion, Custom Diffusion, and/or Low-rank Adaptation (LoRA), all in one place.

Notebook that is less flexible but contains more explanations:

Open In Colab

$~$

Notebook that exposes all parameters:

Open In Colab

$~$

Tested with Tesla T4 and A100 GPUs on Google Colab (some configurations will not work on a T4 due to limited memory).

Tested with Stable Diffusion v1-5 and Stable Diffusion v2-base.

Some unique features:

  • Based on the main branch of Hugging Face Diffusers🧨, so it's easy to stay up to date
  • Mix-and-match different fine-tuning methods (LoRA X Dreambooth, Dreambooth X Textual Inversion, etc.)
  • Low-rank Adaptation (LoRA) for faster and more efficient fine-tuning (using cloneofsimo's implementation)
  • Data augmentation such as random cropping, flipping, and resizing, which can reduce the need to manually prep and crop images in certain cases (e.g., training a style)
  • More parameters for experimentation (modify the LoRA rank approximation, ADAM optimizer parameters, the cosine_with_restarts learning rate scheduler, etc.), all of which are dumped to a YAML file so you can remember what you did
  • Drop some text-conditioning to improve classifier-free guidance sampling (e.g., how SD v1-5 was fine-tuned)
  • Image captioning using filenames or associated text files
  • Training loss and prior class loss are tracked separately (and can be visualized using TensorBoard)
  • Option to generate exponentially-weighted moving average (EMA) weights for the UNet
  • Inference with trained models uses Diffusers🧨 pipelines and does not rely on any web apps
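To illustrate why LoRA makes fine-tuning faster and more memory-efficient, here is a minimal NumPy sketch of the core idea: a frozen weight matrix is adapted by a trainable low-rank product instead of being updated in full. The dimensions and variable names below are illustrative assumptions, not the repo's actual configuration.

```python
import numpy as np

# LoRA sketch: instead of updating the full weight matrix W (d_out x d_in),
# train a low-rank update B @ A with rank r << min(d_out, d_in).
# Shapes here are placeholders chosen for illustration.
d_out, d_in, r = 768, 768, 4

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight
A = rng.standard_normal((r, d_in))      # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection (zero init,
                                        # so training starts from the base model)

x = rng.standard_normal(d_in)
y = (W + B @ A) @ x                     # adapted forward pass

full_params = d_out * d_in              # parameters in a full update
lora_params = r * (d_out + d_in)        # parameters LoRA actually trains
print(lora_params / full_params)        # ~1% of the full update's parameters
```

Because B is initialized to zero, the adapted layer initially reproduces the frozen model exactly; only the small A and B matrices receive gradients, which is what makes LoRA runs fit on smaller GPUs such as the T4.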

$~$

Image comparing Dreambooth and LoRA (more information here):

full-size image here for the pixel-peepers

Buy Me A Coffee

Credits

This notebook was initially based on the Diffusers🧨 example, with elements from ShivamShrirao's fork.