**`examples/textual_inversion/README.md`** (9 additions, 1 deletion)
@@ -3,6 +3,15 @@
[Textual inversion](https://arxiv.org/abs/2208.01618) is a method to personalize text2image models like stable diffusion on your own images using just 3-5 examples.
The `textual_inversion.py` script shows how to implement the training procedure and adapt it for stable diffusion.
## Running on Colab

Colab for training
[](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb)

Colab for inference
[](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb)

## Running locally
### Installing the dependencies

Before running the scripts, make sure to install the library's training dependencies:
@@ -64,7 +73,6 @@ A full training run takes ~1 hour on one V100 GPU.
Once you have trained a model using the above command, inference can be done simply using the `StableDiffusionPipeline`. Make sure to include the `placeholder_token` in your prompt.
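For illustration, here is a minimal inference sketch along those lines. It assumes the training run saved the fine-tuned pipeline to a hypothetical `textual_inversion_cat` directory and that the learned placeholder token is `<cat-toy>`; both names are placeholders, not values taken from this diff.

```python
import torch
from diffusers import StableDiffusionPipeline

# Directory written by textual_inversion.py (hypothetical name).
model_path = "textual_inversion_cat"

# Load the fine-tuned weights; float16 on CUDA keeps memory usage low.
pipe = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16).to("cuda")

# The prompt must contain the learned placeholder token (assumed here to be "<cat-toy>").
prompt = "A <cat-toy> backpack"

image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("cat-backpack.png")
```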