Commit

update paint by example docs (huggingface#2598)
williamberman authored Mar 9, 2023
1 parent c812d97 commit 7fe638c
Showing 1 changed file with 3 additions and 5 deletions.
@@ -136,18 +136,16 @@ def prepare_mask_and_masked_image(image, mask):
 
 class PaintByExamplePipeline(DiffusionPipeline):
     r"""
-    Pipeline for text-guided image inpainting using Stable Diffusion. *This is an experimental feature*.
+    Pipeline for image-guided image inpainting using Stable Diffusion. *This is an experimental feature*.
     This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
     library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
     Args:
         vae ([`AutoencoderKL`]):
             Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
-        text_encoder ([`CLIPTextModel`]):
-            Frozen text-encoder. Stable Diffusion uses the text portion of
-            [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
-            the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
+        image_encoder ([`PaintByExampleImageEncoder`]):
+            Encodes the example input image. The unet is conditioned on the example image instead of a text prompt.
         tokenizer (`CLIPTokenizer`):
             Tokenizer of class
             [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
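The hunk header above names `prepare_mask_and_masked_image(image, mask)`, the helper that builds the inputs the inpainting unet conditions on: a binarized mask and a copy of the image with the masked region blanked out. Below is a minimal NumPy sketch of that masking step; it is an illustration, not the diffusers implementation (the real helper also handles PIL images and torch tensors, and its shapes and mask convention may differ). The [0, 1] float layout and the "1 = region to repaint" convention are assumptions made for this sketch.

```python
import numpy as np

def prepare_mask_and_masked_image(image, mask):
    """Simplified sketch of the helper named in the hunk header.

    Assumes `image` is a float array in [0, 1] with shape (H, W, C) and
    `mask` is a float array in [0, 1] with shape (H, W), where 1 marks
    the region to be repainted (an assumption for this sketch).
    """
    # Binarize the mask: anything >= 0.5 counts as "to be repainted".
    mask = (mask >= 0.5).astype(image.dtype)
    # Zero out the masked pixels so the model only sees the kept region;
    # broadcasting (1 - mask) over the channel axis applies it per pixel.
    masked_image = image * (1.0 - mask)[..., None]
    return mask, masked_image
```

The docstring change in the diff reflects the same idea at the pipeline level: `PaintByExamplePipeline` fills the blanked region by conditioning on an encoded example image (via `PaintByExampleImageEncoder`) rather than on an encoded text prompt.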
