
ComfyUI_IPAdapter_Plus

ComfyUI reference implementation for IPAdapter models.

The code is mostly taken from the original IPAdapter repository and laksjdjf's implementation; all credit goes to them. I just brought the extension closer to the ComfyUI philosophy.

Example workflow

(Image: IPAdapter example workflow)

Installation

Download or git clone this repository into the ComfyUI/custom_nodes/ directory.
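
A minimal command-line install, assuming ComfyUI's standard directory layout (adjust the clone URL if you are installing from a fork):

```bash
cd ComfyUI/custom_nodes
git clone https://github.com/nomand/ComfyUI_IPAdapter_plus.git
```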

The pre-trained models are available on Hugging Face; download them and place them in the ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/models directory.

For SD1.5 you need:

- ip-adapter_sd15.bin
- ip-adapter-plus_sd15.bin
- ip-adapter-plus-face_sd15.bin

For SDXL you need:

- ip-adapter_sdxl.bin

Additionally you need the CLIP vision models (the image encoders published alongside the IPAdapter checkpoints, one for SD1.5 and one for SDXL).
You can rename them to something easier to remember (e.g. ip-adapter_sd15-image-encoder.bin) and place them under ComfyUI/models/clip_vision/.
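
With the suggested renames, the resulting layout would look roughly like this (the SDXL encoder name simply follows the same pattern and is only a suggestion):

```
ComfyUI/
├── custom_nodes/ComfyUI_IPAdapter_plus/models/
│   ├── ip-adapter_sd15.bin
│   ├── ip-adapter-plus_sd15.bin
│   ├── ip-adapter-plus-face_sd15.bin
│   └── ip-adapter_sdxl.bin
└── models/clip_vision/
    ├── ip-adapter_sd15-image-encoder.bin
    └── ip-adapter_sdxl-image-encoder.bin
```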

How to use

There's a basic workflow included in this repo and a few more in the examples directory.

IMPORTANT: To use the IPAdapter Plus models (base and face) you must use the new CLIP Vision Encode (IPAdapter) node. The non-plus version works with both the standard CLIP Vision Encode and the new one.

IPAdapter + Canny ControlNet

The model is very effective when paired with a ControlNet. In the example below I experimented with Canny. The workflow is in the examples directory.

(Image: IPAdapter combined with a Canny ControlNet)

IPAdapter Face

IPAdapter offers an interesting model for a kind of "face swap" effect. The workflow is provided.

(Image: face swap example)

Masking

Masking in img2img generally works but I find inpainting to be far more effective. The inpainting workflow uses the face model together with an inpainting checkpoint.

(Image: inpainting example)

Important: when passing a mask to the IPAdapter Apply node, make sure the mask has the same size as the latent.
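
To illustrate what "same size as the latent" means in practice: Stable Diffusion latents are 8x smaller than the image in each dimension, so a mask drawn at image resolution must be downscaled to the latent grid before it is applied. The helper below is a hypothetical sketch in PyTorch, not a node from this extension:

```python
import torch
import torch.nn.functional as F

def resize_mask_to_latent(mask: torch.Tensor, latent: torch.Tensor) -> torch.Tensor:
    """Downscale a (H, W) mask to match the latent's spatial size.

    A 512x512 image produces a 64x64 latent, so the mask has to be
    resized from the pixel grid to the latent grid.
    """
    _, _, lh, lw = latent.shape            # latent is (B, C, H/8, W/8)
    mask = mask[None, None].float()        # -> (1, 1, H, W) for interpolate
    mask = F.interpolate(mask, size=(lh, lw), mode="bilinear", align_corners=False)
    return mask[0, 0]                      # back to (H', W')

# e.g. a 512x512 mask resized for a 64x64 latent
mask = torch.ones(512, 512)
latent = torch.zeros(1, 4, 64, 64)
print(resize_mask_to_latent(mask, latent).shape)  # torch.Size([64, 64])
```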
