Changelog:
- add FreeU hack from https://huggingface.co/papers/2309.11497
- add option to apply FreeU before or after controlnet outputs
- add inpaint-softedge and temporal-depth controlnet models
- auto-download inpaint-softedge and temporal-depth checkpoints
- fix sd21 lineart model not working
- refactor get_controlnet_annotations a bit
- add inpaint-softedge and temporal-depth controlnet preprocessors
- fix controlnet preview (next_frame error)
- fix dwpose 'final_boxes' error for frames with no people
- move width_height to the video init cell so people don't forget to run it to update width_height
- fix xformers version
- fix flow preview error for fewer than 10 frames
- fix pillow errors (UnidentifiedImageError: cannot identify image file)
- fix timm import error (IsADirectoryError)
- deprecate v2_depth model (use the depth controlnet instead)
- fix pytorch dependencies error
- fix zoe depth error
- move installers to the github repo
FreeU
GUI - misc - apply_freeu_after_control, do_freeunet
This hack lowers the effect of the Stable Diffusion UNet's residual skip connections, prioritizing the core concepts in the image over high-frequency details. As you can see in the video, with FreeU on the image seems less cluttered, but still keeps enough high-frequency detail. apply_freeu_after_control applies the hack after the controlnet outputs have been added, which in my tests produced slightly worse results.
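The core FreeU operation can be sketched in a few lines: amplify the backbone features and damp the low-frequency components of the skip features with a Fourier mask. This is a minimal numpy illustration of the idea from the paper, not the notebook's actual implementation; the function names, default `b`/`s` values, and the half-channel split are assumptions for the sake of the example.

```python
import numpy as np

def fourier_filter(x, threshold, scale):
    """Scale the low-frequency components of a (C, H, W) feature map."""
    x_freq = np.fft.fftshift(np.fft.fft2(x), axes=(-2, -1))
    C, H, W = x.shape
    mask = np.ones((C, H, W))
    cr, cc = H // 2, W // 2
    # after fftshift the DC / low frequencies sit at the center of the map
    mask[:, cr - threshold:cr + threshold, cc - threshold:cc + threshold] = scale
    x_out = np.fft.ifft2(np.fft.ifftshift(x_freq * mask, axes=(-2, -1)))
    return x_out.real

def free_u(backbone, skip, b=1.2, s=0.9, threshold=1):
    """FreeU: amplify backbone features, damp low-frequency skip content."""
    backbone = backbone.astype(float)
    backbone[: backbone.shape[0] // 2] *= b  # scale half the channels, as in the paper
    skip = fourier_filter(skip, threshold, s)
    return backbone, skip
```

With b > 1 and s < 1 the denoiser leans more on the backbone's semantic features and less on the skip connections' low-frequency content, which is what produces the "less cluttered" look.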
Inpaint-softedge controlnet
I've experimented with mixed-input controlnets. This one works the same way as the inpaint controlnet, plus it uses softedge input for the inpainted area, so it relies not only on the masked area's surroundings, but also on the softedge filter's output for the masked area, which gives a little more control.
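Conceptually the mixed input is a simple compositing step: outside the mask the conditioning image is the original frame, inside the mask it is the softedge detector's output. A hypothetical sketch (the function name and exact blending are my assumptions, not the real preprocessor):

```python
import numpy as np

def inpaint_softedge_cond(image, mask, edges):
    """Compose a conditioning image: original pixels outside the mask,
    softedge detector output inside the masked (inpainted) region."""
    # image, edges: float arrays of the same shape; mask: boolean array
    return np.where(mask, edges, image)
```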
Temporal-depth controlnet
This one takes the previous frame, the current frame's depth, and the next frame's depth as its inputs.
These controlnets are experimental; you can try replacing some of your current controlnets with them, e.g. swap depth for temporal-depth, or inpaint for inpaint-softedge.
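One plausible way to feed the temporal-depth controlnet its three inputs is to stack them along the channel axis into a single conditioning tensor. This layout (RGB previous frame plus two single-channel depth maps) is an assumption for illustration; the actual model may pack its inputs differently.

```python
import numpy as np

def temporal_depth_cond(prev_frame, cur_depth, next_depth):
    """Stack the three conditioning inputs along the channel axis.
    prev_frame: (H, W, 3) RGB; cur_depth, next_depth: (H, W, 1) depth maps."""
    return np.concatenate([prev_frame, cur_depth, next_depth], axis=-1)
```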