This repository is for the paper:
UniControl: A Unified Diffusion Model for Controllable Visual Generation In the Wild
Can Qin 1,2, Shu Zhang 1, Ning Yu 1, Yihao Feng 1, Xinyi Yang 1, Yingbo Zhou 1, Huan Wang 1, Juan Carlos Niebles 1, Caiming Xiong 1, Silvio Savarese 1, Stefano Ermon 3, Yun Fu 2, Ran Xu 1
1 Salesforce AI, 2 Northeastern University, 3 Stanford University
Work done when Can Qin was an intern at Salesforce AI Research.
Achieving machine autonomy and human control often represent divergent objectives in the design of interactive AI systems. Visual generative foundation models such as Stable Diffusion show promise in navigating these goals, especially when prompted with arbitrary languages. However, they often fall short in generating images with spatial, structural, or geometric controls. The integration of such controls, which can accommodate various visual conditions in a single unified model, remains an unaddressed challenge. In response, we introduce UniControl, a new generative foundation model that consolidates a wide array of controllable condition-to-image (C2I) tasks within a singular framework, while still allowing for arbitrary language prompts. UniControl enables pixel-level-precise image generation, where visual conditions primarily influence the generated structures and language prompts guide the style and context. To equip UniControl with the capacity to handle diverse visual conditions, we augment pretrained text-to-image diffusion models and introduce a task-aware HyperNet to modulate the diffusion models, enabling the adaptation to different C2I tasks simultaneously. Trained on nine unique C2I tasks, UniControl demonstrates impressive zero-shot generation abilities with unseen visual conditions. Experimental results show that UniControl often surpasses the performance of single-task-controlled methods of comparable model sizes. This control versatility positions UniControl as a significant advancement in the realm of controllable visual generation.
- 05/18/23: UniControl paper uploaded to arXiv.
- 05/26/23: UniControl inference code and checkpoint open to public.
- 05/28/23: Latest UniControl model checkpoint (1.4B #params, 5.78GB) updated.
Set up the environment first (this may take a few minutes):
conda env create -f environment.yaml
conda activate unicontrol
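After activating the environment, a quick sanity check can confirm that PyTorch and CUDA are visible. This is a minimal sketch and not part of the official setup; it assumes environment.yaml installs PyTorch with CUDA support, which the inference code requires:
# Minimal environment sanity check (assumes PyTorch is installed by environment.yaml).
import torch
print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("GPU:", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "none")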
The pre-trained UniControl model checkpoint should be saved at ./ckpts/unicontrol.ckpt. Download it as follows:
mkdir ckpts
cd ckpts
wget https://storage.googleapis.com/sfr-unicontrol-data-research/unicontrol.ckpt
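Optionally, you can verify that the download loads correctly. This is a hedged sketch: it assumes a standard PyTorch checkpoint, possibly wrapped under a state_dict key, and the exact layout may differ.
# Optional check: load the checkpoint on CPU and count its parameters.
import torch
ckpt = torch.load("./ckpts/unicontrol.ckpt", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
num_params = sum(t.numel() for t in state_dict.values() if hasattr(t, "numel"))
print(f"tensors: {len(state_dict)}, parameters: {num_params / 1e9:.2f}B")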
The example inference data are already provided in ./data and ./test_imgs_CN.
For the different tasks, please run the commands below (a batch-run sketch follows the list). If you encounter an out-of-memory (OOM) error, please decrease --num_samples.
Canny to Image Generation:
python inference_demo.py --ckpt ./ckpts/unicontrol.ckpt --task canny
HED Edge to Image Generation:
python inference_demo.py --ckpt ./ckpts/unicontrol.ckpt --task hed
HED-like Sketch to Image Generation:
python inference_demo.py --ckpt ./ckpts/unicontrol.ckpt --task hedsketch
Depth Map to Image Generation:
python inference_demo.py --ckpt ./ckpts/unicontrol.ckpt --task depth
Surface Normal Map to Image Generation:
python inference_demo.py --ckpt ./ckpts/unicontrol.ckpt --task normal
Segmentation Map to Image Generation:
python inference_demo.py --ckpt ./ckpts/unicontrol.ckpt --task seg
Human Skeleton to Image Generation:
python inference_demo.py --ckpt ./ckpts/unicontrol.ckpt --task openpose
Object Bounding Boxes to Image Generation:
python inference_demo.py --ckpt ./ckpts/unicontrol.ckpt --task bbox
Image Outpainting:
python inference_demo.py --ckpt ./ckpts/unicontrol.ckpt --task outpainting
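As referenced above, to run several pre-training tasks back to back you can loop over the task names with a small wrapper. This is a sketch only: it simply shells out to inference_demo.py with the --ckpt and --task flags documented above.
# Hypothetical convenience wrapper: run inference_demo.py for each pre-training task in turn.
# If you hit OOM errors, append the documented "--num_samples" flag with a smaller value (e.g., "1").
import subprocess

TASKS = ["canny", "hed", "hedsketch", "depth", "normal", "seg", "openpose", "bbox", "outpainting"]
for task in TASKS:
    print(f"=== task: {task} ===")
    subprocess.run(
        ["python", "inference_demo.py", "--ckpt", "./ckpts/unicontrol.ckpt", "--task", task],
        check=True,
    )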
Gradio Demo (App Demo Video; CUDA 11.0 and Conda 4.12.0 are known to work)
We provide Gradio demos for the different tasks. The example images are saved at ./test_imgs.
For all the tasks (Canny, HED, Sketch, Depth, Normal, Human Pose, Seg, Bbox, Outpainting), please run the following command:
python gradio_all_tasks.py
We support direct condition-to-image generation (as shown above). Please uncheck Condition Extraction in the UI if you want to upload a condition image directly; the sketch below illustrates the difference.
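To illustrate what the toggle means (a hypothetical sketch, not the demo's actual code): with Condition Extraction checked, the condition map is derived from your RGB upload; unchecked, the upload is treated as the condition map itself. Using the canny task as an example:
# Hypothetical illustration of the Condition Extraction toggle (canny task as the example).
import cv2
import numpy as np

def prepare_condition(image_bgr: np.ndarray, extract: bool) -> np.ndarray:
    if extract:
        # Checked: derive the condition (here, Canny edges) from the RGB input.
        edges = cv2.Canny(image_bgr, 100, 200)
        return cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)
    # Unchecked: the uploaded image is already the condition map.
    return image_bgr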
Alternatively, we provide task-specific Gradio demos:
Canny to Image Generation:
python gradio_canny2image.py
HED Edge to Image Generation:
python gradio_hed2image.py
HED-like Sketch to Image Generation:
python gradio_hedsketch2image.py
Depth Map to Image Generation:
python gradio_depth2image.py
Surface Normal Map to Image Generation:
python gradio_normal2image.py
Segmentation Map to Image Generation:
python gradio_seg2image.py
Human Skeleton to Image Generation:
python gradio_pose2image.py
Object Bounding Boxes to Image Generation:
python gradio_bbox2image.py
Image Outpainting:
python gradio_outpainting.py
- Data Preparation
- Pre-training Tasks Inference
- Gradio Demo
- Zero-shot Tasks Inference
- Model Training
- Negative prompts are sometimes very useful, e.g., "monochrome, lowres, bad anatomy, worst quality, low quality"; the sketch below shows where such a prompt enters the sampler.
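For context, this is a minimal sketch of how a negative prompt typically enters classifier-free guidance in ControlNet-style samplers; denoiser, encode_text, and the tensor shapes below are placeholders, not UniControl's actual API.
# Placeholder sketch of classifier-free guidance with a negative prompt.
import torch

def encode_text(prompt: str) -> torch.Tensor:
    # Stand-in for the real text encoder (e.g., CLIP) used by the model.
    return torch.randn(1, 77, 768)

def denoiser(x_t: torch.Tensor, t: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
    # Stand-in for the real noise-prediction UNet.
    return torch.randn_like(x_t)

x_t = torch.randn(1, 4, 64, 64)   # noisy latent
t = torch.tensor([500])           # diffusion timestep
guidance_scale = 9.0

eps_cond = denoiser(x_t, t, encode_text("a photo of a cat"))
eps_neg = denoiser(x_t, t, encode_text("monochrome, lowres, bad anatomy, worst quality, low quality"))

# The negative prompt replaces the empty unconditional prompt, steering the
# sample away from the listed attributes.
eps = eps_neg + guidance_scale * (eps_cond - eps_neg)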
If you find this project useful for your research, please kindly cite our paper:
@article{qin2023unicontrol,
title={UniControl: A Unified Diffusion Model for Controllable Visual Generation In the Wild},
author={Qin, Can and Zhang, Shu and Yu, Ning and Feng, Yihao and Yang, Xinyi and Zhou, Yingbo and Wang, Huan and Niebles, Juan Carlos and Xiong, Caiming and Savarese, Silvio and others},
journal={arXiv preprint arXiv:2305.11147},
year={2023}
}
This project is built upon the giant shoulders of ControlNet and Stable Diffusion. Great thanks to them!
Stable Diffusion https://github.com/CompVis/stable-diffusion
ControlNet https://github.com/lllyasviel/ControlNet
StyleGAN3 https://github.com/NVlabs/stylegan3