This repo provides the optimization of LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control by converting it to ONNX and TensorRT models. We are actively updating and improving this repository. If you find any bugs or have suggestions, feel free to open an issue or submit a pull request (PR).
We have also added the following features:
- Real-time demo with ONNX models
- TensorRT runtime with the latest TensorRT version. You should run it on Colab; it cannot be used on Windows yet.
- Face-ID adapter to control which face is animated in an image containing multiple faces
- ControlNet for Stable Diffusion coming soon. Stay tuned!
[x] 20/07/2024: TensorRT engine code and demo
[x] 22/07/2024: Support multiple faces
[x] 22/07/2024: Face-ID adapter for controlling face animation
[x] 24/07/2024: Multiple face motions in the driving video to animate multiple faces in the source image
[x] 28/07/2024: Support Video2Video Live Portrait (single face only)
[x] 30/07/2024: Support SDXL-Lightning ControlNet-OpenPose (1 to 8 steps) to turn the source image into an art-style image
[ ] Integrate Animate-Diff Lightning motion module
git clone https://github.com/aihacker111/Efficient-Live-Portrait
# create env using conda
conda create -n ELivePortrait python==3.10.14
conda activate ELivePortrait
# install dependencies with CPU
pip install -r requirements-cpu.txt
# install dependencies with GPU
pip install -r requirements-gpu.txt
# install dependencies with pip for mps
pip install -r requirements-mps.txt
Note: make sure your system has FFmpeg installed!
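As a quick sanity check (our own commands, not part of this repo), you can confirm that ONNX Runtime reports the execution provider you expect and that FFmpeg is reachable on your PATH:

# should list 'CUDAExecutionProvider' on a working GPU install, 'CPUExecutionProvider' otherwise
python -c "import onnxruntime as ort; print(ort.get_available_providers())"
# should print FFmpeg version info; an error means FFmpeg is missing from PATH
ffmpeg -version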
The pretrained weights are also downloaded automatically; you don't need to download the models and place them into the source code yourself.
pretrained_weights
├── landmarks
│   └── models
│       ├── buffalo_l
│       │   ├── 2d106det.onnx
│       │   └── det_10g.onnx
│       └── landmark.onnx
└── live_portrait
    ├── appearance_feature_extractor.onnx
    ├── motion_extractor.onnx
    ├── generator_warping.onnx
    ├── stitching_retargeting.onnx
    ├── stitching_retargeting_eye.onnx
    ├── stitching_retargeting_lip.onnx
    ├── appearance_feature_extractor_fp32.engine
    ├── motion_extractor_fp32.engine
    ├── generator_fp32.engine
    ├── stitching_fp32.engine
    ├── stitching_eye_fp32.engine
    ├── stitching_lip_fp32.engine
    ├── appearance_feature_extractor_fp16.engine
    ├── motion_extractor_fp16.engine
    ├── generator_fp16.engine
    ├── stitching_fp16.engine
    ├── stitching_eye_fp16.engine
    └── stitching_lip_fp16.engine
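Note that TensorRT engines are tied to the GPU and TensorRT version they were built with. If you ever need to rebuild one of the .engine files above for your own setup, a standard trtexec invocation (trtexec ships with TensorRT; this is not a script from this repo) looks like the following, shown here for a hypothetical FP16 rebuild of the motion extractor:

trtexec --onnx=pretrained_weights/live_portrait/motion_extractor.onnx \
        --saveEngine=pretrained_weights/live_portrait/motion_extractor_fp16.engine \
        --fp16
# omit --fp16 to build the FP32 variant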
- TensorRT FP32 seems slower than FP16 but produces better results, so choose between them carefully. We do not recommend the ONNX models, since they have not yet been updated with the grid-sample and speed fixes.
- If you want the highest-quality result, do not use FP16; it will run slower than FP16, but the result is better.

To run Face-ID mode:
python run_live_portrait.py --driving_video 'path/to/your/video/driving/or/webcam/id' --source_image 'path/to/your/image/want/to/animation' -condition_image 'path/the/single/face/image/to/compute/face-id' --task ['image', 'video', 'webcam'] --run_time --half_precision --use_face_id
To run Multiple Face Motion mode:
python run_live_portrait.py --driving_video 'path/to/your/video/driving/or/webcam/id' --source_image 'path/to/your/image/want/to/animation' --task ['image', 'video', 'webcam'] --run_time --half_precision
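For example, with hypothetical file names (replace them with your own paths):

python run_live_portrait.py --driving_video assets/driving.mp4 --source_image assets/group_photo.jpg --task image --run_time --half_precision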
For Vid2Vid Live Portrait:
python run_live_portrait.py --driving_video 'path/to/your/video/driving/or/webcam/id' --source_video 'path/to/your/video/want/to/animation' --task ['image', 'video', 'webcam'] --run_time --half_precision
For SDXL-Lightning + OpenPose LoRA + Live Portrait:
python /content/Efficient-Live-Portrait/run_live_portrait.py --driving_video 'path/to/your/video' --source_image 'path/to/your/image/want/to/animation' --run_time --task image --use_diffusion --lcm_steps [1, 2, 4, 8] --prompt '1girl, offshoulder, light smile, shiny skin best quality, masterpiece, photorealistic'
Follow the notebook in the colab folder.
We'll release it soon