Efficient-Live-Portrait

πŸ“Ή SDXL-Lightning + Controlnet-Open-Pose + Live-Portrait

368220873_826368889022136_4472311944594836999_n--d6_concat.mp4
368220873_826368889022136_4472311944594836999_n--d6_audio.mp4

πŸ“Ή Video2Video Demo

d0--d6_vid2vid_audio.2.mp4

πŸ“Ή Video Demo for normal mode

348371303-0716a9f3-531b-4876-af2d-afe54b04e2ef.mp4

πŸ“Ή Video Demo for Face-ID mode

  • Single Face Image 368220873_826368889022136_4472311944594836999_n

  • Through Face-ID adapter

img_1--d2.mp4

Introduction

This repo is an optimized version of LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control, with the models converted to ONNX and TensorRT. We are actively updating and improving this repository. If you find any bugs or have suggestions, feel free to open issues or submit pull requests (PRs) πŸ’–.
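For context, a typical ONNX-export plus TensorRT-engine-build workflow looks roughly like the sketch below. This is a generic illustration with placeholder names and shapes, not the exact conversion scripts used for this repo.

# Generic conversion sketch (illustrative only): export a PyTorch module to ONNX,
# then build fp32/fp16 TensorRT engines from it.
import torch
import tensorrt as trt

def export_onnx(model: torch.nn.Module, dummy_input: torch.Tensor, onnx_path: str):
    model.eval()
    torch.onnx.export(model, dummy_input, onnx_path, opset_version=17,
                      input_names=["input"], output_names=["output"])

def build_engine(onnx_path: str, engine_path: str, fp16: bool = False):
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))
    config = builder.create_builder_config()
    if fp16:
        config.set_flag(trt.BuilderFlag.FP16)  # produces the *_fp16.engine variants
    with open(engine_path, "wb") as f:
        f.write(builder.build_serialized_network(network, config))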

We have also added the following features:

  • Real-time demo with ONNX models
  • TensorRT runtime with the latest TensorRT version. Run it on Colab; it is not yet usable on Windows
  • Face-ID adapter to control which face gets animated in a multi-face image (a rough sketch of the idea follows this list)
  • ControlNet + Stable Diffusion support coming soon. Stay tuned
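
The rough idea behind the Face-ID adapter can be sketched standalone as follows. This is an assumption about the approach (matching ArcFace embeddings against the condition image), not the repo's actual implementation, and the file names are hypothetical.

# Illustrative Face-ID selection sketch: embed the single-face condition image and every
# face detected in the multi-face source, then keep the closest match for animation.
import cv2
import numpy as np
from insightface.app import FaceAnalysis

app = FaceAnalysis(name="buffalo_l")  # same detector family as pretrained_weights/landmarks
app.prepare(ctx_id=0, det_size=(640, 640))

condition_face = app.get(cv2.imread("condition_face.jpg"))[0]   # hypothetical paths
source_faces = app.get(cv2.imread("multi_face_source.jpg"))

# normed_embedding is already L2-normalized, so the dot product is cosine similarity
best = max(source_faces,
           key=lambda f: float(np.dot(f.normed_embedding, condition_face.normed_embedding)))
print("Animate the face with bounding box:", best.bbox)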

Features

[βœ…] 20/07/2024: TensorRT engine code and demo

[βœ…] 22/07/2024: Support for multiple faces

[βœ…] 22/07/2024: Face-ID adapter for controlling face animation

[βœ…] 24/07/2024: Multiple face motions in the driving video for animating multiple faces in one image

[βœ…] 28/07/2024: Video2Video Live Portrait support (single face only)

[βœ…] 30/07/2024: SDXL-Lightning + ControlNet-OpenPose (1 to 8 steps) to turn the source image into an art image

[ ] Integrate the AnimateDiff-Lightning motion module

πŸ”₯ Getting Started

1. Clone the code and prepare the environment

git clone https://github.com/aihacker111/Efficient-Live-Portrait
# create the env using conda
conda create -n ELivePortrait python=3.10.14
conda activate ELivePortrait
# install dependencies for CPU
pip install -r requirements-cpu.txt
# install dependencies for GPU
pip install -r requirements-gpu.txt
# install dependencies for Apple Silicon (MPS)
pip install -r requirements-mps.txt

Note: make sure your system has FFmpeg installed!
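
As an optional sanity check (not part of the official setup), you can confirm which onnxruntime execution providers are available and that FFmpeg is on your PATH:

# Optional environment check: list ONNX Runtime backends and look for FFmpeg.
import shutil
import onnxruntime as ort

print("ONNX Runtime providers:", ort.get_available_providers())  # e.g. CUDA / CoreML / CPU
print("FFmpeg found:", shutil.which("ffmpeg") is not None)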

2. Download pretrained weights

The pretrained weights are downloaded automatically; you don't need to download them manually or place them in the source tree.

pretrained_weights
β”œβ”€β”€ landmarks
β”‚   └── models
β”‚       β”œβ”€β”€ buffalo_l
β”‚       β”‚   β”œβ”€β”€ 2d106det.onnx
β”‚       β”‚   └── det_10g.onnx
β”‚       └── landmark.onnx
└── live_portrait
    β”œβ”€β”€ appearance_feature_extractor.onnx
    β”œβ”€β”€ motion_extractor.onnx
    β”œβ”€β”€ generator_warping.onnx
    β”œβ”€β”€ stitching_retargeting.onnx
    β”œβ”€β”€ stitching_retargeting_eye.onnx
    β”œβ”€β”€ stitching_retargeting_lip.onnx
    β”œβ”€β”€ appearance_feature_extractor_fp32.engine
    β”œβ”€β”€ motion_extractor_fp32.engine
    β”œβ”€β”€ generator_fp32.engine
    β”œβ”€β”€ stitching_fp32.engine
    β”œβ”€β”€ stitching_eye_fp32.engine
    β”œβ”€β”€ stitching_lip_fp32.engine
    β”œβ”€β”€ appearance_feature_extractor_fp16.engine
    β”œβ”€β”€ motion_extractor_fp16.engine
    β”œβ”€β”€ generator_fp16.engine
    β”œβ”€β”€ stitching_fp16.engine
    β”œβ”€β”€ stitching_eye_fp16.engine
    └── stitching_lip_fp16.engine
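
If the automatic download ever fails, a small script like the one below (a convenience sketch, not something shipped with the repo) can confirm that the ONNX part of the layout above is in place:

# Convenience sketch: verify the expected pretrained_weights ONNX layout.
from pathlib import Path

EXPECTED = [
    "landmarks/models/buffalo_l/2d106det.onnx",
    "landmarks/models/buffalo_l/det_10g.onnx",
    "landmarks/models/landmark.onnx",
    "live_portrait/appearance_feature_extractor.onnx",
    "live_portrait/motion_extractor.onnx",
    "live_portrait/generator_warping.onnx",
    "live_portrait/stitching_retargeting.onnx",
    "live_portrait/stitching_retargeting_eye.onnx",
    "live_portrait/stitching_retargeting_lip.onnx",
]

root = Path("pretrained_weights")
missing = [p for p in EXPECTED if not (root / p).is_file()]
print("All ONNX weights present." if not missing else f"Missing files: {missing}")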
      

3. Inference and Real-time Demo πŸš€

Fast hands-on

  • The TensorRT FP32 engines are slower than FP16 but produce better results, so choose between the two carefully. We don't recommend the ONNX models for now, since they haven't yet received the grid-sample fix and speed updates.
  • If you want the best quality, drop the FP16 option (--half_precision); it runs slower than FP16 but gives better results. A minimal engine-loading sketch follows the Face-ID command below.

To run Face-ID mode:
python run_live_portrait.py --driving_video 'path/to/your/video/driving/or/webcam/id' --source_image 'path/to/your/image/want/to/animation' -condition_image 'path/the/single/face/image/to/compute/face-id' --task ['image', 'video', 'webcam'] --run_time --half_precision --use_face_id 
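
On the FP32/FP16 point above: in practice the --half_precision flag most likely just selects which serialized engine files get deserialized. A minimal, generic TensorRT loading sketch (an assumption about the runtime, not the repo's actual code) looks like this:

# Generic TensorRT engine loading sketch; file names follow the layout from step 2.
import tensorrt as trt

def load_engine(path: str):
    logger = trt.Logger(trt.Logger.WARNING)
    with open(path, "rb") as f, trt.Runtime(logger) as runtime:
        return runtime.deserialize_cuda_engine(f.read())

half_precision = True  # mirrors the --half_precision flag (assumption)
suffix = "fp16" if half_precision else "fp32"
generator = load_engine(f"pretrained_weights/live_portrait/generator_{suffix}.engine")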

To run Multiple Face Motion mode:

python run_live_portrait.py --driving_video 'path/to/your/video/driving/or/webcam/id' --source_image 'path/to/your/image/want/to/animation'  --task ['image', 'video', 'webcam'] --run_time --half_precision

To run Vid2Vid Live Portrait:

python run_live_portrait.py --driving_video 'path/to/your/video/driving/or/webcam/id' --source_video 'path/to/your/video/want/to/animation'  --task ['image', 'video', 'webcam'] --run_time --half_precision

To run SDXL-Lightning + OpenPose-Lora + Live Portrait:

python /content/Efficient-Live-Portrait/run_live_portrait.py --driving_video 'path/to/your/video' --source_image 'path/to/your/image/want/to/animation'  --run_time --task image --use_diffusion --lcm_steps [1, 2, 4, 8] --prompt '1girl, offshoulder, light smile, shiny skin best quality, masterpiece, photorealistic'
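
For reference, the SDXL-Lightning + OpenPose combination can also be reproduced standalone with diffusers, roughly as in the sketch below. The Hugging Face repo IDs and the pre-computed OpenPose image are assumptions for illustration, not necessarily the checkpoints wired into run_live_portrait.py.

# Rough diffusers sketch: SDXL-Lightning (4-step variant) + an SDXL OpenPose ControlNet.
import torch
from diffusers import (ControlNetModel, EulerDiscreteScheduler,
                       StableDiffusionXLControlNetPipeline, UNet2DConditionModel)
from diffusers.utils import load_image
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

base = "stabilityai/stable-diffusion-xl-base-1.0"
controlnet = ControlNetModel.from_pretrained(
    "thibaud/controlnet-openpose-sdxl-1.0", torch_dtype=torch.float16)  # assumed ControlNet repo

# Distilled SDXL-Lightning UNet; 1/2/4/8-step checkpoints exist, matching --lcm_steps.
unet = UNet2DConditionModel.from_config(base, subfolder="unet").to("cuda", torch.float16)
unet.load_state_dict(load_file(
    hf_hub_download("ByteDance/SDXL-Lightning", "sdxl_lightning_4step_unet.safetensors"),
    device="cuda"))

pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    base, unet=unet, controlnet=controlnet, torch_dtype=torch.float16, variant="fp16").to("cuda")
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="trailing")  # required for Lightning checkpoints

pose = load_image("openpose_condition.png")  # hypothetical pre-computed OpenPose map
image = pipe("1girl, offshoulder, light smile, shiny skin, best quality, masterpiece, photorealistic",
             image=pose, num_inference_steps=4, guidance_scale=0).images[0]
image.save("art_source_image.png")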

Colab Demo

See the notebooks in the colab folder.

4. Inference speed evaluation πŸš€πŸš€πŸš€

We'll release it soon.

About

Fast-running Live Portrait with TensorRT and ONNX models.
