VideoCrafter1: Open Diffusion Models for High-Quality Video Generation


🔥🔥 VideoCrafter1 for high-quality video generation is now released! Please join us and create your own film on Discord/Floor33.

Floor33 | Film


🔆 Introduction

🤗🤗🤗 VideoCrafter is an open-source video generation and editing toolbox for crafting video content.
It currently includes the Text2Video and Image2Video models:

1. Generic Text-to-video Generation

Click the GIFs to access the high-resolution videos. Example prompts:

  • "A girl is looking at the camera smiling. High Definition."
  • "an astronaut running away from a dust storm on the surface of the moon, the astronaut is running towards the camera, cinematic"
  • "A giant spaceship is landing on mars in the sunset. High Definition."
  • "A blue unicorn flying over a mystical land"

2. Generic Image-to-video Generation

"a black swan swims on the pond" "a girl is riding a horse fast on grassland" "a boy sits on a chair facing the sea" "two galleons moving in the wind at sunset"

📝 Changelog

  • [2023.10.13]: 🔥🔥 Release VideoCrafter1 for high-quality video generation!

  • [2023.08.14]: Release a new version of VideoCrafter on Discord/Floor33. Please join us to create your own film!

  • [2023.04.18]: Release a VideoControl model with most of the watermarks removed!

  • [2023.04.05]: Release pretrained Text-to-Video models, VideoLora models, and inference code.


⏳ Models

Model         Resolution   Checkpoint
Text2Video    576x1024     Hugging Face
Image2Video   320x512      Hugging Face

⚙️ Setup

1. Install Environment via Anaconda (Recommended)

conda create -n videocrafter python=3.8.5
conda activate videocrafter
pip install -r requirements.txt
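
Before downloading checkpoints, it can help to confirm that the environment is usable; this quick check is a sketch that assumes requirements.txt installs a CUDA-enabled PyTorch build.

conda activate videocrafter
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"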

💫 Inference

1. Text-to-Video

  1. Download the pretrained T2V model from Hugging Face and place model.ckpt at checkpoints/base_1024_v1/model.ckpt.
  2. Run the following command in the terminal (a download-and-run sketch follows below):
  sh scripts/run_text2video.sh
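
A minimal end-to-end sketch of these two steps using huggingface-cli; the repo id VideoCrafter/Text2Video-1024 is an assumption, so confirm the exact id and file name on the model card before running.

# Sketch only -- the Hugging Face repo id is an assumption; verify it first.
mkdir -p checkpoints/base_1024_v1
huggingface-cli download VideoCrafter/Text2Video-1024 model.ckpt --local-dir checkpoints/base_1024_v1
sh scripts/run_text2video.sh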

2. Image-to-Video

  1. Download the pretrained I2V model from Hugging Face and place model.ckpt at checkpoints/i2v_512_v1/model.ckpt.
  2. Run the following command in the terminal (a download-and-run sketch follows below):
  sh scripts/run_image2video.sh
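
The same sketch for the I2V checkpoint; the repo id VideoCrafter/Image2Video-512 is likewise an assumption to be checked against the model card.

# Sketch only -- the Hugging Face repo id is an assumption; verify it first.
mkdir -p checkpoints/i2v_512_v1
huggingface-cli download VideoCrafter/Image2Video-512 model.ckpt --local-dir checkpoints/i2v_512_v1
sh scripts/run_image2video.sh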

📋 Technical Report

⏳⏳⏳ Coming soon. We are still working on it. 💪

😉 Citation

The technical report is still in preparation. In the meantime, you can cite the paper of our base model, on which we built our applications.

@article{he2022lvdm,
      title={Latent Video Diffusion Models for High-Fidelity Long Video Generation}, 
      author={Yingqing He and Tianyu Yang and Yong Zhang and Ying Shan and Qifeng Chen},
      year={2022},
      eprint={2211.13221},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

🤗 Acknowledgements

Our codebase builds on Stable Diffusion. Thanks to the authors for sharing their awesome codebase!

📢 Disclaimer

This repository is developed for RESEARCH purposes only, so it may be used solely for personal, research, or non-commercial purposes.

