MovieChat: From Dense Token to Sparse Memory for Long Video Understanding
Enxin Song*, Wenhao Chai*, Guanhong Wang*, Yucheng Zhang, Haoyang Zhou, Feiyang Wu, Xun Guo, Tian Ye, Yan Lu, Jenq-Neng Hwang, Gaoang Wang✉️
CVPR 2024.
MovieChat can handle videos with >10K frames on a 24 GB graphics card. MovieChat achieves a roughly 10,000× advantage over other methods in the average increase in GPU memory cost per frame (21.3 KB/frame vs. ~200 MB/frame).
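As a quick back-of-the-envelope check of that figure, the ratio between the two quoted per-frame costs works out to roughly 10^4 (the two constants below are simply the numbers quoted above):

```python
# Back-of-the-envelope check of the per-frame GPU memory figures quoted above.
moviechat_kb_per_frame = 21.3   # KB per frame (MovieChat)
baseline_mb_per_frame = 200.0   # ~MB per frame (typical dense-token baselines)

ratio = (baseline_mb_per_frame * 1024) / moviechat_kb_per_frame
print(f"~{ratio:,.0f}x lower memory growth per frame")  # ~9,615x, i.e. on the order of 10,000x
```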
Feel free to PR your new results!
Model with Link | Comment | Breakpoint Acc | Global Acc |
---|---|---|---|
Video-LLaMA | End-to-end | 39.1 | 51.7 |
VideoChat | End-to-end | 46.1 | 57.8 |
TimeChat | CoT, ICL, train on MovieChat | 46.1 | 73.8 |
VideoChatGPT | End-to-end | 48.0 | 47.6 |
MovieChat (baseline) | End-to-end | 48.3 | 62.3 |
MovieChat+ (baseline) | End-to-end | 49.6 | 71.2 |
Long-LLaVA | End-to-end | 54.0 | 69.6 |
Long-LLaVA + Video-RAG | End-to-end | 54.5 | 72.9 |
Streaming Long Video | Train on MovieChat | 54.9 | 90.4 |
DrVideo | RAG | 56.7 | 93.1 |
ReWind | End-to-end | 57.2 | 87.6 |
HERMES | Train on MovieChat | 57.3 | 78.6 |
Flash-VStream | Train on MovieChat | 59.6 | 96.0 |
MM-Screenplayer | RAG | 68.8 | 87.5 |
VILA1.5-8B | End-to-end | - | 40.0 |
FocusChat | End-to-end | - | 60.0 |
llavaonevision-MovieChat | End-to-end | - | 79.0 |
Sullam Jeoung et al. | Agent | - | 84.8 |
SEAL | Train on MovieChat | - | 86.8 |
HEM-LLM | Unknown training dataset | - | 90.6 |
Sorted in alphabetical order.
Benchmark | Results |
---|---|
ActivityNet-QA | Acc. / Score: 45.7 / 3.4 |
Charades-STA | R@1 (IoU=0.3): 8.8 • R@1 (IoU=0.5): 2.9 • R@1 (IoU=0.7): 1.3 |
CineClipQA | Overall: 20.86/2.11 • Description: 23.67/2.41 • Intention: 30.19/2.41 • Perception: 21.80/1.97 • Temporality: 16.32/1.97 • Spatiality: 16.40/1.98 |
CVRR-ES | Average: 16.41 |
EgoSchema | Top 1 Acc: 53.5 |
EventBench | Acc: 20.33 |
InfiniBench | Global Appearance: 6.59 • Scene Transition: 6.41 • Character Actions: 4.51 • Temporal Order: 36.99 • Local Visual: 17.76 • Summarization: 0.14 • Deep Context: 0.55 • Spoiler Questions: 0.34 • Multiple Events: 0.85 • Avg: 14.45/0.47 |
InfiniBench-Vision | Acc: 14.2 • Score: 1.2 |
LvBench | ER: 21.3 • EU: 23.1 • KIR: 25.9 • TG: 22.3 • Rea: 24.0 • Sum: 17.2 • Overall: 22.5 |
LvM-QA | Acc. / Score: 48.3 / 2.57 |
MLVU | Holistic TR: 29.5 • AR: 25.0 • VS: 2.33 • Single Detail NQA: 24.2 • ER: 24.7 • PQA: 25.8 • SSC: 3.23 • Multi Detail AO: 28.6 • AC: 22.8 • M-Avg: 25.8 • G-Avg: 2.78 |
MovieChat-1K | Global Acc. / Score: 62.3 / 3.23 • Breakpoint Acc. / Score: 48.3 / 2.57 |
MovieCORE | Acc: 20.33 • Comp: 2.90 • Depth: 2.29 • Evid: 2.14 • Coh: 2.30 • Avg: 2.23 |
MSVD-QA | Acc. / Score: 75.2 / 3.8 |
MSRVTT-QA | Acc. / Score: 52.7 / 2.6 |
MVBench | Avg: 55.1 |
NExT-QA | Acc. / Score: 49.9 / 2.7 |
QVHighlight | mAP: 11.7 • HIT@1: 16.1 |
RVS-Ego | Acc. / Score: 50.7 / 3.4 |
RVS-Movie | Acc. / Score: 36.0 / 2.3 |
Seed-Bench | Procedure Understanding: 29.82 • Action Recognition: 40.11 |
SFD | Multiple-Choice V: 8.4 • L: 16.4 • VL: 8.0 • Open-Ended V: 14.0 • L: 15.7 • VL: 11.8 |
SVBench | Dialogue SA: 20.46 • Dialogue CC: 20.05 • Dialogue LC: 27.76 • Dialogue TU: 21.81 • Dialogue IC: 22.21 • Dialogue OS: 21.89 • Streaming SA: 17.99 • Streaming CC: 16.42 • Streaming LC: 20.37 • Streaming TU: 15.77 • Streaming IC: 19.08 • Streaming OS: 17.43 |
TV-Caption | BERTScore: 38.11 • CIDEr: 8.43 • ROUGE-L: 12.09 • SPICE: 9.21 |
VCG Bench | CI: 2.76 • DO: 2.93 • CU: 3.01 • TU: 2.24 • CO: 2.42 • Avg: 2.67 |
VDC | Camera: 37.25/1.98 • Short: 32.55/1.59 • Background: 28.99/1.54 • Main: 31.97/1.64 • Object: 28.82/1.46 • Avg: 31.92/1.64 |
VideoMME | w/o subs: 38.2 • w/o subs (Long): 33.4 |
Video-ChatGPT | Avg: 2.67 • CI: 2.76 • DO: 2.93 • CU: 3.01 • TU: 2.24 • CO: 2.42 |
VS-Ego | Acc. / Score: 52.2 / 3.4 |
VS-Movie | Acc. / Score: 39.1 / 2.3 |
YouCook2 | C: 38.5 • M: 18.8 |
- [2024.10.26] We upload MovieChat, MovieChat_OneVision, and MovieChat-1K to lmms-eval.
- [2024.10.26] We release a new version of MovieChat, which uses LLaVA-OneVision as the base model instead of the original Video-LLaMA. The new version is available at MovieChat_Onevision.
- [2024.6.13] We release the ground truth of MovieChat's test set on Hugging Face.
- [2024.5.10] We release the raw videos of MovieChat's training set on Hugging Face.
- [2024.4.29] We update the MovieChat+ paper with implementation details, technical evaluations, and dataset information.
- [2024.4.25] We release a new version, MovieChat+, along with the corresponding evaluation code. Our paper is coming soon!
- [2024.4.19] We publish the latest source code of MovieChat to PyPI. Now you can use MovieChat directly via `pip install MovieChat`!
- [2024.3.25] We host challenge track 1 of the 4th International Workshop on Long-form Video Understanding: Towards Multimodal AI Assistant and Copilot at CVPR 2024. You can participate in the challenge and submit your results via Codalab; we will display the results on the leaderboard. Please submit your results in JSON format and report both the average running time and VRAM usage, as we will use these metrics to select the most efficient method. For detailed information about the challenge, please refer to this link.
- [2024.3.11] We release the test set of MovieChat-1K on Hugging Face. Each video comes with 3 global questions and 10 breakpoint questions.
- [2024.2.27] Our paper was accepted by CVPR 2024!
- [2024.2.14] We release the training set of MovieChat-1K on Hugging Face. Due to copyright restrictions, we share the clip features extracted by eva_vit_g, containing 8,192 frames per video.
- [2023.11.27] We update the paper with implementation details, technical evaluations, and dataset information.
- [2023.11.23] We update the latest source code of MovieChat.
- [2023.8.1] We release the paper.
- [2023.7.31] We release the evaluation code and instructions for short video QA on MSVD-QA, MSRVTT-QA, and ActivityNet-QA.
- [2023.7.29] We release the Gradio demo of MovieChat.
- [2023.7.22] We release the source code of MovieChat.
Method | Text Decoder | # Frames | Global Mode Acc. | Global Mode Score |
---|---|---|---|---|
GIT | non-LLM based | 6 | 28.8 | 1.83 |
mPLUG-2 | non-LLM based | 8 | 31.7 | 2.13 |
Video Chat | LLM based | 32 | 57.8 | 3.00 |
Video LLaMA | LLM based | 32 | 51.7 | 2.67 |
Video-ChatGPT | LLM based | 100 | 47.6 | 2.55 |
MovieChat | LLM based | 2048 | 62.3 | 3.23 |
MovieChat+ | LLM based | 2048 | 71.2 | 3.51 |
MovieChat-Onevision | LLM based | 2048 | 79.0 | 4.20 |
We have packaged MovieChat and uploaded it to PyPI. To run MovieChat quickly, first install it:
pip install MovieChat
We advise you to install version `0.6.3` for now. Since `MovieChat` automatically downloads checkpoints from Hugging Face, if your service does not support `git clone` from `<HuggingFace url>`, we recommend downloading the checkpoints to your service and changing the respective paths in the package, including `q_former_model`, `ckpt_path`, and `llama_model`.
Before you run the following inference code, please verify the installation of `ffprobe` via `ffprobe -version`. This command should return the version of `ffprobe` if it is correctly installed; otherwise, install it via `sudo apt-get install ffmpeg` (Ubuntu).
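If you prefer to run this check from Python, a minimal sketch (not part of the MovieChat package) is:

```python
import shutil
import subprocess

# Fail early if ffprobe (shipped with ffmpeg) is not on PATH.
if shutil.which("ffprobe") is None:
    raise RuntimeError("ffprobe not found; on Ubuntu, install it with `sudo apt-get install ffmpeg`.")

# Print the first line of `ffprobe -version`, e.g. "ffprobe version ...".
result = subprocess.run(["ffprobe", "-version"], capture_output=True, text=True, check=True)
print(result.stdout.splitlines()[0])
```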
from PIL import Image
import cv2
from MovieChat.processors.video_processor import AlproVideoEvalProcessor
from MovieChat.models.chat_model import Chat
from MovieChat.models.moviechat import MovieChat
device = 'cuda:0'
print('Initializing Chat')
# Build the MovieChat model, the frame processor, and the chat wrapper.
moviechat_model = MovieChat.from_config(device=device).to(device)
vis_processor_cfg = {'name': 'alpro_video_eval', 'n_frms': 8, 'image_size': 224}
frame_processor = AlproVideoEvalProcessor.from_config(vis_processor_cfg)
chat = Chat(moviechat_model, frame_processor, device=device)
print('Initialization Finished')
video_path = "Your video path, end with mp4"
fragment_video_path = "The path to store tmp video clips"
middle_video = False  # True -> breakpoint mode, False -> global mode
question = "Your Question"
cur_min = 0  # breakpoint minute; change it in breakpoint mode
cur_sec = 0  # breakpoint second; change it in breakpoint mode

# Grab the frame at the breakpoint timestamp to use as the current image.
cap = cv2.VideoCapture(video_path)
cur_fps = cap.get(cv2.CAP_PROP_FPS)
cap.set(cv2.CAP_PROP_POS_FRAMES, int(cur_fps * (cur_min * 60 + cur_sec)))
ret, frame = cap.read()
if not ret:
    raise RuntimeError(f"Could not read a frame from {video_path}")
cap.release()
rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
pil_image = Image.fromarray(rgb_frame)
image = chat.image_vis_processor(pil_image).unsqueeze(0).unsqueeze(2).half().to(device)
cur_image = chat.model.encode_image(image)
img_list = []
# Process the video and fill MovieChat's memory (global or breakpoint mode).
msg = chat.upload_video_without_audio(
    video_path=video_path,
    fragment_video_path=fragment_video_path,
    cur_min=cur_min,
    cur_sec=cur_sec,
    cur_image=cur_image,
    img_list=img_list,
    middle_video=middle_video,
    question=question,
)
# Query the language model with the question over the collected visual tokens.
answer = chat.answer(
    img_list=img_list,
    input_text=question,
    msg=msg,
    num_beams=1,
    temperature=1.0,
    max_new_tokens=300,
    max_length=2000,
)[0]
print(answer)
Note that if you receive a RuntimeError like `Error reading <filename.mp4>`, one solution is to initialize `<filename.mp4>` with any other video file.
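For breakpoint mode, the relevant settings in the script above would change along these lines (the timestamp and question are illustrative placeholders, not values from the repository):

```python
# Breakpoint mode: ask about the scene at a specific timestamp (illustrative values).
middle_video = True        # switch from global mode to breakpoint mode
cur_min, cur_sec = 2, 30   # breakpoint at 02:30 into the video
question = "What is the character doing at this moment?"
```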
To better evaluate the performance of MovieChat, we collect a new benchmark for long video understanding tasks, MovieChat-1K, which contains 1K high-quality video clips sourced from various movies and TV series with 14K manual annotations.
To the best of our knowledge, no dataset dedicated to long video understanding had been established before; our work represents an initial step toward creating one and making it publicly available. We create MovieChat-1K, containing 1K long videos, the corresponding 1K dense captions, and 13K visual question-answer pairs. For each video, we manually provide 1 dense caption for the whole video, 3 question-answer pairs for global mode, and 10 question-answer pairs with timestamps for breakpoint mode.
We collect videos from 15 popular categories with varying distributions, including documentary, detective, and animation films, among others. Each video comprises multiple alternating scenes, contributing to a diverse and dynamic visual narrative within the collection. Over 90% of the videos have durations ranging from 10K to 12K frames, while 14.6% of the videos extend beyond 12K frames; only 8.6% are shorter than 10K frames.
Note that since MovieChat-1K is specifically designed for long video comprehension tasks, the majority of questions are open-ended; only about a quarter are multiple-choice questions, marked by initiators such as "Do," "Does," "Is," or "Are." We also compute the word distribution of the provided question-answer pairs, which covers common objects (people, clothes, etc.), time (day, night, etc.), scenes (indoor, outdoor, etc.), and so on.
MovieChat-1K exhibits diverse question-answer lengths at the segmented-clip level. Although the distribution of question-answer pairs varies between global mode and breakpoint mode, most questions are 5-15 words long, while answers generally have fewer than 10 words.
To facilitate a more detailed understanding of long videos, we provide a dense caption for each video. MovieChat-1K exhibits diverse caption lengths at the segmented-clip level. Approximately two-thirds of the clips have captions of 100-149 words, about one-fifth have captions of fewer than 100 words, and around 11% of the clips have long captions of more than 150 words.
We also compute the word distribution of the generated captions, presented in Fig. B6, which includes common objects (man, woman, people, girl, etc.), attributes (detective, various, small, white, etc.), locations (inside, behind, south, next, etc.), scenes (room, house, building, office, etc.), actions/events (talk, enter, leave, take, etc.), and more.
In terms of actionness, the MovieChat-1K captions contain nearly the same number of verbs as the WebVid10M dataset. To evaluate this, we use the NLTK toolkit to analyze the verbs in the captions, extracting and tagging all unique verbs. We find a total of 109,485 verbs in the WebVid10M caption dataset, while the MovieChat-1K captions contain 102,988 unique verb instances. While these counts may not be entirely accurate due to our simple counting method, we believe they provide a rough indication of the actionness of the two datasets.
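A minimal sketch of this verb-counting procedure (not the original analysis script; it assumes `captions` is a Python list of caption strings):

```python
import nltk

# Resource names differ across NLTK versions; download whichever of these exist.
for resource in ("punkt", "punkt_tab", "averaged_perceptron_tagger", "averaged_perceptron_tagger_eng"):
    nltk.download(resource, quiet=True)

def count_unique_verbs(captions):
    """Count distinct verbs (POS tags starting with 'VB') across all captions."""
    verbs = set()
    for caption in captions:
        for word, tag in nltk.pos_tag(nltk.word_tokenize(caption)):
            if tag.startswith("VB"):
                verbs.add(word.lower())
    return len(verbs)

print(count_unique_verbs(["A man enters the room and talks to the detective."]))  # -> 2
```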
Due to copyright concerns and the size of the movies, we plan to release the features of the dataset. Please wait for a few weeks.
First, create a conda environment:
conda env create -f environment.yml
conda activate moviechat
Before using the repository, make sure you have obtained the following checkpoints:
- Get the original LLaMA weights in the Hugging Face format by following the instructions here.
- Download the Vicuna delta weights [7B] (note: we use v0 weights instead of v1.1 weights).
- Use the following command to add delta weights to the original LLaMA weights to obtain the Vicuna weights:
python apply_delta.py \
--base ckpt/LLaMA/7B_hf \
--target ckpt/Vicuna/7B \
--delta ckpt/Vicuna/vicuna-7b-delta-v0
- Download the MiniGPT-4 model (trained linear layer) from this link.
- Download the pretrained weights to run MovieChat with Vicuna-7B as the language decoder locally from this link. (An optional path check follows this list.)
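Before moving on, you can optionally verify that the downloaded weights are where the commands above expect them. A small sketch, assuming the checkpoint layout from the `apply_delta.py` example (adjust the paths to wherever you stored the weights):

```python
from pathlib import Path

# Paths follow the apply_delta.py example above; adjust them to your own layout.
expected_paths = [
    Path("ckpt/LLaMA/7B_hf"),
    Path("ckpt/Vicuna/7B"),
    Path("ckpt/Vicuna/vicuna-7b-delta-v0"),
]
missing = [str(p) for p in expected_paths if not p.exists()]
print("All checkpoint paths found." if not missing else f"Missing checkpoint paths: {missing}")
```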
First, set `llama_model`, `llama_proj_model`, and `ckpt` in eval_configs/MovieChat.yaml. Then run the script:
python inference.py \
--cfg-path eval_configs/MovieChat.yaml \
--gpu-id 0 \
--num-beams 1 \
--temperature 1.0 \
--text-query "What is he doing?" \
--video-path src/examples/Cooking_cake.mp4 \
--fragment-video-path src/video_fragment/output.mp4 \
--cur-min 1 \
--cur-sec 1 \
--middle-video 1
Note that if you want to use global mode (understanding and question answering over the whole video), remember to set `--middle-video` to 0.
We are grateful to the following awesome projects that MovieChat arises from:
- Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding
- Token Merging: Your ViT but Faster
- XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model
- MiniGPT-4: Enhancing Vision-language Understanding with Advanced Large Language Models
- FastChat: An Open Platform for Training, Serving, and Evaluating Large Language Model based Chatbots
- BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
- EVA-CLIP: Improved Training Techniques for CLIP at Scale
- LLaMA: Open and Efficient Foundation Language Models
- VideoChat: Chat-Centric Video Understanding
- LLaVA: Large Language and Vision Assistant
Our MovieChat is just a research preview intended for non-commercial use only. You must NOT use our MovieChat for any illegal, harmful, violent, racist, or sexual purposes. You are strictly prohibited from engaging in any activity that will potentially violate these guidelines.
If you find MovieChat useful for your research and applications, please cite it using this BibTeX:
@article{song2023moviechat,
title={MovieChat: From Dense Token to Sparse Memory for Long Video Understanding},
author={Song, Enxin and Chai, Wenhao and Wang, Guanhong and Zhang, Yucheng and Zhou, Haoyang and Wu, Feiyang and Guo, Xun and Ye, Tian and Lu, Yan and Hwang, Jenq-Neng and others},
journal={arXiv preprint arXiv:2307.16449},
year={2023}
}
@article{song2024moviechat+,
title={MovieChat+: Question-aware Sparse Memory for Long Video Question Answering},
author={Song, Enxin and Chai, Wenhao and Ye, Tian and Hwang, Jenq-Neng and Li, Xi and Wang, Gaoang},
journal={arXiv preprint arXiv:2404.17176},
year={2024}
}