CoTracker: It is Better to Track Together

Meta AI Research, GenAI; University of Oxford, VGG

Nikita Karaev, Ignacio Rocco, Benjamin Graham, Natalia Neverova, Andrea Vedaldi, Christian Rupprecht

CoTracker is a fast transformer-based model that can track any point in a video. It brings to tracking some of the benefits of Optical Flow.

CoTracker can track:

  • Any pixel in a video
  • A quasi-dense set of pixels together
  • Points manually selected or sampled on a grid in any video frame

Try these tracking modes for yourself with our Colab demo or in the Hugging Face Space 🤗.

Updates:

  • [September 25, 2024] 📣 CoTracker2.1 is now available! This model has better performance on TAP-Vid benchmarks and follows the architecture of the original CoTracker. Try it out!

  • [June 14, 2024] 📣 We have released the code for VGGSfM, a model for recovering camera poses and 3D structure from any image sequences based on point tracking! VGGSfM is the first fully differentiable SfM framework that unlocks scalability and outperforms conventional SfM methods on standard benchmarks.

  • [December 27, 2023] 📣 CoTracker2 is now available! It can now track many more (up to 265 ⨉ 265!) points jointly and it has a cleaner and more memory-efficient implementation. It also supports online processing. See the updated paper for more details. The old version remains available here.

  • [September 5, 2023] 📣 You can now run our Gradio demo locally!

Quick start

The easiest way to use CoTracker is to load a pretrained model from torch.hub:

Offline mode:

pip install imageio[ffmpeg], then:

import torch
import imageio.v3 as iio

# Read the demo video directly from the repository
url = 'https://github.com/facebookresearch/co-tracker/raw/main/assets/apple.mp4'
frames = iio.imread(url, plugin="FFMPEG")  # or plugin="pyav"

device = 'cuda'
grid_size = 10
video = torch.tensor(frames).permute(0, 3, 1, 2)[None].float().to(device)  # B T C H W

# Run Offline CoTracker:
cotracker = torch.hub.load("facebookresearch/co-tracker", "cotracker2v1").to(device)
pred_tracks, pred_visibility = cotracker(video, grid_size=grid_size) # B T N 2,  B T N 1
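
The grid_size argument samples a regular grid of points on the first frame. To track manually selected points instead (the other mode listed above), pass a queries tensor. The following is a minimal sketch, assuming the hub model accepts queries of shape B N 3 where each row is (start frame index, x, y) in pixel coordinates, as in the upstream CoTracker2 API; the points below are arbitrary examples:

queries = torch.tensor([
    [0., 400., 350.],   # start tracking at frame 0, pixel (x=400, y=350)
    [0., 600., 500.],   # another point queried on the first frame
    [10., 750., 600.],  # this point is queried on frame 10
], device=device)[None]  # B N 3

pred_tracks, pred_visibility = cotracker(video, queries=queries)  # B T N 2,  B T N 1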

Online mode:

cotracker = torch.hub.load("facebookresearch/co-tracker", "cotracker2v1_online").to(device)

# Run Online CoTracker, the same model with a different API:
# Initialize online processing
cotracker(video_chunk=video, is_first_step=True, grid_size=grid_size)  

# Process the video in sliding windows of 2 * cotracker.step frames,
# advancing by cotracker.step frames each iteration
for ind in range(0, video.shape[1] - cotracker.step, cotracker.step):
    pred_tracks, pred_visibility = cotracker(
        video_chunk=video[:, ind : ind + cotracker.step * 2]
    )  # B T N 2,  B T N 1

Online processing is more memory-efficient and allows for the processing of longer videos. However, in the example provided above, the video length is known! See the online demo for an example of tracking from an online stream with an unknown video length.
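
For intuition, here is a minimal sketch of such a streaming loop. It is not the code of the online demo: frame_source is a hypothetical generator yielding H ⨉ W ⨉ 3 uint8 frames, and the buffering simply mirrors the windowed loop above:

import numpy as np
import torch

def track_stream(frame_source, cotracker, grid_size=10, device="cuda"):
    window_frames = []        # buffer of raw frames received so far
    is_first_step = True
    pred_tracks = pred_visibility = None
    for i, frame in enumerate(frame_source):
        window_frames.append(frame)
        # Every `step` new frames, process the latest window of 2 * step frames
        if i % cotracker.step == 0 and i != 0:
            video_chunk = (
                torch.tensor(np.stack(window_frames[-cotracker.step * 2:]), device=device)
                .float()
                .permute(0, 3, 1, 2)[None]  # B T C H W
            )
            pred_tracks, pred_visibility = cotracker(
                video_chunk=video_chunk,
                is_first_step=is_first_step,
                grid_size=grid_size,
            )
            is_first_step = False
    return pred_tracks, pred_visibility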

If you wish to use the CoTracker2 version of the model (rather than CoTracker2.1, as in the example above), just change cotracker2v1 to cotracker2 and cotracker2v1_online to cotracker2_online.

Visualize predicted tracks:

pip install matplotlib, then:

from cotracker.utils.visualizer import Visualizer

vis = Visualizer(save_dir="./saved_videos", pad_value=120, linewidth=3)
vis.visualize(video, pred_tracks, pred_visibility)

We offer a number of other ways to interact with CoTracker:

  1. Interactive Gradio demo
  2. Jupyter notebook
  3. You can install CoTracker locally and then:
    • Run an offline demo with 10 ⨉ 10 points sampled on a grid on the first frame of a video (results will be saved to ./saved_videos/demo.mp4):

      python demo.py --grid_size 10
    • Run an online demo:

      python online_demo.py

A GPU is strongly recommended for using CoTracker locally.

Installation Instructions

You can use a pretrained model via PyTorch Hub, as described above, or install CoTracker from this GitHub repo. The latter is the best option if you need to run our local demo or to evaluate/train CoTracker.

Ensure you have both PyTorch and TorchVision installed on your system. Follow the official PyTorch installation instructions. We strongly recommend installing both PyTorch and TorchVision with CUDA support, although for small tasks CoTracker can be run on the CPU.
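
A quick way to verify that your installation can use the GPU is the standard CUDA availability check:

import torch
print(torch.__version__)
print(torch.cuda.is_available())  # True means CoTracker can run on the GPU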

Install a Development Version

git clone https://github.com/facebookresearch/co-tracker
cd co-tracker
pip install -e .
pip install matplotlib flow_vis tqdm tensorboard imageio[ffmpeg]

You can manually download the CoTracker2.1 checkpoint from the link below and place it in the checkpoints folder as follows:

mkdir -p checkpoints
cd checkpoints
wget https://huggingface.co/facebook/cotracker/resolve/main/cotracker2v1.pth
cd ..

You can also download the CoTracker2 checkpoint with the following command:

cd checkpoints
wget https://huggingface.co/facebook/cotracker/resolve/main/cotracker2.pth

For older checkpoints, see this section.

After installation, this is how you could run the model on ./assets/apple.mp4 (results will be saved to ./saved_videos/apple.mp4):

python demo.py --checkpoint checkpoints/cotracker2.pth
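
If you prefer to load a downloaded checkpoint from Python rather than through torch.hub, the repository also ships a predictor class. A minimal sketch, assuming the class and argument names of the upstream cotracker package (cotracker.predictor.CoTrackerPredictor with a checkpoint argument):

import os
import torch
from cotracker.predictor import CoTrackerPredictor

model = CoTrackerPredictor(
    checkpoint=os.path.join("./checkpoints", "cotracker2.pth")
)
# The predictor is then called like the torch.hub model, e.g.:
# pred_tracks, pred_visibility = model(video, grid_size=10)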

Evaluation

To reproduce the results presented in the paper, download the evaluation datasets (TAP-Vid and Dynamic Replica) and install the necessary dependencies:

pip install hydra-core==1.1.0 mediapy

Then, execute the following command to evaluate on TAP-Vid DAVIS:

python ./cotracker/evaluation/evaluate.py --config-name eval_tapvid_davis_first exp_dir=./eval_outputs dataset_root=your/tapvid/path

By default, evaluation will be slow since it is done for one target point at a time, which ensures robustness and fairness, as described in the paper.

We have fixed some bugs and retrained the model after updating the paper. These are the numbers that you should be able to reproduce using the released checkpoint and the current version of the codebase:

| Model                  | Kinetics, $\delta_\text{avg}^\text{vis}$ | DAVIS, $\delta_\text{avg}^\text{vis}$ | RoboTAP, $\delta_\text{avg}^\text{vis}$ | RGB-S, $\delta_\text{avg}^\text{vis}$ |
| ---------------------- | ---------------------------------------- | ------------------------------------- | ---------------------------------------- | -------------------------------------- |
| CoTracker2.1, 25.09.24 | 63                                       | 76.1                                  | 70.6                                     | 79.6                                   |
| CoTracker2, 27.12.23   | 61.8                                     | 74.6                                  | 69.6                                     | 73.4                                   |

All evaluations were done in the query first mode.

Training

To train CoTracker as described in our paper, you first need to generate annotations for the Google Kubric MOVi-f dataset. Instructions for annotation generation can be found here. You can also find a discussion on dataset generation in this issue.

Once you have the annotated dataset, make sure you have followed the evaluation setup steps above, then install the training dependencies:

pip install pytorch_lightning==1.6.0 tensorboard

Now you can launch training on Kubric. Our model was trained for 50,000 iterations on 32 GPUs (4 nodes with 8 GPUs each). Modify dataset_root and ckpt_path accordingly before running this command. For training on 4 nodes, add --num_nodes 4.

python train.py --batch_size 1 \
--num_steps 50000 --ckpt_path ./ --dataset_root ./datasets --model_name cotracker \
--save_freq 200 --sequence_len 24 --eval_datasets dynamic_replica tapvid_davis_first \
--traj_per_sample 768 --sliding_window_len 8 \
--num_virtual_tracks 64 --model_stride 4

Development

Building the documentation

To build CoTracker documentation, first install the dependencies:

pip install sphinx
pip install sphinxcontrib-bibtex

Then you can use this command to generate the documentation in the docs/_build/html folder:

make -C docs html

Previous version

You can use CoTracker v1 directly via PyTorch Hub:

# einops, timm and tqdm are dependencies of the v1 hub model
import torch
import einops
import timm
import tqdm

cotracker = torch.hub.load("facebookresearch/co-tracker:v1.0", "cotracker_w8")

The old version of the code is available here. You can also download the corresponding checkpoints:

wget https://dl.fbaipublicfiles.com/cotracker/cotracker_stride_4_wind_8.pth
wget https://dl.fbaipublicfiles.com/cotracker/cotracker_stride_4_wind_12.pth
wget https://dl.fbaipublicfiles.com/cotracker/cotracker_stride_8_wind_16.pth

License

The majority of CoTracker is licensed under CC-BY-NC; however, portions of the project are available under separate license terms: Particle Video Revisited is licensed under the MIT license, and TAP-Vid is licensed under the Apache 2.0 license.

Acknowledgments

We would like to thank PIPs and TAP-Vid for publicly releasing their code and data. We also want to thank Luke Melas-Kyriazi for proofreading the paper, and Jianyuan Wang, Roman Shapovalov and Adam W. Harley for insightful discussions.

Citing CoTracker

If you find our repository useful, please consider giving it a star ⭐ and citing our paper in your work:

@article{karaev2023cotracker,
  title={CoTracker: It is Better to Track Together},
  author={Nikita Karaev and Ignacio Rocco and Benjamin Graham and Natalia Neverova and Andrea Vedaldi and Christian Rupprecht},
  journal={arXiv:2307.07635},
  year={2023}
}
