Stars
Code for "Motion-2-to-3: Leveraging 2D Motion Data to Boost 3D Motion Generation", arXiv 2024
Synthetic animal image dataset for pose and shape reconstruction.
[NeurIPS 2024] Neural Localizer Fields for Continuous 3D Human Pose and Shape Estimation
[ECCV 2024] Official PyTorch implementation of RoPE-ViT "Rotary Position Embedding for Vision Transformer"
Official repository of "SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory"
The Fast Way From Vertices to Parametric 3D Humans
Human Motion Video Generation: A Survey (https://www.techrxiv.org/users/836049/articles/1228135-human-motion-video-generation-a-survey)
Official implementation of "MIMO: Controllable Character Video Synthesis with Spatial Decomposed Modeling"
MotionFix: Text-Driven 3D Human Motion Editing [SIGGRAPH ASIA 2024]
Code for "GVHMR: World-Grounded Human Motion Recovery via Gravity-View Coordinates", SIGGRAPH Asia 2024
A list of awesome human motion generation papers. Continuously updated!
High-resolution models for human tasks.
Controllable video and image Generation, SVD, Animate Anyone, ControlNet, ControlNeXt, LoRA
MoYuM / HowToDrink
Forked from Anduin2017/HowToCook. A programmer's guide to making alcoholic drinks and beverages at home (content in Simplified Chinese only).
The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use th…
OmniControl: Control Any Joint at Any Time for Human Motion Generation, ICLR 2024
OpenXRLab Synthetic Data Rendering Toolbox
The code for CVPR 2024 paper "Closely Interactive Human Reconstruction with Proxemics and Physics-Guided Adaption"
Hamba: Single-view 3D Hand Reconstruction with Graph-guided Bi-Scanning Mamba
A vector-quantized periodic autoencoder (VQ-PAE) for motion alignment across different morphologies with no supervision [SIGGRAPH 2024]
Learning with 3D rotations, a hitchhiker’s guide to SO(3) - ICML 2024