[Update] Readme.md
Dai-Wenxun committed Dec 3, 2024
1 parent a30cc85 commit 665dcc2
Showing 1 changed file with 4 additions and 0 deletions.
@@ -30,6 +30,10 @@

> This work introduces MotionLCM, extending controllable motion generation to a real-time level. Existing methods for spatial-temporal control in text-conditioned motion generation suffer from significant runtime inefficiency. To address this issue, we first propose the motion latent consistency model (MotionLCM) for motion generation, building upon the latent diffusion model. By adopting one-step (or few-step) inference, we further improve the runtime efficiency of the motion latent diffusion model for motion generation. To ensure effective controllability, we incorporate a motion ControlNet within the latent space of MotionLCM and enable explicit control signals (e.g., initial poses) in the vanilla motion space to further provide supervision for the training process. By employing these techniques, our approach can generate human motions with text and control signals in real-time. Experimental results demonstrate the remarkable generation and controlling capabilities of MotionLCM while maintaining real-time runtime efficiency.
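The abstract's core pipeline (encode the text condition, jump from noise to a clean motion latent in one consistency step, then decode to a motion sequence) can be sketched as follows. This is a minimal illustration only: the module names, dimensions, and toy linear stand-ins are assumptions for demonstration, not MotionLCM's actual architecture or code.

```python
import numpy as np

# Illustrative dimensions (assumed, not MotionLCM's real config).
LATENT_DIM = 64       # size of the motion latent
MOTION_DIM = 22 * 3   # e.g. 22 joints x 3D coordinates per frame
NUM_FRAMES = 196

rng = np.random.default_rng(0)

def consistency_model(z_t, t, text_emb):
    """Stand-in for the trained consistency function f(z_t, t, c):
    maps a noisy latent at time t directly to a clean-latent estimate.
    A real model is a conditioned network; here a toy linear mix."""
    return 0.5 * z_t + 0.5 * text_emb[:LATENT_DIM]

def decode(z):
    """Stand-in for the motion VAE decoder: latent -> motion sequence."""
    w = np.ones((LATENT_DIM, NUM_FRAMES * MOTION_DIM)) / LATENT_DIM
    return (z @ w).reshape(NUM_FRAMES, MOTION_DIM)

def generate(text_emb, num_steps=1):
    """One-step (or few-step) latent consistency sampling."""
    z = rng.standard_normal(LATENT_DIM)  # start from pure noise z_T
    times = np.linspace(1.0, 0.0, num_steps, endpoint=False)
    for i, t in enumerate(times):
        z = consistency_model(z, t, text_emb)  # jump toward the clean latent
        if i < num_steps - 1:
            # Few-step variant: re-noise to the next (lower) noise level.
            z = z + np.sqrt(times[i + 1]) * rng.standard_normal(LATENT_DIM)
    return decode(z)

motion = generate(text_emb=rng.standard_normal(512))
print(motion.shape)  # (196, 66)
```

The point of the sketch is the control flow: unlike a diffusion sampler that iterates many denoising steps, the consistency function is queried once (or a handful of times), which is what makes real-time generation feasible.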

##

![](assets/pk.png)

## 📢 News

- **[2024/08/15]** Support the training of motion latent diffusion model (MLD).
