---
license: bsd-2-clause
tags:
- human-motion-generation
- human-motion-prediction
- probabilistic-human-motion-generation
pinned: true
language:
- en
---
# SkeletonDiffusion Model Card
This model card covers SkeletonDiffusion, the model from _Nonisotropic Gaussian Diffusion for Realistic 3D Human Motion Prediction_ ([arXiv](https://arxiv.org/abs/2501.06035)); the codebase is available [here](https://github.com/Ceveloper/SkeletonDiffusion/tree/main).
|
|
SkeletonDiffusion is a probabilistic human motion prediction model: given 0.5s of observed human motion, it generates 2s of future motion with an inference time of 0.4s.
SkeletonDiffusion generates motions that are simultaneously realistic and diverse. It is a latent diffusion model with a custom graph attention architecture, trained with nonisotropic Gaussian diffusion.
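As a rough illustration of the data layout this implies (the frame rate and joint count below are assumptions for illustration only; see the codebase for the dataset-specific values), the observed past and the set of sampled futures can be pictured as tensors:

```python
import numpy as np

# Hypothetical layout sketch; FPS and the 22-joint skeleton are assumptions,
# not values taken from the SkeletonDiffusion codebase.
FPS = 50
past = np.zeros((int(0.5 * FPS), 22, 3))              # 0.5s observed: (frames, joints, xyz)
n_samples = 50                                        # number of diverse futures sampled
future = np.zeros((n_samples, int(2.0 * FPS), 22, 3)) # 2s predicted per sample

print(past.shape)    # (25, 22, 3)
print(future.shape)  # (50, 100, 22, 3)
```

Because the model is probabilistic, each of the `n_samples` futures is a distinct plausible continuation of the same observed motion.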
|
|
We provide a model for each dataset mentioned in the paper (AMASS, FreeMan, Human3.6M), plus a further model trained on AMASS with hand joints (AMASS-MANO).
|
|
<img src="https://cdn-uploads.huggingface.co/production/uploads/6501e39f192a9bf2226a864d/sIe8dJwlrWSMSnYiVFCpl.png" alt="drawing" width="600"/>
|
|
|
|
## Online demo
The model trained on AMASS is available in a demo workflow that predicts future motions from videos.
The demo extracts 3D human poses from video via Neural Localizer Fields ([NLF](https://istvansarandi.com/nlf/)) by Sarandi et al., and SkeletonDiffusion generates future motions conditioned on the extracted poses.
Although SkeletonDiffusion was not trained on real-world, noisy data, it handles most such cases reasonably well.
|
|
## Usage
|
|
### Direct use
You can use the model for any purpose permitted under the BSD 2-Clause License.
|
|
### Train and Inference
|
|
Please refer to our [GitHub](https://github.com/Ceveloper/SkeletonDiffusion/tree/main) codebase for both use cases.