---
title: Movimento
emoji: �
colorFrom: purple
colorTo: blue
sdk: gradio
sdk_version: 6.14.0
python_version: '3.12'
app_file: app.py
pinned: true
license: apache-2.0
short_description: Text-driven multi-character motion generation with Qwen LLM planning
---
# Movimento: Multi-Character Motion Generation

**Text-driven interactive motion synthesis** powered by **Qwen LLM planning** and **Kimodo diffusion models** on AMD hardware.
## Features

- 🎭 **Multi-character orchestration**: Synchronize motion for multiple characters in a single scene
- 🧠 **Qwen LLM planning**: Convert natural language prompts into structured motion scripts
- 💾 **BONES-SEED dataset**: Pre-trained motions for realistic human movement
- ⚡ **Real-time visualization**: Viser-based 3D motion preview with playback controls
- 🎯 **Interactive constraints**: Hands-on guidance for character interactions (hand pose, foot contact)
- 🔄 **Smooth transitions**: Automatic blending between motions with multiple transition policies (cut, overlap, hold, smooth)
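The transition policies listed above can be sketched as a blending function over motion clips. The function below is an illustrative assumption about how `cut`, `overlap`, and `hold` might combine two clips of per-frame joint positions; the app's actual scheduler may implement these differently.

```python
import numpy as np

def blend_transition(a: np.ndarray, b: np.ndarray, policy: str = "overlap",
                     overlap_frames: int = 8) -> np.ndarray:
    """Join two motion clips (frames x joints x 3) under a transition policy.

    Illustrative sketch: 'cut' concatenates directly, 'hold' repeats the last
    pose of `a` before `b` starts, and 'overlap'/'smooth' crossfade the shared
    window linearly.
    """
    if policy == "cut":
        return np.concatenate([a, b], axis=0)
    if policy == "hold":
        hold = np.repeat(a[-1:], overlap_frames, axis=0)
        return np.concatenate([a, hold, b], axis=0)
    # 'overlap' / 'smooth': linear crossfade over the shared window
    n = min(overlap_frames, len(a), len(b))
    w = np.linspace(0.0, 1.0, n).reshape(-1, *([1] * (a.ndim - 1)))
    mixed = (1 - w) * a[-n:] + w * b[:n]
    return np.concatenate([a[:-n], mixed, b[n:]], axis=0)
```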
## Usage

1. **Enter a multi-character scenario**: e.g., "Two characters walk together, then one sits down while the other stands."
2. **Review the motion script**: Qwen parses your prompt into character segments with timing and transitions
3. **Adjust parameters**: Configure transition styles, FPS, and postprocessing options
4. **Generate motion**: Diffusion models synthesize realistic human motion in real time
5. **Visualize and download**: Preview in 3D, adjust playback speed, export as BVH/FBX
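Step 2 reviews a structured motion script produced by the planner. The JSON shape and field names below are a hypothetical example of what such a script might look like; the app's actual schema may differ.

```python
import json

# Hypothetical planner output for the example prompt in step 1:
# two characters with timed, per-segment transitions.
script_json = """
{
  "characters": [
    {"name": "A", "segments": [
      {"prompt": "walk forward", "start": 0.0, "end": 4.0, "transition": "overlap"},
      {"prompt": "sit down",     "start": 4.0, "end": 7.0, "transition": "smooth"}
    ]},
    {"name": "B", "segments": [
      {"prompt": "walk forward", "start": 0.0, "end": 4.0, "transition": "overlap"},
      {"prompt": "stand idle",   "start": 4.0, "end": 7.0, "transition": "hold"}
    ]}
  ]
}
"""

def validate_script(raw: str) -> dict:
    """Parse a planner response and check each segment's timing is well-formed."""
    script = json.loads(raw)
    for ch in script["characters"]:
        for seg in ch["segments"]:
            assert seg["end"] > seg["start"], f"empty segment in {ch['name']}"
    return script

script = validate_script(script_json)
```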
## Architecture

```
User Prompt
    ↓
Qwen LLM (7B/3B/1.5B) - Script Planning
    ↓
DeterministicLoop Scheduler - Multi-character collision detection & resolution
    ↓
CharacterKimodoPlan - Segment→Motion mapping with transition policies
    ↓
Kimodo Diffusion Models - Per-character motion generation with constraints
    ↓
Viser Viewer - 3D preview & pose refinement
```
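The pipeline above can be sketched as a chain of stage functions. The signatures below are assumptions for illustration; the real stages (`DeterministicLoop`, `CharacterKimodoPlan`, the Kimodo models) live in the app's codebase and are not reproduced here.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Segment:
    prompt: str
    start: float
    end: float

def run_pipeline(user_prompt: str,
                 plan: Callable[[str], dict],
                 schedule: Callable[[dict], dict],
                 generate: Callable[[Segment], list]) -> dict:
    """Sketch of the stage order in the diagram: plan, schedule, generate."""
    script = plan(user_prompt)    # Qwen LLM: prompt -> motion script
    script = schedule(script)     # scheduler: detect & resolve collisions
    motions = {name: [generate(seg) for seg in segs]
               for name, segs in script["characters"].items()}
    return motions                # per-character frames, fed to the viewer
```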
## Technical Stack

- **LLM Planner**: Qwen2.5-7B via HuggingFace Inference API
- **Motion Model**: Kimodo (NVIDIA Labs) - text-to-motion diffusion
- **Dataset**: BONES-SEED (comprehensive human motion capture)
- **Scheduler**: Deterministic RNG-based conflict resolution for multi-character scenes
- **Infrastructure**: HuggingFace Spaces with AMD GPU support
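Deterministic RNG-based conflict resolution, as named in the stack above, typically means deriving each random stream from stable scene identifiers so reruns are reproducible. The derivation below is a minimal sketch of that idea; the app's actual seeding scheme may differ.

```python
import hashlib
import random

def seeded_rng(scene_id: str, character: str, segment: int) -> random.Random:
    """Derive a deterministic per-segment RNG from stable identifiers,
    so multi-character conflict resolution replays identically across runs."""
    key = f"{scene_id}:{character}:{segment}".encode()
    # Hash the key to a 64-bit seed; same inputs always yield the same stream.
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return random.Random(seed)
```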
## Documentation

- [GitHub Repository](https://github.com/RydlrCS/kimodo)
- [Kimodo Paper & Model](https://research.nvidia.com)
- [BONES-SEED Dataset](https://huggingface.co/datasets/bones-studio/seed)
## Citation

If you use Movimento in your research, please cite:

```bibtex
@software{movimento2026,
  title={Movimento: Multi-Character Motion Generation with LLM Planning},
  author={Ted Iro Opiyo},
  year={2026},
  url={https://huggingface.co/spaces/lablab-ai-amd-developer-hackathon/movimento}
}
```

---

**Built for the lablab.ai × AMD Developer Hackathon 2026**