---
license: other
license_name: qualcomm-ai-hub-proprietary-license
license_link: >-
https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf
pipeline_tag: text-to-video
tags:
- efficient
- mobile video generation
- dit
- pyramidal diffusion
language:
- en
base_model:
- qualcomm/Neodragon
---
<script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
<div align="center" style="padding: 20px; border-radius: 10px;">
<div style="display: flex; align-items: center; justify-content: center; gap: 20px;">
<img src="assets/Neodragon_title.jpg" alt="neodragon logo"/>
</div>
<!-- Animated banner (WebP with fallback) -->
<p align="center">
<img src="assets/showcase_video_banner.webp" alt="Neodragon showcase banner">
</p>
<h1> Neodragon: Mobile Video Generation Using Diffusion Transformer </h1>
<!-- Badges -->
<a href="https://qualcomm-ai-research.github.io/neodragon">
<img src="https://img.shields.io/badge/Project-Page-Green" alt="Project Page">
</a>
<a href="https://arxiv.org/abs/2511.06055">
<img src="https://img.shields.io/badge/arXiv-2511.06055-b31b1b.svg" alt="arXiv">
</a>
<a href="https://huggingface.co/qualcomm/Neodragon">
<img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-blue" alt="Hugging Face Model">
</a>
<a href="https://openreview.net/forum?id=XBzIhhwv8d">
<img src="https://img.shields.io/badge/ICLR%202026-OpenReview-8A2BE2" alt="ICLR 2026 OpenReview">
</a>
<a href="https://github.com/qualcomm-ai-research/neodragon">
<img src="https://img.shields.io/badge/GitHub-Code-181717?logo=github&logoColor=white" alt="GitHub Code">
</a>
**[Qualcomm AI Research](https://www.qualcomm.com/research/artificial-intelligence)**
[Animesh Karnewar](https://akanimax.github.io),
[Denis Korzhenkov](https://scholar.google.com/citations?user=ypspak0AAAAJ),
[Ioannis Lelekas](https://nl.linkedin.com/in/ioannis-lelekas-609bb5151),
[Noor Fathima](https://scholar.google.com/citations?user=M9BUCaUAAAAJ&hl=en),
[Adil Karjauv](https://scholar.google.com/citations?user=bN7UGiYAAAAJ&hl=en),
[Hanwen Xiong](#),
[Vancheeswaran Vaidyanathan](https://www.linkedin.com/in/vancheeswaran-vaidyanathan),
[Will Zeng](https://scholar.google.com/citations?user=B_fh4ioAAAAJ&hl=en),
[Rafael Esteves](https://www.linkedin.com/in/rafael-esteves-124353145),
[Tushar Singhal](https://www.linkedin.com/in/tushar-singhal),
[Fatih Porikli](https://scholar.google.com/citations?user=VpB8NZ8AAAAJ&hl=en),
[Mohsen Ghafoorian](https://mohsenghafoorian.github.io),
[Amirhossein Habibian](https://habibian.github.io/)
</div>
```bibtex
@article{karnewar2025neodragon,
author = {Animesh Karnewar and Denis Korzhenkov and Ioannis Lelekas and Noor Fathima and Adil Karjauv and Hanwen Xiong and Vancheeswaran Vaidyanathan and Will Zeng and Rafael Esteves and Tushar Singhal and Fatih Porikli and Mohsen Ghafoorian and Amirhossein Habibian},
title = {Neodragon: Mobile Video Generation using Diffusion Transformer},
journal = {arXiv preprint arXiv:2511.06055},
year = {2025},
note = {Published in the Proceedings of ICLR 2026. OpenReview: \url{https://openreview.net/forum?id=XBzIhhwv8d}; arXiv technical-report: \url{https://arxiv.org/abs/2511.06055}}
}
```
<section class="section hero is-light">
<div class="container is-max-widescreen">
<div class="columns is-centered has-text-centered">
<div class="column is-11">
<div class="content has-text-justified">
<p>
We introduce Neodragon, a text-to-video system capable of generating 2s (49 frames @24 fps) videos
at a resolution of <code>[640×1024]</code> directly on a <strong>Qualcomm Hexagon NPU</strong> in a
record <strong>~6.7s</strong> (7 FPS). Unlike existing transformer-based text-to-video generation models
designed for offline use, <strong>Neodragon</strong> is the first to be specifically optimized for mobile
hardware to achieve efficient, low-cost, and high-fidelity video synthesis.
</p>
<ul>
<li>
<strong>Replacing the original large 4.762B <em>T5</em><sub>XXL</sub> Text-Encoder</strong>
with a much smaller 0.2B <em>DT5</em> (DistilT5) with minimal quality loss, enabling the entire model
to run without CPU offloading. This is achieved through a novel Text-Encoder Distillation
procedure that uses only generative text-prompt data and <em>does not</em> require any image or video data.
</li>
<li>
<strong>Proposing an Asymmetric Decoder Distillation approach</strong> which allows us to replace the native
codec-latent-VAE decoder with a more efficient one, without disturbing the generative latent-space of the
video generation pipeline.
</li>
<li>
<strong>Pruning of MMDiT blocks</strong> within the denoiser backbone based on their relative importance,
with recovery of original performance through a two-stage distillation process.
</li>
<li>
<strong>Reducing the NFE (Number of Function Evaluations) requirement</strong> of the denoiser by performing
step distillation using a technique adapted from DMD for <em>pyramidal</em> flow-matching, thereby significantly
accelerating video generation.
</li>
</ul>
<p>
When paired with an optimized SSD1B first-frame image generator and QuickSRNet for 2×
super-resolution, our end-to-end <strong>Neodragon</strong> system becomes a highly parameter
(<strong>4.945B</strong> full model), memory (<strong>3.5GB</strong> peak RAM usage), and
runtime (<strong>6.7s</strong> E2E latency) efficient mobile-friendly model, while achieving a <em>VBench</em>
total score of <strong>81.61</strong>, yielding high-fidelity generated videos.
</p>
<p>
By enabling low-cost, private, and on-device text-to-video synthesis, <strong>Neodragon</strong> democratizes
AI-based video content creation, empowering creators to generate high-quality videos without reliance on cloud services.
</p>
<p>
Inference code is available at:
<a href="https://github.com/qualcomm-ai-research/neodragon">
https://github.com/qualcomm-ai-research/neodragon
</a>
</p>
</div>
</div>
</div>
</div>
</section>
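To make the NFE (number of function evaluations) terminology concrete, here is a toy, self-contained sketch of a flow-matching Euler sampler, where the NFE is simply the number of denoiser calls spent integrating from noise to data. The constant velocity field below is a hand-picked illustrative assumption, not Neodragon's learned denoiser; in the real system, step distillation (adapted from DMD for pyramidal flow-matching) is what preserves quality at a small NFE budget.

```python
import numpy as np

def euler_sample(x0, velocity_fn, nfe):
    # Integrate dx/dt = v(x, t) from t=0 to t=1 with `nfe` Euler steps.
    # Each step costs one call to the velocity model, so NFE = denoiser calls.
    x, dt = x0.copy(), 1.0 / nfe
    for i in range(nfe):
        t = i * dt
        x = x + dt * velocity_fn(x, t)
    return x

# Toy "denoiser": a constant velocity field pointing from noise to data.
# For this field Euler integration is exact at any NFE, so a small budget
# matches a large one; a learned, curved field needs distillation for that.
target = np.array([1.0, -2.0, 3.0])
noise = np.zeros(3)
velocity = lambda x, t: target - noise

x_many = euler_sample(noise, velocity, nfe=50)  # typical diffusion budget
x_few = euler_sample(noise, velocity, nfe=4)    # step-distilled budget
assert np.allclose(x_many, x_few)
```

Reducing the NFE cuts end-to-end latency roughly in proportion to the number of denoiser calls removed, which is why step distillation dominates the runtime savings on the NPU.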
# How to Run Inference
Please refer to: https://github.com/qualcomm-ai-research/neodragon
## Model Description
- **Developed by:** Qualcomm AI Research, Generative Vision group, Amsterdam, Netherlands
- **Model type:** Mobile Video Generation with efficient pyramidal Diffusion Transformer
- **Model size:** 4.945B parameters (full package)
- **Model precision:** torch.bfloat16 (BF16)
- **Model resolution:** This model generates 49-frame (2s @ 24fps) videos at [320 x 512] resolution directly on a Snapdragon-powered mobile phone; a 2x QuickSRNet super-resolution stage upscales the output to [640 x 1024].
- **Model Description:** A text-to-video model that generates videos from the provided text prompts.
It is a Diffusion Transformer that uses our finetuned TinyAEHV Auto-Encoder with 8x8x8 spatio-temporally compressed latent features ([TinyAEHV](https://github.com/madebyollin/taehv)).
- **Resources for more information:** See our [GitHub Repository](https://github.com/qualcomm-ai-research/Neodragon), the [technical report on arXiv](https://arxiv.org/abs/2511.06055), and the [ICLR 2026 OpenReview page](https://openreview.net/forum?id=XBzIhhwv8d).
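As a small arithmetic sketch, the 8x8x8 spatio-temporal compression of the autoencoder determines the latent grid the Diffusion Transformer actually operates on. The spatial factors below follow directly from the stated 320 x 512 resolution; the first-frame handling follows the common causal video-VAE convention and is an assumption here, not a confirmed detail of TinyAEHV.

```python
# Assumed shapes: a [49, 320, 512] video and an 8x8x8 compression factor.
frames, height, width = 49, 320, 512
t_factor = s_factor = 8

latent_h = height // s_factor  # 320 / 8 = 40
latent_w = width // s_factor   # 512 / 8 = 64

# Causal video VAEs typically encode the first frame on its own and
# compress the remaining frames temporally (assumption for illustration):
latent_frames = 1 + (frames - 1) // t_factor

print(latent_frames, latent_h, latent_w)  # 7 40 64
```

This roughly 512x reduction in spatio-temporal tokens is what makes running the denoiser backbone on the Hexagon NPU tractable.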
## License/Terms of Use
This model is released under the terms and conditions of the [Qualcomm AI Hub Proprietary License](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).
## Uses
The model is intended for research purposes. Possible research areas and tasks include:
- Research on efficient Transformer-based or non-Transformer-based backbone architectures for video generation.
- Generation of image/video artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
Excluded uses are described below.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism.
- The model cannot render complex legible text.
- The model cannot produce videos with accurate, physically compliant motion.
### Bias
While the capabilities of the presented mobile video generation model are impressive, the model can also reinforce or exacerbate social biases inherited from its foundation model, [Pyramidal-Flow](https://arxiv.org/abs/2410.05954).