| { |
| "base_model": "Qwen/Qwen2.5-Omni-3B", |
| "tree": [ |
| { |
| "model_id": "Qwen/Qwen2.5-Omni-3B", |
| "gated": "False", |
| "card": "---\nlicense: other\nlicense_name: qwen-research\nlicense_link: LICENSE\nlanguage:\n- en\ntags:\n- multimodal\nlibrary_name: transformers\npipeline_tag: any-to-any\n---\n\n# Qwen2.5-Omni\n<a href=\"https://chat.qwen.ai/\" target=\"_blank\" style=\"margin: 2px;\">\n <img alt=\"Chat\" src=\"https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5\" style=\"display: inline-block; vertical-align: middle;\"/>\n</a>\n\n\n## Overview \n### Introduction\nQwen2.5-Omni is an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. \n\n<p align=\"center\">\n <img src=\"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/qwen_omni.png\" width=\"80%\"/>\n<p>\n\n### Key Features\n\n* **Omni and Novel Architecture**: We propose the Thinker-Talker architecture, an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. We also propose a novel position embedding, named TMRoPE (Time-aligned Multimodal RoPE), to synchronize the timestamps of video inputs with audio.\n\n* **Real-Time Voice and Video Chat**: An architecture designed for fully real-time interactions, supporting chunked input and immediate output.\n\n* **Natural and Robust Speech Generation**: Surpasses many existing streaming and non-streaming alternatives, demonstrating superior robustness and naturalness in speech generation.\n\n* **Strong Performance Across Modalities**: Exhibits exceptional performance across all modalities when benchmarked against similarly sized single-modality models. Qwen2.5-Omni outperforms the similarly sized Qwen2-Audio in audio capabilities and achieves performance comparable to Qwen2.5-VL-7B.\n\n* **Excellent End-to-End Speech Instruction Following**: Qwen2.5-Omni shows performance in end-to-end speech instruction following that rivals its effectiveness with text inputs, as evidenced by benchmarks such as MMLU and GSM8K.\n\n### Model Architecture\n\n<p align=\"center\">\n <img src=\"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/overview.png\" width=\"80%\"/>\n<p>\n\n### Performance\n\nWe conducted a comprehensive evaluation of Qwen2.5-Omni, which demonstrates strong performance across all modalities when compared with similarly sized single-modality models such as Qwen2.5-VL-7B and Qwen2-Audio, as well as closed-source models such as Gemini-1.5-Pro. In tasks requiring the integration of multiple modalities, such as OmniBench, Qwen2.5-Omni achieves state-of-the-art performance. 
Furthermore, in single-modality tasks, it excels in areas including speech recognition (Common Voice), translation (CoVoST2), audio understanding (MMAU), image reasoning (MMMU, MMStar), video understanding (MVBench), and speech generation (Seed-tts-eval and subjective naturalness).\n\n<p align=\"center\">\n <img src=\"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/bar.png\" width=\"80%\"/>\n<p>\n\n<details>\n<summary>Multimodality -> Text</summary>\n\n<table class=\"tg\"><thead>\n <tr>\n <th class=\"tg-0lax\">Datasets</th>\n <th class=\"tg-0lax\">Model</th>\n <th class=\"tg-0lax\">Performance</th>\n </tr></thead>\n<tbody>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"10\">OmniBench<br>Speech | Sound Event | Music | Avg</td>\n <td class=\"tg-0lax\">Gemini-1.5-Pro</td>\n <td class=\"tg-0lax\">42.67%|42.26%|46.23%|42.91%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MIO-Instruct</td>\n <td class=\"tg-0lax\">36.96%|33.58%|11.32%|33.80%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">AnyGPT (7B)</td>\n <td class=\"tg-0lax\">17.77%|20.75%|13.21%|18.04%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">video-SALMONN</td>\n <td class=\"tg-0lax\">34.11%|31.70%|<strong>56.60%</strong>|35.64%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">UnifiedIO2-xlarge</td>\n <td class=\"tg-0lax\">39.56%|36.98%|29.25%|38.00%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">UnifiedIO2-xxlarge</td>\n <td class=\"tg-0lax\">34.24%|36.98%|24.53%|33.98%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MiniCPM-o</td>\n <td class=\"tg-0lax\">-|-|-|40.50%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Baichuan-Omni-1.5</td>\n <td class=\"tg-0lax\">-|-|-|42.90%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">52.14%|52.08%|52.83%|52.19%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>55.25%</strong>|<strong>60.00%</strong>|52.83%|<strong>56.13%</strong></td>\n </tr>\n</tbody></table>\n</details>\n\n\n<details>\n<summary>Audio -> Text</summary>\n\n\n<table class=\"tg\"><thead>\n <tr>\n <th class=\"tg-0lax\">Datasets</th>\n <th class=\"tg-0lax\">Model</th>\n <th class=\"tg-0lax\">Performance</th>\n </tr></thead>\n<tbody>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">ASR</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"12\">Librispeech<br>dev-clean | dev other | test-clean | test-other</td>\n <td class=\"tg-0lax\">SALMONN</td>\n <td class=\"tg-0lax\">-|-|2.1|4.9</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">SpeechVerse</td>\n <td class=\"tg-0lax\">-|-|2.1|4.4</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Whisper-large-v3</td>\n <td class=\"tg-0lax\">-|-|1.8|3.6</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Llama-3-8B</td>\n <td class=\"tg-0lax\">-|-|-|3.4</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Llama-3-70B</td>\n <td class=\"tg-0lax\">-|-|-|3.1</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Seed-ASR-Multilingual</td>\n <td class=\"tg-0lax\">-|-|<strong>1.6</strong>|<strong>2.8</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MiniCPM-o</td>\n <td class=\"tg-0lax\">-|-|1.7|-</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MinMo</td>\n <td class=\"tg-0lax\">-|-|1.7|3.9</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen-Audio</td>\n <td class=\"tg-0lax\">1.8|4.0|2.0|4.2</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\"><strong>1.3</strong>|<strong>3.4</strong>|<strong>1.6</strong>|3.6</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">2.0|4.1|2.2|4.5</td>\n </tr>\n <tr>\n <td 
class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\">1.6|3.5|1.8|3.4</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"5\">Common Voice 15<br>en | zh | yue | fr</td>\n <td class=\"tg-0lax\">Whisper-large-v3</td>\n <td class=\"tg-0lax\">9.3|12.8|10.9|10.8</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MinMo</td>\n <td class=\"tg-0lax\">7.9|6.3|6.4|8.5</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\">8.6|6.9|<strong>5.9</strong>|9.6</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">9.1|6.0|11.6|9.6</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>7.6</strong>|<strong>5.2</strong>|7.3|<strong>7.5</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"8\">Fleurs<br>zh | en</td>\n <td class=\"tg-0lax\">Whisper-large-v3</td>\n <td class=\"tg-0lax\">7.7|4.1</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Seed-ASR-Multilingual</td>\n <td class=\"tg-0lax\">-|<strong>3.4</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Megrez-3B-Omni</td>\n <td class=\"tg-0lax\">10.8|-</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MiniCPM-o</td>\n <td class=\"tg-0lax\">4.4|-</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MinMo</td>\n <td class=\"tg-0lax\">3.0|3.8</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\">7.5|-</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">3.2|5.4</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>3.0</strong>|4.1</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"6\">Wenetspeech<br>test-net | test-meeting</td>\n <td class=\"tg-0lax\">Seed-ASR-Chinese</td>\n <td class=\"tg-0lax\"><strong>4.7|5.7</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Megrez-3B-Omni</td>\n <td class=\"tg-0lax\">-|16.4</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MiniCPM-o</td>\n <td class=\"tg-0lax\">6.9|-</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MinMo</td>\n <td class=\"tg-0lax\">6.8|7.4</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">6.3|8.1</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\">5.9|7.7</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"4\">Voxpopuli-V1.0-en</td>\n <td class=\"tg-0lax\">Llama-3-8B</td>\n <td class=\"tg-0lax\">6.2</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Llama-3-70B</td>\n <td class=\"tg-0lax\"><strong>5.7</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">6.6</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\">5.8</td>\n </tr>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">S2TT</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"9\">CoVoST2<br>en-de | de-en | en-zh | zh-en</td>\n <td class=\"tg-0lax\">SALMONN</td>\n <td class=\"tg-0lax\">18.6|-|33.1|-</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">SpeechLLaMA</td>\n <td class=\"tg-0lax\">-|27.1|-|12.3</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">BLSP</td>\n <td class=\"tg-0lax\">14.1|-|-|-</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MiniCPM-o</td>\n <td class=\"tg-0lax\">-|-|<strong>48.2</strong>|27.2</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MinMo</td>\n <td class=\"tg-0lax\">-|<strong>39.9</strong>|46.7|26.0</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen-Audio</td>\n <td class=\"tg-0lax\">25.1|33.9|41.5|15.7</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td 
class=\"tg-0lax\">29.9|35.2|45.2|24.4</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">28.3|38.1|41.4|26.6</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>30.2</strong>|37.7|41.4|<strong>29.4</strong></td>\n </tr>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">SER</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"6\">Meld</td>\n <td class=\"tg-0lax\">WavLM-large</td>\n <td class=\"tg-0lax\">0.542</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MiniCPM-o</td>\n <td class=\"tg-0lax\">0.524</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen-Audio</td>\n <td class=\"tg-0lax\">0.557</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\">0.553</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">0.558</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>0.570</strong></td>\n </tr>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">VSC</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"6\">VocalSound</td>\n <td class=\"tg-0lax\">CLAP</td>\n <td class=\"tg-0lax\">0.495</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Pengi</td>\n <td class=\"tg-0lax\">0.604</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen-Audio</td>\n <td class=\"tg-0lax\">0.929</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\"><strong>0.939</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">0.936</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>0.939</strong></td>\n </tr>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">Music</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"3\">GiantSteps Tempo</td>\n <td class=\"tg-0lax\">Llark-7B</td>\n <td class=\"tg-0lax\">0.86</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\"><strong>0.88</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>0.88</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"3\">MusicCaps</td>\n <td class=\"tg-0lax\">LP-MusicCaps</td>\n <td class=\"tg-0lax\">0.291|0.149|0.089|<strong>0.061</strong>|0.129|0.130</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">0.325|<strong>0.163</strong>|<strong>0.093</strong>|0.057|<strong>0.132</strong>|<strong>0.229</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>0.328</strong>|0.162|0.090|0.055|0.127|0.225</td>\n </tr>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">Audio Reasoning</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"4\">MMAU<br>Sound | Music | Speech | Avg</td>\n <td class=\"tg-0lax\">Gemini-Pro-V1.5</td>\n <td class=\"tg-0lax\">56.75|49.40|58.55|54.90</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\">54.95|50.98|42.04|49.20</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\"><strong>70.27</strong>|60.48|59.16|63.30</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\">67.87|<strong>69.16|59.76|65.60</strong></td>\n </tr>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">Voice Chatting</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"9\">VoiceBench<br>AlpacaEval | CommonEval | SD-QA | MMSU</td>\n <td class=\"tg-0lax\">Ultravox-v0.4.1-LLaMA-3.1-8B</td>\n <td 
class=\"tg-0lax\"><strong>4.55</strong>|3.90|53.35|47.17</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MERaLiON</td>\n <td class=\"tg-0lax\">4.50|3.77|55.06|34.95</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Megrez-3B-Omni</td>\n <td class=\"tg-0lax\">3.50|2.95|25.95|27.03</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Lyra-Base</td>\n <td class=\"tg-0lax\">3.85|3.50|38.25|49.74</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MiniCPM-o</td>\n <td class=\"tg-0lax\">4.42|<strong>4.15</strong>|50.72|54.78</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Baichuan-Omni-1.5</td>\n <td class=\"tg-0lax\">4.50|4.05|43.40|57.25</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\">3.74|3.43|35.71|35.72</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">4.32|4.00|49.37|50.23</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\">4.49|3.93|<strong>55.71</strong>|<strong>61.32</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"9\">VoiceBench<br>OpenBookQA | IFEval | AdvBench | Avg</td>\n <td class=\"tg-0lax\">Ultravox-v0.4.1-LLaMA-3.1-8B</td>\n <td class=\"tg-0lax\">65.27|<strong>66.88</strong>|98.46|71.45</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MERaLiON</td>\n <td class=\"tg-0lax\">27.23|62.93|94.81|62.91</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Megrez-3B-Omni</td>\n <td class=\"tg-0lax\">28.35|25.71|87.69|46.25</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Lyra-Base</td>\n <td class=\"tg-0lax\">72.75|36.28|59.62|57.66</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MiniCPM-o</td>\n <td class=\"tg-0lax\">78.02|49.25|97.69|71.69</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Baichuan-Omni-1.5</td>\n <td class=\"tg-0lax\">74.51|54.54|97.31|71.14</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\">49.45|26.33|96.73|55.35</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">74.73|42.10|98.85|68.81</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>81.10</strong>|52.87|<strong>99.42</strong>|<strong>74.12</strong></td>\n </tr>\n</tbody></table>\n</details>\n\n<details>\n<summary>Image -> Text</summary>\n\n| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Other Best | Qwen2.5-VL-7B | GPT-4o-mini | \n|--------------------------------|--------------|------------|------------|---------------|-------------|\n| MMMU<sub>val</sub> | 59.2 | 53.1 | 53.9 | 58.6 | **60.0** | \n| MMMU-Pro<sub>overall</sub> | 36.6 | 29.7 | - | **38.3** | 37.6 | \n| MathVista<sub>testmini</sub> | 67.9 | 59.4 | **71.9** | 68.2 | 52.5 | \n| MathVision<sub>full</sub> | 25.0 | 20.8 | 23.1 | **25.1** | - | \n| MMBench-V1.1-EN<sub>test</sub> | 81.8 | 77.8 | 80.5 | **82.6** | 76.0 | \n| MMVet<sub>turbo</sub> | 66.8 | 62.1 | **67.5** | 67.1 | 66.9 | \n| MMStar | **64.0** | 55.7 | **64.0** | 63.9 | 54.8 | \n| MME<sub>sum</sub> | 2340 | 2117 | **2372** | 2347 | 2003 | \n| MuirBench | 59.2 | 48.0 | - | **59.2** | - | \n| CRPE<sub>relation</sub> | **76.5** | 73.7 | - | 76.4 | - | \n| RealWorldQA<sub>avg</sub> | 70.3 | 62.6 | **71.9** | 68.5 | - | \n| MME-RealWorld<sub>en</sub> | **61.6** | 55.6 | - | 57.4 | - | \n| MM-MT-Bench | 6.0 | 5.0 | - | **6.3** | - | \n| AI2D | 83.2 | 79.5 | **85.8** | 83.9 | - | \n| TextVQA<sub>val</sub> | 84.4 | 79.8 | 83.2 | **84.9** | - | \n| DocVQA<sub>test</sub> | 95.2 | 93.3 | 93.5 | **95.7** | - | \n| ChartQA<sub>test Avg</sub> | 85.3 | 82.8 | 84.9 | **87.3** | - | \n| OCRBench_V2<sub>en</sub> | **57.8** | 
51.7 | - | 56.3 | - | \n\n\n| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Qwen2.5-VL-7B | Grounding DINO | Gemini 1.5 Pro | \n|--------------------------|--------------|---------------|---------------|----------------|----------------|\n| Refcoco<sub>val</sub> | 90.5 | 88.7 | 90.0 | **90.6** | 73.2 | \n| Refcoco<sub>textA</sub> | **93.5** | 91.8 | 92.5 | 93.2 | 72.9 | \n| Refcoco<sub>textB</sub> | 86.6 | 84.0 | 85.4 | **88.2** | 74.6 | \n| Refcoco+<sub>val</sub> | 85.4 | 81.1 | 84.2 | **88.2** | 62.5 | \n| Refcoco+<sub>textA</sub> | **91.0** | 87.5 | 89.1 | 89.0 | 63.9 | \n| Refcoco+<sub>textB</sub> | **79.3** | 73.2 | 76.9 | 75.9 | 65.0 | \n| Refcocog+<sub>val</sub> | **87.4** | 85.0 | 87.2 | 86.1 | 75.2 | \n| Refcocog+<sub>test</sub> | **87.9** | 85.1 | 87.2 | 87.0 | 76.2 | \n| ODinW | 42.4 | 39.2 | 37.3 | **55.0** | 36.7 | \n| PointGrounding | 66.5 | 46.2 | **67.3** | - | - | \n</details>\n\n\n<details>\n<summary>Video(without audio) -> Text</summary>\n\n| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Other Best | Qwen2.5-VL-7B | GPT-4o-mini | \n|-----------------------------|--------------|------------|------------|---------------|-------------|\n| Video-MME<sub>w/o sub</sub> | 64.3 | 62.0 | 63.9 | **65.1** | 64.8 | \n| Video-MME<sub>w sub</sub> | **72.4** | 68.6 | 67.9 | 71.6 | - | \n| MVBench | **70.3** | 68.7 | 67.2 | 69.6 | - | \n| EgoSchema<sub>test</sub> | **68.6** | 61.4 | 63.2 | 65.0 | - | \n</details>\n\n<details>\n<summary>Zero-shot Speech Generation</summary>\n\n\n<table class=\"tg\"><thead>\n <tr>\n <th class=\"tg-0lax\">Datasets</th>\n <th class=\"tg-0lax\">Model</th>\n <th class=\"tg-0lax\">Performance</th>\n </tr></thead>\n<tbody>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">Content Consistency</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"11\">SEED<br>test-zh | test-en | test-hard </td>\n <td class=\"tg-0lax\">Seed-TTS_ICL</td>\n <td class=\"tg-0lax\">1.11 | 2.24 | 7.58</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Seed-TTS_RL</td>\n <td class=\"tg-0lax\"><strong>1.00</strong> | 1.94 | <strong>6.42</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MaskGCT</td>\n <td class=\"tg-0lax\">2.27 | 2.62 | 10.27</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">E2_TTS</td>\n <td class=\"tg-0lax\">1.97 | 2.19 | -</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">F5-TTS</td>\n <td class=\"tg-0lax\">1.56 | <strong>1.83</strong> | 8.67</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">CosyVoice 2</td>\n <td class=\"tg-0lax\">1.45 | 2.57 | 6.83</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">CosyVoice 2-S</td>\n <td class=\"tg-0lax\">1.45 | 2.38 | 8.08</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B_ICL</td>\n <td class=\"tg-0lax\">1.95 | 2.87 | 9.92</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B_RL</td>\n <td class=\"tg-0lax\">1.58 | 2.51 | 7.86</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B_ICL</td>\n <td class=\"tg-0lax\">1.70 | 2.72 | 7.97</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B_RL</td>\n <td class=\"tg-0lax\">1.42 | 2.32 | 6.54</td>\n </tr>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">Speaker Similarity</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"11\">SEED<br>test-zh | test-en | test-hard </td>\n <td class=\"tg-0lax\">Seed-TTS_ICL</td>\n <td class=\"tg-0lax\">0.796 | 0.762 | 0.776</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Seed-TTS_RL</td>\n <td class=\"tg-0lax\"><strong>0.801</strong> | <strong>0.766</strong> | <strong>0.782</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MaskGCT</td>\n <td class=\"tg-0lax\">0.774 | 
0.714 | 0.748</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">E2_TTS</td>\n <td class=\"tg-0lax\">0.730 | 0.710 | -</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">F5-TTS</td>\n <td class=\"tg-0lax\">0.741 | 0.647 | 0.713</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">CosyVoice 2</td>\n <td class=\"tg-0lax\">0.748 | 0.652 | 0.724</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">CosyVoice 2-S</td>\n <td class=\"tg-0lax\">0.753 | 0.654 | 0.732</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B_ICL</td>\n <td class=\"tg-0lax\">0.741 | 0.635 | 0.748</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B_RL</td>\n <td class=\"tg-0lax\">0.744 | 0.635 | 0.746</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B_ICL</td>\n <td class=\"tg-0lax\">0.752 | 0.632 | 0.747</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B_RL</td>\n <td class=\"tg-0lax\">0.754 | 0.641 | 0.752</td>\n </tr>\n</tbody></table>\n</details>\n\n<details>\n<summary>Text -> Text</summary>\n\n| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Qwen2.5-7B | Qwen2.5-3B | Qwen2-7B | Llama3.1-8B | Gemma2-9B | \n|-----------------------------------|-----------|------------|------------|------------|------------|-------------|-----------|\n| MMLU-Pro | 47.0 | 40.4 | **56.3** | 43.7 | 44.1 | 48.3 | 52.1 | \n| MMLU-redux | 71.0 | 60.9 | **75.4** | 64.4 | 67.3 | 67.2 | 72.8 | \n| LiveBench<sub>0831</sub> | 29.6 | 22.3 | **35.9** | 26.8 | 29.2 | 26.7 | 30.6 | \n| GPQA | 30.8 | 34.3 | **36.4** | 30.3 | 34.3 | 32.8 | 32.8 | \n| MATH | 71.5 | 63.6 | **75.5** | 65.9 | 52.9 | 51.9 | 44.3 | \n| GSM8K | 88.7 | 82.6 | **91.6** | 86.7 | 85.7 | 84.5 | 76.7 | \n| HumanEval | 78.7 | 70.7 | **84.8** | 74.4 | 79.9 | 72.6 | 68.9 | \n| MBPP | 73.2 | 70.4 | **79.2** | 72.7 | 67.2 | 69.6 | 74.9 | \n| MultiPL-E | 65.8 | 57.6 | **70.4** | 60.2 | 59.1 | 50.7 | 53.4 | \n| LiveCodeBench<sub>2305-2409</sub> | 24.6 | 16.5 | **28.7** | 19.9 | 23.9 | 8.3 | 18.9 | \n</details>\n\n## Quickstart\n\nBelow, we provide simple examples showing how to use Qwen2.5-Omni with \ud83e\udd17 Transformers. The code for Qwen2.5-Omni has been merged into the latest Hugging Face `transformers`, and we advise you to build from source with the following command:\n```\npip uninstall transformers\npip install git+https://github.com/huggingface/transformers@v4.51.3-Qwen2.5-Omni-preview\npip install accelerate\n```\nOtherwise, you might encounter the following error:\n```\nKeyError: 'qwen2_5_omni'\n```\n\n\nWe offer a toolkit to help you handle various types of audio and visual input more conveniently, as if you were using an API. This includes base64, URLs, and interleaved audio, images, and videos. You can install it using the following command; make sure your system has `ffmpeg` installed:\n\n```bash\n# It's highly recommended to use the `[decord]` feature for faster video loading.\npip install qwen-omni-utils[decord] -U\n```\n\nIf you are not using Linux, you might not be able to install `decord` from PyPI. In that case, you can use `pip install qwen-omni-utils -U`, which will fall back to using torchvision for video processing. However, you can still [install decord from source](https://github.com/dmlc/decord?tab=readme-ov-file#install-from-source) so that decord is used when loading videos.
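\n\nAs a quick illustration of those input forms, here is a minimal sketch of a single conversation mixing a local path, a URL, and a base64 source (the exact data-URI prefix shown is an assumption and may vary by toolkit version):\n\n```python\n# Hypothetical media sources, shown only to illustrate the accepted forms;\n# any of the three styles below can appear wherever a media entry is expected.\nconversation = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"audio\", \"audio\": \"/path/to/local_audio.wav\"}, # local file path\n {\"type\": \"image\", \"image\": \"https://example.com/image.jpg\"}, # URL\n {\"type\": \"video\", \"video\": \"data:video/mp4;base64,...\"}, # base64 (assumed data-URI form)\n {\"type\": \"text\", \"text\": \"Describe what you can see and hear.\"},\n ],\n }\n]\n```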
\n\n### \ud83e\udd17 Transformers Usage\n\nHere is a code snippet showing how to use the chat model with `transformers` and `qwen_omni_utils`:\n\n```python\nimport soundfile as sf\n\nfrom transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor\nfrom qwen_omni_utils import process_mm_info\n\n# default: Load the model on the available device(s)\nmodel = Qwen2_5OmniForConditionalGeneration.from_pretrained(\"Qwen/Qwen2.5-Omni-3B\", torch_dtype=\"auto\", device_map=\"auto\")\n\n# We recommend enabling flash_attention_2 for better acceleration and memory saving.\n# model = Qwen2_5OmniForConditionalGeneration.from_pretrained(\n# \"Qwen/Qwen2.5-Omni-3B\",\n# torch_dtype=\"auto\",\n# device_map=\"auto\",\n# attn_implementation=\"flash_attention_2\",\n# )\n\nprocessor = Qwen2_5OmniProcessor.from_pretrained(\"Qwen/Qwen2.5-Omni-3B\")\n\nconversation = [\n {\n \"role\": \"system\",\n \"content\": [\n {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n ],\n },\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"video\", \"video\": \"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/draw.mp4\"},\n ],\n },\n]\n\n# set use audio in video\nUSE_AUDIO_IN_VIDEO = True\n\n# Preparation for inference\ntext = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)\naudios, images, videos = process_mm_info(conversation, use_audio_in_video=USE_AUDIO_IN_VIDEO)\ninputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors=\"pt\", padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO)\ninputs = inputs.to(model.device).to(model.dtype)\n\n# Inference: Generation of the output text and audio\ntext_ids, audio = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO)\n\ntext = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)\nprint(text)\nsf.write(\n \"output.wav\",\n audio.reshape(-1).detach().cpu().numpy(),\n samplerate=24000,\n)\n```\n\n<details>\n<summary>Minimum GPU memory requirements</summary>\n\n| Model | Precision | 15 s Video | 30 s Video | 60 s Video |\n|--------------|-----------| ------------- | ------------- | ------------------ |\n| Qwen2.5-Omni-3B | FP32 | 89.10 GB | Not Recommended | Not Recommended |\n| Qwen2.5-Omni-3B | BF16 | 18.38 GB | 22.43 GB | 28.22 GB |\n| Qwen2.5-Omni-7B | FP32 | 93.56 GB | Not Recommended | Not Recommended |\n| Qwen2.5-Omni-7B | BF16 | 31.11 GB | 41.85 GB | 60.19 GB |\n\nNote: The table above presents the theoretical minimum memory requirements for inference with `transformers`; the `BF16` figures were tested with `attn_implementation=\"flash_attention_2\"`. In practice, actual memory usage is typically at least 1.2 times higher. For more information, see the linked resource [here](https://huggingface.co/docs/accelerate/main/en/usage_guides/model_size_estimator).
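\n\nAs a rough planning aid, you can scale the theoretical BF16 figures above by the ~1.2x practical overhead the note mentions. This is a back-of-the-envelope sketch using the table's numbers, not a guarantee:\n\n```python\n# Scale the theoretical BF16 minimums (GB) from the table by the ~1.2x\n# practical overhead mentioned in the note; actual usage varies by setup.\nbf16_min_gb = {\n    \"Qwen2.5-Omni-3B\": {15: 18.38, 30: 22.43, 60: 28.22},\n    \"Qwen2.5-Omni-7B\": {15: 31.11, 30: 41.85, 60: 60.19},\n}\nOVERHEAD = 1.2\nfor model_name, by_length in bf16_min_gb.items():\n    for seconds, gb in by_length.items():\n        print(f\"{model_name}, {seconds}s video: plan for ~{gb * OVERHEAD:.1f} GB\")\n```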
\n</details> \n\n<details>\n<summary>Video URL resource usage</summary>\n\nVideo URL compatibility largely depends on the third-party library version. The details are in the table below. To change the backend, set `FORCE_QWENVL_VIDEO_READER=torchvision` or `FORCE_QWENVL_VIDEO_READER=decord` if you prefer not to use the default one.\n\n| Backend | HTTP | HTTPS |\n|-------------|------|-------|\n| torchvision >= 0.19.0 | \u2705 | \u2705 |\n| torchvision < 0.19.0 | \u274c | \u274c |\n| decord | \u2705 | \u274c |
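\n\nOne way to select a backend is through the environment. This is a minimal sketch; it assumes the variable should be set before any video is loaded, so that the reader choice is picked up:\n\n```python\nimport os\n\n# Force the video reader backend; accepted values per the table above.\nos.environ[\"FORCE_QWENVL_VIDEO_READER\"] = \"torchvision\"\n\n# Import after setting the variable so the setting takes effect.\nfrom qwen_omni_utils import process_mm_info\n```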
\n</details>\n\n<details>\n<summary>Batch inference</summary>\n\nThe model can batch inputs composed of mixed samples of various types, such as text, images, audio, and videos, when `return_audio=False` is set. Here is an example.\n\n```python\n# Sample messages for batch inference\n\n# Conversation with video only\nconversation1 = [\n {\n \"role\": \"system\",\n \"content\": [\n {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n ],\n },\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"video\", \"video\": \"/path/to/video.mp4\"},\n ]\n }\n]\n\n# Conversation with audio only\nconversation2 = [\n {\n \"role\": \"system\",\n \"content\": [\n {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n ],\n },\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"audio\", \"audio\": \"/path/to/audio.wav\"},\n ]\n }\n]\n\n# Conversation with pure text\nconversation3 = [\n {\n \"role\": \"system\",\n \"content\": [\n {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n ],\n },\n {\n \"role\": \"user\",\n \"content\": \"who are you?\"\n }\n]\n\n\n# Conversation with mixed media\nconversation4 = [\n {\n \"role\": \"system\",\n \"content\": [\n {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n ],\n },\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"/path/to/image.jpg\"},\n {\"type\": \"video\", \"video\": \"/path/to/video.mp4\"},\n {\"type\": \"audio\", \"audio\": \"/path/to/audio.wav\"},\n {\"type\": \"text\", \"text\": \"What elements can you see and hear in these media?\"},\n ],\n }\n]\n\n# Combine messages for batch processing\nconversations = [conversation1, conversation2, conversation3, conversation4]\n\n# set use audio in video\nUSE_AUDIO_IN_VIDEO = True\n\n# Preparation for batch inference\ntext = processor.apply_chat_template(conversations, add_generation_prompt=True, tokenize=False)\naudios, images, videos = process_mm_info(conversations, use_audio_in_video=USE_AUDIO_IN_VIDEO)\n\ninputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors=\"pt\", padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO)\ninputs = inputs.to(model.device).to(model.dtype)\n\n# Batch Inference\ntext_ids = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO, return_audio=False)\ntext = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)\nprint(text)\n```\n</details>\n\n### Usage Tips\n\n#### Prompt for audio output\nIf users need audio output, the system prompt must be set to \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"; otherwise, the audio output may not work as expected.\n```\n{\n \"role\": \"system\",\n \"content\": [\n {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n ],\n}\n```\n#### Use audio in video\nDuring multimodal interaction, the videos provided by users are often accompanied by audio (such as questions about the content of the video, or sounds generated by events in the video). This information helps the model provide a better interactive experience, so we offer the following options for users to decide whether to use the audio in a video.\n```python\n# first place, in data preprocessing\naudios, images, videos = process_mm_info(conversations, use_audio_in_video=True)\n```\n```python\n# second place, in model processor\ninputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors=\"pt\",\n padding=True, use_audio_in_video=True)\n```\n```python\n# third place, in model inference\ntext_ids, audio = model.generate(**inputs, use_audio_in_video=True)\n```\nIt is worth noting that during a multi-round conversation, the `use_audio_in_video` parameter must be set to the same value in all of these places; otherwise, unexpected results will occur.\n\n#### Use audio output or not\n\nThe model supports both text and audio outputs. If users do not need audio output, they can call `model.disable_talker()` after initializing the model. This option saves about 2 GB of GPU memory, but the `return_audio` option of the `generate` function can then only be set to `False`.\n```python\nmodel = Qwen2_5OmniForConditionalGeneration.from_pretrained(\n \"Qwen/Qwen2.5-Omni-3B\",\n torch_dtype=\"auto\",\n device_map=\"auto\"\n)\nmodel.disable_talker()\n```\n\nFor a more flexible experience, we recommend deciding whether to return audio each time the `generate` function is called. If `return_audio` is set to `False`, the model will return only text outputs, which makes text responses faster.\n\n```python\nmodel = Qwen2_5OmniForConditionalGeneration.from_pretrained(\n \"Qwen/Qwen2.5-Omni-3B\",\n torch_dtype=\"auto\",\n device_map=\"auto\"\n)\n...\ntext_ids = model.generate(**inputs, return_audio=False)\n```\n\n#### Change voice type of output audio\nQwen2.5-Omni supports changing the voice of the output audio. The `\"Qwen/Qwen2.5-Omni-3B\"` checkpoint supports the following two voice types:\n\n| Voice Type | Gender | Description |\n|------------|--------|-------------|\n| Chelsie | Female | A honeyed, velvety voice that carries a gentle warmth and luminous clarity.|\n| Ethan | Male | A bright, upbeat voice with infectious energy and a warm, approachable vibe.|\n\nUsers can use the `speaker` parameter of the `generate` function to specify the voice type. If `speaker` is not specified, the voice type defaults to `Chelsie`.\n\n```python\ntext_ids, audio = model.generate(**inputs, speaker=\"Chelsie\")\n```\n\n```python\ntext_ids, audio = model.generate(**inputs, speaker=\"Ethan\")\n```\n\n#### Flash-Attention 2 to speed up generation\n\nFirst, make sure to install the latest version of Flash Attention 2:\n\n```bash\npip install -U flash-attn --no-build-isolation\n```\n\nAlso, you should have hardware that is compatible with FlashAttention 2. 
Read more about it in the official documentation of the [flash attention repository](https://github.com/Dao-AILab/flash-attention). FlashAttention-2 can only be used when a model is loaded in `torch.float16` or `torch.bfloat16`.\n\nTo load and run a model using FlashAttention-2, add `attn_implementation=\"flash_attention_2\"` when loading the model:\n\n```python\nimport torch\nfrom transformers import Qwen2_5OmniForConditionalGeneration\n\nmodel = Qwen2_5OmniForConditionalGeneration.from_pretrained(\n \"Qwen/Qwen2.5-Omni-3B\",\n device_map=\"auto\",\n torch_dtype=torch.bfloat16,\n attn_implementation=\"flash_attention_2\",\n)\n```\n\n\n## Citation\n\nIf you find our paper and code useful in your research, please consider giving us a star :star: and a citation :pencil: :)\n\n\n\n```BibTeX\n\n@article{Qwen2.5-Omni,\n title={Qwen2.5-Omni Technical Report},\n author={Jin Xu, Zhifang Guo, Jinzheng He, Hangrui Hu, Ting He, Shuai Bai, Keqin Chen, Jialin Wang, Yang Fan, Kai Dang, Bin Zhang, Xiong Wang, Yunfei Chu, Junyang Lin},\n journal={arXiv preprint arXiv:2503.20215},\n year={2025}\n}\n```\n\n<br>\n\n", |
| "metadata": "\"N/A\"", |
| "depth": 0, |
| "children": [ |
| "KE-Team/Ke-Omni-R-3B", |
| "giangndm/qwen2.5-omni-3b-mlx-8bit", |
| "giangndm/qwen2.5-omni-3b-mlx-4bit", |
| "unsloth/Qwen2.5-Omni-3B" |
| ], |
| "children_count": 4, |
| "adapters": [ |
| "FINGU-AI/qwen2.5-omni-3b-lora-sft", |
| "andrewt28/qwen2.5-omni-3b-keyboard-video-text" |
| ], |
| "adapters_count": 2, |
| "quantized": [ |
| "ggml-org/Qwen2.5-Omni-3B-GGUF", |
| "unsloth/Qwen2.5-Omni-3B-GGUF", |
| "mradermacher/Qwen2.5-Omni-3B-GGUF", |
| "mradermacher/Qwen2.5-Omni-3B-i1-GGUF", |
| "zhaoweiguo/Qwen2.5-Omni-3B-Q4_K_M-GGUF" |
| ], |
| "quantized_count": 5, |
| "merges": [], |
| "merges_count": 0, |
| "total_derivatives": 11, |
| "spaces": [], |
| "spaces_count": 0, |
| "parents": [], |
| "base_model": "Qwen/Qwen2.5-Omni-3B", |
| "base_model_relation": "base" |
| }, |
| { |
| "model_id": "KE-Team/Ke-Omni-R-3B", |
| "gated": "unknown", |
| "card": "---\nlicense: apache-2.0\ndatasets:\n- amaai-lab/MusicBench\nlanguage:\n- en\n- zh\nbase_model:\n- Qwen/Qwen2.5-Omni-3B\npipeline_tag: audio-text-to-text\n---\n\n# Ke-Omni-R: Achieving Advanced Audio Reasoning with a Concise 50-Words Think Process\nIf you wish to train or perform inference with the model, please visit the GitHub repository: [https://github.com/shuaijiang/Ke-Omni-R/](https://github.com/shuaijiang/Ke-Omni-R/).\nIf you find this model helpful, please like this model and star our GitHub.\n\nKe-Omni-R is an advanced audio reasoning model built upon [Qwen2.5-Omni-3B](https://github.com/QwenLM/Qwen2.5-Omni). With only 10k post-training samples, Ke-Omni-R has achieved state-of-the-art performance on the MMAU *Test-mini* and *Test* benchmarks. Key insights from its development include:\n\n- **GRPO Algorithm**: The GRPO algorithm significantly enhances the performance of the already strong base model (Qwen2.5-Omni-7B), demonstrating superior generalization even in unseen speech domains.\n- **Think Process**: Incorporating a concise think process (less than 50 words) plays a crucial role in improving reasoning capabilities.\n- **KL Divergence**: Slight improvements were observed during GRPO training by leveraging KL divergence.\n- **Domain Ratio vs. Data Volume**: Domain diversity outweighs data volume. We utilized only 10k samples, with 5k randomly selected from AVQA and another 5k from MusicBench.\n\n## Performance: Accuracies (%)\u2191 on MMAU Test-mini and Test benchmark\n| Model | Method | Sound (Test-mini) | Sound (Test) | Music (Test-mini) | Music (Test) | Speech (Test-mini) | Speech (Test) | Average (Test-mini) | Average (Test) |\n|---------------------------------------|-----------------------|-----------|-------|-----------|-------|-----------|------|------------|-------|\n| - | Human\\* | 86.31 | - | 78.22 | - | 82.17 | - | 82.23 | - |\n| Gemini Pro 2.0 Flash | Direct Inference\\* | 56.46 | 61.73 | 58.68 | 56.53 | 51.65 | 61.53 | 55.60 | 59.93 |\n| Audio Flamingo 2 | Direct Inference\\* | 61.56 | 65.10 | **73.95** |**72.90**| 30.93 | 40.26 | 55.48 | 59.42 |\n| GPT4o + Strong Cap. | Direct Inference\\* | 57.35 | 55.83 | 49.70 | 51.73 | 64.86 | **68.66** | 57.30 | 58.74 |\n| Llama-3-8B-Instruct + Strong Cap. 
| Direct Inference\\* | 50.75 | 49.10 | 48.93 | 48.93 | 55.25 | 62.70 | 52.10 | 53.57 |\n| Qwen2-Audio-7B-Instruct | Direct Inference\\* | 54.95 | 45.90 | 50.98 | 53.26 | 42.04 | 45.90 | 49.20 | 52.50 |\n| SALMONN | Direct Inference\\* | 41.00 | 40.30 | 34.80 | 33.76 | 25.50 | 24.24 | 33.70 | 32.77 |\n| Audio-Reasoner(Qwen2-Audio-7B-Instruct) | \\[1\\] | 60.06 | - | 64.30 | - | 60.70 | - | 61.71 | - |\n| Audio-Cot(Qwen2-Audio-7B-Instruct) | \\[2\\] | 61.86 | - | 56.29 | - | 55.26 | - | 57.80 | - |\n| R1-AQA(Qwen2-Audio-7B-Instruct) | \\[3\\] | 68.77 | 69.76 | 64.37 | 61.40 | 63.66 | 62.70 | 65.60 | 64.36 |\n| Qwen2.5-Omni-3B | \\[4\\] | 70.27 | - | 60.48 | - | 59.16 | - | 63.30 | - |\n| Qwen2.5-Omni-7B | \\[4\\] | 67.87 | - | 69.16 | - | 59.76 | - | 65.60 | - |\n| Ke-Omni-R-3B(Qwen2.5-Omni-3B) | GRPO w/ think (ours) | **72.37** | 71.87 | 65.57 | 59.60 | 64.26 | 64.17 | 67.40 | 65.17 |\n| Ke-Omni-R(Qwen2.5-Omni-7B) | GRPO (ours) | 69.37 | **71.90** | 69.46 | 67.13 | **67.87** | 67.10 | **68.90** | **68.71** |\n\n## Performance: CER/WER (%)\u2193 on ASR benchmarks\n| Model | Method | WenetSpeech test-net | WenetSpeech test-meeting | LibriSpeech test-clean | LibriSpeech test-other |\n| ---|----| ----| ----| ---- | ----|\n| Qwen2.5-Omni-3B | \\[4\\] | 6.3 | 8.1 | 2.2 | 4.5 |\n| Qwen2.5-Omni-7B | \\[4\\] | 5.9 | 7.7 | 1.8 | 3.4 |\n| Ke-Omni-3B | ours | 11.7 | 16.1 | 1.8 | 3.8 |\n| Ke-Omni-7B | ours | 7.5 | 9.8 | **1.6** | **3.1** |\n\nNote:\n\n- \\* The data are sourced from the [MMAU leaderboard](https://sakshi113.github.io/mmau_homepage/#leaderboard).\n \n- \\[1\\] Xie, Zhifei, et al. \"Audio-Reasoner: Improving Reasoning Capability in Large Audio Language Models.\" arXiv preprint arXiv:2503.02318. \n\n- \\[2\\] Ma, Ziyang, et al. \"Audio-CoT: Exploring Chain-of-Thought Reasoning in Large Audio Language Model.\" arXiv preprint arXiv:2501.07246.\n\n- \\[3\\] Li, Gang, et al. \"Reinforcement Learning Outperforms Supervised Fine-Tuning: A Case Study on Audio Question Answering.\" arXiv preprint arXiv:2503.11197.\n\n- \\[4\\] Xu, Jin, et al. \"Qwen2.5-Omni Technical Report.\" arXiv preprint arXiv:2503.20215.\n\n\n## Usage\n\n```python\nfrom transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor\nfrom qwen_omni_utils import process_mm_info\n\n\n# You can directly insert a local file path, a URL, or a base64-encoded audio into the position where you want in the text.\nmessages = [\n # Audio\n ## Local audio path\n [{\"role\": \"system\", \"content\":[{\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}]},\n {\"role\": \"user\", \"content\": [{\"type\": \"audio\", \"audio\": \"/path_to_avqa_wavs/-IBtBeR6B00_000000.wav\"}, {\"type\": \"text\", \"text\": \"Please describe this audio.\"}]}],\n [{\"role\": \"user\", \"content\": [{\"type\": \"audio\", \"audio\": \"/path_to_avqa_wavs/-IBtBeR6B00_000000.wav\"}, {\"type\": \"text\", \"text\": \"What is the main source of sound in the audio? ['aircraft', 'Car', 'Tank', 'Missile'] Output the thinking process (less than 50 words) in <think> </think> and final answer in <answer> </answer>.\"}]}],\n [{\"role\": \"user\", \"content\": [{\"type\": \"audio\", \"audio\": \"/path_to_avqa_wavs/-IBXTktoom8_000030.wav\"}, {\"type\": \"text\", \"text\": \"What animal is the main source of sound in the video? ['dog', 'wasp', 'honeybee', 'dragonfly'] Output the thinking process (less than 50 words) in <think> </think> and final answer in <answer> </answer>.\"}]}],\n]\n\nmodel_path = 'KE-Team/Ke-Omni-R-3B'\nmodel = Qwen2_5OmniForConditionalGeneration.from_pretrained(model_path)\nprocessor = Qwen2_5OmniProcessor.from_pretrained(model_path)\n\ntext = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)\nprint(text)\naudios, images, videos = process_mm_info(messages, use_audio_in_video=False)\ninputs = processor(text=text, images=images, videos=videos, audio=audios, padding=True, return_tensors=\"pt\")\n\ngeneration = model.generate(**inputs, thinker_temperature=0, thinker_do_sample=False)\ngenerated_ids = generation[:, inputs.input_ids.size(1):]\ncompletions = processor.batch_decode(generated_ids, skip_special_tokens=True)\nprint(completions)\n```\n\nThe output should be:\n```\n[\"Well, it sounds like there's a car accelerating. You can hear the engine revving up, and there's a bit of a thump or thud sound too. It might be the car hitting something or just a part of the acceleration process. It gives off a sense of speed and power. What do you think about it? Do you have any other audio samples you want to talk about?\", '<think>The audio features a vehicle accelerating and revving, which is characteristic of a car. The sound is consistent with a car engine, not an aircraft, tank, or missile.</think>\\n<answer>Car</answer>', \"<think>The main source of sound is a buzzing insect, which is consistent with the size and sound of a honeybee. The other options don't match the sound or context.</think>\\n<answer>honeybee</answer>\"]\n```
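\n\nSince each completion wraps its reasoning in `<think>` tags and the final choice in `<answer>` tags, a minimal parsing sketch (our illustration, not part of the official repository) could look like this:\n\n```python\nimport re\n\ndef parse_completion(completion):\n    \"\"\"Split a completion into (think, answer); fields are None when a tag is absent.\"\"\"\n    think = re.search(r\"<think>(.*?)</think>\", completion, re.DOTALL)\n    answer = re.search(r\"<answer>(.*?)</answer>\", completion, re.DOTALL)\n    return (\n        think.group(1).strip() if think else None,\n        answer.group(1).strip() if answer else None,\n    )\n\nfor completion in completions:\n    print(parse_completion(completion))\n```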
\n\n## Acknowledgements\nWe express our gratitude to the following projects and teams for their contributions:\n- **R1-AQA**: Referenced the GRPO-based training implementation from [R1-AQA](https://github.com/xiaomi-research/r1-aqa).\n- **Qwen Team**: Special thanks to the [Qwen2.5-Omni](https://github.com/QwenLM/Qwen2.5-Omni) model for providing a robust foundation.\n- **Datasets**: \n - [AVQA](https://mn.cs.tsinghua.edu.cn/avqa/)\n - [MusicBench](https://amaai-lab.github.io/mustango/)\n - [MMAU](https://github.com/Sakshi113/MMAU/)\n\n\n## Citation\n```bib\n@misc{zhao2025keomnir,\n author = {Zhao, Shuaijiang and Guo, Tingwei and Wen, Cheng and Xiang, Bajian and Zou, Wei},\n title = {Ke-Omni-R: Achieving Advanced Audio Reasoning with a Concise 50-Words Think Process},\n year = {2025},\n publisher = {GitHub},\n journal = {GitHub Repository},\n howpublished = {\\url{https://github.com/shuaijiang/Ke-Omni-R}},\n}\n```", |
| "metadata": "\"N/A\"", |
| "depth": 1, |
| "children": [], |
| "children_count": 0, |
| "adapters": [], |
| "adapters_count": 0, |
| "quantized": [], |
| "quantized_count": 0, |
| "merges": [], |
| "merges_count": 0, |
| "total_derivatives": 0, |
| "spaces": [], |
| "spaces_count": 0, |
| "parents": [ |
| "Qwen/Qwen2.5-Omni-3B" |
| ], |
| "base_model": null, |
| "base_model_relation": null |
| }, |
| { |
| "model_id": "giangndm/qwen2.5-omni-3b-mlx-8bit", |
| "gated": "False", |
| "card": "---\nlicense: other\nlicense_name: qwen-research\nlicense_link: LICENSE\nlanguage:\n- en\ntags:\n- multimodal\n- mlx\nlibrary_name: mlx\npipeline_tag: text-generation\nbase_model: Qwen/Qwen2.5-Omni-3B\n---\n\n# giangndm/qwen2.5-omni-3b-mlx-8bit\n\nThis model [giangndm/qwen2.5-omni-3b-mlx-8bit](https://huggingface.co/giangndm/qwen2.5-omni-3b-mlx-8bit) was\nconverted to MLX format from [Qwen/Qwen2.5-Omni-3B](https://huggingface.co/Qwen/Qwen2.5-Omni-3B)\nusing mlx-lm version **0.24.0**.\n\n## Use with mlx (https://github.com/giangndm/mlx-lm-omni)\n\n```bash\nuv add mlx-lm-omni \n# or\nuv add https://github.com/giangndm/mlx-lm-omni.git\n```\n\n```python\nfrom mlx_lm_omni import load, generate\nimport librosa\nfrom io import BytesIO\nfrom urllib.request import urlopen\n\nmodel, tokenizer = load(\"giangndm/qwen2.5-omni-3b-mlx-8bit\")\n\naudio_path = \"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/1272-128104-0000.flac\"\naudio = librosa.load(BytesIO(urlopen(audio_path).read()), sr=16000)[0]\n\nmessages = [\n {\"role\": \"system\", \"content\": \"You are a speech recognition model.\"},\n {\"role\": \"user\", \"content\": \"Transcribe the English audio into text without any punctuation marks.\", \"audio\": audio},\n]\nprompt = tokenizer.apply_chat_template(\n messages, add_generation_prompt=True\n)\n\ntext = generate(model, tokenizer, prompt=prompt, verbose=True)\n```\n\n", |
| "metadata": "\"N/A\"", |
| "depth": 1, |
| "children": [], |
| "children_count": 0, |
| "adapters": [], |
| "adapters_count": 0, |
| "quantized": [], |
| "quantized_count": 0, |
| "merges": [], |
| "merges_count": 0, |
| "total_derivatives": 0, |
| "spaces": [], |
| "spaces_count": 0, |
| "parents": [ |
| "Qwen/Qwen2.5-Omni-3B" |
| ], |
| "base_model": "giangndm/qwen2.5-omni-3b-mlx-8bit", |
| "base_model_relation": "base" |
| }, |
| { |
| "model_id": "giangndm/qwen2.5-omni-3b-mlx-4bit", |
| "gated": "False", |
| "card": "---\nlicense: other\nlicense_name: qwen-research\nlicense_link: LICENSE\nlanguage:\n- en\ntags:\n- multimodal\n- mlx\nlibrary_name: mlx\npipeline_tag: text-generation\nbase_model: Qwen/Qwen2.5-Omni-3B\n---\n\n# giangndm/qwen2.5-omni-3b-mlx-4bit\n\nThis model [giangndm/qwen2.5-omni-3b-mlx-4bit](https://huggingface.co/giangndm/qwen2.5-omni-3b-mlx-4bit) was\nconverted to MLX format from [Qwen/Qwen2.5-Omni-3B](https://huggingface.co/Qwen/Qwen2.5-Omni-3B)\nusing mlx-lm version **0.24.0**.\n\n## Use with mlx\n\n```bash\npip install mlx-lm\n```\n\n```python\nfrom mlx_lm import load, generate\n\nmodel, tokenizer = load(\"giangndm/qwen2.5-omni-3b-mlx-4bit\")\n\nprompt = \"hello\"\n\nif tokenizer.chat_template is not None:\n messages = [{\"role\": \"user\", \"content\": prompt}]\n prompt = tokenizer.apply_chat_template(\n messages, add_generation_prompt=True\n )\n\nresponse = generate(model, tokenizer, prompt=prompt, verbose=True)\n```\n", |
| "metadata": "\"N/A\"", |
| "depth": 1, |
| "children": [], |
| "children_count": 0, |
| "adapters": [], |
| "adapters_count": 0, |
| "quantized": [], |
| "quantized_count": 0, |
| "merges": [], |
| "merges_count": 0, |
| "total_derivatives": 0, |
| "spaces": [], |
| "spaces_count": 0, |
| "parents": [ |
| "Qwen/Qwen2.5-Omni-3B" |
| ], |
| "base_model": "giangndm/qwen2.5-omni-3b-mlx-4bit", |
| "base_model_relation": "base" |
| }, |
| { |
| "model_id": "unsloth/Qwen2.5-Omni-3B", |
| "gated": "unknown", |
| "card": "---\nbase_model:\n- Qwen/Qwen2.5-Omni-3B\nlicense: other\nlicense_name: qwen-research\nlicense_link: LICENSE\nlanguage:\n- en\ntags:\n- multimodal\n- unsloth\nlibrary_name: transformers\npipeline_tag: any-to-any\n---\n<div>\n<p style=\"margin-top: 0;margin-bottom: 0;\">\n <em><a href=\"https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf\">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>\n </p>\n <div style=\"display: flex; gap: 5px; align-items: center; \">\n <a href=\"https://github.com/unslothai/unsloth/\">\n <img src=\"https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png\" width=\"133\">\n </a>\n <a href=\"https://discord.gg/unsloth\">\n <img src=\"https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png\" width=\"173\">\n </a>\n <a href=\"https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\">\n <img src=\"https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png\" width=\"143\">\n </a>\n </div>\n</div>\n\n\n# Qwen2.5-Omni\n<a href=\"https://chat.qwen.ai/\" target=\"_blank\" style=\"margin: 2px;\">\n <img alt=\"Chat\" src=\"https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5\" style=\"display: inline-block; vertical-align: middle;\"/>\n</a>\n\n\n## Overview \n### Introduction\nQwen2.5-Omni is an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. \n\n<p align=\"center\">\n <img src=\"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/qwen_omni.png\" width=\"80%\"/>\n<p>\n\n### Key Features\n\n* **Omni and Novel Architecture**: We propose the Thinker-Talker architecture, an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. We also propose a novel position embedding, named TMRoPE (Time-aligned Multimodal RoPE), to synchronize the timestamps of video inputs with audio.\n\n* **Real-Time Voice and Video Chat**: An architecture designed for fully real-time interactions, supporting chunked input and immediate output.\n\n* **Natural and Robust Speech Generation**: Surpasses many existing streaming and non-streaming alternatives, demonstrating superior robustness and naturalness in speech generation.\n\n* **Strong Performance Across Modalities**: Exhibits exceptional performance across all modalities when benchmarked against similarly sized single-modality models. Qwen2.5-Omni outperforms the similarly sized Qwen2-Audio in audio capabilities and achieves performance comparable to Qwen2.5-VL-7B.\n\n* **Excellent End-to-End Speech Instruction Following**: Qwen2.5-Omni shows performance in end-to-end speech instruction following that rivals its effectiveness with text inputs, as evidenced by benchmarks such as MMLU and GSM8K.\n\n### Model Architecture\n\n<p align=\"center\">\n <img src=\"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/overview.png\" width=\"80%\"/>\n<p>\n\n### Performance\n\nWe conducted a comprehensive evaluation of Qwen2.5-Omni, which demonstrates strong performance across all modalities when compared with similarly sized single-modality models such as Qwen2.5-VL-7B and Qwen2-Audio, as well as closed-source models such as Gemini-1.5-Pro. 
In tasks requiring the integration of multiple modalities, such as OmniBench, Qwen2.5-Omni achieves state-of-the-art performance. Furthermore, in single-modality tasks, it excels in areas including speech recognition (Common Voice), translation (CoVoST2), audio understanding (MMAU), image reasoning (MMMU, MMStar), video understanding (MVBench), and speech generation (Seed-tts-eval and subjective naturalness).\n\n<p align=\"center\">\n <img src=\"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/bar.png\" width=\"80%\"/>\n<p>\n\n<details>\n<summary>Multimodality -> Text</summary>\n\n<table class=\"tg\"><thead>\n <tr>\n <th class=\"tg-0lax\">Datasets</th>\n <th class=\"tg-0lax\">Model</th>\n <th class=\"tg-0lax\">Performance</th>\n </tr></thead>\n<tbody>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"10\">OmniBench<br>Speech | Sound Event | Music | Avg</td>\n <td class=\"tg-0lax\">Gemini-1.5-Pro</td>\n <td class=\"tg-0lax\">42.67%|42.26%|46.23%|42.91%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MIO-Instruct</td>\n <td class=\"tg-0lax\">36.96%|33.58%|11.32%|33.80%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">AnyGPT (7B)</td>\n <td class=\"tg-0lax\">17.77%|20.75%|13.21%|18.04%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">video-SALMONN</td>\n <td class=\"tg-0lax\">34.11%|31.70%|<strong>56.60%</strong>|35.64%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">UnifiedIO2-xlarge</td>\n <td class=\"tg-0lax\">39.56%|36.98%|29.25%|38.00%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">UnifiedIO2-xxlarge</td>\n <td class=\"tg-0lax\">34.24%|36.98%|24.53%|33.98%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MiniCPM-o</td>\n <td class=\"tg-0lax\">-|-|-|40.50%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Baichuan-Omni-1.5</td>\n <td class=\"tg-0lax\">-|-|-|42.90%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">52.14%|52.08%|52.83%|52.19%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>55.25%</strong>|<strong>60.00%</strong>|52.83%|<strong>56.13%</strong></td>\n </tr>\n</tbody></table>\n</details>\n\n\n<details>\n<summary>Audio -> Text</summary>\n\n\n<table class=\"tg\"><thead>\n <tr>\n <th class=\"tg-0lax\">Datasets</th>\n <th class=\"tg-0lax\">Model</th>\n <th class=\"tg-0lax\">Performance</th>\n </tr></thead>\n<tbody>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">ASR</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"12\">Librispeech<br>dev-clean | dev other | test-clean | test-other</td>\n <td class=\"tg-0lax\">SALMONN</td>\n <td class=\"tg-0lax\">-|-|2.1|4.9</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">SpeechVerse</td>\n <td class=\"tg-0lax\">-|-|2.1|4.4</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Whisper-large-v3</td>\n <td class=\"tg-0lax\">-|-|1.8|3.6</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Llama-3-8B</td>\n <td class=\"tg-0lax\">-|-|-|3.4</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Llama-3-70B</td>\n <td class=\"tg-0lax\">-|-|-|3.1</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Seed-ASR-Multilingual</td>\n <td class=\"tg-0lax\">-|-|<strong>1.6</strong>|<strong>2.8</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MiniCPM-o</td>\n <td class=\"tg-0lax\">-|-|1.7|-</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MinMo</td>\n <td class=\"tg-0lax\">-|-|1.7|3.9</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen-Audio</td>\n <td class=\"tg-0lax\">1.8|4.0|2.0|4.2</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\"><strong>1.3</strong>|<strong>3.4</strong>|<strong>1.6</strong>|3.6</td>\n </tr>\n 
<tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">2.0|4.1|2.2|4.5</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\">1.6|3.5|1.8|3.4</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"5\">Common Voice 15<br>en | zh | yue | fr</td>\n <td class=\"tg-0lax\">Whisper-large-v3</td>\n <td class=\"tg-0lax\">9.3|12.8|10.9|10.8</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MinMo</td>\n <td class=\"tg-0lax\">7.9|6.3|6.4|8.5</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\">8.6|6.9|<strong>5.9</strong>|9.6</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">9.1|6.0|11.6|9.6</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>7.6</strong>|<strong>5.2</strong>|7.3|<strong>7.5</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"8\">Fleurs<br>zh | en</td>\n <td class=\"tg-0lax\">Whisper-large-v3</td>\n <td class=\"tg-0lax\">7.7|4.1</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Seed-ASR-Multilingual</td>\n <td class=\"tg-0lax\">-|<strong>3.4</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Megrez-3B-Omni</td>\n <td class=\"tg-0lax\">10.8|-</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MiniCPM-o</td>\n <td class=\"tg-0lax\">4.4|-</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MinMo</td>\n <td class=\"tg-0lax\">3.0|3.8</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\">7.5|-</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">3.2|5.4</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>3.0</strong>|4.1</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"6\">Wenetspeech<br>test-net | test-meeting</td>\n <td class=\"tg-0lax\">Seed-ASR-Chinese</td>\n <td class=\"tg-0lax\"><strong>4.7|5.7</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Megrez-3B-Omni</td>\n <td class=\"tg-0lax\">-|16.4</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MiniCPM-o</td>\n <td class=\"tg-0lax\">6.9|-</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MinMo</td>\n <td class=\"tg-0lax\">6.8|7.4</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">6.3|8.1</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\">5.9|7.7</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"4\">Voxpopuli-V1.0-en</td>\n <td class=\"tg-0lax\">Llama-3-8B</td>\n <td class=\"tg-0lax\">6.2</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Llama-3-70B</td>\n <td class=\"tg-0lax\"><strong>5.7</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">6.6</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\">5.8</td>\n </tr>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">S2TT</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"9\">CoVoST2<br>en-de | de-en | en-zh | zh-en</td>\n <td class=\"tg-0lax\">SALMONN</td>\n <td class=\"tg-0lax\">18.6|-|33.1|-</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">SpeechLLaMA</td>\n <td class=\"tg-0lax\">-|27.1|-|12.3</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">BLSP</td>\n <td class=\"tg-0lax\">14.1|-|-|-</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MiniCPM-o</td>\n <td class=\"tg-0lax\">-|-|<strong>48.2</strong>|27.2</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MinMo</td>\n <td class=\"tg-0lax\">-|<strong>39.9</strong>|46.7|26.0</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen-Audio</td>\n <td 
class=\"tg-0lax\">25.1|33.9|41.5|15.7</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\">29.9|35.2|45.2|24.4</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">28.3|38.1|41.4|26.6</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>30.2</strong>|37.7|41.4|<strong>29.4</strong></td>\n </tr>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">SER</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"6\">Meld</td>\n <td class=\"tg-0lax\">WavLM-large</td>\n <td class=\"tg-0lax\">0.542</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MiniCPM-o</td>\n <td class=\"tg-0lax\">0.524</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen-Audio</td>\n <td class=\"tg-0lax\">0.557</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\">0.553</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">0.558</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>0.570</strong></td>\n </tr>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">VSC</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"6\">VocalSound</td>\n <td class=\"tg-0lax\">CLAP</td>\n <td class=\"tg-0lax\">0.495</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Pengi</td>\n <td class=\"tg-0lax\">0.604</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen-Audio</td>\n <td class=\"tg-0lax\">0.929</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\"><strong>0.939</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">0.936</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>0.939</strong></td>\n </tr>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">Music</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"3\">GiantSteps Tempo</td>\n <td class=\"tg-0lax\">Llark-7B</td>\n <td class=\"tg-0lax\">0.86</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\"><strong>0.88</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>0.88</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"3\">MusicCaps</td>\n <td class=\"tg-0lax\">LP-MusicCaps</td>\n <td class=\"tg-0lax\">0.291|0.149|0.089|<strong>0.061</strong>|0.129|0.130</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">0.325|<strong>0.163</strong>|<strong>0.093</strong>|0.057|<strong>0.132</strong>|<strong>0.229</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>0.328</strong>|0.162|0.090|0.055|0.127|0.225</td>\n </tr>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">Audio Reasoning</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"4\">MMAU<br>Sound | Music | Speech | Avg</td>\n <td class=\"tg-0lax\">Gemini-Pro-V1.5</td>\n <td class=\"tg-0lax\">56.75|49.40|58.55|54.90</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\">54.95|50.98|42.04|49.20</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\"><strong>70.27</strong>|60.48|59.16|63.30</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\">67.87|<strong>69.16|59.76|65.60</strong></td>\n </tr>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">Voice Chatting</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"9\">VoiceBench<br>AlpacaEval | CommonEval | SD-QA | MMSU</td>\n <td 
class=\"tg-0lax\">Ultravox-v0.4.1-LLaMA-3.1-8B</td>\n <td class=\"tg-0lax\"><strong>4.55</strong>|3.90|53.35|47.17</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MERaLiON</td>\n <td class=\"tg-0lax\">4.50|3.77|55.06|34.95</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Megrez-3B-Omni</td>\n <td class=\"tg-0lax\">3.50|2.95|25.95|27.03</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Lyra-Base</td>\n <td class=\"tg-0lax\">3.85|3.50|38.25|49.74</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MiniCPM-o</td>\n <td class=\"tg-0lax\">4.42|<strong>4.15</strong>|50.72|54.78</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Baichuan-Omni-1.5</td>\n <td class=\"tg-0lax\">4.50|4.05|43.40|57.25</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\">3.74|3.43|35.71|35.72</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">4.32|4.00|49.37|50.23</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\">4.49|3.93|<strong>55.71</strong>|<strong>61.32</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"9\">VoiceBench<br>OpenBookQA | IFEval | AdvBench | Avg</td>\n <td class=\"tg-0lax\">Ultravox-v0.4.1-LLaMA-3.1-8B</td>\n <td class=\"tg-0lax\">65.27|<strong>66.88</strong>|98.46|71.45</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MERaLiON</td>\n <td class=\"tg-0lax\">27.23|62.93|94.81|62.91</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Megrez-3B-Omni</td>\n <td class=\"tg-0lax\">28.35|25.71|87.69|46.25</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Lyra-Base</td>\n <td class=\"tg-0lax\">72.75|36.28|59.62|57.66</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MiniCPM-o</td>\n <td class=\"tg-0lax\">78.02|49.25|97.69|71.69</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Baichuan-Omni-1.5</td>\n <td class=\"tg-0lax\">74.51|54.54|97.31|71.14</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\">49.45|26.33|96.73|55.35</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">74.73|42.10|98.85|68.81</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>81.10</strong>|52.87|<strong>99.42</strong>|<strong>74.12</strong></td>\n </tr>\n</tbody></table>\n</details>\n\n<details>\n<summary>Image -> Text</summary>\n\n| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Other Best | Qwen2.5-VL-7B | GPT-4o-mini | \n|--------------------------------|--------------|------------|------------|---------------|-------------|\n| MMMU<sub>val</sub> | 59.2 | 53.1 | 53.9 | 58.6 | **60.0** | \n| MMMU-Pro<sub>overall</sub> | 36.6 | 29.7 | - | **38.3** | 37.6 | \n| MathVista<sub>testmini</sub> | 67.9 | 59.4 | **71.9** | 68.2 | 52.5 | \n| MathVision<sub>full</sub> | 25.0 | 20.8 | 23.1 | **25.1** | - | \n| MMBench-V1.1-EN<sub>test</sub> | 81.8 | 77.8 | 80.5 | **82.6** | 76.0 | \n| MMVet<sub>turbo</sub> | 66.8 | 62.1 | **67.5** | 67.1 | 66.9 | \n| MMStar | **64.0** | 55.7 | **64.0** | 63.9 | 54.8 | \n| MME<sub>sum</sub> | 2340 | 2117 | **2372** | 2347 | 2003 | \n| MuirBench | 59.2 | 48.0 | - | **59.2** | - | \n| CRPE<sub>relation</sub> | **76.5** | 73.7 | - | 76.4 | - | \n| RealWorldQA<sub>avg</sub> | 70.3 | 62.6 | **71.9** | 68.5 | - | \n| MME-RealWorld<sub>en</sub> | **61.6** | 55.6 | - | 57.4 | - | \n| MM-MT-Bench | 6.0 | 5.0 | - | **6.3** | - | \n| AI2D | 83.2 | 79.5 | **85.8** | 83.9 | - | \n| TextVQA<sub>val</sub> | 84.4 | 79.8 | 83.2 | **84.9** | - | \n| DocVQA<sub>test</sub> | 95.2 | 93.3 | 93.5 | **95.7** | - | \n| ChartQA<sub>test Avg</sub> | 85.3 | 82.8 | 84.9 | 
**87.3** | - | \n| OCRBench_V2<sub>en</sub> | **57.8** | 51.7 | - | 56.3 | - | \n\n\n| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Qwen2.5-VL-7B | Grounding DINO | Gemini 1.5 Pro | \n|--------------------------|--------------|---------------|---------------|----------------|----------------|\n| Refcoco<sub>val</sub> | 90.5 | 88.7 | 90.0 | **90.6** | 73.2 | \n| Refcoco<sub>textA</sub> | **93.5** | 91.8 | 92.5 | 93.2 | 72.9 | \n| Refcoco<sub>textB</sub> | 86.6 | 84.0 | 85.4 | **88.2** | 74.6 | \n| Refcoco+<sub>val</sub> | 85.4 | 81.1 | 84.2 | **88.2** | 62.5 | \n| Refcoco+<sub>textA</sub> | **91.0** | 87.5 | 89.1 | 89.0 | 63.9 | \n| Refcoco+<sub>textB</sub> | **79.3** | 73.2 | 76.9 | 75.9 | 65.0 | \n| Refcocog+<sub>val</sub> | **87.4** | 85.0 | 87.2 | 86.1 | 75.2 | \n| Refcocog+<sub>test</sub> | **87.9** | 85.1 | 87.2 | 87.0 | 76.2 | \n| ODinW | 42.4 | 39.2 | 37.3 | **55.0** | 36.7 | \n| PointGrounding | 66.5 | 46.2 | **67.3** | - | - | \n</details>\n\n\n<details>\n<summary>Video(without audio) -> Text</summary>\n\n| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Other Best | Qwen2.5-VL-7B | GPT-4o-mini | \n|-----------------------------|--------------|------------|------------|---------------|-------------|\n| Video-MME<sub>w/o sub</sub> | 64.3 | 62.0 | 63.9 | **65.1** | 64.8 | \n| Video-MME<sub>w sub</sub> | **72.4** | 68.6 | 67.9 | 71.6 | - | \n| MVBench | **70.3** | 68.7 | 67.2 | 69.6 | - | \n| EgoSchema<sub>test</sub> | **68.6** | 61.4 | 63.2 | 65.0 | - | \n</details>\n\n<details>\n<summary>Zero-shot Speech Generation</summary>\n\n\n<table class=\"tg\"><thead>\n <tr>\n <th class=\"tg-0lax\">Datasets</th>\n <th class=\"tg-0lax\">Model</th>\n <th class=\"tg-0lax\">Performance</th>\n </tr></thead>\n<tbody>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">Content Consistency</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"11\">SEED<br>test-zh | test-en | test-hard </td>\n <td class=\"tg-0lax\">Seed-TTS_ICL</td>\n <td class=\"tg-0lax\">1.11 | 2.24 | 7.58</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Seed-TTS_RL</td>\n <td class=\"tg-0lax\"><strong>1.00</strong> | 1.94 | <strong>6.42</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MaskGCT</td>\n <td class=\"tg-0lax\">2.27 | 2.62 | 10.27</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">E2_TTS</td>\n <td class=\"tg-0lax\">1.97 | 2.19 | -</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">F5-TTS</td>\n <td class=\"tg-0lax\">1.56 | <strong>1.83</strong> | 8.67</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">CosyVoice 2</td>\n <td class=\"tg-0lax\">1.45 | 2.57 | 6.83</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">CosyVoice 2-S</td>\n <td class=\"tg-0lax\">1.45 | 2.38 | 8.08</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B_ICL</td>\n <td class=\"tg-0lax\">1.95 | 2.87 | 9.92</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B_RL</td>\n <td class=\"tg-0lax\">1.58 | 2.51 | 7.86</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B_ICL</td>\n <td class=\"tg-0lax\">1.70 | 2.72 | 7.97</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B_RL</td>\n <td class=\"tg-0lax\">1.42 | 2.32 | 6.54</td>\n </tr>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">Speaker Similarity</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"11\">SEED<br>test-zh | test-en | test-hard </td>\n <td class=\"tg-0lax\">Seed-TTS_ICL</td>\n <td class=\"tg-0lax\">0.796 | 0.762 | 0.776</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Seed-TTS_RL</td>\n <td class=\"tg-0lax\"><strong>0.801</strong> | <strong>0.766</strong> | <strong>0.782</strong></td>\n </tr>\n <tr>\n <td 
class=\"tg-0lax\">MaskGCT</td>\n <td class=\"tg-0lax\">0.774 | 0.714 | 0.748</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">E2_TTS</td>\n <td class=\"tg-0lax\">0.730 | 0.710 | -</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">F5-TTS</td>\n <td class=\"tg-0lax\">0.741 | 0.647 | 0.713</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">CosyVoice 2</td>\n <td class=\"tg-0lax\">0.748 | 0.652 | 0.724</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">CosyVoice 2-S</td>\n <td class=\"tg-0lax\">0.753 | 0.654 | 0.732</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B_ICL</td>\n <td class=\"tg-0lax\">0.741 | 0.635 | 0.748</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B_RL</td>\n <td class=\"tg-0lax\">0.744 | 0.635 | 0.746</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B_ICL</td>\n <td class=\"tg-0lax\">0.752 | 0.632 | 0.747</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B_RL</td>\n <td class=\"tg-0lax\">0.754 | 0.641 | 0.752</td>\n </tr>\n</tbody></table>\n</details>\n\n<details>\n<summary>Text -> Text</summary>\n\n| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Qwen2.5-7B | Qwen2.5-3B | Qwen2-7B | Llama3.1-8B | Gemma2-9B | \n|-----------------------------------|-----------|------------|------------|------------|------------|-------------|-----------|\n| MMLU-Pro | 47.0 | 40.4 | **56.3** | 43.7 | 44.1 | 48.3 | 52.1 | \n| MMLU-redux | 71.0 | 60.9 | **75.4** | 64.4 | 67.3 | 67.2 | 72.8 | \n| LiveBench<sub>0831</sub> | 29.6 | 22.3 | **35.9** | 26.8 | 29.2 | 26.7 | 30.6 | \n| GPQA | 30.8 | 34.3 | **36.4** | 30.3 | 34.3 | 32.8 | 32.8 | \n| MATH | 71.5 | 63.6 | **75.5** | 65.9 | 52.9 | 51.9 | 44.3 | \n| GSM8K | 88.7 | 82.6 | **91.6** | 86.7 | 85.7 | 84.5 | 76.7 | \n| HumanEval | 78.7 | 70.7 | **84.8** |\t74.4 | 79.9 | 72.6 | 68.9 | \n| MBPP | 73.2 | 70.4 | **79.2** | 72.7 | 67.2 | 69.6 | 74.9 | \n| MultiPL-E | 65.8 | 57.6 | **70.4** | 60.2 | 59.1 | 50.7 | 53.4 | \n| LiveCodeBench<sub>2305-2409</sub> | 24.6 | 16.5 | **28.7** | 19.9 | 23.9 | 8.3 | 18.9 | \n</details>\n\n## Quickstart\n\nBelow, we provide simple examples to show how to use Qwen2.5-Omni with \ud83e\udd17 Transformers. The codes of Qwen2.5-Omni has been in the latest Hugging face transformers and we advise you to build from source with command:\n```\npip uninstall transformers\npip install git+https://github.com/huggingface/transformers@v4.51.3-Qwen2.5-Omni-preview\npip install accelerate\n```\nor you might encounter the following error:\n```\nKeyError: 'qwen2_5_omni'\n```\n\n\nWe offer a toolkit to help you handle various types of audio and visual input more conveniently, as if you were using an API. This includes base64, URLs, and interleaved audio, images and videos. You can install it using the following command and make sure your system has `ffmpeg` installed:\n\n```bash\n# It's highly recommended to use `[decord]` feature for faster video loading.\npip install qwen-omni-utils[decord] -U\n```\n\nIf you are not using Linux, you might not be able to install `decord` from PyPI. In that case, you can use `pip install qwen-omni-utils -U` which will fall back to using torchvision for video processing. 
### \ud83e\udd17 Transformers Usage\n\nHere is a code snippet showing how to use the chat model with `transformers` and `qwen_omni_utils`:\n\n```python\nimport soundfile as sf\n\nfrom transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor\nfrom qwen_omni_utils import process_mm_info\n\n# default: Load the model on the available device(s)\nmodel = Qwen2_5OmniForConditionalGeneration.from_pretrained(\"Qwen/Qwen2.5-Omni-3B\", torch_dtype=\"auto\", device_map=\"auto\")\n\n# We recommend enabling flash_attention_2 for better acceleration and memory saving.\n# model = Qwen2_5OmniForConditionalGeneration.from_pretrained(\n#     \"Qwen/Qwen2.5-Omni-3B\",\n#     torch_dtype=\"auto\",\n#     device_map=\"auto\",\n#     attn_implementation=\"flash_attention_2\",\n# )\n\nprocessor = Qwen2_5OmniProcessor.from_pretrained(\"Qwen/Qwen2.5-Omni-3B\")\n\nconversation = [\n    {\n        \"role\": \"system\",\n        \"content\": [\n            {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n        ],\n    },\n    {\n        \"role\": \"user\",\n        \"content\": [\n            {\"type\": \"video\", \"video\": \"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/draw.mp4\"},\n        ],\n    },\n]\n\n# Set whether to use the audio track of the video\nUSE_AUDIO_IN_VIDEO = True\n\n# Preparation for inference\ntext = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)\naudios, images, videos = process_mm_info(conversation, use_audio_in_video=USE_AUDIO_IN_VIDEO)\ninputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors=\"pt\", padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO)\ninputs = inputs.to(model.device).to(model.dtype)\n\n# Inference: Generation of the output text and audio\ntext_ids, audio = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO)\n\ntext = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)\nprint(text)\nsf.write(\n    \"output.wav\",\n    audio.reshape(-1).detach().cpu().numpy(),\n    samplerate=24000,\n)\n```\n\n<details>\n<summary>Minimum GPU memory requirements</summary>\n\n| Model | Precision | 15 s Video | 30 s Video | 60 s Video |\n|--------------|-----------| ------------- | ------------- | ------------------ |\n| Qwen2.5-Omni-3B | FP32 | 89.10 GB | Not Recommended | Not Recommended |\n| Qwen2.5-Omni-3B | BF16 | 18.38 GB | 22.43 GB | 28.22 GB |\n| Qwen2.5-Omni-7B | FP32 | 93.56 GB | Not Recommended | Not Recommended |\n| Qwen2.5-Omni-7B | BF16 | 31.11 GB | 41.85 GB | 60.19 GB |\n\nNote: The table above presents the theoretical minimum memory requirements for inference with `transformers`; the `BF16` figures were tested with `attn_implementation=\"flash_attention_2\"`. In practice, the actual memory usage is typically at least 1.2 times higher. For more information, see the model size estimator [here](https://huggingface.co/docs/accelerate/main/en/usage_guides/model_size_estimator).\n</details> \n\n
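To produce a similar estimate yourself, the `accelerate` CLI includes a model size estimator (a rough sketch; it requires the preview `transformers` build above so the `qwen2_5_omni` architecture resolves, and it reports weight sizes only, not activation memory):\n\n```bash\n# Estimate the memory footprint of the checkpoint's weights per dtype.\naccelerate estimate-memory Qwen/Qwen2.5-Omni-3B --library_name transformers\n```\n\n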
<details>\n<summary>Video URL resource usage</summary>\n\nVideo URL compatibility largely depends on the third-party library version. Change the backend by setting `FORCE_QWENVL_VIDEO_READER=torchvision` or `FORCE_QWENVL_VIDEO_READER=decord` if you prefer not to use the default. The details are in the table below.\n\n| Backend | HTTP | HTTPS |\n|-------------|------|-------|\n| torchvision >= 0.19.0 | \u2705 | \u2705 |\n| torchvision < 0.19.0 | \u274c | \u274c |\n| decord | \u2705 | \u274c |\n</details>\n\n<details>\n<summary>Batch inference</summary>\n\nThe model can batch mixed samples of various types, such as text, images, audio, and videos, as input when `return_audio=False` is set. Here is an example.\n\n```python\n# Sample messages for batch inference, reusing the `model` and `processor`\n# initialized in the Transformers Usage example above\n\n# Conversation with video only\nconversation1 = [\n    {\n        \"role\": \"system\",\n        \"content\": [\n            {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n        ],\n    },\n    {\n        \"role\": \"user\",\n        \"content\": [\n            {\"type\": \"video\", \"video\": \"/path/to/video.mp4\"},\n        ]\n    }\n]\n\n# Conversation with audio only\nconversation2 = [\n    {\n        \"role\": \"system\",\n        \"content\": [\n            {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n        ],\n    },\n    {\n        \"role\": \"user\",\n        \"content\": [\n            {\"type\": \"audio\", \"audio\": \"/path/to/audio.wav\"},\n        ]\n    }\n]\n\n# Conversation with pure text\nconversation3 = [\n    {\n        \"role\": \"system\",\n        \"content\": [\n            {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n        ],\n    },\n    {\n        \"role\": \"user\",\n        \"content\": \"Who are you?\"\n    }\n]\n\n\n# Conversation with mixed media\nconversation4 = [\n    {\n        \"role\": \"system\",\n        \"content\": [\n            {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n        ],\n    },\n    {\n        \"role\": \"user\",\n        \"content\": [\n            {\"type\": \"image\", \"image\": \"/path/to/image.jpg\"},\n            {\"type\": \"video\", \"video\": \"/path/to/video.mp4\"},\n            {\"type\": \"audio\", \"audio\": \"/path/to/audio.wav\"},\n            {\"type\": \"text\", \"text\": \"What elements can you see and hear in these media?\"},\n        ],\n    }\n]\n\n# Combine messages for batch processing\nconversations = [conversation1, conversation2, conversation3, conversation4]\n\n# Set whether to use the audio track of the videos\nUSE_AUDIO_IN_VIDEO = True\n\n# Preparation for batch inference\ntext = processor.apply_chat_template(conversations, add_generation_prompt=True, tokenize=False)\naudios, images, videos = process_mm_info(conversations, use_audio_in_video=USE_AUDIO_IN_VIDEO)\n\ninputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors=\"pt\", padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO)\ninputs = inputs.to(model.device).to(model.dtype)\n\n# Batch Inference\ntext_ids = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO, return_audio=False)\ntext = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)\nprint(text)\n```\n</details>\n\n### Usage Tips\n\n#### Prompt for audio output\nIf users need audio output, the system prompt must be set to \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"; otherwise, audio output may not work as expected.\n
```\n{\n    \"role\": \"system\",\n    \"content\": [\n        {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n    ],\n}\n```\n#### Use audio in video\nDuring multimodal interaction, the videos provided by users are often accompanied by audio (such as questions about the content of the video, or sounds produced by events in the video). This information helps the model deliver a better interactive experience, so we provide the following options for users to decide whether to use the audio in a video.\n```python\n# First place: in data preprocessing\naudios, images, videos = process_mm_info(conversations, use_audio_in_video=True)\n```\n```python\n# Second place: in the model processor\ninputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors=\"pt\",\n                   padding=True, use_audio_in_video=True)\n```\n```python\n# Third place: in model inference\ntext_ids, audio = model.generate(**inputs, use_audio_in_video=True)\n```\nNote that during a multi-round conversation, the `use_audio_in_video` parameter must be set to the same value in all of these places; otherwise, unexpected results will occur.\n\n#### Use audio output or not\n\nThe model supports both text and audio outputs. If users do not need audio outputs, they can call `model.disable_talker()` after initializing the model. This saves about 2 GB of GPU memory, but the `return_audio` option of the `generate` function can then only be set to `False`.\n```python\nmodel = Qwen2_5OmniForConditionalGeneration.from_pretrained(\n    \"Qwen/Qwen2.5-Omni-3B\",\n    torch_dtype=\"auto\",\n    device_map=\"auto\"\n)\nmodel.disable_talker()\n```\n\nFor a more flexible experience, we recommend deciding whether to return audio each time the `generate` function is called. If `return_audio` is set to `False`, the model returns only text outputs, which yields text responses faster.\n\n```python\nmodel = Qwen2_5OmniForConditionalGeneration.from_pretrained(\n    \"Qwen/Qwen2.5-Omni-3B\",\n    torch_dtype=\"auto\",\n    device_map=\"auto\"\n)\n...\ntext_ids = model.generate(**inputs, return_audio=False)\n```\n\n#### Change voice type of output audio\nQwen2.5-Omni supports changing the voice of the output audio. The `\"Qwen/Qwen2.5-Omni-3B\"` checkpoint supports the following two voice types:\n\n| Voice Type | Gender | Description |\n|------------|--------|-------------|\n| Chelsie | Female | A honeyed, velvety voice that carries a gentle warmth and luminous clarity.|\n| Ethan | Male | A bright, upbeat voice with infectious energy and a warm, approachable vibe.|\n\nUsers can use the `speaker` parameter of the `generate` function to specify the voice type. If `speaker` is not specified, the default voice type is `Chelsie`.\n\n```python\ntext_ids, audio = model.generate(**inputs, speaker=\"Chelsie\")\n```\n\n```python\ntext_ids, audio = model.generate(**inputs, speaker=\"Ethan\")\n```\n\n#### Flash-Attention 2 to speed up generation\n\nFirst, make sure to install the latest version of Flash Attention 2:\n\n```bash\npip install -U flash-attn --no-build-isolation\n```\n\nAlso, you should have hardware that is compatible with FlashAttention 2. 
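\n\nAs a quick sanity check (a minimal sketch; `flash_attn` is the import name of the `flash-attn` package, and this assumes a CUDA build of PyTorch), you can confirm the installation from the command line:\n\n```bash\n# Verify that flash-attn imports cleanly and a CUDA device is visible.\npython -c \"import flash_attn, torch; print(flash_attn.__version__, torch.cuda.is_available())\"\n```\n\n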
Read more about hardware compatibility in the official documentation of the [flash attention repository](https://github.com/Dao-AILab/flash-attention). FlashAttention-2 can only be used when a model is loaded in `torch.float16` or `torch.bfloat16`.\n\nTo load and run a model using FlashAttention-2, add `attn_implementation=\"flash_attention_2\"` when loading the model:\n\n```python\nimport torch\n\nfrom transformers import Qwen2_5OmniForConditionalGeneration\n\nmodel = Qwen2_5OmniForConditionalGeneration.from_pretrained(\n    \"Qwen/Qwen2.5-Omni-3B\",\n    device_map=\"auto\",\n    torch_dtype=torch.bfloat16,\n    attn_implementation=\"flash_attention_2\",\n)\n```\n\n\n## Citation\n\nIf you find our paper and code useful in your research, please consider giving us a star :star: and a citation :pencil: :)\n\n\n\n```BibTeX\n@article{Qwen2.5-Omni,\n  title={Qwen2.5-Omni Technical Report},\n  author={Jin Xu and Zhifang Guo and Jinzheng He and Hangrui Hu and Ting He and Shuai Bai and Keqin Chen and Jialin Wang and Yang Fan and Kai Dang and Bin Zhang and Xiong Wang and Yunfei Chu and Junyang Lin},\n  journal={arXiv preprint arXiv:2503.20215},\n  year={2025}\n}\n```\n\n<br>\n\n", |
| "metadata": "\"N/A\"", |
| "depth": 1, |
| "children": [], |
| "children_count": 0, |
| "adapters": [], |
| "adapters_count": 0, |
| "quantized": [], |
| "quantized_count": 0, |
| "merges": [], |
| "merges_count": 0, |
| "total_derivatives": 0, |
| "spaces": [], |
| "spaces_count": 0, |
| "parents": [ |
| "Qwen/Qwen2.5-Omni-3B" |
| ], |
| "base_model": null, |
| "base_model_relation": null |
| }, |
| { |
| "model_id": "FINGU-AI/qwen2.5-omni-3b-lora-sft", |
| "gated": "False", |
| "card": "---\nlibrary_name: peft\nlicense: other\nbase_model: Qwen/Qwen2.5-Omni-3B\ntags:\n- llama-factory\n- lora\n- generated_from_trainer\nmodel-index:\n- name: sft\n results: []\n---\n\n<!-- This model card has been generated automatically according to the information the Trainer had access to. You\nshould probably proofread and complete it, then remove this comment. -->\n\n# sft\n\nThis model is a fine-tuned version of [Qwen/Qwen2.5-Omni-3B](https://huggingface.co/Qwen/Qwen2.5-Omni-3B) on the fingu dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 3.0\n- mixed_precision_training: Native AMP\n\n### Training results\n\n\n\n### Framework versions\n\n- PEFT 0.15.1\n- Transformers 4.52.0.dev0\n- Pytorch 2.2.0a0+81ea7a4\n- Datasets 2.17.1\n- Tokenizers 0.21.1", |
| "metadata": "\"N/A\"", |
| "depth": 1, |
| "children": [], |
| "children_count": 0, |
| "adapters": [], |
| "adapters_count": 0, |
| "quantized": [], |
| "quantized_count": 0, |
| "merges": [], |
| "merges_count": 0, |
| "total_derivatives": 0, |
| "spaces": [], |
| "spaces_count": 0, |
| "parents": [ |
| "Qwen/Qwen2.5-Omni-3B" |
| ], |
| "base_model": "FINGU-AI/qwen2.5-omni-3b-lora-sft", |
| "base_model_relation": "base" |
| }, |
| { |
| "model_id": "andrewt28/qwen2.5-omni-3b-keyboard-video-text", |
| "gated": "False", |
| "card": "---\nlibrary_name: peft\nlicense: afl-3.0\ndatasets:\n- andrewt28/keystroke-typing-videos\nlanguage:\n- en\nbase_model:\n- Qwen/Qwen2.5-Omni-3B\npipeline_tag: video-text-to-text\n---\n\n# Model Card for Qwen2.5-Omni-3B-Keyboard-Video-Text\n\nFine-tuned on video and audio of typing to predict the typed text.", |
| "metadata": "\"N/A\"", |
| "depth": 1, |
| "children": [], |
| "children_count": 0, |
| "adapters": [], |
| "adapters_count": 0, |
| "quantized": [], |
| "quantized_count": 0, |
| "merges": [], |
| "merges_count": 0, |
| "total_derivatives": 0, |
| "spaces": [], |
| "spaces_count": 0, |
| "parents": [ |
| "Qwen/Qwen2.5-Omni-3B" |
| ], |
| "base_model": "andrewt28/qwen2.5-omni-3b-keyboard-video-text", |
| "base_model_relation": "base" |
| }, |
| { |
| "model_id": "ggml-org/Qwen2.5-Omni-3B-GGUF", |
| "gated": "unknown", |
| "card": "---\nlicense: other\nlicense_name: qwen-research\nlicense_link: https://huggingface.co/Qwen/Qwen2.5-Omni-3B/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- multimodal\npipeline_tag: any-to-any\nbase_model:\n- Qwen/Qwen2.5-Omni-3B\n---\n\n# Qwen2.5-Omni-3B-GGUF\n\nOriginal model: https://huggingface.co/Qwen/Qwen2.5-Omni-3B\n\nModalities:\n- \u2705 Text input\n- \u2705 Audio input\n- \u2705 Image input\n- \u274c Video input\n- \u274c Audio generation\n\nRef PR: https://github.com/ggml-org/llama.cpp/pull/13784\n", |
| "metadata": "\"N/A\"", |
| "depth": 1, |
| "children": [], |
| "children_count": 0, |
| "adapters": [], |
| "adapters_count": 0, |
| "quantized": [], |
| "quantized_count": 0, |
| "merges": [], |
| "merges_count": 0, |
| "total_derivatives": 0, |
| "spaces": [], |
| "spaces_count": 0, |
| "parents": [ |
| "Qwen/Qwen2.5-Omni-3B" |
| ], |
| "base_model": null, |
| "base_model_relation": null |
| }, |
| { |
| "model_id": "unsloth/Qwen2.5-Omni-3B-GGUF", |
| "gated": "unknown", |
| "card": "---\nbase_model:\n- Qwen/Qwen2.5-Omni-3B\nlicense: other\nlicense_name: qwen-research\nlicense_link: LICENSE\nlanguage:\n- en\ntags:\n- multimodal\n- unsloth\nlibrary_name: transformers\npipeline_tag: any-to-any\n---\n<div>\n<p style=\"margin-top: 0;margin-bottom: 0;\">\n <em><a href=\"https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf\">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>\n </p>\n <div style=\"display: flex; gap: 5px; align-items: center; \">\n <a href=\"https://github.com/unslothai/unsloth/\">\n <img src=\"https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png\" width=\"133\">\n </a>\n <a href=\"https://discord.gg/unsloth\">\n <img src=\"https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png\" width=\"173\">\n </a>\n <a href=\"https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune\">\n <img src=\"https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png\" width=\"143\">\n </a>\n </div>\n</div>\n\n\n# Qwen2.5-Omni\n<a href=\"https://chat.qwen.ai/\" target=\"_blank\" style=\"margin: 2px;\">\n <img alt=\"Chat\" src=\"https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5\" style=\"display: inline-block; vertical-align: middle;\"/>\n</a>\n\n\n## Overview \n### Introduction\nQwen2.5-Omni is an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. \n\n<p align=\"center\">\n <img src=\"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/qwen_omni.png\" width=\"80%\"/>\n<p>\n\n### Key Features\n\n* **Omni and Novel Architecture**: We propose Thinker-Talker architecture, an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. We propose a novel position embedding, named TMRoPE (Time-aligned Multimodal RoPE), to synchronize the timestamps of video inputs with audio.\n\n* **Real-Time Voice and Video Chat**: Architecture designed for fully real-time interactions, supporting chunked input and immediate output.\n\n* **Natural and Robust Speech Generation**: Surpassing many existing streaming and non-streaming alternatives, demonstrating superior robustness and naturalness in speech generation.\n\n* **Strong Performance Across Modalities**: Exhibiting exceptional performance across all modalities when benchmarked against similarly sized single-modality models. Qwen2.5-Omni outperforms the similarly sized Qwen2-Audio in audio capabilities and achieves comparable performance to Qwen2.5-VL-7B.\n\n* **Excellent End-to-End Speech Instruction Following**: Qwen2.5-Omni shows performance in end-to-end speech instruction following that rivals its effectiveness with text inputs, evidenced by benchmarks such as MMLU and GSM8K.\n\n### Model Architecture\n\n<p align=\"center\">\n <img src=\"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/overview.png\" width=\"80%\"/>\n<p>\n\n### Performance\n\nWe conducted a comprehensive evaluation of Qwen2.5-Omni, which demonstrates strong performance across all modalities when compared to similarly sized single-modality models and closed-source models like Qwen2.5-VL-7B, Qwen2-Audio, and Gemini-1.5-pro. 
In tasks requiring the integration of multiple modalities, such as OmniBench, Qwen2.5-Omni achieves state-of-the-art performance. Furthermore, in single-modality tasks, it excels in areas including speech recognition (Common Voice), translation (CoVoST2), audio understanding (MMAU), image reasoning (MMMU, MMStar), video understanding (MVBench), and speech generation (Seed-tts-eval and subjective naturalness).\n\n<p align=\"center\">\n <img src=\"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/bar.png\" width=\"80%\"/>\n<p>\n\n<details>\n<summary>Multimodality -> Text</summary>\n\n<table class=\"tg\"><thead>\n <tr>\n <th class=\"tg-0lax\">Datasets</th>\n <th class=\"tg-0lax\">Model</th>\n <th class=\"tg-0lax\">Performance</th>\n </tr></thead>\n<tbody>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"10\">OmniBench<br>Speech | Sound Event | Music | Avg</td>\n <td class=\"tg-0lax\">Gemini-1.5-Pro</td>\n <td class=\"tg-0lax\">42.67%|42.26%|46.23%|42.91%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MIO-Instruct</td>\n <td class=\"tg-0lax\">36.96%|33.58%|11.32%|33.80%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">AnyGPT (7B)</td>\n <td class=\"tg-0lax\">17.77%|20.75%|13.21%|18.04%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">video-SALMONN</td>\n <td class=\"tg-0lax\">34.11%|31.70%|<strong>56.60%</strong>|35.64%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">UnifiedIO2-xlarge</td>\n <td class=\"tg-0lax\">39.56%|36.98%|29.25%|38.00%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">UnifiedIO2-xxlarge</td>\n <td class=\"tg-0lax\">34.24%|36.98%|24.53%|33.98%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MiniCPM-o</td>\n <td class=\"tg-0lax\">-|-|-|40.50%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Baichuan-Omni-1.5</td>\n <td class=\"tg-0lax\">-|-|-|42.90%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">52.14%|52.08%|52.83%|52.19%</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>55.25%</strong>|<strong>60.00%</strong>|52.83%|<strong>56.13%</strong></td>\n </tr>\n</tbody></table>\n</details>\n\n\n<details>\n<summary>Audio -> Text</summary>\n\n\n<table class=\"tg\"><thead>\n <tr>\n <th class=\"tg-0lax\">Datasets</th>\n <th class=\"tg-0lax\">Model</th>\n <th class=\"tg-0lax\">Performance</th>\n </tr></thead>\n<tbody>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">ASR</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"12\">Librispeech<br>dev-clean | dev other | test-clean | test-other</td>\n <td class=\"tg-0lax\">SALMONN</td>\n <td class=\"tg-0lax\">-|-|2.1|4.9</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">SpeechVerse</td>\n <td class=\"tg-0lax\">-|-|2.1|4.4</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Whisper-large-v3</td>\n <td class=\"tg-0lax\">-|-|1.8|3.6</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Llama-3-8B</td>\n <td class=\"tg-0lax\">-|-|-|3.4</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Llama-3-70B</td>\n <td class=\"tg-0lax\">-|-|-|3.1</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Seed-ASR-Multilingual</td>\n <td class=\"tg-0lax\">-|-|<strong>1.6</strong>|<strong>2.8</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MiniCPM-o</td>\n <td class=\"tg-0lax\">-|-|1.7|-</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MinMo</td>\n <td class=\"tg-0lax\">-|-|1.7|3.9</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen-Audio</td>\n <td class=\"tg-0lax\">1.8|4.0|2.0|4.2</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\"><strong>1.3</strong>|<strong>3.4</strong>|<strong>1.6</strong>|3.6</td>\n </tr>\n 
<tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">2.0|4.1|2.2|4.5</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\">1.6|3.5|1.8|3.4</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"5\">Common Voice 15<br>en | zh | yue | fr</td>\n <td class=\"tg-0lax\">Whisper-large-v3</td>\n <td class=\"tg-0lax\">9.3|12.8|10.9|10.8</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MinMo</td>\n <td class=\"tg-0lax\">7.9|6.3|6.4|8.5</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\">8.6|6.9|<strong>5.9</strong>|9.6</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">9.1|6.0|11.6|9.6</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>7.6</strong>|<strong>5.2</strong>|7.3|<strong>7.5</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"8\">Fleurs<br>zh | en</td>\n <td class=\"tg-0lax\">Whisper-large-v3</td>\n <td class=\"tg-0lax\">7.7|4.1</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Seed-ASR-Multilingual</td>\n <td class=\"tg-0lax\">-|<strong>3.4</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Megrez-3B-Omni</td>\n <td class=\"tg-0lax\">10.8|-</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MiniCPM-o</td>\n <td class=\"tg-0lax\">4.4|-</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MinMo</td>\n <td class=\"tg-0lax\">3.0|3.8</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\">7.5|-</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">3.2|5.4</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>3.0</strong>|4.1</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"6\">Wenetspeech<br>test-net | test-meeting</td>\n <td class=\"tg-0lax\">Seed-ASR-Chinese</td>\n <td class=\"tg-0lax\"><strong>4.7|5.7</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Megrez-3B-Omni</td>\n <td class=\"tg-0lax\">-|16.4</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MiniCPM-o</td>\n <td class=\"tg-0lax\">6.9|-</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MinMo</td>\n <td class=\"tg-0lax\">6.8|7.4</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">6.3|8.1</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\">5.9|7.7</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"4\">Voxpopuli-V1.0-en</td>\n <td class=\"tg-0lax\">Llama-3-8B</td>\n <td class=\"tg-0lax\">6.2</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Llama-3-70B</td>\n <td class=\"tg-0lax\"><strong>5.7</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">6.6</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\">5.8</td>\n </tr>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">S2TT</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"9\">CoVoST2<br>en-de | de-en | en-zh | zh-en</td>\n <td class=\"tg-0lax\">SALMONN</td>\n <td class=\"tg-0lax\">18.6|-|33.1|-</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">SpeechLLaMA</td>\n <td class=\"tg-0lax\">-|27.1|-|12.3</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">BLSP</td>\n <td class=\"tg-0lax\">14.1|-|-|-</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MiniCPM-o</td>\n <td class=\"tg-0lax\">-|-|<strong>48.2</strong>|27.2</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MinMo</td>\n <td class=\"tg-0lax\">-|<strong>39.9</strong>|46.7|26.0</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen-Audio</td>\n <td 
class=\"tg-0lax\">25.1|33.9|41.5|15.7</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\">29.9|35.2|45.2|24.4</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">28.3|38.1|41.4|26.6</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>30.2</strong>|37.7|41.4|<strong>29.4</strong></td>\n </tr>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">SER</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"6\">Meld</td>\n <td class=\"tg-0lax\">WavLM-large</td>\n <td class=\"tg-0lax\">0.542</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MiniCPM-o</td>\n <td class=\"tg-0lax\">0.524</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen-Audio</td>\n <td class=\"tg-0lax\">0.557</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\">0.553</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">0.558</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>0.570</strong></td>\n </tr>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">VSC</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"6\">VocalSound</td>\n <td class=\"tg-0lax\">CLAP</td>\n <td class=\"tg-0lax\">0.495</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Pengi</td>\n <td class=\"tg-0lax\">0.604</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen-Audio</td>\n <td class=\"tg-0lax\">0.929</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\"><strong>0.939</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">0.936</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>0.939</strong></td>\n </tr>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">Music</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"3\">GiantSteps Tempo</td>\n <td class=\"tg-0lax\">Llark-7B</td>\n <td class=\"tg-0lax\">0.86</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\"><strong>0.88</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>0.88</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"3\">MusicCaps</td>\n <td class=\"tg-0lax\">LP-MusicCaps</td>\n <td class=\"tg-0lax\">0.291|0.149|0.089|<strong>0.061</strong>|0.129|0.130</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">0.325|<strong>0.163</strong>|<strong>0.093</strong>|0.057|<strong>0.132</strong>|<strong>0.229</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>0.328</strong>|0.162|0.090|0.055|0.127|0.225</td>\n </tr>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">Audio Reasoning</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"4\">MMAU<br>Sound | Music | Speech | Avg</td>\n <td class=\"tg-0lax\">Gemini-Pro-V1.5</td>\n <td class=\"tg-0lax\">56.75|49.40|58.55|54.90</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\">54.95|50.98|42.04|49.20</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\"><strong>70.27</strong>|60.48|59.16|63.30</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\">67.87|<strong>69.16|59.76|65.60</strong></td>\n </tr>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">Voice Chatting</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"9\">VoiceBench<br>AlpacaEval | CommonEval | SD-QA | MMSU</td>\n <td 
class=\"tg-0lax\">Ultravox-v0.4.1-LLaMA-3.1-8B</td>\n <td class=\"tg-0lax\"><strong>4.55</strong>|3.90|53.35|47.17</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MERaLiON</td>\n <td class=\"tg-0lax\">4.50|3.77|55.06|34.95</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Megrez-3B-Omni</td>\n <td class=\"tg-0lax\">3.50|2.95|25.95|27.03</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Lyra-Base</td>\n <td class=\"tg-0lax\">3.85|3.50|38.25|49.74</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MiniCPM-o</td>\n <td class=\"tg-0lax\">4.42|<strong>4.15</strong>|50.72|54.78</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Baichuan-Omni-1.5</td>\n <td class=\"tg-0lax\">4.50|4.05|43.40|57.25</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\">3.74|3.43|35.71|35.72</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">4.32|4.00|49.37|50.23</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\">4.49|3.93|<strong>55.71</strong>|<strong>61.32</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"9\">VoiceBench<br>OpenBookQA | IFEval | AdvBench | Avg</td>\n <td class=\"tg-0lax\">Ultravox-v0.4.1-LLaMA-3.1-8B</td>\n <td class=\"tg-0lax\">65.27|<strong>66.88</strong>|98.46|71.45</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MERaLiON</td>\n <td class=\"tg-0lax\">27.23|62.93|94.81|62.91</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Megrez-3B-Omni</td>\n <td class=\"tg-0lax\">28.35|25.71|87.69|46.25</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Lyra-Base</td>\n <td class=\"tg-0lax\">72.75|36.28|59.62|57.66</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MiniCPM-o</td>\n <td class=\"tg-0lax\">78.02|49.25|97.69|71.69</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Baichuan-Omni-1.5</td>\n <td class=\"tg-0lax\">74.51|54.54|97.31|71.14</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2-Audio</td>\n <td class=\"tg-0lax\">49.45|26.33|96.73|55.35</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B</td>\n <td class=\"tg-0lax\">74.73|42.10|98.85|68.81</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B</td>\n <td class=\"tg-0lax\"><strong>81.10</strong>|52.87|<strong>99.42</strong>|<strong>74.12</strong></td>\n </tr>\n</tbody></table>\n</details>\n\n<details>\n<summary>Image -> Text</summary>\n\n| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Other Best | Qwen2.5-VL-7B | GPT-4o-mini | \n|--------------------------------|--------------|------------|------------|---------------|-------------|\n| MMMU<sub>val</sub> | 59.2 | 53.1 | 53.9 | 58.6 | **60.0** | \n| MMMU-Pro<sub>overall</sub> | 36.6 | 29.7 | - | **38.3** | 37.6 | \n| MathVista<sub>testmini</sub> | 67.9 | 59.4 | **71.9** | 68.2 | 52.5 | \n| MathVision<sub>full</sub> | 25.0 | 20.8 | 23.1 | **25.1** | - | \n| MMBench-V1.1-EN<sub>test</sub> | 81.8 | 77.8 | 80.5 | **82.6** | 76.0 | \n| MMVet<sub>turbo</sub> | 66.8 | 62.1 | **67.5** | 67.1 | 66.9 | \n| MMStar | **64.0** | 55.7 | **64.0** | 63.9 | 54.8 | \n| MME<sub>sum</sub> | 2340 | 2117 | **2372** | 2347 | 2003 | \n| MuirBench | 59.2 | 48.0 | - | **59.2** | - | \n| CRPE<sub>relation</sub> | **76.5** | 73.7 | - | 76.4 | - | \n| RealWorldQA<sub>avg</sub> | 70.3 | 62.6 | **71.9** | 68.5 | - | \n| MME-RealWorld<sub>en</sub> | **61.6** | 55.6 | - | 57.4 | - | \n| MM-MT-Bench | 6.0 | 5.0 | - | **6.3** | - | \n| AI2D | 83.2 | 79.5 | **85.8** | 83.9 | - | \n| TextVQA<sub>val</sub> | 84.4 | 79.8 | 83.2 | **84.9** | - | \n| DocVQA<sub>test</sub> | 95.2 | 93.3 | 93.5 | **95.7** | - | \n| ChartQA<sub>test Avg</sub> | 85.3 | 82.8 | 84.9 | 
**87.3** | - | \n| OCRBench_V2<sub>en</sub> | **57.8** | 51.7 | - | 56.3 | - | \n\n\n| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Qwen2.5-VL-7B | Grounding DINO | Gemini 1.5 Pro | \n|--------------------------|--------------|---------------|---------------|----------------|----------------|\n| Refcoco<sub>val</sub> | 90.5 | 88.7 | 90.0 | **90.6** | 73.2 | \n| Refcoco<sub>textA</sub> | **93.5** | 91.8 | 92.5 | 93.2 | 72.9 | \n| Refcoco<sub>textB</sub> | 86.6 | 84.0 | 85.4 | **88.2** | 74.6 | \n| Refcoco+<sub>val</sub> | 85.4 | 81.1 | 84.2 | **88.2** | 62.5 | \n| Refcoco+<sub>textA</sub> | **91.0** | 87.5 | 89.1 | 89.0 | 63.9 | \n| Refcoco+<sub>textB</sub> | **79.3** | 73.2 | 76.9 | 75.9 | 65.0 | \n| Refcocog+<sub>val</sub> | **87.4** | 85.0 | 87.2 | 86.1 | 75.2 | \n| Refcocog+<sub>test</sub> | **87.9** | 85.1 | 87.2 | 87.0 | 76.2 | \n| ODinW | 42.4 | 39.2 | 37.3 | **55.0** | 36.7 | \n| PointGrounding | 66.5 | 46.2 | **67.3** | - | - | \n</details>\n\n\n<details>\n<summary>Video(without audio) -> Text</summary>\n\n| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Other Best | Qwen2.5-VL-7B | GPT-4o-mini | \n|-----------------------------|--------------|------------|------------|---------------|-------------|\n| Video-MME<sub>w/o sub</sub> | 64.3 | 62.0 | 63.9 | **65.1** | 64.8 | \n| Video-MME<sub>w sub</sub> | **72.4** | 68.6 | 67.9 | 71.6 | - | \n| MVBench | **70.3** | 68.7 | 67.2 | 69.6 | - | \n| EgoSchema<sub>test</sub> | **68.6** | 61.4 | 63.2 | 65.0 | - | \n</details>\n\n<details>\n<summary>Zero-shot Speech Generation</summary>\n\n\n<table class=\"tg\"><thead>\n <tr>\n <th class=\"tg-0lax\">Datasets</th>\n <th class=\"tg-0lax\">Model</th>\n <th class=\"tg-0lax\">Performance</th>\n </tr></thead>\n<tbody>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">Content Consistency</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"11\">SEED<br>test-zh | test-en | test-hard </td>\n <td class=\"tg-0lax\">Seed-TTS_ICL</td>\n <td class=\"tg-0lax\">1.11 | 2.24 | 7.58</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Seed-TTS_RL</td>\n <td class=\"tg-0lax\"><strong>1.00</strong> | 1.94 | <strong>6.42</strong></td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">MaskGCT</td>\n <td class=\"tg-0lax\">2.27 | 2.62 | 10.27</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">E2_TTS</td>\n <td class=\"tg-0lax\">1.97 | 2.19 | -</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">F5-TTS</td>\n <td class=\"tg-0lax\">1.56 | <strong>1.83</strong> | 8.67</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">CosyVoice 2</td>\n <td class=\"tg-0lax\">1.45 | 2.57 | 6.83</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">CosyVoice 2-S</td>\n <td class=\"tg-0lax\">1.45 | 2.38 | 8.08</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B_ICL</td>\n <td class=\"tg-0lax\">1.95 | 2.87 | 9.92</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B_RL</td>\n <td class=\"tg-0lax\">1.58 | 2.51 | 7.86</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B_ICL</td>\n <td class=\"tg-0lax\">1.70 | 2.72 | 7.97</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B_RL</td>\n <td class=\"tg-0lax\">1.42 | 2.32 | 6.54</td>\n </tr>\n <tr>\n <td class=\"tg-9j4x\" colspan=\"3\">Speaker Similarity</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\" rowspan=\"11\">SEED<br>test-zh | test-en | test-hard </td>\n <td class=\"tg-0lax\">Seed-TTS_ICL</td>\n <td class=\"tg-0lax\">0.796 | 0.762 | 0.776</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Seed-TTS_RL</td>\n <td class=\"tg-0lax\"><strong>0.801</strong> | <strong>0.766</strong> | <strong>0.782</strong></td>\n </tr>\n <tr>\n <td 
class=\"tg-0lax\">MaskGCT</td>\n <td class=\"tg-0lax\">0.774 | 0.714 | 0.748</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">E2_TTS</td>\n <td class=\"tg-0lax\">0.730 | 0.710 | -</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">F5-TTS</td>\n <td class=\"tg-0lax\">0.741 | 0.647 | 0.713</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">CosyVoice 2</td>\n <td class=\"tg-0lax\">0.748 | 0.652 | 0.724</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">CosyVoice 2-S</td>\n <td class=\"tg-0lax\">0.753 | 0.654 | 0.732</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B_ICL</td>\n <td class=\"tg-0lax\">0.741 | 0.635 | 0.748</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-3B_RL</td>\n <td class=\"tg-0lax\">0.744 | 0.635 | 0.746</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B_ICL</td>\n <td class=\"tg-0lax\">0.752 | 0.632 | 0.747</td>\n </tr>\n <tr>\n <td class=\"tg-0lax\">Qwen2.5-Omni-7B_RL</td>\n <td class=\"tg-0lax\">0.754 | 0.641 | 0.752</td>\n </tr>\n</tbody></table>\n</details>\n\n<details>\n<summary>Text -> Text</summary>\n\n| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Qwen2.5-7B | Qwen2.5-3B | Qwen2-7B | Llama3.1-8B | Gemma2-9B | \n|-----------------------------------|-----------|------------|------------|------------|------------|-------------|-----------|\n| MMLU-Pro | 47.0 | 40.4 | **56.3** | 43.7 | 44.1 | 48.3 | 52.1 | \n| MMLU-redux | 71.0 | 60.9 | **75.4** | 64.4 | 67.3 | 67.2 | 72.8 | \n| LiveBench<sub>0831</sub> | 29.6 | 22.3 | **35.9** | 26.8 | 29.2 | 26.7 | 30.6 | \n| GPQA | 30.8 | 34.3 | **36.4** | 30.3 | 34.3 | 32.8 | 32.8 | \n| MATH | 71.5 | 63.6 | **75.5** | 65.9 | 52.9 | 51.9 | 44.3 | \n| GSM8K | 88.7 | 82.6 | **91.6** | 86.7 | 85.7 | 84.5 | 76.7 | \n| HumanEval | 78.7 | 70.7 | **84.8** |\t74.4 | 79.9 | 72.6 | 68.9 | \n| MBPP | 73.2 | 70.4 | **79.2** | 72.7 | 67.2 | 69.6 | 74.9 | \n| MultiPL-E | 65.8 | 57.6 | **70.4** | 60.2 | 59.1 | 50.7 | 53.4 | \n| LiveCodeBench<sub>2305-2409</sub> | 24.6 | 16.5 | **28.7** | 19.9 | 23.9 | 8.3 | 18.9 | \n</details>\n\n## Quickstart\n\nBelow, we provide simple examples to show how to use Qwen2.5-Omni with \ud83e\udd17 Transformers. The codes of Qwen2.5-Omni has been in the latest Hugging face transformers and we advise you to build from source with command:\n```\npip uninstall transformers\npip install git+https://github.com/huggingface/transformers@v4.51.3-Qwen2.5-Omni-preview\npip install accelerate\n```\nor you might encounter the following error:\n```\nKeyError: 'qwen2_5_omni'\n```\n\n\nWe offer a toolkit to help you handle various types of audio and visual input more conveniently, as if you were using an API. This includes base64, URLs, and interleaved audio, images and videos. You can install it using the following command and make sure your system has `ffmpeg` installed:\n\n```bash\n# It's highly recommended to use `[decord]` feature for faster video loading.\npip install qwen-omni-utils[decord] -U\n```\n\nIf you are not using Linux, you might not be able to install `decord` from PyPI. In that case, you can use `pip install qwen-omni-utils -U` which will fall back to using torchvision for video processing. 
### \ud83e\udd17 Transformers Usage\n\nHere is a code snippet showing how to use the chat model with `transformers` and `qwen_omni_utils`:\n\n```python\nimport soundfile as sf\n\nfrom transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor\nfrom qwen_omni_utils import process_mm_info\n\n# default: Load the model on the available device(s)\nmodel = Qwen2_5OmniForConditionalGeneration.from_pretrained(\"Qwen/Qwen2.5-Omni-3B\", torch_dtype=\"auto\", device_map=\"auto\")\n\n# We recommend enabling flash_attention_2 for better acceleration and memory saving.\n# model = Qwen2_5OmniForConditionalGeneration.from_pretrained(\n#     \"Qwen/Qwen2.5-Omni-3B\",\n#     torch_dtype=\"auto\",\n#     device_map=\"auto\",\n#     attn_implementation=\"flash_attention_2\",\n# )\n\nprocessor = Qwen2_5OmniProcessor.from_pretrained(\"Qwen/Qwen2.5-Omni-3B\")\n\nconversation = [\n    {\n        \"role\": \"system\",\n        \"content\": [\n            {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n        ],\n    },\n    {\n        \"role\": \"user\",\n        \"content\": [\n            {\"type\": \"video\", \"video\": \"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/draw.mp4\"},\n        ],\n    },\n]\n\n# Set whether to use the audio track of the video\nUSE_AUDIO_IN_VIDEO = True\n\n# Preparation for inference\ntext = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)\naudios, images, videos = process_mm_info(conversation, use_audio_in_video=USE_AUDIO_IN_VIDEO)\ninputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors=\"pt\", padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO)\ninputs = inputs.to(model.device).to(model.dtype)\n\n# Inference: Generation of the output text and audio\ntext_ids, audio = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO)\n\ntext = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)\nprint(text)\nsf.write(\n    \"output.wav\",\n    audio.reshape(-1).detach().cpu().numpy(),\n    samplerate=24000,\n)\n```\n\n<details>\n<summary>Minimum GPU memory requirements</summary>\n\n| Model | Precision | 15 s Video | 30 s Video | 60 s Video |\n|--------------|-----------| ------------- | ------------- | ------------------ |\n| Qwen2.5-Omni-3B | FP32 | 89.10 GB | Not Recommended | Not Recommended |\n| Qwen2.5-Omni-3B | BF16 | 18.38 GB | 22.43 GB | 28.22 GB |\n| Qwen2.5-Omni-7B | FP32 | 93.56 GB | Not Recommended | Not Recommended |\n| Qwen2.5-Omni-7B | BF16 | 31.11 GB | 41.85 GB | 60.19 GB |\n\nNote: The table above presents the theoretical minimum memory requirements for inference with `transformers`; the `BF16` figures were tested with `attn_implementation=\"flash_attention_2\"`. In practice, the actual memory usage is typically at least 1.2 times higher. For more information, see the model size estimator [here](https://huggingface.co/docs/accelerate/main/en/usage_guides/model_size_estimator).\n</details> \n\n
\n</details>\n\n<details>\n<summary>Video URL resource usage</summary>\n\nVideo URL compatibility largely depends on the third-party library version; the details are in the table below. Change the backend by setting `FORCE_QWENVL_VIDEO_READER=torchvision` or `FORCE_QWENVL_VIDEO_READER=decord` if you prefer not to use the default one.\n\n| Backend | HTTP | HTTPS |\n|-------------|------|-------|\n| torchvision >= 0.19.0 | \u2705 | \u2705 |\n| torchvision < 0.19.0 | \u274c | \u274c |\n| decord | \u2705 | \u274c |\n</details>\n\n<details>\n<summary>Batch inference</summary>\n\nThe model can process batched inputs composed of mixed samples of various types, such as text, images, audio, and videos, when `return_audio=False` is set. Here is an example.\n\n```python\n# Sample messages for batch inference\n\n# Conversation with video only\nconversation1 = [\n    {\n        \"role\": \"system\",\n        \"content\": [\n            {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n        ],\n    },\n    {\n        \"role\": \"user\",\n        \"content\": [\n            {\"type\": \"video\", \"video\": \"/path/to/video.mp4\"},\n        ]\n    }\n]\n\n# Conversation with audio only\nconversation2 = [\n    {\n        \"role\": \"system\",\n        \"content\": [\n            {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n        ],\n    },\n    {\n        \"role\": \"user\",\n        \"content\": [\n            {\"type\": \"audio\", \"audio\": \"/path/to/audio.wav\"},\n        ]\n    }\n]\n\n# Conversation with pure text\nconversation3 = [\n    {\n        \"role\": \"system\",\n        \"content\": [\n            {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n        ],\n    },\n    {\n        \"role\": \"user\",\n        \"content\": \"who are you?\"\n    }\n]\n\n\n# Conversation with mixed media\nconversation4 = [\n    {\n        \"role\": \"system\",\n        \"content\": [\n            {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n        ],\n    },\n    {\n        \"role\": \"user\",\n        \"content\": [\n            {\"type\": \"image\", \"image\": \"/path/to/image.jpg\"},\n            {\"type\": \"video\", \"video\": \"/path/to/video.mp4\"},\n            {\"type\": \"audio\", \"audio\": \"/path/to/audio.wav\"},\n            {\"type\": \"text\", \"text\": \"What elements can you see and hear in these media?\"},\n        ],\n    }\n]\n\n# Combine messages for batch processing\nconversations = [conversation1, conversation2, conversation3, conversation4]\n\n# Whether to use the audio track of the video input\nUSE_AUDIO_IN_VIDEO = True\n\n# Preparation for batch inference\ntext = processor.apply_chat_template(conversations, add_generation_prompt=True, tokenize=False)\naudios, images, videos = process_mm_info(conversations, use_audio_in_video=USE_AUDIO_IN_VIDEO)\n\ninputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors=\"pt\", padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO)\ninputs = inputs.to(model.device).to(model.dtype)\n\n# Batch inference\ntext_ids = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO, return_audio=False)\ntext = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)\nprint(text)\n```
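\n\nNote that `batch_decode` over the full `text_ids` returns the prompt text together with the response. If you only want the newly generated portion, the generic `transformers` pattern of slicing off the prompt tokens should apply here as well; a minimal sketch, assuming `inputs` and `text_ids` from the example above:\n\n```python\n# Keep only the tokens generated after the prompt.\nprompt_length = inputs[\"input_ids\"].shape[1]\nreplies = processor.batch_decode(text_ids[:, prompt_length:], skip_special_tokens=True, clean_up_tokenization_spaces=False)\nprint(replies)\n```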
\n</details>\n\n### Usage Tips\n\n#### Prompt for audio output\nIf users need audio output, the system prompt must be set to \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"; otherwise, the audio output may not work as expected.\n```\n{\n    \"role\": \"system\",\n    \"content\": [\n        {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n    ],\n}\n```\n#### Use audio in video\nDuring multimodal interaction, the videos provided by users are often accompanied by audio (such as questions about the content of the video, or sounds produced by events in the video). This information helps the model deliver a better interactive experience, so we provide the following options for users to decide whether to use the audio in a video.\n```python\n# First place: in data preprocessing\naudios, images, videos = process_mm_info(conversations, use_audio_in_video=True)\n```\n```python\n# Second place: in the model processor\ninputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors=\"pt\",\n                   padding=True, use_audio_in_video=True)\n```\n```python\n# Third place: in model inference\ntext_ids, audio = model.generate(**inputs, use_audio_in_video=True)\n```\nIt is worth noting that during a multi-round conversation, the `use_audio_in_video` parameter must be set to the same value in all of these places; otherwise, unexpected results will occur.
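\n\nA simple way to keep the three call sites consistent is to define the flag once and pass it everywhere, as the Quickstart example does; a minimal sketch, reusing the variables from the batch example above:\n\n```python\n# Define once, pass everywhere: this avoids mismatched use_audio_in_video\n# values across preprocessing, the processor, and generation.\nUSE_AUDIO_IN_VIDEO = True\n\naudios, images, videos = process_mm_info(conversations, use_audio_in_video=USE_AUDIO_IN_VIDEO)\ninputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors=\"pt\",\n                   padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO)\ntext_ids, audio = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO)\n```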
\n\n#### Use audio output or not\n\nThe model supports both text and audio outputs. If users do not need audio output, they can call `model.disable_talker()` after initializing the model. This option saves about 2 GB of GPU memory, but the `return_audio` option of the `generate` function can then only be set to `False`.\n```python\nmodel = Qwen2_5OmniForConditionalGeneration.from_pretrained(\n    \"Qwen/Qwen2.5-Omni-3B\",\n    torch_dtype=\"auto\",\n    device_map=\"auto\"\n)\nmodel.disable_talker()\n```\n\nFor a more flexible experience, we recommend that users decide whether to return audio each time the `generate` function is called. If `return_audio` is set to `False`, the model will only return text outputs, yielding text responses faster.\n\n```python\nmodel = Qwen2_5OmniForConditionalGeneration.from_pretrained(\n    \"Qwen/Qwen2.5-Omni-3B\",\n    torch_dtype=\"auto\",\n    device_map=\"auto\"\n)\n...\ntext_ids = model.generate(**inputs, return_audio=False)\n```\n\n#### Change voice type of output audio\nQwen2.5-Omni supports changing the voice of the output audio. The `\"Qwen/Qwen2.5-Omni-3B\"` checkpoint supports the following two voice types:\n\n| Voice Type | Gender | Description |\n|------------|--------|-------------|\n| Chelsie | Female | A honeyed, velvety voice that carries a gentle warmth and luminous clarity. |\n| Ethan | Male | A bright, upbeat voice with infectious energy and a warm, approachable vibe. |\n\nUsers can use the `speaker` parameter of the `generate` function to specify the voice type. If `speaker` is not specified, the voice type defaults to `Chelsie`.\n\n```python\ntext_ids, audio = model.generate(**inputs, speaker=\"Chelsie\")\n```\n\n```python\ntext_ids, audio = model.generate(**inputs, speaker=\"Ethan\")\n```\n\n#### Flash-Attention 2 to speed up generation\n\nFirst, make sure to install the latest version of Flash Attention 2:\n\n```bash\npip install -U flash-attn --no-build-isolation\n```\n\nAlso, you should have hardware that is compatible with FlashAttention 2. Read more about it in the official documentation of the [flash attention repository](https://github.com/Dao-AILab/flash-attention). FlashAttention-2 can only be used when a model is loaded in `torch.float16` or `torch.bfloat16`.\n\nTo load and run a model using FlashAttention-2, add `attn_implementation=\"flash_attention_2\"` when loading the model:\n\n```python\nimport torch\nfrom transformers import Qwen2_5OmniForConditionalGeneration\n\nmodel = Qwen2_5OmniForConditionalGeneration.from_pretrained(\n    \"Qwen/Qwen2.5-Omni-3B\",\n    device_map=\"auto\",\n    torch_dtype=torch.bfloat16,\n    attn_implementation=\"flash_attention_2\",\n)\n```\n\n\n## Citation\n\nIf you find our paper and code useful in your research, please consider giving us a star :star: and a citation :pencil: :)\n\n\n\n```BibTeX\n@article{Qwen2.5-Omni,\n  title={Qwen2.5-Omni Technical Report},\n  author={Jin Xu and Zhifang Guo and Jinzheng He and Hangrui Hu and Ting He and Shuai Bai and Keqin Chen and Jialin Wang and Yang Fan and Kai Dang and Bin Zhang and Xiong Wang and Yunfei Chu and Junyang Lin},\n  journal={arXiv preprint arXiv:2503.20215},\n  year={2025}\n}\n```\n\n<br>\n\n", |
| "metadata": "\"N/A\"", |
| "depth": 1, |
| "children": [], |
| "children_count": 0, |
| "adapters": [], |
| "adapters_count": 0, |
| "quantized": [], |
| "quantized_count": 0, |
| "merges": [], |
| "merges_count": 0, |
| "total_derivatives": 0, |
| "spaces": [], |
| "spaces_count": 0, |
| "parents": [ |
| "Qwen/Qwen2.5-Omni-3B" |
| ], |
| "base_model": null, |
| "base_model_relation": null |
| }, |
| { |
| "model_id": "mradermacher/Qwen2.5-Omni-3B-GGUF", |
| "gated": "unknown", |
| "card": "---\nbase_model: Qwen/Qwen2.5-Omni-3B\nlanguage:\n- en\nlibrary_name: transformers\nlicense: other\nlicense_link: LICENSE\nlicense_name: qwen-research\nquantized_by: mradermacher\ntags:\n- multimodal\n---\n## About\n\n<!-- ### quantize_version: 2 -->\n<!-- ### output_tensor_quantised: 1 -->\n<!-- ### convert_type: hf -->\n<!-- ### vocab_type: -->\n<!-- ### tags: -->\nstatic quants of https://huggingface.co/Qwen/Qwen2.5-Omni-3B\n\n<!-- provided-files -->\nweighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF/resolve/main/Qwen2.5-Omni-3B.Q2_K.gguf) | Q2_K | 1.5 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF/resolve/main/Qwen2.5-Omni-3B.Q3_K_S.gguf) | Q3_K_S | 1.7 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF/resolve/main/Qwen2.5-Omni-3B.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF/resolve/main/Qwen2.5-Omni-3B.Q3_K_L.gguf) | Q3_K_L | 1.9 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF/resolve/main/Qwen2.5-Omni-3B.IQ4_XS.gguf) | IQ4_XS | 2.0 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF/resolve/main/Qwen2.5-Omni-3B.Q4_K_S.gguf) | Q4_K_S | 2.1 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF/resolve/main/Qwen2.5-Omni-3B.Q4_K_M.gguf) | Q4_K_M | 2.2 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF/resolve/main/Qwen2.5-Omni-3B.Q5_K_S.gguf) | Q5_K_S | 2.5 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF/resolve/main/Qwen2.5-Omni-3B.Q5_K_M.gguf) | Q5_K_M | 2.5 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF/resolve/main/Qwen2.5-Omni-3B.Q6_K.gguf) | Q6_K | 2.9 | very good quality |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF/resolve/main/Qwen2.5-Omni-3B.Q8_0.gguf) | Q8_0 | 3.7 | fast, best quality |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF/resolve/main/Qwen2.5-Omni-3B.f16.gguf) | f16 | 6.9 | 16 bpw, overkill |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n<!-- end -->\n", |
| "metadata": "\"N/A\"", |
| "depth": 1, |
| "children": [], |
| "children_count": 0, |
| "adapters": [], |
| "adapters_count": 0, |
| "quantized": [], |
| "quantized_count": 0, |
| "merges": [], |
| "merges_count": 0, |
| "total_derivatives": 0, |
| "spaces": [], |
| "spaces_count": 0, |
| "parents": [ |
| "Qwen/Qwen2.5-Omni-3B" |
| ], |
| "base_model": null, |
| "base_model_relation": null |
| }, |
| { |
| "model_id": "mradermacher/Qwen2.5-Omni-3B-i1-GGUF", |
| "gated": "unknown", |
| "card": "---\nbase_model: Qwen/Qwen2.5-Omni-3B\nlanguage:\n- en\nlibrary_name: transformers\nlicense: other\nlicense_link: LICENSE\nlicense_name: qwen-research\nquantized_by: mradermacher\ntags:\n- multimodal\n---\n## About\n\n<!-- ### quantize_version: 2 -->\n<!-- ### output_tensor_quantised: 1 -->\n<!-- ### convert_type: hf -->\n<!-- ### vocab_type: -->\n<!-- ### tags: nicoboss -->\nweighted/imatrix quants of https://huggingface.co/Qwen/Qwen2.5-Omni-3B\n\n<!-- provided-files -->\nstatic quants are available at https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.0 | for the desperate |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.1 | mostly desperate |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.2 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.2 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ2_M.gguf) | i1-IQ2_M | 1.4 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.4 | very low quality |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.5 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.6 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.7 | IQ3_XS probably better |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ3_S.gguf) | i1-IQ3_S | 1.7 | beats Q3_K* |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ3_M.gguf) | i1-IQ3_M | 1.7 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.8 | IQ3_S probably better |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.9 | IQ3_M probably better |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.0 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.1 | prefer IQ4_XS |\n| 
[GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q4_0.gguf) | i1-Q4_0 | 2.1 | fast, low quality |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.1 | optimal size/speed/quality |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.2 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q4_1.gguf) | i1-Q4_1 | 2.3 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.5 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.5 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q6_K.gguf) | i1-Q6_K | 2.9 | practically like static Q6_K |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n<!-- end -->\n", |
| "metadata": "\"N/A\"", |
| "depth": 1, |
| "children": [], |
| "children_count": 0, |
| "adapters": [], |
| "adapters_count": 0, |
| "quantized": [], |
| "quantized_count": 0, |
| "merges": [], |
| "merges_count": 0, |
| "total_derivatives": 0, |
| "spaces": [], |
| "spaces_count": 0, |
| "parents": [ |
| "Qwen/Qwen2.5-Omni-3B" |
| ], |
| "base_model": null, |
| "base_model_relation": null |
| }, |
| { |
| "model_id": "zhaoweiguo/Qwen2.5-Omni-3B-Q4_K_M-GGUF", |
| "gated": "unknown", |
| "card": "---\nlicense: other\nlicense_name: qwen-research\nlicense_link: LICENSE\nlanguage:\n- en\ntags:\n- multimodal\n- llama-cpp\n- gguf-my-repo\nlibrary_name: transformers\npipeline_tag: any-to-any\nbase_model: Qwen/Qwen2.5-Omni-3B\n---\n\n# zhaoweiguo/Qwen2.5-Omni-3B-Q4_K_M-GGUF\nThis model was converted to GGUF format from [`Qwen/Qwen2.5-Omni-3B`](https://huggingface.co/Qwen/Qwen2.5-Omni-3B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Omni-3B) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux)\n\n```bash\nbrew install llama.cpp\n\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo zhaoweiguo/Qwen2.5-Omni-3B-Q4_K_M-GGUF --hf-file qwen2.5-omni-3b-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo zhaoweiguo/Qwen2.5-Omni-3B-Q4_K_M-GGUF --hf-file qwen2.5-omni-3b-q4_k_m.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo zhaoweiguo/Qwen2.5-Omni-3B-Q4_K_M-GGUF --hf-file qwen2.5-omni-3b-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\nor \n```\n./llama-server --hf-repo zhaoweiguo/Qwen2.5-Omni-3B-Q4_K_M-GGUF --hf-file qwen2.5-omni-3b-q4_k_m.gguf -c 2048\n```\n", |
| "metadata": "\"N/A\"", |
| "depth": 1, |
| "children": [], |
| "children_count": 0, |
| "adapters": [], |
| "adapters_count": 0, |
| "quantized": [], |
| "quantized_count": 0, |
| "merges": [], |
| "merges_count": 0, |
| "total_derivatives": 0, |
| "spaces": [], |
| "spaces_count": 0, |
| "parents": [ |
| "Qwen/Qwen2.5-Omni-3B" |
| ], |
| "base_model": null, |
| "base_model_relation": null |
| } |
| ] |
| } |