{
"base_model": "Qwen/Qwen2.5-Omni-3B",
"tree": [
{
"model_id": "Qwen/Qwen2.5-Omni-3B",
"gated": "False",
"card": "---\nlicense: other\nlicense_name: qwen-research\nlicense_link: LICENSE\nlanguage:\n- en\ntags:\n- multimodal\nlibrary_name: transformers\npipeline_tag: any-to-any\n---\n\n# Qwen2.5-Omni\n\n \n\n\n\n## Overview \n### Introduction\nQwen2.5-Omni is an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. \n\n
\n\n### Key Features\n\n* **Omni and Novel Architecture**: We propose the Thinker-Talker architecture, an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. We also propose a novel position embedding, named TMRoPE (Time-aligned Multimodal RoPE), to synchronize the timestamps of video inputs with audio.\n\n* **Real-Time Voice and Video Chat**: An architecture designed for fully real-time interactions, supporting chunked input and immediate output.\n\n* **Natural and Robust Speech Generation**: Surpasses many existing streaming and non-streaming alternatives, demonstrating superior robustness and naturalness in speech generation.\n\n* **Strong Performance Across Modalities**: Exhibits exceptional performance across all modalities when benchmarked against similarly sized single-modality models. Qwen2.5-Omni outperforms the similarly sized Qwen2-Audio in audio capabilities and achieves performance comparable to Qwen2.5-VL-7B.\n\n* **Excellent End-to-End Speech Instruction Following**: Qwen2.5-Omni shows performance in end-to-end speech instruction following that rivals its effectiveness with text inputs, as evidenced by benchmarks such as MMLU and GSM8K.\n\n### Model Architecture
\n\n### Performance\n\nWe conducted a comprehensive evaluation of Qwen2.5-Omni, which demonstrates strong performance across all modalities when compared to similarly sized single-modality models (e.g., Qwen2.5-VL-7B, Qwen2-Audio) and to closed-source models such as Gemini-1.5-Pro. In tasks requiring the integration of multiple modalities, such as OmniBench, Qwen2.5-Omni achieves state-of-the-art performance. In single-modality tasks, it also excels in areas including speech recognition (Common Voice), translation (CoVoST2), audio understanding (MMAU), image reasoning (MMMU, MMStar), video understanding (MVBench), and speech generation (Seed-tts-eval and subjective naturalness).
\n\n#### Multimodality -> Text\n\n**OmniBench**\n\n| Model | Speech | Sound Event | Music | Avg |\n|---|---|---|---|---|\n| Gemini-1.5-Pro | 42.67% | 42.26% | 46.23% | 42.91% |\n| MIO-Instruct | 36.96% | 33.58% | 11.32% | 33.80% |\n| AnyGPT (7B) | 17.77% | 20.75% | 13.21% | 18.04% |\n| video-SALMONN | 34.11% | 31.70% | 56.60% | 35.64% |\n| UnifiedIO2-xlarge | 39.56% | 36.98% | 29.25% | 38.00% |\n| UnifiedIO2-xxlarge | 34.24% | 36.98% | 24.53% | 33.98% |\n| MiniCPM-o | - | - | - | 40.50% |\n| Baichuan-Omni-1.5 | - | - | - | 42.90% |\n| Qwen2.5-Omni-3B | 52.14% | 52.08% | 52.83% | 52.19% |\n| Qwen2.5-Omni-7B | 55.25% | 60.00% | 52.83% | 56.13% |
\n\n#### Audio -> Text\n\n**ASR: Librispeech** (WER)\n\n| Model | dev-clean | dev-other | test-clean | test-other |\n|---|---|---|---|---|\n| SALMONN | - | - | 2.1 | 4.9 |\n| SpeechVerse | - | - | 2.1 | 4.4 |\n| Whisper-large-v3 | - | - | 1.8 | 3.6 |\n| Llama-3-8B | - | - | - | 3.4 |\n| Llama-3-70B | - | - | - | 3.1 |\n| Seed-ASR-Multilingual | - | - | 1.6 | 2.8 |\n| MiniCPM-o | - | - | 1.7 | - |\n| MinMo | - | - | 1.7 | 3.9 |\n| Qwen-Audio | 1.8 | 4.0 | 2.0 | 4.2 |\n| Qwen2-Audio | 1.3 | 3.4 | 1.6 | 3.6 |\n| Qwen2.5-Omni-3B | 2.0 | 4.1 | 2.2 | 4.5 |\n| Qwen2.5-Omni-7B | 1.6 | 3.5 | 1.8 | 3.4 |\n\n**ASR: Common Voice 15**\n\n| Model | en | zh | yue | fr |\n|---|---|---|---|---|\n| Whisper-large-v3 | 9.3 | 12.8 | 10.9 | 10.8 |\n| MinMo | 7.9 | 6.3 | 6.4 | 8.5 |\n| Qwen2-Audio | 8.6 | 6.9 | 5.9 | 9.6 |\n| Qwen2.5-Omni-3B | 9.1 | 6.0 | 11.6 | 9.6 |\n| Qwen2.5-Omni-7B | 7.6 | 5.2 | 7.3 | 7.5 |\n\n**ASR: Fleurs**\n\n| Model | zh | en |\n|---|---|---|\n| Whisper-large-v3 | 7.7 | 4.1 |\n| Seed-ASR-Multilingual | - | 3.4 |\n| Megrez-3B-Omni | 10.8 | - |\n| MiniCPM-o | 4.4 | - |\n| MinMo | 3.0 | 3.8 |\n| Qwen2-Audio | 7.5 | - |\n| Qwen2.5-Omni-3B | 3.2 | 5.4 |\n| Qwen2.5-Omni-7B | 3.0 | 4.1 |\n\n**ASR: Wenetspeech**\n\n| Model | test-net | test-meeting |\n|---|---|---|\n| Seed-ASR-Chinese | 4.7 | 5.7 |\n| Megrez-3B-Omni | - | 16.4 |\n| MiniCPM-o | 6.9 | - |\n| MinMo | 6.8 | 7.4 |\n| Qwen2.5-Omni-3B | 6.3 | 8.1 |\n| Qwen2.5-Omni-7B | 5.9 | 7.7 |\n\n**ASR: Voxpopuli-V1.0-en**\n\n| Model | WER |\n|---|---|\n| Llama-3-8B | 6.2 |\n| Llama-3-70B | 5.7 |\n| Qwen2.5-Omni-3B | 6.6 |\n| Qwen2.5-Omni-7B | 5.8 |\n\n**S2TT: CoVoST2**\n\n| Model | en-de | de-en | en-zh | zh-en |\n|---|---|---|---|---|\n| SALMONN | 18.6 | - | 33.1 | - |\n| SpeechLLaMA | - | 27.1 | - | 12.3 |\n| BLSP | 14.1 | - | - | - |\n| MiniCPM-o | - | - | 48.2 | 27.2 |\n| MinMo | - | 39.9 | 46.7 | 26.0 |\n| Qwen-Audio | 25.1 | 33.9 | 41.5 | 15.7 |\n| Qwen2-Audio | 29.9 | 35.2 | 45.2 | 24.4 |\n| Qwen2.5-Omni-3B | 28.3 | 38.1 | 41.4 | 26.6 |\n| Qwen2.5-Omni-7B | 30.2 | 37.7 | 41.4 | 29.4 |\n\n**SER: Meld**\n\n| Model | Score |\n|---|---|\n| WavLM-large | 0.542 |\n| MiniCPM-o | 0.524 |\n| Qwen-Audio | 0.557 |\n| Qwen2-Audio | 0.553 |\n| Qwen2.5-Omni-3B | 0.558 |\n| Qwen2.5-Omni-7B | 0.570 |\n\n**VSC: VocalSound**\n\n| Model | Score |\n|---|---|\n| CLAP | 0.495 |\n| Pengi | 0.604 |\n| Qwen-Audio | 0.929 |\n| Qwen2-Audio | 0.939 |\n| Qwen2.5-Omni-3B | 0.936 |\n| Qwen2.5-Omni-7B | 0.939 |\n\n**Music: GiantSteps Tempo**\n\n| Model | Score |\n|---|---|\n| Llark-7B | 0.86 |\n| Qwen2.5-Omni-3B | 0.88 |\n| Qwen2.5-Omni-7B | 0.88 |\n\n**Music: MusicCaps**\n\n| Model | Performance |\n|---|---|\n| LP-MusicCaps | 0.291 / 0.149 / 0.089 / 0.061 / 0.129 / 0.130 |\n| Qwen2.5-Omni-3B | 0.325 / 0.163 / 0.093 / 0.057 / 0.132 / 0.229 |\n| Qwen2.5-Omni-7B | 0.328 / 0.162 / 0.090 / 0.055 / 0.127 / 0.225 |\n\n**Audio Reasoning: MMAU**\n\n| Model | Sound | Music | Speech | Avg |\n|---|---|---|---|---|\n| Gemini-Pro-V1.5 | 56.75 | 49.40 | 58.55 | 54.90 |\n| Qwen2-Audio | 54.95 | 50.98 | 42.04 | 49.20 |\n| Qwen2.5-Omni-3B | 70.27 | 60.48 | 59.16 | 63.30 |\n| Qwen2.5-Omni-7B | 67.87 | 69.16 | 59.76 | 65.60 |\n\n**Voice Chatting: VoiceBench**\n\n| Model | AlpacaEval | CommonEval | SD-QA | MMSU |\n|---|---|---|---|---|\n| Ultravox-v0.4.1-LLaMA-3.1-8B | 4.55 | 3.90 | 53.35 | 47.17 |\n| MERaLiON | 4.50 | 3.77 | 55.06 | 34.95 |\n| Megrez-3B-Omni | 3.50 | 2.95 | 25.95 | 27.03 |\n| Lyra-Base | 3.85 | 3.50 | 38.25 | 49.74 |\n| MiniCPM-o | 4.42 | 4.15 | 50.72 | 54.78 |\n| Baichuan-Omni-1.5 | 4.50 | 4.05 | 43.40 | 57.25 |\n| Qwen2-Audio | 3.74 | 3.43 | 35.71 | 35.72 |\n| Qwen2.5-Omni-3B | 4.32 | 4.00 | 49.37 | 50.23 |\n| Qwen2.5-Omni-7B | 4.49 | 3.93 | 55.71 | 61.32 |\n\n| Model | OpenBookQA | IFEval | AdvBench | Avg |\n|---|---|---|---|---|\n| Ultravox-v0.4.1-LLaMA-3.1-8B | 65.27 | 66.88 | 98.46 | 71.45 |\n| MERaLiON | 27.23 | 62.93 | 94.81 | 62.91 |\n| Megrez-3B-Omni | 28.35 | 25.71 | 87.69 | 46.25 |\n| Lyra-Base | 72.75 | 36.28 | 59.62 | 57.66 |\n| MiniCPM-o | 78.02 | 49.25 | 97.69 | 71.69 |\n| Baichuan-Omni-1.5 | 74.51 | 54.54 | 97.31 | 71.14 |\n| Qwen2-Audio | 49.45 | 26.33 | 96.73 | 55.35 |\n| Qwen2.5-Omni-3B | 74.73 | 42.10 | 98.85 | 68.81 |\n| Qwen2.5-Omni-7B | 81.10 | 52.87 | 99.42 | 74.12 |
\n\n#### Image -> Text\n\n| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Other Best | Qwen2.5-VL-7B | GPT-4o-mini |\n|---|---|---|---|---|---|\n| MMMU (val) | 59.2 | 53.1 | 53.9 | 58.6 | **60.0** |\n| MMMU-Pro (overall) | 36.6 | 29.7 | - | **38.3** | 37.6 |\n| MathVista (testmini) | 67.9 | 59.4 | **71.9** | 68.2 | 52.5 |\n| MathVision (full) | 25.0 | 20.8 | 23.1 | **25.1** | - |\n| MMBench-V1.1-EN (test) | 81.8 | 77.8 | 80.5 | **82.6** | 76.0 |\n| MMVet (turbo) | 66.8 | 62.1 | **67.5** | 67.1 | 66.9 |\n| MMStar | **64.0** | 55.7 | **64.0** | 63.9 | 54.8 |\n| MME (sum) | 2340 | 2117 | **2372** | 2347 | 2003 |\n| MuirBench | 59.2 | 48.0 | - | **59.2** | - |\n| CRPE (relation) | **76.5** | 73.7 | - | 76.4 | - |\n| RealWorldQA (avg) | 70.3 | 62.6 | **71.9** | 68.5 | - |\n| MME-RealWorld (en) | **61.6** | 55.6 | - | 57.4 | - |\n| MM-MT-Bench | 6.0 | 5.0 | - | **6.3** | - |\n| AI2D | 83.2 | 79.5 | **85.8** | 83.9 | - |\n| TextVQA (val) | 84.4 | 79.8 | 83.2 | **84.9** | - |\n| DocVQA (test) | 95.2 | 93.3 | 93.5 | **95.7** | - |\n| ChartQA (test avg) | 85.3 | 82.8 | 84.9 | **87.3** | - |\n| OCRBench_V2 (en) | **57.8** | 51.7 | - | 56.3 | - |\n\n| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Qwen2.5-VL-7B | Grounding DINO | Gemini 1.5 Pro |\n|---|---|---|---|---|---|\n| RefCOCO (val) | 90.5 | 88.7 | 90.0 | **90.6** | 73.2 |\n| RefCOCO (testA) | **93.5** | 91.8 | 92.5 | 93.2 | 72.9 |\n| RefCOCO (testB) | 86.6 | 84.0 | 85.4 | **88.2** | 74.6 |\n| RefCOCO+ (val) | 85.4 | 81.1 | 84.2 | **88.2** | 62.5 |\n| RefCOCO+ (testA) | **91.0** | 87.5 | 89.1 | 89.0 | 63.9 |\n| RefCOCO+ (testB) | **79.3** | 73.2 | 76.9 | 75.9 | 65.0 |\n| RefCOCOg (val) | **87.4** | 85.0 | 87.2 | 86.1 | 75.2 |\n| RefCOCOg (test) | **87.9** | 85.1 | 87.2 | 87.0 | 76.2 |\n| ODinW | 42.4 | 39.2 | 37.3 | **55.0** | 36.7 |\n| PointGrounding | 66.5 | 46.2 | **67.3** | - | - |
\n\n#### Video (without audio) -> Text\n\n| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Other Best | Qwen2.5-VL-7B | GPT-4o-mini |\n|---|---|---|---|---|---|\n| Video-MME (w/o sub) | 64.3 | 62.0 | 63.9 | **65.1** | 64.8 |\n| Video-MME (w sub) | **72.4** | 68.6 | 67.9 | 71.6 | - |\n| MVBench | **70.3** | 68.7 | 67.2 | 69.6 | - |\n| EgoSchema (test) | **68.6** | 61.4 | 63.2 | 65.0 | - |
\n\n#### Zero-shot Speech Generation\n\n**SEED, Content Consistency**\n\n| Model | test-zh | test-en | test-hard |\n|---|---|---|---|\n| Seed-TTS_ICL | 1.11 | 2.24 | 7.58 |\n| Seed-TTS_RL | 1.00 | 1.94 | 6.42 |\n| MaskGCT | 2.27 | 2.62 | 10.27 |\n| E2_TTS | 1.97 | 2.19 | - |\n| F5-TTS | 1.56 | 1.83 | 8.67 |\n| CosyVoice 2 | 1.45 | 2.57 | 6.83 |\n| CosyVoice 2-S | 1.45 | 2.38 | 8.08 |\n| Qwen2.5-Omni-3B_ICL | 1.95 | 2.87 | 9.92 |\n| Qwen2.5-Omni-3B_RL | 1.58 | 2.51 | 7.86 |\n| Qwen2.5-Omni-7B_ICL | 1.70 | 2.72 | 7.97 |\n| Qwen2.5-Omni-7B_RL | 1.42 | 2.32 | 6.54 |\n\n**SEED, Speaker Similarity**\n\n| Model | test-zh | test-en | test-hard |\n|---|---|---|---|\n| Seed-TTS_ICL | 0.796 | 0.762 | 0.776 |\n| Seed-TTS_RL | 0.801 | 0.766 | 0.782 |\n| MaskGCT | 0.774 | 0.714 | 0.748 |\n| E2_TTS | 0.730 | 0.710 | - |\n| F5-TTS | 0.741 | 0.647 | 0.713 |\n| CosyVoice 2 | 0.748 | 0.652 | 0.724 |\n| CosyVoice 2-S | 0.753 | 0.654 | 0.732 |\n| Qwen2.5-Omni-3B_ICL | 0.741 | 0.635 | 0.748 |\n| Qwen2.5-Omni-3B_RL | 0.744 | 0.635 | 0.746 |\n| Qwen2.5-Omni-7B_ICL | 0.752 | 0.632 | 0.747 |\n| Qwen2.5-Omni-7B_RL | 0.754 | 0.641 | 0.752 |
\n\n#### Text -> Text\n\n| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Qwen2.5-7B | Qwen2.5-3B | Qwen2-7B | Llama3.1-8B | Gemma2-9B |\n|---|---|---|---|---|---|---|---|\n| MMLU-Pro | 47.0 | 40.4 | **56.3** | 43.7 | 44.1 | 48.3 | 52.1 |\n| MMLU-redux | 71.0 | 60.9 | **75.4** | 64.4 | 67.3 | 67.2 | 72.8 |\n| LiveBench (0831) | 29.6 | 22.3 | **35.9** | 26.8 | 29.2 | 26.7 | 30.6 |\n| GPQA | 30.8 | 34.3 | **36.4** | 30.3 | 34.3 | 32.8 | 32.8 |\n| MATH | 71.5 | 63.6 | **75.5** | 65.9 | 52.9 | 51.9 | 44.3 |\n| GSM8K | 88.7 | 82.6 | **91.6** | 86.7 | 85.7 | 84.5 | 76.7 |\n| HumanEval | 78.7 | 70.7 | **84.8** | 74.4 | 79.9 | 72.6 | 68.9 |\n| MBPP | 73.2 | 70.4 | **79.2** | 72.7 | 67.2 | 69.6 | 74.9 |\n| MultiPL-E | 65.8 | 57.6 | **70.4** | 60.2 | 59.1 | 50.7 | 53.4 |\n| LiveCodeBench (2305-2409) | 24.6 | 16.5 | **28.7** | 19.9 | 23.9 | 8.3 | 18.9 |
\n\n#### Minimum GPU memory requirements\n\n| Model | Precision | 15 s video | 30 s video | 60 s video |\n|---|---|---|---|---|\n| Qwen-Omni-3B | FP32 | 89.10 GB | Not recommended | Not recommended |\n| Qwen-Omni-3B | BF16 | 18.38 GB | 22.43 GB | 28.22 GB |\n| Qwen-Omni-7B | FP32 | 93.56 GB | Not recommended | Not recommended |\n| Qwen-Omni-7B | BF16 | 31.11 GB | 41.85 GB | 60.19 GB |\n\nNote: the table above presents the theoretical minimum memory requirements for inference with \`transformers\`, and the \`BF16\` figures were measured with \`attn_implementation=\"flash_attention_2\"\`; in practice, actual memory usage is typically at least 1.2 times higher. For more information, see the Accelerate [model size estimator](https://huggingface.co/docs/accelerate/main/en/usage_guides/model_size_estimator).
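\nAs a rough, unofficial sketch, the \"at least 1.2x\" rule of thumb above can be applied to the theoretical BF16 minima in the table to get more realistic planning numbers:\n\n```python\n# Back-of-the-envelope practical-memory estimates derived from the table's\n# theoretical BF16 minima, applying the \"at least 1.2x\" note above.\ntheoretical_gb = {\n    (\"Qwen-Omni-3B\", 15): 18.38,\n    (\"Qwen-Omni-3B\", 60): 28.22,\n    (\"Qwen-Omni-7B\", 15): 31.11,\n    (\"Qwen-Omni-7B\", 60): 60.19,\n}\nfor (model, seconds), gb in theoretical_gb.items():\n    print(f\"{model} (BF16), {seconds}s video: plan for >= {gb * 1.2:.1f} GB\")\n```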
\n\n#### Video URL resource usage\n\nVideo URL compatibility largely depends on the third-party library version, as detailed in the table below. Change the backend by setting \`FORCE_QWENVL_VIDEO_READER=torchvision\` or \`FORCE_QWENVL_VIDEO_READER=decord\` if you prefer not to use the default one.\n\n| Backend | HTTP | HTTPS |\n|-------------|------|-------|\n| torchvision >= 0.19.0 | \u2705 | \u2705 |\n| torchvision < 0.19.0 | \u274c | \u274c |\n| decord | \u2705 | \u274c |
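\nA small illustration of switching the reader (one assumption here: the override is read from the environment, so set it before the video utilities are first used, e.g., before importing \`qwen_omni_utils\`):\n\n```python\nimport os\n\n# Force the decord backend for video URLs; per the table above, use\n# \"torchvision\" (>= 0.19.0) instead if you need HTTPS URLs.\nos.environ[\"FORCE_QWENVL_VIDEO_READER\"] = \"decord\"\n\nfrom qwen_omni_utils import process_mm_info  # imported after the override\n```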
\n\n#### Batch inference\n\nWhen \`return_audio=False\` is set, the model can batch mixed samples of various types, such as text, images, audio, and video, in a single call. Here is an example.\n\n```python\n# Sample messages for batch inference\n\n# Conversation with video only\nconversation1 = [\n    {\n        \"role\": \"system\",\n        \"content\": [\n            {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n        ],\n    },\n    {\n        \"role\": \"user\",\n        \"content\": [\n            {\"type\": \"video\", \"video\": \"/path/to/video.mp4\"},\n        ]\n    }\n]\n\n# Conversation with audio only\nconversation2 = [\n    {\n        \"role\": \"system\",\n        \"content\": [\n            {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n        ],\n    },\n    {\n        \"role\": \"user\",\n        \"content\": [\n            {\"type\": \"audio\", \"audio\": \"/path/to/audio.wav\"},\n        ]\n    }\n]\n\n# Conversation with pure text\nconversation3 = [\n    {\n        \"role\": \"system\",\n        \"content\": [\n            {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n        ],\n    },\n    {\n        \"role\": \"user\",\n        \"content\": \"who are you?\"\n    }\n]\n\n# Conversation with mixed media\nconversation4 = [\n    {\n        \"role\": \"system\",\n        \"content\": [\n            {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n        ],\n    },\n    {\n        \"role\": \"user\",\n        \"content\": [\n            {\"type\": \"image\", \"image\": \"/path/to/image.jpg\"},\n            {\"type\": \"video\", \"video\": \"/path/to/video.mp4\"},\n            {\"type\": \"audio\", \"audio\": \"/path/to/audio.wav\"},\n            {\"type\": \"text\", \"text\": \"What elements can you see and hear in these media?\"},\n        ],\n    }\n]\n\n# Combine messages for batch processing\nconversations = [conversation1, conversation2, conversation3, conversation4]\n\n# Use the audio track of video inputs\nUSE_AUDIO_IN_VIDEO = True\n\n# Preparation for batch inference\ntext = processor.apply_chat_template(conversations, add_generation_prompt=True, tokenize=False)\naudios, images, videos = process_mm_info(conversations, use_audio_in_video=USE_AUDIO_IN_VIDEO)\n\ninputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors=\"pt\", padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO)\ninputs = inputs.to(model.device).to(model.dtype)\n\n# Batch inference (text only; audio generation is disabled)\ntext_ids = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO, return_audio=False)\ntext = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)\nprint(text)\n```
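\nThe example above assumes \`model\`, \`processor\`, and \`process_mm_info\` are already in scope; a minimal setup sketch (the dtype and device mapping are illustrative choices, not requirements):\n\n```python\nimport torch\nfrom transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor\nfrom qwen_omni_utils import process_mm_info\n\n# Load the checkpoint and its processor once, then reuse them across batches.\nmodel = Qwen2_5OmniForConditionalGeneration.from_pretrained(\n    \"Qwen/Qwen2.5-Omni-3B\",\n    torch_dtype=torch.bfloat16,\n    device_map=\"auto\",\n)\nprocessor = Qwen2_5OmniProcessor.from_pretrained(\"Qwen/Qwen2.5-Omni-3B\")\n```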
\n\n",
"metadata": "\"N/A\"",
"depth": 0,
"children": [
"KE-Team/Ke-Omni-R-3B",
"giangndm/qwen2.5-omni-3b-mlx-8bit",
"giangndm/qwen2.5-omni-3b-mlx-4bit",
"unsloth/Qwen2.5-Omni-3B"
],
"children_count": 4,
"adapters": [
"FINGU-AI/qwen2.5-omni-3b-lora-sft",
"andrewt28/qwen2.5-omni-3b-keyboard-video-text"
],
"adapters_count": 2,
"quantized": [
"ggml-org/Qwen2.5-Omni-3B-GGUF",
"unsloth/Qwen2.5-Omni-3B-GGUF",
"mradermacher/Qwen2.5-Omni-3B-GGUF",
"mradermacher/Qwen2.5-Omni-3B-i1-GGUF",
"zhaoweiguo/Qwen2.5-Omni-3B-Q4_K_M-GGUF"
],
"quantized_count": 5,
"merges": [],
"merges_count": 0,
"total_derivatives": 11,
"spaces": [],
"spaces_count": 0,
"parents": [],
"base_model": "Qwen/Qwen2.5-Omni-3B",
"base_model_relation": "base"
},
{
"model_id": "KE-Team/Ke-Omni-R-3B",
"gated": "unknown",
"card": "---\nlicense: apache-2.0\ndatasets:\n- amaai-lab/MusicBench\nlanguage:\n- en\n- zh\nbase_model:\n- Qwen/Qwen2.5-Omni-3B\npipeline_tag: audio-text-to-text\n---\n\n# Ke-Omni-R: Achieving Advanced Audio Reasoning with a Concise 50-Words Think Process\nIf you wish to train or perform inference with the model, please visit the GitHub repository: [https://github.com/shuaijiang/Ke-Omni-R/](https://github.com/shuaijiang/Ke-Omni-R/).\nIf you find this model helpful, please like this model and star our GitHub.\n\nKe-Omni-R is an advanced audio reasoning model built upon [Qwen2.5-Omni-3B](https://github.com/QwenLM/Qwen2.5-Omni). With only 10k post-training samples, Ke-Omni-R has achieved state-of-the-art performance on the MMAU *Test-mini* and *Test* benchmarks. Key insights from its development include:\n\n- **GRPO Algorithm**: The GRPO algorithm significantly enhances the performance of the already strong base model (Qwen2.5-Omni-7B), demonstrating superior generalization even in unseen speech domains.\n- **Think Process**: Incorporating a concise think process (less than 50 words) plays a crucial role in improving reasoning capabilities.\n- **KL Divergence**: Slight improvements were observed during GRPO training by leveraging KL divergence.\n- **Domain Ratio vs. Data Volume**: Domain diversity outweighs data volume. We utilized only 10k samples, with 5k randomly selected from AVQA and another 5k from MusicBench.\n\n## Performance: Accuracies (%)\u2191 on MMAU Test-mini and Test benchmark\n| Model | Method | Sound (Test-mini) | Sound (Test) | Music (Test-mini) | Music (Test) | Speech (Test-mini) | Speech (Test) | Average (Test-mini) | Average (Test) |\n|---------------------------------------|-----------------------|-----------|-------|-----------|-------|-----------|------|------------|-------|\n| - | Human\\* | 86.31 | - | 78.22 | - | 82.17 | - | 82.23 | - |\n| Gemini Pro 2.0 Flash | Direct Inference\\* | 56.46 | 61.73 | 58.68 | 56.53 | 51.65 | 61.53 | 55.60 | 59.93 |\n| Audio Flamingo 2 | Direct Inference\\* | 61.56 | 65.10 | **73.95** |**72.90**| 30.93 | 40.26 | 55.48 | 59.42 |\n| GPT4o + Strong Cap. | Direct Inference\\* | 57.35 | 55.83 | 49.70 | 51.73 | 64.86 | **68.66** | 57.30 | 58.74 |\n| Llama-3-8B-Instruct + Strong Cap. 
| Direct Inference\\* | 50.75 | 49.10 | 48.93 | 48.93 | 55.25 | 62.70 | 52.10 | 53.57 |\n| Qwen2-Audio-7B-Instruct | Direct Inference\\* | 54.95 | 45.90 | 50.98 | 53.26 | 42.04 | 45.90 | 49.20 | 52.50 |\n| SALAMONN | Direct Inference\\* | 41.00 | 40.30 | 34.80 | 33.76 | 25.50 | 24.24 | 33.70 | 32.77 |\n| Audio-Reasoner(Qwen2-Audio-7B-Instruct) | \\[1\\] | 60.06 | - | 64.30 | - | 60.70 | - | 61.71 | - |\n| Audio-Cot(Qwen2-Audio-7B-Instruct) | \\[2\\] | 61.86 | - | 56.29 | - | 55.26 | - | 57.80 | - |\n| R1-AQA(Qwen2-Audio-7B-Instruct) | \\[3\\] | 68.77 | 69.76 | 64.37 | 61.40 | 63.66 | 62.70 | 65.60 | 64.36 |\n| Qwen2.5-Omni-3B | \\[4\\] | 70.27 | - | 60.48 | - | 59.16 | - | 63.30 | - |\n| Qwen2.5-Omni-7B | \\[4\\] | 67.87 | - | 69.16 | - | 59.76 | - | 65.60 | - |\n| Ke-Omni-R-3B(Qwen2.5-Omni-3B) | GRPO w/ think (ours) | **72.37** | 71.87 | 65.57 | 59.60 |64.26 | 64.17 | 67.40 |65.17 |\n| Ke-Omni-R(Qwen2.5-Omni-7B) | GRPO(ours) | 69.37 | **71.90** | 69.46 | 67.13 |**67.87** | 67.10 | **68.90** |**68.71** |\n\n## Performance: CER/WER (%)\u2193 on ASR benchmark\n| Model | Method | WenetSpeech test-net | WenetSpeech test-meeting | LibriSpeech test-clean | LibriSpeech test-other|\n| ---|----| ----| ----| ---- | ----|\n| Qwen2.5-Omni-3B | \\[4\\] | 6.3 | 8.1 | 2.2 | 4.5 |\n| Qwen2.5-Omni-7B | \\[4\\] | 5.9 | 7.7 | 1.8 | 3.4 |\n| Ke-Omni-3B | ours | 11.7 | 16.1 | 1.8 | 3.8 |\n| Ke-Omni-7B | ours | 7.5 | 9.8 | **1.6** | **3.1** |\n\nNote:\n\n- \\* The data are sourced from the [MMAU leaderboard](https://sakshi113.github.io/mmau_homepage/#leaderboard).\n \n- \\[1\\] Xie, Zhifei, et al. \"Audio-Reasoner: Improving Reasoning Capability in Large Audio Language Models.\" arXiv preprint arXiv:2503.02318. \n\n- \\[2\\] Ma, Ziyang, et al. \"Audio-CoT: Exploring Chain-of-Thought Reasoning in Large Audio Language Model.\" arXiv preprint arXiv:2501.07246.\n\n- \\[3\\] Li, Gang, et al. \"Reinforcement Learning Outperforms Supervised Fine-Tuning: A Case Study on Audio Question Answering.\" arXiv preprint arXiv:2503.11197\n\n- \\[4\\] Xu, Jin, et al. \"Qwen2.5-Omni Technical Report.\" arXiv preprint arXiv:2503.20215\n\n\n## Usage\n\n```python\n\nfrom transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor\nfrom qwen_omni_utils import process_mm_info\n\n\n# You can directly insert a local file path, a URL, or a base64-encoded audio into the position where you want in the text.\nmessages = [\n # Audio\n ## Local audio path\n [{\"role\": \"system\", \"content\":[{\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}]},\n {\"role\": \"user\", \"content\": [{\"type\": \"audio\", \"audio\": \"/path_to_avqa_wavs/-IBtBeR6B00_000000.wav\"}, {\"type\": \"text\", \"text\": \"Please describe this audio.\"}]}],\n [{\"role\": \"user\", \"content\": [{\"type\": \"audio\", \"audio\": \"/path_to_avqa_wavs/-IBtBeR6B00_000000.wav\"}, {\"type\": \"text\", \"text\": \"What is the main source of sound in the audio? ['aircraft', 'Car', 'Tank', 'Missile'] Output the thinking process (less than 50 words) in
",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"Qwen/Qwen2.5-Omni-3B"
],
"base_model": null,
"base_model_relation": null
},
{
"model_id": "FINGU-AI/qwen2.5-omni-3b-lora-sft",
"gated": "False",
"card": "---\nlibrary_name: peft\nlicense: other\nbase_model: Qwen/Qwen2.5-Omni-3B\ntags:\n- llama-factory\n- lora\n- generated_from_trainer\nmodel-index:\n- name: sft\n results: []\n---\n\n\n\n# sft\n\nThis model is a fine-tuned version of [Qwen/Qwen2.5-Omni-3B](https://huggingface.co/Qwen/Qwen2.5-Omni-3B) on the fingu dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 3.0\n- mixed_precision_training: Native AMP\n\n### Training results\n\n\n\n### Framework versions\n\n- PEFT 0.15.1\n- Transformers 4.52.0.dev0\n- Pytorch 2.2.0a0+81ea7a4\n- Datasets 2.17.1\n- Tokenizers 0.21.1",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"Qwen/Qwen2.5-Omni-3B"
],
"base_model": "FINGU-AI/qwen2.5-omni-3b-lora-sft",
"base_model_relation": "base"
},
{
"model_id": "andrewt28/qwen2.5-omni-3b-keyboard-video-text",
"gated": "False",
"card": "---\nlibrary_name: peft\nlicense: afl-3.0\ndatasets:\n- andrewt28/keystroke-typing-videos\nlanguage:\n- en\nbase_model:\n- Qwen/Qwen2.5-Omni-3B\npipeline_tag: video-text-to-text\n---\n\n# Model Card for Qwen2.5-Omni-3B-Keyboard-Video-Text\n\nFine-tuned on video and audio of typing to predict the typed text.",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"Qwen/Qwen2.5-Omni-3B"
],
"base_model": "andrewt28/qwen2.5-omni-3b-keyboard-video-text",
"base_model_relation": "base"
},
{
"model_id": "ggml-org/Qwen2.5-Omni-3B-GGUF",
"gated": "unknown",
"card": "---\nlicense: other\nlicense_name: qwen-research\nlicense_link: https://huggingface.co/Qwen/Qwen2.5-Omni-3B/blob/main/LICENSE\nlanguage:\n- en\ntags:\n- multimodal\npipeline_tag: any-to-any\nbase_model:\n- Qwen/Qwen2.5-Omni-3B\n---\n\n# Qwen2.5-Omni-3B-GGUF\n\nOriginal model: https://huggingface.co/Qwen/Qwen2.5-Omni-3B\n\nModalities:\n- \u2705 Text input\n- \u2705 Audio input\n- \u2705 Image input\n- \u274c Video input\n- \u274c Audio generation\n\nRef PR: https://github.com/ggml-org/llama.cpp/pull/13784\n",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"Qwen/Qwen2.5-Omni-3B"
],
"base_model": null,
"base_model_relation": null
},
{
"model_id": "unsloth/Qwen2.5-Omni-3B-GGUF",
"gated": "unknown",
"card": "---\nbase_model:\n- Qwen/Qwen2.5-Omni-3B\nlicense: other\nlicense_name: qwen-research\nlicense_link: LICENSE\nlanguage:\n- en\ntags:\n- multimodal\n- unsloth\nlibrary_name: transformers\npipeline_tag: any-to-any\n---\n
\n\nUnsloth Dynamic 2.0 achieves superior accuracy & outperforms other leading quants.
\n\n### Key Features\n\n* **Omni and Novel Architecture**: We propose the Thinker-Talker architecture, an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. We also propose a novel position embedding, named TMRoPE (Time-aligned Multimodal RoPE), to synchronize the timestamps of video inputs with audio.\n\n* **Real-Time Voice and Video Chat**: An architecture designed for fully real-time interactions, supporting chunked input and immediate output.\n\n* **Natural and Robust Speech Generation**: Surpasses many existing streaming and non-streaming alternatives, demonstrating superior robustness and naturalness in speech generation.\n\n* **Strong Performance Across Modalities**: Exhibits exceptional performance across all modalities when benchmarked against similarly sized single-modality models. Qwen2.5-Omni outperforms the similarly sized Qwen2-Audio in audio capabilities and achieves performance comparable to Qwen2.5-VL-7B.\n\n* **Excellent End-to-End Speech Instruction Following**: Qwen2.5-Omni shows performance in end-to-end speech instruction following that rivals its effectiveness with text inputs, as evidenced by benchmarks such as MMLU and GSM8K.\n\n### Model Architecture
\n\n### Performance\n\nWe conducted a comprehensive evaluation of Qwen2.5-Omni, which demonstrates strong performance across all modalities when compared to similarly sized single-modality models (e.g., Qwen2.5-VL-7B, Qwen2-Audio) and to closed-source models such as Gemini-1.5-Pro. In tasks requiring the integration of multiple modalities, such as OmniBench, Qwen2.5-Omni achieves state-of-the-art performance. In single-modality tasks, it also excels in areas including speech recognition (Common Voice), translation (CoVoST2), audio understanding (MMAU), image reasoning (MMMU, MMStar), video understanding (MVBench), and speech generation (Seed-tts-eval and subjective naturalness).
\n\n#### Multimodality -> Text\n\n**OmniBench**\n\n| Model | Speech | Sound Event | Music | Avg |\n|---|---|---|---|---|\n| Gemini-1.5-Pro | 42.67% | 42.26% | 46.23% | 42.91% |\n| MIO-Instruct | 36.96% | 33.58% | 11.32% | 33.80% |\n| AnyGPT (7B) | 17.77% | 20.75% | 13.21% | 18.04% |\n| video-SALMONN | 34.11% | 31.70% | 56.60% | 35.64% |\n| UnifiedIO2-xlarge | 39.56% | 36.98% | 29.25% | 38.00% |\n| UnifiedIO2-xxlarge | 34.24% | 36.98% | 24.53% | 33.98% |\n| MiniCPM-o | - | - | - | 40.50% |\n| Baichuan-Omni-1.5 | - | - | - | 42.90% |\n| Qwen2.5-Omni-3B | 52.14% | 52.08% | 52.83% | 52.19% |\n| Qwen2.5-Omni-7B | 55.25% | 60.00% | 52.83% | 56.13% |
\n\n#### Audio -> Text\n\n**ASR: Librispeech** (WER)\n\n| Model | dev-clean | dev-other | test-clean | test-other |\n|---|---|---|---|---|\n| SALMONN | - | - | 2.1 | 4.9 |\n| SpeechVerse | - | - | 2.1 | 4.4 |\n| Whisper-large-v3 | - | - | 1.8 | 3.6 |\n| Llama-3-8B | - | - | - | 3.4 |\n| Llama-3-70B | - | - | - | 3.1 |\n| Seed-ASR-Multilingual | - | - | 1.6 | 2.8 |\n| MiniCPM-o | - | - | 1.7 | - |\n| MinMo | - | - | 1.7 | 3.9 |\n| Qwen-Audio | 1.8 | 4.0 | 2.0 | 4.2 |\n| Qwen2-Audio | 1.3 | 3.4 | 1.6 | 3.6 |\n| Qwen2.5-Omni-3B | 2.0 | 4.1 | 2.2 | 4.5 |\n| Qwen2.5-Omni-7B | 1.6 | 3.5 | 1.8 | 3.4 |\n\n**ASR: Common Voice 15**\n\n| Model | en | zh | yue | fr |\n|---|---|---|---|---|\n| Whisper-large-v3 | 9.3 | 12.8 | 10.9 | 10.8 |\n| MinMo | 7.9 | 6.3 | 6.4 | 8.5 |\n| Qwen2-Audio | 8.6 | 6.9 | 5.9 | 9.6 |\n| Qwen2.5-Omni-3B | 9.1 | 6.0 | 11.6 | 9.6 |\n| Qwen2.5-Omni-7B | 7.6 | 5.2 | 7.3 | 7.5 |\n\n**ASR: Fleurs**\n\n| Model | zh | en |\n|---|---|---|\n| Whisper-large-v3 | 7.7 | 4.1 |\n| Seed-ASR-Multilingual | - | 3.4 |\n| Megrez-3B-Omni | 10.8 | - |\n| MiniCPM-o | 4.4 | - |\n| MinMo | 3.0 | 3.8 |\n| Qwen2-Audio | 7.5 | - |\n| Qwen2.5-Omni-3B | 3.2 | 5.4 |\n| Qwen2.5-Omni-7B | 3.0 | 4.1 |\n\n**ASR: Wenetspeech**\n\n| Model | test-net | test-meeting |\n|---|---|---|\n| Seed-ASR-Chinese | 4.7 | 5.7 |\n| Megrez-3B-Omni | - | 16.4 |\n| MiniCPM-o | 6.9 | - |\n| MinMo | 6.8 | 7.4 |\n| Qwen2.5-Omni-3B | 6.3 | 8.1 |\n| Qwen2.5-Omni-7B | 5.9 | 7.7 |\n\n**ASR: Voxpopuli-V1.0-en**\n\n| Model | WER |\n|---|---|\n| Llama-3-8B | 6.2 |\n| Llama-3-70B | 5.7 |\n| Qwen2.5-Omni-3B | 6.6 |\n| Qwen2.5-Omni-7B | 5.8 |\n\n**S2TT: CoVoST2**\n\n| Model | en-de | de-en | en-zh | zh-en |\n|---|---|---|---|---|\n| SALMONN | 18.6 | - | 33.1 | - |\n| SpeechLLaMA | - | 27.1 | - | 12.3 |\n| BLSP | 14.1 | - | - | - |\n| MiniCPM-o | - | - | 48.2 | 27.2 |\n| MinMo | - | 39.9 | 46.7 | 26.0 |\n| Qwen-Audio | 25.1 | 33.9 | 41.5 | 15.7 |\n| Qwen2-Audio | 29.9 | 35.2 | 45.2 | 24.4 |\n| Qwen2.5-Omni-3B | 28.3 | 38.1 | 41.4 | 26.6 |\n| Qwen2.5-Omni-7B | 30.2 | 37.7 | 41.4 | 29.4 |\n\n**SER: Meld**\n\n| Model | Score |\n|---|---|\n| WavLM-large | 0.542 |\n| MiniCPM-o | 0.524 |\n| Qwen-Audio | 0.557 |\n| Qwen2-Audio | 0.553 |\n| Qwen2.5-Omni-3B | 0.558 |\n| Qwen2.5-Omni-7B | 0.570 |\n\n**VSC: VocalSound**\n\n| Model | Score |\n|---|---|\n| CLAP | 0.495 |\n| Pengi | 0.604 |\n| Qwen-Audio | 0.929 |\n| Qwen2-Audio | 0.939 |\n| Qwen2.5-Omni-3B | 0.936 |\n| Qwen2.5-Omni-7B | 0.939 |\n\n**Music: GiantSteps Tempo**\n\n| Model | Score |\n|---|---|\n| Llark-7B | 0.86 |\n| Qwen2.5-Omni-3B | 0.88 |\n| Qwen2.5-Omni-7B | 0.88 |\n\n**Music: MusicCaps**\n\n| Model | Performance |\n|---|---|\n| LP-MusicCaps | 0.291 / 0.149 / 0.089 / 0.061 / 0.129 / 0.130 |\n| Qwen2.5-Omni-3B | 0.325 / 0.163 / 0.093 / 0.057 / 0.132 / 0.229 |\n| Qwen2.5-Omni-7B | 0.328 / 0.162 / 0.090 / 0.055 / 0.127 / 0.225 |\n\n**Audio Reasoning: MMAU**\n\n| Model | Sound | Music | Speech | Avg |\n|---|---|---|---|---|\n| Gemini-Pro-V1.5 | 56.75 | 49.40 | 58.55 | 54.90 |\n| Qwen2-Audio | 54.95 | 50.98 | 42.04 | 49.20 |\n| Qwen2.5-Omni-3B | 70.27 | 60.48 | 59.16 | 63.30 |\n| Qwen2.5-Omni-7B | 67.87 | 69.16 | 59.76 | 65.60 |\n\n**Voice Chatting: VoiceBench**\n\n| Model | AlpacaEval | CommonEval | SD-QA | MMSU |\n|---|---|---|---|---|\n| Ultravox-v0.4.1-LLaMA-3.1-8B | 4.55 | 3.90 | 53.35 | 47.17 |\n| MERaLiON | 4.50 | 3.77 | 55.06 | 34.95 |\n| Megrez-3B-Omni | 3.50 | 2.95 | 25.95 | 27.03 |\n| Lyra-Base | 3.85 | 3.50 | 38.25 | 49.74 |\n| MiniCPM-o | 4.42 | 4.15 | 50.72 | 54.78 |\n| Baichuan-Omni-1.5 | 4.50 | 4.05 | 43.40 | 57.25 |\n| Qwen2-Audio | 3.74 | 3.43 | 35.71 | 35.72 |\n| Qwen2.5-Omni-3B | 4.32 | 4.00 | 49.37 | 50.23 |\n| Qwen2.5-Omni-7B | 4.49 | 3.93 | 55.71 | 61.32 |\n\n| Model | OpenBookQA | IFEval | AdvBench | Avg |\n|---|---|---|---|---|\n| Ultravox-v0.4.1-LLaMA-3.1-8B | 65.27 | 66.88 | 98.46 | 71.45 |\n| MERaLiON | 27.23 | 62.93 | 94.81 | 62.91 |\n| Megrez-3B-Omni | 28.35 | 25.71 | 87.69 | 46.25 |\n| Lyra-Base | 72.75 | 36.28 | 59.62 | 57.66 |\n| MiniCPM-o | 78.02 | 49.25 | 97.69 | 71.69 |\n| Baichuan-Omni-1.5 | 74.51 | 54.54 | 97.31 | 71.14 |\n| Qwen2-Audio | 49.45 | 26.33 | 96.73 | 55.35 |\n| Qwen2.5-Omni-3B | 74.73 | 42.10 | 98.85 | 68.81 |\n| Qwen2.5-Omni-7B | 81.10 | 52.87 | 99.42 | 74.12 |
\n\n#### Image -> Text\n\n| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Other Best | Qwen2.5-VL-7B | GPT-4o-mini |\n|---|---|---|---|---|---|\n| MMMU (val) | 59.2 | 53.1 | 53.9 | 58.6 | **60.0** |\n| MMMU-Pro (overall) | 36.6 | 29.7 | - | **38.3** | 37.6 |\n| MathVista (testmini) | 67.9 | 59.4 | **71.9** | 68.2 | 52.5 |\n| MathVision (full) | 25.0 | 20.8 | 23.1 | **25.1** | - |\n| MMBench-V1.1-EN (test) | 81.8 | 77.8 | 80.5 | **82.6** | 76.0 |\n| MMVet (turbo) | 66.8 | 62.1 | **67.5** | 67.1 | 66.9 |\n| MMStar | **64.0** | 55.7 | **64.0** | 63.9 | 54.8 |\n| MME (sum) | 2340 | 2117 | **2372** | 2347 | 2003 |\n| MuirBench | 59.2 | 48.0 | - | **59.2** | - |\n| CRPE (relation) | **76.5** | 73.7 | - | 76.4 | - |\n| RealWorldQA (avg) | 70.3 | 62.6 | **71.9** | 68.5 | - |\n| MME-RealWorld (en) | **61.6** | 55.6 | - | 57.4 | - |\n| MM-MT-Bench | 6.0 | 5.0 | - | **6.3** | - |\n| AI2D | 83.2 | 79.5 | **85.8** | 83.9 | - |\n| TextVQA (val) | 84.4 | 79.8 | 83.2 | **84.9** | - |\n| DocVQA (test) | 95.2 | 93.3 | 93.5 | **95.7** | - |\n| ChartQA (test avg) | 85.3 | 82.8 | 84.9 | **87.3** | - |\n| OCRBench_V2 (en) | **57.8** | 51.7 | - | 56.3 | - |\n\n| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Qwen2.5-VL-7B | Grounding DINO | Gemini 1.5 Pro |\n|---|---|---|---|---|---|\n| RefCOCO (val) | 90.5 | 88.7 | 90.0 | **90.6** | 73.2 |\n| RefCOCO (testA) | **93.5** | 91.8 | 92.5 | 93.2 | 72.9 |\n| RefCOCO (testB) | 86.6 | 84.0 | 85.4 | **88.2** | 74.6 |\n| RefCOCO+ (val) | 85.4 | 81.1 | 84.2 | **88.2** | 62.5 |\n| RefCOCO+ (testA) | **91.0** | 87.5 | 89.1 | 89.0 | 63.9 |\n| RefCOCO+ (testB) | **79.3** | 73.2 | 76.9 | 75.9 | 65.0 |\n| RefCOCOg (val) | **87.4** | 85.0 | 87.2 | 86.1 | 75.2 |\n| RefCOCOg (test) | **87.9** | 85.1 | 87.2 | 87.0 | 76.2 |\n| ODinW | 42.4 | 39.2 | 37.3 | **55.0** | 36.7 |\n| PointGrounding | 66.5 | 46.2 | **67.3** | - | - |
\n\n#### Video (without audio) -> Text\n\n| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Other Best | Qwen2.5-VL-7B | GPT-4o-mini |\n|---|---|---|---|---|---|\n| Video-MME (w/o sub) | 64.3 | 62.0 | 63.9 | **65.1** | 64.8 |\n| Video-MME (w sub) | **72.4** | 68.6 | 67.9 | 71.6 | - |\n| MVBench | **70.3** | 68.7 | 67.2 | 69.6 | - |\n| EgoSchema (test) | **68.6** | 61.4 | 63.2 | 65.0 | - |
\n\n#### Zero-shot Speech Generation\n\n**SEED, Content Consistency**\n\n| Model | test-zh | test-en | test-hard |\n|---|---|---|---|\n| Seed-TTS_ICL | 1.11 | 2.24 | 7.58 |\n| Seed-TTS_RL | 1.00 | 1.94 | 6.42 |\n| MaskGCT | 2.27 | 2.62 | 10.27 |\n| E2_TTS | 1.97 | 2.19 | - |\n| F5-TTS | 1.56 | 1.83 | 8.67 |\n| CosyVoice 2 | 1.45 | 2.57 | 6.83 |\n| CosyVoice 2-S | 1.45 | 2.38 | 8.08 |\n| Qwen2.5-Omni-3B_ICL | 1.95 | 2.87 | 9.92 |\n| Qwen2.5-Omni-3B_RL | 1.58 | 2.51 | 7.86 |\n| Qwen2.5-Omni-7B_ICL | 1.70 | 2.72 | 7.97 |\n| Qwen2.5-Omni-7B_RL | 1.42 | 2.32 | 6.54 |\n\n**SEED, Speaker Similarity**\n\n| Model | test-zh | test-en | test-hard |\n|---|---|---|---|\n| Seed-TTS_ICL | 0.796 | 0.762 | 0.776 |\n| Seed-TTS_RL | 0.801 | 0.766 | 0.782 |\n| MaskGCT | 0.774 | 0.714 | 0.748 |\n| E2_TTS | 0.730 | 0.710 | - |\n| F5-TTS | 0.741 | 0.647 | 0.713 |\n| CosyVoice 2 | 0.748 | 0.652 | 0.724 |\n| CosyVoice 2-S | 0.753 | 0.654 | 0.732 |\n| Qwen2.5-Omni-3B_ICL | 0.741 | 0.635 | 0.748 |\n| Qwen2.5-Omni-3B_RL | 0.744 | 0.635 | 0.746 |\n| Qwen2.5-Omni-7B_ICL | 0.752 | 0.632 | 0.747 |\n| Qwen2.5-Omni-7B_RL | 0.754 | 0.641 | 0.752 |
\n\n#### Text -> Text\n\n| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Qwen2.5-7B | Qwen2.5-3B | Qwen2-7B | Llama3.1-8B | Gemma2-9B |\n|---|---|---|---|---|---|---|---|\n| MMLU-Pro | 47.0 | 40.4 | **56.3** | 43.7 | 44.1 | 48.3 | 52.1 |\n| MMLU-redux | 71.0 | 60.9 | **75.4** | 64.4 | 67.3 | 67.2 | 72.8 |\n| LiveBench (0831) | 29.6 | 22.3 | **35.9** | 26.8 | 29.2 | 26.7 | 30.6 |\n| GPQA | 30.8 | 34.3 | **36.4** | 30.3 | 34.3 | 32.8 | 32.8 |\n| MATH | 71.5 | 63.6 | **75.5** | 65.9 | 52.9 | 51.9 | 44.3 |\n| GSM8K | 88.7 | 82.6 | **91.6** | 86.7 | 85.7 | 84.5 | 76.7 |\n| HumanEval | 78.7 | 70.7 | **84.8** | 74.4 | 79.9 | 72.6 | 68.9 |\n| MBPP | 73.2 | 70.4 | **79.2** | 72.7 | 67.2 | 69.6 | 74.9 |\n| MultiPL-E | 65.8 | 57.6 | **70.4** | 60.2 | 59.1 | 50.7 | 53.4 |\n| LiveCodeBench (2305-2409) | 24.6 | 16.5 | **28.7** | 19.9 | 23.9 | 8.3 | 18.9 |
\n\n#### Minimum GPU memory requirements\n\n| Model | Precision | 15 s video | 30 s video | 60 s video |\n|---|---|---|---|---|\n| Qwen-Omni-3B | FP32 | 89.10 GB | Not recommended | Not recommended |\n| Qwen-Omni-3B | BF16 | 18.38 GB | 22.43 GB | 28.22 GB |\n| Qwen-Omni-7B | FP32 | 93.56 GB | Not recommended | Not recommended |\n| Qwen-Omni-7B | BF16 | 31.11 GB | 41.85 GB | 60.19 GB |\n\nNote: the table above presents the theoretical minimum memory requirements for inference with \`transformers\`, and the \`BF16\` figures were measured with \`attn_implementation=\"flash_attention_2\"\`; in practice, actual memory usage is typically at least 1.2 times higher. For more information, see the Accelerate [model size estimator](https://huggingface.co/docs/accelerate/main/en/usage_guides/model_size_estimator).
\n\n#### Video URL resource usage\n\nVideo URL compatibility largely depends on the third-party library version, as detailed in the table below. Change the backend by setting \`FORCE_QWENVL_VIDEO_READER=torchvision\` or \`FORCE_QWENVL_VIDEO_READER=decord\` if you prefer not to use the default one.\n\n| Backend | HTTP | HTTPS |\n|-------------|------|-------|\n| torchvision >= 0.19.0 | \u2705 | \u2705 |\n| torchvision < 0.19.0 | \u274c | \u274c |\n| decord | \u2705 | \u274c |
\n\n#### Batch inference\n\nWhen \`return_audio=False\` is set, the model can batch mixed samples of various types, such as text, images, audio, and video, in a single call. Here is an example.\n\n```python\n# Sample messages for batch inference\n\n# Conversation with video only\nconversation1 = [\n    {\n        \"role\": \"system\",\n        \"content\": [\n            {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n        ],\n    },\n    {\n        \"role\": \"user\",\n        \"content\": [\n            {\"type\": \"video\", \"video\": \"/path/to/video.mp4\"},\n        ]\n    }\n]\n\n# Conversation with audio only\nconversation2 = [\n    {\n        \"role\": \"system\",\n        \"content\": [\n            {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n        ],\n    },\n    {\n        \"role\": \"user\",\n        \"content\": [\n            {\"type\": \"audio\", \"audio\": \"/path/to/audio.wav\"},\n        ]\n    }\n]\n\n# Conversation with pure text\nconversation3 = [\n    {\n        \"role\": \"system\",\n        \"content\": [\n            {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n        ],\n    },\n    {\n        \"role\": \"user\",\n        \"content\": \"who are you?\"\n    }\n]\n\n# Conversation with mixed media\nconversation4 = [\n    {\n        \"role\": \"system\",\n        \"content\": [\n            {\"type\": \"text\", \"text\": \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.\"}\n        ],\n    },\n    {\n        \"role\": \"user\",\n        \"content\": [\n            {\"type\": \"image\", \"image\": \"/path/to/image.jpg\"},\n            {\"type\": \"video\", \"video\": \"/path/to/video.mp4\"},\n            {\"type\": \"audio\", \"audio\": \"/path/to/audio.wav\"},\n            {\"type\": \"text\", \"text\": \"What elements can you see and hear in these media?\"},\n        ],\n    }\n]\n\n# Combine messages for batch processing\nconversations = [conversation1, conversation2, conversation3, conversation4]\n\n# Use the audio track of video inputs\nUSE_AUDIO_IN_VIDEO = True\n\n# Preparation for batch inference\ntext = processor.apply_chat_template(conversations, add_generation_prompt=True, tokenize=False)\naudios, images, videos = process_mm_info(conversations, use_audio_in_video=USE_AUDIO_IN_VIDEO)\n\ninputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors=\"pt\", padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO)\ninputs = inputs.to(model.device).to(model.dtype)\n\n# Batch inference (text only; audio generation is disabled)\ntext_ids = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO, return_audio=False)\ntext = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)\nprint(text)\n```
\n\n",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"Qwen/Qwen2.5-Omni-3B"
],
"base_model": null,
"base_model_relation": null
},
{
"model_id": "mradermacher/Qwen2.5-Omni-3B-GGUF",
"gated": "unknown",
"card": "---\nbase_model: Qwen/Qwen2.5-Omni-3B\nlanguage:\n- en\nlibrary_name: transformers\nlicense: other\nlicense_link: LICENSE\nlicense_name: qwen-research\nquantized_by: mradermacher\ntags:\n- multimodal\n---\n## About\n\n\n\n\n\n\nstatic quants of https://huggingface.co/Qwen/Qwen2.5-Omni-3B\n\n\nweighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF/resolve/main/Qwen2.5-Omni-3B.Q2_K.gguf) | Q2_K | 1.5 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF/resolve/main/Qwen2.5-Omni-3B.Q3_K_S.gguf) | Q3_K_S | 1.7 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF/resolve/main/Qwen2.5-Omni-3B.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF/resolve/main/Qwen2.5-Omni-3B.Q3_K_L.gguf) | Q3_K_L | 1.9 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF/resolve/main/Qwen2.5-Omni-3B.IQ4_XS.gguf) | IQ4_XS | 2.0 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF/resolve/main/Qwen2.5-Omni-3B.Q4_K_S.gguf) | Q4_K_S | 2.1 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF/resolve/main/Qwen2.5-Omni-3B.Q4_K_M.gguf) | Q4_K_M | 2.2 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF/resolve/main/Qwen2.5-Omni-3B.Q5_K_S.gguf) | Q5_K_S | 2.5 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF/resolve/main/Qwen2.5-Omni-3B.Q5_K_M.gguf) | Q5_K_M | 2.5 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF/resolve/main/Qwen2.5-Omni-3B.Q6_K.gguf) | Q6_K | 2.9 | very good quality |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF/resolve/main/Qwen2.5-Omni-3B.Q8_0.gguf) | Q8_0 | 3.7 | fast, best quality |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF/resolve/main/Qwen2.5-Omni-3B.f16.gguf) | f16 | 6.9 | 16 bpw, overkill |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n\n",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"Qwen/Qwen2.5-Omni-3B"
],
"base_model": null,
"base_model_relation": null
},
{
"model_id": "mradermacher/Qwen2.5-Omni-3B-i1-GGUF",
"gated": "unknown",
"card": "---\nbase_model: Qwen/Qwen2.5-Omni-3B\nlanguage:\n- en\nlibrary_name: transformers\nlicense: other\nlicense_link: LICENSE\nlicense_name: qwen-research\nquantized_by: mradermacher\ntags:\n- multimodal\n---\n## About\n\n\n\n\n\n\nweighted/imatrix quants of https://huggingface.co/Qwen/Qwen2.5-Omni-3B\n\n\nstatic quants are available at https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.0 | for the desperate |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.1 | mostly desperate |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.2 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.2 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ2_M.gguf) | i1-IQ2_M | 1.4 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.4 | very low quality |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.5 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.6 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.7 | IQ3_XS probably better |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ3_S.gguf) | i1-IQ3_S | 1.7 | beats Q3_K* |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ3_M.gguf) | i1-IQ3_M | 1.7 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.8 | IQ3_S probably better |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.9 | IQ3_M probably better |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.0 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.1 | prefer IQ4_XS |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q4_0.gguf) | i1-Q4_0 | 2.1 | fast, low quality |\n| 
[GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.1 | optimal size/speed/quality |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.2 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q4_1.gguf) | i1-Q4_1 | 2.3 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.5 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.5 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q6_K.gguf) | i1-Q6_K | 2.9 | practically like static Q6_K |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n\n",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"Qwen/Qwen2.5-Omni-3B"
],
"base_model": null,
"base_model_relation": null
},
{
"model_id": "zhaoweiguo/Qwen2.5-Omni-3B-Q4_K_M-GGUF",
"gated": "unknown",
"card": "---\nlicense: other\nlicense_name: qwen-research\nlicense_link: LICENSE\nlanguage:\n- en\ntags:\n- multimodal\n- llama-cpp\n- gguf-my-repo\nlibrary_name: transformers\npipeline_tag: any-to-any\nbase_model: Qwen/Qwen2.5-Omni-3B\n---\n\n# zhaoweiguo/Qwen2.5-Omni-3B-Q4_K_M-GGUF\nThis model was converted to GGUF format from [`Qwen/Qwen2.5-Omni-3B`](https://huggingface.co/Qwen/Qwen2.5-Omni-3B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Omni-3B) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux)\n\n```bash\nbrew install llama.cpp\n\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo zhaoweiguo/Qwen2.5-Omni-3B-Q4_K_M-GGUF --hf-file qwen2.5-omni-3b-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo zhaoweiguo/Qwen2.5-Omni-3B-Q4_K_M-GGUF --hf-file qwen2.5-omni-3b-q4_k_m.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo zhaoweiguo/Qwen2.5-Omni-3B-Q4_K_M-GGUF --hf-file qwen2.5-omni-3b-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\nor \n```\n./llama-server --hf-repo zhaoweiguo/Qwen2.5-Omni-3B-Q4_K_M-GGUF --hf-file qwen2.5-omni-3b-q4_k_m.gguf -c 2048\n```\n",
"metadata": "\"N/A\"",
"depth": 1,
"children": [],
"children_count": 0,
"adapters": [],
"adapters_count": 0,
"quantized": [],
"quantized_count": 0,
"merges": [],
"merges_count": 0,
"total_derivatives": 0,
"spaces": [],
"spaces_count": 0,
"parents": [
"Qwen/Qwen2.5-Omni-3B"
],
"base_model": null,
"base_model_relation": null
}
]
}