Audio-Omni: Extending Multi-modal Understanding to Versatile Audio Generation and Editing
Abstract
Audio-Omni presents the first end-to-end framework unifying audio generation and editing across the sound, music, and speech domains, using a frozen multimodal language model and a trainable diffusion transformer.
Recent progress in multimodal models has spurred rapid advances in audio understanding, generation, and editing. However, these capabilities are typically addressed by specialized models, leaving a truly unified framework that seamlessly integrates all three tasks underexplored. While some pioneering works have explored unifying audio understanding and generation, they often remain confined to specific domains. To address this, we introduce Audio-Omni, the first end-to-end framework to unify generation and editing across the general sound, music, and speech domains, with integrated multimodal understanding capabilities. Our architecture couples a frozen Multimodal Large Language Model for high-level reasoning with a trainable Diffusion Transformer for high-fidelity synthesis. To overcome the critical data scarcity in audio editing, we construct AudioEdit, a new large-scale dataset comprising over one million meticulously curated editing pairs. Extensive experiments demonstrate that Audio-Omni achieves state-of-the-art performance across a suite of benchmarks, outperforming prior unified approaches while matching or surpassing specialized expert models. Beyond its core capabilities, Audio-Omni exhibits notable inherited abilities, including knowledge-augmented reasoning generation, in-context generation, and zero-shot cross-lingual control for audio generation, highlighting a promising direction toward universal generative audio intelligence. The code, model, and dataset will be publicly released at https://zeyuet.github.io/Audio-Omni.
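The split described in the abstract (a frozen MLLM that supplies high-level conditioning, and a trainable DiT that denoises audio latents against it) can be sketched in a few lines of PyTorch. The paper does not disclose its implementation, so everything below is an illustrative assumption: the class names, dimensions, and the simplified noising step are stand-ins, not the actual Audio-Omni code.

```python
import torch
import torch.nn as nn

class FrozenMLLMEncoder(nn.Module):
    """Stand-in for a frozen multimodal LLM: maps instruction tokens to
    conditioning embeddings. In the real system this would be a large
    pretrained MLLM with its weights frozen."""
    def __init__(self, vocab_size=32000, dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        for p in self.parameters():
            p.requires_grad = False  # frozen: supplies reasoning features only

    @torch.no_grad()
    def forward(self, tokens):
        return self.backbone(self.embed(tokens))

class AudioDiT(nn.Module):
    """Trainable diffusion transformer that denoises audio latents while
    cross-attending to the frozen MLLM's conditioning stream."""
    def __init__(self, latent_dim=64, dim=512):
        super().__init__()
        self.in_proj = nn.Linear(latent_dim, dim)
        self.time_embed = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.blocks = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=4,
        )
        self.out_proj = nn.Linear(dim, latent_dim)

    def forward(self, noisy_latents, t, cond):
        h = self.in_proj(noisy_latents) + self.time_embed(t[:, None, None].float())
        h = self.blocks(h, memory=cond)  # cross-attention into MLLM features
        return self.out_proj(h)          # predicted noise

# One hypothetical training step: only the DiT receives gradients.
mllm, dit = FrozenMLLMEncoder(), AudioDiT()
tokens = torch.randint(0, 32000, (2, 16))   # instruction, e.g. an edit prompt
latents = torch.randn(2, 100, 64)           # clean audio latents (e.g. VAE output)
t = torch.randint(0, 1000, (2,))            # diffusion timesteps
noise = torch.randn_like(latents)
noisy = latents + noise                     # simplified forward process (no schedule)
pred = dit(noisy, t, mllm(tokens))
loss = torch.nn.functional.mse_loss(pred, noise)
loss.backward()                             # gradients flow only into the DiT
```

Keeping the language model frozen preserves its reasoning and instruction-following behavior intact, which is consistent with the inherited capabilities (knowledge-augmented reasoning, in-context generation, cross-lingual control) that the abstract reports.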
Community
Audio-Omni is the first unified framework for multimodal audio understanding, generation, and editing across general sound, music, and speech.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- UniTalking: A Unified Audio-Video Framework for Talking Portrait Generation (2026)
- SkyReels-V4: Multi-modal Video-Audio Generation, Inpainting and Editing model (2026)
- Kiwi-Edit: Versatile Video Editing via Instruction and Reference Guidance (2026)
- AC-Foley: Reference-Audio-Guided Video-to-Audio Synthesis with Acoustic Transfer (2026)
- JavisDiT++: Unified Modeling and Optimization for Joint Audio-Video Generation (2026)
- Identity as Presence: Towards Appearance and Voice Personalized Joint Audio-Video Generation (2026)
- AudioChat: Unified Audio Storytelling, Editing, and Understanding with Transfusion Forcing (2026)