MMAU-Pro: A Challenging and Comprehensive Benchmark for Holistic Evaluation of Audio General Intelligence
Abstract
MMAU-Pro is a comprehensive benchmark for evaluating audio intelligence in AI systems, assessing 49 unique skills across speech, sound, music, and their combinations; evaluations on it reveal significant limitations in current models.
Audio comprehension, including speech, non-speech sounds, and music, is essential for achieving human-level intelligence. Consequently, AI agents must demonstrate holistic audio understanding to qualify as generally intelligent. However, evaluating auditory intelligence comprehensively remains challenging. To address this gap, we introduce MMAU-Pro, the most comprehensive and rigorously curated benchmark for assessing audio intelligence in AI systems. MMAU-Pro contains 5,305 instances, each pairing one or more audio clips with human expert-generated question-answer pairs spanning speech, sound, music, and their combinations. Unlike existing benchmarks, MMAU-Pro evaluates auditory intelligence across 49 unique skills and multiple complex dimensions, including long-form audio comprehension, spatial audio reasoning, and multi-audio understanding, among others. All questions are meticulously designed to require deliberate multi-hop reasoning and span both multiple-choice and open-ended response formats. Importantly, audio data is sourced directly "from the wild" rather than from existing datasets with known distributions. We evaluate 22 leading open-source and proprietary multimodal AI models, revealing significant limitations: even state-of-the-art models such as Gemini 2.5 Flash and Audio Flamingo 3 achieve only 59.2% and 51.7% accuracy, respectively, and approach random performance in multiple categories. Our extensive analysis highlights specific shortcomings and provides novel insights, offering actionable guidance for advancing future AI systems toward audio general intelligence. The benchmark and code are available at https://sonalkum.github.io/mmau-pro.
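For readers planning to run models against a benchmark of this shape, below is a minimal sketch of a multiple-choice evaluation loop. The `Instance` schema and its field names (`audio_paths`, `choices`, `category`, etc.) are hypothetical illustrations, not the released MMAU-Pro format; open-ended items would additionally require LLM- or human-based judging, which this sketch skips.

```python
# Minimal sketch of a per-category accuracy harness for a benchmark
# like MMAU-Pro. All field names below are assumptions for illustration,
# not the official dataset schema.
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable

@dataclass
class Instance:
    audio_paths: list[str]   # one or more audio clips per instance
    question: str
    choices: list[str]       # empty for open-ended questions
    answer: str
    category: str            # e.g. "speech", "sound", "music", "multi-audio"

def evaluate(instances: list[Instance],
             predict: Callable[[Instance], str]) -> dict[str, float]:
    """Compute per-category accuracy over the multiple-choice items."""
    correct: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for inst in instances:
        if not inst.choices:   # open-ended: needs separate judging, skipped here
            continue
        total[inst.category] += 1
        if predict(inst).strip().lower() == inst.answer.strip().lower():
            correct[inst.category] += 1
    return {cat: correct[cat] / total[cat] for cat in total}

if __name__ == "__main__":
    # Tiny illustrative data plus a trivial "always pick the first choice" baseline.
    data = [
        Instance(["a.wav"], "What instrument is playing?",
                 ["piano", "violin"], "piano", "music"),
        Instance(["b.wav", "c.wav"], "Which clip is louder?",
                 ["first", "second"], "second", "multi-audio"),
    ]
    print(evaluate(data, predict=lambda inst: inst.choices[0]))
```

A real harness would replace the lambda with a call to an audio-language model that receives the audio files and question; the near-random scores reported in the abstract correspond to accuracies close to 1/len(choices) in a category.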
Community
MMAU-Pro is a comprehensive benchmark for evaluating audio intelligence in multimodal systems. The benchmark will be publicly released soon.
This is an automated message from Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- MultiVox: Benchmarking Voice Assistants for Multimodal Interactions (2025)
- AURA: A Fine-Grained Benchmark and Decomposed Metric for Audio-Visual Reasoning (2025)
- SpeechR: A Benchmark for Speech Reasoning in Large Audio-Language Models (2025)
- MECAT: A Multi-Experts Constructed Benchmark for Fine-Grained Audio Understanding Tasks (2025)
- OmniEval: A Benchmark for Evaluating Omni-modal Models with Visual, Auditory, and Textual Inputs (2025)
- MSU-Bench: Towards Understanding the Conversational Multi-talker Scenarios (2025)
- HV-MMBench: Benchmarking MLLMs for Human-Centric Video Understanding (2025)