AHELM: A Holistic Evaluation of Audio-Language Models
Abstract
AHELM is a holistic benchmark for audio-language models that evaluates 10 aspects, including audio perception, reasoning, fairness, and safety, across multiple datasets and models.
Evaluations of audio-language models (ALMs) -- multimodal models that take interleaved audio and text as input and output text -- are hindered by the lack of standardized benchmarks; most benchmarks measure only one or two capabilities and omit evaluative aspects such as fairness or safety. Furthermore, comparison across models is difficult as separate evaluations test a limited number of models and use different prompting methods and inference parameters. To address these shortfalls, we introduce AHELM, a benchmark that aggregates various datasets -- including 2 new synthetic audio-text datasets called PARADE, which evaluates the ALMs on avoiding stereotypes, and CoRe-Bench, which measures reasoning over conversational audio through inferential multi-turn question answering -- to holistically measure the performance of ALMs across 10 aspects we have identified as important to the development and usage of ALMs: audio perception, knowledge, reasoning, emotion detection, bias, fairness, multilinguality, robustness, toxicity, and safety. We also standardize the prompts, inference parameters, and evaluation metrics to ensure equitable comparisons across models. We test 14 open-weight and closed-API ALMs from 3 developers and 3 additional simple baseline systems each consisting of an automatic speech recognizer and a language model. Our results show that while Gemini 2.5 Pro ranks top in 5 out of 10 aspects, it exhibits group unfairness (p=0.01) on ASR tasks whereas most of the other models do not. We also find that the baseline systems perform reasonably well on AHELM, with one ranking 5th overall despite having only speech-to-text capabilities. For transparency, all raw prompts, model generations, and outputs are available on our website at https://crfm.stanford.edu/helm/audio/v1.0.0. AHELM is intended to be a living benchmark and new datasets and models will be added over time.
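The simple baselines mentioned in the abstract pair an automatic speech recognizer with a text-only language model. Below is a minimal sketch of that idea, assuming OpenAI's open-source `whisper` package for transcription; the Whisper checkpoint size, the prompt wording, and the caller-supplied `answer_with_lm` function are illustrative placeholders, not the paper's actual configuration.

```python
# Minimal sketch of the ASR + LM baseline idea: transcribe the audio clip,
# then hand the transcript plus the text question to a text-only language model.
# Assumptions: OpenAI's open-source `whisper` package is installed, and
# `answer_with_lm` is any caller-supplied function mapping a text prompt to a response.
from typing import Callable

import whisper


def asr_lm_baseline(
    audio_path: str,
    question: str,
    answer_with_lm: Callable[[str], str],
) -> str:
    # Step 1: speech-to-text with a small Whisper checkpoint (size is illustrative).
    asr_model = whisper.load_model("base")
    transcript = asr_model.transcribe(audio_path)["text"]

    # Step 2: splice the transcript into a text prompt for the language model.
    prompt = (
        "The following is a transcript of an audio clip:\n"
        f"{transcript}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return answer_with_lm(prompt)
```

Because such a pipeline only sees the transcript, it discards prosody, speaker identity, and other acoustic cues, which makes the abstract's finding that a speech-to-text-only baseline still ranks 5th overall all the more notable.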
Community
AHELM provides a standardized, holistic evaluation benchmark for audio-language models, introducing PARADE and CoRe-Bench to measure 10 aspects and enable fair, comparable assessments.
Useful Links:
Leaderboard: https://crfm.stanford.edu/helm/audio/latest/
Codebase: https://github.com/stanford-crfm/helm
[New] PARADE Data: https://huggingface.co/datasets/UCSC-VLAA/PARADE_audio
[New] CoRe-Bench Data: https://huggingface.co/datasets/stanford-crfm/CoReBench_v1
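Both new datasets are hosted on the Hugging Face Hub, so they can be inspected with the `datasets` library. The repository IDs below are taken from the links above; the split and column names are not documented here, so this sketch simply prints whatever schema the Hub returns.

```python
# Sketch of loading the two new AHELM datasets from the Hugging Face Hub.
# The repository IDs come from the links above; split and column names are
# not assumed -- inspect the printed schema to see the actual layout.
from datasets import load_dataset

parade = load_dataset("UCSC-VLAA/PARADE_audio")
core_bench = load_dataset("stanford-crfm/CoReBench_v1")

# Print the available splits, row counts, and columns for each dataset.
for name, ds in [("PARADE", parade), ("CoRe-Bench", core_bench)]:
    print(name)
    for split, data in ds.items():
        print(f"  split={split!r}  rows={len(data)}  columns={data.column_names}")
```

For running full evaluations rather than inspecting data, the HELM codebase linked above provides the `helm-run` command-line entry point; see its documentation for the audio scenario names.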
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- Audio Flamingo 3: Advancing Audio Intelligence with Fully Open Large Audio Language Models (2025)
- Multi-TW: Benchmarking Multimodal Models on Traditional Chinese Question Answering in Taiwan (2025)
- SpeechR: A Benchmark for Speech Reasoning in Large Audio-Language Models (2025)
- MSU-Bench: Towards Understanding the Conversational Multi-talker Scenarios (2025)
- LLaSO: A Foundational Framework for Reproducible Research in Large Language and Speech Model (2025)
- Step-Audio 2 Technical Report (2025)
- CodecBench: A Comprehensive Benchmark for Acoustic and Semantic Evaluation (2025)