---
license: mit
---

# MMAR: A Challenging Benchmark for Deep Reasoning in Speech, Audio, Music, and Their Mix
[**📖 arXiv (Coming Soon)**]() | [**🎬 MMAR Demo Video**](https://www.youtube.com/watch?v=Dab13opIGqU) | [**🛠️ GitHub Code**](https://github.com/ddlBoJack/MMAR) <br>
[**🔊 MMAR Audio Download (HuggingFace)**](https://huggingface.co/datasets/BoJack/MMAR) | [**🔊 MMAR Audio Download (Google Drive)**](https://drive.google.com/file/d/1AkAbrD1GNFimMxBqAXFc5ve6fc3RgLLv/view?usp=sharing)

<p align="center"><img src="assets/logo.png" alt="MMAR Benchmark Logo" width="300"/></p>

## Overview of MMAR
We introduce MMAR, a new benchmark designed to evaluate the deep reasoning capabilities of Audio-Language Models (ALMs) across a wide range of multi-disciplinary tasks.
MMAR comprises 1,000 meticulously curated audio-question-answer triplets, collected from real-world internet videos and refined through iterative rounds of error correction and quality checks to ensure high quality.
Each item in the benchmark demands multi-step deep reasoning beyond surface-level understanding. Moreover, some questions require graduate-level perceptual and domain-specific knowledge, elevating the benchmark's difficulty and depth.
Examples include:

![Example](assets/example.png)

The metadata for MMAR is available in [this file](MMAR-meta.json). Unlike previous benchmarks, MMAR not only covers traditional modalities such as speech, audio, and music, but also extends to their mix, collected from in-the-wild videos. The distribution of data across these modalities is illustrated in the left figure below. Furthermore, each question is annotated with a designated category and sub-category, as shown in the right figure.

For each question, we also provide the URL and corresponding timestamp of the original video, as well as the spoken language (if present) in the clip. To prevent potential data leakage into the training of reasoning models, we have withheld reasoning cues and chain-of-thought annotations, which will be released at an appropriate time.

<p float="left">
  <img src="assets/modality_pie.png" width="49%" />
  <img src="assets/category_sunburst.png" width="49%" />
</p>

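A minimal sketch for inspecting the benchmark locally, assuming `MMAR-meta.json` is included in the HuggingFace snapshot and is a JSON list of per-question dictionaries (the `MMAR_data` directory name is arbitrary):

```python
import json
from huggingface_hub import snapshot_download

# Fetch the audio clips and metadata from the HuggingFace dataset repository.
# "MMAR_data" is an arbitrary local directory name.
snapshot_download(repo_id="BoJack/MMAR", repo_type="dataset", local_dir="MMAR_data")

# Load the audio-question-answer items; inspect the schema rather than
# assuming particular field names (see MMAR-meta.json for the exact keys).
with open("MMAR_data/MMAR-meta.json", "r", encoding="utf-8") as f:
    items = json.load(f)

print(f"Loaded {len(items)} items")
print(sorted(items[0].keys()))
```
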
## Benchmark Results
We benchmark models on MMAR across five categories:
1. Large Audio Language Models (LALMs)
2. Large Audio Reasoning Models (LARMs)
3. Omni Language Models (OLMs)
4. Large Language Models (LLMs) with audio captions as input
5. Large Reasoning Models (LRMs) with audio captions as input

![Benchmark results](assets/benchmark.png)

## Dataset Creation
The MMAR benchmark was constructed with a comprehensive pipeline. The process includes:
1. Brainstorming challenging questions
2. Building a taxonomy through human-LLM collaboration
3. Heuristic-based data collection and annotation
4. Crawling audio data and enriching content across multiple slots
5. Performing iterative correction and quality inspection to ensure high data fidelity

![Pipeline](assets/pipeline.png)

## Test Your Model!

To ensure smooth integration into existing evaluation pipelines, we adopt an evaluation methodology modified from [MMAU](https://github.com/Sakshi113/MMAU), implemented in [evaluation.py](code/evaluation.py). The input to the evaluation script should follow the same format as [MMAR-meta.json](MMAR-meta.json), with an additional key named `model_prediction` that stores the model's prediction for each question.

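As a minimal sketch of preparing that input (the `run_model` stub and the `predictions.json` path are placeholders for your own inference code and output location):

```python
import json

def run_model(item: dict) -> str:
    """Placeholder: replace with your model's answer for the item's audio and question."""
    return ""

# Start from the benchmark metadata and attach one prediction per question.
with open("MMAR-meta.json", "r", encoding="utf-8") as f:
    items = json.load(f)

for item in items:
    item["model_prediction"] = run_model(item)

# Pass this file to evaluation.py via --input.
with open("predictions.json", "w", encoding="utf-8") as f:
    json.dump(items, f, ensure_ascii=False, indent=2)
```
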
To run the script:
```bash
python evaluation.py --input INPUT_JSON_PATH
```

## Acknowledgement
We gratefully acknowledge that our evaluation code is modified from the official implementation of [MMAU](https://github.com/Sakshi113/MMAU).

## Citation
Coming Soon