huanqia committed on
Commit 4ae87f1 · verified · 1 Parent(s): 25dada2

Update README.md

Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -49,9 +49,11 @@ configs:
 
 ## Dataset Description
 
-**MM-IQ** is a new benchmark designed to evaluate MLLMs' intelligence through multiple reasoning patterns demanding abstract reasoning abilities. It encompasses **three input formats, six problem configurations, and eight reasoning patterns**. With **2,710 samples**, MM-IQ is the most comprehensive and largest AVR benchmark for evaluating the intelligence of MLLMs, and **3x and 10x** larger than two very recent benchmarks MARVEL and MathVista-IQTest, respectively. By focusing on AVR problems, MM-IQ provides a targeted assessment of the cognitive capabilities and intelligence of MLLMs, contributing to a more comprehensive understanding of their strengths and limitations in the pursuit of AGI.
-<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/MMIQ_distribution.png" style="zoom:50%;" />
+IQ testing has served as a foundational methodology for evaluating human cognitive capabilities, deliberately decoupling assessment from linguistic background, language proficiency, and domain-specific knowledge in order to isolate core competencies in abstraction and reasoning. Yet artificial intelligence research currently lacks systematic benchmarks to quantify these cognitive dimensions in multimodal systems. To address this gap, we propose **MM-IQ**, a comprehensive evaluation framework comprising **2,710** meticulously curated test items spanning **8** distinct reasoning paradigms.
+
+Through systematic evaluation of leading open-source and proprietary multimodal models, our benchmark reveals striking limitations: even state-of-the-art architectures achieve only marginally better performance than random chance (27.49% vs. a 25% baseline accuracy). This substantial performance gap highlights the inadequacy of current multimodal systems in approximating fundamental human reasoning capacities and underscores the need for paradigm-shifting advances to bridge this cognitive divide.
 
+<img src="https://acechq.github.io/MMIQ-benchmark/static/imgs/MMIQ_distribution.png" style="zoom:50%;" />
 
 
 ## Paper Information