huanqia committed
Commit f2422dd · verified · 1 Parent(s): 4ae87f1

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED

@@ -37,7 +37,7 @@ configs:
  ---
  # Dataset Card for "MM-IQ"
 
- - [Dataset Description](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#dataset-description)
+ - [Introduction](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#dataset-description)
  - [Paper Information](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#paper-information)
  - [Dataset Examples](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#dataset-examples)
  - [Leaderboard](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#leaderboard)
@@ -47,7 +47,7 @@ configs:
  - [Automatic Evaluation](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#automatic-evaluation)
  - [Citation](https://huggingface.co/datasets/huanqia/MM-IQ/blob/main/README.md#citation)
 
- ## Dataset Description
+ ## Introduction
 
  IQ testing has served as a foundational methodology for evaluating human cognitive capabilities, deliberately decoupling assessment from linguistic background, language proficiency, or domain-specific knowledge to isolate core competencies in abstraction and reasoning. Yet, artificial intelligence research currently lacks systematic benchmarks to quantify these critical cognitive dimensions in multimodal systems. To address this critical gap, we propose **MM-IQ**, a comprehensive evaluation framework comprising **2,710** meticulously curated test items spanning **8** distinct reasoning paradigms.