---
language:
- en
license: apache-2.0
size_categories:
- 1K<n<10K
task_categories:
- visual-question-answering
tags:
- cognitive-science
- multimodal
- vision
- reasoning
- webdataset
- benchmark
- core-knowledge
- developmental-psychology
dataset_info:
- config_name: ConceptHacking
features:
- name: id
dtype: string
- name: concept
dtype: string
- name: stage
dtype: string
- name: type
dtype: string
- name: question
dtype: string
- name: images
dtype: string
- name: videos
dtype: string
- name: answer
dtype: string
- name: choices
dtype: string
- name: image_paths
sequence: image
splits:
- name: train
num_bytes: 34117309.0
num_examples: 90
download_size: 33663182
dataset_size: 34117309.0
- config_name: default
features:
- name: id
dtype: string
- name: concept
dtype: string
- name: stage
dtype: string
- name: type
dtype: string
- name: question
dtype: string
- name: images
dtype: string
- name: videos
dtype: string
- name: answer
dtype: string
- name: choices
dtype: string
- name: image_paths
sequence: image
configs:
- config_name: ConceptHacking
data_files:
- split: train
path: ConceptHacking/train-*
- config_name: complete
data_files:
- split: train
path: CoreCognition_20250622.zip
- config_name: default
data_files:
- split: train
path: data/train-*
---
# CoreCognition: A Core Knowledge Benchmark for Multi-modal Large Language Models
## Dataset Description
**CoreCognition** is a large-scale benchmark covering 12 core knowledge concepts grounded in developmental cognitive science, designed to evaluate the fundamental abilities of Multi-modal Large Language Models (MLLMs).
While MLLMs demonstrate impressive high-level perception and reasoning abilities, their robustness in the wild remains limited, and they often fall short on tasks that are intuitive and effortless for humans. We examine the hypothesis that these deficiencies stem from the absence of **core knowledge**: rudimentary cognitive abilities innate to humans.
This dataset contains **1,423 multimodal CoreCognition samples** and **90 Concept Hacking questions** with images/videos, covering fundamental concepts such as object permanence, spatial reasoning, counting, and other core abilities that emerge early in human development.
- **Website**: [https://williamium3000.github.io/core-knowledge/](https://williamium3000.github.io/core-knowledge/)
- **Paper**: [https://arxiv.org/abs/2410.10855](https://arxiv.org/abs/2410.10855)
- **GitHub**: [https://github.com/williamium3000/core-knowledge](https://github.com/williamium3000/core-knowledge)
## Formats
1. **HuggingFace Preview** - For browsing and exploration (visible in the HuggingFace viewer; contains embedded 448×448-pixel image previews but no videos)
⚠️ Warning: this format is primarily for the HuggingFace viewer; it DOES NOT contain the full data.
2. **Complete Dataset ZIP (Recommended)** - Full data with all images and videos before resizing (6.41 GB):
```
CoreCognition_20250622.zip
├── CoreCognition.csv   # Complete metadata CSV
└── media/              # All images and videos
    ├── imagename1.png
    ├── imagename2.png
    ├── videoname1.mp4
    └── ...
```
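The metadata CSV can be read straight from the archive without unpacking it. A minimal sketch using only the standard library (the archive filename and CSV name are taken from the layout above; the column names are assumed to match the metadata fields of this card):

```python
import csv
import io
import zipfile

def load_metadata(zip_path="CoreCognition_20250622.zip"):
    """Read CoreCognition.csv from the dataset ZIP as a list of row dicts.

    Columns are expected to match this card's metadata fields:
    id, concept, stage, type, question, images, videos, answer, choices.
    """
    with zipfile.ZipFile(zip_path) as zf:
        with zf.open("CoreCognition.csv") as raw:
            reader = csv.DictReader(io.TextIOWrapper(raw, encoding="utf-8"))
            return list(reader)
```

Media files referenced by the `images` and `videos` columns can then be extracted from `media/` in the same archive as needed.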
## Quick Start
1. Browse the metadata and image previews in this HuggingFace repo.
2. Download the complete dataset (6.41 GB) with:
```python
from datasets import load_dataset
# This downloads huggingface.co/datasets/williamium/CoreCognition/blob/main/CoreCognition_20250622.zip
dataset = load_dataset("williamium/CoreCognition", "complete")
# This downloads the 90 ConceptHacking questions, with the original image objects embedded
dataset = load_dataset("williamium/CoreCognition", "ConceptHacking")
```
## Dataset Fields
### Metadata Fields (visible in viewer)
- `id`: Unique sample identifier
- `concept`: Core knowledge concept (detailed below)
- `stage`: Developmental stage associated with the concept
- `type`: Question type (`"MC"` for multiple choice, `"TF"` for true/false)
- `question`: Question text with interleaved `<image-placeholder: ...>` and/or `<video-placeholder: ...>` markers
- `images`: Semicolon-separated image filenames; files are in the [ZIP data](https://huggingface.co/datasets/williamium/CoreCognition/blob/main/CoreCognition_20250622.zip)
- `videos`: Semicolon-separated video filenames; files are in the [ZIP data](https://huggingface.co/datasets/williamium/CoreCognition/blob/main/CoreCognition_20250622.zip)
- `answer`: Correct answer choice
- `choices`: Answer choices as a JSON string
- `image_paths`: Embedded image column, for the HuggingFace viewer only
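To illustrate how these fields fit together, here is a short parsing sketch. The sample values below are made up for demonstration and are not taken from the dataset:

```python
import json

# A made-up sample in the format described above; values are illustrative only.
sample = {
    "id": "permanence_001",
    "concept": "Permanence",
    "type": "MC",
    "question": "After the cup is lifted, where is the ball? <image-placeholder: imagename1.png>",
    "images": "imagename1.png;imagename2.png",
    "videos": "",
    "answer": "A",
    "choices": '{"A": "Under the cup", "B": "It vanished"}',
}

# `images` and `videos` hold semicolon-separated filename lists (may be empty).
image_files = [name for name in sample["images"].split(";") if name]

# `choices` is a JSON string mapping option labels to option text.
choices = json.loads(sample["choices"])
correct_text = choices[sample["answer"]]
```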
## Core Knowledge Concepts (12 Categories)
The benchmark covers these fundamental cognitive concepts grounded in developmental science:
- **Boundary**: The transition from one object to another
- **Continuity**: Objects persist as unified, cohesive entities across space and time
- **Permanence**: Objects do not cease to exist when they are no longer perceived
- **Spatiality**: The *a priori* understanding of the Euclidean properties of the world
- **Perceptual Constancy**: Changes in appearances don't mean changes in physical properties
- **Intuitive Physics**: Intuitions about the laws of how things interact in the physical world
- **Perspective**: To see what others see
- **Hierarchy**: Understanding of inclusion and exclusion of objects and categories
- **Conservation**: Invariances of properties despite transformations
- **Tool Use**: The capacity to manipulate specific objects to achieve goals
- **Intentionality**: To see what others want
- **Mechanical Reasoning**: Inferring actions from system states and vice versa

## Paper Citation
If you use CoreCognition in your research, please cite our paper:
```bibtex
@inproceedings{li2025core,
title={Core Knowledge Deficits in Multi-Modal Language Models},
author={Yijiang Li and Qingying Gao and Tianwei Zhao and Bingyang Wang and Haoran Sun and Haiyun Lyu and Robert D. Hawkins and Nuno Vasconcelos and Tal Golan and Dezhi Luo and Hokin Deng},
booktitle={Forty-second International Conference on Machine Learning},
year={2025},
url={https://openreview.net/forum?id=EIK6xxIoCB}
}
```
## License
Apache 2.0 License