TrueMICL: True Multimodal In-Context Learning Dataset
A comprehensive multimodal dataset designed to evaluate and improve true multimodal in-context learning capabilities in Multimodal Large Language Models (MLLMs).
Paper | Code | Project page
Dataset Overview
TrueMICL addresses a critical limitation in current Multimodal Large Language Models: their tendency to neglect visual information in multimodal demonstrations, leading to superficial text imitation. This dataset is specifically designed to test true multimodal in-context learning by ensuring that:
- Tasks are unsolvable without visual context
- Novel image-text relationships are introduced
- Visual information is perceivable and critical
- Compatibility with language model backbones is maintained
Key Statistics
- Total samples: 867 evaluation samples, plus additional training data for several tasks
- Task categories: 4 major categories
- Distinct tasks: 7 different tasks
- Domains: Mathematical reasoning, pattern recognition, concept learning, visual question answering
Dataset Structure
The dataset is organized into task-specific directories, each containing:
File Organization
dataset/
├── classification/          # Character classification task
│   ├── img/                 # Query and support images
│   ├── query.json           # Test queries
│   └── support.json         # Support examples
├── clevr/                   # CLEVR-based reasoning tasks
│   ├── material/            # Material-based images
│   ├── query/               # Query images
│   ├── shape/               # Shape-based images
│   ├── size/                # Size-based images
│   ├── support/             # Support images
│   ├── query.json           # Main queries
│   ├── support.json         # Support examples
│   └── [query/support]_[material/shape/size].json  # Task-specific splits
├── clock/                   # Clock reading and math
│   ├── img/                 # Clock face images
│   ├── query.json           # Test queries
│   └── support.json         # Support examples
├── operator_induction/      # Mathematical operator learning
│   ├── query.json           # Test queries
│   ├── support.json         # Support examples
│   └── processed_training_data.json  # Training data
├── palindrome_dataset/      # Palindrome pattern recognition
│   ├── query.json           # Test queries
│   ├── support.json         # Support examples
│   └── training_data.json   # Training data
├── shapes_count/            # Shape counting task
│   ├── query.json           # Test queries
│   ├── support.json         # Support examples
│   └── training_data.json   # Training data
├── sudoku/                  # Sudoku puzzle solving
│   ├── query.json           # Test queries
│   └── support.json         # Support examples
└── vqav2/                   # Visual Question Answering v2
    ├── query.json           # Test queries
    └── support.json         # Support examples
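A quick way to sanity-check a local copy is to walk this layout and count the samples in each split. The sketch below assumes the directory and file names shown above; adjust the root path to wherever the dataset was downloaded.

import json
from pathlib import Path

# Hypothetical local root; point this at your copy of the dataset.
ROOT = Path("dataset")

# Task directories as laid out above.
TASKS = ["classification", "clevr", "clock", "operator_induction",
         "palindrome_dataset", "shapes_count", "sudoku", "vqav2"]

for task in TASKS:
    counts = {}
    for split in ("query", "support"):
        split_file = ROOT / task / f"{split}.json"
        if split_file.exists():
            with open(split_file, "r") as f:
                counts[split] = len(json.load(f))
    print(f"{task}: " + ", ".join(f"{k}={v}" for k, v in counts.items()))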
Data Format
Each JSON file contains structured data with the following schema:
Query/Support Format:
{
  "id": "unique_identifier",
  "image": ["path/to/image.png"],
  "question": "Question text with multiple choice options",
  "answer": "Correct answer"
}
VQA Format (slightly different):
{
  "image_id": 12345,
  "question_id": 67890,
  "question": "Question text",
  "answer": "Answer text"
}
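Because the VQA split replaces the id and image fields with numeric image_id and question_id values, a small normalization helper can keep downstream code uniform. The sketch below is illustrative only; in particular, the VQAv2 image path pattern is a placeholder assumption, since the exact filename convention is not documented here.

def normalize_record(record, task):
    """Map a raw JSON record onto a common schema.

    Returns a dict with 'id', 'images', 'question', and 'answer'.
    The VQA image path below is a placeholder assumption.
    """
    if task == "vqav2":
        return {
            "id": str(record["question_id"]),
            # Placeholder: resolve the actual image path for your local copy.
            "images": [f"vqav2/images/{record['image_id']}.jpg"],
            "question": record["question"],
            "answer": record["answer"],
        }
    return {
        "id": record["id"],
        "images": list(record["image"]),
        "question": record["question"],
        "answer": record["answer"],
    }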
Data Types and Columns
Field | Type | Description |
---|---|---|
id | string | Unique identifier for the sample |
image | array | List of image file paths |
question | string | Question or task description |
answer | string | Ground truth answer |
image_id | integer | Image identifier (VQA format) |
question_id | integer | Question identifier (VQA format) |
Tasks and Domains
1. Mathematical Reasoning
- Operator Induction: Learn novel mathematical operators from visual examples
- Clock Math: Time reading and calculation tasks
2. Concept Binding
- Character Classification: Classify novel character types from visual examples
- CLEVR Count: Object counting and attribute reasoning
3. Pattern Finding
- Sudoku: Complete Sudoku puzzles using visual pattern recognition
- Palindrome: Identify palindromic patterns in visual sequences
4. Novel Concept Learning
- Shapes Count: Count specific shapes and understand spatial relationships
- VQA: General visual question answering requiring multimodal reasoning
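Every query in these tasks carries a single ground-truth answer string, so a normalized exact-match score is a straightforward first-pass metric. The sketch below assumes the standard id/answer schema and is not the paper's official evaluation protocol.

import json
import string

def normalize(text):
    """Lowercase, strip punctuation, and collapse whitespace."""
    text = str(text).lower().strip()
    text = text.translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())

def exact_match_accuracy(predictions, query_file):
    """predictions: dict mapping sample id -> predicted answer string."""
    with open(query_file, "r") as f:
        queries = json.load(f)
    correct = sum(
        normalize(predictions.get(str(q["id"]), "")) == normalize(q["answer"])
        for q in queries
    )
    return correct / len(queries)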
Usage Examples
Basic Data Exploration
import json
import matplotlib.pyplot as plt
from PIL import Image

# Load and examine a sample
with open("classification/query.json", "r") as f:
    data = json.load(f)

sample = data[0]
print(f"ID: {sample['id']}")
print(f"Question: {sample['question']}")
print(f"Answer: {sample['answer']}")

# Load and display the image
img_path = sample['image'][0]
img = Image.open(img_path)
plt.imshow(img)
plt.title(sample['question'])
plt.show()
Task-Specific Loading
# Load CLEVR subtasks
clevr_tasks = ['material', 'shape', 'size']
for task in clevr_tasks:
    with open(f"clevr/query_{task}.json", "r") as f:
        task_data = json.load(f)
    print(f"CLEVR {task}: {len(task_data)} samples")
Data Collection Methodology
The dataset was constructed following rigorous criteria to ensure true multimodal learning:
- Visual Dependency: All tasks require visual information and cannot be solved through text-only reasoning
- Novel Relationships: Introduction of previously unseen image-text mappings
- Perceptual Validity: Visual elements are clearly perceivable and unambiguous
- Model Compatibility: Designed to work with standard language model architectures
Source Data
- CLEVR: Modified from the original CLEVR dataset for visual reasoning
- VQAv2: Subset of the Visual Question Answering v2 dataset
- Synthetic Tasks: Custom-generated tasks for operator induction, palindromes, and shape counting
- Novel Concepts: Artificially created character types and visual patterns
Citation
@inproceedings{wu2024fiva,
  title={True Multimodal In-Context Learning Needs Attention to the Visual Context},
  author={Tong Wu and Yinghao Xu and Ryan Po and Mengchen Zhang and Guandao Yang and Jiaqi Wang and Ziwei Liu and Dahua Lin and Gordon Wetzstein},
  booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2024},
  url={https://openreview.net/forum?id=Vp6HAjrdIg}
}
License
This dataset is released under the MIT License. Please see the license file for detailed terms and conditions.
Contact
For questions, issues, or contributions regarding this dataset:
- Project Website: https://chenxshuo.github.io/true-micl-colm/
- Paper: https://huggingface.co/papers/2507.15807
- Code: https://github.com/chenxshuo/true-micl-colm
- Issues: Please report bugs or request features via the issue tracker of the GitHub repository above
Note: This dataset is designed for research purposes to advance multimodal in-context learning. The novel tasks and visual concepts are specifically crafted to test true multimodal understanding rather than superficial pattern matching.