TrueMICL: True Multimodal In-Context Learning Dataset

A comprehensive multimodal dataset designed to evaluate and improve true multimodal in-context learning capabilities in Multimodal Large Language Models (MLLMs).

Paper | Code | Project page

Dataset Overview

TrueMICL addresses a critical limitation in current Multimodal Large Language Models: their tendency to neglect visual information in multimodal demonstrations, leading to superficial text imitation. This dataset is specifically designed to test true multimodal in-context learning by ensuring that:

  • Tasks are unsolvable without visual context
  • Novel image-text relationships are introduced
  • Visual information is perceivable and critical
  • Compatibility with language model backbones is maintained

Key Statistics

  • Total samples: 867 evaluation samples plus additional training data
  • Task categories: 4
  • Distinct tasks: 7
  • Domains: Mathematical reasoning, pattern recognition, concept learning, visual question answering

Dataset Structure

The dataset is organized into task-specific directories, each containing query and support JSON files plus any associated images and training data:

File Organization

dataset/
├── classification/         # Character classification task
│   ├── img/                # Query and support images
│   ├── query.json          # Test queries
│   └── support.json        # Support examples
├── clevr/                  # CLEVR-based reasoning tasks
│   ├── material/           # Material-based images
│   ├── query/              # Query images
│   ├── shape/              # Shape-based images
│   ├── size/               # Size-based images
│   ├── support/            # Support images
│   ├── query.json          # Main queries
│   ├── support.json        # Support examples
│   └── [query/support]_[material/shape/size].json  # Task-specific splits
├── clock/                  # Clock reading and math
│   ├── img/                # Clock face images
│   ├── query.json          # Test queries
│   └── support.json        # Support examples
├── operator_induction/     # Mathematical operator learning
│   ├── query.json          # Test queries
│   ├── support.json        # Support examples
│   └── processed_training_data.json  # Training data
├── palindrome_dataset/     # Palindrome pattern recognition
│   ├── query.json          # Test queries
│   ├── support.json        # Support examples
│   └── training_data.json  # Training data
├── shapes_count/           # Shape counting task
│   ├── query.json          # Test queries
│   ├── support.json        # Support examples
│   └── training_data.json  # Training data
├── sudoku/                 # Sudoku puzzle solving
│   ├── query.json          # Test queries
│   └── support.json        # Support examples
└── vqav2/                  # Visual Question Answering v2
    ├── query.json          # Test queries
    └── support.json        # Support examples
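
A quick way to verify the layout is to walk the task directories and count query/support samples. This is a minimal sketch that assumes the dataset has been downloaded to a local dataset/ directory matching the tree above:

import json
from pathlib import Path

root = Path("dataset")  # adjust to the local download location

# Count query and support samples for every task directory
for task_dir in sorted(p for p in root.iterdir() if p.is_dir()):
    counts = {}
    for split in ("query", "support"):
        split_file = task_dir / f"{split}.json"
        if split_file.exists():
            counts[split] = len(json.loads(split_file.read_text()))
    print(f"{task_dir.name}: {counts}")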

Data Format

Each JSON file contains structured data with the following schema:

Query/Support Format:

{
  "id": "unique_identifier",
  "image": ["path/to/image.png"],
  "question": "Question text with multiple choice options",
  "answer": "Correct answer"
}

VQA Format (slightly different):

{
  "image_id": 12345,
  "question_id": 67890,
  "question": "Question text",
  "answer": "Answer text"
}

Data Types and Columns

Field        Type     Description
-----------  -------  --------------------------------
id           string   Unique identifier for the sample
image        array    List of image file paths
question     string   Question or task description
answer       string   Ground truth answer
image_id     integer  Image identifier (VQA format)
question_id  integer  Question identifier (VQA format)
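
Because the VQA records use a slightly different layout, it can be convenient to normalize everything to the query/support schema before evaluation. The sketch below assumes VQAv2 images are resolved separately via image_id; the path construction shown is illustrative, not part of the dataset:

def normalize(record):
    """Map a VQA-format record onto the query/support schema."""
    if "image_id" in record:
        return {
            "id": str(record["question_id"]),
            # Hypothetical path scheme; resolve image_id however your
            # local copy of the VQAv2 images is organized.
            "image": [f"vqav2/{record['image_id']}.jpg"],
            "question": record["question"],
            "answer": record["answer"],
        }
    return record  # already in the query/support format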

Tasks and Domains

1. Mathematical Reasoning

  • Operator Induction: Learn novel mathematical operators from visual examples
  • Clock Math: Time reading and calculation tasks

2. Concept Binding

  • Character Classification: Classify novel character types from visual examples
  • CLEVR Count: Object counting and attribute reasoning

3. Pattern Finding

  • Sudoku: Complete Sudoku puzzles using visual pattern recognition
  • Palindrome: Identify palindromic patterns in visual sequences

4. Novel Concept Learning

  • Shapes Count: Count specific shapes and understand spatial relationships
  • VQA: General visual question answering requiring multimodal reasoning

Usage Examples

Basic Data Exploration

import json
import matplotlib.pyplot as plt
from PIL import Image

# Load and examine a sample
with open("classification/query.json", "r") as f:
    data = json.load(f)

sample = data[0]
print(f"ID: {sample['id']}")
print(f"Question: {sample['question']}")
print(f"Answer: {sample['answer']}")

# Load and display the image
img_path = sample['image'][0]
img = Image.open(img_path)
plt.imshow(img)
plt.title(sample['question'])
plt.show()

Task-Specific Loading

# Load CLEVR subtasks
clevr_tasks = ['material', 'shape', 'size']
for task in clevr_tasks:
    with open(f"clevr/query_{task}.json", "r") as f:
        task_data = json.load(f)
    print(f"CLEVR {task}: {len(task_data)} samples")
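
Constructing an In-Context Prompt

Support examples are intended to serve as multimodal demonstrations for a query. The snippet below is a minimal, model-agnostic sketch of assembling a few-shot prompt; the interleaved image/text structure is an assumption, and the actual message format depends on the MLLM being evaluated:

import json

def build_icl_prompt(support_path, query_path, n_shots=4, query_index=0):
    """Interleave n_shots support demonstrations with one query."""
    with open(support_path, "r") as f:
        demos = json.load(f)[:n_shots]
    with open(query_path, "r") as f:
        query = json.load(f)[query_index]

    segments = []
    for demo in demos:
        segments.append({"images": demo["image"],
                         "text": f"{demo['question']}\nAnswer: {demo['answer']}"})
    segments.append({"images": query["image"],
                     "text": f"{query['question']}\nAnswer:"})
    return segments, query["answer"]

segments, gold_answer = build_icl_prompt("operator_induction/support.json",
                                         "operator_induction/query.json")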

Data Collection Methodology

The dataset was constructed following rigorous criteria to ensure true multimodal learning:

  1. Visual Dependency: All tasks require visual information and cannot be solved through text-only reasoning
  2. Novel Relationships: Introduction of previously unseen image-text mappings
  3. Perceptual Validity: Visual elements are clearly perceivable and unambiguous
  4. Model Compatibility: Designed to work with standard language model architectures

Source Data

  • CLEVR: Modified from the original CLEVR dataset for visual reasoning
  • VQAv2: Subset of the Visual Question Answering v2 dataset
  • Synthetic Tasks: Custom-generated tasks for operator induction, palindromes, and shape counting
  • Novel Concepts: Artificially created character types and visual patterns

Citation

@inproceedings{wu2024fiva,
  title={True Multimodal In-Context Learning Needs Attention to the Visual Context},
  author={Tong Wu and Yinghao Xu and Ryan Po and Mengchen Zhang and Guandao Yang and Jiaqi Wang and Ziwei Liu and Dahua Lin and Gordon Wetzstein},
  booktitle={The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2024},
  url={https://openreview.net/forum?id=Vp6HAjrdIg}
}

License

This dataset is released under the MIT License. Please see the license file for detailed terms and conditions.

Contact

For questions, issues, or contributions regarding this dataset, please refer to the paper, code, and project page linked above.


Note: This dataset is designed for research purposes to advance multimodal in-context learning. The novel tasks and visual concepts are specifically crafted to test true multimodal understanding rather than superficial pattern matching.
