---
license: cc-by-sa-4.0
task_categories:
- zero-shot-classification
language:
- en
pretty_name: simco-comco
size_categories:
- 10K<n<100K
---
# ComCo & SimCo Datasets
GitHub Project Page | arXiv Paper
## Overview
This repository contains two datasets, ComCo and SimCo, designed for evaluating multi-object representation in Vision-Language Models (VLMs). These datasets provide controlled environments for analyzing model biases, object recognition, and compositionality in multi-object scenarios.
- ComCo: Composed of real-world objects derived from the COCO dataset.
- SimCo: Contains simple geometric shapes in structured multi-object settings.
## ComCo Dataset
The ComCo (Complex COCO Objects) dataset consists of images featuring 2 to 5 objects from the COCO dataset. Each zip file contains different arrangements of objects with variations in:
- Size (e.g., large vs. small objects)
- Position (top-left, middle, bottom-right, etc.)
ComCo is specifically designed to test VLMs on real-world objects, allowing precise control over object placement and ensuring a systematic evaluation of compositional understanding.
## SimCo Dataset
The SimCo (Simple Compositional Objects) dataset consists of synthetic images featuring geometric shapes such as:
- Cubes
- Spheres
- Cylinders
- Triangles
- Pentagons
SimCo is used to isolate model biases by removing real-world semantics, enabling controlled evaluation of how VLMs process object interactions based purely on size, shape, and position.
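To make this setup concrete, the following is a minimal sketch of the kind of controlled scene SimCo contains, with shapes placed at explicit sizes and positions. It is purely illustrative and is not the dataset's actual generation pipeline:

```python
# Illustrative only: a SimCo-like scene with explicit control over shape,
# size, and position. NOT the dataset's actual generation code.
import matplotlib.pyplot as plt
import matplotlib.patches as patches

fig, ax = plt.subplots(figsize=(4, 4))
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.axis("off")

# A large sphere (circle) at the top-left, a small triangle at the bottom-right.
ax.add_patch(patches.Circle((0.3, 0.7), radius=0.2, color="tab:blue"))
ax.add_patch(patches.RegularPolygon((0.75, 0.25), numVertices=3, radius=0.08, color="tab:red"))

plt.savefig("simco_like_scene.png", bbox_inches="tight")
```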
## Usage
These datasets are useful for:
- Analyzing VLM biases (e.g., preference for larger objects)
- Compositionality testing (how models handle multiple objects in images)
- Zero-shot & fine-tuning tasks (evaluating robustness of vision-language embeddings); see the zero-shot probe sketch below
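As a concrete illustration of the zero-shot setting, the sketch below scores an image against a set of candidate labels with CLIP via Hugging Face `transformers`. The model checkpoint and prompt template are illustrative choices, not prescribed by the dataset:

```python
# Minimal zero-shot probe sketch (checkpoint and prompts are illustrative).
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def zero_shot_scores(image, candidate_labels):
    """Return a probability per candidate label for a single PIL image."""
    prompts = [f"a photo of a {label}" for label in candidate_labels]
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape: (1, num_labels)
    return logits.softmax(dim=-1).squeeze(0).tolist()
```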
### Loading with the Hugging Face `datasets` Library
You can load each dataset directly:
```python
from datasets import load_dataset

# Load ComCo dataset
comco = load_dataset("clip-oscope/simco-comco", data_dir="ComCo")

# Load SimCo dataset
simco = load_dataset("clip-oscope/simco-comco", data_dir="SimCo")
```
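Once loaded, each split behaves like a standard `datasets` image dataset. A quick inspection might look like the following; this assumes imagefolder-style loading with an `image` column, so the exact split and column names may differ:

```python
# Peek at one ComCo example (assumes an "image" column holding a PIL image).
sample = comco["train"][0]
print(sample.keys())    # available fields for this example
sample["image"].show()  # open the image in the default viewer
```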
## Citation
If you use this dataset in your research, please cite:
```bibtex
@inproceedings{abbasi2025clip,
  title={CLIP Under the Microscope: A Fine-Grained Analysis of Multi-Object Representation},
  author={Abbasi, Reza and Nazari, Ali and Sefid, Aminreza and Banayeeanzade, Mohammadali and Rohban, Mohammad Hossein and Soleymani Baghshah, Mahdieh},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2025}
}
```