---
language:
  - en
license: mit
size_categories:
  - 1M<n<10M
task_categories:
  - visual-question-answering
  - image-text-to-text
pretty_name: ABC-Pretraining-Data
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
dataset_info:
  features:
    - name: caption
      dtype: string
    - name: url
      dtype: string
    - name: id
      dtype: int64
    - name: image
      dtype: string
    - name: negatives
      sequence: int64
  splits:
    - name: train
      num_bytes: 2289772991
      num_examples: 2252041
  download_size: 1855548818
  dataset_size: 2289772991
tags:
  - visual
  - multimodal
  - vision-language-model
  - retrieval
---

# ABC Pretraining Data

This dataset contains the pretraining data for ABC, an open-source multimodal embedding model. ABC uses a vision-language model backbone to deeply integrate image features with natural language instructions, advancing the state of visual embeddings with natural-language control.

This dataset is derived from Google's Conceptual Captions dataset. Each item contains a caption, a URL from which the corresponding image can be downloaded, and the IDs of its mined negatives. The full dataset is ~300 GB of images. For a detailed description of how we mined the negatives, please see our [paper](https://arxiv.org/abs/2503.00329). **Update:** the images have been added to this repository. For an example of how to use and download this dataset, see our [repository](https://github.com/TIGER-AI-Lab/ABC).
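
As a quick illustration of the schema above, here is a minimal sketch that loads the metadata with the 🤗 `datasets` library and inspects one row. The repository ID `TIGER-Lab/ABC-Pretraining-Data` is an assumption based on this card's name; adjust it if the actual dataset ID differs.

```python
# Minimal sketch: inspect the caption/url/id/image/negatives schema.
# NOTE: the repo ID below is assumed from the card's name and may differ.
from datasets import load_dataset

ds = load_dataset("TIGER-Lab/ABC-Pretraining-Data", split="train")

row = ds[0]
print(row["caption"])        # caption text from Conceptual Captions
print(row["url"])            # URL where the corresponding image can be downloaded
print(row["negatives"][:5])  # IDs of mined negatives for this item
```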

## Paper, Project Page, and Code

- **Paper:** [ABC: Achieving Better Control of Multimodal Embeddings using VLMs](https://arxiv.org/abs/2503.00329)
- **Code:** [TIGER-AI-Lab/ABC](https://github.com/TIGER-AI-Lab/ABC)

## Sample Usage

### Quick Start

First, install the necessary dependencies by cloning the repository and installing requirements:

```bash
git clone https://github.com/TIGER-AI-Lab/ABC
cd ABC
pip install -r requirements.txt
```

Then, you can start making multimodal embeddings:

```bash
python -i ./quick_start.py
```

### Fetching Datasets from 🤗 Hub

Our datasets are hosted on the Hugging Face Hub. The text data and dataset metadata can be fetched with HF's `load_dataset` utility. To fetch the images, we provide scripts in the `fetch_datasets` directory; these pull the pretraining/finetuning image data off the Hub and unpack it into your Hugging Face datasets cache (under a directory called `tigerlab`). Run `python ./fetch_datasets/pretrain.py` to get the pretraining dataset or `python ./fetch_datasets/instruct.py` to get the finetuning dataset, as shown below.
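
For convenience, here are the two fetch commands from the paragraph above (assumed to be run from the root of the cloned repository):

```bash
# Pull the pretraining images off the Hub and unpack them into the HF datasets cache
python ./fetch_datasets/pretrain.py

# Pull the finetuning images off the Hub and unpack them into the HF datasets cache
python ./fetch_datasets/instruct.py
```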

## Citation

If you find any of our work helpful, please consider citing:

```bibtex
@misc{schneider2025abcachievingbettercontrol,
      title={ABC: Achieving Better Control of Multimodal Embeddings using VLMs},
      author={Benjamin Schneider and Florian Kerschbaum and Wenhu Chen},
      year={2025},
      eprint={2503.00329},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.00329},
}
```