---
license: mit
language:
  - en
task_categories:
  - image-text-to-text
tags:
  - embedding
  - multimodal
  - multilingual
pretty_name: XTD
size_categories:
  - 1K<n<10K
configs:
  - config_name: it
    data_files:
      - split: test
        path: it/it.parquet
  - config_name: es
    data_files:
      - split: test
        path: es/es.parquet
  - config_name: ru
    data_files:
      - split: test
        path: ru/ru.parquet
  - config_name: zh
    data_files:
      - split: test
        path: zh/zh.parquet
  - config_name: pl
    data_files:
      - split: test
        path: pl/pl.parquet
  - config_name: tr
    data_files:
      - split: test
        path: tr/tr.parquet
  - config_name: ko
    data_files:
      - split: test
        path: ko/ko.parquet
---

# XTD Multimodal Multilingual Data With Instruction

This dataset contains the test sets (with English instructions) used for evaluating the multilingual capability of a multimodal embedding model, covering seven languages:

- it, es, ru, zh, pl, tr, ko

## Dataset Usage

- The instruction on the query side is: "Retrieve an image of this caption."
- The instruction on the document side is: "Represent the given image."
- Each example contains a query and a set of candidate targets; the first entry in the candidate list is the ground-truth target.
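
As a quick-start sketch, each language can be loaded by its config name with the `datasets` library. The exact column layout is not spelled out on this card, so inspect a loaded example before wiring it into an evaluation pipeline:

```python
from datasets import load_dataset

# One config per language: it, es, ru, zh, pl, tr, ko (test split only).
xtd_it = load_dataset("Haon-Chen/XTD-10", "it", split="test")

example = xtd_it[0]
# Print the field names; the query/candidate column names are an open question here,
# so confirm them before relying on any particular schema.
print(example.keys())
```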

## Image Preparation

First, you should prepare the images used for evaluation:

### Image Downloads

#### XTD10 images

```bash
mkdir -p images && cd images
wget https://huggingface.co/datasets/Haon-Chen/XTD-10/resolve/main/XTD10_dataset.tar.gz
tar -I "pigz -d -p 8" -xf XTD10_dataset.tar.gz
```
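
Note that the `-I "pigz -d -p 8"` option only parallelizes decompression; if pigz is not installed, a plain `tar -xzf XTD10_dataset.tar.gz` extracts the same archive single-threaded.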

### Image Organization

```
images/
└── XTD10_dataset/
    └── ... .jpg
```

You can refer to the image paths in each subset to see how the images are organized.

You can also customize your image paths by altering the `image_path` fields.
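
As a rough sketch of how the relative paths can be resolved locally (assuming the archive was extracted into `images/` as above; the exact schema of each entry should still be checked against the parquet files, so treat the field access below as illustrative):

```python
import os
from PIL import Image

# Root directory that contains XTD10_dataset/ after extracting the tarball above.
IMAGE_ROOT = "images"

def load_image(image_path: str) -> Image.Image:
    """Open an image referenced by a relative image_path field from the dataset."""
    return Image.open(os.path.join(IMAGE_ROOT, image_path))

# Illustrative only: the image_path fields are assumed to hold paths such as
# "XTD10_dataset/<file>.jpg" relative to IMAGE_ROOT; inspect one example to
# confirm the actual values before running a full evaluation.
```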

## Citation

If you use this dataset in your research, please cite the original XTD paper and the mmE5 paper.

[mmE5: Improving Multimodal Multilingual Embeddings via High-quality Synthetic Data](https://arxiv.org/abs/2502.08468)

```bibtex
@article{chen2025mmE5,
  title={mmE5: Improving Multimodal Multilingual Embeddings via High-quality Synthetic Data},
  author={Chen, Haonan and Wang, Liang and Yang, Nan and Zhu, Yutao and Zhao, Ziliang and Wei, Furu and Dou, Zhicheng},
  journal={arXiv preprint arXiv:2502.08468},
  year={2025}
}
```

```bibtex
@article{XTD,
  author       = {Pranav Aggarwal and
                  Ajinkya Kale},
  title        = {Towards Zero-shot Cross-lingual Image Retrieval},
  journal      = {CoRR},
  volume       = {abs/2012.05107},
  year         = {2020},
  url          = {https://arxiv.org/abs/2012.05107},
  eprinttype   = {arXiv},
  eprint       = {2012.05107},
  timestamp    = {Sat, 02 Jan 2021 15:43:30 +0100},
  biburl       = {https://dblp.org/rec/journals/corr/abs-2012-05107.bib},
  bibsource    = {dblp computer science bibliography, https://dblp.org}
}
```