---
pretty_name: MOAT
configs:
  - config_name: default
    data_files:
      - split: test
        path: MOAT.parquet
task_categories:
  - image-text-to-text
---

# MOAT: Evaluating LMMs for Capability Integration and Instruction Grounding

Zhoutong Ye, Mingze Sun, Huan-ang Gao, Chun Yu, Yuanchun Shi

## Overview

MOAT (Multimodal model Of All Trades) is a challenging benchmark for large multimodal models (LMMs). It consists of vision-language (VL) tasks that require an LMM to integrate several VL capabilities and engage in human-like, generalist visual problem solving. Moreover, many tasks in MOAT focus on an LMM's ability to ground complex textual and visual instructions, which is crucial for applying LMMs in the wild. Building on the VL capability taxonomies proposed in previous benchmark papers, we define 10 fundamental VL capabilities in MOAT.

Please check out our GitHub repo for further information.

## Usage

Please check out our GitHub repo for detailed usage instructions.

### Run Your Own Evaluation

You can access our dataset with the following code:

```python
from datasets import load_dataset

dataset = load_dataset("waltsun/MOAT", split='test')
```

As some questions are formatted as interleaved text and image(s), we recommend referring to the ./inference/eval_API.py file in our GitHub repo for the correct way to query the LMM.
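
For illustration only, here is a minimal sketch (not the repo's reference implementation) that packs one sample's question text and images into an OpenAI-style chat message. It simply appends all images after the question text rather than interleaving them at their exact positions, and the client/endpoint setup is omitted:

```python
import base64
import io

from datasets import load_dataset

dataset = load_dataset("waltsun/MOAT", split="test")
sample = dataset[0]

def pil_to_data_url(img):
    # Encode a PIL image as a base64 PNG data URL for API-style LMM endpoints.
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    return "data:image/png;base64," + base64.b64encode(buf.getvalue()).decode()

# NOTE: simplified sketch. Images are appended after the question text instead of
# being interleaved at their referenced positions; see ./inference/eval_API.py in
# the GitHub repo for the correct handling.
content = [{"type": "text", "text": sample["question"]}]
for img in sample["images"]:
    content.append({"type": "image_url", "image_url": {"url": pil_to_data_url(img)}})

messages = [{"role": "user", "content": content}]
# `messages` can now be passed to an OpenAI-compatible chat completions endpoint.
```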

### Column Description

- `index`: The index of the question in the dataset.
- `question`: The question text.
- `choices`: A list of answer choices. Can be empty.
- `images`: A list of PIL images.
- `outside_knowledge_text`: Essential information for answering the question. Optional.
- `outside_knowledge_images`: A list of PIL images that are essential for answering the question. Can be empty.
- `answer`: The correct answer.
- `capability`: The VL capabilities required to answer the question. A list of strings.
- `human_cot`: The human-annotated chain-of-thought (CoT) reasoning process.
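
As a hedged illustration of how these fields fit together, the snippet below assembles a plain-text prompt from one sample. The choice lettering and the placement of `outside_knowledge_text` are our own assumptions for the sketch, not the format used by the official evaluation scripts:

```python
from datasets import load_dataset

dataset = load_dataset("waltsun/MOAT", split="test")

for sample in dataset:
    prompt = sample["question"]
    # Multiple-choice questions carry their options in `choices`; open-ended ones leave it empty.
    if sample["choices"]:
        options = "\n".join(f"{chr(ord('A') + i)}. {c}" for i, c in enumerate(sample["choices"]))
        prompt = f"{prompt}\n{options}"
    # Supplementary context, if any, lives in the outside_knowledge_* fields.
    if sample["outside_knowledge_text"]:
        prompt = f"{sample['outside_knowledge_text']}\n\n{prompt}"
    n_images = len(sample["images"]) + len(sample["outside_knowledge_images"])
    print(sample["index"], sample["capability"], n_images)
    break
```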

## Future Work

Going forward, we intend to further increase the diversity of the tasks in MOAT, involving more capability combinations and encompassing more domains and scenarios. Stay tuned!