---
language: en
tags: [music, multimodal, qa, midi, vision]
license: mit
---

# MusiXQA 🎵

**MusiXQA** is a multimodal dataset for evaluating and training music sheet understanding systems. Each data sample is composed of:

- A scanned music sheet image (`.png`)
- Its corresponding MIDI file (`.mid`)
- A structured annotation (from `metadata.json`)
- Question–Answer (QA) pairs targeting musical structure, semantics, and optical music recognition (OMR)

![Example](Example.png)

---

## 📁 Dataset Structure

```
MusiXQA/
├── images.tar            # PNG files of music sheets (e.g., 0000000.png)
├── midi.tar              # MIDI files (e.g., 0000000.mid), compressed
├── train_qa_omr.json     # OMR-task QA pairs (train split)
├── train_qa_simple.json  # Simple musical info QA pairs (train split)
├── test_qa_omr.json      # OMR-task QA pairs (test split)
├── test_qa_simple.json   # Simple musical info QA pairs (test split)
└── metadata.json         # Annotations for each document (e.g., key, time, instruments)
```

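A minimal loading sketch for the layout above, using only the Python standard library. The `load_musixqa` helper name and the extraction target folders are illustrative choices, not part of the dataset itself:

```python
import json
import tarfile
from pathlib import Path


def load_musixqa(root="MusiXQA"):
    """Unpack the image/MIDI archives and return the parsed metadata.

    Assumes `root` contains the files shown in the tree above.
    """
    root = Path(root)
    # Extract images.tar and midi.tar into sibling folders.
    for name in ("images", "midi"):
        archive = root / f"{name}.tar"
        if archive.exists():
            with tarfile.open(archive) as tar:
                tar.extractall(root / name)
    # metadata.json holds one annotation record per document.
    with open(root / "metadata.json", encoding="utf-8") as f:
        return json.load(f)
```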
## 🧾 Metadata

The `metadata.json` file provides comprehensive annotations of the full music sheet content, facilitating research in symbolic music reasoning, score reconstruction, and multimodal alignment with audio or MIDI.

![Metadata](metadata.png)

## ❓ QA Data Format

Each QA file (e.g., `train_qa_omr.json`) is a list of QA entries like this:

```
{
  "doc_id": "0086400",
  "question": "Please extract the pitch and duration of all notes in the 2nd bar of the treble clef.",
  "answer": "qB4~ sB4 e.E5~ sE5 sB4 eB4 e.E5 sG#4",
  "encode_format": "beat"
}
```

- `doc_id`: corresponds to a sample in `images/`, `midi/`, and `metadata.json`
- `question`: natural language query
- `answer`: ground-truth answer
- `encode_format`: how the input is encoded (e.g., `"beat"`, `"note"`)

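QA entries can be joined back to their source files through `doc_id`. A small sketch, assuming files are named `<doc_id>.png` / `<doc_id>.mid` as described above (the `qa_with_paths` helper is illustrative, not part of the dataset):

```python
def qa_with_paths(qa_entries, root="MusiXQA"):
    """Attach the matching image and MIDI paths to each QA entry.

    Assumes <doc_id>.png / <doc_id>.mid naming under root/images and root/midi.
    """
    return [
        {
            **qa,
            "image": f"{root}/images/{qa['doc_id']}.png",
            "midi": f"{root}/midi/{qa['doc_id']}.mid",
        }
        for qa in qa_entries
    ]


# Example with the entry shown above:
sample = {
    "doc_id": "0086400",
    "question": "Please extract the pitch and duration of all notes in the 2nd bar of the treble clef.",
    "answer": "qB4~ sB4 e.E5~ sE5 sB4 eB4 e.E5 sG#4",
    "encode_format": "beat",
}
paired = qa_with_paths([sample])
print(paired[0]["image"])  # MusiXQA/images/0086400.png
```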
## 📚 Reference

If you use this dataset in your work, please cite it using the following reference:

```
@misc{chen2025musixqaadvancingvisualmusic,
  title={MusiXQA: Advancing Visual Music Understanding in Multimodal Large Language Models},
  author={Jian Chen and Wenye Ma and Penghang Liu and Wei Wang and Tengwei Song and Ming Li and Chenguang Wang and Jiayu Qin and Ruiyi Zhang and Changyou Chen},
  year={2025},
  eprint={2506.23009},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2506.23009},
}
```