---
license: cc-by-4.0
task_categories:
- question-answering
language:
- en
tags:
- finance
- table-text
- visual-document-QA
- numerical-reasoning
size_categories:
- 10K<n<100K
---
# TAT-DQA
- [**Project Page**](https://nextplusplus.github.io/TAT-DQA/)
- [**Paper - MM 22**](https://dl.acm.org/doi/abs/10.1145/3503161.3548422)
- [**Paper - Arxiv**](https://arxiv.org/abs/2207.11871)
- [**Github**](https://github.com/NExTplusplus/TAT-DQA)
- [**Leaderboard**](https://nextplusplus.github.io/TAT-DQA/#leaderboard)
**TAT-DQA** is a large-scale Document VQA dataset constructed by extending TAT-QA. It aims to stimulate progress in QA research over more complex and realistic **visually-rich documents** with rich tabular and textual content, especially those requiring numerical reasoning.
The unique features of TAT-DQA include:
- The documents in TAT-DQA are sampled from real-world, high-quality financial reports, and each document contains both tabular and textual data.
- Each document in TAT-DQA contains around 550 words on average, significantly more than in all existing Document VQA datasets.
- Around 85% of the documents in the dataset have a single page, while 15% have multiple pages.
- Similar to TAT-QA, the answer forms are diverse, including single span, multiple spans, and free-form answers, and various numerical reasoning capabilities are usually required, including addition (+), subtraction (-), multiplication (×), division (/), counting, comparison, sorting, and their compositions.
In total, TAT-DQA contains 16,558 questions associated with 2,758 documents (3,067 document pages) sampled from real-world financial reports.
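
As a quick way to explore the data, here is a minimal sketch that loads the training annotations with Python's standard `json` module. The file name (`tatdqa_dataset_train.json`) and the field names (`questions`, `question`, `answer`) are assumptions based on the TAT-QA conventions used in the companion GitHub repository; print the keys to confirm the actual schema of your download.

```python
import json

# Load the training annotations. The file name is an assumption based on
# the companion GitHub repository; adjust it to match the actual download.
with open("tatdqa_dataset_train.json", encoding="utf-8") as f:
    data = json.load(f)

print(f"Loaded {len(data)} documents")

# Peek at the first document and its first question. The field names
# ("questions", "question", "answer") follow the TAT-QA convention and are
# assumptions here; printing the keys reveals the real schema.
first = data[0]
print("Top-level keys:", sorted(first.keys()))
first_question = first.get("questions", [{}])[0]
print("Question keys:", sorted(first_question.keys()))
print("Question:", first_question.get("question"))
print("Answer:", first_question.get("answer"))
```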
## Citation
```bibtex
@inproceedings{zhu2022towards,
  title={Towards complex document understanding by discrete reasoning},
  author={Zhu, Fengbin and Lei, Wenqiang and Feng, Fuli and Wang, Chao and Zhang, Haozhou and Chua, Tat-Seng},
  booktitle={Proceedings of the 30th ACM International Conference on Multimedia},
  pages={4857--4866},
  year={2022}
}
```