---
license: cc-by-4.0
task_categories:
- question-answering
language:
- en
tags:
- finance
- table-text
- discrete_reasoning
- numerical_reasoning
size_categories:
- 10K<n<100K
---

# TAT-QA

[**Project Page**](https://nextplusplus.github.io/TAT-QA/); [**Paper (ACL 2021)**](https://aclanthology.org/2021.acl-long.254/); [**Source Code**](https://github.com/NExTplusplus/TAT-QA); [**Leaderboard**](https://nextplusplus.github.io/TAT-QA/#leaderboard)

TAT-QA (Tabular And Textual dataset for Question Answering) is a large-scale QA dataset that aims to stimulate progress in QA research over more complex and realistic tabular and textual data, especially data that requires numerical reasoning.
22
+
23
+ The unique features of TAT-QA include:
24
+
25
+ - The context given is hybrid, comprising a semi-structured table and at least two relevant paragraphs that describe, analyze or complement the table;
26
+ - The questions are generated by the humans with rich financial knowledge, most are practical;
27
+ - The answer forms are diverse, including single span, multiple spans and free-form;
28
+ - To answer the questions, various numerical reasoning capabilities are usually required, including addition (+), subtraction (-), multiplication (x), division (/), counting, comparison, sorting, and their compositions;
29
+ - In addition to the ground-truth answers, the corresponding derivations and scale are also provided if any.
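
To make the `scale` field concrete, here is a minimal sketch (not part of the dataset's official tooling) that resolves a numeric answer and its scale to an absolute magnitude; the multiplier table is an assumption covering the five documented scale values.

```python
# Hypothetical helper: resolve an (answer, scale) pair to an absolute value.
# The multiplier table is an assumption based on the documented scales:
# None, thousand, million, billion and percent.
SCALE_MULTIPLIER = {"": 1, "thousand": 1e3, "million": 1e6, "billion": 1e9}

def absolute_value(answer, scale):
    """Return the answer at absolute magnitude.

    `percent` answers are percentages rather than magnitudes,
    so they are returned unchanged.
    """
    if scale == "percent":
        return answer
    return answer * SCALE_MULTIPLIER.get(scale or "", 1)
```

For example, `absolute_value(-12.6, "million")` resolves to roughly -12.6 million.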

In total, TAT-QA contains 16,552 questions associated with 2,757 hybrid contexts from real-world financial reports.

For more details, please refer to the project page: https://nextplusplus.github.io/TAT-QA/

## Data Format

```python
{
    "table": {                                      # The tabular data in a hybrid context
        "uid": "3ffd9053-a45d-491c-957a-1b2fa0af0570",  # The unique id of the table
        "table": [                                  # The table content, a 2D array
            [
                "",
                "2019",
                "2018",
                "2017"
            ],
            [
                "Fixed Price",
                "$ 1,452.4",
                "$ 1,146.2",
                "$ 1,036.9"
            ],
            ...
        ]
    },
    "paragraphs": [                                 # The textual data: at least two paragraphs associated with the table
        {
            "uid": "f4ac7069-10a2-47e9-995c-3903293b3d47",  # The unique id of a paragraph
            "order": 1,                             # The order of the paragraph among all associated paragraphs, starting from 1
            "text": "Sales by Contract Type: Substantially all of our contracts are fixed-price type contracts. Sales included in Other contract types represent cost plus and time and material type contracts."  # The content of the paragraph
        },
        ...
    ],
    "questions": [                                  # The questions associated with the hybrid context
        {
            "uid": "eb787966-fa02-401f-bfaf-ccabf3828b23",  # The unique id of a question
            "order": 2,                             # The order of the question among all questions, starting from 1
            "question": "What is the change in Other in 2019 from 2018?",  # The question itself
            "answer": -12.6,                        # The ground-truth answer
            "derivation": "44.1 - 56.7",            # The derivation that can be executed to arrive at the ground-truth answer
            "answer_type": "arithmetic",            # The answer type: `span`, `spans`, `arithmetic` or `counting`
            "answer_from": "table-text",            # The source of the answer: `table`, `text` or `table-text`
            "rel_paragraphs": [                     # The orders of the paragraphs relied on to infer the answer, if any
                "2"
            ],
            "req_comparison": false,                # Whether comparison/sorting is needed to answer a question whose answer is a single span or multiple spans
            "scale": "million"                      # The scale of the answer: `None`, `thousand`, `million`, `billion` or `percent`
        }
    ]
}
```
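
Because `derivation` strings are plain arithmetic over numbers from the table and text, an `arithmetic` answer can be verified by executing its derivation. A minimal sketch using the sample question from the schema above (`eval` is used only for illustration on trusted data; a production pipeline should parse the expression instead):

```python
# Sample question mirroring the schema above; values are taken from the example.
sample_question = {
    "question": "What is the change in Other in 2019 from 2018?",
    "answer": -12.6,
    "derivation": "44.1 - 56.7",
    "answer_type": "arithmetic",
    "scale": "million",
}

def check_arithmetic(q, tol=1e-6):
    """Evaluate the derivation string and compare it to the gold answer."""
    if q["answer_type"] != "arithmetic" or not q["derivation"]:
        return None
    # eval is acceptable here only because derivations are simple, trusted
    # arithmetic expressions; parse the expression in real pipelines.
    return abs(eval(q["derivation"]) - q["answer"]) <= tol

check_arithmetic(sample_question)  # True: 44.1 - 56.7 matches -12.6
```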

## Citation

```bibtex
@inproceedings{zhu2021tat,
    title = "{TAT}-{QA}: A Question Answering Benchmark on a Hybrid of Tabular and Textual Content in Finance",
    author = "Zhu, Fengbin and
      Lei, Wenqiang and
      Huang, Youcheng and
      Wang, Chao and
      Zhang, Shuo and
      Lv, Jiancheng and
      Feng, Fuli and
      Chua, Tat-Seng",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.254",
    doi = "10.18653/v1/2021.acl-long.254",
    pages = "3277--3287"
}
```