Update README.md

README.md (CHANGED)

@@ -1,6 +1,14 @@
# MMSci_Table

Dataset for the paper "[Does Table Source Matter? Benchmarking and Improving Multimodal Scientific Table Understanding and Reasoning](s)"

+ <p align="center">
+ [image]
+ </p>
+
+ <p align="center">
+ 📑 <a href="https://arxiv.org/pdf/">Paper</a>
+ </p>
+
# MMSci Dataset Collection
|
@@ -13,11 +21,7 @@ The MMSci dataset collection consists of three complementary datasets designed f
- **MMSci-Eval**: A benchmark with 3,114 testing samples for numerical reasoning evaluation

## Framework Overview
- [image]
+ [image]
*Figure 1: Overview of the MMSci framework showing the four key stages: Table Image Generation, Dataset Construction, Table Structure Learning, and Visual Instruction Tuning.*

## Dataset Details
- Complex layouts and relationships from scientific papers
- Focus on tables with significant numerical values

+ [image]
*Figure 2: Example from the MMSci-Pre dataset showing the table image and its corresponding HTML representation.*

### MMSci-Ins
- Built upon scientific domain tables

+ [image]
*Figure 3: Example from the MMSci-Ins dataset showing instruction-following samples across different tasks.*

### MMSci-Eval
#### Table Question Answering (TQA)
+ [image]
*Figure 4: Example of a TQA task showing the question, reasoning steps, and answer.*

#### Table Fact Verification (TFV)
+ [image]
*Figure 5: Example of a TFV task showing the statement, verification process, and conclusion.*

## Citation