yangbh217 committed

Commit 58bbd53 · verified · parent: bec866c

Update README.md

Files changed (1):
1. README.md (+13, -9)
README.md CHANGED
@@ -1,6 +1,14 @@
 # MMSci_Table
 
+
 Dataset for the paper "[Does Table Source Matter? Benchmarking and Improving Multimodal Scientific Table Understanding and Reasoning](s)"
+<p align="center">
+
+<p>
+
+<p align="center">
+📑 <a href="https://arxiv.org/pdf/">Paper</a> &nbsp&nbsp </a>
+</p>
 
 # MMSci Dataset Collection
 
@@ -13,11 +21,7 @@ The MMSci dataset collection consists of three complementary datasets designed f
 - **MMSci-Eval**: A benchmark with 3,114 testing samples for numerical reasoning evaluation
 
 ## Framework Overview
-![Framework Overview](model.pdf)
-<div class="flex justify-center">
-<img src="./model.pdf" alt="Framework Overview">
-</div>
-
+![Framework Overview](./assets/model.jpg)
 *Figure 1: Overview of the MMSci framework showing the four key stages: Table Image Generation, Dataset Construction, Table Structure Learning, and Visual Instruction Tuning.*
 
 ## Dataset Details
@@ -33,7 +37,7 @@ The MMSci dataset collection consists of three complementary datasets designed f
 - Complex layouts and relationships from scientific papers
 - Focus on tables with significant numerical values
 
-![MMSci-Pre Example](html1.pdf)
+![MMSci-Pre Example](./assets/html1.jpg)
 *Figure 2: Example from MMSci-Pre dataset showing the table image and its corresponding HTML representation.*
 
 ### MMSci-Ins
@@ -50,7 +54,7 @@ The MMSci dataset collection consists of three complementary datasets designed f
 - Built upon scientific domain tables
 
 
-![MMSci-Ins Example](input1.pdf)
+![MMSci-Ins Example](./assets/input1.jpg)
 *Figure 3: Example from MMSci-Ins dataset showing instruction-following samples across different tasks.*
 
 ### MMSci-Eval
@@ -80,11 +84,11 @@ The datasets were created through a rigorous process:
 
 
 #### Table Question Answering (TQA)
-![TQA Example](tqa1.pdf)
+![TQA Example](./assets/tqa1.jpg)
 *Figure 4: Example of a TQA task showing the question, reasoning steps, and answer.*
 
 #### Table Fact Verification (TFV)
-![TFV Example](tfv1.pdf)
+![TFV Example](./assets/tfv1.jpg)
 *Figure 5: Example of a TFV task showing the statement, verification process, and conclusion.*
 
 ## Citation
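
Note that the `<p align="center">` badge block added in the first hunk is itself malformed HTML: `&nbsp&nbsp` is missing its semicolons, the trailing `</a>` has no matching opening tag, and the first `<p>` is never closed. A minimal sketch of a well-formed version of the same block follows; the incomplete arXiv URL is left exactly as committed, since the final link is not part of this diff:

```html
<!-- Sketch of a well-formed version of the committed badge block.
     The arXiv URL is incomplete in the commit and is left as-is here. -->
<p align="center">
  📑 <a href="https://arxiv.org/pdf/">Paper</a>&nbsp;&nbsp;
</p>
```

Markdown renderers that permit inline HTML, as the Hub's does, honor `align="center"` on `<p>`, so this keeps the centered paper badge without the stray tags.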