---
task_categories:
- question-answering
- summarization
language:
- en
tags:
- numeric
- arithmetic
- math
pretty_name: NumericBench
size_categories:
- 10K<n<100K

configs:
- config_name: arithmetic_operation
  data_files: "arithmetic_operation/*.json"
- config_name: mixed_number_sting
  data_files: "mixed_number_sting/*.json"
- config_name: num_list
  data_files: "num_list/*.json"
- config_name: sequence
  data_files: "sequence/*.json"
- config_name: stock-single-turn
  data_files: "stock/single-turn/*.json"
- config_name: stock-multi-turn
  data_files: "stock/multi-turn/*.json"
- config_name: weather-single-turn
  data_files: "weather/single-turn/*.json"
- config_name: weather-multi-turn
  data_files: "weather/multi-turn/*.json"
---

# Introduction

**NumericBench** is a comprehensive benchmark for evaluating the numerical reasoning capabilities of large language models (LLMs), targeting their known weaknesses in arithmetic, number recognition, contextual retrieval, comparison, summarization, and logical reasoning. By combining diverse datasets, from synthetic number lists to real-world domains such as stock trends and weather patterns, NumericBench systematically tests LLMs in both structured and noisy contexts. Experiments on models such as GPT-4o and DeepSeek-V3 reveal significant weaknesses, underscoring the need for numerically aware modeling to improve LLMs' real-world applicability.

GitHub Repo: https://github.com/TreeAI-Lab/NumericBench

arXiv Paper: https://arxiv.org/abs/2502.11075

# How to use it?

## Loading Data

``` python
from huggingface_hub import hf_hub_download
import pandas as pd

dataset_name_list = ['arithmetic_operation/context_arithmetic_operation.json', 'arithmetic_operation/arithmetic_operation.json',
            'mixed_number_sting/mixed_number_string_500_per_sample.json',
            'num_list/num_list_500_per_sample_1000_length.json', 'num_list/num_list_500_per_sample_100_length.json',
            'sequence/sequence_500_sample_100_length.json',
            'stock/single-turn/stock_500_per_sample_150_length.json', 'stock/single-turn/stock_500_per_sample_300_length.json', 'stock/multi-turn/stock_multi_turn_100_per_sample_100_length.json',
            'weather/single-turn/weather_500_per_sample_200_length.json', 'weather/single-turn/weather_500_per_sample_400_length.json', 'weather/multi-turn/weather_multi_turn_100_per_sample_100_length.json'
            ]

REPO_ID = "TreeAILab/NumericBench"

# Download each JSON file from the Hub and keep every resulting DataFrame,
# keyed by its file path within the repo.
datasets = {}
for dataset_name in dataset_name_list:
    datasets[dataset_name] = pd.read_json(
        hf_hub_download(repo_id=REPO_ID, filename=dataset_name, repo_type="dataset")
    )
```

Alternatively, you can download the dataset from [this link](https://zenodo.org/records/14875784?token=eyJhbGciOiJIUzUxMiJ9.eyJpZCI6IjZhYjZlYzYwLWZkMTgtNGU1Ni1iM2I2LWUwNWVlMGIwZmYwZCIsImRhdGEiOnt9LCJyYW5kb20iOiIwNzQxNmU0NWY5ZDkxMzQ4ODVmYjdlZDgyOGJmNjVhYSJ9.m-_8Owb3TohJ76cGt2Mu4wrpsxMgC4E_aJG7Q07KHOTlKaxB4kipMY3eBZPaQEIkOv_iJEkRmlvZr23rLkTMBw).

## Data Format

Because the full contents are long, "..." marks omitted text in the examples below.

### single-turn

``` json
{
    "system_prompt": "...",
    "system_prompt_cot": "...",
    "description": "...",
    "data": [
        {
            "idx": 0,
            "question_index": 0,
            "question": "What is the result of A + B? Please round the answer to two decimal places. ",
            "struct_data": "{'A': 6.755, 'B': -1.225}",
            "answer": 5.53,
            "ability": "1-digit integer with 3 decimal places"
        }, ...
    ]
}
```
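As a sketch of how a single-turn record can be consumed: since `struct_data` is a Python-style dict string (single quotes), it can be parsed with `ast.literal_eval`. The `record` below is a hypothetical stand-in mirroring the sample above:

``` python
import ast

# Hypothetical single-turn record, mirroring the sample above
record = {
    "question": "What is the result of A + B? Please round the answer to two decimal places.",
    "struct_data": "{'A': 6.755, 'B': -1.225}",
    "answer": 5.53,
}

# struct_data is a Python-literal dict string, so use ast.literal_eval (not json.loads)
values = ast.literal_eval(record["struct_data"])
result = round(values["A"] + values["B"], 2)  # 6.755 + (-1.225) rounded to 2 places
```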

### multi-turn

``` json
{
    "system_prompt": "...",
    "system_prompt_cot": "...",
    "description": "...",
    "data": [
        {
            "idx": 0,
            "multi_turn_QA": [
                {
                    "turn_index": 0,
                    "question_index": 6,
                    "struct_data": "...",
                    "question": "...",
                    "answer": "F",
                    "ability": "contextual retrieval"
                }, ...
            ]
        }, ...
    ]
}
```
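Multi-turn samples nest their turns under `multi_turn_QA`; a common preprocessing step is to flatten them into one row per turn. A minimal sketch, using a hypothetical `data` list that mirrors the format above:

``` python
# Hypothetical `data` list in the multi-turn format shown above
data = [
    {
        "idx": 0,
        "multi_turn_QA": [
            {"turn_index": 0, "question_index": 6, "struct_data": "...",
             "question": "...", "answer": "F", "ability": "contextual retrieval"},
        ],
    },
]

# One row per turn, carrying the parent sample's idx alongside the turn fields
rows = [
    {"idx": sample["idx"], **turn}
    for sample in data
    for turn in sample["multi_turn_QA"]
]
```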

## Evaluation
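As a rough illustration of how answers in this format could be scored, here is a minimal exact-match sketch. `exact_match` is a hypothetical helper, not the benchmark's official metric: it compares numeric answers after rounding (matching the rounding the questions request) and other answers as stripped strings.

``` python
def exact_match(prediction, answer, ndigits=2):
    """Score one model prediction against the gold `answer` field.

    Numeric answers are compared after rounding to `ndigits` places;
    non-numeric answers are compared as whitespace-stripped strings.
    """
    if isinstance(answer, (int, float)):
        try:
            return round(float(prediction), ndigits) == round(float(answer), ndigits)
        except (TypeError, ValueError):
            return False
    return str(prediction).strip() == str(answer).strip()
```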

# Dataset statistics

<p align="center">
  <img src="./figure/data_statistics.png" width=900>
</p>

For more details, please refer to our paper.

# Citation

```
@misc{li2025exposingnumeracygapsbenchmark,
      title={Exposing Numeracy Gaps: A Benchmark to Evaluate Fundamental Numerical Abilities in Large Language Models}, 
      author={Haoyang Li and Xuejia Chen and Zhanchao XU and Darian Li and Nicole Hu and Fei Teng and Yiming Li and Luyu Qiu and Chen Jason Zhang and Qing Li and Lei Chen},
      year={2025},
      eprint={2502.11075},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.11075}, 
}
```