Dataset: TheFinAI/polyfiqa-expert

Modalities: Text
Formats: parquet
Size: < 1K
ArXiv: 2506.14028
Libraries: Datasets, pandas
License: apache-2.0
nielsr (HF Staff) committed 06e9202 (verified) · 1 parent: 18f57f7

Add link to paper, add table-question-answering task category and code to reproduce results

This PR ensures the dataset is linked to the paper (and can be found via the paper), using https://huggingface.co/papers/2506.14028

This PR also ensures the appropriate task category is selected.

The PR also adds the code used to reproduce the results.
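
For quick context, the card above lists the data as a single parquet file with a `test` split of fewer than 1K rows, loadable with the `datasets` and `pandas` libraries. The following is a minimal loading sketch, not part of the PR: the repository ID and split name are taken from the card, and only the `task_id` column is documented above, so inspect the remaining columns after loading.

```python
# Minimal sketch: load the PolyFiQA-Expert test split with the `datasets` library.
# Repository ID and split name come from the dataset card; column names other than
# `task_id` are not listed above, so check `ds.column_names` after loading.
from datasets import load_dataset

ds = load_dataset("TheFinAI/polyfiqa-expert", split="test")
print(ds)                # the card lists the size as < 1K rows
print(ds.column_names)   # includes at least `task_id`

# Optional: convert to pandas for inspection (pandas is listed as a supported library).
df = ds.to_pandas()
print(df.head())
```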

Files changed (1)
  1. README.md +92 -14
README.md CHANGED
@@ -1,4 +1,17 @@
 ---
+language:
+- en
+- zh
+- jp
+- es
+- el
+license: apache-2.0
+size_categories:
+- n<1K
+task_categories:
+- question-answering
+- table-question-answering
+pretty_name: PolyFiQA-Expert
 dataset_info:
   features:
   - name: task_id
@@ -20,21 +33,9 @@ configs:
   data_files:
   - split: test
     path: data/test-*
-license: apache-2.0
-language:
-- en
-- zh
-- jp
-- es
-- el
 tags:
 - finance
 - multilingual
-pretty_name: PolyFiQA-Expert
-size_categories:
-- n<1K
-task_categories:
-- question-answering
 ---
 
 # Dataset Card for PolyFiQA-Expert
@@ -67,7 +68,7 @@ task_categories:
 
 - **Homepage:** https://huggingface.co/collections/TheFinAI/multifinben-6826f6fc4bc13d8af4fab223
 - **Repository:** https://huggingface.co/datasets/TheFinAI/polyfiqa-expert
-- **Paper:** MultiFinBen: A Multilingual, Multimodal, and Difficulty-Aware Benchmark for Financial LLM Evaluation
+- **Paper:** [MultiFinBen: A Multilingual, Multimodal, and Difficulty-Aware Benchmark for Financial LLM Evaluation](https://huggingface.co/papers/2506.14028)
 - **Leaderboard:** https://huggingface.co/spaces/TheFinAI/Open-FinLLM-Leaderboard
 
 ### Dataset Summary
@@ -78,6 +79,7 @@ task_categories:
 
 - **Tasks:**
   - question-answering
+  - table-question-answering
 - **Evaluation Metrics:**
   - ROUGE-1
 
@@ -185,4 +187,80 @@ If you use this dataset, please cite:
   primaryClass={cs.CL},
   url={https://arxiv.org/abs/2506.14028},
 }
-```
+```
+
+### Code to reproduce results
+
+1. Navigate to the evaluation folder:
+```bash
+cd FinBen/finlm_eval/
+```
+
+2. Create and activate a new conda environment:
+```bash
+conda create -n finben python=3.12
+conda activate finben
+```
+
+3. Install the required dependencies:
+```bash
+pip install -e .
+pip install -e .[vllm]
+```
+
+4. Log into Hugging Face
+
+Set your Hugging Face token as an environment variable:
+```bash
+export HF_TOKEN="your_hf_token"
+```
+
+5. Model Evaluation
+
+6. Navigate to the FinBen directory:
+```bash
+cd FinBen/
+```
+
+7. Set the VLLM worker multiprocessing method:
+```bash
+export VLLM_WORKER_MULTIPROC_METHOD="spawn"
+```
+
+8. Run evaluation:
+
+Important Notes on Evaluation
+- 0-shot setting: Use `num_fewshot=0` and `lm-eval-results-gr-0shot` as the results repository.
+- 5-shot setting: Use `num_fewshot=5` and `lm-eval-results-gr-5shot` as the results repository.
+- Base models: Remove `apply_chat_template`.
+- Instruction models: Use `apply_chat_template`.
+
+For gr Tasks
+Execute the following command:
+```bash
+lm_eval --model vllm \
+    --model_args "pretrained=meta-llama/Llama-3.2-1B-Instruct,tensor_parallel_size=4,gpu_memory_utilization=0.8,max_model_len=1024" \
+    --tasks gr \
+    --num_fewshot 5 \
+    --batch_size auto \
+    --output_path results \
+    --hf_hub_log_args "hub_results_org=TheFinAI,details_repo_name=lm-eval-results-gr-5shot,push_results_to_hub=True,push_samples_to_hub=True,public_repo=False" \
+    --log_samples \
+    --apply_chat_template \
+    --include_path ./tasks
+```
+
+For gr_long Task
+Execute the following command:
+```bash
+lm_eval --model vllm \
+    --model_args "pretrained=Qwen/Qwen2.5-72B-Instruct,tensor_parallel_size=4,gpu_memory_utilization=0.8,max_length=8192" \
+    --tasks gr_long \
+    --num_fewshot 5 \
+    --batch_size auto \
+    --output_path results \
+    --hf_hub_log_args "hub_results_org=TheFinAI,details_repo_name=lm-eval-results-gr-5shot,push_results_to_hub=True,push_samples_to_hub=True,public_repo=False" \
+    --log_samples \
+    --apply_chat_template \
+    --include_path ./tasks
+```
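
The card lists ROUGE-1 as the evaluation metric. As a rough illustration only, and not necessarily the scoring code used by the FinBen/`lm_eval` tasks above, ROUGE-1 can be computed with the common `rouge-score` package; the reference and prediction strings below are hypothetical.

```python
# Illustrative ROUGE-1 computation with the `rouge-score` package.
# This is a sketch; the benchmark harness may implement scoring differently.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)

reference = "Net income rose 12% year over year."                # hypothetical gold answer
prediction = "Net income increased 12% compared to last year."   # hypothetical model output

scores = scorer.score(reference, prediction)  # score(target, prediction)
print(scores["rouge1"].fmeasure)
```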