andylizf committed
Commit 9a2b7d3 · verified · 1 Parent(s): 596a7dd

Add files using upload-large-folder tool

This view is limited to 50 files because the commit contains too many changes. See raw diff.
Files changed (50)
  1. .gitmodules +10 -0
  2. latency_breakdown_diskann_20250219_033737.json +0 -0
  3. rag-evaluation-harness/.ruff_cache/0.9.3/11003261178682080991 +0 -0
  4. rag-evaluation-harness/.ruff_cache/0.9.3/1195792812016165223 +0 -0
  5. rag-evaluation-harness/.ruff_cache/0.9.3/12014129994672639323 +0 -0
  6. rag-evaluation-harness/.ruff_cache/0.9.3/12288333375405380992 +0 -0
  7. rag-evaluation-harness/.ruff_cache/0.9.3/12495519353814268830 +0 -0
  8. rag-evaluation-harness/.ruff_cache/0.9.3/13179638047834592586 +0 -0
  9. rag-evaluation-harness/.ruff_cache/0.9.3/13594465565451147476 +0 -0
  10. rag-evaluation-harness/.ruff_cache/0.9.3/13996579915424681555 +0 -0
  11. rag-evaluation-harness/.ruff_cache/0.9.3/16061065282831403620 +0 -0
  12. rag-evaluation-harness/.ruff_cache/0.9.3/16593127969924090722 +0 -0
  13. rag-evaluation-harness/.ruff_cache/0.9.3/16992967298380143688 +0 -0
  14. rag-evaluation-harness/.ruff_cache/0.9.3/18243468368813636277 +0 -0
  15. rag-evaluation-harness/.ruff_cache/0.9.3/2719175708618349960 +0 -0
  16. rag-evaluation-harness/.ruff_cache/0.9.3/474605506059877149 +0 -0
  17. rag-evaluation-harness/.ruff_cache/0.9.3/546521684010417666 +0 -0
  18. rag-evaluation-harness/.ruff_cache/0.9.3/6007768929244549241 +0 -0
  19. rag-evaluation-harness/.ruff_cache/0.9.3/7177632790420859335 +0 -0
  20. rag-evaluation-harness/.ruff_cache/0.9.3/7685340309003606770 +0 -0
  21. rag-evaluation-harness/.ruff_cache/0.9.3/789121911436317890 +0 -0
  22. rag-evaluation-harness/.ruff_cache/0.9.3/8058834852024398863 +0 -0
  23. rag-evaluation-harness/.ruff_cache/0.9.3/8864226155108425509 +0 -0
  24. rag-evaluation-harness/.ruff_cache/0.9.3/9882451912837559226 +0 -0
  25. rag-evaluation-harness/lm_eval/__init__.py +1 -0
  26. rag-evaluation-harness/lm_eval/__main__.py +511 -0
  27. rag-evaluation-harness/lm_eval/decontamination/janitor.py +328 -0
  28. rag-evaluation-harness/lm_eval/tasks/__init__.py +449 -0
  29. rag-evaluation-harness/lm_eval/tasks/anli/README.md +56 -0
  30. rag-evaluation-harness/lm_eval/tasks/anli/anli_r3.yaml +5 -0
  31. rag-evaluation-harness/lm_eval/tasks/csatqa/_generate_configs.py +51 -0
  32. rag-evaluation-harness/lm_eval/tasks/csatqa/csatqa_gr.yaml +3 -0
  33. rag-evaluation-harness/lm_eval/tasks/csatqa/csatqa_rch.yaml +3 -0
  34. rag-evaluation-harness/lm_eval/tasks/csatqa/csatqa_rcss.yaml +3 -0
  35. rag-evaluation-harness/lm_eval/tasks/french_bench/_default_template_yaml +4 -0
  36. rag-evaluation-harness/lm_eval/tasks/french_bench/french_bench_arc_challenge.yaml +21 -0
  37. rag-evaluation-harness/lm_eval/tasks/french_bench/french_bench_boolqa.yaml +23 -0
  38. rag-evaluation-harness/lm_eval/tasks/french_bench/french_bench_fquadv2_genq.yaml +31 -0
  39. rag-evaluation-harness/lm_eval/tasks/french_bench/french_bench_hellaswag.yaml +20 -0
  40. rag-evaluation-harness/lm_eval/tasks/french_bench/french_bench_opus_perplexity.yaml +23 -0
  41. rag-evaluation-harness/lm_eval/tasks/french_bench/french_bench_orangesum_title.yaml +28 -0
  42. rag-evaluation-harness/lm_eval/tasks/french_bench/french_bench_wikitext_fr.yaml +25 -0
  43. rag-evaluation-harness/lm_eval/tasks/french_bench/preprocess_wikitext.py +48 -0
  44. rag-evaluation-harness/lm_eval/tasks/french_bench/utils.py +102 -0
  45. rag-evaluation-harness/lm_eval/tasks/kobest/README.md +37 -0
  46. rag-evaluation-harness/lm_eval/tasks/kobest/kobest_boolq.yaml +23 -0
  47. rag-evaluation-harness/lm_eval/tasks/kobest/utils.py +48 -0
  48. rag-evaluation-harness/lm_eval/tasks/okapi/arc_multilingual/arc_ar.yaml +7 -0
  49. rag-evaluation-harness/lm_eval/tasks/okapi/arc_multilingual/arc_nl.yaml +7 -0
  50. rag-evaluation-harness/lm_eval/tasks/okapi/hellaswag_multilingual/README.md +48 -0
.gitmodules ADDED
@@ -0,0 +1,10 @@
+ [submodule "sglang_repo"]
+ path = sglang_repo
+ url = https://github.com/yichuan520030910320/sglang.git
+ branch = usedinPowerRAG
+ [submodule "DiskANN"]
+ path = DiskANN
+ url = https://github.com/yichuan520030910320/DiskANN.git
+ [submodule "SPANN"]
+ path = SPANN
+ url = https://github.com/yichuan520030910320/SPANN.git
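The submodules above pull in the PowerRAG forks of sglang, DiskANN, and SPANN, so they must be fetched after cloning. A minimal sketch (assuming `git` is on PATH and the working directory is the repository root; the helper name is ours, not part of this commit):

```python
# Hedged sketch: fetch the submodules declared in .gitmodules after cloning.
# Assumes `git` is installed; the helper name is illustrative.
import subprocess

def init_submodules(repo_root: str = ".") -> None:
    # --init registers sglang_repo, DiskANN, and SPANN; --recursive follows any nested submodules
    subprocess.run(
        ["git", "submodule", "update", "--init", "--recursive"],
        cwd=repo_root,
        check=True,
    )

if __name__ == "__main__":
    init_submodules()
```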
latency_breakdown_diskann_20250219_033737.json ADDED
The diff for this file is too large to render. See raw diff
 
rag-evaluation-harness/.ruff_cache/0.9.3/11003261178682080991 ADDED · Binary file (134 Bytes)
rag-evaluation-harness/.ruff_cache/0.9.3/1195792812016165223 ADDED · Binary file (132 Bytes)
rag-evaluation-harness/.ruff_cache/0.9.3/12014129994672639323 ADDED · Binary file (134 Bytes)
rag-evaluation-harness/.ruff_cache/0.9.3/12288333375405380992 ADDED · Binary file (136 Bytes)
rag-evaluation-harness/.ruff_cache/0.9.3/12495519353814268830 ADDED · Binary file (150 Bytes)
rag-evaluation-harness/.ruff_cache/0.9.3/13179638047834592586 ADDED · Binary file (132 Bytes)
rag-evaluation-harness/.ruff_cache/0.9.3/13594465565451147476 ADDED · Binary file (129 Bytes)
rag-evaluation-harness/.ruff_cache/0.9.3/13996579915424681555 ADDED · Binary file (136 Bytes)
rag-evaluation-harness/.ruff_cache/0.9.3/16061065282831403620 ADDED · Binary file (143 Bytes)
rag-evaluation-harness/.ruff_cache/0.9.3/16593127969924090722 ADDED · Binary file (144 Bytes)
rag-evaluation-harness/.ruff_cache/0.9.3/16992967298380143688 ADDED · Binary file (138 Bytes)
rag-evaluation-harness/.ruff_cache/0.9.3/18243468368813636277 ADDED · Binary file (170 Bytes)
rag-evaluation-harness/.ruff_cache/0.9.3/2719175708618349960 ADDED · Binary file (195 Bytes)
rag-evaluation-harness/.ruff_cache/0.9.3/474605506059877149 ADDED · Binary file (156 Bytes)
rag-evaluation-harness/.ruff_cache/0.9.3/546521684010417666 ADDED · Binary file (188 Bytes)
rag-evaluation-harness/.ruff_cache/0.9.3/6007768929244549241 ADDED · Binary file (146 Bytes)
rag-evaluation-harness/.ruff_cache/0.9.3/7177632790420859335 ADDED · Binary file (189 Bytes)
rag-evaluation-harness/.ruff_cache/0.9.3/7685340309003606770 ADDED · Binary file (145 Bytes)
rag-evaluation-harness/.ruff_cache/0.9.3/789121911436317890 ADDED · Binary file (139 Bytes)
rag-evaluation-harness/.ruff_cache/0.9.3/8058834852024398863 ADDED · Binary file (143 Bytes)
rag-evaluation-harness/.ruff_cache/0.9.3/8864226155108425509 ADDED · Binary file (176 Bytes)
rag-evaluation-harness/.ruff_cache/0.9.3/9882451912837559226 ADDED · Binary file (163 Bytes)
rag-evaluation-harness/lm_eval/__init__.py ADDED
@@ -0,0 +1 @@
+ from .evaluator import evaluate, simple_evaluate
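Because the package root re-exports `evaluate` and `simple_evaluate`, the harness can be driven from Python as well as from the CLI defined in `__main__.py` below. A minimal sketch, assuming an installed `lm_eval` package; the checkpoint is just the example used in the CLI help text:

```python
# Hedged sketch: programmatic use of the re-exported simple_evaluate.
# The checkpoint and task list are illustrative, not part of this commit.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",                                      # same default backend as the CLI's --model
    model_args="pretrained=EleutherAI/pythia-160m",  # example from the --model_args help text
    tasks=["anli_r1"],
    num_fewshot=0,
    batch_size=1,
)
print(results["results"])
```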
rag-evaluation-harness/lm_eval/__main__.py ADDED
@@ -0,0 +1,511 @@
1
+ import argparse
2
+ import json
3
+ import logging
4
+ import os
5
+ import sys
6
+ from functools import partial
7
+ from typing import Union
8
+
9
+ from lm_eval import evaluator, utils
10
+ from lm_eval.evaluator import request_caching_arg_to_dict
11
+ from lm_eval.loggers import EvaluationTracker, WandbLogger
12
+ from lm_eval.tasks import TaskManager
13
+ from lm_eval.utils import handle_non_serializable, make_table, simple_parse_args_string
14
+
15
+
16
+ def _int_or_none_list_arg_type(
17
+ min_len: int, max_len: int, defaults: str, value: str, split_char: str = ","
18
+ ):
19
+ def parse_value(item):
20
+ item = item.strip().lower()
21
+ if item == "none":
22
+ return None
23
+ try:
24
+ return int(item)
25
+ except ValueError:
26
+ raise argparse.ArgumentTypeError(f"{item} is not an integer or None")
27
+
28
+ items = [parse_value(v) for v in value.split(split_char)]
29
+ num_items = len(items)
30
+
31
+ if num_items == 1:
32
+ # Makes downstream handling the same for single and multiple values
33
+ items = items * max_len
34
+ elif num_items < min_len or num_items > max_len:
35
+ raise argparse.ArgumentTypeError(
36
+ f"Argument requires {max_len} integers or None, separated by '{split_char}'"
37
+ )
38
+ elif num_items != max_len:
39
+ logging.warning(
40
+ f"Argument requires {max_len} integers or None, separated by '{split_char}'. "
41
+ "Missing values will be filled with defaults."
42
+ )
43
+ default_items = [parse_value(v) for v in defaults.split(split_char)]
44
+ items.extend(
45
+ default_items[num_items:]
46
+ ) # extend items list with missing defaults
47
+
48
+ return items
49
+
50
+
51
+ def check_argument_types(parser: argparse.ArgumentParser):
52
+ """
53
+ Check to make sure all CLI args are typed, raises error if not
54
+ """
55
+ for action in parser._actions:
56
+ if action.dest != "help" and not action.const:
57
+ if action.type is None:
58
+ raise ValueError(
59
+ f"Argument '{action.dest}' doesn't have a type specified."
60
+ )
61
+ else:
62
+ continue
63
+
64
+
65
+ def setup_parser() -> argparse.ArgumentParser:
66
+ parser = argparse.ArgumentParser(formatter_class=argparse.RawTextHelpFormatter)
67
+ parser.add_argument(
68
+ "--model", "-m", type=str, default="hf", help="Name of model e.g. `hf`"
69
+ )
70
+ parser.add_argument(
71
+ "--tasks",
72
+ "-t",
73
+ default=None,
74
+ type=str,
75
+ metavar="task1,task2",
76
+ help="To get full list of tasks, use the command lm-eval --tasks list",
77
+ )
78
+ parser.add_argument(
79
+ "--model_args",
80
+ "-a",
81
+ default="",
82
+ type=str,
83
+ help="Comma separated string arguments for model, e.g. `pretrained=EleutherAI/pythia-160m,dtype=float32`",
84
+ )
85
+ parser.add_argument(
86
+ "--num_fewshot",
87
+ "-f",
88
+ type=int,
89
+ default=None,
90
+ metavar="N",
91
+ help="Number of examples in few-shot context",
92
+ )
93
+ parser.add_argument(
94
+ "--batch_size",
95
+ "-b",
96
+ type=str,
97
+ default=1,
98
+ metavar="auto|auto:N|N",
99
+ help="Acceptable values are 'auto', 'auto:N' or N, where N is an integer. Default 1.",
100
+ )
101
+ parser.add_argument(
102
+ "--max_batch_size",
103
+ type=int,
104
+ default=None,
105
+ metavar="N",
106
+ help="Maximal batch size to try with --batch_size auto.",
107
+ )
108
+ parser.add_argument(
109
+ "--device",
110
+ type=str,
111
+ default=None,
112
+ help="Device to use (e.g. cuda, cuda:0, cpu).",
113
+ )
114
+ parser.add_argument(
115
+ "--output_path",
116
+ "-o",
117
+ default=None,
118
+ type=str,
119
+ metavar="DIR|DIR/file.json",
120
+ help="The path to the output file where the result metrics will be saved. If the path is a directory and log_samples is true, the results will be saved in the directory. Else the parent directory will be used.",
121
+ )
122
+ parser.add_argument(
123
+ "--limit",
124
+ "-L",
125
+ type=float,
126
+ default=None,
127
+ metavar="N|0<N<1",
128
+ help="Limit the number of examples per task. "
129
+ "If <1, limit is a percentage of the total number of examples.",
130
+ )
131
+ parser.add_argument(
132
+ "--use_cache",
133
+ "-c",
134
+ type=str,
135
+ default=None,
136
+ metavar="DIR",
137
+ help="A path to a sqlite db file for caching model responses. `None` if not caching.",
138
+ )
139
+ parser.add_argument(
140
+ "--cache_requests",
141
+ type=str,
142
+ default=None,
143
+ choices=["true", "refresh", "delete"],
144
+ help="Speed up evaluation by caching the building of dataset requests. `None` if not caching.",
145
+ )
146
+ parser.add_argument(
147
+ "--check_integrity",
148
+ action="store_true",
149
+ help="Whether to run the relevant part of the test suite for the tasks.",
150
+ )
151
+ parser.add_argument(
152
+ "--write_out",
153
+ "-w",
154
+ action="store_true",
155
+ default=False,
156
+ help="Prints the prompt for the first few documents.",
157
+ )
158
+ parser.add_argument(
159
+ "--log_samples",
160
+ "-s",
161
+ action="store_true",
162
+ default=False,
163
+ help="If True, write out all model outputs and documents for per-sample measurement and post-hoc analysis. Use with --output_path.",
164
+ )
165
+ parser.add_argument(
166
+ "--system_instruction",
167
+ type=str,
168
+ default=None,
169
+ help="System instruction to be used in the prompt",
170
+ )
171
+ parser.add_argument(
172
+ "--apply_chat_template",
173
+ action="store_true",
174
+ default=False,
175
+ help="If True, applies the chat template to the prompt",
176
+ )
177
+ parser.add_argument(
178
+ "--fewshot_as_multiturn",
179
+ action="store_true",
180
+ default=False,
181
+ help="If True, uses the fewshot as a multi-turn conversation",
182
+ )
183
+ parser.add_argument(
184
+ "--show_config",
185
+ action="store_true",
186
+ default=False,
187
+ help="If True, shows the the full config of all tasks at the end of the evaluation.",
188
+ )
189
+ parser.add_argument(
190
+ "--include_path",
191
+ type=str,
192
+ default=None,
193
+ metavar="DIR",
194
+ help="Additional path to include if there are external tasks to include.",
195
+ )
196
+ parser.add_argument(
197
+ "--gen_kwargs",
198
+ type=str,
199
+ default=None,
200
+ help=(
201
+ "String arguments for model generation on greedy_until tasks,"
202
+ " e.g. `temperature=0,top_k=0,top_p=0`."
203
+ ),
204
+ )
205
+ parser.add_argument(
206
+ "--verbosity",
207
+ "-v",
208
+ type=str.upper,
209
+ default="INFO",
210
+ metavar="CRITICAL|ERROR|WARNING|INFO|DEBUG",
211
+ help="Controls the reported logging error level. Set to DEBUG when testing + adding new task configurations for comprehensive log output.",
212
+ )
213
+ parser.add_argument(
214
+ "--wandb_args",
215
+ type=str,
216
+ default="",
217
+ help="Comma separated string arguments passed to wandb.init, e.g. `project=lm-eval,job_type=eval",
218
+ )
219
+ parser.add_argument(
220
+ "--hf_hub_log_args",
221
+ type=str,
222
+ default="",
223
+ help="Comma separated string arguments passed to Hugging Face Hub's log function, e.g. `hub_results_org=EleutherAI,hub_repo_name=lm-eval-results`",
224
+ )
225
+ parser.add_argument(
226
+ "--predict_only",
227
+ "-x",
228
+ action="store_true",
229
+ default=False,
230
+ help="Use with --log_samples. Only model outputs will be saved and metrics will not be evaluated.",
231
+ )
232
+ default_seed_string = "0,1234,1234,1234"
233
+ parser.add_argument(
234
+ "--seed",
235
+ type=partial(_int_or_none_list_arg_type, 3, 4, default_seed_string),
236
+ default=default_seed_string, # for backward compatibility
237
+ help=(
238
+ "Set seed for python's random, numpy, torch, and fewshot sampling.\n"
239
+ "Accepts a comma-separated list of 4 values for python's random, numpy, torch, and fewshot sampling seeds, "
240
+ "respectively, or a single integer to set the same seed for all four.\n"
241
+ f"The values are either an integer or 'None' to not set the seed. Default is `{default_seed_string}` "
242
+ "(for backward compatibility).\n"
243
+ "E.g. `--seed 0,None,8,52` sets `random.seed(0)`, `torch.manual_seed(8)`, and fewshot sampling seed to 52. "
244
+ "Here numpy's seed is not set since the second value is `None`.\n"
245
+ "E.g, `--seed 42` sets all four seeds to 42."
246
+ ),
247
+ )
248
+ parser.add_argument(
249
+ "--trust_remote_code",
250
+ action="store_true",
251
+ help="Sets trust_remote_code to True to execute code to create HF Datasets from the Hub",
252
+ )
253
+ parser.add_argument(
254
+ "--save_inputs_only",
255
+ action="store_true",
256
+ help="Sets save_inputs_only to True to only save the evaluation data without actually running any evaluation.",
257
+ )
258
+ parser.add_argument(
259
+ "--inputs_save_dir",
260
+ type=str,
261
+ default="lm-eval-data",
262
+ help="Path to save the evaluation data as the queries for retrieval.",
263
+ )
264
+ parser.add_argument(
265
+ "--answer_save_dir",
266
+ type=str,
267
+ default=None,
268
+ help="Path to save the evaluation data with answers for analysis.",
269
+ )
270
+ parser.add_argument(
271
+ "--overwrite_saved_inputs",
272
+ action="store_true",
273
+ help="Set overwrite_saved_inputs to True to overwrite the previously saved files.",
274
+ )
275
+ parser.add_argument(
276
+ "--retrieval_file",
277
+ type=str,
278
+ default="",
279
+ help="The retrieval documents for the current task.",
280
+ )
281
+ parser.add_argument(
282
+ "--retrieval_dir",
283
+ type=str,
284
+ default="",
285
+ help="The retrieval directory for the current task - currently ONLY supported for MMLU.",
286
+ )
287
+ parser.add_argument(
288
+ "--concat_k",
289
+ type=int,
290
+ default=0,
291
+ help="The number of retrieved documents to be prepended to the original input.",
292
+ )
293
+ parser.add_argument(
294
+ "--results_only_save_path",
295
+ type=str,
296
+ default=None,
297
+ help="A JSONL path to save the final results.",
298
+ )
299
+ parser.add_argument(
300
+ "--additional_system_prompt",
301
+ type=str,
302
+ default="",
303
+ help="An additional system prompt to prepend in the inputs.",
304
+ )
305
+ return parser
306
+
307
+
308
+ def parse_eval_args(parser: argparse.ArgumentParser) -> argparse.Namespace:
309
+ check_argument_types(parser)
310
+ return parser.parse_args()
311
+
312
+
313
+ def cli_evaluate(args: Union[argparse.Namespace, None] = None) -> None:
314
+ if not args:
315
+ # we allow for args to be passed externally, else we parse them ourselves
316
+ parser = setup_parser()
317
+ args = parse_eval_args(parser)
318
+
319
+ if args.wandb_args:
320
+ wandb_logger = WandbLogger(**simple_parse_args_string(args.wandb_args))
321
+
322
+ eval_logger = utils.eval_logger
323
+ eval_logger.setLevel(getattr(logging, f"{args.verbosity}"))
324
+ eval_logger.info(f"Verbosity set to {args.verbosity}")
325
+ os.environ["TOKENIZERS_PARALLELISM"] = "false"
326
+
327
+ # update the evaluation tracker args with the output path and the HF token
328
+ if args.output_path:
329
+ args.hf_hub_log_args += f",output_path={args.output_path}"
330
+ if os.environ.get("HF_TOKEN", None):
331
+ args.hf_hub_log_args += f",token={os.environ.get('HF_TOKEN')}"
332
+ evaluation_tracker_args = simple_parse_args_string(args.hf_hub_log_args)
333
+ evaluation_tracker = EvaluationTracker(**evaluation_tracker_args)
334
+
335
+ if args.predict_only:
336
+ args.log_samples = True
337
+ if (args.log_samples or args.predict_only) and not args.output_path:
338
+ raise ValueError(
339
+ "Specify --output_path if providing --log_samples or --predict_only"
340
+ )
341
+
342
+ if args.fewshot_as_multiturn and args.apply_chat_template is False:
343
+ raise ValueError(
344
+ "If fewshot_as_multiturn is set, apply_chat_template must be set to True."
345
+ )
346
+
347
+ if (
348
+ args.num_fewshot is None or args.num_fewshot == 0
349
+ ) and args.fewshot_as_multiturn:
350
+ raise ValueError(
351
+ "If fewshot_as_multiturn is set, num_fewshot must be greater than 0."
352
+ )
353
+
354
+ if args.include_path is not None:
355
+ eval_logger.info(f"Including path: {args.include_path}")
356
+ task_manager = TaskManager(args.verbosity, include_path=args.include_path)
357
+
358
+ if "push_samples_to_hub" in evaluation_tracker_args and not args.log_samples:
359
+ eval_logger.warning(
360
+ "Pushing samples to the Hub requires --log_samples to be set. Samples will not be pushed to the Hub."
361
+ )
362
+
363
+ if args.limit:
364
+ eval_logger.warning(
365
+ " --limit SHOULD ONLY BE USED FOR TESTING."
366
+ "REAL METRICS SHOULD NOT BE COMPUTED USING LIMIT."
367
+ )
368
+
369
+ if args.tasks is None:
370
+ eval_logger.error("Need to specify task to evaluate.")
371
+ sys.exit()
372
+ elif args.tasks == "list":
373
+ eval_logger.info(
374
+ "Available Tasks:\n - {}".format("\n - ".join(task_manager.all_tasks))
375
+ )
376
+ sys.exit()
377
+ else:
378
+ if os.path.isdir(args.tasks):
379
+ import glob
380
+
381
+ task_names = []
382
+ yaml_path = os.path.join(args.tasks, "*.yaml")
383
+ for yaml_file in glob.glob(yaml_path):
384
+ config = utils.load_yaml_config(yaml_file)
385
+ task_names.append(config)
386
+ else:
387
+ task_list = args.tasks.split(",")
388
+ task_names = task_manager.match_tasks(task_list)
389
+ for task in [task for task in task_list if task not in task_names]:
390
+ if os.path.isfile(task):
391
+ config = utils.load_yaml_config(task)
392
+ task_names.append(config)
393
+ task_missing = [
394
+ task for task in task_list if task not in task_names and "*" not in task
395
+ ] # we don't want errors if a wildcard ("*") task name was used
396
+
397
+ if task_missing:
398
+ missing = ", ".join(task_missing)
399
+ eval_logger.error(
400
+ f"Tasks were not found: {missing}\n"
401
+ f"{utils.SPACING}Try `lm-eval --tasks list` for list of available tasks",
402
+ )
403
+ raise ValueError(
404
+ f"Tasks not found: {missing}. Try `lm-eval --tasks list` for list of available tasks, or '--verbosity DEBUG' to troubleshoot task registration issues."
405
+ )
406
+
407
+ # Respect user's value passed in via CLI, otherwise default to True and add to comma-separated model args
408
+ if args.trust_remote_code:
409
+ os.environ["HF_DATASETS_TRUST_REMOTE_CODE"] = str(args.trust_remote_code)
410
+ args.model_args = (
411
+ args.model_args
412
+ + f",trust_remote_code={os.environ['HF_DATASETS_TRUST_REMOTE_CODE']}"
413
+ )
414
+
415
+ eval_logger.info(f"Selected Tasks: {task_names}")
416
+
417
+ request_caching_args = request_caching_arg_to_dict(
418
+ cache_requests=args.cache_requests
419
+ )
420
+
421
+ results = evaluator.simple_evaluate(
422
+ model=args.model,
423
+ model_args=args.model_args,
424
+ tasks=task_names,
425
+ num_fewshot=args.num_fewshot,
426
+ batch_size=args.batch_size,
427
+ max_batch_size=args.max_batch_size,
428
+ device=args.device,
429
+ use_cache=args.use_cache,
430
+ limit=args.limit,
431
+ check_integrity=args.check_integrity,
432
+ write_out=args.write_out,
433
+ log_samples=args.log_samples,
434
+ evaluation_tracker=evaluation_tracker,
435
+ system_instruction=args.system_instruction,
436
+ apply_chat_template=args.apply_chat_template,
437
+ fewshot_as_multiturn=args.fewshot_as_multiturn,
438
+ gen_kwargs=args.gen_kwargs,
439
+ task_manager=task_manager,
440
+ verbosity=args.verbosity,
441
+ predict_only=args.predict_only,
442
+ random_seed=args.seed[0],
443
+ numpy_random_seed=args.seed[1],
444
+ torch_random_seed=args.seed[2],
445
+ fewshot_random_seed=args.seed[3],
446
+ retrieval_args={
447
+ "save_inputs_only": args.save_inputs_only,
448
+ "inputs_save_dir": args.inputs_save_dir,
449
+ "answer_save_dir": args.answer_save_dir,
450
+ "overwrite_saved_inputs": args.overwrite_saved_inputs,
451
+ "retrieval_file": args.retrieval_file,
452
+ "retrieval_dir": args.retrieval_dir,
453
+ "concat_k": args.concat_k,
454
+ "additional_system_prompt": args.additional_system_prompt,
455
+ },
456
+ **request_caching_args,
457
+ )
458
+
459
+ if results is not None:
460
+ if args.log_samples:
461
+ samples = results.pop("samples")
462
+ dumped = json.dumps(
463
+ results, indent=2, default=handle_non_serializable, ensure_ascii=False
464
+ )
465
+ if args.show_config:
466
+ print(dumped)
467
+
468
+ batch_sizes = ",".join(map(str, results["config"]["batch_sizes"]))
469
+
470
+ # Add W&B logging
471
+ if args.wandb_args:
472
+ try:
473
+ wandb_logger.post_init(results)
474
+ wandb_logger.log_eval_result()
475
+ if args.log_samples:
476
+ wandb_logger.log_eval_samples(samples)
477
+ except Exception as e:
478
+ eval_logger.info(f"Logging to Weights and Biases failed due to {e}")
479
+
480
+ evaluation_tracker.save_results_aggregated(
481
+ results=results, samples=samples if args.log_samples else None
482
+ )
483
+
484
+ if args.log_samples:
485
+ for task_name, config in results["configs"].items():
486
+ evaluation_tracker.save_results_samples(
487
+ task_name=task_name, samples=samples[task_name]
488
+ )
489
+
490
+ if (
491
+ evaluation_tracker.push_results_to_hub
492
+ or evaluation_tracker.push_samples_to_hub
493
+ ):
494
+ evaluation_tracker.recreate_metadata_card()
495
+
496
+ print(
497
+ f"{args.model} ({args.model_args}), gen_kwargs: ({args.gen_kwargs}), limit: {args.limit}, num_fewshot: {args.num_fewshot}, num_docs: {args.concat_k} "
498
+ f"batch_size: {args.batch_size}{f' ({batch_sizes})' if batch_sizes else ''}"
499
+ )
500
+
501
+ print(make_table(results, log_path=args.results_only_save_path))
502
+ if "groups" in results:
503
+ print(make_table(results, "groups", log_path=args.results_only_save_path))
504
+
505
+ if args.wandb_args:
506
+ # Tear down wandb run once all the logging is done.
507
+ wandb_logger.run.finish()
508
+
509
+
510
+ if __name__ == "__main__":
511
+ cli_evaluate()
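Beyond the upstream flags, this `__main__.py` adds retrieval-oriented options (`--save_inputs_only`, `--retrieval_file`, `--concat_k`, `--additional_system_prompt`, and friends). A hedged sketch of exercising them by invoking `cli_evaluate` from Python; the paths, task name, and checkpoint are placeholders:

```python
# Hedged sketch: exercising the retrieval flags added by this fork of the harness.
# Paths, task name, and checkpoint are placeholders, not files in this commit.
import sys
from lm_eval.__main__ import cli_evaluate

sys.argv = [
    "lm-eval",
    "--model", "hf",
    "--model_args", "pretrained=EleutherAI/pythia-160m",
    "--tasks", "anli_r1",
    "--retrieval_file", "retrieved_docs.jsonl",     # placeholder retrieval results
    "--concat_k", "3",                              # prepend the top-3 retrieved documents
    "--results_only_save_path", "results.jsonl",    # placeholder output path
]
cli_evaluate()   # parse_eval_args() reads the argv set above
```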
rag-evaluation-harness/lm_eval/decontamination/janitor.py ADDED
@@ -0,0 +1,328 @@
1
+ import pickle
2
+ import re
3
+ import string
4
+ import traceback
5
+ from typing import Iterator, List, Sequence, Tuple, TypeVar
6
+
7
+
8
+ # This is a cpp module. Compile janitor_util.cpp with:
9
+ # c++ -O3 -Wall -shared -std=c++11 -fPIC $(python3 -m pybind11 --includes) janitor_util.cpp -o janitor_util$(python3-config --extension-suffix) -undefined dynamic_lookup
10
+ try:
11
+ import janitor_util
12
+
13
+ JANITOR_CPP = True
14
+ except Exception:
15
+ print("WARNING: C++ module could not be loaded. Janitor running in python mode")
16
+ traceback.print_exc()
17
+ JANITOR_CPP = False
18
+
19
+ T = TypeVar("T")
20
+
21
+
22
+ # Implementation from nltk source
23
+ # https://www.nltk.org/_modules/nltk/util.html
24
+ def form_ngrams(sequence: Iterator[T], n: int) -> Iterator[Tuple[T, ...]]:
25
+ history = []
26
+ while n > 1:
27
+ # PEP 479, prevent RuntimeError from being raised when StopIteration bubbles out of generator
28
+ try:
29
+ next_item = next(sequence)
30
+ except StopIteration:
31
+ # no more data, terminate the generator
32
+ return
33
+ history.append(next_item)
34
+ n -= 1
35
+ for item in sequence:
36
+ history.append(item)
37
+ yield tuple(history)
38
+ del history[0]
39
+
40
+
41
+ def word_ngrams(s: str, n: int) -> Iterator[str]:
42
+ """Splits a string into ngram words"""
43
+ tokens = s.split() # not a generator :(
44
+ ngram_seqs = form_ngrams(iter(tokens), n)
45
+ return (" ".join(ngram) for ngram in ngram_seqs)
46
+
47
+
48
+ # Does character sequences only - combined faster function to play around with later
49
+ # def word_ngrams_indices_combined(sequence, n):
50
+ # current_word = ""
51
+ # history = []
52
+ # gap = False;
53
+ # start = 0
54
+ # end = 0
55
+ # for character in sequence:
56
+ # if character == " ":
57
+ # if not gap:
58
+ # gap = True
59
+ # history.append(current_word)
60
+ # end += len(current_word) - 1
61
+ # current_word = ""
62
+ # if len(history) == n:
63
+ # yield (tuple(history), start, end)
64
+ # del history[0]
65
+ # start = end + 1
66
+ # end = start
67
+ # else:
68
+ # gap = False
69
+ # current_word += character
70
+
71
+
72
+ # https://stackoverflow.com/questions/13734451/string-split-with-indices-in-python
73
+ def split_indices(s: str) -> Iterator[Tuple[str, Tuple[int, int]]]:
74
+ """Splits a string on whitespaces and records the indices of each in the original string.
75
+ @:return generator((word, (start_idx, end_idx)), ...)
76
+ """
77
+ return ((m.group(0), (m.start(), m.end() - 1)) for m in re.finditer(r"\S+", s))
78
+
79
+
80
+ def word_ngrams_indices(s: str, n: int) -> Iterator[Tuple[str, Tuple[int, int]]]:
81
+ """Splits a string into pairs of (ngram words, their start/end indices)"""
82
+ tokens_with_indices = split_indices(s)
83
+
84
+ # Generator of ngrams of (word, idx_pairs)
85
+ # (
86
+ # [(word, (start,end)), (word, (start, end))...],
87
+ # [(word, (start, end)), ...],
88
+ # ...
89
+ # )
90
+ ngram_seqs_with_indices = form_ngrams(tokens_with_indices, n)
91
+
92
+ # Generator of pairs of word and index ngrams
93
+ # (
94
+ # ([word, word, ...], [(start,end), (start,end), ...]),
95
+ # ...
96
+ # )
97
+ ngram_indices_pairs = (
98
+ zip(*ngram_with_indices) for ngram_with_indices in ngram_seqs_with_indices
99
+ )
100
+
101
+ # Generator of ( (word_ngram, (start, end)), (word_ngram, start, end)), ...)
102
+ return (
103
+ (" ".join(ngram_seq), (indices[0][0], indices[-1][1]))
104
+ for ngram_seq, indices in ngram_indices_pairs
105
+ )
106
+
107
+
108
+ class Janitor:
109
+ # FIXME delete_chars: Should anything else go here? Special chars?
110
+ def __init__(
111
+ self,
112
+ ngram_n: int = 13,
113
+ window_to_remove: int = 200,
114
+ too_dirty_cutoff: int = 10,
115
+ minimum_slice_length: int = 200,
116
+ delete_chars: str = string.punctuation,
117
+ ) -> None:
118
+ self.ngram_n = ngram_n
119
+ self.window_to_remove = window_to_remove
120
+ self.too_dirty_cutoff = too_dirty_cutoff
121
+ self.minimum_slice_length = minimum_slice_length
122
+ self.delete_chars = delete_chars
123
+
124
+ self.dirt_ngrams = set()
125
+
126
+ # If in python, we'll translate uppercase to lowercase and delete naughty characters.
127
+ # This is fast by python standards
128
+ # https://stackoverflow.com/questions/638893/what-is-the-most-efficient-way-in-python-to-convert-a-string-to-all-lowercase-st
129
+ self.translation_table = str.maketrans(
130
+ string.ascii_lowercase + string.ascii_uppercase, # These characters
131
+ string.ascii_lowercase * 2, # Become these characters
132
+ self.delete_chars, # These are deleted
133
+ )
134
+
135
+ ##############
136
+ # I/O for saving contamination ngrams
137
+ ##############
138
+
139
+ def save_contamination_ngrams(self, filename: str) -> None:
140
+ with open(filename, "wb") as fp:
141
+ pickle.dump(self.dirt_ngrams, fp)  # dump the ngram set, not the filename
142
+
143
+ def load_contamination_ngrams(self, filename: str) -> None:
144
+ with open(filename, "rb") as fp:
145
+ self.dirt_ngrams = pickle.load(fp)
146
+
147
+ ##############
148
+ # Call these :)
149
+ ##############
150
+
151
+ def register_contaminant(self, dirt_string: str) -> None:
152
+ """Register a string as contamination to be removed, e.g. a test set
153
+ This breaks the dirt_string into ngrams to store for future cleaning"""
154
+ if JANITOR_CPP:
155
+ return self.register_contaminant_cpp(dirt_string)
156
+ else:
157
+ print("WARNING: Janitor running in python mode")
158
+ return self.register_contaminant_python(dirt_string)
159
+
160
+ def clean(self, dirty_string: str) -> List[str]:
161
+ """Clean a string (e.g. a training set) by removing all ngrams previously
162
+ registered as contaminants. Returns a list of clean chunks, or empty if
163
+ the string was too dirty"""
164
+ if JANITOR_CPP:
165
+ return self.clean_cpp(dirty_string)
166
+ else:
167
+ print("WARNING: Janitor running in python mode")
168
+ return self.clean_python(dirty_string)
169
+
170
+ def _split_chunks(
171
+ self, dirty_string: str, dirty_parts: Sequence[Tuple]
172
+ ) -> List[str]:
173
+ clean_chunks = []
174
+ splice_idx = 0
175
+ end = -1
176
+ for i, (ngram, start, end) in enumerate(dirty_parts):
177
+ if i >= self.too_dirty_cutoff:
178
+ return []
179
+ start = max(0, start - self.window_to_remove)
180
+ end = min(len(dirty_string), end + self.window_to_remove)
181
+
182
+ if start - splice_idx > self.minimum_slice_length:
183
+ clean_chunks.append(dirty_string[splice_idx:start])
184
+ splice_idx = end
185
+
186
+ if end < len(dirty_string) - self.minimum_slice_length:
187
+ clean_chunks.append(dirty_string[end + 1 :])
188
+
189
+ return clean_chunks
190
+
191
+ ##############
192
+ # Fast C++
193
+ ##############
194
+
195
+ def register_contaminant_cpp(self, dirt_string) -> None:
196
+ self.dirt_ngrams.update(
197
+ janitor_util.clean_ngram(dirt_string, self.delete_chars, self.ngram_n)
198
+ )
199
+
200
+ def clean_cpp(self, dirty_string: str) -> List[str]:
201
+ contamination_indices = janitor_util.clean_ngram_with_indices(
202
+ dirty_string, self.delete_chars, self.ngram_n
203
+ )
204
+ return self._split_chunks(dirty_string, contamination_indices)
205
+
206
+ ##############
207
+ # Slow python
208
+ ##############
209
+
210
+ def normalize_string(self, s: str) -> str:
211
+ return s.translate(self.translation_table)
212
+
213
+ def register_contaminant_python(self, dirt_string: str) -> None:
214
+ self.dirt_ngrams.update(
215
+ word_ngrams(self.normalize_string(dirt_string), self.ngram_n)
216
+ )
217
+
218
+ def clean_python(self, dirty_string: str) -> List[str]:
219
+ contamination_indices = (
220
+ (None, *idx_pair)
221
+ for dirty_ngram, idx_pair in word_ngrams_indices(dirty_string, self.ngram_n)
222
+ if self.normalize_string(dirty_ngram) in self.dirt_ngrams
223
+ )
224
+ return self._split_chunks(dirty_string, contamination_indices)
225
+
226
+
227
+ ##################################################################
228
+ # Tests
229
+ #################################################################
230
+
231
+ # def print_cpp():
232
+ # source = """ ,, I'm a very !dirty,, ,, dirty boy. Clean me daddy. \n\nhe he he hehe heh. lastword """ * 2
233
+
234
+ # for i in range(1, 10, 2):
235
+ # pprint(janitor_util.clean_ngram(source, string.punctuation, i))
236
+ # for ngram, start, end in \
237
+ # janitor_util.clean_ngram_with_indices(source, string.punctuation, i):
238
+ # print(ngram, "\t", start, end, source[start:end].replace("\n", "\\n"))
239
+
240
+
241
+ # def test_cpp():
242
+ # source = """ ,, I'm a very !dirty,, ,, dirty boy. Clean me daddy. \n\nhe he he hehe heh. lastword """ * 2
243
+ # contaminant = "dirty boy. Clean he he"
244
+
245
+ # jan_python = Janitor()
246
+ # jan_cpp = Janitor()
247
+
248
+ # jan_python.register_contaminant_python(contaminant)
249
+ # jan_cpp.register_contaminant(contaminant)
250
+
251
+ # assert jan_python.dirt_ngrams == jan_cpp.dirt_ngrams, (jan_python.dirt_ngrams, jan_cpp.dirt_ngrams)
252
+
253
+ # assert jan_python.clean_python(source) == jan_cpp.clean(source), \
254
+ # (jan_python.clean_python(source), jan_cpp.clean(source))
255
+
256
+ # print("Passed test, python==cpp")
257
+
258
+
259
+ # def benchmark():
260
+ # # Download and put in data folder: enwik8 (100 MB) from https://cs.fit.edu/~mmahoney/compression/textdata.html
261
+ # setup = \
262
+ # """
263
+ # with open("data/enwik8", "r") as f:
264
+ # data = f.read()
265
+ # jan = Janitor(too_dirty_cutoff=1000)
266
+ # jan.register_contaminant('''
267
+ # theories is that there is a connection between &quot;geekdom&quot; and autism.
268
+ # This is hinted, for instance, by a ''Wired Magazine'' article in 2001 entitled &quot;
269
+ # The [[Geek]] Syndrome&quot;, which is a point argued by many in the autism rights
270
+ # movement{{ref|Wired}}. This article, many professionals assert, is just one example of
271
+ # the media's application of mental disease labels to what is actually variant normal behavior
272
+ # &amp;mdash;they argue that shyness, lack of athletic ability or social skills, and intellectual
273
+ # interests, even when they seem unusual to others, are not in themselves signs of autism or
274
+ # Asperger's syndrome. Others assert that it is actually the medical profession which is applying
275
+ # mental disease labels to children who in the past would have simply been accepted as a little
276
+ # different or even labeled 'gifted'. See [[clinomorphism]] for further discussion of this issue.
277
+ # Due to the recent publicity surrounding autism and autis
278
+ # ultan Al Nahyan]] granted [[Petroleum]] concessions, and oil was first found in 1958. At first,
279
+ # oil money had a marginal impact. A few lowrise concete buildings were erected, and the first
280
+ # paved road was completed in 1961, but Sheikh Shakbut, uncertain whether the new oil royalties
281
+ # would last, took a cautious approach, preferring to save the revenue rather than investing it in
282
+ # development. His brother, [[Zayed bin Sultan Al Nahayan]], saw that oil wealth had the potential
283
+ # to transform Abu Dhabi. The ruling Al Nahayan family decided that Sheikh Zayed should replace his
284
+ # brother as Ruler and carry out his vision of developing the country. On [[August 6]], [[1966]],
285
+ # with the assistance of the British, Sheikh Zayed became the new ruler. See generally, Al-Fahim, M,
286
+ # ''From Rags to Riches: A Story of Abu Dhabi'', Chapter Six (London Centre of Arab Studies, 1995),
287
+ # ISBN 1 900404 00 1. With the announcement by Britain in 1968 that it would withdraw from the
288
+ # Gulf area by 1971, Sheikh Zayed became the main driving force behind the formation of the
289
+ # [[United Arab Emirates]]. After the Emirates gained independence in 1971,
290
+ # ''')
291
+ # """
292
+
293
+ # n = 1
294
+ # print(f"Timing {n} run on 100 MB")
295
+ # print("Register contaminant")
296
+ # # print("\tPython", timeit.timeit("jan.register_contaminant_python(data)", setup=setup, globals=globals(), number=n))
297
+ # print("\tCpp", timeit.timeit("jan.register_contaminant(data)", setup=setup, globals=globals(), number=n))
298
+
299
+ # print("Clean")
300
+ # # print("\tPython", timeit.timeit("jan.clean_python(data)", setup=setup, globals=globals(), number=n))
301
+ # print("\tCpp", timeit.timeit("jan.clean(data)", setup=setup, globals=globals(), number=n))
302
+
303
+
304
+ # def test_janitor_general():
305
+ # source = """ ,, I'm a very !dirty,, ,, dirty boy. Clean me daddy. \n\nhe he he hehe heh. lastword """ * 2
306
+ # contaminant = "dirty boy. Clean he he"
307
+
308
+ # jan = Janitor(ngram_n=3)
309
+ # jan.register_contaminant(contaminant)
310
+ # cleaned = " ".join(jan.clean(source))
311
+ # for contam in jan.dirt_ngrams:
312
+ # assert contam not in cleaned, contam
313
+
314
+ # filename = "data/saved_contam"
315
+ # jan.save_contamination_ngrams(filename)
316
+
317
+ # jan = Janitor(ngram_n=3)
318
+ # jan.load_contamination_ngrams(filename)
319
+ # cleaned = " ".join(jan.clean(source))
320
+ # for contam in jan.dirt_ngrams:
321
+ # assert contam not in cleaned, contam
322
+
323
+
324
+ # if __name__ == "__main__":
325
+ # test()
326
+ # # print_cpp()
327
+ # # test_cpp()
328
+ # # benchmark()
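A short usage sketch for the `Janitor` defined above, using the pure-Python path so the optional `janitor_util` C++ extension is not required; the strings and parameters are illustrative:

```python
# Hedged sketch: registering a contaminant and cleaning a training string.
# Falls back to the pure-Python implementation when janitor_util is absent.
from lm_eval.decontamination.janitor import Janitor

jan = Janitor(ngram_n=3, window_to_remove=10, minimum_slice_length=5)
jan.register_contaminant("the quick brown fox jumps over the lazy dog")  # e.g. a test-set passage

dirty = "some training text here the quick brown fox jumps over the lazy dog and more training text"
chunks = jan.clean(dirty)   # list of clean chunks, [] if the string is too dirty
print(chunks)
```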
rag-evaluation-harness/lm_eval/tasks/__init__.py ADDED
@@ -0,0 +1,449 @@
1
+ import collections
2
+ import logging
3
+ import os
4
+ from functools import partial
5
+ from typing import Dict, List, Mapping, Optional, Union
6
+
7
+ from lm_eval import utils
8
+ from lm_eval.api.task import ConfigurableTask, Task
9
+
10
+
11
+ class TaskManager:
12
+ """TaskManager indexes all tasks from the default `lm_eval/tasks/`
13
+ and an optional directory if provided.
14
+
15
+ """
16
+
17
+ def __init__(self, verbosity="INFO", include_path: Optional[str] = None) -> None:
18
+ self.verbosity = verbosity
19
+ self.include_path = include_path
20
+ self.logger = utils.eval_logger
21
+ self.logger.setLevel(getattr(logging, f"{verbosity}"))
22
+
23
+ self._task_index = self.initialize_tasks(include_path=include_path)
24
+ self._all_tasks = sorted(list(self._task_index.keys()))
25
+
26
+ self.task_group_map = collections.defaultdict(list)
27
+
28
+ def initialize_tasks(self, include_path: Optional[str] = None):
29
+ """Creates a dictionary of tasks index.
30
+
31
+ :param include_path: str = None
32
+ An additional path to be searched for tasks
33
+
34
+ :return
35
+ Dictionary of task names as key and task metadata
36
+ """
37
+ all_paths = [os.path.dirname(os.path.abspath(__file__)) + "/"]
38
+ if include_path is not None:
39
+ if isinstance(include_path, str):
40
+ include_path = [include_path]
41
+ all_paths.extend(include_path)
42
+
43
+ task_index = {}
44
+ for task_dir in all_paths:
45
+ tasks = self._get_task_and_group(task_dir)
46
+ task_index = {**tasks, **task_index}
47
+
48
+ return task_index
49
+
50
+ @property
51
+ def all_tasks(self):
52
+ return self._all_tasks
53
+
54
+ @property
55
+ def task_index(self):
56
+ return self._task_index
57
+
58
+ def match_tasks(self, task_list):
59
+ return utils.pattern_match(task_list, self.all_tasks)
60
+
61
+ def _name_is_registered(self, name) -> bool:
62
+ if name in self.all_tasks:
63
+ return True
64
+ return False
65
+
66
+ def _name_is_task(self, name) -> bool:
67
+ if self._name_is_registered(name) and ("task" in self.task_index[name]["type"]):
68
+ return True
69
+ return False
70
+
71
+ def _name_is_group(self, name) -> bool:
72
+ if self._name_is_registered(name) and (
73
+ self.task_index[name]["type"] == "group"
74
+ ):
75
+ return True
76
+ return False
77
+
78
+ def _name_is_python_task(self, name):
79
+ if self._name_is_registered(name) and (
80
+ self.task_index[name]["type"] == "python_task"
81
+ ):
82
+ return True
83
+ return False
84
+
85
+ def _config_is_task(self, config) -> bool:
86
+ if ("task" in config) and isinstance(config["task"], str):
87
+ return True
88
+ return False
89
+
90
+ def _config_is_group(self, config) -> bool:
91
+ if ("task" in config) and isinstance(config["task"], list):
92
+ return True
93
+ return False
94
+
95
+ def _config_is_python_task(self, config) -> bool:
96
+ if "class" in config:
97
+ return True
98
+ return False
99
+
100
+ def _get_yaml_path(self, name):
101
+ if name not in self.task_index:
102
+ raise ValueError
103
+ return self.task_index[name]["yaml_path"]
104
+
105
+ def _get_config(self, name):
106
+ if name not in self.task_index:
107
+ raise ValueError
108
+ yaml_path = self._get_yaml_path(name)
109
+ if yaml_path == -1:
110
+ return {}
111
+ else:
112
+ return utils.load_yaml_config(yaml_path, mode="full")
113
+
114
+ def _get_tasklist(self, name):
115
+ if self._name_is_task(name):
116
+ raise ValueError
117
+ return self.task_index[name]["task"]
118
+
119
+ def _process_alias(self, config, group=None):
120
+ # If the group is not the same as the original
121
+ # group which the group alias was intended for,
122
+ # Set the group_alias to None instead.
123
+ if ("group_alias" in config) and ("group" in config) and group is not None:
124
+ if config["group"] != group:
125
+ config["group_alias"] = None
126
+ return config
127
+
128
+ def _load_individual_task_or_group(
129
+ self,
130
+ name_or_config: Optional[Union[str, dict]] = None,
131
+ parent_name: Optional[str] = None,
132
+ update_config: Optional[dict] = None,
133
+ yaml_path: Optional[str] = None,
134
+ ) -> Mapping:
135
+ def load_task(config, task, group=None, yaml_path=None):
136
+ if "include" in config:
137
+ if yaml_path is None:
138
+ raise ValueError
139
+ config = {
140
+ **utils.load_yaml_config(
141
+ yaml_path,
142
+ yaml_config={"include": config.pop("include")},
143
+ mode="full",
144
+ ),
145
+ **config,
146
+ }
147
+ if self._config_is_python_task(config):
148
+ task_object = config["class"]()
149
+ else:
150
+ config = self._process_alias(config, group=group)
151
+ task_object = ConfigurableTask(config=config)
152
+ if group is not None:
153
+ task_object = (group, task_object)
154
+ return {task: task_object}
155
+
156
+ if isinstance(name_or_config, str):
157
+ if update_config is not None:
158
+ # Process name_or_config as a dict instead
159
+ name_or_config = {"task": name_or_config, **update_config}
160
+ elif self._name_is_task(name_or_config):
161
+ task_config = self._get_config(name_or_config)
162
+ return load_task(task_config, task=name_or_config, group=parent_name)
163
+ else:
164
+ group_name = name_or_config
165
+ subtask_list = self._get_tasklist(name_or_config)
166
+ if subtask_list == -1:
167
+ group_config = self._get_config(name_or_config)
168
+ subtask_list = group_config["task"]
169
+
170
+ # This checks if we're at the root.
171
+ if parent_name is None:
172
+ group_config = self._get_config(name_or_config)
173
+ if set(group_config.keys()) > {"task", "group"}:
174
+ update_config = {
175
+ k: v
176
+ for k, v in group_config.items()
177
+ if k not in ["task", "group"]
178
+ }
179
+ yaml_path = self._get_yaml_path(group_name)
180
+
181
+ if (update_config is not None) and ("group_alias" in update_config):
182
+ group_name = update_config["group_alias"]
183
+ update_config.pop("group_alias")
184
+
185
+ if isinstance(name_or_config, dict):
186
+ if update_config is not None:
187
+ name_or_config = {
188
+ **name_or_config,
189
+ **update_config,
190
+ }
191
+
192
+ if self._config_is_task(name_or_config):
193
+ name = name_or_config["task"]
194
+ # If the name is registered as a group
195
+ # if self._name_is_task(name) is False:
196
+ if self._name_is_group(name):
197
+ group_name = name
198
+ update_config = {
199
+ k: v for k, v in name_or_config.items() if k != "task"
200
+ }
201
+ subtask_list = self._get_tasklist(name)
202
+ if subtask_list == -1:
203
+ subtask_list = self._get_config(name)["task"]
204
+ else:
205
+ if self._name_is_registered(name):
206
+ base_task_config = self._get_config(name)
207
+
208
+ # Check if this is a duplicate.
209
+ if parent_name is not None:
210
+ name_or_config["group"] = parent_name
211
+ num_duplicate = len(
212
+ list(
213
+ filter(
214
+ lambda x: x.startswith(name),
215
+ self.task_group_map[parent_name],
216
+ )
217
+ )
218
+ )
219
+ if num_duplicate > 0:
220
+ name = f"{name}-{num_duplicate}"
221
+ self.task_group_map[parent_name].append(name)
222
+
223
+ task_config = {
224
+ **base_task_config,
225
+ **name_or_config,
226
+ }
227
+ else:
228
+ task_config = name_or_config
229
+ return load_task(
230
+ task_config, task=name, group=parent_name, yaml_path=yaml_path
231
+ )
232
+ else:
233
+ group_name = name_or_config["group"]
234
+ subtask_list = name_or_config["task"]
235
+ if set(name_or_config.keys()) > {"task", "group"}:
236
+ update_config = {
237
+ k: v
238
+ for k, v in name_or_config.items()
239
+ if k not in ["task", "group"]
240
+ }
241
+
242
+ all_subtasks = {}
243
+ if parent_name is not None:
244
+ all_subtasks = {group_name: (parent_name, None)}
245
+
246
+ fn = partial(
247
+ self._load_individual_task_or_group,
248
+ parent_name=group_name,
249
+ update_config=update_config,
250
+ yaml_path=yaml_path,
251
+ )
252
+ all_subtasks = {
253
+ **all_subtasks,
254
+ **dict(collections.ChainMap(*map(fn, subtask_list))),
255
+ }
256
+ return all_subtasks
257
+
258
+ def load_task_or_group(self, task_list: Optional[Union[str, list]] = None) -> dict:
259
+ """Loads a dictionary of task objects from a list
260
+
261
+ :param task_list: Union[str, list] = None
262
+ Single string or list of string of task names to be loaded
263
+
264
+ :return
265
+ Dictionary of task objects
266
+ """
267
+ if isinstance(task_list, str):
268
+ task_list = [task_list]
269
+
270
+ all_loaded_tasks = dict(
271
+ collections.ChainMap(*map(self._load_individual_task_or_group, task_list))
272
+ )
273
+ return all_loaded_tasks
274
+
275
+ def load_config(self, config: Dict):
276
+ return self._load_individual_task_or_group(config)
277
+
278
+ def _get_task_and_group(self, task_dir: str):
279
+ """Creates a dictionary of tasks index with the following metadata,
280
+ - `type`, that can be either `task`, `python_task`, or `group`.
281
+ `task` refers to regular task configs, `python_task` to special
282
+ yaml files that consist only of `task` and `class` parameters.
283
+ `group` are group configs.
284
+ - `yaml_path`, path to the yaml file. If the entry is a `group` that
285
+ was configured through a task config, the yaml_path will be -1
286
+ and all subtasks will be listed in `task` (see below)
287
+ - `task`, reserved for entries with `type` as `group`. This will list
288
+ all subtasks. When a group config is created (as opposed to task
289
+ config having `group` parameter set), this will be set to -1 to
290
+ avoid recursive indexing. The whole list of subtasks will be loaded
291
+ at evaluation.
292
+
293
+ :param task_dir: str
294
+ A directory to check for tasks
295
+
296
+ :return
297
+ Dictionary with task names as keys and task metadata as values
298
+ """
299
+ tasks_and_groups = collections.defaultdict()
300
+ for root, _, file_list in os.walk(task_dir):
301
+ for f in file_list:
302
+ if f.endswith(".yaml"):
303
+ yaml_path = os.path.join(root, f)
304
+ config = utils.load_yaml_config(yaml_path, mode="simple")
305
+ if self._config_is_python_task(config):
306
+ # This is a python class config
307
+ tasks_and_groups[config["task"]] = {
308
+ "type": "python_task",
309
+ "yaml_path": yaml_path,
310
+ }
311
+ elif self._config_is_group(config):
312
+ # This is a group config
313
+ tasks_and_groups[config["group"]] = {
314
+ "type": "group",
315
+ "task": -1, # This signals that
316
+ # we don't need to know
317
+ # the task list for indexing
318
+ # as it can be loaded
319
+ # when called.
320
+ "yaml_path": yaml_path,
321
+ }
322
+
323
+ # # Registered the level 1 tasks from a group config
324
+ # for config in config["task"]:
325
+ # if isinstance(config, dict) and self._config_is_task(config):
326
+ # task = config["task"]
327
+ # tasks_and_groups[task] = {
328
+ # "type": "task",
329
+ # "yaml_path": yaml_path,
330
+ # }
331
+
332
+ elif self._config_is_task(config):
333
+ # This is a task config
334
+ task = config["task"]
335
+ tasks_and_groups[task] = {
336
+ "type": "task",
337
+ "yaml_path": yaml_path,
338
+ }
339
+
340
+ if "group" in config:
341
+ groups = config["group"]
342
+ if isinstance(config["group"], str):
343
+ groups = [groups]
344
+
345
+ for group in groups:
346
+ if group not in tasks_and_groups:
347
+ tasks_and_groups[group] = {
348
+ "type": "group",
349
+ "task": [task],
350
+ "yaml_path": -1,
351
+ }
352
+ else:
353
+ tasks_and_groups[group]["task"].append(task)
354
+ else:
355
+ self.logger.debug(f"File {f} in {root} could not be loaded")
356
+
357
+ return tasks_and_groups
358
+
359
+
360
+ def get_task_name_from_config(task_config: Dict[str, str]) -> str:
361
+ if "task" in task_config:
362
+ return task_config["task"]
363
+ if "dataset_name" in task_config:
364
+ return "{dataset_path}_{dataset_name}".format(**task_config)
365
+ else:
366
+ return "{dataset_path}".format(**task_config)
367
+
368
+
369
+ def get_task_name_from_object(task_object):
370
+ if hasattr(task_object, "config"):
371
+ return task_object._config["task"]
372
+
373
+ # TODO: scrap this
374
+ # this gives a mechanism for non-registered tasks to have a custom name anyways when reporting
375
+ return (
376
+ task_object.EVAL_HARNESS_NAME
377
+ if hasattr(task_object, "EVAL_HARNESS_NAME")
378
+ else type(task_object).__name__
379
+ )
380
+
381
+
382
+ def get_task_dict(
383
+ task_name_list: Union[str, List[Union[str, Dict, Task]]],
384
+ task_manager: Optional[TaskManager] = None,
385
+ ):
386
+ """Creates a dictionary of task objects from either a name of task, config, or prepared Task object.
387
+
388
+ :param task_name_list: List[Union[str, Dict, Task]]
389
+ Task names, task config dicts, or prepared Task objects to load
390
+ :param task_manager: TaskManager = None
391
+ A TaskManager object that stores indexed tasks. If not set,
392
+ task_manager will load one. This should be set by the user
393
+ if there are additional paths that need to be included
394
+ via `include_path`
395
+
396
+ :return
397
+ Dictionary of task objects
398
+ """
399
+ task_name_from_string_dict = {}
400
+ task_name_from_config_dict = {}
401
+ task_name_from_object_dict = {}
402
+
403
+ if isinstance(task_name_list, str):
404
+ task_name_list = [task_name_list]
405
+ elif isinstance(task_name_list, list):
406
+ if not all([isinstance(task, (str, dict, Task)) for task in task_name_list]):
407
+ raise TypeError(
408
+ "Expected all list items to be of types 'str', 'dict', or 'Task', but at least one entry did not match."
409
+ )
410
+ else:
411
+ raise TypeError(
412
+ f"Expected a 'str' or 'list' but received {type(task_name_list)}."
413
+ )
414
+
415
+ string_task_name_list = [task for task in task_name_list if isinstance(task, str)]
416
+ others_task_name_list = [
417
+ task for task in task_name_list if not isinstance(task, str)
418
+ ]
419
+ if len(string_task_name_list) > 0:
420
+ if task_manager is None:
421
+ task_manager = TaskManager()
422
+
423
+ task_name_from_string_dict = task_manager.load_task_or_group(
424
+ string_task_name_list
425
+ )
426
+
427
+ for task_element in others_task_name_list:
428
+ if isinstance(task_element, dict):
429
+ task_name_from_config_dict = {
430
+ **task_name_from_config_dict,
431
+ **task_manager.load_config(config=task_element),
432
+ }
433
+
434
+ elif isinstance(task_element, Task):
435
+ task_name_from_object_dict = {
436
+ **task_name_from_object_dict,
437
+ get_task_name_from_object(task_element): task_element,
438
+ }
439
+
440
+ if not set(task_name_from_string_dict.keys()).isdisjoint(
441
+ set(task_name_from_object_dict.keys())
442
+ ):
443
+ raise ValueError
444
+
445
+ return {
446
+ **task_name_from_string_dict,
447
+ **task_name_from_config_dict,
448
+ **task_name_from_object_dict,
449
+ }
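A brief sketch of how the `TaskManager` and `get_task_dict` above are used together; the wildcard pattern and task names are illustrative:

```python
# Hedged sketch: indexing tasks and resolving a task list with TaskManager.
from lm_eval.tasks import TaskManager, get_task_dict

tm = TaskManager(verbosity="INFO")          # indexes every YAML under lm_eval/tasks/
print(len(tm.all_tasks))                    # number of registered task and group names
print(tm.match_tasks(["anli_*"]))           # wildcard matching against the index

task_dict = get_task_dict(["anli_r1"], task_manager=tm)  # name -> ConfigurableTask object
```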
rag-evaluation-harness/lm_eval/tasks/anli/README.md ADDED
@@ -0,0 +1,56 @@
+ # ANLI
+
+ ### Paper
+
+ Title: `Adversarial NLI: A New Benchmark for Natural Language Understanding`
+
+ Paper Link: https://arxiv.org/abs/1910.14599
+
+ Adversarial NLI (ANLI) is a dataset collected via an iterative, adversarial
+ human-and-model-in-the-loop procedure. It consists of three rounds that progressively
+ increase in difficulty and complexity, and each question-answer includes annotator-
+ provided explanations.
+
+ Homepage: https://github.com/facebookresearch/anli
+
+ ### Citation
+
+ ```
+ @inproceedings{nie-etal-2020-adversarial,
+     title = "Adversarial {NLI}: A New Benchmark for Natural Language Understanding",
+     author = "Nie, Yixin and
+       Williams, Adina and
+       Dinan, Emily and
+       Bansal, Mohit and
+       Weston, Jason and
+       Kiela, Douwe",
+     booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
+     year = "2020",
+     publisher = "Association for Computational Linguistics",
+ }
+ ```
+
+ ### Groups and Tasks
+
+ #### Groups
+
+ * `anli`: Evaluates `anli_r1`, `anli_r2`, and `anli_r3`
+
+ #### Tasks
+ * `anli_r1`: The data collected adversarially in the first round.
+ * `anli_r2`: The data collected adversarially in the second round, after training on the previous round's data.
+ * `anli_r3`: The data collected adversarially in the third round, after training on the previous multiple rounds of data.
+
+ ### Checklist
+
+ For adding novel benchmarks/datasets to the library:
+ * [x] Is the task an existing benchmark in the literature?
+   * [x] Have you referenced the original paper that introduced the task?
+   * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
+
+ If other tasks on this dataset are already supported:
+ * [ ] Is the "Main" variant of this task clearly denoted?
+ * [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
+ * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
rag-evaluation-harness/lm_eval/tasks/anli/anli_r3.yaml ADDED
@@ -0,0 +1,5 @@
1
+ include: anli_r1.yaml
2
+ task: anli_r3
3
+ training_split: train_r3
4
+ validation_split: dev_r3
5
+ test_split: test_r3
rag-evaluation-harness/lm_eval/tasks/csatqa/_generate_configs.py ADDED
@@ -0,0 +1,51 @@
1
+ """
2
+ Take in a base YAML and generate a config YAML for each CSAT-QA subset from it
3
+ """
4
+
5
+ import argparse
6
+ import os
7
+
8
+ import yaml
9
+ from tqdm import tqdm
10
+
11
+ from lm_eval.logger import eval_logger
12
+
13
+
14
+ SUBSETS = ["WR", "GR", "RCS", "RCSS", "RCH", "LI"]
15
+
16
+
17
+ def parse_args():
18
+ parser = argparse.ArgumentParser()
19
+ parser.add_argument("--base_yaml_path", required=True)
20
+ parser.add_argument("--save_prefix_path", default="csatqa")
21
+ parser.add_argument("--task_prefix", default="")
22
+ return parser.parse_args()
23
+
24
+
25
+ if __name__ == "__main__":
26
+ args = parse_args()
27
+
28
+ # get filename of base_yaml so we can `"include": ` it in our other YAMLs.
29
+ base_yaml_name = os.path.split(args.base_yaml_path)[-1]
30
+ with open(args.base_yaml_path, encoding="utf-8") as f:
31
+ base_yaml = yaml.full_load(f)
32
+
33
+ for name in tqdm(SUBSETS):
34
+ yaml_dict = {
35
+ "include": base_yaml_name,
36
+ "task": f"csatqa_{args.task_prefix}_{name}"
37
+ if args.task_prefix != ""
38
+ else f"csatqa_{name.lower()}",
39
+ "dataset_name": name,
40
+ }
41
+
42
+ file_save_path = args.save_prefix_path + f"_{name.lower()}.yaml"
43
+ eval_logger.info(f"Saving yaml for subset {name} to {file_save_path}")
44
+ with open(file_save_path, "w", encoding="utf-8") as yaml_file:
45
+ yaml.dump(
46
+ yaml_dict,
47
+ yaml_file,
48
+ width=float("inf"),
49
+ allow_unicode=True,
50
+ default_style='"',
51
+ )
rag-evaluation-harness/lm_eval/tasks/csatqa/csatqa_gr.yaml ADDED
@@ -0,0 +1,3 @@
1
+ "dataset_name": "GR"
2
+ "include": "_default_csatqa_yaml"
3
+ "task": "csatqa_gr"
rag-evaluation-harness/lm_eval/tasks/csatqa/csatqa_rch.yaml ADDED
@@ -0,0 +1,3 @@
1
+ "dataset_name": "RCH"
2
+ "include": "_default_csatqa_yaml"
3
+ "task": "csatqa_rch"
rag-evaluation-harness/lm_eval/tasks/csatqa/csatqa_rcss.yaml ADDED
@@ -0,0 +1,3 @@
1
+ "dataset_name": "RCSS"
2
+ "include": "_default_csatqa_yaml"
3
+ "task": "csatqa_rcss"
rag-evaluation-harness/lm_eval/tasks/french_bench/_default_template_yaml ADDED
@@ -0,0 +1,4 @@
1
+ test_split: test
2
+ fewshot_split: valid
3
+ fewshot_config:
4
+ sampler: first_n
rag-evaluation-harness/lm_eval/tasks/french_bench/french_bench_arc_challenge.yaml ADDED
@@ -0,0 +1,21 @@
1
+ group:
2
+ - french_bench
3
+ - french_bench_mc
4
+ task: french_bench_arc_challenge
5
+ dataset_path: manu/french_bench_arc_challenge
6
+ output_type: multiple_choice
7
+ training_split: train
8
+ validation_split: validation
9
+ test_split: test
10
+ doc_to_text: "Question: {{question}}\nRéponse:"
11
+ doc_to_target: "{{['A', 'B', 'C', 'D'].index(answerKey)}}"
12
+ doc_to_choice: "{{choices}}"
13
+ should_decontaminate: true
14
+ doc_to_decontamination_query: "Question: {{question}}\nRéponse:"
15
+ metric_list:
16
+ - metric: acc
17
+ aggregation: mean
18
+ higher_is_better: true
19
+ - metric: acc_norm
20
+ aggregation: mean
21
+ higher_is_better: true
rag-evaluation-harness/lm_eval/tasks/french_bench/french_bench_boolqa.yaml ADDED
@@ -0,0 +1,23 @@
1
+ include: "_default_template_yaml"
2
+ group:
3
+ - french_bench
4
+ - french_bench_extra
5
+ description: "D'après l'information dans le contexte donné, quelle est la réponse à la question ?"
6
+ task: french_bench_boolqa
7
+ dataset_path: manu/french_boolq
8
+ output_type: multiple_choice
9
+ validation_split: valid
10
+ doc_to_text: "\nContexte: {{passage}}\n\nQuestion: {{question}}\n"
11
+ doc_to_choice: ["Oui", "Non"]
12
+ # doc_to_text: "\nContexte: {{passage}}\n\nQuestion: {{question}}\n\nD'après l'information dans le contexte, la réponse est:\nA. Oui \nB. Non\n\nRéponse:"
13
+ # doc_to_choice: ["A", "B"]
14
+ doc_to_target: "{{[1, 0].index(label)}}"
15
+ should_decontaminate: true
16
+ doc_to_decontamination_query: passage
17
+ metric_list:
18
+ - metric: acc
19
+ aggregation: mean
20
+ higher_is_better: true
21
+ - metric: acc_norm
22
+ aggregation: mean
23
+ higher_is_better: true
rag-evaluation-harness/lm_eval/tasks/french_bench/french_bench_fquadv2_genq.yaml ADDED
@@ -0,0 +1,31 @@
1
+ include: "_default_template_yaml"
2
+ group:
3
+ - french_bench
4
+ - french_bench_gen
5
+ description: "D'après l'information dans le contexte donné, quelle question a été posée pour obtenir la réponse donnée ?"
6
+ task: french_bench_fquadv2_genq
7
+ dataset_path: manu/fquad2_test
8
+ output_type: generate_until
9
+ validation_split: valid_hasAns
10
+ test_split: test_hasAns
11
+ fewshot_split: valid_hasAns
12
+ doc_to_text: "\nContexte: {{context}}\n\nRéponse: {% if answers.text| length > 0 %}{{answers.text[0]}}{% else %}{{['Impossible']}}{% endif %}\n\nQuestion:"
13
+ doc_to_target: "{{question}}"
14
+ target_delimiter: " "
15
+ should_decontaminate: true
16
+ doc_to_decontamination_query: question
17
+ generation_kwargs:
18
+ until:
19
+ - "\n"
20
+ # filter_list:
21
+ # - name: remove_whitespace
22
+ # filter:
23
+ # - function: remove_whitespace
24
+ # - function: take_first
25
+ metric_list:
26
+ - metric: !function utils.rouge1
27
+ higher_is_better: true
28
+ aggregation: !function utils.rouge1_agg
29
+ - metric: !function utils.f1
30
+ aggregation: mean
31
+ higher_is_better: true
rag-evaluation-harness/lm_eval/tasks/french_bench/french_bench_hellaswag.yaml ADDED
@@ -0,0 +1,20 @@
1
+ group:
2
+ - french_bench
3
+ - french_bench_mc
4
+ task: french_bench_hellaswag
5
+ dataset_path: manu/french_bench_hellaswag
6
+ output_type: multiple_choice
7
+ training_split: validation
8
+ validation_split: validation
9
+ test_split: null
10
+ process_docs: !function utils.process_docs
11
+ doc_to_text: "{{query}}"
12
+ doc_to_target: "{{label}}"
13
+ doc_to_choice: "{{choices}}"
14
+ metric_list:
15
+ - metric: acc
16
+ aggregation: mean
17
+ higher_is_better: true
18
+ - metric: acc_norm
19
+ aggregation: mean
20
+ higher_is_better: true
rag-evaluation-harness/lm_eval/tasks/french_bench/french_bench_opus_perplexity.yaml ADDED
@@ -0,0 +1,23 @@
1
+ group:
2
+ - french_bench_perplexity
3
+ task: french_bench_opus_perplexity
4
+ dataset_path: manu/opus100-en-fr
5
+ output_type: loglikelihood_rolling
6
+ test_split: test
7
+ fewshot_split: validation
8
+ validation_split: validation
9
+ num_fewshot: 0
10
+ doc_to_text: ""
11
+ doc_to_target: "{{text}}"
12
+ should_decontaminate: true
13
+ doc_to_decontamination_query: "{{text}}"
14
+ metric_list:
15
+ - metric: word_perplexity
16
+ aggregation: weighted_perplexity
17
+ higher_is_better: false
18
+ - metric: byte_perplexity
19
+ aggregation: weighted_perplexity
20
+ higher_is_better: false
21
+ - metric: bits_per_byte
22
+ aggregation: bits_per_byte
23
+ higher_is_better: false
rag-evaluation-harness/lm_eval/tasks/french_bench/french_bench_orangesum_title.yaml ADDED
@@ -0,0 +1,28 @@
1
+ include: "_default_template_yaml"
2
+ group:
3
+ - french_bench
4
+ - french_bench_extra
5
+ description: "Trouve le titre de l'article."
6
+ task: french_bench_orangesum_title
7
+ dataset_path: orange_sum
8
+ dataset_name: title
9
+ output_type: generate_until
10
+ validation_split: validation
11
+ fewshot_split: validation
12
+ doc_to_text: "\nArticle: {{text}}\n\nTitre:"
13
+ doc_to_target: "{{summary}}"
14
+ target_delimiter: " "
15
+ should_decontaminate: true
16
+ doc_to_decontamination_query: summary
17
+ generation_kwargs:
18
+ until:
19
+ - "\n"
20
+ # filter_list:
21
+ # - name: remove_whitespace
22
+ # filter:
23
+ # - function: remove_whitespace
24
+ # - function: take_first
25
+ metric_list:
26
+ - metric: !function utils.rouge1
27
+ higher_is_better: true
28
+ aggregation: !function utils.rouge1_agg
rag-evaluation-harness/lm_eval/tasks/french_bench/french_bench_wikitext_fr.yaml ADDED
@@ -0,0 +1,25 @@
1
+ group:
2
+ - french_bench_perplexity
3
+ task: french_bench_wikitext_fr
4
+ dataset_path: asi/wikitext_fr
5
+ dataset_name: wikitext-35
6
+ output_type: loglikelihood_rolling
7
+ training_split: train
8
+ validation_split: validation
9
+ test_split: test
10
+ num_fewshot: 0
11
+ doc_to_text: ""
12
+ doc_to_target: !function preprocess_wikitext.wikitext_detokenizer
13
+ process_results: !function preprocess_wikitext.process_results
14
+ should_decontaminate: true
15
+ doc_to_decontamination_query: "{{paragraph}}"
16
+ metric_list:
17
+ - metric: word_perplexity
18
+ aggregation: weighted_perplexity
19
+ higher_is_better: false
20
+ - metric: byte_perplexity
21
+ aggregation: weighted_perplexity
22
+ higher_is_better: false
23
+ - metric: bits_per_byte
24
+ aggregation: bits_per_byte
25
+ higher_is_better: false
rag-evaluation-harness/lm_eval/tasks/french_bench/preprocess_wikitext.py ADDED
@@ -0,0 +1,48 @@
1
+ import re
2
+
3
+
4
+ def wikitext_detokenizer(doc):
5
+ string = doc["paragraph"]
6
+ # contractions
7
+ string = string.replace("s '", "s'")
8
+ string = re.sub(r"/' [0-9]/", r"/'[0-9]/", string)
9
+ # number separators
10
+ string = string.replace(" @-@ ", "-")
11
+ string = string.replace(" @,@ ", ",")
12
+ string = string.replace(" @.@ ", ".")
13
+ # punctuation
14
+ string = string.replace(" : ", ": ")
15
+ string = string.replace(" ; ", "; ")
16
+ string = string.replace(" . ", ". ")
17
+ string = string.replace(" ! ", "! ")
18
+ string = string.replace(" ? ", "? ")
19
+ string = string.replace(" , ", ", ")
20
+ # double brackets
21
+ string = re.sub(r"\(\s*([^\)]*?)\s*\)", r"(\1)", string)
22
+ string = re.sub(r"\[\s*([^\]]*?)\s*\]", r"[\1]", string)
23
+ string = re.sub(r"{\s*([^}]*?)\s*}", r"{\1}", string)
24
+ string = re.sub(r"\"\s*([^\"]*?)\s*\"", r'"\1"', string)
25
+ string = re.sub(r"'\s*([^']*?)\s*'", r"'\1'", string)
26
+ # miscellaneous
27
+ string = string.replace("= = = =", "====")
28
+ string = string.replace("= = =", "===")
29
+ string = string.replace("= =", "==")
30
+ string = string.replace(" " + chr(176) + " ", chr(176))
31
+ string = string.replace(" \n", "\n")
32
+ string = string.replace("\n ", "\n")
33
+ string = string.replace(" N ", " 1 ")
34
+ string = string.replace(" 's", "'s")
35
+
36
+ return string
37
+
38
+
39
+ def process_results(doc, results):
40
+ (loglikelihood,) = results
41
+ # IMPORTANT: wikitext counts number of words in *original doc before detokenization*
42
+ _words = len(re.split(r"\s+", doc["paragraph"]))
43
+ _bytes = len(doc["paragraph"].encode("utf-8"))
44
+ return {
45
+ "word_perplexity": (loglikelihood, _words),
46
+ "byte_perplexity": (loglikelihood, _bytes),
47
+ "bits_per_byte": (loglikelihood, _bytes),
48
+ }
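A quick illustration of what the detokenizer undoes (hedged: the import assumes `preprocess_wikitext.py` is importable as a top-level module; the sample sentence is invented):

```python
# Hedged example: wikitext-style spacing around punctuation, brackets, and
# "@,@" number separators is collapsed back to normal French text.
from preprocess_wikitext import wikitext_detokenizer  # assumed import path

doc = {"paragraph": "Il mesure 1 @,@ 5 m ( environ ) , selon l' étude ."}
print(wikitext_detokenizer(doc))
# -> Il mesure 1,5 m (environ), selon l' étude .
```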
rag-evaluation-harness/lm_eval/tasks/french_bench/utils.py ADDED
@@ -0,0 +1,102 @@
1
+ import collections
2
+ import re
3
+ import string
4
+
5
+ import datasets
6
+ import evaluate
7
+
8
+
9
+ def normalize_answer(s):
10
+ """Lower text and remove punctuation, articles and extra whitespace."""
11
+
12
+ def remove_articles(text):
13
+ regex = re.compile(r"\b(un|une|des|le|la|les)\b", re.UNICODE)
14
+ return re.sub(regex, " ", text)
15
+
16
+ def white_space_fix(text):
17
+ return " ".join(text.split())
18
+
19
+ def remove_punc(text):
20
+ exclude = set(string.punctuation)
21
+ return "".join(ch for ch in text if ch not in exclude)
22
+
23
+ def lower(text):
24
+ return text.lower()
25
+
26
+ return white_space_fix(remove_articles(remove_punc(lower(s))))
27
+
28
+
29
+ def get_tokens(s):
30
+ if not s:
31
+ return []
32
+ return normalize_answer(s).split()
33
+
34
+
35
+ # Exact match (the normalized answer exactly matches the gold answer)
36
+ def exact(predictions, references):
37
+ return int(normalize_answer(references[0]) == normalize_answer(predictions[0]))
38
+
39
+
40
+ # The F-score of predicted tokens versus the gold answer
41
+ def f1(predictions, references):
42
+ gold_toks = get_tokens(references[0])
43
+ pred_toks = get_tokens(predictions[0])
44
+ common = collections.Counter(gold_toks) & collections.Counter(pred_toks)
45
+ num_same = sum(common.values())
46
+ if len(gold_toks) == 0 or len(pred_toks) == 0:
47
+ # If either is no-answer, then F1 is 1 if they agree, 0 otherwise
48
+ return int(gold_toks == pred_toks)
49
+ if num_same == 0:
50
+ return 0
51
+ precision = 1.0 * num_same / len(pred_toks)
52
+ recall = 1.0 * num_same / len(gold_toks)
53
+ f1 = (2 * precision * recall) / (precision + recall)
54
+ return f1
55
+
56
+
57
+ def rouge1(items):
58
+ """
59
+ # passthrough for efficiency
60
+ """
61
+ return items
62
+
63
+
64
+ def rouge1_agg(items):
65
+ """
66
+ Aggregate (reference, prediction) pairs and compute ROUGE-1; higher is better.
67
+ """
68
+ refs = list(zip(*items))[0]
69
+ preds = list(zip(*items))[1]
70
+ rouge_scorer = evaluate.load("rouge")
71
+ return rouge_scorer.compute(predictions=preds, references=refs)["rouge1"]
72
+
73
+
74
+ def is_included(items):
75
+ """
76
+ Return True if the gold reference (items[0]) appears as a substring of the prediction (items[1]).
77
+ """
78
+ if items[0] in items[1]:
79
+ return True
80
+ return False
81
+
82
+
83
+ def preprocess(text):
84
+ text = text.strip()
85
+ # NOTE: Brackets are artifacts of the WikiHow dataset portion of HellaSwag.
86
+ text = text.replace(" [title]", ". ")
87
+ text = re.sub("\\[.*?\\]", "", text)
88
+ text = text.replace(" ", " ")
89
+ return text
90
+
91
+
92
+ def process_docs(dataset: datasets.Dataset) -> datasets.Dataset:
93
+ def _process_doc(doc):
94
+ ctx = doc["ctx_a"] + " " + doc["ctx_b"].capitalize()
95
+ out_doc = {
96
+ "query": preprocess(doc["activity_label"] + ": " + ctx),
97
+ "choices": [preprocess(ending) for ending in doc["endings"]],
98
+ "gold": int(doc["label"]),
99
+ }
100
+ return out_doc
101
+
102
+ return dataset.map(_process_doc)
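A tiny worked example of the SQuAD-style token F1 above (hedged: it assumes this module is importable as `utils`; the sentences are invented). `normalize_answer` lowercases, strips punctuation, and drops French articles before token overlap is scored:

```python
# Hedged example: token-level F1 after French normalization.
from utils import f1  # assumed import path for the module above

prediction = ["La tour Eiffel se trouve à Paris."]
reference = ["la Tour Eiffel est à Paris"]
# Shared tokens {tour, eiffel, à, paris}: precision 4/6, recall 4/5, F1 = 8/11.
print(round(f1(prediction, reference), 3))  # -> 0.727
```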
rag-evaluation-harness/lm_eval/tasks/kobest/README.md ADDED
@@ -0,0 +1,37 @@
1
+ # KoBEST
2
+
3
+ ### Paper
4
+ Title: `KOBEST: Korean Balanced Evaluation of Significant Tasks`
5
+
6
+ Abstract: https://arxiv.org/abs/2204.04541
7
+
8
+ A well-formulated benchmark plays a critical role in spurring advancements in the natural language processing (NLP) field, as it allows objective and precise evaluation of diverse models. As modern language models (LMs) have become more elaborate and sophisticated, more difficult benchmarks that require linguistic knowledge and reasoning have been proposed. However, most of these benchmarks only support English, and great effort is necessary to construct benchmarks for other low resource languages. To this end, we propose a new benchmark named Korean balanced evaluation of significant tasks (KoBEST), which consists of five Korean-language downstream tasks. Professional Korean linguists designed the tasks that require advanced Korean linguistic knowledge. Moreover, our data is purely annotated by humans and thoroughly reviewed to guarantee high data quality. We also provide baseline models and human performance results. Our dataset is available on the Huggingface.
9
+
10
+
11
+ Homepage: https://huggingface.co/datasets/skt/kobest_v1
12
+
13
+ ### Groups and Tasks
14
+
15
+ #### Groups
16
+
17
+ - `kobest`
18
+
19
+ #### Tasks
20
+
21
+ - `kobest_boolq`
22
+ - `kobest_copa`
23
+ - `kobest_hellaswag`
24
+ - `kobest_sentineg`
25
+ - `kobest_wic`
26
+
27
+
28
+ ### Citation
29
+
30
+ @misc{kim2022kobest,
31
+ author={Dohyeong Kim and Myeongjun Jang and Deuk Sin Kwon and Eric Davis},
32
+ title={KOBEST: Korean Balanced Evaluation of Significant Tasks},
33
+ DOI={https://doi.org/10.48550/arXiv.2204.04541},
34
+ publisher={arXiv},
35
+ year={2022},
36
+ month={Apr}
37
+ }
rag-evaluation-harness/lm_eval/tasks/kobest/kobest_boolq.yaml ADDED
@@ -0,0 +1,23 @@
1
+ group:
2
+ - kobest
3
+ task: kobest_boolq
4
+ dataset_path: skt/kobest_v1
5
+ dataset_name: boolq
6
+ output_type: multiple_choice
7
+ training_split: train
8
+ validation_split: validation
9
+ test_split: test
10
+ doc_to_text: "{{paragraph}} 질문: {{question}} 답변: "
11
+ doc_to_target: "{{label}}"
12
+ doc_to_choice: ["아니오", "예"]
13
+ metric_list:
14
+ - metric: acc
15
+ aggregation: mean
16
+ higher_is_better: True
17
+ - metric: f1
18
+ aggregation: !function utils.macro_f1_score
19
+ average: macro
20
+ hf_evaluate: true
21
+ higher_is_better: True
22
+ metadata:
23
+ version: 1.0
rag-evaluation-harness/lm_eval/tasks/kobest/utils.py ADDED
@@ -0,0 +1,48 @@
1
+ from datasets import Dataset
2
+ from sklearn.metrics import f1_score
3
+
4
+
5
+ def copa_doc_to_text(doc: dict) -> str:
6
+ connector = {"원인": " 왜냐하면", "결과": " 그래서"}[doc["question"].strip()]
7
+ return f"""{doc["premise"]} {connector}"""
8
+
9
+
10
+ def copa_doc_to_target(doc: dict) -> str:
11
+ correct_choice = doc["alternative_1"] if doc["label"] == 0 else doc["alternative_2"]
12
+ return f"""{correct_choice}"""
13
+
14
+
15
+ def copa_doc_to_choice(doc: dict) -> list:
16
+ return [f"""{doc["alternative_1"]}""", f"""{doc["alternative_2"]}"""]
17
+
18
+
19
+ def sentineg_doc_to_text(doc: dict):
20
+ return f"""문장: {doc["sentence"]} 긍부정:"""
21
+
22
+
23
+ def wic_doc_to_text(doc: dict) -> str:
24
+ return f"""문장1: {doc["context_1"]} 문장2: {doc["context_2"]} 두 문장에서 {doc["word"]}가 같은 뜻으로 쓰였나?"""
25
+
26
+
27
+ def hellaswag_process_doc(doc: Dataset) -> Dataset:
28
+ def preprocessor(dataset):
29
+ return {
30
+ "query": f"""문장: {dataset["context"]}""",
31
+ "choices": [
32
+ dataset["ending_1"],
33
+ dataset["ending_2"],
34
+ dataset["ending_3"],
35
+ dataset["ending_4"],
36
+ ],
37
+ "gold": int(dataset["label"]),
38
+ }
39
+
40
+ return doc.map(preprocessor)
41
+
42
+
43
+ def macro_f1_score(items):
44
+ unzipped_list = list(zip(*items))
45
+ golds = unzipped_list[0]
46
+ preds = unzipped_list[1]
47
+ fscore = f1_score(golds, preds, average="macro")
48
+ return fscore
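A small check of the aggregation above (hedged: it assumes this module is importable as `utils`; the label pairs are invented). Each item is a `(gold, prediction)` pair, matching how `macro_f1_score` unzips its input:

```python
# Hedged example: macro-averaged F1 over (gold, prediction) pairs.
from utils import macro_f1_score  # assumed import path for the module above

items = [(1, 1), (0, 1), (1, 0), (0, 0)]
# Per-class F1 is 0.5 for both labels, so the macro average is 0.5.
print(macro_f1_score(items))  # -> 0.5
```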
rag-evaluation-harness/lm_eval/tasks/okapi/arc_multilingual/arc_ar.yaml ADDED
@@ -0,0 +1,7 @@
1
+ include: _arc_yaml
2
+ task: arc_ar
3
+ dataset_path: alexandrainst/m_arc
4
+ dataset_name: ar
5
+ training_split: train
6
+ validation_split: validation
7
+ test_split: test
rag-evaluation-harness/lm_eval/tasks/okapi/arc_multilingual/arc_nl.yaml ADDED
@@ -0,0 +1,7 @@
1
+ include: _arc_yaml
2
+ task: arc_nl
3
+ dataset_path: alexandrainst/m_arc
4
+ dataset_name: nl
5
+ training_split: train
6
+ validation_split: validation
7
+ test_split: test
rag-evaluation-harness/lm_eval/tasks/okapi/hellaswag_multilingual/README.md ADDED
@@ -0,0 +1,48 @@
1
+ # Multilingual HellaSwag
2
+
3
+ ### Paper
4
+
5
+ Title: `Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback`
6
+
7
+ Abstract: https://arxiv.org/abs/2307.16039
8
+
9
+ A key technology for the development of large language models (LLMs) involves instruction tuning that helps align the models' responses with human expectations to realize impressive learning abilities. Two major approaches for instruction tuning characterize supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), which are currently applied to produce the best commercial LLMs (e.g., ChatGPT). To improve the accessibility of LLMs for research and development efforts, various instruction-tuned open-source LLMs have also been introduced recently, e.g., Alpaca, Vicuna, to name a few. However, existing open-source LLMs have only been instruction-tuned for English and a few popular languages, thus hindering their impacts and accessibility to many other languages in the world. Among a few very recent work to explore instruction tuning for LLMs in multiple languages, SFT has been used as the only approach to instruction-tune LLMs for multiple languages. This has left a significant gap for fine-tuned LLMs based on RLHF in diverse languages and raised important questions on how RLHF can boost the performance of multilingual instruction tuning. To overcome this issue, we present Okapi, the first system with instruction-tuned LLMs based on RLHF for multiple languages. Okapi introduces instruction and response-ranked data in 26 diverse languages to facilitate the experiments and development of future multilingual LLM research. We also present benchmark datasets to enable the evaluation of generative LLMs in multiple languages. Our experiments demonstrate the advantages of RLHF for multilingual instruction over SFT for different base models and datasets. Our framework and resources are released at https://github.com/nlp-uoregon/Okapi.
10
+
11
+ Homepage: `https://github.com/nlp-uoregon/Okapi`
12
+
13
+
14
+ ### Citation
15
+
16
+ ```
17
+ @article{dac2023okapi,
18
+ title={Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback},
19
+ author={Dac Lai, Viet and Van Nguyen, Chien and Ngo, Nghia Trung and Nguyen, Thuat and Dernoncourt, Franck and Rossi, Ryan A and Nguyen, Thien Huu},
20
+ journal={arXiv e-prints},
21
+ pages={arXiv--2307},
22
+ year={2023}
23
+ }
24
+ ```
25
+
26
+ ### Groups and Tasks
27
+
28
+ #### Groups
29
+
30
+ - hellaswag_multilingual
31
+
32
+ #### Tasks
33
+
34
+ - `hellaswag_{ar,bn,ca,da,de,es,eu,fr,gu,hi,hr,hu,hy,id,it,kn,ml,mr,ne,nl,pt,ro,ru,sk,sr,sv,ta,te,uk,vi}`
35
+
36
+
37
+ ### Checklist
38
+
39
+ For adding novel benchmarks/datasets to the library:
40
+ * [x] Is the task an existing benchmark in the literature?
41
+ * [x] Have you referenced the original paper that introduced the task?
42
+ * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
43
+
44
+
45
+ If other tasks on this dataset are already supported:
46
+ * [ ] Is the "Main" variant of this task clearly denoted?
47
+ * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
48
+ * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?