|
--- |
|
license: cc-by-nc-nd-4.0 |
|
task_categories: |
|
- question-answering |
|
tags: |
|
- reasoning |
|
- linguistics |
|
- benchmark |
|
pretty_name: L2 |
|
size_categories: |
|
- 1K<n<10K |
|
source_datasets: |
|
- https://huggingface.co/datasets/ambean/lingOly |
|
configs: |
|
- config_name: default |
|
data_files: |
|
- split: test |
|
path: test_small.zip |
|
extra_gated_prompt: >- |
|
### LingOly-TOO LICENSE AGREEMENT |
|
|
|
This dataset is governed by a CC-BY-NC-ND-4.0 license. |
|
|
|
In addition to this license, we ask that uses of the dataset are in line with |
|
the Acceptable Use policy described below. |
|
|
|
|
|
|
|
This dataset is intended as a benchmark for reasoning in large language |
|
models. For the integrity of the benchmark, users should not: |
|
* Re-distribute the questions or answers of the benchmark in formats (such as plain text) that expose the benchmark to web scraping.
|
* Train language models directly using the content of this benchmark. |
|
extra_gated_fields: |
|
By clicking Submit below I accept the terms of the license and Acceptable Use policy: checkbox |
|
extra_gated_button_content: Submit |
|
--- |
|
 |
|
## Links |
|
- Website |
|
- Paper |
|
- Leaderboard |
|
- Repo |
|
|
|
## Summary

LingOly-TOO (L2) is a benchmark for reasoning in large language models. It is built by applying orthographic obfuscations to linguistics problems drawn from the original LingOly benchmark, with the aim of disentangling memorisation from genuine reasoning.
|
|
|
## Dataset format |
|
The data was created by generating up to 6 obfuscations per problem for 82 problems sourced from the original LingOly benchmark. Each example follows the format below:
|
|
|
```python |
|
{'question_n': # The question number in the problem |
|
'prompt': # The main text of the question |
|
'subprompts': [ |
|
{'questionpart_n': # For questions with several sub-parts (e.g. translating several sentences or matching pairs of sentences)
|
'question': # The text of the question part |
|
'answer': # The correct answer |
|
'matches_human_scoring': # Annotation metadata |
|
'fuzzy_matching': # Annotation metadata |
|
'manual_edit': # Annotation metadata |
|
}], |
|
'metadata.preamble': # General text from the overall problem that is shared across all questions for that problem |
|
'metadata.context': # Context text with important information; prepend it to the prompt so the question is solvable
|
'metadata.obfuscated': # Whether this example was obfuscated
|
'metadata.obf_num': # Obfuscation counter (unique per problem). This is 0 if unobfuscated |
|
'metadata.overall_question_n': # The problem number |
|
'metadata.obfuscated_question_n': # Concatenation of problem number and obfuscation number |
|
} |
|
``` |
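
As a usage illustration, the sketch below shows one way to load the `test` split defined in the config above and assemble a prompt with its context using the `datasets` library. The repository id is a placeholder (the card does not state it), and field access assumes the flat column names shown in the schema; treat this as a sketch rather than official loading code.

```python
from datasets import load_dataset

# Minimal sketch, assuming you have accepted the gating terms and are logged in
# (e.g. via `huggingface-cli login`). Replace the placeholder with this dataset's repo id.
ds = load_dataset("your-org/lingoly-too", split="test")  # placeholder repo id

example = ds[0]

# Prepend the context to the main question text, as recommended above for solvability.
full_prompt = example["metadata.context"] + "\n\n" + example["prompt"]

# Each subprompt carries its own question text and gold answer.
for sub in example["subprompts"]:
    print(sub["questionpart_n"], sub["question"], "->", sub["answer"])
```
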
|
|
|
## Citation |
|
``` |
|
@article{khouja2025lingolytoo, |
|
title={LINGOLY-TOO: Disentangling Memorisation from Reasoning with Linguistic Templatisation and Orthographic Obfuscation}, |
|
author={Khouja, Jude and Korgul, Karolina and Hellsten, Simeon and Yang, Lingyi and Neacșu, Vlad A. and Mayne, Harry and Kearns, Ryan O. and Bean, Andrew M. and Mahdi, Adam}, |
|
year={2025}, |
|
eprint={},
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CL}, |
|
url={},
|
} |
|
|
|
``` |