---
license: cc-by-nc-nd-4.0
task_categories:
- question-answering
tags:
- reasoning
- linguistics
- benchmark
pretty_name: L2
size_categories:
- 1K<n<10K
source_datasets:
- https://huggingface.co/datasets/ambean/lingOly
configs:
- config_name: default
  data_files:
  - split: test
    path: test_small.zip
extra_gated_prompt: >-
  ### LingOly-TOO LICENSE AGREEMENT

  This dataset is governed by a CC-BY-NC-ND-4.0 license.
  In addition to this license, we ask that uses of the dataset are in line with
  the Acceptable Use policy described below.

  ### Acceptable Use Policy

  This dataset is intended as a benchmark for reasoning in large language
  models. For the integrity of the benchmark, users should not:

  * Re-distribute the questions or answers of the benchmark in formats (such as plain text) which leak the benchmark to web-scraping.

  * Train language models directly using the content of this benchmark.
extra_gated_fields:
  By clicking Submit below I accept the terms of the license and Acceptable Use policy: checkbox
extra_gated_button_content: Submit
---
# LingOly-TOO (L2)

## Links
- 📊 Website and Leaderboard
- 📎 Paper
- 🧩 Code
## Summary

LingOly-TOO (L2) is a challenging linguistics reasoning benchmark designed to counteract answering without reasoning (e.g. by guessing or memorising answers).
## Dataset format

The LingOly-TOO benchmark was created by generating up to 6 obfuscations per problem for 82 problems sourced from the original LingOly benchmark. The dataset contains over 1,200 question-answer pairs. Some answers consist of multiple parts. Each example has the following fields:
```
{'question_n':            # The question number within the problem
 'prompt':                # The main text of the question, including preamble, context and previous questions
 'completion':            # The correct answer
 'question':              # The question text only (without the rest of the prompt)
 'context':               # Context containing important background information; prepend it to the prompt so the question is solvable
 'obfuscated':            # Whether this example was obfuscated
 'overall_question_n':    # The problem number
 'obfuscated_question_n': # Concatenation of the problem number and obfuscation number
}
```
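For reference, here is a minimal sketch of loading the test split and assembling a full prompt with the `datasets` library. The repository id below is a placeholder, and this assumes gated access to the dataset has already been granted:

```python
from datasets import load_dataset

# Placeholder repository id; replace with this dataset's Hugging Face repo id.
dataset = load_dataset("REPO_ID", split="test")

example = dataset[0]

# Prepend the context to the prompt, as recommended above,
# so that the question is solvable on its own.
full_prompt = example["context"] + "\n\n" + example["prompt"]

print(full_prompt)
print("Expected answer:", example["completion"])
```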
## Citation
```bibtex
@article{khouja2025lingolytoodisentanglingmemorisationreasoning,
      title={LINGOLY-TOO: Disentangling Memorisation from Reasoning with Linguistic Templatisation and Orthographic Obfuscation},
      author={Jude Khouja and Karolina Korgul and Simi Hellsten and Lingyi Yang and Vlad Neacsu and Harry Mayne and Ryan Kearns and Andrew Bean and Adam Mahdi},
      year={2025},
      eprint={2503.02972},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2503.02972},
}
```