# MELA
### Paper
Title: [MELA: Multilingual Evaluation of Linguistic Acceptability](https://arxiv.org/abs/2311.09033)
**Abstract**: In this work, we present the largest benchmark to date on linguistic acceptability: Multilingual Evaluation of Linguistic Acceptability -- MELA, with 46K samples covering 10 languages from a diverse set of language families. We establish LLM baselines on this benchmark, and investigate cross-lingual transfer in acceptability judgements with XLM-R. In pursuit of multilingual interpretability, we conduct probing experiments with fine-tuned XLM-R to explore the process of syntax capability acquisition. Our results show that GPT-4o exhibits a strong multilingual ability, outperforming fine-tuned XLM-R, while open-source multilingual models lag behind by a noticeable gap. Cross-lingual transfer experiments show that transfer in acceptability judgment is non-trivial: 500 Icelandic fine-tuning examples lead to 23 MCC performance in a completely unrelated language -- Chinese. Results of our probing experiments indicate that training on MELA improves the performance of XLM-R on syntax-related tasks.
Homepage: https://github.com/sjtu-compling/MELA
### Citation
```
@inproceedings{zhang2023mela,
author = {Ziyin Zhang and
Yikang Liu and
Weifang Huang and
Junyu Mao and
Rui Wang and
Hai Hu},
title = {{MELA:} Multilingual Evaluation of Linguistic Acceptability},
booktitle = {Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), {ACL} 2024, Bangkok, Thailand},
publisher = {Association for Computational Linguistics},
year = {2024},
url = {https://doi.org/10.48550/arXiv.2311.09033}
}
```
### Groups and Tasks
#### Groups
- `mela`: multilingual evaluation of linguistic acceptability
#### Tasks
- `mela_en`: English
- `mela_zh`: Chinese
- `mela_it`: Italian
- `mela_ru`: Russian
- `mela_de`: German
- `mela_fr`: French
- `mela_es`: Spanish
- `mela_ja`: Japanese
- `mela_ar`: Arabic
- `mela_is`: Icelandic
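The full benchmark can be run via the `mela` group, or individual languages can be selected by task name. For example, an illustrative command following the harness CLI usage shown in other task docs here (the model below is an arbitrary placeholder):
```
lm_eval --model hf --model_args pretrained=EleutherAI/pythia-160m --tasks mela_en,mela_zh
```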
### Checklist
For adding novel benchmarks/datasets to the library:
- [x] Is the task an existing benchmark in the literature?
- [x] Have you referenced the original paper that introduced the task?
- [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
- [ ] Is the "Main" variant of this task clearly denoted?
- [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
- [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/mela/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mela/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 2836
} |
# MGSM
### Paper
Title: `Language Models are Multilingual Chain-of-Thought Reasoners`
Abstract: https://arxiv.org/abs/2210.03057
Multilingual Grade School Math Benchmark (MGSM) is a benchmark of grade-school math problems, proposed in the paper [Language models are multilingual chain-of-thought reasoners](http://arxiv.org/abs/2210.03057).
The same 250 problems from [GSM8K](https://arxiv.org/abs/2110.14168) were each translated by human annotators into 10 languages. The 10 languages are:
- Spanish
- French
- German
- Russian
- Chinese
- Japanese
- Thai
- Swahili
- Bengali
- Telugu
GSM8K (Grade School Math 8K) is a dataset of 8.5K high quality linguistically diverse grade school math word problems. The dataset was created to support the task of question answering on basic mathematical problems that require multi-step reasoning.
You can find the input and targets for each of the ten languages (and English) as `.tsv` files.
We also include few-shot exemplars, manually translated into each language, in `exemplars.py`.
Homepage: https://github.com/google-research/url-nlp/tree/main/mgsm
### Citation
```
@misc{cobbe2021training,
title={Training Verifiers to Solve Math Word Problems},
author={Karl Cobbe and Vineet Kosaraju and Mohammad Bavarian and Jacob Hilton and Reiichiro Nakano and Christopher Hesse and John Schulman},
year={2021},
eprint={2110.14168},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
@misc{shi2022language,
title={Language Models are Multilingual Chain-of-Thought Reasoners},
author={Freda Shi and Mirac Suzgun and Markus Freitag and Xuezhi Wang and Suraj Srivats and Soroush Vosoughi and Hyung Won Chung and Yi Tay and Sebastian Ruder and Denny Zhou and Dipanjan Das and Jason Wei},
year={2022},
eprint={2210.03057},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Groups and Tasks
#### Groups
* `mgsm_direct`: Direct question
* `mgsm_direct_bn`: Bengali
* `mgsm_direct_de`: German
* `mgsm_direct_en`: English
* `mgsm_direct_es`: Spanish
* `mgsm_direct_fr`: French
* `mgsm_direct_ja`: Japanese
* `mgsm_direct_ru`: Russian
* `mgsm_direct_sw`: Swahili
* `mgsm_direct_te`: Telugu
* `mgsm_direct_th`: Thai
* `mgsm_direct_zh`: Chinese
* `mgsm_cot_native`: Question with Answer followed by CoT prompt in the same language as the dataset.
* `mgsm_cot_native_bn`: Bengali
* `mgsm_cot_native_de`: German
* `mgsm_cot_native_en`: English
* `mgsm_cot_native_es`: Spanish
* `mgsm_cot_native_fr`: French
* `mgsm_cot_native_ja`: Japanese
* `mgsm_cot_native_ru`: Russian
* `mgsm_cot_native_sw`: Swahili
* `mgsm_cot_native_te`: Telugu
* `mgsm_cot_native_th`: Thai
* `mgsm_cot_native_zh`: Chinese
Exemplar samples: https://github.com/google-research/url-nlp/blob/main/mgsm/exemplars.py
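As an illustration, the direct-answer and native-CoT variants for a single language can be compared with a command like the following (the model is an arbitrary placeholder; the CLI usage mirrors the examples elsewhere in these task docs):
```
lm_eval --model hf --model_args pretrained=EleutherAI/pythia-160m --tasks mgsm_direct_de,mgsm_cot_native_de
```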
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/mgsm/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mgsm/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 3503
} |
# MATH
ℹ️ This is the 4-shot variant!
## Paper
Measuring Mathematical Problem Solving With the MATH Dataset
https://arxiv.org/abs/2103.03874
Many intellectual endeavors require mathematical problem solving, but this skill remains beyond the capabilities of computers. To measure this ability in machine learning models, we introduce MATH, a new dataset of 12,500 challenging competition mathematics problems. Each problem in MATH has a full step-by-step solution which can be used to teach models to generate answer derivations and explanations.
NOTE: The few-shot prompts and the extraction of answers from generations are based on [Minerva](https://arxiv.org/abs/2206.14858), and exact-match equivalence is computed using the `sympy` library. This requires additional dependencies, which can be installed via the `lm-eval[math]` extra.
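A minimal setup sketch for the above (assuming a PyPI installation of the harness; the model below is an arbitrary placeholder):
```
pip install "lm-eval[math]"   # pulls in sympy and the other answer-checking dependencies
lm_eval --model hf --model_args pretrained=EleutherAI/pythia-160m --tasks minerva_math
```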
Homepage: https://github.com/hendrycks/math
## Citation
```
@article{hendrycksmath2021,
title={Measuring Mathematical Problem Solving With the MATH Dataset},
author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt},
journal={NeurIPS},
year={2021}
}
@misc{2206.14858,
Author = {Aitor Lewkowycz and Anders Andreassen and David Dohan and Ethan Dyer and Henryk Michalewski and Vinay Ramasesh and Ambrose Slone and Cem Anil and Imanol Schlag and Theo Gutman-Solo and Yuhuai Wu and Behnam Neyshabur and Guy Gur-Ari and Vedant Misra},
Title = {Solving Quantitative Reasoning Problems with Language Models},
Year = {2022},
Eprint = {arXiv:2206.14858},
}
```
### Groups and Tasks
#### Groups
- `minerva_math`
#### Tasks
- `minerva_math_algebra`
- `minerva_math_counting_and_prob`
- `minerva_math_geometry`
- `minerva_math_intermediate_algebra`
- `minerva_math_num_theory`
- `minerva_math_prealgebra`
- `minerva_math_precalc`
### Checklist
The checklist is the following:
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
* The implementation in the original paper is one where the model is first fine-tuned on the data. They do have a few-shot evaluation for GPT-3, however the few-shot context used here is sourced from [Lewkowycz et al](https://arxiv.org/abs/2206.14858). The achieved accuracy on Llama-2 models is comparable to that provided in the paper, though not identical.
If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant?
### Variant Wishlist
- [ ] zero-shot variant | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/minerva_math/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/minerva_math/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 2966
} |
# MMLU
### Paper
Title: `Measuring Massive Multitask Language Understanding`
Abstract: `https://arxiv.org/abs/2009.03300`
`The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more.`
Homepage: `https://github.com/hendrycks/test`
Note: The `Flan` variants are derived from [this repository](https://github.com/jasonwei20/flan-2) and are described in Appendix D.1 of [Scaling Instruction-Finetuned Language Models](https://arxiv.org/abs/2210.11416).
### Citation
```
@article{hendryckstest2021,
title={Measuring Massive Multitask Language Understanding},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
}
@article{hendrycks2021ethics,
title={Aligning AI With Shared Human Values},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andrew Critch and Jerry Li and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
}
```
### Groups, Tags, and Tasks
#### Groups
* `mmlu`: `Original multiple-choice MMLU benchmark`
* `mmlu_continuation`: `MMLU but with continuation prompts`
* `mmlu_generation`: `MMLU generation`
MMLU is the original benchmark as implemented by Hendrycks et al., with the choices in context and the answer letters (e.g. `A`, `B`, `C`, `D`) in the continuation.
`mmlu_continuation` is a cloze-style variant without the choices in context and the full answer choice in the continuation.
`mmlu_generation` is a generation variant, similar to the original but the LLM is asked to generate the correct answer letter.
#### Subgroups
* `mmlu_stem`
* `mmlu_humanities`
* `mmlu_social_sciences`
* `mmlu_other`
Subgroup variants are prefixed with the subgroup name, e.g. `mmlu_stem_continuation`.
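For example, the cloze-style STEM subgroup can be evaluated with a command such as the following (illustrative; the model is an arbitrary placeholder):
```
lm_eval --model hf --model_args pretrained=EleutherAI/pythia-160m --tasks mmlu_stem_continuation
```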
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant?
### Changelog
* ver 1 (PR #497): switch to original implementation
* ver 2 (PR #2116): add missing newline in description. | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/mmlu/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mmlu/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 2738
} |
# mmlu_pro
### Paper
Title: `MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark`
Abstract: `In the age of large-scale language models, benchmarks like the Massive Multitask Language Understanding (MMLU) have been pivotal in pushing the boundaries of what AI can achieve in language comprehension and reasoning across diverse domains. However, as models continue to improve, their performance on these benchmarks has begun to plateau, making it increasingly difficult to discern differences in model capabilities. This paper introduces MMLU-Pro, an enhanced dataset designed to extend the mostly knowledge-driven MMLU benchmark by integrating more challenging, reasoning-focused questions and expanding the choice set from four to ten options. Additionally, MMLU-Pro eliminates the trivial and noisy questions in MMLU. Our experimental results show that MMLU-Pro not only raises the challenge, causing a significant drop in accuracy by 16% to 33% compared to MMLU but also demonstrates greater stability under varying prompts. With 24 different prompt styles tested, the sensitivity of model scores to prompt variations decreased from 4-5% in MMLU to just 2% in MMLU-Pro. Additionally, we found that models utilizing Chain of Thought (CoT) reasoning achieved better performance on MMLU-Pro compared to direct answering, which is in stark contrast to the findings on the original MMLU, indicating that MMLU-Pro includes more complex reasoning questions. Our assessments confirm that MMLU-Pro is a more discriminative benchmark to better track progress in the field.`
Homepage: https://huggingface.co/datasets/TIGER-Lab/MMLU-Pro
### Citation
```bibtex
@misc{wang2024mmlupro,
title={MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark},
author={Yubo Wang and Xueguang Ma and Ge Zhang and Yuansheng Ni and Abhranil Chandra and Shiguang Guo and Weiming Ren and Aaran Arulraj and Xuan He and Ziyan Jiang and Tianle Li and Max Ku and Kai Wang and Alex Zhuang and Rongqi Fan and Xiang Yue and Wenhu Chen},
year={2024},
eprint={2406.01574},
archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
### Groups and Tasks
#### Groups
* `mmlu_pro`: All 14 subjects of the mmlu_pro dataset, evaluated following the methodology in mmlu's original implementation.
#### Tasks
The following tasks evaluate subjects in the mmlu_pro dataset
- `mmlu_pro_biology`
- `mmlu_pro_business`
- `mmlu_pro_chemistry`
- `mmlu_pro_computer_science`
- `mmlu_pro_economics`
- `mmlu_pro_engineering`
- `mmlu_pro_health`
- `mmlu_pro_history`
- `mmlu_pro_law`
- `mmlu_pro_math`
- `mmlu_pro_other`
- `mmlu_pro_philosophy`
- `mmlu_pro_physics`
- `mmlu_pro_psychology`
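For example, individual subjects or the full `mmlu_pro` group can be selected by name (illustrative command; the model is an arbitrary placeholder):
```
lm_eval --model hf --model_args pretrained=EleutherAI/pythia-160m --tasks mmlu_pro_math,mmlu_pro_physics
```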
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
### Changelog
* (tasks, group) 2024-09-23 -- (version 1 --> version 2)
* Added one newline to task description(s) as per [reference implementation](https://github.com/TIGER-AI-Lab/MMLU-Pro/blob/47b9891aacb8bd7cda29d5c5ba17b9434dd333bc/evaluate_from_local.py#L93) | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/mmlu_pro/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mmlu_pro/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 4175
} |
# MMLU-SR
## Paper
Title: [Reasoning or Simply Next Token Prediction? A Benchmark for Stress-Testing Large Language Models](https://arxiv.org/abs/2406.15468v1)
We propose MMLU-SR, a novel dataset designed to measure the true comprehension abilities of Large Language Models (LLMs) by challenging their performance in question-answering tasks with modified terms. We reasoned that an agent that ``truly'' understands a concept can still evaluate it when key terms are replaced by suitably defined alternate terms, and sought to differentiate such comprehension from mere text replacement. In our study, we modified standardized test questions by replacing a key term with a dummy word along with its definition. The key term could be in the context of questions, answers, or both questions and answers.
Notwithstanding the high scores achieved by recent popular LLMs on the MMLU leaderboard, we found a substantial reduction in model performance after such replacement, suggesting poor comprehension. This new benchmark provides a rigorous benchmark for testing true model comprehension, and poses a challenge to the broader scientific community.
Github Homepage: [https://github.com/Wang-ML-Lab/MMLU-SR](https://github.com/Wang-ML-Lab/MMLU-SR)
Huggingface Dataset: [https://huggingface.co/datasets/NiniCat/MMLU-SR](https://huggingface.co/datasets/NiniCat/MMLU-SR)
## Citation
```bib
@misc{wang2024reasoningsimplytokenprediction,
title={Reasoning or Simply Next Token Prediction? A Benchmark for Stress-Testing Large Language Models},
author={Wentian Wang and Paul Kantor and Jacob Feldman and Lazaros Gallos and Hao Wang},
year={2024},
eprint={2406.15468},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2406.15468},
}
```
### Groups and Tasks
#### Groups
- `mmlusr`: MMLU variant where the terminology in both the questions and the answers is modified.
- `mmlusr_answer_only`: MMLU variant where the terminology in the answers is modified.
- `mmlusr_question_only`: MMLU variant where the terminology in the questions is modified.
#### Tasks
There are 57 symbol replaced subjects in each group. You can run a single task by:
* `mmlusr_question_only_abstract_algebra`
Or by categories:
* `mmlusr_question_only_stem_tasks`
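For example (illustrative command; the model is an arbitrary placeholder):
```
lm_eval --model hf --model_args pretrained=EleutherAI/pythia-160m --tasks mmlusr_question_only_abstract_algebra
```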
### Checklist
The checklist is the following:
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant?
### Variant Wishlist
- [ ] zero-shot variant | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/mmlusr/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mmlusr/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 3420
} |
# MMMU Benchmark
### Paper
Title: `MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI`
Abstract: `MMMU is a new benchmark designed to evaluate multimodal models on massive multi-discipline tasks demanding college-level subject knowledge and deliberate reasoning.`
`The benchmark is composed of 30 tasks, for a total of 900 mixed image+text examples (some with multiple images in context)`
Homepage: `https://github.com/MMMU-Benchmark/MMMU/tree/main/mmmu`
Note: Some questions have multiple images in context. To control for this use `max_images=N` in model init.
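For example, an image cap can be set at model initialization like so (an illustrative command mirroring the `hf-multimodal` invocations listed under Variants below; the checkpoint is just one of those used there):
```
lm_eval --model hf-multimodal --model_args pretrained=HuggingFaceM4/idefics2-8b,max_images=2 --tasks mmmu_val --apply_chat_template
```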
### Citation
```
@inproceedings{yue2023mmmu,
title={MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI},
author={Xiang Yue and Yuansheng Ni and Kai Zhang and Tianyu Zheng and Ruoqi Liu and Ge Zhang and Samuel Stevens and Dongfu Jiang and Weiming Ren and Yuxuan Sun and Cong Wei and Botao Yu and Ruibin Yuan and Renliang Sun and Ming Yin and Boyuan Zheng and Zhenzhu Yang and Yibo Liu and Wenhao Huang and Huan Sun and Yu Su and Wenhu Chen},
booktitle={Proceedings of CVPR},
year={2024},
}
```
### Groups, Tags, and Tasks
#### Groups
* `mmmu_val`
* `mmmu_val_art_and_design`
* `mmmu_val_business`
* `mmmu_val_health_and_medicine`
* `mmmu_val_humanities_and_social_science`
* `mmmu_val_science`
* `mmmu_val_tech_and_engineering`
#### Tags
#### Tasks
* `mmmu_val_accounting`
* `mmmu_val_agriculture`
* `mmmu_val_architecture_and_engineering`
* `mmmu_val_art`
* `mmmu_val_art_theory`
* `mmmu_val_basic_medical_science`
* `mmmu_val_biology`
* `mmmu_val_chemistry`
* `mmmu_val_computer_science`
* `mmmu_val_clinical_medicine`
* `mmmu_val_design`
* `mmmu_val_diagnostics_and_laboratory_medicine`
* `mmmu_val_electronics`
* `mmmu_val_energy_and_power`
* `mmmu_val_economics`
* `mmmu_val_finance`
* `mmmu_val_geography`
* `mmmu_val_history`
* ...
### Variants
The `mmmu_val` group implements MMMU using processing code [from the original MMMU authors](https://github.com/MMMU-Benchmark/MMMU/tree/main/mmmu) and uses the prompt format found in [the MMMU repository for Llava-1.5](https://github.com/MMMU-Benchmark/MMMU/blob/main/mmmu/configs/llava1.5.yaml). This implementation should give scores on par with or slightly higher than those reported by [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval/tree/main/lmms_eval/tasks/mmmu) for `mmmu_val` and the MMMU repository code.
Scores on several tested models (**all with `--apply_chat_template`**) are:
Qwen2-VL-2B:
```
hf-multimodal (pretrained=Qwen/Qwen2-VL-2B-Instruct,attn_implementation=flash_attention_2,dtype=bfloat16,convert_img_format=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 2
```
```
| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|
|--------------------------------|------:|------|------|------|---|-----:|---|-----:|
|mmmu_val | 0|none | |acc |↑ |0.3778|± |0.0155|
| - Art and Design | 0|none | |acc |↑ |0.5500|± |0.0415|
| - Business | 0|none | |acc |↑ |0.3600|± |0.0389|
| - Health and Medicine | 0|none | |acc |↑ |0.3667|± |0.0394|
| - Humanities and Social Science| 0|none | |acc |↑ |0.5167|± |0.0438|
| - Science | 0|none | |acc |↑ |0.2467|± |0.0352|
| - Tech and Engineering | 0|none | |acc |↑ |0.3143|± |0.0317|
```
Author-reported score: 41.1%
Qwen2-VL-7B:
```
hf-multimodal (pretrained=Qwen/Qwen2-VL-7B-Instruct,attn_implementation=flash_attention_2,dtype=bfloat16,convert_img_format=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 2
```
```
| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|
|--------------------------------|------:|------|------|------|---|-----:|---|-----:|
|mmmu_val | 0|none | |acc |↑ |0.5056|± |0.0160|
| - Art and Design | 0|none | |acc |↑ |0.6917|± |0.0398|
| - Business | 0|none | |acc |↑ |0.4333|± |0.0406|
| - Health and Medicine | 0|none | |acc |↑ |0.5667|± |0.0401|
| - Humanities and Social Science| 0|none | |acc |↑ |0.6750|± |0.0426|
| - Science | 0|none | |acc |↑ |0.3800|± |0.0392|
| - Tech and Engineering | 0|none | |acc |↑ |0.4000|± |0.0341|
```
Author-reported score: 54.1%
Idefics2-8B:
```
hf-multimodal (pretrained=HuggingFaceM4/idefics2-8b,attn_implementation=flash_attention_2,dtype=bfloat16,convert_img_format=True,max_images=2), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 2
```
```
| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|
|--------------------------------|------:|------|------|------|---|-----:|---|-----:|
|mmmu_val | 0|none | |acc |↑ |0.4011|± |0.0154|
| - Art and Design | 0|none | |acc |↑ |0.6167|± |0.0436|
| - Business | 0|none | |acc |↑ |0.3200|± |0.0373|
| - Health and Medicine | 0|none | |acc |↑ |0.4000|± |0.0401|
| - Humanities and Social Science| 0|none | |acc |↑ |0.5750|± |0.0424|
| - Science | 0|none | |acc |↑ |0.2600|± |0.0358|
| - Tech and Engineering | 0|none | |acc |↑ |0.3381|± |0.0312|
```
Author-reported score: ~43%
Llava-v1.6-Mistral-7B:
```
hf-multimodal (pretrained=llava-hf/llava-v1.6-mistral-7b-hf,attn_implementation=flash_attention_2,dtype=bfloat16,convert_img_format=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 2
```
```
| Groups |Version|Filter|n-shot|Metric| |Value | |Stderr|
|--------------------------------|------:|------|------|------|---|-----:|---|-----:|
|mmmu_val | 0|none | |acc |↑ |0.3522|± |0.0151|
| - Art and Design | 0|none | |acc |↑ |0.5167|± |0.0440|
| - Business | 0|none | |acc |↑ |0.2667|± |0.0362|
| - Health and Medicine | 0|none | |acc |↑ |0.3867|± |0.0397|
| - Humanities and Social Science| 0|none | |acc |↑ |0.5917|± |0.0433|
| - Science | 0|none | |acc |↑ |0.2200|± |0.0342|
| - Tech and Engineering | 0|none | |acc |↑ |0.2524|± |0.0299|
```
Author-reported score: 35.3%
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/mmmu/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mmmu/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 7418
} |
# MuTual
### Paper
Title: `MuTual: A Dataset for Multi-Turn Dialogue Reasoning`
Abstract: https://www.aclweb.org/anthology/2020.acl-main.130/
MuTual is a retrieval-based dataset for multi-turn dialogue reasoning, which is
modified from Chinese high school English listening comprehension test data.
Homepage: https://github.com/Nealcly/MuTual
### Citation
```
@inproceedings{mutual,
title = "MuTual: A Dataset for Multi-Turn Dialogue Reasoning",
author = "Cui, Leyang and Wu, Yu and Liu, Shujie and Zhang, Yue and Zhou, Ming" ,
booktitle = "Proceedings of the 58th Conference of the Association for Computational Linguistics",
year = "2020",
publisher = "Association for Computational Linguistics",
}
```
### Groups and Tasks
#### Groups
* Not part of a group yet.
#### Tasks
* `mutual`
* `mutual_plus`
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/mutual/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mutual/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1515
} |
# NoticIA
### Paper
Title: `NoticIA: A Clickbait Article Summarization Dataset in Spanish`
Abstract: https://arxiv.org/abs/2404.07611
We present NoticIA, a dataset consisting of 850 Spanish news articles featuring prominent clickbait headlines, each paired with high-quality, single-sentence generative summarizations written by humans. This task demands advanced text understanding and summarization abilities, challenging the models' capacity to infer and connect diverse pieces of information to meet the user's informational needs generated by the clickbait headline. We evaluate the Spanish text comprehension capabilities of a wide range of state-of-the-art large language models. Additionally, we use the dataset to train ClickbaitFighter, a task-specific model that achieves near-human performance in this task.
Homepage: https://github.com/ikergarcia1996/NoticIA
### Citation
```
@article{noticia2024,
title={NoticIA: A Clickbait Article Summarization Dataset in Spanish},
author={Iker García-Ferrero and Begoña Altuna},
year={2024},
journal = {Procesamiento del Lenguaje Natural},
volume = {73},
number = {0},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Groups and Tasks
#### Groups
* Not part of a group yet.
#### Tasks
* `noticia`
#### Metrics
Following the original implementation, this task computes the ROUGE-1 score and the average summary length.
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/noticia/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/noticia/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 2118
} |
### Paper
Question answering dataset based on aggregated user queries from Google Search, derived from the Natural Questions dataset introduced in [Kwiatkowski et al. (2019)](https://aclanthology.org/P19-1612/).
Homepage: [google-research-datasets/natural-questions@master/nq_open](https://github.com/google-research-datasets/natural-questions/tree/master/nq_open)
Paper: [aclanthology.org/P19-1612](https://aclanthology.org/P19-1612/)
### Citation
```
@article{47761,
title = {Natural Questions: a Benchmark for Question Answering Research},
author = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov},
year = {2019},
journal = {Transactions of the Association for Computational Linguistics}}
```
### Tasks
* `nq_open` | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/nq_open/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/nq_open/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1164
} |
# OpenBookQA
### Paper
Title: `Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering`
Abstract: https://arxiv.org/abs/1809.02789
OpenBookQA is a question-answering dataset modeled after open book exams for
assessing human understanding of a subject. It consists of 5,957 multiple-choice
elementary-level science questions (4,957 train, 500 dev, 500 test), which probe
the understanding of a small “book” of 1,326 core science facts and the application
of these facts to novel situations. For training, the dataset includes a mapping
from each question to the core science fact it was designed to probe. Answering
OpenBookQA questions requires additional broad common knowledge, not contained
in the book. The questions, by design, are answered incorrectly by both a
retrieval-based algorithm and a word co-occurrence algorithm.
Homepage: https://allenai.org/data/open-book-qa
### Citation
```
@inproceedings{OpenBookQA2018,
title={Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering},
author={Todor Mihaylov and Peter Clark and Tushar Khot and Ashish Sabharwal},
booktitle={EMNLP},
year={2018}
}
```
### Groups and Tasks
#### Groups
* Not part of a group yet
#### Tasks
* `openbookqa`
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/openbookqa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/openbookqa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1964
} |
# Paloma
### Paper
Title: Paloma: A Benchmark for Evaluating Language Model Fit
Abstract: https://arxiv.org/abs/2312.10523v1
Paloma is a comprehensive benchmark designed to evaluate open language models across a wide range of domains, from niche artist communities to mental health forums on Reddit. It assesses the performance of various models across 585 distinct domains.
Homepage: https://allenai.org/olmo
### Note
If you are running the entire `paloma` benchmark (or just `paloma_dolma_100_programing_languages`) with a HuggingFace model, make sure to pass `logits_cache=False` to `--model_args`, for example:
```
lm_eval --model hf --model_args pretrained=EleutherAI/pythia-160m,logits_cache=False --tasks paloma
```
### Citation
```
@article{paloma,
title={{Paloma}: A Benchmark for Evaluating Language Model Fit},
author={Magnusson, Ian and Bhagia, Akshita and Hofmann, Valentin and Soldaini, Luca and Harsh Jha, Ananya and Tafjord, Oyvind and Schwenk, Dustin and Walsh, Evan Pete and Elazar, Yanai and Lo, Kyle and Groeneveld, Dirk and Beltagy, Iz and Hajishirzi, Hannaneh and Smith, Noah A. and Richardson, Kyle and Dodge, Jesse},
journal={technical report},
year={2023},
url={https://paloma.allen.ai/}
}
```
### Groups and Tasks
#### Groups
* `paloma`
#### Tasks
* `paloma_4chan_meta_sep`
* `paloma_c4_100_domains`
* `paloma_c4_en`
* `paloma_dolma_100_programing_languages`
* `paloma_dolma_100_subreddits`
* `paloma_dolma-v1_5`
* `paloma_falcon-refinedweb`
* `paloma_gab`
* `paloma_m2d2_s2orc_unsplit`
* `paloma_m2d2_wikipedia_unsplit`
* `paloma_manosphere_meta_sep`
* `paloma_mc4`
* `paloma_ptb`
* `paloma_redpajama`
* `paloma_twitterAAE_HELM_fixed`
* `paloma_wikitext_103`
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/paloma/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/paloma/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 2390
} |
# PAWS-X
### Paper
Title: `PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification`
Abstract: https://arxiv.org/abs/1908.11828
The dataset consists of 23,659 human translated PAWS evaluation pairs and
296,406 machine translated training pairs in 6 typologically distinct languages.
Examples are adapted from PAWS-Wiki
Prompt format (same as in mGPT):
"<s>" + sentence1 + ", right? " + mask + ", " + sentence2 + "</s>",
where mask is the string that matches the label:
Yes, No.
Example:
<s> The Tabaci River is a tributary of the River Leurda in Romania, right? No, The Leurda River is a tributary of the River Tabaci in Romania.</s>
Language specific prompts are translated word-by-word with Google Translate
and may differ from the ones used by mGPT and XGLM (they do not provide their prompts).
Homepage: https://github.com/google-research-datasets/paws/tree/master/pawsx
### Citation
```
@inproceedings{yang-etal-2019-paws,
title = "{PAWS}-{X}: A Cross-lingual Adversarial Dataset for Paraphrase Identification",
author = "Yang, Yinfei and
Zhang, Yuan and
Tar, Chris and
Baldridge, Jason",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-1382",
doi = "10.18653/v1/D19-1382",
pages = "3687--3692",
}
```
### Groups and Tasks
#### Groups
* `pawsx`
#### Tasks
* `paws_de`: German
* `paws_en`: English
* `paws_es`: Spanish
* `paws_fr`: French
* `paws_ja`: Japanese
* `paws_ko`: Korean
* `paws_zh`: Chinese
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/paws-x/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/paws-x/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 2479
} |
# The Pile
### Paper
Title: The Pile: An 800GB Dataset of Diverse Text for Language Modeling
Abstract: https://arxiv.org/abs/2101.00027
The Pile is a 825 GiB diverse, open source language modelling data set that consists
of 22 smaller, high-quality datasets combined together. To score well on Pile
BPB (bits per byte), a model must be able to understand many disparate domains
including books, github repositories, webpages, chat logs, and medical, physics,
math, computer science, and philosophy papers.
Homepage: https://pile.eleuther.ai/
### Citation
```
@article{pile,
title={The {P}ile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and Presser, Shawn and Leahy, Connor},
journal={arXiv preprint arXiv:2101.00027},
year={2020}
}
```
### Groups and Tasks
#### Groups
* `pile`
#### Tasks
* `pile_arxiv`
* `pile_bookcorpus2`
* `pile_books3`
* `pile_dm-mathematics`
* `pile_enron`
* `pile_europarl`
* `pile_freelaw`
* `pile_github`
* `pile_gutenberg`
* `pile_hackernews`
* `pile_nih-exporter`
* `pile_opensubtitles`
* `pile_openwebtext2`
* `pile_philpapers`
* `pile_pile-cc`
* `pile_pubmed-abstracts`
* `pile_pubmed-central`
* `pile_stackexchange`
* `pile_ubuntu-irc`
* `pile_uspto`
* `pile_wikipedia`
* `pile_youtubesubtitles`
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/pile/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/pile/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 2119
} |
# Pile-10k
### Paper
Title: `NeelNanda/pile-10k`
Abstract: The first 10K elements of [The Pile](https://pile.eleuther.ai/), useful for debugging models trained on it. See the [HuggingFace page for the full Pile](https://huggingface.co/datasets/the_pile) for more info. Inspired by [stas' great resource](https://huggingface.co/datasets/stas/openwebtext-10k) doing the same for OpenWebText
Homepage: [https://huggingface.co/datasets/NeelNanda/pile-10k](https://huggingface.co/datasets/NeelNanda/pile-10k)
### Citation
```
@misc{Nanda2022Pile10K,
author = {Nanda, Neel},
title = {{NeelNanda/pile-10k} \textendash\ Datasets at Hugging Face},
year = {2022},
howpublished = {\url{https://huggingface.co/datasets/NeelNanda/pile-10k}},
}
```
### Groups and Tasks
#### Groups
* Not part of a group yet.
#### Tasks
* `pile_10k`: `The first 10K elements of The Pile, useful for debugging models trained on it.`
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/pile_10k/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/pile_10k/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1601
} |
# PIQA
### Paper
Title: `PIQA: Reasoning about Physical Commonsense in Natural Language`
Abstract: https://arxiv.org/abs/1911.11641
Physical Interaction: Question Answering (PIQA) is a physical commonsense
reasoning and a corresponding benchmark dataset. PIQA was designed to investigate
the physical knowledge of existing models. To what extent are current approaches
actually learning about the world?
Homepage: https://yonatanbisk.com/piqa/
### Citation
```
@inproceedings{Bisk2020,
author = {Yonatan Bisk and Rowan Zellers and
Ronan Le Bras and Jianfeng Gao
and Yejin Choi},
title = {PIQA: Reasoning about Physical Commonsense in
Natural Language},
booktitle = {Thirty-Fourth AAAI Conference on
Artificial Intelligence},
year = {2020},
}
```
### Groups and Tasks
#### Groups
* Not part of a group yet.
#### Tasks
* `piqa`
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/piqa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/piqa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1583
} |
# PolEmo 2.0
### Paper
Title: `Multi-Level Sentiment Analysis of PolEmo 2.0: Extended Corpus of Multi-Domain Consumer Reviews`
Abstract: https://aclanthology.org/K19-1092/
PolEmo 2.0 is a dataset of online consumer reviews in Polish from four domains: medicine, hotels, products, and university. It is human-annotated at the level of full reviews and individual sentences. It comprises over 8,000 reviews, about 85% of which come from the medicine and hotel domains.
The goal is to predict the sentiment of a review. There are two separate test sets, to allow for in-domain (medicine and hotels) as well as out-of-domain (products and university) validation.
Homepage: https://clarin-pl.eu/dspace/handle/11321/710
### Citation
```
@inproceedings{kocon-etal-2019-multi,
title = "Multi-Level Sentiment Analysis of {P}ol{E}mo 2.0: Extended Corpus of Multi-Domain Consumer Reviews",
author = "Koco{\'n}, Jan and
Mi{\l}kowski, Piotr and
Za{\'s}ko-Zieli{\'n}ska, Monika",
booktitle = "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/K19-1092",
doi = "10.18653/v1/K19-1092",
pages = "980--991",
abstract = "In this article we present an extended version of PolEmo {--} a corpus of consumer reviews from 4 domains: medicine, hotels, products and school. Current version (PolEmo 2.0) contains 8,216 reviews having 57,466 sentences. Each text and sentence was manually annotated with sentiment in 2+1 scheme, which gives a total of 197,046 annotations. We obtained a high value of Positive Specific Agreement, which is 0.91 for texts and 0.88 for sentences. PolEmo 2.0 is publicly available under a Creative Commons copyright license. We explored recent deep learning approaches for the recognition of sentiment, such as Bi-directional Long Short-Term Memory (BiLSTM) and Bidirectional Encoder Representations from Transformers (BERT).",
}
```
### Groups and Tasks
#### Groups
* `polemo2`: Evaluates `polemo2_in` and `polemo2_out`
#### Tasks
* `polemo2_in`: evaluates sentiment predictions of in-domain (medicine and hotels) reviews
* `polemo2_out`: evaluates sentiment predictions of out-of-domain (products and university) reviews
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation?
If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/polemo2/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/polemo2/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 2947
} |
# PortugueseBench
### Paper
PortugueseBench is a benchmark for evaluating language models on Portuguese tasks. That is, it evaluates the ability of a language model to understand and generate Portuguese text. PortugueseBench offers a combination of pre-existing, open datasets. All the details of PortugueseBench will be published in a paper soon.
The datasets included in PortugueseBench are:
| Task | Category | Paper title | Homepage |
|:-------------:|:-----:|:-------------:|:-----:|
| Belebele_pt | Reading Comprehension | [The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants](https://arxiv.org/abs/2308.16884) | https://huggingface.co/datasets/facebook/belebele |
| FLORES_pt | Translation | [The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation](https://arxiv.org/abs/2106.03193) | https://huggingface.co/datasets/facebook/flores |
| ASSIN | Natural Language Inference + Paraphrasing | [Avaliando a similaridade semântica entre frases curtas através de uma abordagem híbrida](https://aclanthology.org/W17-6612/) | https://huggingface.co/datasets/nilc-nlp/assin |
### Citation
Paper for PortugueseBench coming soon.
### Groups and Tasks
#### Groups
- `portuguese_bench`: All tasks included in PortugueseBench.
- `flores_pt`: All FLORES translation tasks from or to Portuguese.
#### Tasks
The following tasks evaluate the datasets included in PortugueseBench using various scoring methods.
- `assin_paraphrase`
- `assin_entailment`
- `belebele_por_Latn`
- `flores_pt`
- `flores_pt-ca`
- `flores_pt-de`
- `flores_pt-en`
- `flores_pt-es`
- `flores_pt-eu`
- `flores_pt-fr`
- `flores_pt-gl`
- `flores_pt-it`
- `flores_ca-pt`
- `flores_de-pt`
- `flores_en-pt`
- `flores_es-pt`
- `flores_eu-pt`
- `flores_fr-pt`
- `flores_gl-pt`
- `flores_it-pt`
Some of these tasks are taken from benchmarks already available in LM Evaluation Harness. These are:
- `belebele_por_Latn`: Belebele Portuguese
### Checklist
* [x] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation?
* [ ] Yes, original implementation contributed by author of the benchmark
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/portuguese_bench/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/portuguese_bench/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 2639
} |
# PROST
### Paper
Title: `PROST: Physical Reasoning about Objects Through Space and Time`
Abstract: https://arxiv.org/abs/2106.03634
PROST, Physical Reasoning about Objects Through Space and Time, is a dataset
consisting of 18,736 multiple-choice questions made from 14 manually curated
templates, covering 10 physical reasoning concepts. All questions are designed
to probe both causal and masked language models in a zero-shot setting.
NOTE: PROST is limited to the zero-shot setting to adhere to authors' intentions
as discussed in section 7 of the paper: "We hope that the community will use
this dataset in the intended way: in a zero-shot setting to probe models which
have been trained on data not specifically collected to succeed on PROST."
Homepage: https://github.com/nala-cub/prost
### Citation
```
@inproceedings{aroca-ouellette-etal-2021-prost,
title = "{PROST}: {P}hysical Reasoning about Objects through Space and Time",
author = "Aroca-Ouellette, St{\'e}phane and
Paik, Cory and
Roncone, Alessandro and
Kann, Katharina",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.404",
pages = "4597--4608",
}
```
### Groups and Tasks
#### Groups
* Not part of a group yet.
#### Tasks
* `prost`
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/prost/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/prost/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 2148
} |
# PubMedQA
### Paper
Title: `PubMedQA: A Dataset for Biomedical Research Question Answering`
Abstract: https://arxiv.org/abs/1909.06146
PubMedQA is a novel biomedical question answering (QA) dataset collected from
PubMed abstracts. The task of PubMedQA is to answer research questions with
yes/no/maybe (e.g.: Do preoperative statins reduce atrial fibrillation after
coronary artery bypass grafting?) using the corresponding abstracts. PubMedQA
has 1k expert-annotated, 61.2k unlabeled and 211.3k artificially generated QA
instances. Each PubMedQA instance is composed of (1) a question which is either
an existing research article title or derived from one, (2) a context which is
the corresponding abstract without its conclusion, (3) a long answer, which is
the conclusion of the abstract and, presumably, answers the research question,
and (4) a yes/no/maybe answer which summarizes the conclusion.
Homepage: https://pubmedqa.github.io/
### Citation
```
@inproceedings{jin2019pubmedqa,
title={PubMedQA: A Dataset for Biomedical Research Question Answering},
author={Jin, Qiao and Dhingra, Bhuwan and Liu, Zhengping and Cohen, William and Lu, Xinghua},
booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
pages={2567--2577},
year={2019}
}
```
### Groups and Tasks
#### Groups
* Not part of a group yet
#### Tasks
* `pubmed_qa`
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/pubmedqa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/pubmedqa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 2179
} |
# QA4MRE
### Paper
Title: `QA4MRE 2011-2013: Overview of Question Answering for Machine Reading Evaluation`
Abstract: https://www.cs.cmu.edu/~./hovy/papers/13CLEF-QA4MRE.pdf
The (English only) QA4MRE challenge which was run as a Lab at CLEF 2011-2013.
The main objective of this exercise is to develop a methodology for evaluating
Machine Reading systems through Question Answering and Reading Comprehension
Tests. Systems should be able to extract knowledge from large volumes of text
and use this knowledge to answer questions. Four different tasks have been
organized during these years: Main Task, Processing Modality and Negation for
Machine Reading, Machine Reading of Biomedical Texts about Alzheimer's disease,
and Entrance Exam.
Homepage: http://nlp.uned.es/clef-qa/repository/qa4mre.php
### Citation
```
@inproceedings{Peas2013QA4MRE2O,
title={QA4MRE 2011-2013: Overview of Question Answering for Machine Reading Evaluation},
author={Anselmo Pe{\~n}as and Eduard H. Hovy and Pamela Forner and {\'A}lvaro Rodrigo and Richard F. E. Sutcliffe and Roser Morante},
booktitle={CLEF},
year={2013}
}
```
### Groups and Tasks
#### Groups
* `qa4mre`
#### Tasks
* `qa4mre_2011`
* `qa4mre_2012`
* `qa4mre_2013`
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/qa4mre/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/qa4mre/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1917
} |
# QASPER
### Paper
Title: `A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers`
Abstract: https://arxiv.org/abs/2105.03011
QASPER is a dataset of 5,049 questions over 1,585 Natural Language Processing papers.
Each question is written by an NLP practitioner who read only the title and abstract
of the corresponding paper, and the question seeks information present in the full
text. The questions are then answered by a separate set of NLP practitioners who also
provide supporting evidence to answers.
Homepage: https://allenai.org/data/qasper
### Citation
```
@article{DBLP:journals/corr/abs-2105-03011,
author = {Pradeep Dasigi and
Kyle Lo and
Iz Beltagy and
Arman Cohan and
Noah A. Smith and
Matt Gardner},
title = {A Dataset of Information-Seeking Questions and Answers Anchored in
Research Papers},
journal = {CoRR},
volume = {abs/2105.03011},
year = {2021},
url = {https://arxiv.org/abs/2105.03011},
eprinttype = {arXiv},
eprint = {2105.03011},
timestamp = {Fri, 14 May 2021 12:13:30 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-03011.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Groups and Tasks
#### Groups
* `qasper`: executes both `qasper_bool` and `qasper_freeform`
#### Tasks
* `qasper_bool`: Multiple choice task that evaluates the task with `answer_type="bool"`
* `qasper_freeform`: Greedy generation task that evaluates the samples from the task with `answer_type="free form answer"`
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/qasper/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/qasper/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 2340
} |
# RACE
### Paper
Title: `RACE: Large-scale ReAding Comprehension Dataset From Examinations`
Abstract: https://arxiv.org/abs/1704.04683
RACE is a large-scale reading comprehension dataset with more than 28,000 passages
and nearly 100,000 questions. The dataset is collected from English examinations
in China, which are designed for middle school and high school students. The dataset
can be served as the training and test sets for machine comprehension.
Homepage: https://www.cs.cmu.edu/~glai1/data/race/
### Citation
```
@inproceedings{lai-etal-2017-race,
title = "{RACE}: Large-scale {R}e{A}ding Comprehension Dataset From Examinations",
author = "Lai, Guokun and
Xie, Qizhe and
Liu, Hanxiao and
Yang, Yiming and
Hovy, Eduard",
editor = "Palmer, Martha and
Hwa, Rebecca and
Riedel, Sebastian",
booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D17-1082",
doi = "10.18653/v1/D17-1082",
pages = "785--794"
}
```
### Groups and Tasks
#### Groups
* Not part of a group yet.
#### Tasks
* `race`
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/race/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/race/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1973
} |
# SciQ
### Paper
Title: `Crowdsourcing Multiple Choice Science Questions`
Abstract: https://aclanthology.org/W17-4413.pdf
The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics,
Chemistry and Biology, among others. The questions are in multiple-choice format
with 4 answer options each. For the majority of the questions, an additional paragraph
with supporting evidence for the correct answer is provided.
Homepage: https://allenai.org/data/sciq
### Citation
```
@inproceedings{Welbl2017CrowdsourcingMC,
title={Crowdsourcing Multiple Choice Science Questions},
author={Johannes Welbl and Nelson F. Liu and Matt Gardner},
booktitle={NUT@EMNLP},
year={2017}
}
```
### Groups and Tasks
#### Groups
* Not part of a group yet.
#### Tasks
* `sciq`
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/sciq/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/sciq/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1479
} |
"""
SCROLLS: Standardized CompaRison Over Long Language Sequences
https://arxiv.org/abs/2201.03533
SCROLLS is a suite of datasets that require synthesizing information over long texts.
The benchmark includes seven natural language tasks across multiple domains,
including summarization, question answering, and natural language inference.
Homepage: https://www.scrolls-benchmark.com/
Since SCROLLS tasks are generally longer than the maximum sequence length of many models,
it is possible to create "subset" tasks that contain only those samples whose tokenized length
is less than some pre-defined limit. For example, to create a subset of "Qasper" that would
be suitable for a model using the GPTNeoX tokenizer and a 4K maximum sequence length:
```
class QasperGPTNeoX4K(Qasper):
PRUNE_TOKENIZERS = ["EleutherAI/pythia-410m-deduped"]
PRUNE_MAX_TOKENS = 4096
PRUNE_NUM_PROC = _num_cpu_cores() # optional, to speed up pruning of large datasets like NarrativeQA
```
`PRUNE_TOKENIZERS` can contain more than one tokenizer; this will include only samples that are
less than `PRUNE_MAX_TOKENS` for ALL of the tokenizers. This can be useful for comparing models
that use different tokenizers but the same maximum sequence length.
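For instance, a subset gated on two tokenizers might look like the sketch below (the class name and the second tokenizer are illustrative choices, not shipped tasks):
```python
# Hypothetical subset: keep only samples under 4K tokens for BOTH tokenizers.
class QasperDual4K(Qasper):
    PRUNE_TOKENIZERS = ["EleutherAI/pythia-410m-deduped", "EleutherAI/gpt-neox-20b"]
    PRUNE_MAX_TOKENS = 4096
```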
Once the subset task class has been defined in this file, it can be used by adding the class
to `lm_eval/tasks/__init__.py`.
NOTE: GovReport may need `max_gen_toks` set larger for causal models.
""" | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/scrolls/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/scrolls/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1441
} |
# Social IQA
### Paper
Title: Social IQA: Commonsense Reasoning about Social Interactions
Abstract: https://arxiv.org/abs/1904.09728
> We introduce Social IQa, the first largescale benchmark for commonsense reasoning about social situations. Social IQa contains 38,000 multiple choice questions for probing emotional and social intelligence in a variety of everyday situations (e.g., Q: "Jordan wanted to tell Tracy a secret, so Jordan leaned towards Tracy. Why did Jordan do this?" A: "Make sure no one else could hear"). Through crowdsourcing, we collect commonsense questions along with correct and incorrect answers about social interactions, using a new framework that mitigates stylistic artifacts in incorrect answers by asking workers to provide the right answer to a different but related question. Empirical results show that our benchmark is challenging for existing question-answering models based on pretrained language models, compared to human performance (>20% gap). Notably, we further establish Social IQa as a resource for transfer learning of commonsense knowledge, achieving state-of-the-art performance on multiple commonsense reasoning tasks (Winograd Schemas, COPA).
Homepage: https://allenai.org/data/socialiqa
### Citation
```
@inproceedings{sap2019social,
title={Social IQa: Commonsense Reasoning about Social Interactions},
author={Sap, Maarten and Rashkin, Hannah and Chen, Derek and Le Bras, Ronan and Choi, Yejin},
booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
pages={4463--4473},
year={2019}
}
```
### Checklist
For adding novel benchmarks/datasets to the library:
* [X] Is the task an existing benchmark in the literature?
* [X] Have you referenced the original paper that introduced the task?
* [X] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? The original paper doesn't have an associated implementation, but there is an official entry in [BigBench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/social_iqa). I use the same prompting format as BigBench.
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/siqa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/siqa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 2606
} |
# SpanishBench
### Paper
SpanishBench is a benchmark for evaluating language models on Spanish tasks. That is, it evaluates the ability of a language model to understand and generate Spanish text. SpanishBench offers a combination of pre-existing, open datasets. All the details of SpanishBench will be published in a paper soon.
The datasets included in SpanishBench are:
| Task | Category | Paper title | Homepage |
|:-------------:|:-----:|:-------------:|:-----:|
| Belebele_es | Reading Comprehension | [The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants](https://arxiv.org/abs/2308.16884) | https://huggingface.co/datasets/facebook/belebele |
| FLORES_es | Translation | [The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation](https://arxiv.org/abs/2106.03193) | https://huggingface.co/datasets/facebook/flores |
| MGSM_es | Math | [Language Models are Multilingual Chain-of-Thought Reasoners](https://arxiv.org/abs/2210.03057) | https://huggingface.co/datasets/juletxara/mgsm |
| PAWS-X_es | Paraphrasing | [PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification](https://aclanthology.org/D19-1382/) | https://huggingface.co/datasets/google-research-datasets/paws-x |
| WNLI-es | Natural Language Inference | No paper. | https://huggingface.co/datasets/PlanTL-GOB-ES/wnli-es |
| XL-Sum_es | Summarization | [XL-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages](https://aclanthology.org/2021.findings-acl.413/) | https://huggingface.co/datasets/csebuetnlp/xlsum |
| XNLI_es | Natural Language Inference | [XNLI: Evaluating Cross-lingual Sentence Representations](https://aclanthology.org/D18-1269/) | https://huggingface.co/datasets/facebook/xnli |
| XQuAD_es | Question Answering | [On the Cross-lingual Transferability of Monolingual Representations](https://aclanthology.org/2020.acl-main.421/) | https://huggingface.co/datasets/google/xquad |
| XStoryCloze_es | Commonsense Reasoning | [Few-shot Learning with Multilingual Generative Language Models](https://aclanthology.org/2022.emnlp-main.616/) | https://huggingface.co/datasets/juletxara/xstory_cloze |
### Citation
Paper for SpanishBench coming soon.
### Groups and Tasks
#### Groups
- `spanish_bench`: All tasks included in SpanishBench.
- `flores_es`: All FLORES translation tasks from or to Spanish.
#### Tags
- `phrases_es`: Two Phrases_va tasks for language adaptation between Spanish and Valencian.
#### Tasks
The following tasks evaluate models on the datasets included in SpanishBench using various scoring methods.
- `belebele_spa_Latn`
- `flores_es`
- `flores_es-ca`
- `flores_es-de`
- `flores_es-en`
- `flores_es-eu`
- `flores_es-fr`
- `flores_es-gl`
- `flores_es-it`
- `flores_es-pt`
- `flores_ca-es`
- `flores_de-es`
- `flores_en-es`
- `flores_eu-es`
- `flores_fr-es`
- `flores_gl-es`
- `flores_it-es`
- `flores_pt-es`
- `mgsm_direct_es_v2` (`v2` is due to an existing open issue in the original task)
- `paws_es`
- `phrases_es`
- `wnli_es`
- `xlsum_es`
- `xnli_es`
- `xquad_es`
- `xstorycloze_es`
Some of these tasks are taken from benchmarks already available in LM Evaluation Harness. These are:
- `belebele_spa_Latn`: Belebele Spanish
- `mgsm_direct_es`: MGSM Spanish (We fix an existing open issue in the original task)
- `paws_es`: PAWS-X Spanish
- `xnli_es`: XNLI Spanish
- `xstorycloze_es`: XStoryCloze Spanish
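The whole suite, or any single task, can be selected with `--tasks`; a minimal sketch (the model name is a placeholder):
```bash
# Run the full SpanishBench group; swap spanish_bench for e.g. xnli_es to run one task.
lm_eval --model hf \
    --model_args pretrained=<your-model> \
    --tasks spanish_bench \
    --batch_size 8
```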
### Checklist
* [x] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation?
* [ ] Yes, original implementation contributed by author of the benchmark
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/spanish_bench/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/spanish_bench/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 4091
} |
# Squad-completion
### Paper
Title: Simple Linear Attention Language Models Balance The Recall-Throughput Tradeoff
A variant of the SQuAD question answering task, as implemented by Based. See https://github.com/EleutherAI/lm-evaluation-harness/lm_eval/tasks/squadv2/README.md for more info.
Homepage: https://github.com/HazyResearch/based-evaluation-harness
### Citation
```
@misc{arora2024simple,
title={Simple linear attention language models balance the recall-throughput tradeoff},
author={Simran Arora and Sabri Eyuboglu and Michael Zhang and Aman Timalsina and Silas Alberti and Dylan Zinsley and James Zou and Atri Rudra and Christopher Ré},
year={2024},
eprint={2402.18668},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{rajpurkar2018know,
title={Know What You Don't Know: Unanswerable Questions for SQuAD},
author={Pranav Rajpurkar and Robin Jia and Percy Liang},
year={2018},
eprint={1806.03822},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Groups and Tasks
#### Tasks
* `squad_completion`: the SQuAD task as implemented in the paper "Simple linear attention language models balance the recall-throughput tradeoff". Designed for zero-shot evaluation of small LMs.
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/squad_completion/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/squad_completion/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1945
} |
# SQuADv2
### Paper
Title: `Know What You Don’t Know: Unanswerable Questions for SQuAD`
Abstract: https://arxiv.org/abs/1806.03822
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset,
consisting of questions posed by crowdworkers on a set of Wikipedia articles,
where the answer to every question is a segment of text, or span, from the
corresponding reading passage, or the question might be unanswerable.
SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable
questions written adversarially by crowdworkers to look similar to answerable ones.
To do well on SQuAD2.0, systems must not only answer questions when possible, but
also determine when no answer is supported by the paragraph and abstain from answering.
Homepage: https://rajpurkar.github.io/SQuAD-explorer/
### Citation
```
@misc{rajpurkar2018know,
title={Know What You Don't Know: Unanswerable Questions for SQuAD},
author={Pranav Rajpurkar and Robin Jia and Percy Liang},
year={2018},
eprint={1806.03822},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Groups and Tasks
#### Groups
* Not part of a group yet
#### Tasks
* `squadv2`: `Default squadv2 task`
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/squadv2/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/squadv2/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1898
} |
# StoryCloze
### Paper
Title: `A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories`
Abstract: `https://arxiv.org/abs/1604.01696`
Homepage: https://cs.rochester.edu/nlp/rocstories/
'Story Cloze Test' is a new commonsense reasoning framework for evaluating story understanding, story generation, and script learning. This test requires a system to choose the correct ending to a four-sentence story.
### Citation
```
@misc{mostafazadeh2016corpus,
title={A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories},
author={Nasrin Mostafazadeh and
Nathanael Chambers and
Xiaodong He and
Devi Parikh and
Dhruv Batra and
Lucy Vanderwende and
Pushmeet Kohli and
James Allen},
year={2016},
eprint={1604.01696},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Groups and Tasks
#### Groups
* `storycloze`
#### Tasks
* `storycloze_2016`
* `storycloze_2018`
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/storycloze/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/storycloze/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1674
} |
# SuperGLUE
### Paper
Title: `SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems`
Abstract: `https://w4ngatang.github.io/static/papers/superglue.pdf`
SuperGLUE is a benchmark styled after GLUE with a new set of more difficult language
understanding tasks.
Homepage: https://super.gluebenchmark.com/
### Citation
```
@inproceedings{NEURIPS2019_4496bf24,
author = {Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel},
booktitle = {Advances in Neural Information Processing Systems},
editor = {H. Wallach and H. Larochelle and A. Beygelzimer and F. d\textquotesingle Alch\'{e}-Buc and E. Fox and R. Garnett},
pages = {},
publisher = {Curran Associates, Inc.},
title = {SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},
url = {https://proceedings.neurips.cc/paper/2019/file/4496bf24afe7fab6f046bf4923da8de6-Paper.pdf},
volume = {32},
year = {2019}
}
```
### Groups, Tags, and Tasks
#### Groups
None.
#### Tags
* `super-glue-lm-eval-v1`: SuperGLUE eval adapted from LM Eval V1
* `super-glue-t5-prompt`: SuperGLUE prompt and evaluation that matches the T5 paper (if using accelerate, this will error if `record` is included)
#### Tasks
Comparison between validation split score on T5x and LM-Eval (T5x models converted to HF)
| T5V1.1 Base | SGLUE | BoolQ | CB | Copa | MultiRC | ReCoRD | RTE | WiC | WSC |
| ----------- | ------| ----- | --------- | ---- | ------- | ------ | --- | --- | --- |
| T5x | 69.47 | 78.47(acc) | 83.93(f1) 87.5(acc) | 50(acc) | 73.81(f1) 33.26(em) | 70.09(em) 71.34(f1) | 78.7(acc) | 63.64(acc) | 75(acc) |
| LM-Eval | 71.35 | 79.36(acc) | 83.63(f1) 87.5(acc) | 63(acc) | 73.45(f1) 33.26(em) | 69.85(em) 68.86(f1) | 78.34(acc) | 65.83(acc) | 75.96(acc) |
* `super-glue-lm-eval-v1`
- `boolq`
- `cb`
- `copa`
- `multirc`
- `record`
- `rte`
- `wic`
- `wsc`
* `super-glue-t5-prompt`
- `super_glue-boolq-t5-prompt`
- `super_glue-cb-t5-prompt`
- `super_glue-copa-t5-prompt`
- `super_glue-multirc-t5-prompt`
- `super_glue-record-t5-prompt`
- `super_glue-rte-t5-prompt`
- `super_glue-wic-t5-prompt`
- `super_glue-wsc-t5-prompt`
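For example, all of the v1-style tasks can be run together through their tag (a sketch; the model name is a placeholder):
```bash
# Evaluate every task under the super-glue-lm-eval-v1 tag in one run.
lm_eval --model hf \
    --model_args pretrained=<your-model> \
    --tasks super-glue-lm-eval-v1
```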
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/super_glue/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/super_glue/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 3001
} |
# SWAG
### Paper
Title: `SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference`
Abstract: https://arxiv.org/pdf/1808.05326.pdf
SWAG (Situations With Adversarial Generations) is an adversarial dataset
that consists of 113k multiple choice questions about grounded situations. Each
question is a video caption from LSMDC or ActivityNet Captions, with four answer
choices about what might happen next in the scene. The correct answer is the
(real) video caption for the next event in the video; the three incorrect
answers are adversarially generated and human verified, so as to fool machines
but not humans.
Homepage: https://rowanzellers.com/swag/
### Citation
```
@inproceedings{zellers2018swagaf,
title={SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference},
author={Zellers, Rowan and Bisk, Yonatan and Schwartz, Roy and Choi, Yejin},
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
year={2018}
}
```
### Groups and Tasks
#### Groups
* Not part of a group yet.
#### Tasks
* `swag`
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/swag/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/swag/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1798
} |
# SWDE
### Paper
Title: Language Models Enable Simple Systems for Generating Structured Views of Heterogeneous Data Lakes
Abstract: A long standing goal of the data management community is to develop general, automated systems
that ingest semi-structured documents and output queryable tables without human effort or domain
specific customization. Given the sheer variety of potential documents, state-of-the art systems make
simplifying assumptions and use domain specific training. In this work, we ask whether we can
maintain generality by using large language models (LLMs). LLMs, which are pretrained on broad
data, can perform diverse downstream tasks simply conditioned on natural language task descriptions.
We propose and evaluate EVAPORATE, a simple, prototype system powered by LLMs. We identify
two fundamentally different strategies for implementing this system: prompt the LLM to directly
extract values from documents or prompt the LLM to synthesize code that performs the extraction.
Our evaluations show a cost-quality tradeoff between these two approaches. Code synthesis is cheap,
but far less accurate than directly processing each document with the LLM. To improve quality while
maintaining low cost, we propose an extended code synthesis implementation, EVAPORATE-CODE+,
which achieves better quality than direct extraction. Our key insight is to generate many candidate
functions and ensemble their extractions using weak supervision. EVAPORATE-CODE+ not only
outperforms the state-of-the art systems, but does so using a sublinear pass over the documents with
the LLM. This equates to a 110× reduction in the number of tokens the LLM needs to process,
averaged across 16 real-world evaluation settings of 10k documents each.
A task for LMs to perform Information Extraction, as implemented by Based.
Homepage: https://github.com/HazyResearch/based-evaluation-harness
Description:
> SWDE (Information Extraction). The task in the SWDE benchmark is to extract semi-structured relations from raw HTML websites. For example, given an IMDB page for a movie (e.g. Harry Potter and the Sorcerer’s Stone) and a relation key (e.g. release date), the model must extract the correct relation value (e.g. 2001). The SWDE benchmark was originally curated by Lockard et al. for the task of open information extraction from the semi-structured web. Because we are evaluating the zero-shot capabilities of relatively small language models, we adapt the task to make it slightly easier. Our task setup is similar to that used in Arora et al.
### Citation
```
@misc{arora2024simple,
title={Simple linear attention language models balance the recall-throughput tradeoff},
author={Simran Arora and Sabri Eyuboglu and Michael Zhang and Aman Timalsina and Silas Alberti and Dylan Zinsley and James Zou and Atri Rudra and Christopher Ré},
year={2024},
eprint={2402.18668},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{arora2023language,
title={Language Models Enable Simple Systems for Generating Structured Views of Heterogeneous Data Lakes},
author={Simran Arora and Brandon Yang and Sabri Eyuboglu and Avanika Narayan and Andrew Hojel and Immanuel Trummer and Christopher Ré},
year={2023},
eprint={2304.09433},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{lockard-etal-2019-openceres,
title = "{O}pen{C}eres: {W}hen Open Information Extraction Meets the Semi-Structured Web",
author = "Lockard, Colin and
Shiralkar, Prashant and
Dong, Xin Luna",
editor = "Burstein, Jill and
Doran, Christy and
Solorio, Thamar",
booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
month = jun,
year = "2019",
address = "Minneapolis, Minnesota",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N19-1309",
doi = "10.18653/v1/N19-1309",
pages = "3047--3056",
abstract = "Open Information Extraction (OpenIE), the problem of harvesting triples from natural language text whose predicate relations are not aligned to any pre-defined ontology, has been a popular subject of research for the last decade. However, this research has largely ignored the vast quantity of facts available in semi-structured webpages. In this paper, we define the problem of OpenIE from semi-structured websites to extract such facts, and present an approach for solving it. We also introduce a labeled evaluation dataset to motivate research in this area. Given a semi-structured website and a set of seed facts for some relations existing on its pages, we employ a semi-supervised label propagation technique to automatically create training data for the relations present on the site. We then use this training data to learn a classifier for relation extraction. Experimental results of this method on our new benchmark dataset obtained a precision of over 70{\%}. A larger scale extraction experiment on 31 websites in the movie vertical resulted in the extraction of over 2 million triples.",
}
```
### Groups and Tasks
#### Tasks
* `swde`: the SWDE task as implemented in the paper "Simple linear attention language models balance the recall-throughput tradeoff". Designed for zero-shot evaluation of small LMs.
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/swde/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/swde/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 6130
} |
# tinyBenchmarks
### Paper
Title: `tinyBenchmarks: evaluating LLMs with fewer examples`
Abstract: https://arxiv.org/abs/2402.14992
The versatility of large language models (LLMs) led to the creation of diverse benchmarks that thoroughly test a variety of language models' abilities. These benchmarks consist of tens of thousands of examples making evaluation of LLMs very expensive. In this paper, we investigate strategies to reduce the number of evaluations needed to assess the performance of an LLM on several key benchmarks. For example, we show that to accurately estimate the performance of an LLM on MMLU, a popular multiple-choice QA benchmark consisting of 14K examples, it is sufficient to evaluate this LLM on 100 curated examples. We release evaluation tools and tiny versions of popular benchmarks: Open LLM Leaderboard, MMLU, HELM, and AlpacaEval 2.0. Our empirical analysis demonstrates that these tools and tiny benchmarks are sufficient to reliably and efficiently reproduce the original evaluation results.
Homepage: -
All configs and utils mirror the ones from their original dataset!
### Groups and Tasks
#### Groups
* `tinyBenchmarks`
#### Tasks
* `tinyArc`, `tinyGSM8k`, `tinyHellaswag`, `tinyMMLU`, `tinyTruthfulQA`, `tinyWinogrande`
### Usage
*tinyBenchmarks* can evaluate different benchmarks with a fraction of their examples.
To obtain accurate results, this task applies post-processing using the *tinyBenchmarks*-package.
You can install the package by running the following command in the terminal (for more information see [here](https://github.com/felipemaiapolo/tinyBenchmarks/blob/main/README.md?plain=1)):
```sh
pip install git+https://github.com/felipemaiapolo/tinyBenchmarks
```
The value that is returned by the task corresponds to the '**IRT++**'-method from the [original paper](https://arxiv.org/abs/2402.14992).
Evaluate specific tasks individually (e.g. `--tasks tinyHellaswag`) or all [open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) tasks by specifying `--tasks tinyBenchmarks`.
### Advanced usage
To obtain the estimated accuracies from all methods from the original paper, the *tinyBenchmarks*-package has to be applied manually.
To do so, run the evaluation with the `--log_samples` and `--output_path` arguments. For example:
```bash
lm_eval --model hf \
--model_args pretrained="mistralai/Mistral-7B-Instruct-v0.2" \
--tasks tinyHellaswag \
--batch_size 4 \
--output_path '<output_path>' \
--log_samples
```
Afterwards, fill in the correct `file_path` and run the following script:
```python
import json
import tinyBenchmarks as tb
import numpy as np
# Choose benchmark (e.g. hellaswag)
benchmark = 'hellaswag' # possible benchmarks:
# ['mmlu','truthfulqa', 'gsm8k',
# 'winogrande', 'arc', 'hellaswag']
# Get score vector from output-file (the metric [here `acc_norm`] depends on the benchmark)
file_path = '<output_path>/<output-file.jsonl>'
with open(file_path, 'r') as file:
outputs = json.load(file)
# Ensuring correct order of outputs
outputs = sorted(outputs, key=lambda x: x['doc_id'])
y = np.array([float(item['acc_norm']) for item in outputs])
### Evaluation
tb.evaluate(y, benchmark)
```
### Performance
We report in the following tables the average estimation error in the test set (using data from the paper) and standard deviation across LLMs.
#### Open LLM Leaderboard
Estimating performance for each scenario separately
|| IRT | p-IRT | gp-IRT |
|--|--|--|--|
| TruthfulQA | 0.013 (0.010) | 0.010 (0.009) | 0.011 (0.009) |
| GSM8K | 0.022 (0.017) | 0.029 (0.022) | 0.020 (0.017) |
| Winogrande | 0.022 (0.017) | 0.016 (0.014) | 0.015 (0.013) |
| ARC | 0.022 (0.018) | 0.017 (0.014) | 0.017 (0.013) |
| HellaSwag | 0.013 (0.016) | 0.015 (0.012) | 0.015 (0.012) |
| MMLU | 0.024 (0.017) | 0.016 (0.015) | 0.016 (0.015) |
Estimating performance for each scenario all at once
|| IRT | p-IRT | gp-IRT |
|--|--|--|--|
| TruthfulQA | 0.013 (0.010) | 0.016 (0.013) | 0.011 (0.009) |
| GSM8K | 0.022 (0.017) | 0.022 (0.017) | 0.020 (0.015) |
| Winogrande | 0.022 (0.017) | 0.011 (0.013) | 0.011 (0.011) |
| ARC | 0.022 (0.018) | 0.012 (0.010) | 0.010 (0.009) |
| HellaSwag | 0.013 (0.016) | 0.011 (0.020) | 0.011 (0.018) |
| MMLU | 0.024 (0.018) | 0.017 (0.017) | 0.015 (0.015) |
### Citation
```
@article{polo2024tinybenchmarks,
title={tinyBenchmarks: evaluating LLMs with fewer examples},
author={Maia Polo, Felipe and Weber, Lucas and Choshen, Leshem and Sun, Yuekai and Xu, Gongjun and Yurochkin, Mikhail},
journal={arXiv preprint arXiv:2402.14992},
year={2024}
}
```
Please also reference the respective original dataset that you are using!
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/tinyBenchmarks/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/tinyBenchmarks/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 5489
} |
# TMLU
### Paper
Title: `Measuring Taiwanese Mandarin Language Understanding`
Abstract: `The evaluation of large language models (LLMs) has drawn substantial attention in the field recently. This work focuses on evaluating LLMs in a Chinese context, specifically, for Traditional Chinese which has been largely underrepresented in existing benchmarks. We present TMLU, a holistic evaluation suite tailored for assessing the advanced knowledge and reasoning capability in LLMs, under the context of Taiwanese Mandarin. TMLU consists of an array of 37 subjects across social science, STEM, humanities, Taiwan-specific content, and others, ranging from middle school to professional levels. In addition, we curate chain-of-thought-like few-shot explanations for each subject to facilitate the evaluation of complex reasoning skills. To establish a comprehensive baseline, we conduct extensive experiments and analysis on 24 advanced LLMs. The results suggest that Chinese open-weight models demonstrate inferior performance comparing to multilingual proprietary ones, and open-weight models tailored for Taiwanese Mandarin lag behind the Simplified-Chinese counterparts. The findings indicate great headrooms for improvement, and emphasize the goal of TMLU to foster the development of localized Taiwanese-Mandarin LLMs. We release the benchmark and evaluation scripts for the community to promote future research.`
Homepage: [TMLU Huggingface Dataset](https://huggingface.co/datasets/miulab/tmlu)
### Citation
```
@article{DBLP:journals/corr/abs-2403-20180,
author = {Po{-}Heng Chen and
Sijia Cheng and
Wei{-}Lin Chen and
Yen{-}Ting Lin and
Yun{-}Nung Chen},
title = {Measuring Taiwanese Mandarin Language Understanding},
journal = {CoRR},
volume = {abs/2403.20180},
year = {2024},
url = {https://doi.org/10.48550/arXiv.2403.20180},
doi = {10.48550/ARXIV.2403.20180},
eprinttype = {arXiv},
eprint = {2403.20180},
timestamp = {Wed, 10 Apr 2024 17:37:45 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2403-20180.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Groups and Tasks
#### Groups
* `tmlu`: `The dataset comprises 2,981 multiple-choice questions from 37 subjects. `
#### Tasks
The following tasks evaluate subjects in the TMLU dataset using loglikelihood-based multiple-choice scoring:
* `tmlu_{subject_english}`
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/tmlu/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/tmlu/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 3221
} |
# TMMLU+
### Paper
Title: `An Improved Traditional Chinese Evaluation Suite for Foundation Model`
Abstract: `We present TMMLU+, a comprehensive dataset designed for the Traditional Chinese massive multitask language understanding dataset. TMMLU+ is a multiple-choice question-answering dataset with 66 subjects from elementary to professional level. Compared to its predecessor, TMMLU, TMMLU+ is six times larger and boasts a more balanced subject distribution. We included benchmark results in TMMLU+ from closed-source models and 24 open-weight Chinese large language models of parameters ranging from 1.8B to 72B. Our findings reveal that Traditional Chinese models still trail behind their Simplified Chinese counterparts. Additionally, current large language models have yet to outperform human performance in average scores. We publicly release our dataset and the corresponding benchmark source code.`
Homepage: [https://huggingface.co/datasets/ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus)
### Citation
```
@article{ikala2024improved,
title={An Improved Traditional Chinese Evaluation Suite for Foundation Model},
author={Tam, Zhi-Rui and Pai, Ya-Ting and Lee, Yen-Wei and Cheng, Sega and Shuai, Hong-Han},
journal={arXiv preprint arXiv:2403.01858},
year={2024}
}
```
### Groups and Tasks
#### Groups
* `tmmluplus`: `The dataset comprises 22,690 multiple-choice questions from 66 subjects ranging from primary to professional level. `
#### Tasks
The following tasks evaluate subjects in the TMMLU+ dataset using loglikelihood-based multiple-choice scoring:
* `tmmluplus_{subject_english}`
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/tmmluplus/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/tmmluplus/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 2318
} |
# ToxiGen
### Paper
Title: `ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection`
Abstract: https://arxiv.org/abs/2203.09509
Classify input text as either hateful or not hateful.
Homepage: https://github.com/microsoft/TOXIGEN
### Citation
```
@inproceedings{hartvigsen2022toxigen,
title={ToxiGen: A Large-Scale Machine-Generated Dataset for Implicit and Adversarial Hate Speech Detection},
author={Hartvigsen, Thomas and Gabriel, Saadia and Palangi, Hamid and Sap, Maarten and Ray, Dipankar and Kamar, Ece},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
year={2022}
}
```
### Groups and Tasks
#### Groups
* Not part of a group yet.
#### Tasks
* `toxigen`
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/toxigen/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/toxigen/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1457
} |
# Translation Tasks
### Paper
### Citation
```
```
### Groups and Tasks
#### Groups
* `gpt3_translation_tasks`
* `wmt14`
* `wmt16`
* `wmt20`
* `iwslt2017`
#### Tasks
*
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
* [ ] Checked for equivalence with v0.3.0 LM Evaluation Harness | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/translation/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/translation/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 924
} |
# Trivia QA
### Paper
Title: `TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension`
Abstract: https://arxiv.org/abs/1705.03551
TriviaQA is a reading comprehension dataset containing over 650K question-answer-evidence
triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts
and independently gathered evidence documents, six per question on average, that provide
high quality distant supervision for answering the questions.
Homepage: https://nlp.cs.washington.edu/triviaqa/
### Citation
```
@InProceedings{JoshiTriviaQA2017,
author = {Joshi, Mandar and Choi, Eunsol and Weld, Daniel S. and Zettlemoyer, Luke},
title = {TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension},
booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics},
month = {July},
year = {2017},
address = {Vancouver, Canada},
publisher = {Association for Computational Linguistics},
}
```
### Groups and Tasks
#### Groups
* Not part of a group yet.
#### Tasks
* `triviaqa`: `Generate and answer based on the question.`
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/triviaqa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/triviaqa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1851
} |
# TruthfulQA
### Paper
Title: `TruthfulQA: Measuring How Models Mimic Human Falsehoods`
Abstract: `https://arxiv.org/abs/2109.07958`
Homepage: `https://github.com/sylinrl/TruthfulQA`
### Citation
```
@inproceedings{lin-etal-2022-truthfulqa,
title = "{T}ruthful{QA}: Measuring How Models Mimic Human Falsehoods",
author = "Lin, Stephanie and
Hilton, Jacob and
Evans, Owain",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.229",
doi = "10.18653/v1/2022.acl-long.229",
pages = "3214--3252",
}
```
### Groups and Tasks
#### Groups
* Not part of a group yet.
#### Tasks
* `truthfulqa_mc1`: `Multiple-choice, single answer`
* `truthfulqa_mc2`: `Multiple-choice, multiple answers`
* `truthfulqa_gen`: `Answer generation`
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/truthfulqa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/truthfulqa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1698
} |
# TurkishMMLU
This repository contains configuration files for LM Evaluation Harness for few-shot and chain-of-thought experiments on TurkishMMLU. The results of this study were obtained by running these configurations with LM Evaluation Harness.
TurkishMMLU is a multiple-choice question-answering dataset created for the Turkish Natural Language Processing (NLP) community, based on Turkish high-school curricula across nine subjects. This comprehensive study was conducted to provide a question-answering benchmark for the Turkish language. The questions are written by curriculum experts, are suitable for the high-school curricula in Turkey, and cover subjects ranging from natural sciences and math to more culturally representative topics such as Turkish Literature and the history of the Turkish Republic.
To access this dataset please send an email to:
[email protected] or [email protected].
## Abstract
Multiple choice question answering tasks evaluate the reasoning, comprehension, and mathematical abilities of Large Language Models (LLMs). While existing benchmarks employ automatic translation for multilingual evaluation, this approach is error-prone and potentially introduces culturally biased questions, especially in social sciences. We introduce the first multitask, multiple-choice Turkish QA benchmark, TurkishMMLU, to evaluate LLMs' understanding of the Turkish language. TurkishMMLU includes over 10,000 questions, covering 9 different subjects from Turkish high-school education curricula. These questions are written by curriculum experts, suitable for the high-school curricula in Turkey, covering subjects ranging from natural sciences and math questions to more culturally representative topics such as Turkish Literature and the history of the Turkish Republic. We evaluate over 20 LLMs, including multilingual open-source (e.g., Gemma, Llama, MT5), closed-source (GPT 4o, Claude, Gemini), and Turkish-adapted (e.g., Trendyol) models. We provide an extensive evaluation, including zero-shot and few-shot evaluation of LLMs, chain-of-thought reasoning, and question difficulty analysis along with model performance. We provide an in-depth analysis of the Turkish capabilities and limitations of current LLMs to provide insights for future LLMs for the Turkish language. We publicly release our code for the dataset and evaluation.
## Dataset
The dataset is divided into four categories (Natural Sciences, Mathematics, Language, and Social Sciences and Humanities) with a total of nine subjects from Turkish high-school education. It is provided in multiple-choice format for LLM evaluation. Each question also carries a difficulty indicator, referred to as the correctness ratio.
## Evaluation
The 5-shot evaluation results from the paper include open- and closed-source SOTA LLMs with different architectures. For this study, multilingual and Turkish-adapted models are tested.
The evaluation results of this study are obtained using the provided configurations with LM Evaluation Harness.
| Model | Source | Average | Natural Sciences | Math | Turkish L & L | Social Sciences and Humanities |
| ------------------- | ------ | ------- | ---------------- | ---- | ------------- | ------------------------------ |
| GPT 4o | Closed | 83.1 | 75.3 | 59.0 | 82.0 | 95.3 |
| Claude-3 Opus | Closed | 79.1 | 71.7 | 59.0 | 77.0 | 90.3 |
| GPT 4-turbo | Closed | 75.7 | 70.3 | 57.0 | 67.0 | 86.5 |
| Llama-3 70B-IT | Closed | 67.3 | 56.7 | 42.0 | 57.0 | 84.3 |
| Claude-3 Sonnet | Closed | 67.3 | 67.3 | 44.0 | 58.0 | 75.5 |
| Llama-3 70B | Open | 66.1 | 56.0 | 37.0 | 57.0 | 83.3 |
| Claude-3 Haiku | Closed | 65.4 | 57.0 | 40.0 | 61.0 | 79.3 |
| Gemini 1.0-pro | Closed | 63.2 | 52.7 | 29.0 | 63.0 | 79.8 |
| C4AI Command-r+ | Open | 60.6 | 50.0 | 26.0 | 57.0 | 78.0 |
| Aya-23 35B | Open | 55.6 | 43.3 | 31.0 | 49.0 | 72.5 |
| C4AI Command-r | Open | 54.9 | 44.7 | 29.0 | 49.0 | 70.5 |
| Mixtral 8x22B | Open | 54.8 | 45.3 | 27.0 | 49.0 | 70.3 |
| GPT 3.5-turbo | Closed | 51.0 | 42.7 | 39.0 | 35.0 | 61.8 |
| Llama-3 8B-IT | Open | 46.4 | 36.7 | 29.0 | 39.0 | 60.0 |
| Llama-3 8B | Open | 46.2 | 37.3 | 30.0 | 33.0 | 60.3 |
| Mixtral 8x7B-IT | Open | 45.2 | 41.3 | 28.0 | 39.0 | 54.0 |
| Aya-23 8B | Open | 45.0 | 39.0 | 23.0 | 31.0 | 58.5 |
| Gemma 7B | Open | 43.6 | 34.3 | 22.0 | 47.0 | 55.0 |
| Aya-101 | Open | 40.7 | 31.3 | 24.0 | 38.0 | 55.0 |
| Trendyol-LLM 7B-C-D | Open | 34.1 | 30.3 | 22.0 | 28.0 | 41.5 |
| mT0-xxl | Open | 33.9 | 29.3 | 28.0 | 21.0 | 42.0 |
| Mistral 7B-IT | Open | 32.0 | 34.3 | 26.0 | 38.0 | 30.3 |
| Llama-2 7B | Open | 22.3 | 25.3 | 20.0 | 20.0 | 19.8 |
| mT5-xxl | Open | 18.1 | 19.3 | 24.0 | 14.0 | 16.8 |
## Citation
```
@misc{yüksel2024turkishmmlumeasuringmassivemultitask,
title={TurkishMMLU: Measuring Massive Multitask Language Understanding in Turkish},
author={Arda Yüksel and Abdullatif Köksal and Lütfi Kerem Şenel and Anna Korhonen and Hinrich Schütze},
year={2024},
eprint={2407.12402},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2407.12402},
}
```
### Groups and Tasks
#### Groups
- `turkishmmlu`: All 9 subjects of TurkishMMLU, namely:
  Biology, Chemistry, Physics, Geography, Philosophy, History, Religion and Ethics, Turkish Language and Literature, and Mathematics
#### Tasks
The following tasks evaluate subjects in the TurkishMMLU dataset:
- `turkishmmlu_{subject}`
The following tasks evaluate subjects in the TurkishMMLU dataset with Chain-of-Thought (CoT) prompting:
- `turkishmmlu_cot_{subject}`
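As a usage sketch, a single subject can be evaluated with the LM Evaluation Harness CLI. The model and the 5-shot setting below are illustrative choices, and the subject suffix is assumed to follow the `{subject}` pattern above (e.g. `biology`):

```bash
# Illustrative invocation; replace the model and subject with your own choices.
lm_eval --model hf \
    --model_args pretrained=mistralai/Mistral-7B-v0.1 \
    --tasks turkishmmlu_biology \
    --num_fewshot 5
```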
### Checklist
For adding novel benchmarks/datasets to the library:
- [x] Is the task an existing benchmark in the literature?
- [x] Have you referenced the original paper that introduced the task?
- [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
- [ ] Is the "Main" variant of this task clearly denoted?
- [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
- [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/turkishmmlu/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/turkishmmlu/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 7598
} |
# Unitxt
### Paper
Title: `Unitxt: Flexible, Shareable and Reusable Data Preparation and Evaluation for Generative AI`
Abstract: `https://arxiv.org/abs/2401.14019`
Unitxt is a library for customizable textual data preparation and evaluation tailored to generative language models. Unitxt natively integrates with common libraries like HuggingFace and LM-eval-harness and deconstructs processing flows into modular components, enabling easy customization and sharing between practitioners. These components encompass model-specific formats, task prompts, and many other comprehensive dataset processing definitions. These components are centralized in the Unitxt-Catalog, thus fostering collaboration and exploration in modern textual data workflows.
The full Unitxt catalog can be viewed in an online explorer. `https://unitxt.readthedocs.io/en/latest/docs/demo.html`
Homepage: https://unitxt.readthedocs.io/en/latest/index.html
### Citation
```
@misc{unitxt,
title={Unitxt: Flexible, Shareable and Reusable Data Preparation and Evaluation for Generative AI},
author={Elron Bandel and Yotam Perlitz and Elad Venezian and Roni Friedman-Melamed and Ofir Arviv and Matan Orbach and Shachar Don-Yehyia and Dafna Sheinwald and Ariel Gera and Leshem Choshen and Michal Shmueli-Scheuer and Yoav Katz},
year={2024},
eprint={2401.14019},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Groups and Tasks
#### Groups
* `unitxt`: Subset of Unitxt tasks that were not in the LM-Eval Harness task catalog, including new types of tasks like multi-label classification, grammatical error correction, and named entity extraction.
#### Tasks
The full list of Unitxt tasks currently supported can be seen under `tasks/unitxt` directory.
### Adding tasks
You can add additional tasks from the Unitxt catalog by generating new LM-Eval yaml files for these datasets.
The Unitxt task yaml files are generated via the `generate_yamls.py` script in the `tasks/unitxt` directory.
To add a yaml file for an existing Unitxt dataset which is not yet in LM-Eval:
1. Add the card name to the `unitxt_datasets` file in the `tasks/unitxt` directory.
2. The generation script contains the default Unitxt [template](https://unitxt.readthedocs.io/en/latest/docs/adding_template.html) used for each kind of NLP task in the `default_template_per_task` dictionary. If the dataset is of a Unitxt task type not previously used in LM-Eval, you will need to add a default template for it to the dictionary (see the sketch after this list).
```
default_template_per_task = {
"tasks.classification.multi_label" : "templates.classification.multi_label.title" ,
"tasks.classification.multi_class" : "templates.classification.multi_class.title" ,
"tasks.summarization.abstractive" : "templates.summarization.abstractive.full",
"tasks.regression.two_texts" : "templates.regression.two_texts.simple",
"tasks.qa.with_context.extractive" : "templates.qa.with_context.simple",
"tasks.grammatical_error_correction" : "templates.grammatical_error_correction.simple",
"tasks.span_labeling.extraction" : "templates.span_labeling.extraction.title"
}
```
3. Run `python generate_yaml.py` (this will generate yaml files for all the datasets listed in the `unitxt_datasets` file).
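As an illustration, extending the dictionary for a task type that is not yet covered might look like the sketch below; the task and template identifiers here are placeholders and should be replaced with real names from the Unitxt catalog.

```python
default_template_per_task = {
    # ... existing entries shown above ...
    # Hypothetical new task type -> default template (replace with real catalog names)
    "tasks.translation.directed": "templates.translation.directed.simple",
}
```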
If you want to add a new dataset to the Unitxt catalog, see the Unitxt documentation:
https://unitxt.readthedocs.io/en/latest/docs/adding_dataset.html
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/unitxt/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/unitxt/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 4094
} |
# Unscramble
### Paper
Language Models are Few-Shot Learners
https://arxiv.org/pdf/2005.14165.pdf
Unscramble is a small battery of 5 “character manipulation” tasks. Each task
involves giving the model a word distorted by some combination of scrambling,
addition, or deletion of characters, and asking it to recover the original word.
Homepage: https://github.com/openai/gpt-3/tree/master/data
### Citation
```
@inproceedings{NEURIPS2020_1457c0d6,
author = {Brown, Tom and Mann, Benjamin and Ryder, Nick and Subbiah, Melanie and Kaplan, Jared D and Dhariwal, Prafulla and Neelakantan, Arvind and Shyam, Pranav and Sastry, Girish and Askell, Amanda and Agarwal, Sandhini and Herbert-Voss, Ariel and Krueger, Gretchen and Henighan, Tom and Child, Rewon and Ramesh, Aditya and Ziegler, Daniel and Wu, Jeffrey and Winter, Clemens and Hesse, Chris and Chen, Mark and Sigler, Eric and Litwin, Mateusz and Gray, Scott and Chess, Benjamin and Clark, Jack and Berner, Christopher and McCandlish, Sam and Radford, Alec and Sutskever, Ilya and Amodei, Dario},
booktitle = {Advances in Neural Information Processing Systems},
editor = {H. Larochelle and M. Ranzato and R. Hadsell and M. F. Balcan and H. Lin},
pages = {1877--1901},
publisher = {Curran Associates, Inc.},
title = {Language Models are Few-Shot Learners},
url = {https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf},
volume = {33},
year = {2020}
}
```
### Groups and Tasks
#### Groups
* `unscramble`
#### Tasks
* `anagrams1` - Anagrams of all but the first and last letter.
* `anagrams2` - Anagrams of all but the first and last 2 letters.
* `cycle_letters` - Cycle letters in a word.
* `random_insertion` - Random insertions in the word that must be removed.
* `reversed_words` - Words spelled backwards that must be reversed.
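As an illustration of the kind of distortion involved, the sketch below produces an `anagrams1`-style corruption (all letters scrambled except the first and last); it is a minimal reconstruction for intuition, not the harness's or OpenAI's data-generation code.

```python
import random

def anagram_inner(word: str, seed: int = 0) -> str:
    """Scramble all but the first and last letter of a word (anagrams1-style)."""
    if len(word) <= 3:
        return word  # too short to scramble
    rng = random.Random(seed)
    inner = list(word[1:-1])
    rng.shuffle(inner)
    return word[0] + "".join(inner) + word[-1]

# A scrambled variant of "corruption"; the model is asked to recover the original.
print(anagram_inner("corruption"))
```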
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant?
* [x] Checked for equivalence with v0.3.0 LM Evaluation Harness | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/unscramble/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/unscramble/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 2610
} |
# WEBQs
### Paper
Title: `Semantic Parsing on Freebase from Question-Answer Pairs`
Abstract: `https://cs.stanford.edu/~pliang/papers/freebase-emnlp2013.pdf`
WebQuestions is a benchmark for question answering. The dataset consists of 6,642
question/answer pairs. The questions are supposed to be answerable by Freebase, a
large knowledge graph. The questions are mostly centered around a single named entity.
The questions are popular ones asked on the web (at least in 2013).
Homepage: `https://worksheets.codalab.org/worksheets/0xba659fe363cb46e7a505c5b6a774dc8a`
### Citation
```
@inproceedings{berant-etal-2013-semantic,
title = "Semantic Parsing on {F}reebase from Question-Answer Pairs",
author = "Berant, Jonathan and
Chou, Andrew and
Frostig, Roy and
Liang, Percy",
booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
month = oct,
year = "2013",
address = "Seattle, Washington, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D13-1160",
pages = "1533--1544",
}
```
### Groups and Tasks
#### Groups
* `freebase`
#### Tasks
* `webqs`: `Questions with multiple accepted answers.`
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/webqs/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/webqs/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1932
} |
# Wikitext
### Paper
Pointer Sentinel Mixture Models
https://arxiv.org/pdf/1609.07843.pdf
The WikiText language modeling dataset is a collection of over 100 million tokens
extracted from the set of verified Good and Featured articles on Wikipedia.
NOTE: This `Task` is based on WikiText-2.
Homepage: https://www.salesforce.com/products/einstein/ai-research/the-wikitext-dependency-language-modeling-dataset/
### Citation
```
@misc{merity2016pointer,
title={Pointer Sentinel Mixture Models},
author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},
year={2016},
eprint={1609.07843},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Groups and Tasks
#### Groups
* Not part of a group yet.
#### Tasks
* `wikitext`: measure perplexity on the Wikitext dataset, via rolling loglikelihoods.
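For intuition, the sketch below estimates token-level perplexity from rolling log-likelihoods with a sliding window; it is not the harness's implementation (which reports word- and byte-level metrics), and the model, file path, and window sizes are illustrative assumptions.

```python
# Sliding-window (rolling) log-likelihood perplexity sketch -- illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")                  # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

text = open("wiki.test.raw").read()                          # assumed local copy of the WikiText-2 test split
ids = tok(text, return_tensors="pt").input_ids
max_len, stride = 1024, 512
nll_sum, n_tokens, prev_end = 0.0, 0, 0

for begin in range(0, ids.size(1), stride):
    end = min(begin + max_len, ids.size(1))
    trg_len = end - prev_end                                 # tokens scored in this window
    window = ids[:, begin:end]
    targets = window.clone()
    targets[:, :-trg_len] = -100                             # mask the context-only part
    with torch.no_grad():
        loss = model(window, labels=targets).loss            # mean NLL over the unmasked targets
    nll_sum += loss.item() * trg_len                         # approximate token weighting
    n_tokens += trg_len
    prev_end = end
    if end == ids.size(1):
        break

print("token-level perplexity:", torch.exp(torch.tensor(nll_sum / n_tokens)).item())
```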
### Checklist
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/wikitext/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/wikitext/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1476
} |
# WinoGrande
### Paper
Title: `WinoGrande: An Adversarial Winograd Schema Challenge at Scale`
Abstract: https://arxiv.org/abs/1907.10641
WinoGrande is a collection of 44k problems, inspired by Winograd Schema Challenge
(Levesque, Davis, and Morgenstern 2011), but adjusted to improve the scale and
robustness against the dataset-specific bias. Formulated as a fill-in-a-blank
task with binary options, the goal is to choose the right option for a given
sentence which requires commonsense reasoning.
NOTE: This evaluation of Winogrande uses partial evaluation as described by
Trinh & Le in Simple Method for Commonsense Reasoning (2018).
See: https://arxiv.org/abs/1806.02847
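A rough sketch of the partial-evaluation idea is given below: substitute each candidate into the blank and compare the log-likelihood of the continuation that follows the blank, conditioned on the filled-in prefix. The model choice and tokenization handling here are illustrative, not the harness's exact setup.

```python
# Sketch of partial evaluation for a fill-in-the-blank schema (illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")                  # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def continuation_logprob(prefix: str, continuation: str) -> float:
    """Log-probability of `continuation` given `prefix` under the model."""
    # Assumes the prefix tokenization is a prefix of the full tokenization,
    # which is approximately true for GPT-2's BPE in this example.
    prefix_ids = tok(prefix, return_tensors="pt").input_ids
    full_ids = tok(prefix + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)    # position t predicts token t+1
    targets = full_ids[0, 1:]
    start = prefix_ids.size(1) - 1                           # first continuation target
    return log_probs[start:].gather(1, targets[start:, None]).sum().item()

sentence = "The trophy didn't fit in the suitcase because _ was too big."
before, after = sentence.split("_")
options = ["the trophy", "the suitcase"]
scores = [continuation_logprob(before + opt, after) for opt in options]
print(options[scores.index(max(scores))])                    # expected: "the trophy"
```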
Homepage: https://leaderboard.allenai.org/winogrande/submissions/public
### Citation
```
@article{sakaguchi2019winogrande,
title={WinoGrande: An Adversarial Winograd Schema Challenge at Scale},
author={Sakaguchi, Keisuke and Bras, Ronan Le and Bhagavatula, Chandra and Choi, Yejin},
journal={arXiv preprint arXiv:1907.10641},
year={2019}
}
```
### Groups and Tasks
#### Groups
* Not part of a group yet.
#### Tasks
* `winogrande`
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/winogrande/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/winogrande/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1815
} |
# WMDP
### Paper
Title: `The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning`
Abstract: `https://arxiv.org/abs/2403.03218`
`The Weapons of Mass Destruction Proxy (WMDP) benchmark is a dataset of 4,157 multiple-choice questions surrounding hazardous knowledge in biosecurity, cybersecurity, and chemical security. WMDP serves as both a proxy evaluation for hazardous knowledge in large language models (LLMs) and a benchmark for unlearning methods to remove such knowledge.`
Homepage: https://wmdp.ai
### Citation
```
@misc{li2024wmdp,
title={The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning},
author={Nathaniel Li and Alexander Pan and Anjali Gopal and Summer Yue and Daniel Berrios and Alice Gatti and Justin D. Li and Ann-Kathrin Dombrowski and Shashwat Goel and Long Phan and Gabriel Mukobi and Nathan Helm-Burger and Rassin Lababidi and Lennart Justen and Andrew B. Liu and Michael Chen and Isabelle Barrass and Oliver Zhang and Xiaoyuan Zhu and Rishub Tamirisa and Bhrugu Bharathi and Adam Khoja and Zhenqi Zhao and Ariel Herbert-Voss and Cort B. Breuer and Andy Zou and Mantas Mazeika and Zifan Wang and Palash Oswal and Weiran Liu and Adam A. Hunt and Justin Tienken-Harder and Kevin Y. Shih and Kemper Talley and John Guan and Russell Kaplan and Ian Steneker and David Campbell and Brad Jokubaitis and Alex Levinson and Jean Wang and William Qian and Kallol Krishna Karmakar and Steven Basart and Stephen Fitz and Mindy Levine and Ponnurangam Kumaraguru and Uday Tupakula and Vijay Varadharajan and Yan Shoshitaishvili and Jimmy Ba and Kevin M. Esvelt and Alexandr Wang and Dan Hendrycks},
year={2024},
eprint={2403.03218},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
### Groups, Tags, and Tasks
#### Groups
* `wmdp`: All 4,157 multiple-choice questions in biosecurity, cybersecurity, and chemical security
#### Tasks
* `wmdp_bio`: 1,520 multiple-choice questions in biosecurity
* `wmdp_cyber`: 2,225 multiple-choice questions in cybersecurity
* `wmdp_chemistry`: 412 multiple-choice questions in chemical security
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/wmdp/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/wmdp/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 2801
} |
# WMT16
### Paper
Title: `Findings of the 2016 Conference on Machine Translation`
Abstract: http://www.aclweb.org/anthology/W/W16/W16-2301
Homepage: https://huggingface.co/datasets/wmt16
### Citation
```
@InProceedings{bojar-EtAl:2016:WMT1,
author = {Bojar, Ond{\v{r}}ej and Chatterjee, Rajen and Federmann, Christian and Graham, Yvette and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Koehn, Philipp and Logacheva, Varvara and Monz, Christof and Negri, Matteo and Neveol, Aurelie and Neves, Mariana and Popel, Martin and Post, Matt and Rubino, Raphael and Scarton, Carolina and Specia, Lucia and Turchi, Marco and Verspoor, Karin and Zampieri, Marcos},
title = {Findings of the 2016 Conference on Machine Translation},
booktitle = {Proceedings of the First Conference on Machine Translation},
month = {August},
year = {2016},
address = {Berlin, Germany},
publisher = {Association for Computational Linguistics},
pages = {131--198},
url = {http://www.aclweb.org/anthology/W/W16/W16-2301}
}
```
### Groups, Tags, and Tasks
#### Tasks
With specific prompt styles
* `wmt-ro-en-t5-prompt`: WMT16 with the prompt template used for T5
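For reference, the T5 convention for translation is a plain task prefix prepended to the source sentence; the exact template used by this task's yaml may differ, so treat the snippet below as an illustration of the general style rather than the definitive prompt.

```python
# Illustrative T5-style translation prompt (the task's exact template may differ).
source = "Acesta este un exemplu."
prompt = f"translate Romanian to English: {source}"
print(prompt)
```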
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/wmt2016/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/wmt2016/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1921
} |
# WSC273
### Paper
Title: `The Winograd Schema Challenge`
Abstract: http://commonsensereasoning.org/2011/papers/Levesque.pdf
A Winograd schema is a pair of sentences that differ in only one or two words
and that contain an ambiguity that is resolved in opposite ways in the two
sentences and requires the use of world knowledge and reasoning for its resolution.
The Winograd Schema Challenge 273 is a collection of 273 such Winograd schemas.
NOTE: This evaluation of Winograd Schema Challenge is based on `partial evaluation`
as described by Trinh & Le in Simple Method for Commonsense Reasoning (2018).
See: https://arxiv.org/abs/1806.02847
Homepage: https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html
### Citation
```
@inproceedings{ea01b9c0db064caca6986b925d75f2bb,
title = "The winograd schema challenge",
abstract = "In this paper, we present an alternative to the Turing Test that has some conceptual and practical advantages. A Wino-grad schema is a pair of sentences that differ only in one or two words and that contain a referential ambiguity that is resolved in opposite directions in the two sentences. We have compiled a collection of Winograd schemas, designed so that the correct answer is obvious to the human reader, but cannot easily be found using selectional restrictions or statistical techniques over text corpora. A contestant in the Winograd Schema Challenge is presented with a collection of one sentence from each pair, and required to achieve human-level accuracy in choosing the correct disambiguation.",
author = "Levesque, {Hector J.} and Ernest Davis and Leora Morgenstern",
year = "2012",
language = "English (US)",
isbn = "9781577355601",
series = "Proceedings of the International Conference on Knowledge Representation and Reasoning",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
pages = "552--561",
booktitle = "13th International Conference on the Principles of Knowledge Representation and Reasoning, KR 2012",
note = "13th International Conference on the Principles of Knowledge Representation and Reasoning, KR 2012 ; Conference date: 10-06-2012 Through 14-06-2012",
}
```
### Groups and Tasks
#### Groups
* Not part of any group yet.
#### Tasks
* `wsc273`
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/wsc273/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/wsc273/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 2962
} |
# XCOPA
### Paper
Title: `XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning`
Abstract: https://ducdauge.github.io/files/xcopa.pdf
The Cross-lingual Choice of Plausible Alternatives dataset is a benchmark to evaluate the ability of machine learning models to transfer commonsense reasoning across languages.
The dataset is the translation and reannotation of the English COPA (Roemmele et al. 2011) and covers 11 languages from 11 families and several areas around the globe.
The dataset is challenging as it requires both the command of world knowledge and the ability to generalise to new languages.
All the details about the creation of XCOPA and the implementation of the baselines are available in the paper.
Homepage: https://github.com/cambridgeltl/xcopa
### Citation
```
@inproceedings{ponti2020xcopa,
title={{XCOPA: A} Multilingual Dataset for Causal Commonsense Reasoning},
author={Edoardo M. Ponti, Goran Glava\v{s}, Olga Majewska, Qianchu Liu, Ivan Vuli\'{c} and Anna Korhonen},
booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
year={2020},
url={https://ducdauge.github.io/files/xcopa.pdf}
}
```
### Groups and Tasks
#### Groups
* `xcopa`
#### Tasks
* `xcopa_et`: Estonian
* `xcopa_ht`: Haitian Creole
* `xcopa_id`: Indonesian
* `xcopa_it`: Italian
* `xcopa_qu`: Cusco-Collao Quechua
* `xcopa_sw`: Kiswahili
* `xcopa_ta`: Tamil
* `xcopa_th`: Thai
* `xcopa_tr`: Turkish
* `xcopa_vi`: Vietnamese
* `xcopa_zh`: Mandarin Chinese
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/xcopa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/xcopa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 2210
} |
# XNLI
### Paper
Title: `XNLI: Evaluating Cross-lingual Sentence Representations`
Abstract: https://arxiv.org/abs/1809.05053
Based on the implementation of @yongzx (see https://github.com/EleutherAI/lm-evaluation-harness/pull/258)
Prompt format (same as XGLM and mGPT):
sentence1 + ", right? " + mask = (Yes|Also|No) + ", " + sentence2
Prediction is the full sequence with the highest likelihood.
Language-specific prompts are translated word-by-word with Google Translate
and may differ from the ones used by mGPT and XGLM (they do not provide their prompts).
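A minimal sketch of this scoring scheme, assuming a generic Hugging Face causal LM (not the exact models or translated prompts used above): build one candidate sequence per label and pick the label whose full sequence has the highest likelihood.

```python
# Sketch of XNLI-style scoring: likelihood of the full prompt per candidate label.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")                  # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sequence_loglikelihood(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    return log_probs.gather(1, targets[:, None]).sum().item()

premise = "The cat sat on the mat"
hypothesis = "An animal is resting."
labels = {"entailment": "Yes", "neutral": "Also", "contradiction": "No"}

scores = {
    name: sequence_loglikelihood(f"{premise}, right? {word}, {hypothesis}")
    for name, word in labels.items()
}
print(max(scores, key=scores.get))                           # label with the highest likelihood
```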
Homepage: https://github.com/facebookresearch/XNLI
### Citation
"""
@InProceedings{conneau2018xnli,
author = "Conneau, Alexis
and Rinott, Ruty
and Lample, Guillaume
and Williams, Adina
and Bowman, Samuel R.
and Schwenk, Holger
and Stoyanov, Veselin",
title = "XNLI: Evaluating Cross-lingual Sentence Representations",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods
in Natural Language Processing",
year = "2018",
publisher = "Association for Computational Linguistics",
location = "Brussels, Belgium",
}
"""
### Groups and Tasks
#### Groups
* `xnli`
#### Tasks
* `xnli_ar`: Arabic
* `xnli_bg`: Bulgarian
* `xnli_de`: German
* `xnli_el`: Greek
* `xnli_en`: English
* `xnli_es`: Spanish
* `xnli_fr`: French
* `xnli_hi`: Hindi
* `xnli_ru`: Russian
* `xnli_sw`: Swahili
* `xnli_th`: Thai
* `xnli_tr`: Turkish
* `xnli_ur`: Urdu
* `xnli_vi`: Vietnamese
* `xnli_zh`: Chinese
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/xnli/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/xnli/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 2223
} |
# XNLIeu
### Paper
Title: XNLIeu: a dataset for cross-lingual NLI in Basque
Abstract: https://arxiv.org/abs/2404.06996
XNLI is a popular Natural Language Inference (NLI) benchmark widely used to evaluate cross-lingual Natural Language Understanding (NLU) capabilities across languages. In this paper, we expand XNLI to include Basque, a low-resource language that can greatly benefit from transfer-learning approaches. The new dataset, dubbed XNLIeu, has been developed by first machine-translating the English XNLI corpus into Basque, followed by a manual post-edition step. We have conducted a series of experiments using mono- and multilingual LLMs to assess a) the effect of professional post-edition on the MT system; b) the best cross-lingual strategy for NLI in Basque; and c) whether the choice of the best cross-lingual strategy is influenced by the fact that the dataset is built by translation. The results show that post-edition is necessary and that the translate-train cross-lingual strategy obtains better results overall, although the gain is lower when tested in a dataset that has been built natively from scratch. Our code and datasets are publicly available under open licenses at https://github.com/hitz-zentroa/xnli-eu.
Homepage: https://github.com/hitz-zentroa/xnli-eu
### Citation
```bibtex
@misc{heredia2024xnlieu,
title={XNLIeu: a dataset for cross-lingual NLI in Basque},
author={Maite Heredia and Julen Etxaniz and Muitze Zulaika and Xabier Saralegi and Jeremy Barnes and Aitor Soroa},
year={2024},
eprint={2404.06996},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Groups, Tags, and Tasks
#### Tags
* `xnli_eu_mt_native`: Includes MT and Native variants of the XNLIeu dataset.
#### Tasks
* `xnli_eu`: XNLI in Basque postedited from MT.
* `xnli_eu_mt`: XNLI in Basque machine translated from English.
* `xnli_eu_native`: XNLI in Basque natively created.
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/xnli_eu/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/xnli_eu/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 2606
} |
# XStoryCloze
### Paper
Title: `Few-shot Learning with Multilingual Language Models`
Abstract: https://arxiv.org/abs/2112.10668
XStoryCloze consists of the professionally translated version of the [English StoryCloze dataset](https://cs.rochester.edu/nlp/rocstories/) (Spring 2016 version) to 10 non-English languages. This dataset is released by Meta AI.
Homepage: https://github.com/facebookresearch/fairseq/pull/4820
### Citation
```
@article{DBLP:journals/corr/abs-2112-10668,
author = {Xi Victoria Lin and
Todor Mihaylov and
Mikel Artetxe and
Tianlu Wang and
Shuohui Chen and
Daniel Simig and
Myle Ott and
Naman Goyal and
Shruti Bhosale and
Jingfei Du and
Ramakanth Pasunuru and
Sam Shleifer and
Punit Singh Koura and
Vishrav Chaudhary and
Brian O'Horo and
Jeff Wang and
Luke Zettlemoyer and
Zornitsa Kozareva and
Mona T. Diab and
Veselin Stoyanov and
Xian Li},
title = {Few-shot Learning with Multilingual Language Models},
journal = {CoRR},
volume = {abs/2112.10668},
year = {2021},
url = {https://arxiv.org/abs/2112.10668},
eprinttype = {arXiv},
eprint = {2112.10668},
timestamp = {Tue, 04 Jan 2022 15:59:27 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2112-10668.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Groups and Tasks
#### Groups
* `xstorycloze`
#### Tasks
* `xstorycloze_ar`: Arabic
* `xstorycloze_en`: English
* `xstorycloze_es`: Spanish
* `xstorycloze_eu`: Basque
* `xstorycloze_hi`: Hindi
* `xstorycloze_id`: Indonesian
* `xstorycloze_my`: Burmese
* `xstorycloze_ru`: Russian
* `xstorycloze_sw`: Swahili
* `xstorycloze_te`: Telugu
* `xstorycloze_zh`: Chinese
### Checklist
For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/xstorycloze/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/xstorycloze/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 2673
} |
# XWinograd
### Paper
Title: `It's All in the Heads: Using Attention Heads as a Baseline for Cross-Lingual Transfer in Commonsense Reasoning`
Abstract: `https://arxiv.org/abs/2106.12066`
Multilingual winograd schema challenge that includes English, French, Japanese, Portuguese, Russian and Chinese. Winograd schema challenges come from the XWinograd dataset introduced in Tikhonov et al. As it only contains 16 Chinese schemas, we add 488 Chinese schemas from clue/cluewsc2020.
Homepage: `https://huggingface.co/datasets/Muennighoff/xwinograd`
### Citation
```
@misc{muennighoff2022crosslingual,
title={Crosslingual Generalization through Multitask Finetuning},
author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
year={2022},
eprint={2211.01786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{tikhonov2021heads,
title={It's All in the Heads: Using Attention Heads as a Baseline for Cross-Lingual Transfer in Commonsense Reasoning},
author={Alexey Tikhonov and Max Ryabinin},
year={2021},
eprint={2106.12066},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Groups and Tasks
#### Groups
* `xwinograd`
#### Tasks
* `xwinograd_en`: Winograd schema challenges in English.
* `xwinograd_fr`: Winograd schema challenges in French.
* `xwinograd_jp`: Winograd schema challenges in Japanese.
* `xwinograd_pt`: Winograd schema challenges in Portuguese.
* `xwinograd_ru`: Winograd schema challenges in Russian.
* `xwinograd_zh`: Winograd schema challenges in Chinese.
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/xwinograd/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/xwinograd/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 2600
} |
# Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to make participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies within all project spaces, and it also applies when
an individual is representing the project or its community in public spaces.
Examples of representing a project or community include using an official
project e-mail address, posting via an official social media account, or acting
as an appointed representative at an online or offline event. Representation of
a project may be further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at <[email protected]>. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq | {
"source": "simplescaling/s1",
"title": "eval/rebase/inference_scaling/finetune/gpt-accelera/CODE_OF_CONDUCT.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/inference_scaling/finetune/gpt-accelera/CODE_OF_CONDUCT.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 3342
} |
# Contributing to gpt-fast
We want to make contributing to this project as easy and transparent as
possible.
## Pull Requests
We actively welcome your pull requests.
1. Fork the repo and create your branch from `main`.
2. If you've added code that should be tested, add tests.
3. If you've changed APIs, update the documentation.
4. Ensure the test suite passes.
5. Make sure your code lints.
6. If you haven't already, complete the Contributor License Agreement ("CLA").
## Contributor License Agreement ("CLA")
In order to accept your pull request, we need you to submit a CLA. You only need
to do this once to work on any of Meta's open source projects.
Complete your CLA here: <https://code.facebook.com/cla>
## Issues
We use GitHub issues to track public bugs. Please ensure your description is
clear and has sufficient instructions to be able to reproduce the issue.
Meta has a [bounty program](https://www.facebook.com/whitehat/) for the safe
disclosure of security bugs. In those cases, please go through the process
outlined on that page and do not file a public issue.
## License
By contributing to `gpt-fast`, you agree that your contributions will be licensed
under the LICENSE file in the root directory of this source tree. | {
"source": "simplescaling/s1",
"title": "eval/rebase/inference_scaling/finetune/gpt-accelera/CONTRIBUTING.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/inference_scaling/finetune/gpt-accelera/CONTRIBUTING.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1245
} |
# gpt-fast
Simple and efficient pytorch-native transformer text generation.
Featuring:
1. Very low latency
2. <1000 lines of python
3. No dependencies other than PyTorch and sentencepiece
4. int8/int4 quantization
5. Speculative decoding
6. Tensor parallelism
7. Supports Nvidia and AMD GPUs
This is *NOT* intended to be a "framework" or "library" - it is intended to show off what kind of performance you can get with native PyTorch :) Please copy-paste and fork as you desire.
For an in-depth walkthrough of what's in this codebase, see this [blog post](https://pytorch.org/blog/accelerating-generative-ai-2/).
## Installation
[Download PyTorch nightly](https://pytorch.org/get-started/locally/)
Install sentencepiece and huggingface_hub
```bash
pip install sentencepiece huggingface_hub
```
To download llama models, go to https://huggingface.co/meta-llama/Llama-2-7b and go through steps to obtain access.
Then login with `huggingface-cli login`
## Downloading Weights
Models tested/supported
```text
openlm-research/open_llama_7b
meta-llama/Llama-2-7b-chat-hf
meta-llama/Llama-2-13b-chat-hf
meta-llama/Llama-2-70b-chat-hf
codellama/CodeLlama-7b-Python-hf
codellama/CodeLlama-34b-Python-hf
```
For example, to convert Llama-2-7b-chat-hf
```bash
export MODEL_REPO=meta-llama/Llama-2-7b-chat-hf
./scripts/prepare.sh $MODEL_REPO
```
## Benchmarks
Benchmarks run on an A100-80GB, power limited to 330W.
| Model | Technique | Tokens/Second | Memory Bandwidth (GB/s) |
| -------- | ------- | ------ | ------ |
| Llama-2-7B | Base | 104.9 | 1397.31 |
| | 8-bit | 155.58 | 1069.20 |
| | 4-bit (G=32) | 196.80 | 862.69 |
| Llama-2-70B | Base | OOM ||
| | 8-bit | 19.13 | 1322.58 |
| | 4-bit (G=32) | 25.25 | 1097.66 |
### Speculative Sampling
[Verifier: Llama-70B (int4), Draft: Llama-7B (int4)](./scripts/speculate_70B_int4.sh): 48.4 tok/s
### Tensor Parallelism
| Model | Number of GPUs | Tokens/Second | Memory Bandwidth (GB/s) |
| -------- | ------- | ------ | ------ |
| Llama-2-7B | 1 | 104.9 | 1397.31 |
| | 2 | 136.27 | 954.01 |
| | 4 | 168.78 | 635.09 |
| | 8 | 179.27 | 395.85 |
| Llama-2-70B | 1 | OOM | |
| | 2 | 20.53 | 1426.41 |
| | 4 | 34.15 | 1204.62 |
| | 8 | 47.25 | 858.28 |
### AMD
Benchmarks run on one GCD of a MI-250x.
| Model | Technique | Tokens/Second | Memory Bandwidth (GB/s) |
| -------- | ------- | ------ | ------ |
| Llama-2-7B | Base | 76.33 | 1028.70 |
| | 8-bit | 101.86 | 700.06 |
## Generate Text
Model definition in `model.py`, generation code in `generate.py`.
```bash
python generate.py --compile --checkpoint_path checkpoints/$MODEL_REPO/model.pth --prompt "Hello, my name is"
```
To squeeze out a little bit more performance, you can also compile the prefill with `--compile_prefill`. This will increase compilation times though.
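For example, combining both flags with the same checkpoint as above:
```bash
python generate.py --compile --compile_prefill --checkpoint_path checkpoints/$MODEL_REPO/model.pth --prompt "Hello, my name is"
```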
## Quantization
### Int8 Weight-Only Quantization
To generate this version of the model
```bash
# Spits out model at checkpoints/$MODEL_REPO/model_int8.pth
python quantize.py --checkpoint_path checkpoints/$MODEL_REPO/model.pth --mode int8
```
To run with int8, just pass the int8 checkpoint to generate.py.
```bash
python generate.py --compile --checkpoint_path checkpoints/$MODEL_REPO/model_int8.pth
```
### Int4 Weight-Only Quantization
To generate int4 version of model
```bash
# Spits out model at checkpoints/$MODEL_REPO/model_int4.g32.pth
python quantize.py --checkpoint_path checkpoints/$MODEL_REPO/model.pth --mode int4 --groupsize 32
```
To run with int4, just pass the int4 checkpoint to generate.py.
```bash
python generate.py --checkpoint_path checkpoints/$MODEL_REPO/model_int4.g32.pth --compile
```
## Speculative Sampling
To generate with speculative sampling, DRAFT_MODEL_REPO should point to a smaller model than MODEL_REPO.
In this example, the "smaller" model is just the int8 quantized version of the model.
```
export DRAFT_MODEL_REPO=meta-llama/Llama-2-7b-chat-hf
python generate.py --compile --checkpoint_path checkpoints/$MODEL_REPO/model.pth --draft_checkpoint_path checkpoints/$DRAFT_MODEL_REPO/model_int8.pth
```
Note: Running on an A100 80GB, albeit power-limited to 330 watts. Empirically, seems like peak bandwidth is about 1700 GB/s.
## Tensor Parallelism
```bash
torchrun --standalone --nproc_per_node=2 generate.py --compile --checkpoint_path checkpoints/$MODEL_REPO/model.pth
```
## Experimental
### Evaluation
We use the EleutherAI evaluation harness to evaluate our model accuracy. To evaluate the accuracy, make sure the evaluation harness is installed and pass your model checkpoint and desired tasks to eval.py.
```bash
python eval.py --checkpoint_path checkpoints/$MODEL_REPO/model.pth --compile --tasks hellaswag winogrande
```
Note: Generative tasks are currently not supported for gpt-fast
Installation Instructions for the evaluation harness: https://github.com/EleutherAI/lm-evaluation-harness/tree/master#install
### GPTQ
We have a pure pytorch implementation of GPTQ that utilizes torch._dynamo.export to access the model structure. You can generate a GPTQ-quantized int4
version of the model by using the same quantize command but adding 'gptq' to the quantization mode, i.e.
```bash
# Spits out model at checkpoints/$MODEL_REPO/model_int4-gptq.g32.pth
python quantize.py --mode int4-gptq --calibration_tasks wikitext --calibration_seq_length 2048
```
You can then eval or generate text with this model in the same way as above.
## License
`gpt-fast` is released under the [BSD 3](https://github.com/pytorch-labs/gpt-fast/main/LICENSE) license.
## Acknowledgements
Thanks to:
* Lightning AI for supporting pytorch and work in flash attention, int8 quantization, and LoRA fine-tuning.
* GGML for driving forward fast, on device inference of LLMs
* Karpathy for spearheading simple, interpretable and fast LLM implementations
* MLC-LLM for pushing 4-bit quantization performance on heterogeneous hardware
"source": "simplescaling/s1",
"title": "eval/rebase/inference_scaling/finetune/gpt-accelera/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/inference_scaling/finetune/gpt-accelera/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 6067
} |
## Install
```
pip3 install dspy-ai
```
Turn off cache at https://github.com/stanfordnlp/dspy/blob/34d8420383ec752037aa271825c1d3bf391e1277/dsp/modules/cache_utils.py#L10.
```
cache_turn_on = False
```
## Benchmark SGLang
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
```
python3 bench_dspy_intro.py --backend sglang
```
## Benchmark TGI
```
docker run --name tgi --rm -ti --gpus all --network host \
-v /home/ubuntu/model_weights/Llama-2-7b-chat-hf:/Llama-2-7b-chat-hf \
ghcr.io/huggingface/text-generation-inference:1.3.0 \
--model-id /Llama-2-7b-chat-hf --num-shard 1 --trust-remote-code \
--max-input-length 2048 --max-total-tokens 4096 \
--port 24000
```
```
python3 bench_dspy_intro.py --backend tgi
```
## Benchmark vLLM
```
python3 -m vllm.entrypoints.openai.api_server --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000
```
```
python3 bench_dspy_intro.py --backend vllm
``` | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/dspy/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/dspy/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 978
} |
## Download the dataset
```
wget -O agent_calls.jsonl "https://drive.google.com/uc?export=download&id=19qLpD45e9JGTKF2cUjJJegwzSUEZEKht"
```
## Run benchmark
Ensure that this benchmark is run in a serial manner (using --parallel 1) to preserve any potential dependencies between requests.
### Benchmark sglang
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
```
python3 bench_sglang.py --num-events 1000 --parallel 1
```
### Benchmark vllm
```
python3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000
```
```
python3 bench_other.py --num-events 1000 --backend vllm --parallel 1
```
### Benchmark guidance
```
python3 bench_other.py --num-events 1000 --backend guidance --parallel 1
``` | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/generative_agents/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/generative_agents/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 816
} |
## Download data
```
wget https://raw.githubusercontent.com/openai/grade-school-math/master/grade_school_math/data/test.jsonl
```
## Run benchmark
### Benchmark sglang
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
```
python3 bench_sglang.py --num-questions 200
```
### Benchmark vllm
```
python3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000
```
```
python3 bench_other.py --num-questions 200 --backend vllm
```
### Benchmark lightllm
```
# A10G
python -m lightllm.server.api_server --tokenizer_mode auto --model_dir ~/model_weights/llama-2-7b-chat-hf --max_total_token_num 16000 --port 22000
```
```
python3 bench_other.py --num-questions 200 --backend lightllm
```
### Benchmark guidance
```
python3 bench_other.py --num-questions 200 --backend guidance --parallel 1
```
### Benchmark lmql
```
CUDA_VISIBLE_DEVICES=0,1 lmql serve-model meta-llama/Llama-2-7b-chat-hf --cuda --port 23000
```
```
python3 bench_other.py --num-questions 100 --backend lmql --parallel 2
``` | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/gsm8k/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/gsm8k/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1115
} |
## Download data
```
wget https://raw.githubusercontent.com/rowanz/hellaswag/master/data/hellaswag_val.jsonl
```
## Run benchmark
### Benchmark sglang
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
```
python3 bench_sglang.py --num-questions 200
```
### Benchmark vllm
```
python3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000
```
```
python3 bench_other.py --num-questions 200 --backend vllm
```
### Benchmark lightllm
```
# A10G
python -m lightllm.server.api_server --tokenizer_mode auto --model_dir ~/model_weights/llama-2-7b-chat-hf --max_total_token_num 16000 --port 22000
```
```
python3 bench_other.py --num-questions 200 --backend lightllm
```
### Benchmark guidance
```
CUDA_VISIBLE_DEVICES=0,1 python3 bench_other.py --num-questions 200 --backend guidance --parallel 1
```
### Benchmark lmql
```
lmql serve-model meta-llama/Llama-2-7b-chat-hf --cuda --port 23000
```
```
python3 bench_other.py --num-questions 200 --backend lmql --port 23000 --parallel 1
``` | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/hellaswag/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/hellaswag/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1111
} |
## Run benchmark
### Build dataset
```
pip install wikipedia
python3 build_dataset.py
```
### Dependencies
```
llama_cpp_python 0.2.19
guidance 0.1.10
vllm 0.2.5
outlines 0.0.22
```
### Benchmark sglang
Run Llama-7B
```
python3 -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
Run Mixtral-8x7B
```
python3 -m sglang.launch_server --model-path mistralai/Mixtral-8x7B-Instruct-v0.1 --port 30000 --tp-size 8
```
Benchmark
```
python3 bench_sglang.py --num-questions 10
```
### Benchmark vllm
Run Llama-7B
```
python3 -m outlines.serve.serve --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000
```
Benchmark
```
python3 bench_other.py --backend vllm --num-questions 10
```
### Benchmark guidance
Run Llama-7B and benchmark
```
python3 bench_other.py --backend guidance --num-questions 10 --parallel 1
``` | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/json_decode_regex/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/json_decode_regex/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 965
} |
## Run benchmark
### Dependencies
```
llama_cpp_python 0.2.38
guidance 0.1.10
vllm 0.2.7
outlines 0.0.25
```
### Build dataset
When benchmarking long document information retrieval, run the following command to build the dataset:
```bash
pip install wikipedia
python3 build_dataset.py
```
### Benchmark sglang
Run Llama-7B
```bash
python3 -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
Benchmark Character Generation
```bash
python3 bench_sglang.py --mode character
```
Benchmark City Information Retrieval
```bash
python3 bench_sglang.py --mode city
```
### Benchmark vllm
Run Llama-7B
```bash
python3 -m outlines.serve.serve --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000
```
Benchmark Character Generation
```bash
python3 bench_other.py --mode character --backend vllm
```
Benchmark City Information Retrieval
```bash
python3 bench_other.py --mode city --backend vllm
```
### Benchmark guidance
Run Llama-7B and benchmark character generation
```bash
python3 bench_other.py --mode character --backend guidance --parallel 1
```
Run Llama-7B and benchmark city information retrieval
```bash
python3 bench_other.py --mode city --backend guidance --parallel 1
``` | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/json_jump_forward/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/json_jump_forward/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1339
} |
### Download data
```
wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
```
### SGLang
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
```
python3 bench_throughput.py --backend srt --tokenizer meta-llama/Llama-2-7b-chat-hf --dataset ShareGPT_V3_unfiltered_cleaned_split.json --num-prompts 10 --request-rate 10 --port 30000
```
### vLLM
```
python3 -m vllm.entrypoints.api_server --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --swap-space 16 --port 21000
```
```
python3 bench_throughput.py --backend vllm --tokenizer meta-llama/Llama-2-7b-chat-hf --dataset ShareGPT_V3_unfiltered_cleaned_split.json --num-prompts 10 --request-rate 10 --port 21000
```
### LightLLM
```
python -m lightllm.server.api_server --model_dir ~/model_weights/Llama-2-7b-chat-hf --max_total_token_num 15600 --tokenizer_mode auto --port 22000
```
```
python3 bench_throughput.py --backend lightllm --tokenizer meta-llama/Llama-2-7b-chat-hf --dataset ShareGPT_V3_unfiltered_cleaned_split.json --num-prompts 10 --request-rate 10 --port 22000
``` | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/latency_throughput/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/latency_throughput/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1169
} |
## Download data
```
wget https://raw.githubusercontent.com/merrymercy/merrymercy.github.io/master/files/random_words.json
python3 gen_data.py --number 1000
```
## Run benchmark
### Benchmark sglang
```
python3 -m sglang.launch_server --model-path codellama/CodeLlama-7b-hf --port 30000
```
```
python3 bench_sglang.py --src-index 600 --num-q 50 --parallel 1
```
### Reference results
```
# original
Accuracy: 0.940, latency: 332.83 s
# parallel encoding (no_adjust, offset = 1000)
Accuracy: 0.760, latency: 238.46 s
# parallel encoding (no_adjust, offset = 3000)
Accuracy: 0.760, latency: 238.46 s
# parallel encoding (no_adjust, offset = 0)
Accuracy: 0.520, latency: 238.46 s
# parallel encoding (adjust_cache)
Accuracy: 0.460, latency: 257.66 s
``` | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/line_retrieval/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/line_retrieval/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 744
} |
## Download benchmark images
```
python3 download_images.py
```
Image benchmark source: https://huggingface.co/datasets/liuhaotian/llava-bench-in-the-wild
### Other Dependencies
```
pip3 install "sglang[all]"
pip3 install "torch>=2.1.2" "transformers>=4.36" pillow
```
## Run benchmark
### Benchmark sglang
Launch a server
```
python3 -m sglang.launch_server --model-path liuhaotian/llava-v1.5-7b --tokenizer-path llava-hf/llava-1.5-7b-hf --port 30000
```
Run benchmark
```
# Run with local models
python3 bench_sglang.py --num-questions 60
# Run with OpenAI models
python3 bench_sglang.py --num-questions 60 --backend gpt-4-vision-preview
```
### Bench LLaVA original code
```
git clone [email protected]:haotian-liu/LLaVA.git
cd LLaVA
git reset --hard 9a26bd1435b4ac42c282757f2c16d34226575e96
pip3 install -e .
cd ~/sglang/benchmark/llava_bench
CUDA_VISIBLE_DEVICES=0 bash bench_hf_llava_bench.sh
```
### Benchmark llama.cpp
```
# Install
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
pip install sse_starlette starlette_context pydantic_settings
# Download weights
mkdir -p ~/model_weights/llava-v1.5-7b/
wget https://huggingface.co/mys/ggml_llava-v1.5-7b/resolve/main/ggml-model-f16.gguf -O ~/model_weights/llava-v1.5-7b/ggml-model-f16.gguf
wget https://huggingface.co/mys/ggml_llava-v1.5-7b/resolve/main/mmproj-model-f16.gguf -O ~/model_weights/llava-v1.5-7b/mmproj-model-f16.gguf
```
```
python3 -m llama_cpp.server --model ~/model_weights/llava-v1.5-7b/ggml-model-f16.gguf --clip_model_path ~/model_weights/llava-v1.5-7b/mmproj-model-f16.gguf --chat_format llava-1-5 --port 23000
OPENAI_BASE_URL=http://localhost:23000/v1 python3 bench_sglang.py --backend gpt-4-vision-preview --num-q 1
``` | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/llava_bench/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/llava_bench/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1722
} |
## Run benchmark
### Benchmark sglang
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
```
python3 bench_sglang.py --num-questions 25 --parallel 8
python3 bench_sglang.py --num-questions 16 --parallel 1
```
### Benchmark vllm
```
python3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000
```
```
python3 bench_other.py --backend vllm --num-questions 25
```
### Benchmark guidance
```
python3 bench_other.py --backend guidance --num-questions 25 --parallel 1
``` | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/llm_judge/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/llm_judge/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 591
} |
## Run benchmark
### Benchmark sglang
```
python3 -m sglang.launch_server --model-path codellama/CodeLlama-7b-instruct-hf --port 30000
```
```
python3 bench_sglang.py --num-questions 5 --parallel 1
```
### Benchmark vllm
```
python3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model codellama/CodeLlama-7b-instruct-hf --disable-log-requests --port 21000 --gpu 0.97
```
```
python3 bench_other.py --backend vllm --num-questions 5
```
### Benchmark guidance
```
python3 bench_other.py --backend guidance --num-questions 5 --parallel 1
```
### Build dataset
```
pip install wikipedia
python3 build_dataset.py
``` | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/long_json_decode/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/long_json_decode/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 630
} |
## Download data
```
wget https://people.eecs.berkeley.edu/~hendrycks/data.tar
tar xf data.tar
```
## Run benchmark
### Benchmark sglang
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
```
python3 bench_sglang.py --nsub 10
```
```
# OpenAI models
python3 bench_sglang.py --backend gpt-3.5-turbo --parallel 8
```
### Benchmark vllm
```
python3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000
```
```
python3 bench_other.py --nsub 10 --backend vllm
```
### Benchmark lightllm
```
# A10G
python -m lightllm.server.api_server --tokenizer_mode auto --model_dir ~/model_weights/llama-2-7b-chat-hf --max_total_token_num 16000 --port 22000
# V100
python -m lightllm.server.api_server --tokenizer_mode auto --model_dir ~/model_weights/llama-2-7b-chat-hf --max_total_token_num 4500 --port 22000
```
```
python3 bench_other.py --nsub 10 --backend lightllm
```
### Benchmark guidance
```
python3 bench_other.py --nsub 10 --backend guidance --parallel 1
```
### Benchmark lmql
```
CUDA_VISIBLE_DEVICES=0,1 lmql serve-model meta-llama/Llama-2-7b-chat-hf --cuda --port 23000
```
```
python3 bench_other.py --nsub 10 --backend lmql --parallel 2
``` | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/mmlu/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/mmlu/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1273
} |
## Run benchmark
### Benchmark sglang
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
```
python3 bench_sglang.py --num-questions 80
```
### Benchmark vllm
```
python3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000
```
```
python3 bench_other.py --num-questions 80 --backend vllm
```
### Benchmark lightllm
```
# A10G
python -m lightllm.server.api_server --tokenizer_mode auto --model_dir ~/model_weights/llama-2-7b-chat-hf --max_total_token_num 16000 --port 22000
```
```
python3 bench_other.py --num-questions 80 --backend lightllm
``` | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/mtbench/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/mtbench/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 672
} |
## Download data
```
wget https://raw.githubusercontent.com/openai/grade-school-math/master/grade_school_math/data/test.jsonl
```
## Run benchmark
### Benchmark sglang
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
```
python3 bench_sglang.py --num-questions 64
python3 bench_sglang.py --num-questions 32 --parallel 1
```
### Benchmark vllm
```
python3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000
```
```
python3 bench_other.py --num-questions 64 --backend vllm
```
### Benchmark lightllm
```
# A10G
python -m lightllm.server.api_server --tokenizer_mode auto --model_dir ~/model_weights/llama-2-7b-chat-hf --max_total_token_num 16000 --port 22000
```
```
python3 bench_other.py --num-questions 64 --backend lightllm
```
### Benchmark guidance
```
python3 bench_other.py --num-questions 8 --backend guidance --parallel 1
``` | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/multi_chain_reasoning/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/multi_chain_reasoning/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 965
} |
## Run benchmark
### Benchmark sglang
```
python3 -m sglang.launch_server --model-path codellama/CodeLlama-7b-instruct-hf --port 30000
```
```
python3 bench_sglang.py --num-questions 10 --parallel 1
```
### Benchmark vllm
```
python3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model codellama/CodeLlama-7b-instruct-hf --disable-log-requests --port 21000 --gpu 0.97
```
```
python3 bench_other.py --backend vllm --num-questions 64
```
### Benchmark guidance
```
python3 bench_other.py --backend guidance --num-questions 32 --parallel 1
```
### Build dataset
```
pip install PyPDF2
python3 build_dataset.py
```
```python
import PyPDF2

# Read the downloaded PDF and concatenate the extracted text of every page.
with open('llama2.pdf', 'rb') as file:
    reader = PyPDF2.PdfReader(file)
    text = ''
    for page_num in range(len(reader.pages)):
        text += reader.pages[page_num].extract_text()

# Write the extracted text to a plain-text file.
with open('output.txt', 'w') as text_file:
    text_file.write(text)
``` | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/multi_document_qa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/multi_document_qa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 928
} |
### Benchmark sglang
Run Llama-7B
```
python3 -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
Run Mixtral-8x7B
(When there is a CUDA out-of-memory error, try to reduce the `--mem-fraction-static`)
```
python3 -m sglang.launch_server --model-path mistralai/Mixtral-8x7B-Instruct-v0.1 --port 30000 --tp-size 8
```
Benchmark (short output)
```
python3 bench_sglang.py --tokenizer meta-llama/Llama-2-7b-chat-hf
```
Benchmark (long output)
```
python3 bench_sglang.py --tokenizer meta-llama/Llama-2-7b-chat-hf --long
```
### Benchmark vLLM
Run Llama-7B
```
python3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000
```
Run Mixtral-8x7B
```
python3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model mistralai/Mixtral-8x7B-Instruct-v0.1 --disable-log-requests --port 21000 --tensor-parallel-size 8
```
Benchmark (short output)
```
python3 bench_other.py --tokenizer meta-llama/Llama-2-7b-chat-hf --backend vllm
```
Benchmark (long output)
```
python3 bench_other.py --tokenizer meta-llama/Llama-2-7b-chat-hf --backend vllm --long
```
### Benchmark guidance
Benchmark Llama-7B (short output)
```
python3 bench_other.py --tokenizer meta-llama/Llama-2-7b-chat-hf --backend guidance --parallel 1
```
Benchmark Llama-7B (long output)
```
python3 bench_other.py --tokenizer meta-llama/Llama-2-7b-chat-hf --backend guidance --parallel 1 --long
``` | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/multi_turn_chat/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/multi_turn_chat/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1476
} |
## Run benchmark
NOTE: This is an implementation for replaying a given trace for throughput/latency benchmark purposes. It is not an actual ReAct agent implementation.
### Benchmark sglang
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
```
python3 bench_sglang.py --num-questions 100
```
### Benchmark vllm
```
python3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000
```
```
python3 bench_other.py --num-questions 100 --backend vllm
```
### Benchmark guidance
```
python3 bench_other.py --num-questions 100 --backend guidance --parallel 1
``` | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/react/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/react/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 677
} |
## Run benchmark
### Benchmark sglang
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
```
python3 bench_sglang.py --num-questions 64
python3 bench_sglang.py --num-questions 32 --parallel 1
```
### Benchmark vllm
```
python3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000
```
```
python3 bench_other.py --backend vllm --num-questions 64
```
### Benchmark guidance
```
python3 bench_other.py --backend guidance --num-questions 32 --parallel 1
``` | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/tip_suggestion/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/tip_suggestion/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 578
} |
## Download data
```
wget https://raw.githubusercontent.com/openai/grade-school-math/master/grade_school_math/data/test.jsonl
```
## Run benchmark
NOTE: This is an implementation for throughput/latency benchmark purposes. The prompts are not tuned to achieve good accuracy on the GSM-8K tasks.
### Benchmark sglang
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
```
python3 bench_sglang.py --num-questions 32
python3 bench_sglang.py --num-questions 16 --parallel 1
```
### Benchmark vllm
```
python3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000
```
```
python3 bench_other.py --num-questions 32 --backend vllm
```
### Benchmark lightllm
```
# A10G
python -m lightllm.server.api_server --tokenizer_mode auto --model_dir ~/model_weights/llama-2-7b-chat-hf --max_total_token_num 16000 --port 22000
```
```
python3 bench_other.py --num-questions 32 --backend lightllm
```
### Benchmark guidance
```
python3 bench_other.py --num-questions 8 --backend guidance --parallel 1
``` | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/tree_of_thought_deep/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/tree_of_thought_deep/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1113
} |
## Download data
```
wget https://raw.githubusercontent.com/openai/grade-school-math/master/grade_school_math/data/test.jsonl
```
## Run benchmark
### Benchmark sglang
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
```
python3 bench_sglang.py --num-questions 32 --parallel 16
python3 bench_sglang.py --num-questions 10 --parallel 1
```
### Benchmark vllm
```
python3 -m vllm.entrypoints.api_server --tokenizer-mode auto --model meta-llama/Llama-2-7b-chat-hf --disable-log-requests --port 21000
```
```
python3 bench_other.py --num-questions 32 --backend vllm
```
### Benchmark lightllm
```
# A10G
python -m lightllm.server.api_server --tokenizer_mode auto --model_dir ~/model_weights/llama-2-7b-chat-hf --max_total_token_num 16000 --port 22000
```
```
python3 bench_other.py --num-questions 32 --backend lightllm
```
### Benchmark guidance
```
python3 bench_other.py --num-questions 32 --backend guidance --parallel 1
``` | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/benchmark/tree_of_thought_v0/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/benchmark/tree_of_thought_v0/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 980
} |
# Arabic COPA
### Paper
Original Title: `COPA`
The Choice Of Plausible Alternatives (COPA) evaluation provides researchers with a tool for assessing progress in open-domain commonsense causal reasoning.
[Homepage](https://people.ict.usc.edu/~gordon/copa.html)
The AlGhafa team translated this dataset into Arabic: [AlGhafa](https://aclanthology.org/2023.arabicnlp-1.21.pdf).
The Arabic version of the dataset is available here: [COPA-ar](https://gitlab.com/tiiuae/alghafa/-/tree/main/arabic-eval/copa_ar).
### Citation
### Groups and Tasks
#### Groups
* Not part of a group yet.
#### Tasks
* `copa_ar`
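For reference, a run of this task with the harness CLI might look like the following (a minimal sketch; the model is only an example and the flags can be adjusted):
```bash
# Sketch: evaluate the Arabic COPA task with an example model.
lm_eval --model hf \
    --model_args pretrained=meta-llama/Llama-2-7b-chat-hf \
    --tasks copa_ar \
    --batch_size 8
```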
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/alghafa/copa_ar/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/alghafa/copa_ar/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1272
} |
# Arabic PIQA
### Paper
Original Title: `PIQA: Reasoning about Physical Commonsense in Natural Language`
Original paper: [PIQA](https://arxiv.org/abs/1911.11641)
Physical Interaction: Question Answering (PIQA) is a physical commonsense
reasoning task and a corresponding benchmark dataset. PIQA was designed to investigate
the physical knowledge of existing models. To what extent are current approaches
actually learning about the world?
[Homepage](https://yonatanbisk.com/piqa)
The AlGhafa team translated this dataset into Arabic: [AlGhafa](https://aclanthology.org/2023.arabicnlp-1.21.pdf).
The Arabic version of the dataset is available here: [PIQA-ar](https://gitlab.com/tiiuae/alghafa/-/tree/main/arabic-eval/pica_ar).
### Citation
### Groups and Tasks
#### Groups
* Not part of a group yet.
#### Tasks
* `piqa_ar`
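For reference, a zero-shot run of this task with the harness CLI might look like the following sketch (the model and flags are only examples):
```bash
# Sketch: evaluate the Arabic PIQA task zero-shot with an example model.
lm_eval --model hf \
    --model_args pretrained=meta-llama/Llama-2-7b-chat-hf \
    --tasks piqa_ar \
    --num_fewshot 0 \
    --batch_size 8
```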
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/alghafa/piqa_ar/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/alghafa/piqa_ar/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 1486
} |
# MultiMedQA (multiple-choice subset)
### Paper
Title: Large Language Models Encode Clinical Knowledge
Abstract: https://arxiv.org/abs/2212.13138
A benchmark combining four existing multiple-choice question answering datasets spanning professional medical exams and research queries.
### Citation
```
@Article{Singhal2023,
author={Singhal, Karan and Azizi, Shekoofeh and Tu, Tao and Mahdavi, S. Sara and Wei, Jason and Chung, Hyung Won and Scales, Nathan and Tanwani, Ajay and Cole-Lewis, Heather and Pfohl, Stephen and Payne, Perry and Seneviratne, Martin and Gamble, Paul and Kelly, Chris and Babiker, Abubakr and Sch{\"a}rli, Nathanael and Chowdhery, Aakanksha and Mansfield, Philip and Demner-Fushman, Dina and Ag{\"u}era y Arcas, Blaise and Webster, Dale and Corrado, Greg S. and Matias, Yossi and Chou, Katherine and Gottweis, Juraj and Tomasev, Nenad and Liu, Yun and Rajkomar, Alvin and Barral, Joelle and Semturs, Christopher and Karthikesalingam, Alan and Natarajan, Vivek},
title={Large language models encode clinical knowledge},
journal={Nature},
year={2023},
month={Aug},
day={01},
volume={620},
number={7972},
pages={172-180},
issn={1476-4687},
doi={10.1038/s41586-023-06291-2},
url={https://doi.org/10.1038/s41586-023-06291-2}
}
```
### Tasks
* [PubMedQA](https://pubmedqa.github.io/) - 1,000 expert-labeled Q&A pairs in which a question is given together with a PubMed abstract as context, and a yes/maybe/no answer must be produced. Unlike the rest of the tasks in this suite, PubMedQA is a closed-domain Q&A task.
* [MedQA](https://github.com/jind11/MedQA) - US Medical Licensing Examination (USMLE) questions with 4 or 5 possible answers. Typically, only the 4-option questions are used.
* [MedMCQA](https://medmcqa.github.io/) - 4-option multiple choice questions from Indian medical entrance examinations, >191k total questions.
* [MMLU](https://arxiv.org/abs/2009.03300) - 4-option multiple choice exam questions from a variety of domains. The following 6 domains are utilized here:
* Anatomy
* Clinical Knowledge
* College Medicine
* Medical Genetics
* Professional Medicine
* College Biology
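For reference, the multiple-choice subset can be evaluated with the harness CLI; the sketch below assumes the combined group is registered under the name `multimedqa` (individual tasks such as `pubmedqa` can be passed instead), and the model is only an example:
```bash
# Sketch: evaluate the MultiMedQA multiple-choice subset in one run.
# The group name and the model are illustrative assumptions.
lm_eval --model hf \
    --model_args pretrained=meta-llama/Llama-2-7b-chat-hf \
    --tasks multimedqa \
    --batch_size 4
```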
Note that MultiMedQA also includes some short-form and long-form Q&A tasks (LiveQA, MedicationQA, HealthSearchQA). Evaluation on these tasks is usually done by experts and is not typically performed automatically, and therefore is ignored here. | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/benchmarks/multimedqa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/benchmarks/multimedqa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 2370
} |
# Multilingual ARC
### Paper
Title: `Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback`
Abstract: https://arxiv.org/abs/2307.16039
A key technology for the development of large language models (LLMs) involves instruction tuning that helps align the models' responses with human expectations to realize impressive learning abilities. Two major approaches for instruction tuning characterize supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), which are currently applied to produce the best commercial LLMs (e.g., ChatGPT). To improve the accessibility of LLMs for research and development efforts, various instruction-tuned open-source LLMs have also been introduced recently, e.g., Alpaca, Vicuna, to name a few. However, existing open-source LLMs have only been instruction-tuned for English and a few popular languages, thus hindering their impacts and accessibility to many other languages in the world. Among a few very recent work to explore instruction tuning for LLMs in multiple languages, SFT has been used as the only approach to instruction-tune LLMs for multiple languages. This has left a significant gap for fine-tuned LLMs based on RLHF in diverse languages and raised important questions on how RLHF can boost the performance of multilingual instruction tuning. To overcome this issue, we present Okapi, the first system with instruction-tuned LLMs based on RLHF for multiple languages. Okapi introduces instruction and response-ranked data in 26 diverse languages to facilitate the experiments and development of future multilingual LLM research. We also present benchmark datasets to enable the evaluation of generative LLMs in multiple languages. Our experiments demonstrate the advantages of RLHF for multilingual instruction over SFT for different base models and datasets. Our framework and resources are released at this https URL.
Homepage: `https://github.com/nlp-uoregon/Okapi`
### Citation
```
@article{dac2023okapi,
title={Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback},
author={Dac Lai, Viet and Van Nguyen, Chien and Ngo, Nghia Trung and Nguyen, Thuat and Dernoncourt, Franck and Rossi, Ryan A and Nguyen, Thien Huu},
journal={arXiv e-prints},
pages={arXiv--2307},
year={2023}
}
```
### Groups and Tasks
#### Groups
- arc_multilingual
#### Tasks
- `arc_{ar,bn,ca,da,de,es,eu,fr,gu,hi,hr,hu,hy,id,it,kn,ml,mr,ne,nl,pt,ro,ru,sk,sr,sv,ta,te,uk,vi,zh}`
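For example, a single language split (German here, chosen arbitrarily) can be evaluated with the harness CLI; a minimal sketch with an example model:
```bash
# Sketch: run the German split of multilingual ARC; swap the language suffix as needed.
lm_eval --model hf \
    --model_args pretrained=meta-llama/Llama-2-7b-chat-hf \
    --tasks arc_de \
    --batch_size 8
```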
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/okapi/arc_multilingual/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/okapi/arc_multilingual/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 3252
} |
# Multilingual HellaSwag
### Paper
Title: `Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback`
Abstract: https://arxiv.org/abs/2307.16039
A key technology for the development of large language models (LLMs) involves instruction tuning that helps align the models' responses with human expectations to realize impressive learning abilities. Two major approaches for instruction tuning characterize supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), which are currently applied to produce the best commercial LLMs (e.g., ChatGPT). To improve the accessibility of LLMs for research and development efforts, various instruction-tuned open-source LLMs have also been introduced recently, e.g., Alpaca, Vicuna, to name a few. However, existing open-source LLMs have only been instruction-tuned for English and a few popular languages, thus hindering their impacts and accessibility to many other languages in the world. Among a few very recent work to explore instruction tuning for LLMs in multiple languages, SFT has been used as the only approach to instruction-tune LLMs for multiple languages. This has left a significant gap for fine-tuned LLMs based on RLHF in diverse languages and raised important questions on how RLHF can boost the performance of multilingual instruction tuning. To overcome this issue, we present Okapi, the first system with instruction-tuned LLMs based on RLHF for multiple languages. Okapi introduces instruction and response-ranked data in 26 diverse languages to facilitate the experiments and development of future multilingual LLM research. We also present benchmark datasets to enable the evaluation of generative LLMs in multiple languages. Our experiments demonstrate the advantages of RLHF for multilingual instruction over SFT for different base models and datasets. Our framework and resources are released at this https URL.
Homepage: `https://github.com/nlp-uoregon/Okapi`
### Citation
```
@article{dac2023okapi,
title={Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback},
author={Dac Lai, Viet and Van Nguyen, Chien and Ngo, Nghia Trung and Nguyen, Thuat and Dernoncourt, Franck and Rossi, Ryan A and Nguyen, Thien Huu},
journal={arXiv e-prints},
pages={arXiv--2307},
year={2023}
}
```
### Groups and Tasks
#### Groups
- hellaswag_multilingual
#### Tasks
- `hellaswag_{ar,bn,ca,da,de,es,eu,fr,gu,hi,hr,hu,hy,id,it,kn,ml,mr,ne,nl,pt,ro,ru,sk,sr,sv,ta,te,uk,vi}`
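For example, either the whole group or a single language split can be passed to the harness CLI; a minimal sketch with an example model:
```bash
# Sketch: evaluate all languages via the group, or a single split such as French.
lm_eval --model hf --model_args pretrained=meta-llama/Llama-2-7b-chat-hf --tasks hellaswag_multilingual
lm_eval --model hf --model_args pretrained=meta-llama/Llama-2-7b-chat-hf --tasks hellaswag_fr
```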
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/okapi/hellaswag_multilingual/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/okapi/hellaswag_multilingual/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 3268
} |
# Multilingual TruthfulQA
### Paper
Title: `Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback`
Abstract: https://arxiv.org/abs/2307.16039
A key technology for the development of large language models (LLMs) involves instruction tuning that helps align the models' responses with human expectations to realize impressive learning abilities. Two major approaches for instruction tuning characterize supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), which are currently applied to produce the best commercial LLMs (e.g., ChatGPT). To improve the accessibility of LLMs for research and development efforts, various instruction-tuned open-source LLMs have also been introduced recently, e.g., Alpaca, Vicuna, to name a few. However, existing open-source LLMs have only been instruction-tuned for English and a few popular languages, thus hindering their impacts and accessibility to many other languages in the world. Among a few very recent work to explore instruction tuning for LLMs in multiple languages, SFT has been used as the only approach to instruction-tune LLMs for multiple languages. This has left a significant gap for fine-tuned LLMs based on RLHF in diverse languages and raised important questions on how RLHF can boost the performance of multilingual instruction tuning. To overcome this issue, we present Okapi, the first system with instruction-tuned LLMs based on RLHF for multiple languages. Okapi introduces instruction and response-ranked data in 26 diverse languages to facilitate the experiments and development of future multilingual LLM research. We also present benchmark datasets to enable the evaluation of generative LLMs in multiple languages. Our experiments demonstrate the advantages of RLHF for multilingual instruction over SFT for different base models and datasets. Our framework and resources are released at this https URL.
Homepage: `https://github.com/nlp-uoregon/Okapi`
### Citation
```
@article{dac2023okapi,
title={Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback},
author={Dac Lai, Viet and Van Nguyen, Chien and Ngo, Nghia Trung and Nguyen, Thuat and Dernoncourt, Franck and Rossi, Ryan A and Nguyen, Thien Huu},
journal={arXiv e-prints},
pages={arXiv--2307},
year={2023}
}
```
### Groups and Tasks
#### Groups
- truthfulqa_multilingual
#### Tasks
- `truthfulqa_{ar,bn,ca,da,de,es,eu,fr,gu,hi,hr,hu,hy,id,it,kn,ml,mr,ne,nl,pt,ro,ru,sk,sr,sv,ta,te,uk,vi,zh}`
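For example, a single language split (Chinese here) can be evaluated with the harness CLI; a minimal sketch with an example model:
```bash
# Sketch: run the Chinese split of multilingual TruthfulQA; swap the language suffix as needed.
lm_eval --model hf \
    --model_args pretrained=meta-llama/Llama-2-7b-chat-hf \
    --tasks truthfulqa_zh \
    --batch_size 8
```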
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/okapi/truthfulqa_multilingual/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/okapi/truthfulqa_multilingual/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 3273
} |
# sglang_triton
Build the docker image:
```
docker build -t sglang-triton .
```
Then do:
```
docker run -ti --gpus=all --network=host --name sglang-triton -v ./models:/mnt/models sglang-triton
```
inside the docker container:
```
cd sglang
python3 -m sglang.launch_server --model-path mistralai/Mistral-7B-Instruct-v0.2 --port 30000 --mem-fraction-static 0.9
```
with another shell, inside the docker container:
```
docker exec -ti sglang-triton /bin/bash
cd /mnt
tritonserver --model-repository=/mnt/models
```
Send request to the server:
```
curl -X POST http://localhost:8000/v2/models/character_generation/generate \
-H "Content-Type: application/json" \
-d '{
"INPUT_TEXT": ["harry"]
}'
``` | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/examples/usage/triton/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/examples/usage/triton/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5206,
"description": "s1: Simple test-time scaling",
"file_size": 704
} |
Legal Disclaimer
Within this source code, the comments in Chinese shall be the original, governing version. Any comment in other languages are for reference only. In the event of any conflict between the Chinese language version comments and other language version comments, the Chinese language version shall prevail.
法律免责声明
关于代码注释部分,中文注释为官方版本,其它语言注释仅做参考。中文注释可能与其它语言注释存在不一致,当中文注释与其它语言注释存在不一致时,请以中文注释为准。 | {
"source": "OpenSPG/KAG",
"title": "LEGAL.md",
"url": "https://github.com/OpenSPG/KAG/blob/master/LEGAL.md",
"date": "2024-09-21T13:56:44",
"stars": 5095,
"description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.",
"file_size": 406
} |
# KAG: Knowledge Augmented Generation
<div align="center">
<a href="https://spg.openkg.cn/en-US">
<img src="./_static/images/OpenSPG-1.png" width="520" alt="openspg logo">
</a>
</div>
<p align="center">
<a href="./README.md">English</a> |
<a href="./README_cn.md">简体中文</a> |
<a href="./README_ja.md">日本語版ドキュメント</a>
</p>
<p align="center">
<a href='https://arxiv.org/pdf/2409.13731'><img src='https://img.shields.io/badge/arXiv-2409.13731-b31b1b'></a>
<a href="https://github.com/OpenSPG/KAG/releases/latest">
<img src="https://img.shields.io/github/v/release/OpenSPG/KAG?color=blue&label=Latest%20Release" alt="Latest Release">
</a>
<a href="https://openspg.yuque.com/ndx6g9/docs_en">
<img src="https://img.shields.io/badge/User%20Guide-1e8b93?logo=readthedocs&logoColor=f5f5f5" alt="User Guide">
</a>
<a href="https://github.com/OpenSPG/KAG/blob/main/LICENSE">
<img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
</a>
</p>
<p align="center">
<a href="https://discord.gg/PURG77zhQ7">
<img src="https://img.shields.io/discord/1329648479709958236?style=for-the-badge&logo=discord&label=Discord" alt="Discord">
</a>
</p>
# 1. What is KAG?
KAG is a logical reasoning and Q&A framework based on the [OpenSPG](https://github.com/OpenSPG/openspg) engine and large language models, which is used to build logical reasoning and Q&A solutions for vertical domain knowledge bases. KAG can effectively overcome the ambiguity of traditional RAG vector similarity calculation and the noise problem that OpenIE introduces into GraphRAG. KAG supports logical reasoning, multi-hop factual Q&A, etc., and significantly outperforms the current SOTA methods.
The goal of KAG is to build a knowledge-enhanced LLM service framework in professional domains, supporting logical reasoning, factual Q&A, etc. KAG fully integrates the logical and factual characteristics of the KGs. Its core features include:
- Knowledge and Chunk Mutual Indexing structure to integrate more complete contextual text information
- Knowledge alignment using conceptual semantic reasoning to alleviate the noise problem caused by OpenIE
- Schema-constrained knowledge construction to support the representation and construction of domain expert knowledge
- Logical form-guided hybrid reasoning and retrieval to support logical reasoning and multi-hop reasoning Q&A
⭐️ Star our repository to stay up-to-date with exciting new features and improvements! Get instant notifications for new releases! 🌟

# 2. Core Features
## 2.1 Knowledge Representation
In the context of private knowledge bases, unstructured data, structured information, and business expert experience often coexist. KAG references the DIKW hierarchy to upgrade SPG to a version that is friendly to LLMs.
For unstructured data such as news, events, logs, and books, as well as structured data like transactions, statistics, and approvals, along with business experience and domain knowledge rules, KAG employs techniques such as layout analysis, knowledge extraction, property normalization, and semantic alignment to integrate raw business data and expert rules into a unified business knowledge graph.

This makes it compatible with schema-free information extraction and schema-constrained expertise construction on the same knowledge type (e.g., entity type, event type), and supports cross-index representation between the graph structure and the original text chunks.
This mutual indexing helps build an inverted index on top of the graph structure and promotes the unified representation and reasoning of logical forms.
## 2.2 Mixed Reasoning Guided by Logic Forms

KAG proposes a logical-form-guided hybrid solving and inference engine.
The engine includes three types of operators: planning, reasoning, and retrieval, which transform natural language problems into problem-solving processes that combine language and symbols.
In this process, each step can use different operators, such as exact-match retrieval, text retrieval, numerical calculation, or semantic reasoning, so as to integrate four different problem-solving processes: retrieval, knowledge graph reasoning, language reasoning, and numerical calculation.
# 3. Release Notes
## 3.1 Latest Updates
* 2025.01.07 : Support for domain knowledge injection, domain schema customization, QFS tasks, visual query analysis, schema-constrained extraction mode, etc.
* 2024.11.21 : Support for Word document upload, model invocation concurrency settings, user experience optimizations, etc.
* 2024.10.25 : KAG initial release
## 3.2 Future Plans
* Logical reasoning optimization, conversational tasks support
* kag-model release, kag solution for event reasoning knowledge graph and medical knowledge graph
* kag front-end open source, distributed build support, mathematical reasoning optimization
# 4. Quick Start
## 4.1 Product-based (for ordinary users)
### 4.1.1 Engine & Dependent Image Installation
* **Recommend System Version:**
```text
macOS users: macOS Monterey 12.6 or later
Linux users: CentOS 7 / Ubuntu 20.04 or later
Windows users: Windows 10 LTSC 2021 or later
```
* **Software Requirements:**
```text
macOS / Linux users: Docker, Docker Compose
Windows users: WSL 2 / Hyper-V, Docker, Docker Compose
```
Use the following commands to download the docker-compose.yml file and launch the services with Docker Compose.
```bash
# set the HOME environment variable (only Windows users need to execute this command)
# set HOME=%USERPROFILE%
curl -sSL https://raw.githubusercontent.com/OpenSPG/openspg/refs/heads/master/dev/release/docker-compose-west.yml -o docker-compose-west.yml
docker compose -f docker-compose-west.yml up -d
```
### 4.1.2 Use the product
Open the default URL of the KAG product in your browser: <http://127.0.0.1:8887>
```text
Default Username: openspg
Default password: openspg@kag
```
See [KAG usage (product mode)](https://openspg.yuque.com/ndx6g9/cwh47i/rs7gr8g4s538b1n7#rtOlA) for detailed introduction.
## 4.2 Toolkit-based (for developers)
### 4.2.1 Engine & Dependent Image Installation
Refer to section 4.1.1 to complete the installation of the engine & dependent images.
### 4.2.2 Installation of KAG
**macOS / Linux developers**
```text
# Create conda env: conda create -n kag-demo python=3.10 && conda activate kag-demo
# Clone code: git clone https://github.com/OpenSPG/KAG.git
# Install KAG: cd KAG && pip install -e .
```
**Windows developers**
```text
# Install the official Python 3.8.10 or later, install Git.
# Create and activate Python venv: py -m venv kag-demo && kag-demo\Scripts\activate
# Clone code: git clone https://github.com/OpenSPG/KAG.git
# Install KAG: cd KAG && pip install -e .
```
### 4.2.3 Use the toolkit
Please refer to the [KAG usage (developer mode)](https://openspg.yuque.com/ndx6g9/cwh47i/rs7gr8g4s538b1n7#cikso) guide for a detailed introduction to the toolkit. You can then use the built-in components to reproduce the performance results on the built-in datasets, and apply those components to new business scenarios.
# 5. Technical Architecture

The KAG framework includes three parts: kg-builder, kg-solver, and kag-model. This release only involves the first two parts; kag-model will be gradually open-sourced in the future.
kg-builder implements a knowledge representation that is friendly to large language models (LLMs). Based on the hierarchical structure of DIKW (data, information, knowledge and wisdom), it upgrades SPG's knowledge representation ability, and is compatible with schema-free information extraction and schema-constrained expertise construction on the same knowledge type (such as entity type and event type). It also supports mutual index representation between the graph structure and the original text chunks, which enables efficient retrieval in the reasoning and question-answering stage.
kg-solver uses a logical symbol-guided hybrid solving and reasoning engine that includes three types of operators: planning, reasoning, and retrieval, to transform natural language problems into a problem-solving process that combines language and symbols. In this process, each step can use different operators, such as exact match retrieval, text retrieval, numerical calculation or semantic reasoning, so as to realize the integration of four different problem solving processes: Retrieval, Knowledge Graph reasoning, language reasoning and numerical calculation.
# 6. Community & Support
**GitHub**: <https://github.com/OpenSPG/KAG>
**Website**: <https://spg.openkg.cn/>
## Discord <a href="https://discord.gg/PURG77zhQ7"> <img src="https://img.shields.io/discord/1329648479709958236?style=for-the-badge&logo=discord&label=Discord" alt="Discord"></a>
Join our [Discord](https://discord.gg/PURG77zhQ7) community.
## WeChat
Follow OpenSPG Official Account to get technical articles and product updates about OpenSPG and KAG.
<img src="./_static/images/openspg-qr.png" alt="Contact Us: OpenSPG QR-code" width="200">
Scan the QR code below to join our WeChat group.
<img src="./_static/images/robot-qr.JPG" alt="Join WeChat group" width="200">
# 7. Differences between KAG, RAG, and GraphRAG
**KAG introduction and applications**: <https://github.com/orgs/OpenSPG/discussions/52>
# 8. Citation
If you use this software, please cite it as below:
* [KAG: Boosting LLMs in Professional Domains via Knowledge Augmented Generation](https://arxiv.org/abs/2409.13731)
* KGFabric: A Scalable Knowledge Graph Warehouse for Enterprise Data Interconnection
```bibtex
@article{liang2024kag,
title={KAG: Boosting LLMs in Professional Domains via Knowledge Augmented Generation},
author={Liang, Lei and Sun, Mengshu and Gui, Zhengke and Zhu, Zhongshu and Jiang, Zhouyu and Zhong, Ling and Qu, Yuan and Zhao, Peilong and Bo, Zhongpu and Yang, Jin and others},
journal={arXiv preprint arXiv:2409.13731},
year={2024}
}
@article{yikgfabric,
title={KGFabric: A Scalable Knowledge Graph Warehouse for Enterprise Data Interconnection},
author={Yi, Peng and Liang, Lei and Da Zhang, Yong Chen and Zhu, Jinye and Liu, Xiangyu and Tang, Kun and Chen, Jialin and Lin, Hao and Qiu, Leijie and Zhou, Jun}
}
```
# License
[Apache License 2.0](LICENSE) | {
"source": "OpenSPG/KAG",
"title": "README.md",
"url": "https://github.com/OpenSPG/KAG/blob/master/README.md",
"date": "2024-09-21T13:56:44",
"stars": 5095,
"description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.",
"file_size": 10666
} |
# 大模型知识服务框架 KAG
<div align="center">
<a href="https://spg.openkg.cn/en-US">
<img src="./_static/images/OpenSPG-1.png" width="520" alt="openspg logo">
</a>
</div>
<p align="center">
<a href="./README.md">English</a> |
<a href="./README_cn.md">简体中文</a> |
<a href="./README_ja.md">日本語版ドキュメント</a>
</p>
<p align="center">
<a href='https://arxiv.org/pdf/2409.13731'><img src='https://img.shields.io/badge/arXiv-2409.13731-b31b1b'></a>
<a href="https://github.com/OpenSPG/KAG/releases/latest">
<img src="https://img.shields.io/github/v/release/OpenSPG/KAG?color=blue&label=Latest%20Release" alt="Latest Release">
</a>
<a href="https://openspg.yuque.com/ndx6g9/docs">
<img src="https://img.shields.io/badge/用户手册-1e8b93?logo=readthedocs&logoColor=f5f5f5" alt="用户手册">
</a>
<a href="https://github.com/OpenSPG/KAG/blob/main/LICENSE">
<img height="21" src="https://img.shields.io/badge/License-Apache--2.0-ffffff?labelColor=d4eaf7&color=2e6cc4" alt="license">
</a>
</p>
# 1. KAG 是什么
KAG 是基于 [OpenSPG](https://github.com/OpenSPG/openspg) 引擎和大型语言模型的逻辑推理问答框架,用于构建垂直领域知识库的逻辑推理问答解决方案。KAG 可以有效克服传统 RAG 向量相似度计算的歧义性和 OpenIE 引入的 GraphRAG 的噪声问题。KAG 支持逻辑推理、多跳事实问答等,并且明显优于目前的 SOTA 方法。
KAG 的目标是在专业领域构建知识增强的 LLM 服务框架,支持逻辑推理、事实问答等。KAG 充分融合了 KG 的逻辑性和事实性特点,其核心功能包括:
* 知识与 Chunk 互索引结构,以整合更丰富的上下文文本信息
* 利用概念语义推理进行知识对齐,缓解 OpenIE 引入的噪音问题
* 支持 Schema-Constraint 知识构建,支持领域专家知识的表示与构建
* 逻辑符号引导的混合推理与检索,实现逻辑推理和多跳推理问答
⭐️点击右上角的 Star 关注 KAG,可以获取最新发布的实时通知!🌟

# 2. KAG 核心功能
## 2.1 LLM 友好的语义化知识管理
私域知识库场景,非结构化数据、结构化信息、业务专家经验 往往三者共存,KAG 提出了一种对大型语言模型(LLM)友好的知识表示框架,在 DIKW(数据、信息、知识和智慧)的层次结构基础上,将 SPG 升级为对 LLM 友好的版本,命名为 LLMFriSPG。
这使得它能够在同一知识类型(如实体类型、事件类型)上兼容无 schema 约束的信息提取和有 schema 约束的专业知识构建,并支持图结构与原始文本块之间的互索引表示。
这种互索引表示有助于基于图结构的倒排索引的构建,并促进了逻辑形式的统一表示、推理和检索。同时通过知识理解、语义对齐等进一步降低信息抽取的噪声,提升知识的准确率和一致性。

## 2.2 逻辑符号引导的混合推理引擎
KAG 提出了一种逻辑符号引导的混合求解和推理引擎。该引擎包括三种类型的运算符:规划、推理和检索,将自然语言问题转化为结合语言和符号的问题求解过程。
在这个过程中,每一步都可以利用不同的运算符,如精确匹配检索、文本检索、数值计算或语义推理,从而实现四种不同问题求解过程的集成:图谱推理、逻辑计算、Chunk 检索和 LLM 推理。

# 3. 版本发布
## 3.1 最近更新
* 2025.01.07 : 支持 领域知识注入、领域 schema 自定义、摘要生成类任务支持、可视化图分析查询、schema-constraint模式抽取等
* 2024.11.21 : 支持 Word 文档上传、知识库删除、模型调用并发度设置、用户体验优化等
* 2024.10.25 : KAG 首次发布
## 3.2 后续计划
* 逻辑推理 优化、对话式任务支持
* kag-model 发布、事理图谱 和 医疗图谱的 kag 解决方案发布
* kag 前端开源、分布式构建支持、数学推理 优化
# 4. 快速开始
## 4.1 基于产品(面向普通用户)
### 4.1.1 引擎&依赖 镜像安装
* **推荐系统版本:**
```text
macOS 用户:macOS Monterey 12.6 或更新版本
Linux 用户:CentOS 7 / Ubuntu 20.04 或更新版本
Windows 用户:Windows 10 LTSC 2021 或更新版本
```
* **软件要求:**
```text
macOS / Linux 用户:Docker,Docker Compose
Windows 用户:WSL 2 / Hyper-V,Docker,Docker Compose
```
使用以下命令下载 docker-compose.yml 并用 Docker Compose 启动服务。
```bash
# 设置 HOME 环境变量(仅 Windows 用户需要执行)
# set HOME=%USERPROFILE%
curl -sSL https://raw.githubusercontent.com/OpenSPG/openspg/refs/heads/master/dev/release/docker-compose.yml -o docker-compose.yml
docker compose -f docker-compose.yml up -d
```
### 4.1.2 使用
浏览器打开 KAG 产品默认链接:<http://127.0.0.1:8887> 。
```text
Default Username: openspg
Default password: openspg@kag
```
具体使用请参考 [KAG使用(产品模式)](https://openspg.yuque.com/ndx6g9/0.6/quzq24g4esal7q17#JQH6Y)。
## 4.2 基于工具包(面向开发者)
### 4.2.1 引擎&依赖 镜像安装
参考 4.1 部分完成引擎&依赖的镜像安装。
### 4.2.2 KAG 安装
**macOS / Linux 开发者**
```text
# 安装 Python 虚拟环境:conda create -n kag-demo python=3.10 && conda activate kag-demo
# 代码 clone:git clone https://github.com/OpenSPG/KAG.git
# KAG 安装: cd KAG && pip install -e .
```
**Windows 开发者**
```
# 安装官方 Python 3.8.10 或更新版本,安装 Git。
# 创建、激活 Python 虚拟环境:py -m venv kag-demo && kag-demo\Scripts\activate
# 代码 clone:git clone https://github.com/OpenSPG/KAG.git
# KAG 安装: cd KAG && pip install -e .
```
### 4.2.3 使用
开发者可以参考 [KAG使用(开发者模式)](https://openspg.yuque.com/ndx6g9/0.6/quzq24g4esal7q17#MRgKi),基于 KAG 内置的各种组件,实现内置数据集的效果复现 + 新场景的落地。
# 5. 技术架构

KAG 框架包括 kg-builder、kg-solver、kag-model 三部分。本次发布只涉及前两部分,kag-model 将在后续逐步开源发布。
kg-builder 实现了一种对大型语言模型(LLM)友好的知识表示,在 DIKW(数据、信息、知识和智慧)的层次结构基础上,升级 SPG 知识表示能力,在同一知识类型(如实体类型、事件类型)上兼容无 schema 约束的信息提取和有 schema 约束的专业知识构建,并支持图结构与原始文本块之间的互索引表示,为推理问答阶段的高效检索提供支持。
kg-solver 采用逻辑形式引导的混合求解和推理引擎,该引擎包括三种类型的运算符:规划、推理和检索,将自然语言问题转化为结合语言和符号的问题求解过程。在这个过程中,每一步都可以利用不同的运算符,如精确匹配检索、文本检索、数值计算或语义推理,从而实现四种不同问题求解过程的集成:检索、知识图谱推理、语言推理和数值计算。
# 6. 联系我们
**GitHub**: <https://github.com/OpenSPG/KAG>
**OpenSPG**: <https://spg.openkg.cn/>
<img src="./_static/images/openspg-qr.png" alt="联系我们:OpenSPG 二维码" width="200">
# 7. KAG 与 RAG、GraphRAG 差异
**KAG introduction and applications**: <https://github.com/orgs/OpenSPG/discussions/52>
# 8. 引用
如果您使用本软件,请以下面的方式引用:
* [KAG: Boosting LLMs in Professional Domains via Knowledge Augmented Generation](https://arxiv.org/abs/2409.13731)
* KGFabric: A Scalable Knowledge Graph Warehouse for Enterprise Data Interconnection
```bibtex
@article{liang2024kag,
title={KAG: Boosting LLMs in Professional Domains via Knowledge Augmented Generation},
author={Liang, Lei and Sun, Mengshu and Gui, Zhengke and Zhu, Zhongshu and Jiang, Zhouyu and Zhong, Ling and Qu, Yuan and Zhao, Peilong and Bo, Zhongpu and Yang, Jin and others},
journal={arXiv preprint arXiv:2409.13731},
year={2024}
}
@article{yikgfabric,
title={KGFabric: A Scalable Knowledge Graph Warehouse for Enterprise Data Interconnection},
author={Yi, Peng and Liang, Lei and Da Zhang, Yong Chen and Zhu, Jinye and Liu, Xiangyu and Tang, Kun and Chen, Jialin and Lin, Hao and Qiu, Leijie and Zhou, Jun}
}
```
# 许可协议
[Apache License 2.0](LICENSE) | {
"source": "OpenSPG/KAG",
"title": "README_cn.md",
"url": "https://github.com/OpenSPG/KAG/blob/master/README_cn.md",
"date": "2024-09-21T13:56:44",
"stars": 5095,
"description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.",
"file_size": 5624
} |
# KAG: 知識強化生成
[English version](./README.md)
[中文版文档](./README_cn.md)
## 1. KAGとは
検索強化生成(RAG)技術は、ドメインアプリケーションと大規模言語モデルの統合を促進します。しかし、RAGには、ベクトル類似性と知識推論の相関性のギャップが大きいことや、数値、時間関係、専門家のルールなどの知識ロジックに対して鈍感であるという問題があり、これが専門知識サービスの実装を妨げています。
2024年10月24日、OpenSPGはv0.5をリリースし、知識強化生成(KAG)の専門ドメイン知識サービスフレームワークを正式にリリースしました。KAGは、知識グラフとベクトル検索の利点を最大限に活用し、RAGの課題を解決するために、4つの側面から大規模言語モデルと知識グラフを双方向に強化することを目的としています:(1)LLMに優しい知識表現、(2)知識グラフと元のテキストフラグメントの相互インデックス、(3)論理形式に基づくハイブリッド推論エンジン、(4)意味推論との知識整合。
KAGは、NaiveRAG、HippoRAGなどの方法に比べて、マルチホップ質問応答タスクで顕著に優れています。hotpotQAでのF1スコアは19.6%相対的に向上し、2wikiでのF1スコアは33.5%相対的に向上しました。私たちは、KAGをAnt Groupの2つの専門知識質問応答タスク(電子政府質問応答と電子健康質問応答)に成功裏に適用し、RAG方法に比べて専門性が大幅に向上しました。
⭐️ リポジトリをスター登録して、エキサイティングな新機能やアップデートを最新の状態に保ちましょう!すべての新しいリリースに関する即時通知を受け取れます!🌟

### 1.1 技術アーキテクチャ

KAGフレームワークは、kg-builder、kg-solver、kag-modelの3つの部分で構成されています。このリリースでは最初の2つの部分のみが含まれており、kag-modelは今後段階的にオープンソースリリースされる予定です。
kg-builderは、大規模言語モデル(LLM)に優しい知識表現を実装しています。DIKW(データ、情報、知識、知恵)の階層構造に基づいて、SPGの知識表現能力を向上させ、同じ知識タイプ(例えば、エンティティタイプ、イベントタイプ)でスキーマ制約のない情報抽出とスキーマ制約のある専門知識構築の両方に対応し、グラフ構造と元のテキストブロックの相互インデックス表現をサポートし、推論質問応答段階の効率的な検索をサポートします。
kg-solverは、論理形式に基づくハイブリッド推論エンジンを使用しており、計画、推論、検索の3種類のオペレーターを含み、自然言語の問題を言語と記号を組み合わせた問題解決プロセスに変換します。このプロセスでは、各ステップで異なるオペレーター(例えば、正確な一致検索、テキスト検索、数値計算、または意味推論)を使用することができ、検索、知識グラフ推論、言語推論、数値計算の4つの異なる問題解決プロセスの統合を実現します。
### 1.2 知識表現
プライベートナレッジベースのコンテキストでは、非構造化データ、構造化情報、ビジネスエキスパートの経験が共存することがよくあります。KAGはDIKW階層を参照して、SPGをLLMに優しいバージョンにアップグレードします。ニュース、イベント、ログ、書籍などの非構造化データ、および取引、統計、承認などの構造化データ、ビジネス経験、ドメイン知識ルールに対して、KAGはレイアウト分析、知識抽出、プロパティ正規化、意味整合などの技術を使用して、元のビジネスデータと専門家のルールを統一されたビジネス知識グラフに統合します。

これにより、同じ知識タイプ(例えば、エンティティタイプ、イベントタイプ)でスキーマ制約のない情報抽出とスキーマ制約のある専門知識構築の両方に対応し、グラフ構造と元のテキストブロックの相互インデックス表現をサポートします。この相互インデックス表現は、グラフ構造に基づく逆インデックスの構築に役立ち、論理形式の統一表現と推論を促進します。
### 1.3 論理形式に基づくハイブリッド推論

KAG proposes a logical-form-guided hybrid reasoning engine. The engine includes three types of operators: planning, reasoning, and retrieval, and transforms natural language questions into a problem-solving process that combines language and symbols. In this process, each step can use a different operator (e.g., exact-match retrieval, text retrieval, numerical calculation, or semantic reasoning), realizing the integration of four distinct problem-solving processes: retrieval, knowledge graph reasoning, language reasoning, and numerical calculation.
## 2. How effective is it?
### 2.1 Performance on public datasets (multi-hop reasoning)

After optimization, we not only validated KAG's adaptability in vertical domains but also compared it with existing RAG methods on multi-hop question answering over general datasets. The results show that it is clearly better than SOTA methods, with the F1 score improved by 33.5% on 2wiki and by 19.6% on hotpotQA. We are continuing to refine the framework and have demonstrated its effectiveness through end-to-end experiments and ablation metrics, validating the framework with logical-symbol-driven reasoning and conceptual alignment.
### 2.2 Performance in domain knowledge scenarios (risk mining)
#### 2.2.1 Definition of expert rules
* Define the rule for identifying a "gambling app"
**define riskAppTaxo rule**
```text
Define (s:App)-[p:belongTo]->(o:`TaxOfRiskApp`/`GamblingApp`) {
Structure {
(s)
}
Constraint {
R1("risk label marked as gambling") s.riskMark like "%Gambling%"
}
}
```
* Define the rule for identifying an "app developer"
**define app developer rule**
```text
Define (s:Person)-[p:developed]->(o:App) {
Structure {
(s)-[:hasDevice]->(d:Device)-[:install]->(o)
}
Constraint {
deviceNum = group(s,o).count(d)
R1("device installed same app"): deviceNum > 5
}
}
```
* Define the rule for identifying a "developer of gambling apps"
**define a RiskUser of gambling app rule**
```text
Define (s:Person)-[p:belongTo]->(o:`TaxOfRiskUser`/`DeveloperOfGamblingApp`) {
Structure {
(s)-[:developed]->(app:`TaxOfRiskApp`/`GamblingApp`)
}
Constraint {
}
}
```
#### 2.2.2 Business data

#### 2.2.3 Reasoning process

The key steps of the reasoning process are as follows.
* Convert the natural language question into an executable logical expression. This relies on the project's concept modeling; please refer to the black-market mining documentation.
* Submit the converted logical expression to the OpenSPG resolver for execution to obtain the user classification result.
* Generate the answer based on the user classification result.
Combined with OpenSPG concept modeling, KAG lowers the difficulty of converting natural language into graph queries, turning data-oriented conversion into classification-concept-oriented conversion, so that natural language question-answering applications can be quickly implemented on the original OpenSPG project.
## 3. How to use it?
### 3.1 Product-based (for general users)
#### 3.1.1 Install the engine & dependency images
* **Recommended system versions:**
```text
macOS users: macOS Monterey 12.6 or later
Linux users: CentOS 7 / Ubuntu 20.04 or later
Windows users: Windows 10 LTSC 2021 or later
```
* **Software requirements:**
```text
macOS / Linux users: Docker, Docker Compose
Windows users: WSL 2 / Hyper-V, Docker, Docker Compose
```
Use the following commands to download the docker-compose.yml file and start the services with Docker Compose.
```bash
# Set the HOME environment variable (only Windows users need to execute this)
# set HOME=%USERPROFILE%
curl -sSL https://raw.githubusercontent.com/OpenSPG/openspg/refs/heads/master/dev/release/docker-compose.yml -o docker-compose.yml
docker compose -f docker-compose.yml up -d
```
#### 3.1.2 Use the product
Open the default URL of the KAG product in your browser: <http://127.0.0.1:8887>
See the [Product Usage](https://openspg.yuque.com/ndx6g9/cwh47i/rs7gr8g4s538b1n7#rtOlA) guide for a detailed introduction.
### 3.2 Toolkit-based (for developers)
#### 3.2.1 Install the engine & dependency images
Refer to Section 3.1 to complete the installation of the engine and dependency images.
#### 3.2.2 Install KAG
**macOS / Linux developers**
```text
# Create a conda environment: conda create -n kag-demo python=3.10 && conda activate kag-demo
# Clone the code: git clone https://github.com/OpenSPG/KAG.git
# Install KAG: cd KAG && pip install -e .
```
**Windows developers**
```text
# Install the official Python 3.8.10 or later, and install Git.
# Create and activate a Python virtual environment: py -m venv kag-demo && kag-demo\Scripts\activate
# Clone the code: git clone https://github.com/OpenSPG/KAG.git
# Install KAG: cd KAG && pip install -e .
```
#### 3.2.3 Use the toolkit
Please refer to the [Quick Start](https://openspg.yuque.com/ndx6g9/cwh47i/rs7gr8g4s538b1n7#cikso) guide for a detailed introduction. Afterwards, you can use the built-in components to reproduce the performance results on the built-in datasets and apply these components to new business scenarios.
## 4. How to extend it?
### 4.1 Extend KAG capabilities
If the built-in components provided by KAG do not meet your requirements, you can extend the kag-builder and kag-solver implementations yourself. See [KAG-Builder Extension](https://openspg.yuque.com/ndx6g9/cwh47i/ephl8hgth3gcgucn) and [KAG-Solver Extension](https://openspg.yuque.com/ndx6g9/cwh47i/rqdwk204izit2hsm).
#### 4.1.1 kag-builder extension

KAG uses a BuilderChain to connect components such as the reader, splitter, mapping, extractor, aligner, and vectorizer. Developers can either complete graph construction with the BuilderChains predefined by KAG, or assemble predefined components into their own BuilderChain.
At the same time, developers can customize the components in the builder and embed them in a BuilderChain for execution; a minimal sketch of a custom component follows the interface listing below.
```text
kag
├──interface
│ ├── builder
│ │ ├── aligner_abc.py
│ │ ├── extractor_abc.py
│ │ ├── mapping_abc.py
│ │ ├── reader_abc.py
│ │ ├── splitter_abc.py
│ │ ├── vectorizer_abc.py
│ │ └── writer_abc.py
```
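As a minimal illustration of such a custom component, the sketch below subclasses the splitter interface listed above (``kag/interface/builder/splitter_abc.py``) and registers it for use in a BuilderChain. The import path, base-class name, registration call, and ``invoke`` signature are assumptions made for this sketch; check the interface definitions in the repository for the exact API.
```python
# Hypothetical custom splitter sketch; class and method names are illustrative, not the official API.
from typing import List

from kag.interface import SplitterABC  # assumed import path for the splitter interface


@SplitterABC.register("paragraph_splitter")  # assumed registration mechanism
class ParagraphSplitter(SplitterABC):
    """Split a chunk into paragraphs on blank lines (illustration only)."""

    def invoke(self, input, **kwargs) -> List:
        # `input` is assumed to be a chunk object exposing a `content` string.
        return [p.strip() for p in input.content.split("\n\n") if p.strip()]
```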
#### 4.1.2 kag-solver extension
kag-solver executes a solver-pipeline composed of reasoner, generator, and reflector components. KAG provides default reasoners, generators, and reflectors, and developers can also provide custom implementations based on the following APIs; a sketch of a custom retriever follows the listing below.
```text
kag
├── solver
│ ├── logic
│ │ └── solver_pipeline.py
├── interface
├── retriever
│ ├── chunk_retriever_abc.py
│ └── kg_retriever_abc.py
└── solver
├── kag_generator_abc.py
├── kag_memory_abc.py
├── kag_reasoner_abc.py
├── kag_reflector_abc.py
└── lf_planner_abc.py
```
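For example, a custom chunk retriever could implement the retriever interface listed above (``chunk_retriever_abc.py``). As with the builder sketch, the import path, base-class name, and method name below are assumptions; consult the abstract classes in ``kag/interface`` for the actual signatures.
```python
# Hypothetical custom chunk retriever sketch; names are illustrative, not the official API.
from typing import List, Tuple

from kag.interface import ChunkRetrieverABC  # assumed import path


@ChunkRetrieverABC.register("keyword_chunk_retriever")  # assumed registration name
class KeywordChunkRetriever(ChunkRetrieverABC):
    """Rank candidate chunks by naive keyword overlap with the query (illustration only)."""

    corpus: List[Tuple[str, str]] = []  # assumed in-memory (chunk_id, text) pairs

    def recall_docs(self, query: str, top_k: int = 10, **kwargs) -> List[str]:
        query_terms = set(query.lower().split())
        scored = [
            (chunk_id, len(query_terms & set(text.lower().split())))
            for chunk_id, text in self.corpus
        ]
        scored.sort(key=lambda item: item[1], reverse=True)
        return [chunk_id for chunk_id, _ in scored[:top_k]]
```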
### 4.2 Adapt KAG to custom models
#### 4.2.1 Adapt generative models
KAG supports connecting to MaaS APIs compatible with OpenAI services, such as Qwen / DeepSeek / GPT, as well as to local models deployed with vLLM / Ollama. Developers can add support for custom model services based on the llm_client interface; a sketch follows the listing below.
```text
kag
├── common
├── llm
├── client
│ ├── llm_client.py
│ ├── ollama_client.py
│ ├── openai_client.py
│ ├── vllm_client.py
```
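A custom model service could be wired in by subclassing the LLM client listed above (``kag/common/llm/client/llm_client.py``). The base-class import, registration call, constructor arguments, and call signature below are assumptions for the sketch; the endpoint request/response shape is likewise hypothetical.
```python
# Hypothetical custom LLM client sketch; names and signatures are illustrative, not the official API.
import requests

from kag.common.llm.client.llm_client import LLMClient  # assumed import path


@LLMClient.register("my_http_llm")  # assumed registration mechanism
class MyHttpLLMClient(LLMClient):
    """Call a self-hosted text-generation endpoint over HTTP (illustration only)."""

    def __init__(self, base_url: str, model: str, **kwargs):
        super().__init__(**kwargs)  # parent constructor arguments are an assumption
        self.base_url = base_url
        self.model = model

    def __call__(self, prompt: str) -> str:
        resp = requests.post(
            f"{self.base_url}/generate",  # hypothetical endpoint
            json={"model": self.model, "prompt": prompt},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["text"]  # hypothetical response field
```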
#### 4.2.2 Adapt representational models
KAG supports calling representational (embedding) models, including the OpenAI embedding service and the bge-m3 model deployed with Ollama. It also supports loading and using local embedding models.
```text
kag
├── common
├── vectorizer
│ ├── vectorizer.py
│ ├── openai_vectorizer.py
│ ├── local_bge_m3_vectorizer.py
│ ├── local_bge_vectorizer.py
```
### 4.3 Integrate KAG with other frameworks
When integrating with other frameworks, external business data and expert knowledge are used as input, and the kag-builder pipeline is invoked to complete knowledge graph construction. kag-solver is then invoked to complete the Q&A reasoning process, and the reasoning results and intermediate steps are exposed to the business system.
The way other frameworks can integrate KAG is briefly illustrated below, followed by a minimal code sketch.

## 5. Roadmap
* Domain knowledge injection, realizing the fusion of domain concept graphs and entity graphs
* kag-model optimization, improving the efficiency of KG construction and Q&A
* Hallucination suppression via knowledge-logic constraints
## 6. Contact us
**GitHub**: <https://github.com/OpenSPG/KAG>
**OpenSPG**: <https://spg.openkg.cn/>
<img src="./_static/images/openspg-qr.png" alt="Contact us: OpenSPG QR code" width="200">
# Citation
If you use this software, please cite it as below:
* [KAG: Boosting LLMs in Professional Domains via Knowledge Augmented Generation](https://arxiv.org/abs/2409.13731)
* KGFabric: A Scalable Knowledge Graph Warehouse for Enterprise Data Interconnection
```bibtex
@article{liang2024kag,
title={KAG: Boosting LLMs in Professional Domains via Knowledge Augmented Generation},
author={Liang, Lei and Sun, Mengshu and Gui, Zhengke and Zhu, Zhongshu and Jiang, Zhouyu and Zhong, Ling and Qu, Yuan and Zhao, Peilong and Bo, Zhongpu and Yang, Jin and others},
journal={arXiv preprint arXiv:2409.13731},
year={2024}
}
@article{yikgfabric,
title={KGFabric: A Scalable Knowledge Graph Warehouse for Enterprise Data Interconnection},
author={Yi, Peng and Liang, Lei and Da Zhang, Yong Chen and Zhu, Jinye and Liu, Xiangyu and Tang, Kun and Chen, Jialin and Lin, Hao and Qiu, Leijie and Zhou, Jun}
}
```
# License
[Apache License 2.0](LICENSE) | {
"source": "OpenSPG/KAG",
"title": "README_ja.md",
"url": "https://github.com/OpenSPG/KAG/blob/master/README_ja.md",
"date": "2024-09-21T13:56:44",
"stars": 5095,
"description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.",
"file_size": 9029
} |
---
sidebar_position: 1
slug: /release_notes
---
# Release notes
Key features, improvements and bug fixes in the latest releases.
## Version 0.5.1 (2024-11-21)
This version focuses on addressing user feedback and introduces a series of new features and user experience optimizations.
---
### **New Features**
- **Support for Word Documents**
Users can now directly upload `.doc` or `.docx` files to streamline the knowledge base construction process.
<img src="https://github.com/user-attachments/assets/86ad11d8-42ed-44f4-91ab-f9a7c6346df2" width="600" >
- **New Project Deletion API**
Quickly clear and delete projects and related data through an API, compatible with the latest Neo4j image version.
- **Model Call Concurrency Setting**
Added the `builder.model.execute.num` parameter, with a default concurrency of 5, to improve efficiency in large-scale knowledge base construction.
<img src="https://github.com/user-attachments/assets/ac7653bd-bf0c-464f-839b-8385ae6fb2c2" width="600" >
- **Improved Logging**
Added a startup success marker in the logs to help users quickly verify if the service is running correctly.
<img src="https://github.com/user-attachments/assets/56d42e84-d6c7-4743-a50c-5bf38fc87f58" width="600" >
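Returning to the project deletion API above: the example READMEs later in this document call the same endpoint with ``curl``; for completeness, a standard-library Python equivalent is sketched below. The server address and project id are placeholders to be replaced with actual values.
```python
# Call the project deletion API (same endpoint as the curl examples in the example READMEs).
from urllib.request import urlopen

project_id = 1  # replace with the id of the KAG project to delete
url = f"http://127.0.0.1:8887/project/api/delete?projectId={project_id}"
with urlopen(url) as resp:
    print(resp.status, resp.read().decode("utf-8"))
```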
---
### **Fixed issues**
- **Neo4j Memory Overflow Issues**
Addressed memory overflow problems in Neo4j during large-scale data processing, ensuring stable operation for extensive datasets.
- **Concurrent Neo4j Query Execution Issues**
Optimized execution strategies to resolve Graph Data Science (GDS) library conflicts or failures in high-concurrency scenarios.
- **Schema Preview Prefix Issue**
Fixed issues where extracted schema preview entities lacked necessary prefixes, ensuring consistency between extracted entities and predefined schemas.
- **Default Neo4j Password for Project Creation/Modification**
Automatically fills a secure default password if none is specified during project creation or modification, simplifying the configuration process.
- **Frontend Bug Fixes**
Resolved issues with JS dependencies relying on external addresses and embedded all frontend files into the image. Improved the knowledge base management interface for a smoother user experience.
- **Empty Node/Edge Type in Neo4j Writes**
Enhanced writing logic to handle empty node or edge types during knowledge graph construction, preventing errors or data loss in such scenarios.
## Version 0.5 (2024-10-25)
Retrieval-Augmented Generation (RAG) technology promotes the integration of domain applications with large models. However, RAG has problems such as a large gap between vector similarity and knowledge reasoning correlation, and insensitivity to knowledge logic (such as numerical values, time relationships, expert rules, etc.), which hinder the implementation of professional knowledge services. On October 25, OpenSPG officially released the professional domain knowledge service framework for Knowledge Augmented Generation (KAG).
---
### KAG: Knowledge Augmented Generation
KAG aims to make full use of the advantages of knowledge graphs and vector retrieval, and bi-directionally enhances large language models and knowledge graphs in four aspects to solve RAG challenges:
(1) LLM-friendly semantic knowledge management
(2) Mutual indexing between the knowledge graph and the original text chunks
(3) Logical-form-guided hybrid inference engine
(4) Knowledge alignment based on semantic reasoning
KAG is significantly better than NaiveRAG, HippoRAG and other methods in multi-hop question answering tasks. The F1 score on hotpotQA is relatively improved by 19.6%, and the F1 score on 2wiki is relatively improved by 33.5%.
The KAG framework includes three parts: kg-builder, kg-solver, and kag-model. This release only involves the first two parts, kag-model will be gradually open source release in the future.
#### kg-builder
implements a knowledge representation that is friendly to large language models (LLMs). Based on the hierarchical structure of DIKW (data, information, knowledge and wisdom), it upgrades SPG's knowledge representation ability, is compatible with both schema-free information extraction and schema-constrained professional knowledge construction on the same knowledge type (such as entity types and event types), and supports mutual-index representation between the graph structure and the original text chunks, which enables efficient retrieval in the reasoning question-answering stage.
#### kg-solver
uses a logical symbol-guided hybrid solving and reasoning engine that includes three types of operators: planning, reasoning, and retrieval, to transform natural language problems into a problem-solving process that combines language and symbols. In this process, each step can use different operators, such as exact match retrieval, text retrieval, numerical calculation or semantic reasoning, so as to realize the integration of four different problem solving processes: Retrieval, Knowledge Graph reasoning, language reasoning and numerical calculation. | {
"source": "OpenSPG/KAG",
"title": "docs/release_notes.md",
"url": "https://github.com/OpenSPG/KAG/blob/master/docs/release_notes.md",
"date": "2024-09-21T13:56:44",
"stars": 5095,
"description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.",
"file_size": 5126
} |
# KAG Examples
[English](./README.md) |
[简体中文](./README_cn.md)
## 1. Precondition
Please refer to [Quick Start](https://openspg.yuque.com/ndx6g9/cwh47i/rs7gr8g4s538b1n7) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.
## 2. Create a knowledge base
### 2.1 Create the project
#### Step 1: Enter the examples directory
```bash
cd kag/examples
```
#### Step 2: Edit project configuration
```bash
vim ./example_config.yaml
```
```yaml
#------------project configuration start----------------#
openie_llm: &openie_llm
api_key: key
base_url: https://api.deepseek.com
model: deepseek-chat
type: maas
chat_llm: &chat_llm
api_key: key
base_url: https://api.deepseek.com
model: deepseek-chat
type: maas
vectorize_model: &vectorize_model
api_key: key
base_url: https://api.siliconflow.cn/v1/
model: BAAI/bge-m3
type: openai
vector_dimensions: 1024
vectorizer: *vectorize_model
log:
level: INFO
project:
biz_scene: default
host_addr: http://127.0.0.1:8887
id: "1"
language: en
namespace: TwoWikiTest
#------------project configuration end----------------#
#------------kag-builder configuration start----------------#
kag_builder_pipeline:
chain:
type: unstructured_builder_chain # kag.builder.default_chain.DefaultUnstructuredBuilderChain
extractor:
type: schema_free_extractor # kag.builder.component.extractor.schema_free_extractor.SchemaFreeExtractor
llm: *openie_llm
ner_prompt:
type: default_ner # kag.builder.prompt.default.ner.OpenIENERPrompt
std_prompt:
type: default_std # kag.builder.prompt.default.std.OpenIEEntitystandardizationdPrompt
triple_prompt:
type: default_triple # kag.builder.prompt.default.triple.OpenIETriplePrompt
reader:
type: dict_reader # kag.builder.component.reader.dict_reader.DictReader
post_processor:
type: kag_post_processor # kag.builder.component.postprocessor.kag_postprocessor.KAGPostProcessor
splitter:
type: length_splitter # kag.builder.component.splitter.length_splitter.LengthSplitter
split_length: 100000
window_length: 0
vectorizer:
type: batch_vectorizer # kag.builder.component.vectorizer.batch_vectorizer.BatchVectorizer
vectorize_model: *vectorize_model
writer:
type: kg_writer # kag.builder.component.writer.kg_writer.KGWriter
num_threads_per_chain: 1
num_chains: 16
scanner:
type: 2wiki_dataset_scanner # kag.builder.component.scanner.dataset_scanner.MusiqueCorpusScanner
#------------kag-builder configuration end----------------#
#------------kag-solver configuration start----------------#
search_api: &search_api
type: openspg_search_api #kag.solver.tools.search_api.impl.openspg_search_api.OpenSPGSearchAPI
graph_api: &graph_api
type: openspg_graph_api #kag.solver.tools.graph_api.impl.openspg_graph_api.OpenSPGGraphApi
exact_kg_retriever: &exact_kg_retriever
type: default_exact_kg_retriever # kag.solver.retriever.impl.default_exact_kg_retriever.DefaultExactKgRetriever
el_num: 5
llm_client: *chat_llm
search_api: *search_api
graph_api: *graph_api
fuzzy_kg_retriever: &fuzzy_kg_retriever
type: default_fuzzy_kg_retriever # kag.solver.retriever.impl.default_fuzzy_kg_retriever.DefaultFuzzyKgRetriever
el_num: 5
vectorize_model: *vectorize_model
llm_client: *chat_llm
search_api: *search_api
graph_api: *graph_api
chunk_retriever: &chunk_retriever
type: default_chunk_retriever # kag.solver.retriever.impl.default_fuzzy_kg_retriever.DefaultFuzzyKgRetriever
llm_client: *chat_llm
recall_num: 10
rerank_topk: 10
kag_solver_pipeline:
memory:
type: default_memory # kag.solver.implementation.default_memory.DefaultMemory
llm_client: *chat_llm
max_iterations: 3
reasoner:
type: default_reasoner # kag.solver.implementation.default_reasoner.DefaultReasoner
llm_client: *chat_llm
lf_planner:
type: default_lf_planner # kag.solver.plan.default_lf_planner.DefaultLFPlanner
llm_client: *chat_llm
vectorize_model: *vectorize_model
lf_executor:
type: default_lf_executor # kag.solver.execute.default_lf_executor.DefaultLFExecutor
llm_client: *chat_llm
force_chunk_retriever: true
exact_kg_retriever: *exact_kg_retriever
fuzzy_kg_retriever: *fuzzy_kg_retriever
chunk_retriever: *chunk_retriever
merger:
type: default_lf_sub_query_res_merger # kag.solver.execute.default_sub_query_merger.DefaultLFSubQueryResMerger
vectorize_model: *vectorize_model
chunk_retriever: *chunk_retriever
generator:
type: default_generator # kag.solver.implementation.default_generator.DefaultGenerator
llm_client: *chat_llm
generate_prompt:
type: resp_simple # kag/examples/2wiki/solver/prompt/resp_generator.py
reflector:
type: default_reflector # kag.solver.implementation.default_reflector.DefaultReflector
llm_client: *chat_llm
#------------kag-solver configuration end----------------#
```
Update the generative model configurations ``openie_llm`` and ``chat_llm`` and the representational model configuration ``vectorize_model`` in the configuration file.
You need to fill in correct ``api_key``s. If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.
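As an optional sanity check before creating the project, the snippet below (an illustration, not part of KAG) loads the YAML and warns about any ``api_key`` still left at the placeholder value ``key``. It assumes PyYAML is available in the environment.
```python
# Optional sanity check: warn about api_key placeholders left in example_config.yaml.
import yaml  # PyYAML; assumed to be available alongside KAG

with open("example_config.yaml", "r", encoding="utf-8") as f:
    config = yaml.safe_load(f)

for name in ("openie_llm", "chat_llm", "vectorize_model"):
    section = config.get(name) or {}
    if section.get("api_key", "key") == "key":
        print(f"[warning] {name}.api_key still uses the placeholder value 'key'")
```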
#### Step 3: Create the project (i.e. knowledge base in product mode)
```bash
knext project create --config_path ./example_config.yaml
```
#### Step 4: Initial contents of the directory
After creating the project, a directory with the same name as the ``namespace`` field in the ``project`` configuration (e.g., ``TwoWikiTest`` in this example) will be created under the ``kag/examples`` directory, and the KAG framework project code will be initialized.
Users can modify one or more of the following files to complete the customization of business-specific knowledge graph construction and reasoning-based question answering.
```text
.
├── builder
│ ├── __init__.py
│ ├── data
│ │ └── __init__.py
│ ├── indexer.py
│ └── prompt
│ └── __init__.py
├── kag_config.yaml
├── reasoner
│ └── __init__.py
├── schema
│ ├── TwoWikiTest.schema
│ └── __init__.py
└── solver
├── __init__.py
├── data
│ └── __init__.py
└── prompt
└── __init__.py
```
### 2.2 Update the project (Optional)
If there are configuration changes, you can refer to this section to update the configuration information to the server.
#### Step 1: Enter the project directory
```bash
cd kag/examples/TwoWikiTest
```
#### Step 2: Edit project configuration
**Note**: The embedding vectors generated by different representation models can vary significantly. It is recommended not to update the ``vectorize_model`` configuration after the project is created. If you need to update the ``vectorize_model`` configuration, please create a new project.
```bash
vim ./kag_config.yaml
```
#### Step 3: Run the update command
After editing the project configuration, use the ``knext project update`` command to update the local configuration information to the OpenSPG server.
```bash
knext project update --proj_path .
```
## 3. Import documents
### Step 1: Enter the project directory
```bash
cd kag/examples/TwoWikiTest
```
### Step 2: Retrieve corpus data
The test corpus data for the 2wiki dataset is located at ``kag/examples/2wiki/builder/data/2wiki_corpus.json``, containing 6,119 documents and 1,000 question-answer pairs. To quickly complete the entire process, there is also a ``2wiki_sub_corpus.json`` file in the same directory, which contains only 3 documents. We will use this smaller dataset as an example for the experiment.
Copy it to the directory with the same name as the ``TwoWikiTest`` project:
```bash
cp ../2wiki/builder/data/2wiki_sub_corpus.json builder/data
```
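To get a quick feel for the data before building, you can peek at the copied corpus with a few lines of plain Python. The snippet assumes the file is a JSON array of document records; adjust the loading code if it is JSON-lines.
```python
# Quick look at the copied corpus file (illustrative; assumes a JSON array of documents).
import json

with open("builder/data/2wiki_sub_corpus.json", "r", encoding="utf-8") as f:
    corpus = json.load(f)

print(f"{len(corpus)} documents loaded")
print(json.dumps(corpus[0], ensure_ascii=False, indent=2)[:500])  # preview the first record
```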
### Step 3: Edit the schema (Optional)
Edit the schema file ``schema/TwoWikiTest.schema``. For an introduction of OpenSPG schema, please refer to [Declarative Schema](https://openspg.yuque.com/ndx6g9/cwh47i/fiq6zum3qtzr7cne).
### Step 4: Commit the schema to OpenSPG server
```bash
knext schema commit
```
### Step 5: Execute the build task
Define the build task in the file ``builder/indexer.py``:
```python
import os
import logging
from kag.common.registry import import_modules_from_path
from kag.builder.runner import BuilderChainRunner
logger = logging.getLogger(__name__)
def buildKB(file_path):
from kag.common.conf import KAG_CONFIG
runner = BuilderChainRunner.from_config(
KAG_CONFIG.all_config["kag_builder_pipeline"]
)
runner.invoke(file_path)
logger.info(f"\n\nbuildKB successfully for {file_path}\n\n")
if __name__ == "__main__":
import_modules_from_path(".")
dir_path = os.path.dirname(__file__)
# Set file_path to the path of the corpus file prepared earlier
file_path = os.path.join(dir_path, "data/2wiki_sub_corpus.json")
buildKB(file_path)
```
Run the ``indexer.py`` script to complete the knowledge graph construction for unstructured data.
```bash
cd builder
python indexer.py
```
After the build script is started, a checkpoint directory for the task will be generated in the current working directory, recording the checkpoints and statistical information of the build process.
```text
ckpt
├── chain
├── extractor
├── kag_checkpoint_0_1.ckpt
├── postprocessor
├── reader
└── splitter
```
You can view the extraction task statistics, such as how many nodes/edges were extracted from each document, using the following command:
```bash
less ckpt/kag_checkpoint_0_1.ckpt
```
To see how many document entries were successfully written to the graph database, use the following command:
```bash
wc -l ckpt/kag_checkpoint_0_1.ckpt
```
The KAG framework provides checkpoint-based resumption functionality. If the task is interrupted due to a program error or other external factors (e.g., insufficient LLM invocation credits), you can rerun ``indexer.py``. KAG will automatically load the checkpoint file and reuse the existing results.
### Step 6: Inspect the constructed knowledge graph
Currently, OpenSPG-KAG provides the [Knowledge Exploration](https://openspg.yuque.com/ndx6g9/cwh47i/mzq74eaynm4rqx4b) capability in product mode, along with the corresponding API documentation [HTTP API Reference](https://openspg.yuque.com/ndx6g9/cwh47i/qvbgge62p7argtd2).

## 4. Reasoning-based question answering
### Step 1: Enter the project directory
```bash
cd kag/examples/TwoWikiTest
```
### Step 2: Edit the QA script
```bash
vim ./solver/qa.py
```
Paste the following content into ``qa.py``.
```python
import json
import logging
import os
import time
from concurrent.futures import ThreadPoolExecutor, as_completed
from tqdm import tqdm
from kag.common.benchmarks.evaluate import Evaluate
from kag.solver.logic.solver_pipeline import SolverPipeline
from kag.common.conf import KAG_CONFIG
from kag.common.registry import import_modules_from_path
from kag.common.checkpointer import CheckpointerManager
logger = logging.getLogger(__name__)
class EvaFor2wiki:
"""
init for kag client
"""
def __init__(self):
pass
"""
qa from knowledge base,
"""
def qa(self, query):
resp = SolverPipeline.from_config(KAG_CONFIG.all_config["kag_solver_pipeline"])
answer, traceLog = resp.run(query)
logger.info(f"\n\nso the answer for '{query}' is: {answer}\n\n")
return answer, traceLog
if __name__ == "__main__":
import_modules_from_path("./prompt")
evalObj = EvaFor2wiki()
evalObj.qa("Which Stanford University professor works on Alzheimer's?")
```
### Step 3: Execute the QA task
```bash
cd solver
python qa.py
```
## 5. Other built-in examples
You can enter the [kag/examples](.) directory to explore the built-in examples provided in the source code of KAG.
* [musique](./musique/README.md) (Multi-hop Q&A)
* [twowiki](./2wiki/README.md) (Multi-hop Q&A)
* [hotpotqa](./hotpotqa/README.md) (Multi-hop Q&A)
* [Risk Mining Knowledge Graph](./riskmining/README.md)
* [Enterprise Supply Chain Knowledge Graph](./supplychain/README.md)
* [Medical Knowledge Graph](./medicine/README.md) | {
"source": "OpenSPG/KAG",
"title": "kag/examples/README.md",
"url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/README.md",
"date": "2024-09-21T13:56:44",
"stars": 5095,
"description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.",
"file_size": 12349
} |
# KAG Examples
[English](./README.md) |
[简体中文](./README_cn.md)
## 1. Precondition
Please refer to [Quick Start](https://openspg.yuque.com/ndx6g9/0.6/quzq24g4esal7q17) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.
## 2. Create a knowledge base
### 2.1 Create the project
#### Step 1: Enter the examples directory
```bash
cd kag/examples
```
#### Step 2: Edit project configuration
```bash
vim ./example_config.yaml
```
```yaml
#------------project configuration start----------------#
openie_llm: &openie_llm
api_key: key
base_url: https://api.deepseek.com
model: deepseek-chat
type: maas
chat_llm: &chat_llm
api_key: key
base_url: https://api.deepseek.com
model: deepseek-chat
type: maas
vectorize_model: &vectorize_model
api_key: key
base_url: https://api.siliconflow.cn/v1/
model: BAAI/bge-m3
type: openai
vector_dimensions: 1024
vectorizer: *vectorize_model
log:
level: INFO
project:
biz_scene: default
host_addr: http://127.0.0.1:8887
id: "1"
language: en
namespace: TwoWikiTest
#------------project configuration end----------------#
#------------kag-builder configuration start----------------#
kag_builder_pipeline:
chain:
type: unstructured_builder_chain # kag.builder.default_chain.DefaultUnstructuredBuilderChain
extractor:
type: schema_free_extractor # kag.builder.component.extractor.schema_free_extractor.SchemaFreeExtractor
llm: *openie_llm
ner_prompt:
type: default_ner # kag.builder.prompt.default.ner.OpenIENERPrompt
std_prompt:
type: default_std # kag.builder.prompt.default.std.OpenIEEntitystandardizationdPrompt
triple_prompt:
type: default_triple # kag.builder.prompt.default.triple.OpenIETriplePrompt
reader:
type: dict_reader # kag.builder.component.reader.dict_reader.DictReader
post_processor:
type: kag_post_processor # kag.builder.component.postprocessor.kag_postprocessor.KAGPostProcessor
splitter:
type: length_splitter # kag.builder.component.splitter.length_splitter.LengthSplitter
split_length: 100000
window_length: 0
vectorizer:
type: batch_vectorizer # kag.builder.component.vectorizer.batch_vectorizer.BatchVectorizer
vectorize_model: *vectorize_model
writer:
type: kg_writer # kag.builder.component.writer.kg_writer.KGWriter
num_threads_per_chain: 1
num_chains: 16
scanner:
type: 2wiki_dataset_scanner # kag.builder.component.scanner.dataset_scanner.MusiqueCorpusScanner
#------------kag-builder configuration end----------------#
#------------kag-solver configuration start----------------#
search_api: &search_api
type: openspg_search_api #kag.solver.tools.search_api.impl.openspg_search_api.OpenSPGSearchAPI
graph_api: &graph_api
type: openspg_graph_api #kag.solver.tools.graph_api.impl.openspg_graph_api.OpenSPGGraphApi
exact_kg_retriever: &exact_kg_retriever
type: default_exact_kg_retriever # kag.solver.retriever.impl.default_exact_kg_retriever.DefaultExactKgRetriever
el_num: 5
llm_client: *chat_llm
search_api: *search_api
graph_api: *graph_api
fuzzy_kg_retriever: &fuzzy_kg_retriever
type: default_fuzzy_kg_retriever # kag.solver.retriever.impl.default_fuzzy_kg_retriever.DefaultFuzzyKgRetriever
el_num: 5
vectorize_model: *vectorize_model
llm_client: *chat_llm
search_api: *search_api
graph_api: *graph_api
chunk_retriever: &chunk_retriever
type: default_chunk_retriever # kag.solver.retriever.impl.default_fuzzy_kg_retriever.DefaultFuzzyKgRetriever
llm_client: *chat_llm
recall_num: 10
rerank_topk: 10
kag_solver_pipeline:
memory:
type: default_memory # kag.solver.implementation.default_memory.DefaultMemory
llm_client: *chat_llm
max_iterations: 3
reasoner:
type: default_reasoner # kag.solver.implementation.default_reasoner.DefaultReasoner
llm_client: *chat_llm
lf_planner:
type: default_lf_planner # kag.solver.plan.default_lf_planner.DefaultLFPlanner
llm_client: *chat_llm
vectorize_model: *vectorize_model
lf_executor:
type: default_lf_executor # kag.solver.execute.default_lf_executor.DefaultLFExecutor
llm_client: *chat_llm
force_chunk_retriever: true
exact_kg_retriever: *exact_kg_retriever
fuzzy_kg_retriever: *fuzzy_kg_retriever
chunk_retriever: *chunk_retriever
merger:
type: default_lf_sub_query_res_merger # kag.solver.execute.default_sub_query_merger.DefaultLFSubQueryResMerger
vectorize_model: *vectorize_model
chunk_retriever: *chunk_retriever
generator:
type: default_generator # kag.solver.implementation.default_generator.DefaultGenerator
llm_client: *chat_llm
generate_prompt:
type: resp_simple # kag/examples/2wiki/solver/prompt/resp_generator.py
reflector:
type: default_reflector # kag.solver.implementation.default_reflector.DefaultReflector
llm_client: *chat_llm
#------------kag-solver configuration end----------------#
```
Update the generative model configurations ``openie_llm`` and ``chat_llm`` and the representational model configuration ``vectorize_model`` in the configuration file.
You need to fill in correct ``api_key``s. If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.
#### Step 3: Create the project (corresponding one-to-one with a knowledge base in product mode)
```bash
knext project create --config_path ./example_config.yaml
```
#### Step 4: Initial contents of the directory
After creating the project, a directory with the same name as the ``namespace`` field in the ``project`` configuration (``TwoWikiTest`` in this example) will be created under the ``kag/examples`` directory, and the KAG framework project code will be initialized.
Users can modify one or more of the following files to complete the customization of business-specific knowledge graph construction and reasoning-based question answering.
```text
.
├── builder
│ ├── __init__.py
│ ├── data
│ │ └── __init__.py
│ ├── indexer.py
│ └── prompt
│ └── __init__.py
├── kag_config.yaml
├── reasoner
│ └── __init__.py
├── schema
│ ├── TwoWikiTest.schema
│ └── __init__.py
└── solver
├── __init__.py
├── data
│ └── __init__.py
└── prompt
└── __init__.py
```
### 2.2 Update the project (Optional)
If there are configuration changes, you can refer to this section to update the configuration information to the server.
#### Step 1: Enter the project directory
```bash
cd kag/examples/TwoWikiTest
```
#### Step 2: Edit project configuration
**Note**: The embedding vectors generated by different representational models can vary significantly. It is recommended not to update the ``vectorize_model`` configuration after the project is created. If you need to update the ``vectorize_model`` configuration, please create a new project.
```bash
vim ./kag_config.yaml
```
#### Step 3: Run the update command
After editing the project configuration, use the ``knext project update`` command to update the local configuration information to the OpenSPG server.
```bash
knext project update --proj_path .
```
## 3. Import documents
### Step 1: Enter the project directory
```bash
cd kag/examples/TwoWikiTest
```
### Step 2: Retrieve corpus data
The test corpus data for the 2wiki dataset is located at ``kag/examples/2wiki/builder/data/2wiki_corpus.json``, containing 6,119 documents and 1,000 question-answer pairs. To quickly complete the entire process, there is also a ``2wiki_sub_corpus.json`` file in the same directory, which contains only 3 documents. We will use this smaller dataset as an example for the experiment.
Copy it to the directory with the same name as the ``TwoWikiTest`` project:
```bash
cp ../2wiki/builder/data/2wiki_sub_corpus.json builder/data
```
### Step 3: Edit the schema (Optional)
Edit the schema file ``schema/TwoWikiTest.schema``. For an introduction to the OpenSPG schema format, please refer to [Declarative Schema](https://openspg.yuque.com/ndx6g9/0.6/fzhov4l2sst6bede).
### Step 4: Commit the schema to OpenSPG server
```bash
knext schema commit
```
### Step 5: Execute the build task
Define the build task in the file ``builder/indexer.py``:
```python
import os
import logging
from kag.common.registry import import_modules_from_path
from kag.builder.runner import BuilderChainRunner
logger = logging.getLogger(__name__)
def buildKB(file_path):
from kag.common.conf import KAG_CONFIG
runner = BuilderChainRunner.from_config(
KAG_CONFIG.all_config["kag_builder_pipeline"]
)
runner.invoke(file_path)
logger.info(f"\n\nbuildKB successfully for {file_path}\n\n")
if __name__ == "__main__":
import_modules_from_path(".")
dir_path = os.path.dirname(__file__)
    # Set file_path to the path of the corpus file prepared earlier
file_path = os.path.join(dir_path, "data/2wiki_sub_corpus.json")
buildKB(file_path)
```
Run the ``indexer.py`` script to complete the knowledge graph construction for unstructured data.
```bash
cd builder
python indexer.py
```
After the build script is started, a checkpoint directory for the task will be generated in the current working directory, recording the checkpoints and statistical information of the build process.
```text
ckpt
├── chain
├── extractor
├── kag_checkpoint_0_1.ckpt
├── postprocessor
├── reader
└── splitter
```
You can view the extraction task statistics, such as how many nodes/edges were extracted from each document, using the following command:
```bash
less ckpt/kag_checkpoint_0_1.ckpt
```
To see how many document entries were successfully written to the graph database, use the following command:
```bash
wc -l ckpt/kag_checkpoint_0_1.ckpt
```
The KAG framework provides checkpoint-based resumption. If the task is interrupted due to a program error or other external factors (e.g., insufficient LLM invocation credits), you can rerun ``indexer.py``; KAG will automatically load the checkpoint file and reuse the existing results.
### Step 6: Inspect the constructed knowledge graph
Currently, OpenSPG-KAG provides the [Knowledge Exploration](https://openspg.yuque.com/ndx6g9/0.6/fw4ge5c18tyfl2yq) capability in product mode, along with the corresponding API documentation [HTTP API Reference](https://openspg.yuque.com/ndx6g9/0.6/zde1yunbb8sncxtv).

## 4. Reasoning-based question answering
### Step 1: Enter the project directory
```bash
cd kag/examples/TwoWikiTest
```
### Step 2: Edit the QA script
```bash
vim ./solver/qa.py
```
Paste the following content into ``qa.py``.
```python
import json
import logging
import os
import time
from concurrent.futures import ThreadPoolExecutor, as_completed
from tqdm import tqdm
from kag.common.benchmarks.evaluate import Evaluate
from kag.solver.logic.solver_pipeline import SolverPipeline
from kag.common.conf import KAG_CONFIG
from kag.common.registry import import_modules_from_path
from kag.common.checkpointer import CheckpointerManager
logger = logging.getLogger(__name__)
class EvaFor2wiki:
"""
init for kag client
"""
def __init__(self):
pass
"""
qa from knowledge base,
"""
def qa(self, query):
resp = SolverPipeline.from_config(KAG_CONFIG.all_config["kag_solver_pipeline"])
answer, traceLog = resp.run(query)
logger.info(f"\n\nso the answer for '{query}' is: {answer}\n\n")
return answer, traceLog
if __name__ == "__main__":
import_modules_from_path("./prompt")
evalObj = EvaFor2wiki()
evalObj.qa("Which Stanford University professor works on Alzheimer's?")
```
### Step 3: Execute the QA task
```bash
cd solver
python qa.py
```
## 5. Other built-in examples
You can enter the [kag/examples](.) directory to explore the built-in examples provided in the source code of KAG.
* [musique](./musique/README_cn.md) (Multi-hop Q&A)
* [twowiki](./2wiki/README_cn.md) (Multi-hop Q&A)
* [hotpotqa](./hotpotqa/README_cn.md) (Multi-hop Q&A)
* [Risk Mining Knowledge Graph](./riskmining/README_cn.md)
* [Enterprise Supply Chain Knowledge Graph](./supplychain/README_cn.md)
* [Medical Knowledge Graph](./medicine/README_cn.md) | {
"source": "OpenSPG/KAG",
"title": "kag/examples/README_cn.md",
"url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/README_cn.md",
"date": "2024-09-21T13:56:44",
"stars": 5095,
"description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.",
"file_size": 9759
} |
# KAG Example: TwoWiki
[English](./README.md) |
[简体中文](./README_cn.md)
[2WikiMultiHopQA](https://arxiv.org/abs/2011.01060) is a multi-hop QA dataset for comprehensive evaluation of reasoning steps. It's used by [KAG](https://arxiv.org/abs/2409.13731) and [HippoRAG](https://arxiv.org/abs/2405.14831) for multi-hop question answering performance evaluation.
Here we demonstrate how to build a knowledge graph for the 2WikiMultiHopQA dataset, generate answers to those evaluation questions with KAG and calculate EM and F1 metrics of the KAG generated answers compared to the ground-truth answers.
## 1. Precondition
Please refer to [Quick Start](https://openspg.yuque.com/ndx6g9/cwh47i/rs7gr8g4s538b1n7) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.
## 2. Steps to reproduce
### Step 1: Enter the example directory
```bash
cd kag/examples/2wiki
```
### Step 2: Configure models
Update the generative model configurations ``openie_llm`` and ``chat_llm`` and the representational model configuration ``vectorize_model`` in [kag_config.yaml](./kag_config.yaml).
You need to fill in correct ``api_key``s. If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.
### Step 3: Project initialization
Initiate the project with the following command.
```bash
knext project restore --host_addr http://127.0.0.1:8887 --proj_path .
```
### Step 4: Commit the schema
Execute the following command to commit the schema [TwoWiki.schema](./schema/TwoWiki.schema).
```bash
knext schema commit
```
### Step 5: Build the knowledge graph
Execute [indexer.py](./builder/indexer.py) in the [builder](./builder) directory to build the knowledge graph.
```bash
cd builder && python indexer.py && cd ..
```
### Step 6: Execute the QA tasks
Execute [evaFor2wiki.py](./solver/evaFor2wiki.py) in the [solver](./solver) directory to generate the answers and calculate the EM and F1 metrics.
```bash
cd solver && python evaFor2wiki.py && cd ..
```
The generated answers are saved to ``./solver/2wiki_res_*.json``.
The calculated EM and F1 metrics are saved to ``./solver/2wiki_metrics_*.json``.
### Step 7: (Optional) Cleanup
To delete the checkpoints, execute the following command.
```bash
rm -rf ./builder/ckpt
rm -rf ./solver/ckpt
```
To delete the KAG project and related knowledge graph, execute the following similar command. Replace the OpenSPG server address and KAG project id with actual values.
```bash
curl http://127.0.0.1:8887/project/api/delete?projectId=1
```
### Step 8: (Optional) Try the larger datasets
Restart from Step 1 and modify [indexer.py](./builder/indexer.py) and [evaFor2wiki.py](./solver/evaFor2wiki.py) to try the larger datasets. | {
"source": "OpenSPG/KAG",
"title": "kag/examples/2wiki/README.md",
"url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/2wiki/README.md",
"date": "2024-09-21T13:56:44",
"stars": 5095,
"description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.",
"file_size": 2787
} |
# KAG Example: TwoWiki
[English](./README.md) |
[简体中文](./README_cn.md)
[2WikiMultiHopQA](https://arxiv.org/abs/2011.01060) is a multi-hop QA dataset for comprehensive evaluation of reasoning steps. It is used by [KAG](https://arxiv.org/abs/2409.13731) and [HippoRAG](https://arxiv.org/abs/2405.14831) to evaluate multi-hop question answering performance.
Here we demonstrate how to build a knowledge graph for the 2WikiMultiHopQA dataset, generate answers to the evaluation questions with KAG, and calculate EM and F1 metrics of the KAG-generated answers compared to the ground-truth answers.
## 1. Precondition
Please refer to [Quick Start](https://openspg.yuque.com/ndx6g9/0.6/quzq24g4esal7q17) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.
## 2. Steps to reproduce
### Step 1: Enter the example directory
```bash
cd kag/examples/2wiki
```
### Step 2: Configure models
Update the generative model configurations ``openie_llm`` and ``chat_llm`` and the representational model configuration ``vectorize_model`` in [kag_config.yaml](./kag_config.yaml).
You need to fill in correct ``api_key``s. If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.
### Step 3: Project initialization
First, initialize the project.
```bash
knext project restore --host_addr http://127.0.0.1:8887 --proj_path .
```
### Step 4: Commit the schema
Execute the following command to commit the schema [TwoWiki.schema](./schema/TwoWiki.schema).
```bash
knext schema commit
```
### Step 5: Build the knowledge graph
Execute [indexer.py](./builder/indexer.py) in the [builder](./builder) directory to build the knowledge graph.
```bash
cd builder && python indexer.py && cd ..
```
### Step 6: Execute the QA tasks
Execute [evaFor2wiki.py](./solver/evaFor2wiki.py) in the [solver](./solver) directory to generate the answers and calculate the EM and F1 metrics.
```bash
cd solver && python evaFor2wiki.py && cd ..
```
The generated answers are saved to ``./solver/2wiki_res_*.json``.
The calculated EM and F1 metrics are saved to ``./solver/2wiki_metrics_*.json``.
### Step 7: (Optional) Cleanup
To delete the checkpoints, execute the following command.
```bash
rm -rf ./builder/ckpt
rm -rf ./solver/ckpt
```
To delete the KAG project and related knowledge graph, execute the following similar command. Replace the OpenSPG server address and KAG project id with actual values.
```bash
curl http://127.0.0.1:8887/project/api/delete?projectId=1
```
### Step 8: (Optional) Try the larger datasets
Restart from Step 1 and modify [indexer.py](./builder/indexer.py) and [evaFor2wiki.py](./solver/evaFor2wiki.py) to try the larger datasets. | {
"source": "OpenSPG/KAG",
"title": "kag/examples/2wiki/README_cn.md",
"url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/2wiki/README_cn.md",
"date": "2024-09-21T13:56:44",
"stars": 5095,
"description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.",
"file_size": 1727
} |
# KAG Example: BaiKe
[English](./README.md) |
[简体中文](./README_cn.md)
## 1. Precondition
Please refer to [Quick Start](https://openspg.yuque.com/ndx6g9/cwh47i/rs7gr8g4s538b1n7) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.
## 2. Steps to reproduce
### Step 1: Enter the example directory
```bash
cd kag/examples/baike
```
### Step 2: Configure models
Update the generative model configurations ``openie_llm`` and ``chat_llm`` and the representational model configuration ``vectorize_model`` in [kag_config.yaml](./kag_config.yaml).
You need to fill in correct ``api_key``s. If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.
### Step 3: Project initialization
Initiate the project with the following command.
```bash
knext project restore --host_addr http://127.0.0.1:8887 --proj_path .
```
### Step 4: Commit the schema
Execute the following command to commit the schema [BaiKe.schema](./schema/BaiKe.schema).
```bash
knext schema commit
```
### Step 5: Build the knowledge graph
Execute [indexer.py](./builder/indexer.py) in the [builder](./builder) directory to build the knowledge graph.
```bash
cd builder && python indexer.py && cd ..
```
### Step 6: Execute the QA tasks
Execute [eval.py](./solver/eval.py) in the [solver](./solver) directory to ask demo questions and view the answers and trace logs.
```bash
cd solver && python eval.py && cd ..
```
### Step 7: (Optional) Cleanup
To delete the checkpoints, execute the following command.
```bash
rm -rf ./builder/ckpt
```
To delete the KAG project and related knowledge graph, execute the following similar command. Replace the OpenSPG server address and KAG project id with actual values.
```bash
curl http://127.0.0.1:8887/project/api/delete?projectId=1
``` | {
"source": "OpenSPG/KAG",
"title": "kag/examples/baike/README.md",
"url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/baike/README.md",
"date": "2024-09-21T13:56:44",
"stars": 5095,
"description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.",
"file_size": 1872
} |
# KAG Example: BaiKe (Encyclopedia Q&A)
[English](./README.md) |
[简体中文](./README_cn.md)
## 1. Precondition
Please refer to [Quick Start](https://openspg.yuque.com/ndx6g9/0.6/quzq24g4esal7q17) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.
## 2. Steps to reproduce
### Step 1: Enter the example directory
```bash
cd kag/examples/baike
```
### Step 2: Configure models
Update the generative model configurations ``openie_llm`` and ``chat_llm`` and the representational model configuration ``vectorize_model`` in [kag_config.yaml](./kag_config.yaml).
You need to fill in correct ``api_key``s. If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.
### Step 3: Project initialization
First, initialize the project.
```bash
knext project restore --host_addr http://127.0.0.1:8887 --proj_path .
```
### Step 4: Commit the schema
Execute the following command to commit the schema [BaiKe.schema](./schema/BaiKe.schema).
```bash
knext schema commit
```
### Step 5: Build the knowledge graph
Execute [indexer.py](./builder/indexer.py) in the [builder](./builder) directory to build the knowledge graph.
```bash
cd builder && python indexer.py && cd ..
```
### Step 6: Execute the QA tasks
Execute [eval.py](./solver/eval.py) in the [solver](./solver) directory to ask the demo questions and view the answers and trace logs.
```bash
cd solver && python eval.py && cd ..
```
### Step 7: (Optional) Cleanup
To delete the checkpoints, execute the following command.
```bash
rm -rf ./builder/ckpt
```
To delete the KAG project and related knowledge graph, execute the following similar command. Replace the OpenSPG server address and KAG project id with actual values.
```bash
curl http://127.0.0.1:8887/project/api/delete?projectId=1
``` | {
"source": "OpenSPG/KAG",
"title": "kag/examples/baike/README_cn.md",
"url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/baike/README_cn.md",
"date": "2024-09-21T13:56:44",
"stars": 5095,
"description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.",
"file_size": 1203
} |
# KAG Example: CSQA
[English](./README.md) |
[简体中文](./README_cn.md)
The [UltraDomain](https://huggingface.co/datasets/TommyChien/UltraDomain/tree/main) ``cs.jsonl`` dataset contains 10 documents in Computer Science and 100 questions with their answers about those documents.
Here we demonstrate how to build a knowledge graph for those documents, generate answers to those questions with KAG and compare KAG generated answers with those from other RAG systems.
## 1. Precondition
Please refer to [Quick Start](https://openspg.yuque.com/ndx6g9/cwh47i/rs7gr8g4s538b1n7) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.
## 2. Steps to reproduce
### Step 1: Enter the example directory
```bash
cd kag/examples/csqa
```
### Step 2: (Optional) Prepare the data
Download [UltraDomain](https://huggingface.co/datasets/TommyChien/UltraDomain/tree/main) ``cs.jsonl`` and execute [generate_data.py](./generate_data.py) to generate data files in [./builder/data](./builder/data) and [./solver/data](./solver/data). Since the generated files were committed, this step is optional.
```bash
python generate_data.py
```
### Step 3: Configure models
Update the generative model configurations ``openie_llm`` and ``chat_llm`` and the representational model configuration ``vectorize_model`` in [kag_config.yaml](./kag_config.yaml).
You need to fill in correct ``api_key``s. If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.
The ``splitter`` and ``num_threads_per_chain`` configurations may also be updated to match with other systems.
### Step 4: Project initialization
Initiate the project with the following command.
```bash
knext project restore --host_addr http://127.0.0.1:8887 --proj_path .
```
### Step 5: Commit the schema
Execute the following command to commit the schema [CsQa.schema](./schema/CsQa.schema).
```bash
knext schema commit
```
### Step 6: Build the knowledge graph
Execute [indexer.py](./builder/indexer.py) in the [builder](./builder) directory to build the knowledge graph.
```bash
cd builder && python indexer.py && cd ..
```
### Step 7: Generate the answers
Execute [eval.py](./solver/eval.py) in the [solver](./solver) directory to generate the answers.
```bash
cd solver && python eval.py && cd ..
```
The results are saved to ``./solver/data/csqa_kag_answers.json``.
### Step 8: (Optional) Get the answers generated by other systems
Follow the LightRAG [Reproduce](https://github.com/HKUDS/LightRAG?tab=readme-ov-file#reproduce) steps to generate answers to the questions and save the results to [./solver/data/csqa_lightrag_answers.json](./solver/data/csqa_lightrag_answers.json). Since a copy was committed, this step is optional.
### Step 9: Calculate the metrics
Update the LLM configurations in [summarization_metrics.py](./solver/summarization_metrics.py) and [factual_correctness.py](./solver/factual_correctness.py) and execute them to calculate the metrics.
```bash
python ./solver/summarization_metrics.py
python ./solver/factual_correctness.py
```
### Step 10: (Optional) Cleanup
To delete the checkpoints, execute the following command.
```bash
rm -rf ./builder/ckpt
rm -rf ./solver/ckpt
```
To delete the KAG project and related knowledge graph, execute the following similar command. Replace the OpenSPG server address and KAG project id with actual values.
```bash
curl http://127.0.0.1:8887/project/api/delete?projectId=1
``` | {
"source": "OpenSPG/KAG",
"title": "kag/examples/csqa/README.md",
"url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/csqa/README.md",
"date": "2024-09-21T13:56:44",
"stars": 5095,
"description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.",
"file_size": 3519
} |
# KAG Example: CSQA
[English](./README.md) |
[简体中文](./README_cn.md)
The [UltraDomain](https://huggingface.co/datasets/TommyChien/UltraDomain/tree/main) ``cs.jsonl`` dataset contains 10 documents in Computer Science and 100 questions with their answers about those documents.
Here we demonstrate how to build a knowledge graph for those documents, generate answers to those questions with KAG, and compare the KAG-generated answers with those from other RAG systems.
## 1. Precondition
Please refer to [Quick Start](https://openspg.yuque.com/ndx6g9/0.6/quzq24g4esal7q17) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.
## 2. Steps to reproduce
### Step 1: Enter the example directory
```bash
cd kag/examples/csqa
```
### Step 2: (Optional) Prepare the data
Download [UltraDomain](https://huggingface.co/datasets/TommyChien/UltraDomain/tree/main) ``cs.jsonl`` and execute [generate_data.py](./generate_data.py) to generate data files in [./builder/data](./builder/data) and [./solver/data](./solver/data). Since the generated files have been committed, this step is optional.
```bash
python generate_data.py
```
### Step 3: Configure models
Update the generative model configurations ``openie_llm`` and ``chat_llm`` and the representational model configuration ``vectorize_model`` in [kag_config.yaml](./kag_config.yaml).
You need to fill in correct ``api_key``s. If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.
The ``splitter`` and ``num_threads_per_chain`` configurations may also need to be updated to match the other systems.
### Step 4: Project initialization
First, initialize the project.
```bash
knext project restore --host_addr http://127.0.0.1:8887 --proj_path .
```
### Step 5: Commit the schema
Execute the following command to commit the schema [CsQa.schema](./schema/CsQa.schema).
```bash
knext schema commit
```
### Step 6: Build the knowledge graph
Execute [indexer.py](./builder/indexer.py) in the [builder](./builder) directory to build the knowledge graph.
```bash
cd builder && python indexer.py && cd ..
```
### Step 7: Generate the answers
Execute [eval.py](./solver/eval.py) in the [solver](./solver) directory to generate the answers.
```bash
cd solver && python eval.py && cd ..
```
The results are saved to ``./solver/data/csqa_kag_answers.json``.
### Step 8: (Optional) Get the answers generated by other systems
Follow the LightRAG [Reproduce](https://github.com/HKUDS/LightRAG?tab=readme-ov-file#reproduce) steps to generate answers to the questions and save the results to [./solver/data/csqa_lightrag_answers.json](./solver/data/csqa_lightrag_answers.json). Since a copy of the LightRAG-generated answers has been committed, this step is optional.
### Step 9: Calculate the metrics
Update the LLM configurations in [summarization_metrics.py](./solver/summarization_metrics.py) and [factual_correctness.py](./solver/factual_correctness.py) and execute them to calculate the metrics.
```bash
python ./solver/summarization_metrics.py
python ./solver/factual_correctness.py
```
### Step 10: (Optional) Cleanup
To delete the checkpoints, execute the following command.
```bash
rm -rf ./builder/ckpt
rm -rf ./solver/ckpt
```
To delete the KAG project and related knowledge graph, execute the following similar command. Replace the OpenSPG server address and KAG project id with actual values.
```bash
curl http://127.0.0.1:8887/project/api/delete?projectId=1
``` | {
"source": "OpenSPG/KAG",
"title": "kag/examples/csqa/README_cn.md",
"url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/csqa/README_cn.md",
"date": "2024-09-21T13:56:44",
"stars": 5095,
"description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.",
"file_size": 2315
} |
# KAG Example: DomainKG
[English](./README.md) |
[简体中文](./README_cn.md)
This example provides a case of knowledge injection in the medical domain, where the nodes of the domain knowledge graph are medical terms, and the relationships are defined as "isA." The document contains an introduction to a selection of medical terms.
## 1. Precondition
Please refer to [Quick Start](https://openspg.yuque.com/ndx6g9/cwh47i/rs7gr8g4s538b1n7) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.
## 2. Steps to reproduce
### Step 1: Enter the example directory
```bash
cd kag/examples/domain_kg
```
### Step 2: Configure models
Update the generative model configurations ``openie_llm`` and ``chat_llm`` and the representational model configuration ``vectorize_model`` in [kag_config.yaml](./kag_config.yaml).
You need to fill in correct ``api_key``s. If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.
### Step 3: Project initialization
Initiate the project with the following command.
```bash
knext project restore --host_addr http://127.0.0.1:8887 --proj_path .
```
### Step 4: Commit the schema
Execute the following command to commit the schema [TwoWiki.schema](./schema/TwoWiki.schema).
```bash
knext schema commit
```
### Step 5: Build the knowledge graph
We first need to inject the domain knowledge graph into the graph database. This allows the PostProcessor component to link the extracted nodes with the nodes of the domain knowledge graph, thereby standardizing them during the construction of the graph from unstructured documents.
Execute [injection.py](./builder/injection.py) in the [builder](./builder) directory to inject the domain KG.
```bash
cd builder && python injection.py && cd ..
```
Note that KAG provides a special implementation of the ``KAGBuilderChain`` for domain knowledge graph injection, known as the ``DomainKnowledgeInjectChain``, which is registered under the name ``domain_kg_inject_chain``. Since domain knowledge injection does not involve scanning files or directories, you can directly call the ``invoke`` interface of the chain to initiate the task.
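A minimal sketch of such a direct invocation is shown below. It assumes the injection chain is configured in ``kag_config.yaml`` under a key such as ``domain_kg_inject_chain`` and that its ``invoke`` interface accepts the path of the domain-KG data file; see [injection.py](./builder/injection.py) for the actual implementation.
```python
# Illustrative sketch of invoking the domain-KG injection chain directly.
# The config key, import path, and invoke argument are assumptions; see builder/injection.py for the real code.
from kag.common.conf import KAG_CONFIG
from kag.common.registry import import_modules_from_path
from kag.interface import KAGBuilderChain  # assumed import path for the builder chain interface

if __name__ == "__main__":
    import_modules_from_path(".")
    chain = KAGBuilderChain.from_config(KAG_CONFIG.all_config["domain_kg_inject_chain"])
    chain.invoke("data/domain_kg.json")  # hypothetical path to the domain knowledge file
```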
Next, execute [indexer.py](./builder/indexer.py) in the [builder](./builder) directory to build KG from unstructured document.
```bash
cd builder && python indexer.py && cd ..
```
### Step 6: Execute the QA tasks
Execute [qa.py](./solver/qa.py) in the [solver](./solver) directory to generate the answer to the question.
```bash
cd solver && python qa.py && cd ..
```
### Step 7: (Optional) Cleanup
To delete the checkpoints, execute the following command.
```bash
rm -rf ./builder/ckpt
rm -rf ./solver/ckpt
```
To delete the KAG project and related knowledge graph, execute the following similar command. Replace the OpenSPG server address and KAG project id with actual values.
```bash
curl http://127.0.0.1:8887/project/api/delete?projectId=1
``` | {
"source": "OpenSPG/KAG",
"title": "kag/examples/domain_kg/README.md",
"url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/domain_kg/README.md",
"date": "2024-09-21T13:56:44",
"stars": 5095,
"description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.",
"file_size": 2987
} |
# KAG Example: DomainKG
[English](./README.md) |
[简体中文](./README_cn.md)
This example provides a case of knowledge injection in the medical domain, where the nodes of the domain knowledge graph are medical terms and the relations are defined as "isA". The document contains an introduction to a selection of medical terms.
## 1. Precondition
Please refer to [Quick Start](https://openspg.yuque.com/ndx6g9/0.6/quzq24g4esal7q17) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.
## 2. Steps to reproduce
### Step 1: Enter the example directory
```bash
cd kag/examples/domain_kg
```
### Step 2: Configure models
Update the generative model configurations ``openie_llm`` and ``chat_llm`` and the representational model configuration ``vectorize_model`` in [kag_config.yaml](./kag_config.yaml).
You need to fill in correct ``api_key``s. If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.
### Step 3: Project initialization
First, initialize the project.
```bash
knext project restore --host_addr http://127.0.0.1:8887 --proj_path .
```
### Step 4: Commit the schema
Execute the following command to commit the schema [TwoWiki.schema](./schema/TwoWiki.schema).
```bash
knext schema commit
```
### Step 5: Build the knowledge graph
We first need to inject the domain knowledge graph into the graph database, so that the PostProcessor component can link (standardize) the extracted nodes against the nodes of the domain knowledge graph when building the graph from unstructured documents.
Execute [injection.py](./builder/injection.py) in the [builder](./builder) directory to inject the domain KG data.
```bash
cd builder && python injection.py && cd ..
```
Note that KAG provides a special ``KAGBuilderChain`` implementation for domain knowledge graph injection, namely ``DomainKnowledgeInjectChain``, registered under the name ``domain_kg_inject_chain``. Since domain knowledge injection does not involve scanning files or directories, you can directly call the ``invoke`` interface of the builder chain to start the task.
Next, execute [indexer.py](./builder/indexer.py) in the [builder](./builder) directory to build the knowledge graph from unstructured documents.
```bash
cd builder && python indexer.py && cd ..
```
### Step 6: Execute the QA task
Execute [qa.py](./solver/qa.py) in the [solver](./solver) directory to generate the answer to the question.
```bash
cd solver && python qa.py && cd ..
```
### Step 7: (Optional) Cleanup
To delete the checkpoints, execute the following command.
```bash
rm -rf ./builder/ckpt
rm -rf ./solver/ckpt
```
To delete the KAG project and related knowledge graph, execute the following similar command. Replace the OpenSPG server address and KAG project id with actual values.
```bash
curl http://127.0.0.1:8887/project/api/delete?projectId=1
``` | {
"source": "OpenSPG/KAG",
"title": "kag/examples/domain_kg/README_cn.md",
"url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/domain_kg/README_cn.md",
"date": "2024-09-21T13:56:44",
"stars": 5095,
"description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.",
"file_size": 1648
} |
# KAG Example: HotpotQA
[English](./README.md) |
[简体中文](./README_cn.md)
[HotpotQA](https://arxiv.org/abs/1809.09600) is a dataset for diverse, explainable multi-hop question answering. It's used by [KAG](https://arxiv.org/abs/2409.13731) and [HippoRAG](https://arxiv.org/abs/2405.14831) for multi-hop question answering performance evaluation.
Here we demonstrate how to build a knowledge graph for the HotpotQA dataset, generate answers to those evaluation questions with KAG and calculate EM and F1 metrics of the KAG generated answers compared to the ground-truth answers.
## 1. Precondition
Please refer to [Quick Start](https://openspg.yuque.com/ndx6g9/cwh47i/rs7gr8g4s538b1n7) to install KAG and its dependency OpenSPG server, and learn about using KAG in developer mode.
## 2. Steps to reproduce
### Step 1: Enter the example directory
```bash
cd kag/examples/hotpotqa
```
### Step 2: Configure models
Update the generative model configurations ``openie_llm`` and ``chat_llm`` and the representational model configuration ``vectorize_model`` in [kag_config.yaml](./kag_config.yaml).
You need to fill in correct ``api_key``s. If your model providers and model names are different from the default values, you also need to update ``base_url`` and ``model``.
### Step 3: Project initialization
Initiate the project with the following command.
```bash
knext project restore --host_addr http://127.0.0.1:8887 --proj_path .
```
### Step 4: Commit the schema
Execute the following command to commit the schema [HotpotQA.schema](./schema/HotpotQA.schema).
```bash
knext schema commit
```
### Step 5: Build the knowledge graph
Execute [indexer.py](./builder/indexer.py) in the [builder](./builder) directory to build the knowledge graph.
```bash
cd builder && python indexer.py && cd ..
```
### Step 6: Execute the QA tasks
Execute [evaForHotpotqa.py](./solver/evaForHotpotqa.py) in the [solver](./solver) directory to generate the answers and calculate the EM and F1 metrics.
```bash
cd solver && python evaForHotpotqa.py && cd ..
```
The generated answers are saved to ``./solver/hotpotqa_res_*.json``.
The calculated EM and F1 metrics are saved to ``./solver/hotpotqa_metrics_*.json``.
### Step 7: (Optional) Cleanup
To delete the checkpoints, execute the following command.
```bash
rm -rf ./builder/ckpt
rm -rf ./solver/ckpt
```
To delete the KAG project and related knowledge graph, execute the following similar command. Replace the OpenSPG server address and KAG project id with actual values.
```bash
curl http://127.0.0.1:8887/project/api/delete?projectId=1
```
### Step 8: (Optional) Try the larger datasets
Restart from Step 1 and modify [indexer.py](./builder/indexer.py) and [evaForHotpotqa.py](./solver/evaForHotpotqa.py) to try the larger datasets. | {
"source": "OpenSPG/KAG",
"title": "kag/examples/hotpotqa/README.md",
"url": "https://github.com/OpenSPG/KAG/blob/master/kag/examples/hotpotqa/README.md",
"date": "2024-09-21T13:56:44",
"stars": 5095,
"description": "KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.",
"file_size": 2793
} |