---
annotations_creators:
- expert-generated
language:
- ar
- bn
- de
- en
- es
- fa
- fi
- fr
- hi
- id
- ja
- ko
- ru
- sw
- te
- th
- yo
- zh
multilinguality:
- multilingual
pretty_name: NoMIRACL
size_categories:
- 10K<n<100K
source_datasets:
- miracl/miracl
task_categories:
- text-classification
license:
- apache-2.0
---
# Dataset Card for NoMIRACL (EMNLP 2024 Findings Track)
<img src="nomiracl.png" alt="NoMIRACL Hallucination Examination (Generated using miramuse.ai and Adobe Photoshop)" width="500" height="400">
## Quick Overview
This repository contains the topics, qrels, and top-k (up to 10) annotated passages of NoMIRACL. The passage collection is available on Hugging Face: [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
```python
import datasets
language = 'german' # or any of the 18 languages (mentioned above in `languages`)
subset = 'relevant' # or 'non_relevant' (two subsets: relevant & non-relevant)
split = 'test' # or 'dev' for the development split
# four combinations available: 'dev.relevant', 'dev.non_relevant', 'test.relevant' and 'test.non_relevant'
nomiracl = datasets.load_dataset('miracl/nomiracl', language, split=f'{split}.{subset}', trust_remote_code=True)
```
## What is NoMIRACL?
Retrieval-Augmented Generation (RAG) is a powerful approach for incorporating external knowledge into large language models (LLMs) to improve the accuracy and faithfulness of generated responses. However, evaluating query-passage relevance across diverse language families has been a challenge, leaving gaps in our understanding of how robust LLMs are to errors in externally retrieved knowledge. To address this, we present NoMIRACL, a completely human-annotated dataset for evaluating multilingual LLM relevance assessment across 18 typologically diverse languages.
NoMIRACL evaluates LLM relevance as a binary classification objective, containing two subsets: `non-relevant` and `relevant`. The `non-relevant` subset contains queries with all passages manually judged by an expert assessor as non-relevant, while the `relevant` subset contains queries with at least one judged relevant passage within the labeled passages. LLM relevance is measured using two key metrics:
- *hallucination rate* (on the `non-relevant` subset) measures the model's tendency to hallucinate an answer when none of the provided passages is relevant to the question (non-answerable).
- *error rate* (on the `relevant` subset) measures the model's tendency to fail to identify relevant passages when they are provided for the question (answerable).
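Concretely, both metrics are simple ratios over their subsets. The sketch below assumes each model response has already been parsed into one of two labels; the label strings are illustrative, and the exact prompt template and response parsing used in the paper live in the GitHub repository linked below.
```python
def hallucination_rate(non_relevant_predictions: list[str]) -> float:
    """Share of `non_relevant`-subset queries where the model claims an
    answer is present instead of admitting "I don't know"."""
    hallucinated = sum(p != "I don't know" for p in non_relevant_predictions)
    return hallucinated / len(non_relevant_predictions)


def error_rate(relevant_predictions: list[str]) -> float:
    """Share of `relevant`-subset queries where the model answers
    "I don't know" despite a relevant passage being present."""
    errors = sum(p == "I don't know" for p in relevant_predictions)
    return errors / len(relevant_predictions)


# e.g., hallucination_rate(["I don't know", "Yes, answer is present"]) == 0.5
```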
## Acknowledgement
This dataset would not have been possible without the topics generated by native speakers of each language as part of **MIRACL** [[TACL '23]](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00595/117438/MIRACL-A-Multilingual-Retrieval-Dataset-Covering), part 1 of our **multilingual RAG universe** work. Queries for which all labeled passages were judged non-relevant form the `non-relevant` subset, whereas queries with at least one judged-relevant passage (i.e., the MIRACL dev and test splits) form the `relevant` subset.
## Dataset Description
* **Website:** https://nomiracl.github.io
* **Paper:** https://aclanthology.org/2024.findings-emnlp.730/
* **Repository:** https://github.com/project-miracl/nomiracl
## Dataset Structure
1. To download the files directly:
Under folders `data/{lang}`,
the corpus subset is saved in `.jsonl.gz` format, where each line is a JSON object:
```
{"docid": "28742#27",
"title": "Supercontinent",
"text": "Oxygen levels of the Archaean Eon were negligible and today they are roughly 21 percent. [ ... ]"}
```
Under folders `data/{lang}/topics`,
the topics are saved in `.tsv` format, where each line is:
```
qid\tquery
```
Under folders `data/{lang}/qrels`,
the qrels are saved in the standard TREC format, where each line is:
```
qid Q0 docid relevance
```
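If you prefer the raw files over the `datasets` loader, all three formats can be read with the Python standard library alone. A minimal sketch; the file names below are hypothetical placeholders, so substitute the actual paths under `data/{lang}`:
```python
import csv
import gzip
import json

# Corpus: gzip-compressed JSONL, one object per line with "docid", "title", "text".
with gzip.open('data/en/corpus.jsonl.gz', 'rt', encoding='utf-8') as f:
    corpus = {doc['docid']: doc for doc in map(json.loads, f)}

# Topics: tab-separated "qid<TAB>query" lines.
with open('data/en/topics/test.relevant.tsv', encoding='utf-8', newline='') as f:
    topics = dict(csv.reader(f, delimiter='\t'))

# Qrels: whitespace-separated TREC lines, "qid Q0 docid relevance".
qrels: dict[str, dict[str, int]] = {}
with open('data/en/qrels/test.relevant.tsv', encoding='utf-8') as f:
    for line in f:
        qid, _, docid, relevance = line.split()
        qrels.setdefault(qid, {})[docid] = int(relevance)
```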
2. To access the data using HuggingFace `datasets`:
```python
import datasets
language = 'german' # or any of the 18 languages
subset = 'relevant' # or 'non_relevant'
split = 'test' # or 'dev' for development split
# four combinations: 'dev.relevant', 'dev.non_relevant', 'test.relevant' and 'test.non_relevant'
nomiracl = datasets.load_dataset('miracl/nomiracl', language, split=f'{split}.{subset}', trust_remote_code=True)
# Individual entry in `relevant` or `non_relevant` subset
for data in nomiracl:
query_id = data['query_id']
query = data['query']
positive_passages = data['positive_passages']
negative_passages = data['negative_passages']
for entry in positive_passages: # OR 'negative_passages'
docid = entry['docid']
title = entry['title']
text = entry['text']
```
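To run the binary relevance assessment described above, each entry's query and its labeled passages can be folded into a single LLM prompt. Below is a minimal sketch, assuming the `non_relevant` subset keeps all of its judged passages under `negative_passages` (with `positive_passages` empty); the prompt wording is illustrative, not the exact template from the paper:
```python
def build_prompt(data: dict, max_passages: int = 10) -> str:
    # Relevant subset: judged-relevant passages in 'positive_passages';
    # non-relevant subset (assumed): all passages in 'negative_passages'.
    passages = (data['positive_passages'] + data['negative_passages'])[:max_passages]
    context = "\n\n".join(
        f"[{i}] {p['title']}: {p['text']}" for i, p in enumerate(passages, start=1)
    )
    return (
        f"{context}\n\n"
        f"Question: {data['query']}\n"
        'Answer "Yes, answer is present" if any passage above answers the question; '
        'otherwise answer "I don\'t know".'
    )

# `query_llm` is a hypothetical stand-in for your LLM call:
# model_responses = [query_llm(build_prompt(data)) for data in nomiracl]
```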
## Dataset Statistics
For NoMIRACL dataset statistics, please refer to our EMNLP 2024 Findings paper: [https://aclanthology.org/2024.findings-emnlp.730/](https://aclanthology.org/2024.findings-emnlp.730/).
## Citation Information
This work was conducted as a collaboration between the University of Waterloo and Huawei Technologies.
```
@inproceedings{thakur-etal-2024-knowing,
title = "{``}Knowing When You Don{'}t Know{''}: A Multilingual Relevance Assessment Dataset for Robust Retrieval-Augmented Generation",
author = "Thakur, Nandan and
Bonifacio, Luiz and
Zhang, Crystina and
Ogundepo, Odunayo and
Kamalloo, Ehsan and
Alfonso-Hermelo, David and
Li, Xiaoguang and
Liu, Qun and
Chen, Boxing and
Rezagholizadeh, Mehdi and
Lin, Jimmy",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-emnlp.730",
pages = "12508--12526",
abstract = "Retrieval-Augmented Generation (RAG) grounds Large Language Model (LLM) output by leveraging external knowledge sources to reduce factual hallucinations. However, prior work lacks a comprehensive evaluation of different language families, making it challenging to evaluate LLM robustness against errors in external retrieved knowledge. To overcome this, we establish **NoMIRACL**, a human-annotated dataset for evaluating LLM robustness in RAG across 18 typologically diverse languages. NoMIRACL includes both a non-relevant and a relevant subset. Queries in the non-relevant subset contain passages judged as non-relevant, whereas queries in the relevant subset include at least a single judged relevant passage. We measure relevance assessment using: (i) *hallucination rate*, measuring model tendency to hallucinate when the answer is not present in passages in the non-relevant subset, and (ii) *error rate*, measuring model inaccuracy to recognize relevant passages in the relevant subset. In our work, we observe that most models struggle to balance the two capacities. Models such as LLAMA-2 and Orca-2 achieve over 88{\%} hallucination rate on the non-relevant subset. Mistral and LLAMA-3 hallucinate less but can achieve up to a 74.9{\%} error rate on the relevant subset. Overall, GPT-4 is observed to provide the best tradeoff on both subsets, highlighting future work necessary to improve LLM robustness. NoMIRACL dataset and evaluation code are available at: https://github.com/project-miracl/nomiracl.",
}
```