---
license: mit
language:
- en
- de
- es
- nl
- fr
---
# Multilingual e-SNLI (MLe-SNLI)
In this repo, we provide the training, validation, and testing sets for **M**ulti**l**ingual **e-SNLI** (MLe-SNLI). For more details, see our report [here](https://github.com/rish-16/cs4248-project/blob/main/CS4248_Group19_Final_Report.pdf).
## Dataset details
MLe-SNLI contains 500K training (`train`) samples of premise-hypothesis pairs, each with an associated label and natural language explanation. We take 100K training samples from the original e-SNLI dataset (Camburu et al., 2018) and translate them into four other languages (Spanish, German, Dutch, and French), keeping them alongside the English originals. We do the same for all 9824 testing (`test`) and validation (`dev`) samples, giving 49120 samples in each of the `test` and `dev` splits.
| Column | Description |
|-----------------|---------------------------------------------------------------------------------|
| `premise` | Natural language premise sentence |
| `hypothesis` | Natural language hypothesis sentence |
| `label`          | One of `entailment`, `contradiction`, or `neutral`                               |
| `explanation_1` | Natural language justification for `label` |
| `language`       | One of English (`en`), Spanish (`es`), German (`de`), Dutch (`nl`), or French (`fr`) |
> **WARNING:** The translation quality of some MLe-SNLI samples may be compromised by quality issues in the original e-SNLI dataset that were not addressed in our [work](https://github.com/rish-16/cs4248-project). Use it at your own discretion.
## Download Instructions
To access MLe-SNLI, load it with the Hugging Face `datasets` library:
```python
from datasets import load_dataset
mle_snli = load_dataset("rish16/MLe-SNLI") # loads a DatasetDict object
train_data = mle_snli['train']       # 500000 samples (100K per language)
dev_data = mle_snli['validation']    # 49120 samples (9824 per language); the dev split is keyed as 'validation'
test_data = mle_snli['test']         # 49120 samples (9824 per language)

print(mle_snli)
"""
DatasetDict({
train: Dataset({
features: ['premise', 'hypothesis', 'label', 'explanation_1', 'language'],
num_rows: 500000
})
test: Dataset({
features: ['premise', 'hypothesis', 'label', 'explanation_1', 'language'],
num_rows: 49120
})
validation: Dataset({
features: ['premise', 'hypothesis', 'label', 'explanation_1', 'language'],
        num_rows: 49120
    })
})
"""
```
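Each split stores all five languages in one table, distinguished by the `language` column. Below is a minimal sketch (not part of the original card) of checking the per-language breakdown and slicing out a single language with the `datasets` `filter` method; it continues from the `train_data` object loaded above, and `"de"` is just an example code (any of the five works the same way):

```python
from collections import Counter

# Sanity-check the per-language breakdown of the training split
# (expect roughly 100K rows for each of 'en', 'es', 'de', 'nl', 'fr').
print(Counter(train_data['language']))

# Keep only the German-language rows.
de_train = train_data.filter(lambda example: example['language'] == 'de')
print(len(de_train))                # number of German samples
print(de_train[0]['premise'])       # first German premise
print(de_train[0]['explanation_1']) # ...and its explanation
```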