---
language:
- tr
license:
- cc-by-3.0
- cc-by-4.0
- cc-by-sa-3.0
- mit
- other
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
task_categories:
- feature-extraction
- sentence-similarity
pretty_name: AllNLITR
dataset_info:
- config_name: pair
  features:
  - name: anchor
    dtype: string
  - name: positive
    dtype: string
  splits:
  - name: train
    num_examples: 313601
  - name: dev
    num_examples: 6802
  - name: test
    num_examples: 6827
- config_name: pair-class
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_examples: 941086
  - name: dev
    num_examples: 19649
  - name: test
    num_examples: 19652
- config_name: pair-score
  features:
  - name: sentence1
    dtype: string
  - name: sentence2
    dtype: string
  - name: score
    dtype: float64
  splits:
  - name: train
    num_examples: 941086
  - name: dev
    num_examples: 19649
  - name: test
    num_examples: 19652
- config_name: triplet
  features:
  - name: anchor
    dtype: string
  - name: positive
    dtype: string
  - name: negative
    dtype: string
  splits:
  - name: train
    num_examples: 482091
  - name: dev
    num_examples: 6567
  - name: test
    num_examples: 6587
configs:
- config_name: pair
  data_files:
  - split: train
    path: pair/train*
  - split: dev
    path: pair/dev*
  - split: test
    path: pair/test*
- config_name: pair-class
  data_files:
  - split: train
    path: pair-class/train*
  - split: dev
    path: pair-class/dev*
  - split: test
    path: pair-class/test*
- config_name: pair-score
  data_files:
  - split: train
    path: pair-score/train*
  - split: dev
    path: pair-score/dev*
  - split: test
    path: pair-score/test*
- config_name: triplet
  data_files:
  - split: train
    path: triplet/train*
  - split: dev
    path: triplet/dev*
  - split: test
    path: triplet/test*
---

# Dataset Card for AllNLITR

This dataset is a formatted version of the [NLI-TR](https://huggingface.co/datasets/boun-tabi/nli_tr) datasets and shares the same licenses. The format follows [AllNLI](https://huggingface.co/datasets/sentence-transformers/all-nli) by [Sentence Transformers](https://sbert.net/) for ease of training.

Although originally built for Natural Language Inference (NLI), this dataset can also be used to train or finetune an embedding model for semantic textual similarity, as sketched below.

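A minimal training sketch, assuming Sentence Transformers v3 or later. The repository id (`username/all-nli-tr`) and the base model are placeholders chosen for illustration, not values taken from this card; any Turkish or multilingual base model can be substituted.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Placeholder repository id; replace it with the actual Hub path of this dataset.
train_dataset = load_dataset("username/all-nli-tr", "triplet", split="train")

# Example base model; any Turkish or multilingual encoder works here.
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# The (anchor, positive, negative) columns of the triplet subset (described below)
# are consumed directly by this loss.
loss = MultipleNegativesRankingLoss(model)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```
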
## Dataset Subsets

### `pair-class` subset

* Columns: "premise", "hypothesis", "label"
* Column types: `str`, `str`, `class` with `{"0": "entailment", "1": "neutral", "2": "contradiction"}` (see the sketch after this list)
* Examples:

```python
{
    'premise': 'A person on a horse jumps over a broken down airplane.',
    'hypothesis': 'A person is training his horse for a competition.',
    'label': 1,
}
```

* Collection strategy: Reading the premise, hypothesis and integer label from the SNLI-TR & MultiNLI-TR subsets of NLI-TR.
* Deduplicated: Yes

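Since "label" is stored as a `ClassLabel`, the integer ids can be mapped back to the names above. A minimal sketch, reusing the placeholder repository id from the training snippet:

```python
from datasets import load_dataset

# Placeholder repository id, as in the training sketch above.
ds = load_dataset("username/all-nli-tr", "pair-class", split="dev")

# The ClassLabel feature converts integer ids back to label names.
print(ds.features["label"].int2str(ds[0]["label"]))
```
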
### `pair-score` subset

* Columns: "sentence1", "sentence2", "score"
* Column types: `str`, `str`, `float`
* Examples:

```python
{
    'sentence1': 'A person on a horse jumps over a broken down airplane.',
    'sentence2': 'A person is training his horse for a competition.',
    'score': 0.5,
}
```

* Collection strategy: Taking the `pair-class` subset and remapping "entailment", "neutral" and "contradiction" to 1.0, 0.5 and 0.0, respectively (sketched after this list).
* Deduplicated: Yes

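Ignoring deduplication, the remapping described above can be reproduced with `datasets.Dataset.map`. A sketch, again with the placeholder repository id:

```python
from datasets import load_dataset

# Placeholder repository id, as above.
pair_class = load_dataset("username/all-nli-tr", "pair-class", split="train")

# entailment -> 1.0, neutral -> 0.5, contradiction -> 0.0
label_to_score = {0: 1.0, 1: 0.5, 2: 0.0}

pair_score = pair_class.map(
    lambda row: {
        "sentence1": row["premise"],
        "sentence2": row["hypothesis"],
        "score": label_to_score[row["label"]],
    },
    remove_columns=pair_class.column_names,
)
```
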
### `pair` subset

* Columns: "anchor", "positive"
* Column types: `str`, `str`
* Examples:

```python
{
    'anchor': 'A person on a horse jumps over a broken down airplane.',
    'positive': 'A person is training his horse for a competition.',
}
```

* Collection strategy: Reading the SNLI-TR & MultiNLI-TR subsets of NLI-TR and taking the "premise" as the "anchor" and the "hypothesis" as the "positive" if the label is "entailment" (approximated in the sketch after this list). The reverse (the "hypothesis" as "anchor" and the "premise" as "positive") is not included.
* Deduplicated: Yes

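Ignoring deduplication, this selection can be approximated from the `pair-class` subset by keeping only entailment pairs. A sketch with the placeholder repository id:

```python
from datasets import load_dataset

# Placeholder repository id, as above; label 0 is "entailment".
pair_class = load_dataset("username/all-nli-tr", "pair-class", split="train")

pair = (
    pair_class.filter(lambda row: row["label"] == 0)
    .rename_columns({"premise": "anchor", "hypothesis": "positive"})
    .remove_columns("label")
)
```
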
### `triplet` subset

* Columns: "anchor", "positive", "negative"
* Column types: `str`, `str`, `str`
* Examples:

```python
{
    'anchor': 'A person on a horse jumps over a broken down airplane.',
    'positive': 'A person is outdoors, on a horse.',
    'negative': 'A person is at a diner, ordering an omelette.',
}
```

* Collection strategy: Reading the SNLI-TR & MultiNLI-TR subsets of NLI-TR and, for each "premise", building a list of entailing and a list of contradicting hypotheses from the dataset labels, then forming all possible triplets from these two lists (sketched after this list). The reverse (a "hypothesis" as "anchor" with its "premise" as "positive") is not included.
* Deduplicated: Yes

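The triplet construction described above can be sketched in plain Python. The toy rows below are purely illustrative; labels follow the `pair-class` mapping (0 = entailment, 2 = contradiction):

```python
from collections import defaultdict
from itertools import product

# Toy (premise, hypothesis, label) rows; 0 = entailment, 2 = contradiction.
rows = [
    ("premise A", "entailed hypothesis 1", 0),
    ("premise A", "entailed hypothesis 2", 0),
    ("premise A", "contradicting hypothesis 1", 2),
]

entailing = defaultdict(list)
contradicting = defaultdict(list)
for premise, hypothesis, label in rows:
    if label == 0:
        entailing[premise].append(hypothesis)
    elif label == 2:
        contradicting[premise].append(hypothesis)

# Every (entailing, contradicting) combination for a premise becomes one triplet.
triplets = [
    {"anchor": premise, "positive": pos, "negative": neg}
    for premise in entailing
    for pos, neg in product(entailing[premise], contradicting[premise])
]
```
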
### Citation Information

```
@inproceedings{budur-etal-2020-data,
    title = "Data and Representation for Turkish Natural Language Inference",
    author = "Budur, Emrah and
      \"{O}zçelik, Rıza and
      G\"{u}ng\"{o}r, Tunga",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    abstract = "Large annotated datasets in NLP are overwhelmingly in English. This is an obstacle to progress in other languages. Unfortunately, obtaining new annotated resources for each task in each language would be prohibitively expensive. At the same time, commercial machine translation systems are now robust. Can we leverage these systems to translate English-language datasets automatically? In this paper, we offer a positive response for natural language inference (NLI) in Turkish. We translated two large English NLI datasets into Turkish and had a team of experts validate their translation quality and fidelity to the original labels. Using these datasets, we address core issues of representation for Turkish NLI. We find that in-language embeddings are essential and that morphological parsing can be avoided where the training set is large. Finally, we show that models trained on our machine-translated datasets are successful on human-translated evaluation sets. We share all code, models, and data publicly.",
}
```