---
task_categories:
- text-classification
language:
- en
- th
- es
pretty_name: Multilingual Task-Oriented Dialog
configs:
- config_name: default
  data_files:
  - split: train
    path:
    - en/train-en.parquet
    - es/train-es.parquet
    - th/train-th_TH.parquet
  - split: test
    path:
    - en/test-en.parquet
    - es/test-es.parquet
    - th/test-th_TH.parquet
  - split: eval
    path:
    - en/eval-en.parquet
    - es/eval-es.parquet
    - th/eval-th_TH.parquet
- config_name: en
  data_files:
  - split: train
    path: en/train-en.parquet
  - split: test
    path: en/test-en.parquet
  - split: eval
    path: en/eval-en.parquet
- config_name: es
  data_files:
  - split: train
    path: es/train-es.parquet
  - split: test
    path: es/test-es.parquet
  - split: eval
    path: es/eval-es.parquet
- config_name: th
  data_files:
  - split: train
    path: th/train-th_TH.parquet
  - split: test
    path: th/test-th_TH.parquet
  - split: eval
    path: th/eval-th_TH.parquet
license: cc-by-sa-4.0
---
# Multilingual Task-Oriented Dialog Data

## Directory structure

This dataset consists of 3 directories:

- `en` contains the English data
- `es` contains the Spanish data
- `th` contains the Thai data
In each directory, you'll find a file for each of the train/dev/test splits as used in our paper.
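
With the configs defined in the metadata above, each language can be loaded on its own, or all three languages together via the `default` config. Here is a minimal sketch using the `datasets` library; the repository id below is a placeholder, so substitute the actual Hub path of this dataset:

```python
from datasets import load_dataset

# "username/multilingual-top" is a placeholder repository id --
# replace it with the actual Hub path of this dataset.
REPO_ID = "username/multilingual-top"

# Load a single language config; split names follow the configs above
# (train / test / eval).
en_train = load_dataset(REPO_ID, "en", split="train")
th_eval = load_dataset(REPO_ID, "th", split="eval")

# The "default" config combines the English, Spanish, and Thai files
# into shared train / test / eval splits.
all_train = load_dataset(REPO_ID, "default", split="train")

print(en_train)
```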
## File format

### PyText parquet format

Each parquet file contains the following 5 columns: the intent label, the slot annotations in a comma-separated list with the format `<start token>:<end token>:<slot type>`, the untokenized utterance, the language, and the token spans from an in-house multilingual tokenizer.
The "upsampled" files contain the upsampled Spanish/Thai data so that there are roughly equal amounts of English and Spanish/Thai data for training and model selection.
## License

Provided under the CC-BY-SA 4.0 license.
## Citation
If you use this dataset in your research, please cite the following paper:
```bibtex
@unpublished{Schuster2018,
  author = {Sebastian Schuster and Sonal Gupta and Rushin Shah and Mike Lewis},
  title  = {Cross-lingual Transfer Learning for Multilingual Task Oriented Dialog},
  year   = {2018},
  note   = {arXiv preprint},
  url    = {http://arxiv.org/abs/}
}
```
## Questions
Please contact Sonal Gupta ([email protected]) with questions about this dataset.