StefanH committed
Commit 9f52e0a · 2 parents: 24925b9 f007358

Update: loading script TODO test

Files changed (2):
1. README.md +33 -0
2. utcd.py +22 -4
README.md CHANGED
@@ -1,3 +1,36 @@
 ---
 license: mit
 ---
+## Universal Text Classification Dataset (UTCD)
+UTCD is a curated compilation of 18 datasets revised for zero-shot text classification, spanning 3 aspect categories: Sentiment, Intent/Dialogue, and Topic classification. UTCD focuses on the task of zero-shot text classification where the candidate labels are descriptive of the text being classified. UTCD consists of ~6M/800K train/test examples.
+
+UTCD was introduced in the Findings of ACL'23 paper **Label Agnostic Pre-training for Zero-shot Text Classification** by ***Christopher Clarke, Yuzhao Heng, Yiping Kang, Krisztian Flautner, Lingjia Tang and Jason Mars***.
+
+UTCD Datasets & Principles:
+
+- Sentiment
+    - GoEmotions introduced in [GoEmotions: A Dataset of Fine-Grained Emotions](https://arxiv.org/pdf/2005.00547v2.pdf)
+    - TweetEval introduced in [TWEETEVAL: Unified Benchmark and Comparative Evaluation for Tweet Classification](https://arxiv.org/pdf/2010.12421v2.pdf) (Sentiment subset)
+    - Emotion introduced in [CARER: Contextualized Affect Representations for Emotion Recognition](https://aclanthology.org/D18-1404.pdf)
+    - Amazon Polarity introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
+    - Finance Phrasebank introduced in [Good debt or bad debt: Detecting semantic orientations in economic texts](https://arxiv.org/pdf/1307.5336.pdf)
+    - Yelp introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
+- Intent/Dialogue
+    - Schema-Guided Dialogue introduced in [Towards Scalable Multi-Domain Conversational Agents: The Schema-Guided Dialogue Dataset](https://arxiv.org/pdf/1909.05855v2.pdf)
+    - Clinc-150 introduced in [An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction](https://arxiv.org/pdf/1909.02027v1.pdf)
+    - SLURP SLU introduced in [SLURP: A Spoken Language Understanding Resource Package](https://arxiv.org/pdf/2011.13205.pdf)
+    - Banking77 introduced in [Efficient Intent Detection with Dual Sentence Encoders](https://arxiv.org/pdf/2003.04807.pdf)
+    - Snips introduced in [Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces](https://arxiv.org/pdf/1805.10190.pdf)
+    - NLU Evaluation introduced in [Benchmarking Natural Language Understanding Services for building Conversational Agents](https://arxiv.org/pdf/1903.05566.pdf)
+- Topic
+    - AG News introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
+    - DBpedia 14 introduced in [DBpedia: A Nucleus for a Web of Open Data](https://link.springer.com/chapter/10.1007/978-3-540-76298-0_52)
+    - Yahoo Answer Topics introduced in [Character-level Convolutional Networks for Text Classification](https://arxiv.org/pdf/1509.01626.pdf)
+    - MultiEurlex introduced in [MultiEURLEX -- A multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer](https://aclanthology.org/2021.emnlp-main.559v2.pdf)
+    - BigPatent introduced in [BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization](https://aclanthology.org/P19-1212.pdf)
+    - Consumer Finance introduced in [Consumer Complaint Database](https://www.consumerfinance.gov/data-research/consumer-complaints/)
+
+In order to make NLP models more broadly useful, zero-shot techniques need to be capable of label, domain & aspect transfer. As such, in the construction of UTCD we enforce the following principles:
+
+- **Textual labels**: In UTCD, we mandate the use of textual labels. While numerical label values are often used in classification tasks, descriptive textual labels such as those present in the datasets across UTCD enable the development of techniques that can leverage the class name, which is instrumental in providing zero-shot support. As such, for each of the compiled datasets, labels are standardized so that they describe the text in natural language.
+- **Diverse domains and sequence lengths**: In addition to broad coverage of aspects, UTCD compiles diverse data across several domains such as Banking, Finance, Legal, etc., each comprising sequences of varied length (long and short). The datasets are listed above.
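For reference, a minimal sketch of consuming the dataset once the loading script below is finished. The Hub repository id is assumed from this page's owner and may differ:

```python
from datasets import load_dataset

# Repository id is an assumption; substitute the actual Hub path of this dataset.
utcd = load_dataset('StefanH/utcd', split='test')

ex = utcd[0]
print(ex['text'])    # the sequence to classify
print(ex['labels'])  # gold labels, stored as class ids
# Decode ids back to the descriptive textual labels UTCD mandates:
print(utcd.features['labels'].feature.int2str(ex['labels']))
```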
utcd.py CHANGED
@@ -53,6 +53,7 @@ class UtcdConfig(datasets.BuilderConfig):
 
 config = StefConfig('config.json')
 # mic(config('go_emotion'))
+_split2hf_split = dict(train=datasets.Split.TRAIN, eval=datasets.Split.VALIDATION, test=datasets.Split.TEST)
 
 
 class Utcd(datasets.GeneratorBasedBuilder):
@@ -116,10 +117,14 @@ class Utcd(datasets.GeneratorBasedBuilder):
         dnms = self._get_dataset_names()
         labels = [config(f'{dnm}.splits.{split}.labels') for dnm in dnms for split in ['train', 'test']]
         mic(dnms, labels)
+
+        labels = sorted(set().union(*labels))  # drop duplicate labels across datasets
         return datasets.DatasetInfo(
             description=_DESCRIPTION,
             features=datasets.Features(
-                text=datasets.Value(dtype='string'), labels=labels, dataset_name=datasets.ClassLabel(names=dnms)
+                text=datasets.Value(dtype='string'),
+                labels=datasets.Sequence(feature=datasets.ClassLabel(names=labels), length=-1),  # for multi-label
+                dataset_name=datasets.ClassLabel(names=dnms)
             ),
             homepage=_URL
             # TODO: citation
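The `labels` feature is now a variable-length `Sequence` over one shared `ClassLabel`, so a single example can carry several gold labels drawn from the union of label names across datasets. A self-contained sketch of how such a feature encodes and decodes; the label names here are illustrative, not the real UTCD label set:

```python
import datasets

# Illustrative label names; in utcd.py the names are the de-duplicated
# union of labels across all compiled datasets.
features = datasets.Features(
    text=datasets.Value(dtype='string'),
    labels=datasets.Sequence(feature=datasets.ClassLabel(names=['joy', 'sadness', 'surprise']), length=-1),
)
ds = datasets.Dataset.from_dict(
    {'text': ['what a day!'], 'labels': [['joy', 'surprise']]}, features=features
)
print(ds[0]['labels'])                                      # [0, 2] -- stored as class ids
print(features['labels'].feature.int2str(ds[0]['labels']))  # ['joy', 'surprise']
```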
@@ -135,9 +140,22 @@ class Utcd(datasets.GeneratorBasedBuilder):
 
         downloaded_files = dl_manager.download_and_extract('datasets.zip')
         mic(downloaded_files)
-        raise NotImplementedError
+        # raise NotImplementedError
 
+        # return [
+        #     datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
+        #     datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
+        # ]
         return [
-            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
-            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
+            datasets.SplitGenerator(name=_split2hf_split[s], gen_kwargs={"filepath": split2paths[s]}) for s in splits
         ]
+
+    def _generate_examples(self, filepath: str):
+        # each call is for one split of one dataset
+        dnm = filepath.split(os.sep)[-2]
+        id_ = 0
+        with open(filepath, encoding='utf-8') as f:
+            dset = json.load(f)
+        for txt, labels in dset.items():
+            yield id_, dict(text=txt, labels=labels, dataset_name=dnm)
+            id_ += 1
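Note that `splits` and `split2paths` are referenced but never defined in this hunk, consistent with the "TODO test" commit message. One plausible completion, assuming the archive unpacks to `<dataset_name>/<split>.json` files selected by the builder config name, which is what would let `filepath.split(os.sep)[-2]` in `_generate_examples` recover the dataset name. Everything below is an assumption, not part of this commit; it relies on the script's existing `datasets` import and `_split2hf_split`:

```python
import os

# Hypothetical completion of _split_generators -- not part of this commit.
def _split_generators(self, dl_manager):
    archive_dir = dl_manager.download_and_extract('datasets.zip')
    splits = ['train', 'test']  # UTCD split names, matching the labels lookup in _info
    # Assumed layout: <archive_dir>/<dataset_name>/<split>.json, one file per generator call.
    split2paths = {s: os.path.join(archive_dir, self.config.name, f'{s}.json') for s in splits}
    return [
        datasets.SplitGenerator(name=_split2hf_split[s], gen_kwargs={"filepath": split2paths[s]})
        for s in splits
    ]
```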
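`_generate_examples` implies that each split file is a single JSON object mapping each text to its list of gold labels. A runnable illustration of that assumed format and of the generator's traversal; the file contents and the `snips` directory name are made up:

```python
import json
import os
import tempfile

# Build a tiny split file in the assumed <dataset_name>/<split>.json layout.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'snips'))
path = os.path.join(root, 'snips', 'train.json')
with open(path, 'w', encoding='utf-8') as f:
    json.dump({
        'book me a flight to tokyo': ['book flight'],                         # single-label
        'play some jazz and dim the lights': ['play music', 'set lighting'],  # multi-label
    }, f)

# Mirror the generator's logic: the dataset name is the split file's parent directory.
dnm = path.split(os.sep)[-2]  # -> 'snips'
with open(path, encoding='utf-8') as f:
    dset = json.load(f)
for id_, (txt, labels) in enumerate(dset.items()):
    print(id_, dict(text=txt, labels=labels, dataset_name=dnm))
```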