Datasets:
Tasks: Token Classification
Sub-tasks: named-entity-recognition
Languages: English
Size: 1K<n<10K
Tags: named-entity-linking
License:
initial commit

- README.md +233 -0
- dataset_infos.json +1 -0
- ipm_nel.py +168 -0
README.md
ADDED
@@ -0,0 +1,233 @@
---
annotations_creators:
- crowdsourced
language_creators:
- found
languages:
- en
licenses:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-reuters-corpus
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- part-of-speech-tagging
paperswithcode_id: conll-2003
pretty_name: CoNLL-2003
---

# Dataset Card for "conll2003"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://www.aclweb.org/anthology/W03-0419/](https://www.aclweb.org/anthology/W03-0419/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 4.63 MB
- **Size of the generated dataset:** 9.78 MB
- **Total amount of disk used:** 14.41 MB

### Dataset Summary

The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on
four types of named entities: persons, locations, organizations and names of miscellaneous entities that do
not belong to the previous three groups.

The CoNLL-2003 shared task data files contain four columns separated by a single space. Each word has been put on
a separate line and there is an empty line after each sentence. The first item on each line is a word, the second
a part-of-speech (POS) tag, the third a syntactic chunk tag and the fourth the named entity tag. The chunk tags
and the named entity tags have the format I-TYPE which means that the word is inside a phrase of type TYPE. Only
if two phrases of the same type immediately follow each other, the first word of the second phrase will have tag
B-TYPE to show that it starts a new phrase. A word with tag O is not part of a phrase. Note that the dataset uses
the IOB2 tagging scheme, whereas the original dataset uses IOB1.

For more details see https://www.clips.uantwerpen.be/conll2003/ner/ and https://www.aclweb.org/anthology/W03-0419

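For a quick look at the data, the splits can be loaded with the `datasets` library; a minimal sketch, assuming the `conll2003` configuration resolves as usual:

```python
from datasets import load_dataset

# Load the train/validation/test splits of the CoNLL-2003 data.
dataset = load_dataset("conll2003")

# Each example holds the tokens of one sentence plus integer-encoded
# POS, chunk, and NER tags (IOB2 scheme).
example = dataset["train"][0]
print(example["tokens"])
print(example["ner_tags"])
```
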
### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

### Data Instances

#### conll2003

- **Size of downloaded dataset files:** 4.63 MB
- **Size of the generated dataset:** 9.78 MB
- **Total amount of disk used:** 14.41 MB

An example of 'train' looks as follows.

```
{
    "chunk_tags": [11, 12, 12, 21, 13, 11, 11, 21, 13, 11, 12, 13, 11, 21, 22, 11, 12, 17, 11, 21, 17, 11, 12, 12, 21, 22, 22, 13, 11, 0],
    "id": "0",
    "ner_tags": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    "pos_tags": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7],
    "tokens": ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed", "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb", "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can", "be", "transmitted", "to", "sheep", "."]
}
```

The original data files contain `-DOCSTART-` lines, which mark the boundary between two documents; these lines are filtered out in this implementation.

### Data Fields

The data fields are the same among all splits.

#### conll2003
- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `pos_tags`: a `list` of classification labels (`int`). Full tagset with indices:

```python
{'"': 0, "''": 1, '#': 2, '$': 3, '(': 4, ')': 5, ',': 6, '.': 7, ':': 8, '``': 9, 'CC': 10, 'CD': 11, 'DT': 12,
'EX': 13, 'FW': 14, 'IN': 15, 'JJ': 16, 'JJR': 17, 'JJS': 18, 'LS': 19, 'MD': 20, 'NN': 21, 'NNP': 22, 'NNPS': 23,
'NNS': 24, 'NN|SYM': 25, 'PDT': 26, 'POS': 27, 'PRP': 28, 'PRP$': 29, 'RB': 30, 'RBR': 31, 'RBS': 32, 'RP': 33,
'SYM': 34, 'TO': 35, 'UH': 36, 'VB': 37, 'VBD': 38, 'VBG': 39, 'VBN': 40, 'VBP': 41, 'VBZ': 42, 'WDT': 43,
'WP': 44, 'WP$': 45, 'WRB': 46}
```

- `chunk_tags`: a `list` of classification labels (`int`). Full tagset with indices:

```python
{'O': 0, 'B-ADJP': 1, 'I-ADJP': 2, 'B-ADVP': 3, 'I-ADVP': 4, 'B-CONJP': 5, 'I-CONJP': 6, 'B-INTJ': 7, 'I-INTJ': 8,
'B-LST': 9, 'I-LST': 10, 'B-NP': 11, 'I-NP': 12, 'B-PP': 13, 'I-PP': 14, 'B-PRT': 15, 'I-PRT': 16, 'B-SBAR': 17,
'I-SBAR': 18, 'B-UCP': 19, 'I-UCP': 20, 'B-VP': 21, 'I-VP': 22}
```

- `ner_tags`: a `list` of classification labels (`int`). Full tagset with indices:

```python
{'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4, 'B-LOC': 5, 'I-LOC': 6, 'B-MISC': 7, 'I-MISC': 8}
```

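The integer labels can be mapped back to their tag names through the `ClassLabel` features carried by the dataset; a minimal sketch, assuming the dataset is loaded as above:

```python
from datasets import load_dataset

dataset = load_dataset("conll2003")

# Each tag column is a Sequence of ClassLabel; int2str converts indices to names.
ner_labels = dataset["train"].features["ner_tags"].feature
example = dataset["train"][0]
print(ner_labels.int2str(example["ner_tags"]))  # e.g. ['O', 'B-ORG', 'I-ORG', 'O', ...]
```
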
### Data Splits

| name |train|validation|test|
|---------|----:|---------:|---:|
|conll2003|14041| 3250|3453|

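The split sizes above can also be checked programmatically; a small sketch under the same loading assumption:

```python
from datasets import load_dataset

dataset = load_dataset("conll2003")
# Expected: {'train': 14041, 'validation': 3250, 'test': 3453}
print({split: dataset[split].num_rows for split in dataset})
```
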
## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

From the [CoNLL2003 shared task](https://www.clips.uantwerpen.be/conll2003/ner/) page:

> The English data is a collection of news wire articles from the Reuters Corpus. The annotation has been done by people of the University of Antwerp. Because of copyright reasons we only make available the annotations. In order to build the complete data sets you will need access to the Reuters Corpus. It can be obtained for research purposes without any charge from NIST.

The copyrights are defined below, from the [Reuters Corpus page](https://trec.nist.gov/data/reuters/reuters.html):

> The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:
>
> [Organizational agreement](https://trec.nist.gov/data/reuters/org_appl_reuters_v4.html)
>
> This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.
>
> [Individual agreement](https://trec.nist.gov/data/reuters/ind_appl_reuters_v4.html)
>
> This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization.

### Citation Information

```
@inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
    title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
    author = "Tjong Kim Sang, Erik F. and
      De Meulder, Fien",
    booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
    year = "2003",
    url = "https://www.aclweb.org/anthology/W03-0419",
    pages = "142--147",
}
```

### Contributions

Thanks to [@jplu](https://github.com/jplu), [@vblagoje](https://github.com/vblagoje), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
dataset_infos.json
ADDED
@@ -0,0 +1 @@
{"ipm_nel": {"description": "This data is for the task of named entity recognition and linking/disambiguation over tweets. It comprises\nthe addition of an entity URI layer on top of an NER-annotated tweet dataset. The task is to detect entities\nand then provide a correct link to them in DBpedia, thus disambiguating otherwise ambiguous entity surface\nforms; for example, this means linking \"Paris\" to the correct instance of a city named that (e.g. Paris, \nFrance vs. Paris, Texas).\n\nThe data concentrates on ten types of named entities: company, facility, geographic location, movie, musical\nartist, person, product, sports team, TV show, and other.\n\nThe file is tab separated, in CoNLL format, with line breaks between tweets.\nData preserves the tokenisation used in the Ritter datasets.\nPoS labels are not present for all tweets, but where they could be found in the Ritter\ndata, they're given. In cases where a URI could not be agreed, or was not present in\nDBpedia, there is a NIL. See the paper for a full description of the methodology.\n\nFor more details see http://www.derczynski.com/papers/ner_single.pdf or https://www.sciencedirect.com/science/article/abs/pii/S0306457314001034\n", "citation": "@article{derczynski2015analysis,\n title={Analysis of named entity recognition and linking for tweets},\n author={Derczynski, Leon and Maynard, Diana and Rizzo, Giuseppe and Van Erp, Marieke and Gorrell, Genevieve and Troncy, Rapha{\"e}l and Petrak, Johann and Bontcheva, Kalina},\n journal={Information Processing \\& Management},\n volume={51},\n number={2},\n pages={32--49},\n year={2015},\n publisher={Elsevier}\n}\n", "homepage": "https://www.sciencedirect.com/science/article/pii/S0306457314001034", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "uris": {"dtype": "string", "id": null, "_type": "Value"}, "ner_tags": {"feature": {"num_classes": 21, "names": ["O", "B-company", "B-facility", "B-geo-loc", "B-movie", "B-musicartist", "B-other", "B-person", "B-product", "B-sportsteam", "B-tvshow", "I-company", "I-facility", "I-geo-loc", "I-movie", "I-musicartist", "I-other", "I-person", "I-product", "I-sportsteam", "I-tvshow"], "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "ipm_nel2003", "config_name": "ipm_nel", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 96989, "num_examples": 183, "dataset_name": "ipm_nel2003"}}, "download_checksums": {"http://www.derczynski.com/resources/ipm_nel.tar.gz": {"num_bytes": 2409032, "checksum": "c5a2fb618f19b591e6091d1538906db60ae16d2dbe7280533e4c2f8f8dabda9c"}}, "download_size": 2409032, "post_processing_size": null, "dataset_size": 96989, "size_in_bytes": 2506021}}
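For orientation, an illustrative sketch of reading this metadata file locally; only the path and keys visible in the file above are assumed:

```python
import json

# Read the generated metadata and pull out a few fields.
with open("dataset_infos.json", encoding="utf-8") as f:
    info = json.load(f)["ipm_nel"]

print(info["splits"]["train"]["num_examples"])            # 183 annotated tweets, train split only
print(info["features"]["ner_tags"]["feature"]["names"])   # the 21 IOB2 entity labels
```
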
ipm_nel.py
ADDED
@@ -0,0 +1,168 @@
# coding=utf-8
# Copyright 2020 HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Lint as: python3
"""IPM NEL: named entity recognition and entity linking/disambiguation over tweets (Derczynski et al., 2015)."""

import os

import datasets


logger = datasets.logging.get_logger(__name__)


_CITATION = """\
@article{derczynski2015analysis,
 title={Analysis of named entity recognition and linking for tweets},
 author={Derczynski, Leon and Maynard, Diana and Rizzo, Giuseppe and Van Erp, Marieke and Gorrell, Genevieve and Troncy, Rapha{\"e}l and Petrak, Johann and Bontcheva, Kalina},
 journal={Information Processing \& Management},
 volume={51},
 number={2},
 pages={32--49},
 year={2015},
 publisher={Elsevier}
}
"""

_DESCRIPTION = """\
This data is for the task of named entity recognition and linking/disambiguation over tweets. It comprises
the addition of an entity URI layer on top of an NER-annotated tweet dataset. The task is to detect entities
and then provide a correct link to them in DBpedia, thus disambiguating otherwise ambiguous entity surface
forms; for example, this means linking "Paris" to the correct instance of a city named that (e.g. Paris,
France vs. Paris, Texas).

The data concentrates on ten types of named entities: company, facility, geographic location, movie, musical
artist, person, product, sports team, TV show, and other.

The file is tab separated, in CoNLL format, with line breaks between tweets.
Data preserves the tokenisation used in the Ritter datasets.
PoS labels are not present for all tweets, but where they could be found in the Ritter
data, they're given. In cases where a URI could not be agreed, or was not present in
DBpedia, there is a NIL. See the paper for a full description of the methodology.

For more details see http://www.derczynski.com/papers/ner_single.pdf or https://www.sciencedirect.com/science/article/abs/pii/S0306457314001034
"""

_URL = "http://www.derczynski.com/resources/ipm_nel.tar.gz"
_TRAINING_FILE = "ipm_nel_corpus/ipm_nel.conll"


class IpmNelConfig(datasets.BuilderConfig):
    """BuilderConfig for IPM NEL"""

    def __init__(self, **kwargs):
        """BuilderConfig for IPM NEL.

        Args:
            **kwargs: keyword arguments forwarded to super.
        """
        super(IpmNelConfig, self).__init__(**kwargs)


class IpmNel2003(datasets.GeneratorBasedBuilder):
    """IpmNel2003 dataset."""

    BUILDER_CONFIGS = [
        IpmNelConfig(name="ipm_nel", version=datasets.Version("1.0.0"), description="IPM NEL dataset"),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "tokens": datasets.Sequence(datasets.Value("string")),
                    "uris": datasets.Value("string"),
                    "ner_tags": datasets.Sequence(
                        datasets.features.ClassLabel(
                            names=[
                                "O",
                                "B-company",
                                "B-facility",
                                "B-geo-loc",
                                "B-movie",
                                "B-musicartist",
                                "B-other",
                                "B-person",
                                "B-product",
                                "B-sportsteam",
                                "B-tvshow",
                                "I-company",
                                "I-facility",
                                "I-geo-loc",
                                "I-movie",
                                "I-musicartist",
                                "I-other",
                                "I-person",
                                "I-product",
                                "I-sportsteam",
                                "I-tvshow",
                            ]
                        )
                    ),
                }
            ),
            supervised_keys=None,
            homepage="https://www.sciencedirect.com/science/article/pii/S0306457314001034",
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        downloaded_file = dl_manager.download_and_extract(_URL)
        data_files = {
            "train": os.path.join(downloaded_file, _TRAINING_FILE),
        }

        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": data_files["train"]}),
        ]

    def _generate_examples(self, filepath):
        logger.info("⏳ Generating examples from = %s", filepath)
        with open(filepath, encoding="utf-8") as f:
            guid = 0
            tokens = []
            ner_tags = []
            uris = []
            for line in f:
                if line.startswith("-DOCSTART-") or line.strip() == "":
                    if tokens:
                        yield guid, {
                            "id": str(guid),
                            "tokens": tokens,
                            "ner_tags": ner_tags,
                            "uris": uris,
                        }
                        guid += 1
                        tokens = []
                        uris = []
                        ner_tags = []
                else:
                    # ipm_nel items are tab separated
                    splits = line.split("\t")
                    tokens.append(splits[0])
                    uris.append(splits[1])
                    ner_tags.append(splits[2].rstrip())
            # last example
            yield guid, {
                "id": str(guid),
                "tokens": tokens,
                "ner_tags": ner_tags,
                "uris": uris,
            }
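
Once the script above is available, the single train split could be loaded and decoded roughly as follows; a minimal sketch, assuming `load_dataset` can resolve the `ipm_nel` script by name or local path:

```python
from datasets import load_dataset

# Assumes the ipm_nel loading script above is resolvable by name or local path.
dataset = load_dataset("ipm_nel")

example = dataset["train"][0]
label_names = dataset["train"].features["ner_tags"].feature.names

# Tokens, their DBpedia URIs (or NIL), and the decoded IOB2 entity tags.
print(example["tokens"])
print(example["uris"])
print([label_names[i] for i in example["ner_tags"]])
```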