leondz committed
Commit 77aa5d9 · Parent(s): f3fa5fb

update model card

Files changed (1)
  1. README.md +67 -92
README.md CHANGED
@@ -10,16 +10,16 @@ licenses:
  multilinguality:
  - monolingual
  size_categories:
- - 10K<n<100K
  source_datasets:
- - extended|other-reuters-corpus
  task_categories:
  - token-classification
  task_ids:
  - named-entity-recognition
- - part-of-speech-tagging
- paperswithcode_id: conll-2003
- pretty_name: CoNLL-2003
  ---

  # Dataset Card for "conll2003"
@@ -50,29 +50,33 @@ pretty_name: CoNLL-2003

  ## Dataset Description

- - **Homepage:** [https://www.aclweb.org/anthology/W03-0419/](https://www.aclweb.org/anthology/W03-0419/)
  - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 4.63 MB
- - **Size of the generated dataset:** 9.78 MB
- - **Total amount of disk used:** 14.41 MB

  ### Dataset Summary

- The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on
- four types of named entities: persons, locations, organizations and names of miscellaneous entities that do
- not belong to the previous three groups.

- The CoNLL-2003 shared task data files contain four columns separated by a single space. Each word has been put on
- a separate line and there is an empty line after each sentence. The first item on each line is a word, the second
- a part-of-speech (POS) tag, the third a syntactic chunk tag and the fourth the named entity tag. The chunk tags
- and the named entity tags have the format I-TYPE which means that the word is inside a phrase of type TYPE. Only
- if two phrases of the same type immediately follow each other, the first word of the second phrase will have tag
- B-TYPE to show that it starts a new phrase. A word with tag O is not part of a phrase. Note the dataset uses IOB2
- tagging scheme, whereas the original dataset uses IOB1.

- For more details see https://www.clips.uantwerpen.be/conll2003/ner/ and https://www.aclweb.org/anthology/W03-0419

  ### Supported Tasks and Leaderboards

@@ -86,148 +90,119 @@ For more details see https://www.clips.uantwerpen.be/conll2003/ner/ and https://

  ### Data Instances

- #### conll2003

- - **Size of downloaded dataset files:** 4.63 MB
- - **Size of the generated dataset:** 9.78 MB
- - **Total amount of disk used:** 14.41 MB

  An example of 'train' looks as follows.

  ```
  {
- "chunk_tags": [11, 12, 12, 21, 13, 11, 11, 21, 13, 11, 12, 13, 11, 21, 22, 11, 12, 17, 11, 21, 17, 11, 12, 12, 21, 22, 22, 13, 11, 0],
  "id": "0",
  "ner_tags": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- "pos_tags": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7],
  "tokens": ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed", "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb", "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can", "be", "transmitted", "to", "sheep", "."]
  }
  ```

- The original data files have `-DOCSTART-` lines used to separate documents, but these lines are removed here.
- Indeed `-DOCSTART-` is a special line that acts as a boundary between two different documents, and it is filtered out in this implementation.

  ### Data Fields

- The data fields are the same among all splits.
-
- #### conll2003
  - `id`: a `string` feature.
  - `tokens`: a `list` of `string` features.
- - `pos_tags`: a `list` of classification labels (`int`). Full tagset with indices:
-
- ```python
- {'"': 0, "''": 1, '#': 2, '$': 3, '(': 4, ')': 5, ',': 6, '.': 7, ':': 8, '``': 9, 'CC': 10, 'CD': 11, 'DT': 12,
- 'EX': 13, 'FW': 14, 'IN': 15, 'JJ': 16, 'JJR': 17, 'JJS': 18, 'LS': 19, 'MD': 20, 'NN': 21, 'NNP': 22, 'NNPS': 23,
- 'NNS': 24, 'NN|SYM': 25, 'PDT': 26, 'POS': 27, 'PRP': 28, 'PRP$': 29, 'RB': 30, 'RBR': 31, 'RBS': 32, 'RP': 33,
- 'SYM': 34, 'TO': 35, 'UH': 36, 'VB': 37, 'VBD': 38, 'VBG': 39, 'VBN': 40, 'VBP': 41, 'VBZ': 42, 'WDT': 43,
- 'WP': 44, 'WP$': 45, 'WRB': 46}
- ```
-
- - `chunk_tags`: a `list` of classification labels (`int`). Full tagset with indices:
-
- ```python
- {'O': 0, 'B-ADJP': 1, 'I-ADJP': 2, 'B-ADVP': 3, 'I-ADVP': 4, 'B-CONJP': 5, 'I-CONJP': 6, 'B-INTJ': 7, 'I-INTJ': 8,
- 'B-LST': 9, 'I-LST': 10, 'B-NP': 11, 'I-NP': 12, 'B-PP': 13, 'I-PP': 14, 'B-PRT': 15, 'I-PRT': 16, 'B-SBAR': 17,
- 'I-SBAR': 18, 'B-UCP': 19, 'I-UCP': 20, 'B-VP': 21, 'I-VP': 22}
- ```
-
  - `ner_tags`: a `list` of classification labels (`int`). Full tagset with indices:
-
- ```python
- {'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4, 'B-LOC': 5, 'I-LOC': 6, 'B-MISC': 7, 'I-MISC': 8}
- ```

  ### Data Splits

- | name |train|validation|test|
- |---------|----:|---------:|---:|
- |conll2003|14041| 3250|3453|

  ## Dataset Creation

  ### Curation Rationale

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

  ### Source Data

  #### Initial Data Collection and Normalization

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

  #### Who are the source language producers?

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

  ### Annotations

  #### Annotation process

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

  #### Who are the annotators?

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

  ### Personal and Sensitive Information

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

  ## Considerations for Using the Data

  ### Social Impact of Dataset

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

  ### Discussion of Biases

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

  ### Other Known Limitations

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

  ## Additional Information

  ### Dataset Curators

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

  ### Licensing Information

- From the [CoNLL2003 shared task](https://www.clips.uantwerpen.be/conll2003/ner/) page:

- > The English data is a collection of news wire articles from the Reuters Corpus. The annotation has been done by people of the University of Antwerp. Because of copyright reasons we only make available the annotations. In order to build the complete data sets you will need access to the Reuters Corpus. It can be obtained for research purposes without any charge from NIST.
-
- The copyrights are defined below, from the [Reuters Corpus page](https://trec.nist.gov/data/reuters/reuters.html):
-
- > The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:
- >
- > [Organizational agreement](https://trec.nist.gov/data/reuters/org_appl_reuters_v4.html)
- >
- > This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.
- >
- > [Individual agreement](https://trec.nist.gov/data/reuters/ind_appl_reuters_v4.html)
- >
- > This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization.

  ### Citation Information

  ```
- @inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
-   title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
-   author = "Tjong Kim Sang, Erik F. and
-     De Meulder, Fien",
-   booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
-   year = "2003",
-   url = "https://www.aclweb.org/anthology/W03-0419",
-   pages = "142--147",
  }
-
  ```

  ### Contributions

- Thanks to [@jplu](https://github.com/jplu), [@vblagoje](https://github.com/vblagoje), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
@@ -10,16 +10,16 @@ licenses:
  multilinguality:
  - monolingual
  size_categories:
+ - 1K<n<10K
  source_datasets:
+ -
  task_categories:
  - token-classification
  task_ids:
  - named-entity-recognition
+ - named-entity-linking
+ paperswithcode_id: ipm-nel
+ pretty_name: IPM NEL (Derczynski)
  ---

  # Dataset Card for "conll2003"
 
@@ -50,29 +50,33 @@ pretty_name: CoNLL-2003

  ## Dataset Description

+ - **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
  - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Paper:** [http://www.derczynski.com/papers/ner_single.pdf](http://www.derczynski.com/papers/ner_single.pdf)
+ - **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
+ - **Size of downloaded dataset files:** 120 KB
+ - **Size of the generated dataset:**
+ - **Total amount of disk used:**

  ### Dataset Summary

+ This data is for the task of named entity recognition and linking/disambiguation over tweets. It comprises
+ the addition of an entity URI layer on top of an NER-annotated tweet dataset. The task is to detect entities
+ and then provide a correct link to them in DBpedia, thus disambiguating otherwise ambiguous entity surface
+ forms; for example, this means linking "Paris" to the correct instance of a city with that name (e.g. Paris,
+ France vs. Paris, Texas).

+ The data concentrates on ten types of named entities: company, facility, geographic location, movie, musical
+ artist, person, product, sports team, TV show, and other.

+ The file is tab-separated, in CoNLL format, with line breaks between tweets; a sketch of one way to read this
+ layout follows the list below.

+ * Data preserves the tokenisation used in the Ritter datasets.
+ * PoS labels are not present for all tweets, but where they could be found in the Ritter data, they're given.
+ * In cases where a URI could not be agreed, or was not present in DBpedia, there is a NIL. See the paper for a full description of the methodology.
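
As a concrete illustration of that file layout, here is a minimal reading sketch under stated assumptions: the filename is hypothetical, and the per-line column order (token first, tag and URI columns after) must be checked against the distributed file.

```python
# Minimal sketch for reading a tab-separated, CoNLL-style file with blank
# lines between tweets. ASSUMPTIONS: the filename below is hypothetical, and
# the column order per line should be verified against the released data.
def read_tweets(path):
    tweets, current = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line.strip():        # blank line marks a tweet boundary
                if current:
                    tweets.append(current)
                    current = []
            else:
                current.append(line.split("\t"))
        if current:                     # flush the final tweet
            tweets.append(current)
    return tweets

tweets = read_tweets("ipm_nel.conll")   # hypothetical filename
```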

  ### Supported Tasks and Leaderboards
 
 
@@ -86,148 +90,119 @@ For more details see https://www.clips.uantwerpen.be/conll2003/ner/ and https://

  ### Data Instances

+ #### ipm_nel

+ - **Size of downloaded dataset files:** 120 KB
+ - **Size of the generated dataset:**
+ - **Total amount of disk used:**

  An example of 'train' looks as follows.

  ```
  {
  "id": "0",
  "ner_tags": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ "uris": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7],
  "tokens": ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed", "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb", "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can", "be", "transmitted", "to", "sheep", "."]
  }
  ```
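
For programmatic access, a hedged sketch with the Hugging Face `datasets` library follows; the repository identifier is an illustrative assumption, since this card does not state the final Hub name.

```python
# Hedged sketch: load and inspect one example with Hugging Face `datasets`.
# ASSUMPTION: the repository id below is illustrative; substitute the real one.
from datasets import load_dataset

ds = load_dataset("strombergnlp/ipm_nel", split="train")  # hypothetical repo id
example = ds[0]
print(example["tokens"][:5], example["ner_tags"][:5])
```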

  ### Data Fields

  - `id`: a `string` feature.
  - `tokens`: a `list` of `string` features.
  - `ner_tags`: a `list` of classification labels (`int`). Full tagset with indices:
+ - `uris`: a `list` of URIs (`string`) that disambiguate entities. Set to `NIL` when an entity has no DBpedia entry, or blank for outside-of-entity tokens.
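
To make the relationship between these fields concrete, here is a small sketch that pulls out the linked mentions of one example. It assumes, per the description above, that `uris` holds one string per token: empty outside entities, `NIL` where no DBpedia entry fits.

```python
# Sketch: recover (token, ner_tag, uri) triples for tokens linked to DBpedia.
# ASSUMPTION: `example` is shaped like the instance shown earlier, with one
# uri string per token ("" outside entities, "NIL" when no entry was agreed).
def linked_mentions(example):
    return [
        (token, tag, uri)
        for token, tag, uri in zip(example["tokens"], example["ner_tags"], example["uris"])
        if uri and uri != "NIL"        # keep only tokens linked to a DBpedia entry
    ]
```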
 
 
 
  ### Data Splits

+ | name    | train |
+ |---------|------:|
+ | ipm_nel |       |

  ## Dataset Creation

  ### Curation Rationale

+ The dataset was gathered to provide a social media benchmark for named entity linking that is sufficiently different from newswire data.

  ### Source Data

  #### Initial Data Collection and Normalization

+ The data is partly harvested from the dataset distributed by [Ritter / Named Entity Recognition in Tweets: An Experimental Study](https://aclanthology.org/D11-1141/),
+ and partly taken from Twitter by the authors.

  #### Who are the source language producers?

+ English-speaking Twitter users, posting between October 2011 and September 2013.

  ### Annotations

  #### Annotation process

+ The authors were allocated documents and marked them for named entities (where these were not already present), then attempted to find
+ the best-fitting DBpedia entry for each entity found. Each entity mention was labelled by a random set of three volunteers.
+ The annotation task was mediated using Crowdflower (Biewald, 2012). The interface showed each volunteer the text of the tweet, any URL links contained
+ therein, and a set of candidate targets from DBpedia. The volunteers were encouraged to click on the URL links from the
+ tweet, to gain additional context and thus ensure that they chose the correct DBpedia URI. Candidate entities were
+ shown in random order, using the text from the corresponding DBpedia abstracts (where available) or the actual DBpedia
+ URI otherwise. In addition, the options "none of the above", "not an entity" and "cannot decide" were added, to allow the
+ volunteers to indicate that an entity mention has no corresponding DBpedia URI (none of the above), that the highlighted text
+ is not an entity, or that the tweet text (and any links, if available) did not provide sufficient information to reliably disambiguate the entity mention.
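
The card does not say how the three volunteer labels per mention were merged; purely as an illustrative sketch (not the authors' documented procedure, which is in the paper), a simple majority vote over the three URIs might look like this:

```python
# Illustrative only: one plausible way to merge three volunteer labels per
# mention. The actual adjudication procedure is described in the paper.
from collections import Counter

def merge_labels(labels):
    """Return the majority URI among volunteer labels, or 'NIL' if none."""
    uri, count = Counter(labels).most_common(1)[0]
    return uri if count >= 2 else "NIL"

print(merge_labels(["dbpedia:Paris", "dbpedia:Paris", "NIL"]))  # -> dbpedia:Paris
```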

  #### Who are the annotators?

+ The annotators are 10 volunteer NLP researchers, drawn from the authors and the authors' institutions.

  ### Personal and Sensitive Information

+ The data was public at the time of collection. User names are preserved.

  ## Considerations for Using the Data

  ### Social Impact of Dataset

+ There is a risk that this data includes content its users have since deleted. The data has NOT been vetted in any way, so it may contain [harmful text](https://arxiv.org/abs/2204.14256).

  ### Discussion of Biases

+ The data is annotated by NLP researchers; this group is known to have high agreement but low recall on English Twitter text ([C16-1111](https://aclanthology.org/C16-1111/)).

  ### Other Known Limitations

+ The above limitations apply.

  ## Additional Information

  ### Dataset Curators

+ The dataset is curated by the paper's authors.

  ### Licensing Information

+ The authors distribute this data under the Creative Commons Attribution license, CC BY 4.0. You must
+ acknowledge the authors if you use this data, but beyond that you are largely
+ free to use it as you like. See https://creativecommons.org/licenses/by/4.0/legalcode .

  ### Citation Information

  ```
+ @article{derczynski2015analysis,
+   title={Analysis of named entity recognition and linking for tweets},
+   author={Derczynski, Leon and Maynard, Diana and Rizzo, Giuseppe and Van Erp, Marieke and Gorrell, Genevieve and Troncy, Rapha{\"e}l and Petrak, Johann and Bontcheva, Kalina},
+   journal={Information Processing \& Management},
+   volume={51},
+   number={2},
+   pages={32--49},
+   year={2015},
+   publisher={Elsevier}
  }
  ```

  ### Contributions

+ Dataset added by its author, [@leondz](https://github.com/leondz).