    path: data/validation-*
  - split: test
    path: data/test-*
task_categories:
- token-classification
language:
- en
tags:
- relation-extraction
pretty_name: CoNLL04
size_categories:
- 1K<n<10K
---
# Dataset Card for CoNLL04

## Dataset Description

- **Repository:** https://github.com/lavis-nlp/spert
- **Paper:** https://aclanthology.org/W04-2401/
- **Benchmark:** https://paperswithcode.com/sota/relation-extraction-on-conll04

### Dataset Summary

The CoNLL04 dataset is a benchmark dataset for relation extraction. It contains 1,437 sentences, each annotated with entities and their corresponding relation types, and every sentence contains at least one relation.
The data in this repository was converted from the original CoNLL04 format to JSONL format with https://github.com/lavis-nlp/spert/blob/master/scripts/conversion/convert_conll04.py
The original data can be found here: https://cogcomp.seas.upenn.edu/page/resource_view/43

The sentences in this dataset are tokenized and annotated with entities (`Peop`, `Loc`, `Org`, `Other`) and relations (`Located_In`, `Work_For`, `OrgBased_In`, `Live_In`, `Kill`).

### Languages

The language in the dataset is English.

## Dataset Structure

### Data Instances

An example from the 'train' split looks as follows:
```json
{
  "tokens": ["Newspaper", "`", "Explains", "'", "U.S.", "Interests", "Section", "Events", "FL1402001894", "Havana", "Radio", "Reloj", "Network", "in", "Spanish", "2100", "GMT", "13", "Feb", "94"],
  "entities": [
    {"type": "Loc", "start": 4, "end": 5},
    {"type": "Loc", "start": 9, "end": 10},
    {"type": "Org", "start": 10, "end": 13},
    {"type": "Other", "start": 15, "end": 17},
    {"type": "Other", "start": 17, "end": 20}
  ],
  "relations": [
    {"type": "OrgBased_In", "head": 2, "tail": 1}
  ],
  "orig_id": 3255
}
```

### Data Fields

- `tokens`: the tokenized text of the example, a list of `string` features.
- `entities`: a list of entities, each with:
  - `type`: entity type, a `string` feature.
  - `start`: start token index of the entity, an `int32` feature.
  - `end`: exclusive end token index of the entity, an `int32` feature.
- `relations`: a list of relations, each with:
  - `type`: relation type, a `string` feature.
  - `head`: index of the head entity in `entities`, an `int32` feature.
  - `tail`: index of the tail entity in `entities`, an `int32` feature.
- `orig_id`: id of the example in the original data, an `int32` feature.

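Given the field layout above, an entity's surface form can be recovered by slicing `tokens` with its `start`/`end` indices (`end` is exclusive), and a relation's arguments by indexing into the `entities` list. A minimal sketch in plain Python, using the example instance from this card (the helper names are illustrative, not part of the dataset):

```python
# Example instance copied from the dataset card above.
example = {
    "tokens": ["Newspaper", "`", "Explains", "'", "U.S.", "Interests", "Section",
               "Events", "FL1402001894", "Havana", "Radio", "Reloj", "Network",
               "in", "Spanish", "2100", "GMT", "13", "Feb", "94"],
    "entities": [
        {"type": "Loc", "start": 4, "end": 5},
        {"type": "Loc", "start": 9, "end": 10},
        {"type": "Org", "start": 10, "end": 13},
        {"type": "Other", "start": 15, "end": 17},
        {"type": "Other", "start": 17, "end": 20},
    ],
    "relations": [{"type": "OrgBased_In", "head": 2, "tail": 1}],
    "orig_id": 3255,
}

def entity_text(tokens, entity):
    # `end` is exclusive, so a plain slice recovers the mention.
    return " ".join(tokens[entity["start"]:entity["end"]])

def relation_triples(example):
    # `head`/`tail` index into the `entities` list, not into `tokens`.
    triples = []
    for rel in example["relations"]:
        head = example["entities"][rel["head"]]
        tail = example["entities"][rel["tail"]]
        triples.append((entity_text(example["tokens"], head),
                        rel["type"],
                        entity_text(example["tokens"], tail)))
    return triples

print(relation_triples(example))
# [('Radio Reloj Network', 'OrgBased_In', 'Havana')]
```

The same slicing convention applies to every split, since the SpERT conversion script emits all examples in this schema.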
## Citation

**BibTeX:**

```bibtex
@inproceedings{roth-yih-2004-linear,
    title = "A Linear Programming Formulation for Global Inference in Natural Language Tasks",
    author = "Roth, Dan  and
      Yih, Wen-tau",
    booktitle = "Proceedings of the Eighth Conference on Computational Natural Language Learning ({C}o{NLL}-2004) at {HLT}-{NAACL} 2004",
    month = may # " 6 - " # may # " 7",
    year = "2004",
    address = "Boston, Massachusetts, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/W04-2401",
    pages = "1--8",
}

@article{eberts-ulges2019spert,
    author = {Markus Eberts and
              Adrian Ulges},
    title = {Span-based Joint Entity and Relation Extraction with Transformer Pre-training},
    journal = {CoRR},
    volume = {abs/1909.07755},
    year = {2019},
    url = {http://arxiv.org/abs/1909.07755},
    eprinttype = {arXiv},
    eprint = {1909.07755},
    timestamp = {Mon, 23 Sep 2019 18:07:15 +0200},
    biburl = {https://dblp.org/rec/journals/corr/abs-1909-07755.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
}
```

**APA:**

Roth, D., & Yih, W. (2004). A linear programming formulation for global inference in natural language tasks. In *Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004) at HLT-NAACL 2004* (pp. 1–8). Boston, Massachusetts, USA: Association for Computational Linguistics. https://aclanthology.org/W04-2401

Eberts, M., & Ulges, A. (2019). Span-based joint entity and relation extraction with transformer pre-training. *CoRR, abs/1909.07755*. http://arxiv.org/abs/1909.07755

## Dataset Card Authors

[@phucdev](https://github.com/phucdev)