parquet-converter committed
Commit 6f8c70a · 1 Parent(s): 0c31d59

Update parquet files

.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
 
README.md DELETED
@@ -1,146 +0,0 @@
- ---
- annotations_creators:
- - machine-generated
- language_creators:
- - found
- language:
- - en
- license:
- - mit
- multilinguality:
- - monolingual
- size_categories:
- - unknown
- source_datasets:
- - extended|qa_srl
- task_categories:
- - text-retrieval
- task_ids: []
- pretty_name: LSOIE
- tags:
- - Open Information Extraction
- ---
-
- # Dataset Card for LSOIE
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-instances)
-   - [Data Splits](#data-instances)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-
- ## Dataset Description
-
- - **Homepage:** https://github.com/Jacobsolawetz/large-scale-oie
- - **Repository:** https://github.com/Jacobsolawetz/large-scale-oie
- - **Paper:** https://arxiv.org/abs/2101.11177
- - **Leaderboard:** [Needs More Information]
- - **Point of Contact:** [Needs More Information]
-
- ### Dataset Summary
-
- The Large Scale Open Information Extraction Dataset (LSOIE), is a dataset 20 times larger than the next largest human-annotated Open Information Extraction (OIE) dataset. LSOIE is a built upon the QA-SRL 2.0 dataset by transforming the list of Questions and answers for each predicate to a tuple representing a fact.
-
- ### Supported Tasks and Leaderboards
-
- Open Information Extraction
-
- ### Languages
-
- The text in this dataset is english.
-
- ## Dataset Structure
-
- ### Data Instances
-
- A datapoint comprises one fact together with the sentence it was extracted from. There can be multiple facts for each Sentence. Each fact is represented by a tuple $(a_0, p, a_1,\dots a_n)$ where $a_0$ is the head entity $p$ is the predicate and $a_1, \dots,a_n$ represent the tail.
-
- ### Data Fields
-
- - word_ids : sequence of indices (int) representing tokens in a sentence,
- - words : a sequence of strings, the tokens in the sentence,
- - pred : the predicate of the fact,
- - pred_ids : ids of the tokens in the predicate,
- - head_pred_id : id of the head token in the predicate,
- - sent_id : sentence id,
- - run_id : ,
- - label : Sequence of tags (BIO) representing the fact, e.g. if the fact is given by $(a_0, p, a_1, \dots, a_n) $
-
- ### Data Splits
-
- [Needs More Information]
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [Needs More Information]
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [Needs More Information]
-
- #### Who are the source language producers?
-
- [Needs More Information]
-
- ### Annotations
-
- #### Annotation process
-
- [Needs More Information]
-
- #### Who are the annotators?
-
- [Needs More Information]
-
- ### Personal and Sensitive Information
-
- [Needs More Information]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [Needs More Information]
-
- ### Discussion of Biases
-
- [Needs More Information]
-
- ### Other Known Limitations
-
- [Needs More Information]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [Needs More Information]
-
- ### Licensing Information
-
- [Needs More Information]
-
- ### Citation Information
-
- [Needs More Information]
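
The deleted card describes each fact as a tuple $(a_0, p, a_1, \dots, a_n)$ encoded as BIO tags over the sentence tokens, but stops short of a worked example. Below is a minimal sketch of how such a label sequence could be grouped back into a tuple; the concrete tag names (`A0-B`, `A0-I`, `P-B`, `O`, ...) and the example sentence are assumptions for illustration, not values taken from the dataset.

```python
def fact_from_bio(words, labels):
    """Group BIO-tagged tokens into role spans, ordered as (a0, p, a1, ...)."""
    spans = {}
    for word, label in zip(words, labels):
        if label == "O":                      # token outside the fact
            continue
        role = label.rsplit("-", 1)[0]        # assumed format: "A0-B"/"A0-I" -> "A0"
        spans.setdefault(role, []).append(word)
    order = ["A0", "P"] + sorted(r for r in spans if r not in ("A0", "P"))
    return tuple(" ".join(spans[r]) for r in order if r in spans)

# Hypothetical record shaped like the fields listed above.
words = ["Marie", "Curie", "won", "the", "Nobel", "Prize", "in", "1911"]
labels = ["A0-B", "A0-I", "P-B", "A1-B", "A1-I", "A1-I", "A2-B", "A2-I"]
print(fact_from_bio(words, labels))
# ('Marie Curie', 'won', 'the Nobel Prize', 'in 1911')
```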
 
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"wiki": {"description": "\nThe Large Scale Open Information Extraction Dataset (LSOIE), is a dataset 20 \ntimes larger than the next largest human-annotated Open Information Extraction\n(OIE) dataset. LSOIE is a built upon the QA-SRL 2.0 dataset.\n", "citation": "@article{lsoie-2021,\n title={{LSOIE}: A Large-Scale Dataset for Supervised Open Information Extraction},\n author={{Solawetz}, Jacob and {Larson}, Stefan},\n journal={arXiv preprint arXiv:2101.11177},\n year={2019},\n url=\"https://arxiv.org/pdf/2101.11177.pdf\"\n}\n", "homepage": "https://github.com/Jacobsolawetz/large-scale-oie/", "license": "", "features": {"word_ids": {"feature": {"dtype": "int16", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "words": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "pred": {"dtype": "string", "id": null, "_type": "Value"}, "pred_ids": {"feature": {"dtype": "int16", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "head_pred_id": {"dtype": "int16", "id": null, "_type": "Value"}, "sent_id": {"dtype": "int16", "id": null, "_type": "Value"}, "run_id": {"dtype": "int16", "id": null, "_type": "Value"}, "label": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": {"input": "word_ids", "output": "label"}, "task_templates": null, "builder_name": "lsoie", "config_name": "wiki", "version": "0.0.0", "splits": {"train": {"name": "train", "num_bytes": 24938522, "num_examples": 46016, "dataset_name": "lsoie"}, "validation": {"name": "validation", "num_bytes": 2880854, "num_examples": 5269, "dataset_name": "lsoie"}, "test": {"name": "test", "num_bytes": 2840517, "num_examples": 5374, "dataset_name": "lsoie"}}, "download_checksums": {"https://github.com/Jacobsolawetz/large-scale-oie/raw/master/dataset_creation/lsoie_data/lsoie_data.zip": {"num_bytes": 19799926, "checksum": "0d189a3a8fef4b9f9efdad8faf0f53fc53805f9b2ad5354926e09c1449a00330"}}, "download_size": 19799926, "post_processing_size": null, "dataset_size": 30659893, "size_in_bytes": 50459819}}
 
 
lsoie.py DELETED
@@ -1,151 +0,0 @@
- # -*- coding: utf-8 -*-
- """LSOIE: A Large-Scale Dataset for Supervised Open Information Extraction."""
- import os
- import datasets
- from datasets.info import SupervisedKeysData
- from zipfile import ZipFile
- logger = datasets.logging.get_logger(__name__)
-
-
- _CITATION = """\
- @article{lsoie-2021,
-     title={{LSOIE}: A Large-Scale Dataset for Supervised Open Information Extraction},
-     author={{Solawetz}, Jacob and {Larson}, Stefan},
-     journal={arXiv preprint arXiv:2101.11177},
-     year={2019},
-     url="https://arxiv.org/pdf/2101.11177.pdf"
- }
- """
-
- _DESCRIPTION = """
- The Large Scale Open Information Extraction Dataset (LSOIE), is a dataset 20
- times larger than the next largest human-annotated Open Information Extraction
- (OIE) dataset. LSOIE is a built upon the QA-SRL 2.0 dataset.
- """
-
- _URL = "https://github.com/Jacobsolawetz/large-scale-oie/"
- _URLS = {
-     "zip": _URL+"raw/master/dataset_creation/lsoie_data/lsoie_data.zip"
- }
- _ARCHIVE_FILES = [
-     "lsoie_science_train.conll",
-     "lsoie_science_dev.conll",
-     "lsoie_science_test.conll",
-     "lsoie_wiki_train.conll",
-     "lsoie_wiki_dev.conll",
-     "lsoie_wiki_test.conll",
- ]
-
-
- class LsoieConfig(datasets.BuilderConfig):
-     """BuilderConfig for LSOIE."""
-
-     def __init__(self,subset="wiki", **kwargs):
-         """BuilderConfig for LSOIE.
-         Args:
-             subset: str - either "wiki" or "science"
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super(LsoieConfig, self).__init__(**kwargs)
-         self.subset=subset
-
-
- class Lsoie(datasets.GeneratorBasedBuilder):
-     """LSOIE: A Large-Scale Dataset for Supervised Open Information Extraction"""
-
-     BUILDER_CONFIGS = [
-         LsoieConfig(
-             name="wiki",
-             description="LSOIE dataset from wikipedia and wikinews",
-             subset="wiki",
-         ),
-         LsoieConfig(
-             name="sci",
-             description="LSOIE dataset build over scientific domain",
-             subset="science",
-         ),
-     ]
-
-     DEFAULT_CONFIG_NAME = "wiki"
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "word_ids": datasets.Sequence(datasets.Value("int16")),
-                     "words": datasets.Sequence(datasets.Value("string")),
-                     "pred": datasets.Value("string"),
-                     "pred_ids": datasets.Sequence(datasets.Value("int16")),
-                     "head_pred_id": datasets.Value("int16"),
-                     "sent_id": datasets.Value("int16"),
-                     "run_id": datasets.Value("int16"),
-                     "label": datasets.Sequence(datasets.Value("string")),
-                 }
-             ),
-             supervised_keys=SupervisedKeysData(input="word_ids",output="label"),
-             homepage=_URL,
-             citation=_CITATION,
-             #there is no default task for open information extraction yet
-             #task_templates=[
-             #    OpenInformationExtraction(
-             #        question_column="question", context_column="context", answers_column="answers"
-             #    )
-             #],
-         )
-
-     def _split_generators(self, dl_manager):
-         downloaded_archive = dl_manager.download(_URLS)['zip']
-         #name_pre=os.path.join("lsoie_data","lsoie_")+self.config.subset+"_"
-         name_pre="lsoie_"+self.config.subset+"_"
-         return [
-             datasets.SplitGenerator(name=datasets.Split.TRAIN,
-                 gen_kwargs={
-                     "archive_path": downloaded_archive,
-                     "file_name": name_pre+"train.conll",
-                 }),
-             datasets.SplitGenerator(name=datasets.Split.VALIDATION,
-                 gen_kwargs={
-                     "archive_path": downloaded_archive,
-                     "file_name": name_pre+"dev.conll",
-                 }),
-             datasets.SplitGenerator(name=datasets.Split.TEST,
-                 gen_kwargs={
-                     "archive_path": downloaded_archive,
-                     "file_name": name_pre+"test.conll",
-                 }),
-         ]
-
-     def _generate_examples(self,archive_path,file_name):
-         """This functions returns the samples in a raw format"""
-         logger.info("generating examples from archive:{}".format(archive_path))
-         columns={'word_ids':int,
-                  'words':str,
-                  'pred':str,
-                  'pred_ids':lambda x: [ num for num in x.strip('[]').split(',')],
-                  'head_pred_id': int,
-                  'sent_id':int,
-                  'run_id': int,
-                  'label':str}
-         list_columns=["word_ids","words","label"]
-         sep="\t"
-         key=0
-         sentence=dict()
-         for column in list_columns:
-             sentence[column]=[]
-         with ZipFile(archive_path) as zipfile:
-             with zipfile.open('lsoie_data/'+file_name,mode='r') as file:
-                 for line in file:
-                     line=line.decode("utf-8").strip('\n').split(sep=sep)
-                     if line[0]=='':
-                         yield key, sentence
-                         key+=1
-                         for column in list_columns:
-                             sentence[column]=[]
-                         continue
-                     for column, val in zip(columns.keys(),line):
-                         val=columns[column](val)
-                         if column in list_columns:
-                             sentence[column].append(val)
-                         else:
-                             sentence[column]=val
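
The deleted loading script above downloaded `lsoie_data.zip` and parsed the CoNLL splits on the fly. After this commit the converted splits live in the parquet files added below, so they can be read directly; a minimal sketch, assuming a local checkout with the LFS objects pulled (`git lfs pull`) and the column layout declared in `_info`:

```python
import pandas as pd  # read_parquet needs pyarrow or fastparquet installed

# Read one converted split; columns should match the deleted script's features:
# word_ids, words, pred, pred_ids, head_pred_id, sent_id, run_id, label.
df = pd.read_parquet("wiki/lsoie-train.parquet")
print(df.columns.tolist())
print(df.iloc[0]["words"], df.iloc[0]["label"])
```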
 
sci/lsoie-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a33284dc0347baebd13fed4d0922ea3401ebd7625d7365bfcc155b65495d7309
+ size 1128901
sci/lsoie-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6a4c4ffc4375990bfc6fed71558222453aee25fed6af38468188c69143f8f5e9
+ size 5726615
sci/lsoie-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fc480600f0386f223d1ec302212a88d79c38f666cb9fd01e05ed994c25ead301
+ size 8639
wiki/lsoie-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:55edf84d8b4db2ff7e2419f060b99cfd9f67b5e5c46fa18b61f485d33b3dc83c
+ size 462862
wiki/lsoie-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:abf3c509348522ce0645da1e66173fed975ab5081bfaa47cfc254c566129c7d5
+ size 3733462
wiki/lsoie-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:17167b287d166ea23210cc541cf9e292b840c24ed08358ce2514caf34522e2af
+ size 437235
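
The added files can also be loaded through the `datasets` library's generic parquet builder; the paths below are the wiki files added in this commit, and the split mapping simply follows the file names:

```python
from datasets import load_dataset

# Build a DatasetDict from the wiki parquet files added above.
wiki = load_dataset(
    "parquet",
    data_files={
        "train": "wiki/lsoie-train.parquet",
        "validation": "wiki/lsoie-validation.parquet",
        "test": "wiki/lsoie-test.parquet",
    },
)
print(wiki)  # train / validation / test splits
```

The `sci` configuration works the same way with the three files under `sci/`.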