.gitattributes CHANGED
@@ -28,3 +28,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 gazeta_train.jsonl filter=lfs diff=lfs merge=lfs -text
 gazeta_val.jsonl filter=lfs diff=lfs merge=lfs -text
 gazeta_test.jsonl filter=lfs diff=lfs merge=lfs -text
+default/test/index.duckdb filter=lfs diff=lfs merge=lfs -text
+default/validation/index.duckdb filter=lfs diff=lfs merge=lfs -text
+default/train/index.duckdb filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,29 +1,47 @@
 ---
-YAML tags:
 annotations_creators:
 - expert-generated
 - found
 language_creators:
 - expert-generated
 - found
+task_categories:
+- summarization
 language:
 - ru
-language_bcp47:
-- ru-RU
+size_categories:
+- 10K<n<100K
 license:
 - unknown
 multilinguality:
 - monolingual
-pretty_name: Gazeta
-size_categories:
-- 10K<n<100K
 source_datasets:
 - original
-task_categories:
-- conditional-text-generation
-task_ids:
-- summarization
 paperswithcode_id: gazeta
+dataset_info:
+  features:
+  - name: text
+    dtype: string
+  - name: summary
+    dtype: string
+  - name: title
+    dtype: string
+  - name: date
+    dtype: string
+  - name: url
+    dtype: string
+  splits:
+  - name: train
+    num_bytes: 547118436
+    num_examples: 60964
+  - name: validation
+    num_bytes: 55784053
+    num_examples: 6369
+  - name: test
+    num_bytes: 60816821
+    num_examples: 6793
+  download_size: 332486618
+  dataset_size: 663719310
 ---
 
 # Dataset Card for Gazeta
@@ -44,14 +62,11 @@ paperswithcode_id: gazeta
   - [Annotations](#annotations)
   - [Personal and Sensitive Information](#personal-and-sensitive-information)
 - [Considerations for Using the Data](#considerations-for-using-the-data)
-  - [Social Impact of Dataset](#social-impact-of-dataset)
   - [Discussion of Biases](#discussion-of-biases)
-  - [Other Known Limitations](#other-known-limitations)
 - [Additional Information](#additional-information)
   - [Dataset Curators](#dataset-curators)
   - [Licensing Information](#licensing-information)
   - [Citation Information](#citation-information)
-  - [Contributions](#contributions)
 
 ## Dataset Description
 
@@ -144,34 +159,16 @@ When the first version of the dataset was collected, there were no other dataset
 
 Texts and summaries were written by journalists at [Gazeta](https://www.gazeta.ru/).
 
-### Annotations
-
-#### Annotation process
-
-[N/A]
-
-#### Who are the annotators?
-
-[N/A]
-
 ### Personal and Sensitive Information
 
 The dataset is not anonymized, so individuals' names can be found in the dataset. Information about the original author is not included in the dataset.
 
 ## Considerations for Using the Data
 
-### Social Impact of Dataset
-
-[More Information Needed]
-
 ### Discussion of Biases
 
 It is a dataset from a single source. Thus it has a constrained text style and event perspective.
 
-### Other Known Limitations
-
-[More Information Needed]
-
 ## Additional Information
 
 ### Dataset Curators
@@ -197,7 +194,3 @@ Legal basis for distribution of the dataset: https://www.gazeta.ru/credits.shtml
   isbn="978-3-030-59082-6"
 }
 ```
-
-### Contributions
-
-[N/A]
dataset_infos.json DELETED
@@ -1 +0,0 @@
-{"default": {"description": "Gazeta: Dataset for Automatic Summarization of Russian News", "citation": "\n@InProceedings{10.1007/978-3-030-59082-6_9,\n author=\"Gusev, Ilya\",\n editor=\"Filchenkov, Andrey and Kauttonen, Janne and Pivovarova, Lidia\",\n title=\"Dataset for Automatic Summarization of Russian News\",\n booktitle=\"Artificial Intelligence and Natural Language\",\n year=\"2020\",\n publisher=\"Springer International Publishing\",\n address=\"Cham\",\n pages=\"122--134\",\n isbn=\"978-3-030-59082-6\"\n}\n", "homepage": "https://github.com/IlyaGusev/gazeta", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "date": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "text", "output": "summary"}, "task_templates": null, "builder_name": "gazeta_dataset", "config_name": "default", "version": {"version_str": "2.0.0", "description": null, "major": 2, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 547118576, "num_examples": 60964, "dataset_name": "gazeta_dataset"}, "test": {"name": "test", "num_bytes": 60816841, "num_examples": 6793, "dataset_name": "gazeta_dataset"}, "validation": {"name": "validation", "num_bytes": 55784073, "num_examples": 6369, "dataset_name": "gazeta_dataset"}}, "download_checksums": {"gazeta_train.jsonl": {"num_bytes": 549801555, "checksum": "678ce0eab9b3026c9f3388c6f8b2e5a48c84590819e175a462cf15749bc0c60e"}, "gazeta_val.jsonl": {"num_bytes": 56064530, "checksum": "bb1e1edd75b9de85af193de473e301655f59e345be3c29ce9087326adada24fd"}, "gazeta_test.jsonl": {"num_bytes": 61115756, "checksum": "3963ca7e2313c4bb75a4140abd614e17d98199c9f03f03490ab6afb19bfbf6cf"}}, "download_size": 666981841, "post_processing_size": null, "dataset_size": 663719490, "size_in_bytes": 1330701331}}
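The byte counts in the deleted metadata above and in the new `dataset_info` YAML block in README.md are each internally consistent. A quick sketch to double-check the arithmetic, with the per-split `num_bytes` values copied from this diff:

```python
# Per-split num_bytes from the deleted dataset_infos.json (old JSONL-based build)
old_splits = {"train": 547118576, "test": 60816841, "validation": 55784073}
# Per-split num_bytes from the new dataset_info YAML block (Parquet-based build)
new_splits = {"train": 547118436, "validation": 55784053, "test": 60816821}

# Each set of splits sums to its reported dataset_size.
assert sum(old_splits.values()) == 663719490
assert sum(new_splits.values()) == 663719310
```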
 
 
gazeta_val.jsonl → default/test/0000.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bb1e1edd75b9de85af193de473e301655f59e345be3c29ce9087326adada24fd
-size 56064530
+oid sha256:db72251f11b33a7d69a7de709a4026a19de04f8039ad3ef1696986ba13a6d959
+size 30276385
gazeta_train.jsonl → default/train/0000.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:678ce0eab9b3026c9f3388c6f8b2e5a48c84590819e175a462cf15749bc0c60e
-size 549801555
+oid sha256:932df194fa24b4bd3cc50ede7f79ebf9c0fb57eff0972dca336fa0e5ea747710
+size 251643354
gazeta_test.jsonl → default/train/0001.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3963ca7e2313c4bb75a4140abd614e17d98199c9f03f03490ab6afb19bfbf6cf
-size 61115756
+oid sha256:50ce4af4957fff7d1e6adf4ff6358a11abee870aece6e0536549f98141e5b697
+size 22741670
default/validation/0000.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c59c0e15fa659bc3e33c150d605e1c8139646fc04e402540ea84568cf80f323e
+size 27825209
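The Parquet files in this diff are stored via Git LFS, so what appears here is not data but three-line pointer files (`version`, `oid`, `size`). A minimal sketch of parsing such a pointer, using the values from the added `default/validation/0000.parquet` above:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file: one 'key value' pair per line."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:c59c0e15fa659bc3e33c150d605e1c8139646fc04e402540ea84568cf80f323e
size 27825209
"""
info = parse_lfs_pointer(pointer)
assert info["version"] == "https://git-lfs.github.com/spec/v1"
assert info["oid"].startswith("sha256:")
assert info["size"] == "27825209"
```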
gazeta.py DELETED
@@ -1,92 +0,0 @@
-# coding=utf-8
-# Copyright 2020 The HuggingFace Datasets Authors and Ilya Gusev
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# Lint as: python3
-"""Gazeta: Dataset for Automatic Summarization of Russian News"""
-
-
-import json
-import os
-
-import datasets
-
-
-_CITATION = """
-@InProceedings{10.1007/978-3-030-59082-6_9,
-author="Gusev, Ilya",
-editor="Filchenkov, Andrey and Kauttonen, Janne and Pivovarova, Lidia",
-title="Dataset for Automatic Summarization of Russian News",
-booktitle="Artificial Intelligence and Natural Language",
-year="2020",
-publisher="Springer International Publishing",
-address="Cham",
-pages="122--134",
-isbn="978-3-030-59082-6"
-}
-"""
-
-_DESCRIPTION = "Dataset for automatic summarization of Russian news"
-_HOMEPAGE = "https://github.com/IlyaGusev/gazeta"
-_URLS = {
-    "train": "gazeta_train.jsonl",
-    "val": "gazeta_val.jsonl",
-    "test": "gazeta_test.jsonl"
-}
-_DOCUMENT = "text"
-_SUMMARY = "summary"
-
-
-class GazetaDataset(datasets.GeneratorBasedBuilder):
-    """Gazeta Dataset"""
-
-    VERSION = datasets.Version("2.0.0")
-
-    BUILDER_CONFIGS = [
-        datasets.BuilderConfig(name="default", version=VERSION, description=""),
-    ]
-
-    DEFAULT_CONFIG_NAME = "default"
-
-    def _info(self):
-        features = datasets.Features(
-            {
-                _DOCUMENT: datasets.Value("string"),
-                _SUMMARY: datasets.Value("string"),
-                "title": datasets.Value("string"),
-                "date": datasets.Value("string"),
-                "url": datasets.Value("string")
-            }
-        )
-        return datasets.DatasetInfo(
-            description=_DESCRIPTION,
-            features=features,
-            supervised_keys=(_DOCUMENT, _SUMMARY),
-            homepage=_HOMEPAGE,
-            citation=_CITATION,
-        )
-
-    def _split_generators(self, dl_manager):
-        downloaded_files = dl_manager.download_and_extract(_URLS)
-        return [
-            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
-            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
-            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["val"]}),
-        ]
-
-    def _generate_examples(self, filepath):
-        with open(filepath, encoding="utf-8") as f:
-            for id_, row in enumerate(f):
-                data = json.loads(row)
-                yield id_, data
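With the data now shipped as Parquet, this loading script is no longer needed; `datasets.load_dataset` can read the Parquet files directly. For reference, the core of the deleted `_generate_examples` (one JSON object per line, yielded with a running integer id) can be sketched in isolation; the sample record below is a hypothetical stand-in for a line of `gazeta_train.jsonl`:

```python
import io
import json

def generate_examples(fileobj):
    # Mirrors the deleted _generate_examples: each line is a JSON object,
    # yielded together with a running integer id.
    for id_, row in enumerate(fileobj):
        yield id_, json.loads(row)

# Hypothetical one-record stand-in for a Gazeta JSONL file.
sample = io.StringIO('{"text": "t", "summary": "s", "title": "T", "date": "d", "url": "u"}\n')
examples = list(generate_examples(sample))
assert examples[0][0] == 0
assert examples[0][1]["summary"] == "s"
```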