Update README.md
README.md CHANGED
@@ -14,6 +14,18 @@ language_creators:
 source_datasets:
 - extended
 dataset_modality: text
+viewer: true
+configs:
+- config_name: main_data
+  data_files: "steam_reviews_constructiveness_1.5k.csv"
+- config_name: additional_data
+  data_files:
+  - split: train
+    path: "train-dev-test_split_csvs/train.csv"
+  - split: validation
+    path: "train-dev-test_split_csvs/dev.csv"
+  - split: test
+    path: "train-dev-test_split_csvs/test.csv"
 tags:
 - gaming
 - annotations
@@ -72,7 +84,7 @@ dataset_info:
 ## <u>Dataset Summary</u>
 
 This dataset contains **1,461 Steam reviews** from **10 of the most reviewed games**. Each game has about the same number of reviews. Each review is annotated with a **binary label** indicating whether the review is **constructive** or not. The dataset is designed to support tasks related to **text classification**, particularly **constructiveness detection** tasks in the gaming domain.
-
+Also available as additional data are **train/dev/test split** CSVs. These contain the features of the base dataset, concatenated into strings, alongside the binary constructiveness labels. These CSVs were used to train the [albert-v2-steam-review-constructiveness-classifier](https://huggingface.co/abullard1/albert-v2-steam-review-constructiveness-classifier) model.
 The dataset is particularly useful for training models like **BERT** and its derivatives, or any other NLP models aimed at classifying text.
 
 ## <u>Dataset Structure</u>
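
With the `configs` block above merged, both views of the data can be pulled through the Hugging Face `datasets` library. The sketch below is minimal and uses `abullard1/steam-reviews-constructiveness` as a placeholder repository id, since the diff does not show the dataset's actual repo id.

```python
# Minimal sketch of loading the two configs defined in the README frontmatter.
# NOTE: the repo id below is a placeholder; substitute the real dataset repository id.
from datasets import load_dataset

# "main_data" points at the single CSV with all 1,461 annotated reviews.
main = load_dataset("abullard1/steam-reviews-constructiveness", "main_data")

# "additional_data" exposes the pre-made train/validation/test split CSVs.
splits = load_dataset("abullard1/steam-reviews-constructiveness", "additional_data")

print(main)                # DatasetDict with a default "train" split from the single CSV
print(splits["train"][0])  # first row of the train split: concatenated features + label
```

Keeping `main_data` and `additional_data` as separate configs lets the viewer show the raw annotated rows while the pre-made splits stay available as-is for reproducible training.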
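
The summary also links the ALBERT classifier trained on the additional-data splits. As a hedged sketch, such a model could be queried through the standard `transformers` text-classification pipeline; the example review string and its formatting are illustrative assumptions, not taken from the model card.

```python
# Hedged sketch: scoring a Steam review with the classifier linked in the summary.
# The example input text is an assumption; the model card describes the exact
# expected format (base-dataset features concatenated into one string).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="abullard1/albert-v2-steam-review-constructiveness-classifier",
)

review = "Great gunplay, but matchmaking takes forever and the game crashes on exit."
print(classifier(review))  # e.g. [{'label': '...', 'score': 0.97}]
```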
|