---
license: mit
task_categories:
- text-classification
language:
- en
multilinguality:
- monolingual
annotations_creators:
- expert-generated
language_creators:
- found
source_datasets:
- extended
dataset_modality: text
tags:
- gaming
- annotations
- binary
- classification
- labels
- steam
- reviews
- steam-reviews
- BERT
- ROBERTA
- constructiveness
- constructivity
- sentiment-analysis
- nlp
pretty_name: 1.5K Steam Reviews Binary Labeled for Constructiveness
size_categories:
- 1K<n<10K
thumbnail: https://i.ibb.co/Bnj0gw6/abullard1-steam-review-constructiveness-classifier-logo.png
dataset_size: 332 KB
dataset_info:
features:
- name: id
dtype: int32
- name: game
dtype: string
- name: review
dtype: string
- name: author_playtime_at_review
dtype: int32
- name: voted_up
dtype: bool
- name: votes_up
dtype: int32
- name: votes_funny
dtype: int32
- name: constructive
dtype: int32
---
# 1.5K Steam Reviews Binary Labeled for Constructiveness
## Dataset Summary
This dataset contains **1,461 Steam reviews** from **10 of the most-reviewed games**, with roughly the same number of reviews per game. Each review is annotated with a **binary label** indicating whether the review is **constructive** or not. The dataset is designed to support **text classification** tasks, particularly **constructiveness detection** in the gaming domain.
The dataset is particularly useful for training models such as **BERT** and its derivatives, or any other NLP models aimed at classifying text.
## Dataset Structure
The dataset contains the following columns:
- **id**: A unique identifier for each review.
- **game**: The name of the game being reviewed.
- **review**: The text of the Steam review.
- **author_playtime_at_review**: The number of hours the author had played the game at the time of writing the review.
- **voted_up**: Whether the author rated the game positively (True) or negatively (False).
- **votes_up**: The number of upvotes the review received from other users.
- **votes_funny**: The number of "funny" votes the review received from other users.
- **constructive**: A binary label indicating whether the review was constructive (1) or not (0).
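These fields can be accessed via the Hugging Face `datasets` library. Below is a minimal loading sketch; the repository ID used in it is a placeholder assumption, and it assumes the data resolves to a single `train` split.
```python
from datasets import load_dataset

# NOTE: the repository ID below is a placeholder assumption; replace it with
# the actual Hub ID of this dataset.
dataset = load_dataset("abullard1/steam-reviews-constructiveness-1.5k")

# Inspect the columns and a single annotated review
# (assumes the data resolves to a single "train" split).
print(dataset["train"].column_names)
print(dataset["train"][0])
```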
### Example Data
| id | game | review | author_playtime_at_review | voted_up | votes_up | votes_funny | constructive |
|------|---------------------|-------------------------------------------------------------------------|---------------------------|----------|----------|-------------|--------------|
| 1024 | Team Fortress 2 | shoot enemy | 639 | True | 1 | 0 | 0 |
| 652 | Grand Theft Auto V | 6 damn years and it's still rocking like its g... | 145 | True | 0 | 0 | 0 |
| 1244 | Terraria | Great game highly recommend for people who like... | 569 | True | 0 | 0 | 1 |
| 15 | Among Us | So good. Amazing game of teamwork and betrayal... | 5 | True | 0 | 0 | 1 |
| 584 | Garry's Mod | Jbmod is trash!!! | 65 | True | 0 | 0 | 0 |
### Labeling Criteria
- **Constructive (1)**: Reviews that provide helpful feedback, suggestions for improvement, constructive criticism, or detailed insights into the game.
- **Non-constructive (0)**: Reviews that offer no useful feedback or substance, or are vague, off-topic, irrelevant, or trolling.
Constructiveness is, of course, subjective; nevertheless, a RoBERTa model fine-tuned on this dataset reached about 80% accuracy.
### Notes
Please note that the **dataset is unbalanced**: **63.04%** of the reviews are labeled as non-constructive and **36.96%** as constructive. Please take this into account when using the dataset.
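One common way to account for the imbalance is to weight the loss per class. Here is a minimal sketch using scikit-learn's `compute_class_weight`, assuming the dataset was loaded as in the snippet above:
```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Assumes `dataset` was loaded as in the snippet above and has a "train" split.
labels = np.array(dataset["train"]["constructive"])

# "Balanced" weights are inversely proportional to class frequency:
# roughly 0.79 for non-constructive (0) and 1.35 for constructive (1),
# given the 63.04% / 36.96% split reported above.
class_weights = compute_class_weight(
    class_weight="balanced", classes=np.array([0, 1]), y=labels
)
print(dict(zip([0, 1], class_weights.tolist())))
```
The resulting weights can then be passed to a class-weighted loss (for example `torch.nn.CrossEntropyLoss(weight=...)`) when fine-tuning a classifier.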
## License
This dataset is licensed under the **MIT License**, allowing open and flexible use of the dataset for both academic and commercial purposes.