Datasets:
Tasks: Text Classification
Modalities: Text
Formats: csv
Languages: English
Size: 10K - 100K
We distilled the multi-sentence input reviews of the original IMDb dataset (stanfordnlp/imdb) into a single-sentence format while retaining as much of their sentiment information as possible. This transformation makes benchmark studies more compatible with datasets containing single-sentence inputs (SST-2, HateSpeech, Tweet-Emotion, etc.).

Tasks we performed to obtain this dataset:

1. We fine-tuned the RoBERTa model (roberta-base) on the raw IMDb dataset, achieving a classification accuracy of 94.6% for sentiment analysis. We published this fine-tuned sentiment classifier on HuggingFace under EgehanEralp/roberta-base-imdb-ft for public access.
2. We split each train and test review of the IMDb dataset into individual sentences.
3. We queried our fine-tuned RoBERTa sentiment classifier on every sentence of every review to obtain per-sentence sentiment predictions.
4. For positive reviews, we selected the sentences with the highest-confidence positive-label predictions, retaining them within the original multi-sentence reviews and deleting all other sentences.
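The selection step above can be sketched as follows. This is a minimal illustration, not the authors' exact implementation: the sentence splitter is a naive regex (the original work may have used a proper tokenizer), the helper names are ours, and the confidence scores below are made up. In practice each sentence would be scored with the published classifier, e.g. `pipeline("sentiment-analysis", model="EgehanEralp/roberta-base-imdb-ft")` from the `transformers` library.

```python
import re

def split_sentences(review: str) -> list[str]:
    # Naive splitter on sentence-ending punctuation followed by whitespace;
    # a real pipeline would likely use a dedicated sentence tokenizer.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", review) if s.strip()]

def select_top_positive(sentences: list[str], positive_scores: list[float]) -> str:
    # Step (4): keep only the sentence with the highest positive-class
    # confidence, discarding the rest of the review.
    best = max(range(len(sentences)), key=lambda i: positive_scores[i])
    return sentences[best]

review = ("The plot was slow at times. "
          "Still, the acting was absolutely brilliant! "
          "I left the theater smiling.")
sentences = split_sentences(review)

# Illustrative per-sentence positive-class scores; in the real pipeline these
# come from the fine-tuned RoBERTa classifier (hypothetical values here).
scores = [0.20, 0.97, 0.91]
print(select_top_positive(sentences, scores))
# → Still, the acting was absolutely brilliant!
```

The same selection would apply symmetrically to negative reviews using negative-class confidences.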