EgehanEralp committed
Commit 7eec689 · verified · 1 Parent(s): 93af096

Update README.md

Files changed (1):
  1. README.md +13 -1
README.md CHANGED
@@ -12,4 +12,16 @@ task_categories:
 
 ---
 
- For this manipulation, we followed these steps: (1) We fine-tuned the RoBERTa model (roberta-base) \cite{roberta} on the raw IMDb dataset, achieving a classification accuracy of 94.6% for sentiment analysis. We published this fine-tuned sentiment classifier on HuggingFace under EgehanEralp/roberta-base-imdb-ft for public access. (2) We split the sentences in each train and test sample of the IMDb dataset's input reviews. (3) For each sentence in every review, we queried our fine-tuned RoBERTa sentiment classifier model to obtain sentiment predictions. (4) For positive reviews, we selected the sentences with the highest confidence positive label predictions by the model, retaining these sentences within the original multi-sentence reviews and deleting all other sentences. (5) For negative reviews, we selected the sentences with the highest confidence negative label predictions by the model, retaining these sentences within the original multi-sentence reviews and deleting all other sentences.
+ We condensed the multi-sentence input reviews of the original IMDb dataset (stanfordnlp/imdb) into a single-sentence format while preserving as much of their sentiment information as possible. This transformation aims to make benchmark studies more compatible with datasets containing single-sentence inputs (SST-2, HateSpeech, Tweet-Emotion, etc.).
+
+ Tasks we performed to obtain this dataset:
+ - (1) We fine-tuned the RoBERTa model (roberta-base) \cite{roberta} on the raw IMDb dataset, achieving a classification accuracy of 94.6% for sentiment analysis. We published this fine-tuned sentiment classifier on HuggingFace under EgehanEralp/roberta-base-imdb-ft for public access.
+ - (2) We split each train and test review of the IMDb dataset into individual sentences.
+ - (3) We queried our fine-tuned RoBERTa sentiment classifier on every sentence of every review to obtain per-sentence sentiment predictions.
+ - (4) For positive reviews, we retained only the sentence that received the highest-confidence positive prediction from the model and deleted all other sentences.
+ - (5) For negative reviews, we retained only the sentence that received the highest-confidence negative prediction from the model and deleted all other sentences.
+
+ The result is a single-sentence IMDb dataset in which each multi-sentence review is represented by the sentence that expresses its sentiment most strongly. This strategy let us keep the sentences that best represent the sentiment of the original reviews.
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6412b8e26e51a8e21887fdfe/Lo-qQFB08lahBQ5yJ7-rn.png)
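
Steps (2)–(5) of the pipeline described in the diff above can be sketched roughly in Python as follows. This is a minimal illustration, not the authors' actual processing code: the regex sentence splitter is an assumption (the commit does not say which splitter was used), and the prediction dicts mimic the `{"label": ..., "score": ...}` output format of the `transformers` text-classification pipeline (actual label names depend on the model's config).

```python
import re

def split_sentences(review: str) -> list[str]:
    # Naive regex splitter on sentence-final punctuation — an assumption;
    # the dataset authors may have used a different sentence tokenizer.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", review) if s.strip()]

def select_top_sentence(sentences, predictions, target_label):
    # Keep only the sentence whose predicted label matches the review's
    # gold label with the highest confidence; all others are dropped.
    candidates = [
        (pred["score"], sent)
        for sent, pred in zip(sentences, predictions)
        if pred["label"] == target_label
    ]
    # max() compares by score first; returns None if no sentence matched.
    return max(candidates)[1] if candidates else None
```

With the published classifier, the per-sentence predictions could be obtained via `transformers.pipeline("text-classification", model="EgehanEralp/roberta-base-imdb-ft")` applied to each sentence, then fed to `select_top_sentence` together with the review's gold label.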