abdulrub committed
Commit 28e5440 · verified · 1 Parent(s): 5263183

Update README.md

Files changed (1): README.md +29 -0
README.md CHANGED
@@ -27,3 +27,32 @@ configs:
  - split: test
    path: data/test-*
---
+ ## Dataset Description
+
+ This dataset is designed for fine-tuning language models, particularly the [Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) model, for hate speech detection in social media text (tweets). It covers both **implicit** and **explicit** forms of hate speech, with the aim of improving the performance of smaller language models on this challenging task.
+
+ The dataset combines two existing datasets:
+
+ * **Hate speech examples:** sourced from the [SALT-NLP/ImplicitHate](https://huggingface.co/datasets/SALT-NLP/ImplicitHate) dataset, which contains tweets annotated as implicit hate speech, categorized into types such as grievance, incitement, inferiority, irony, stereotyping, and threatening.
+ * **Non-hate speech examples:** sourced from the [TweetEval](https://huggingface.co/datasets/tweet_eval) dataset, specifically its `hate` configuration, which provides tweets labeled as non-hate.
+
+ Combining these two sources yields a dataset suitable for binary classification of tweets into "hate speech" and "not hate speech" categories.
+
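+ A minimal sketch of how such a combination can be built with the `datasets` library is shown below. It is illustrative rather than the exact construction script: the ImplicitHate column names (`post`, `class`) are assumptions based on the original corpus release and may differ on the Hub, so check the source dataset cards first.
+
+ ```python
+ from datasets import Dataset, concatenate_datasets, load_dataset
+
+ # Non-hate examples: TweetEval's `hate` config labels tweets 0 = non-hate, 1 = hate.
+ tweet_eval = load_dataset("tweet_eval", "hate", split="train")
+ non_hate = tweet_eval.filter(lambda ex: ex["label"] == 0)
+
+ # Hate examples: keep every ImplicitHate post annotated as hateful.
+ # ("post" and "class" are assumed column names -- verify against the card.)
+ implicit = load_dataset("SALT-NLP/ImplicitHate", split="train")
+ hate = implicit.filter(lambda ex: ex["class"] != "not_hate")
+
+ # Normalize both sources to the two fields used in this dataset: `text` and `label`.
+ combined = concatenate_datasets([
+     Dataset.from_dict({"text": hate["post"], "label": [1] * len(hate)}),
+     Dataset.from_dict({"text": non_hate["text"], "label": [0] * len(non_hate)}),
+ ]).shuffle(seed=42)
+ ```
+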
+ ## Dataset Splits
+
+ The dataset is divided into the following splits:
+
+ * **`train`**: 2,500 examples for training the model.
+ * **`validation`**: 150 examples for evaluating and tuning the model during training.
+ * **`test`**: 100 examples for the final evaluation of the trained model's performance.
+
+ The splits are kept relatively balanced in class distribution (hate vs. not hate) to ensure fair evaluation.
+
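+ A quick way to confirm the split sizes and label balance (the repo id below is a placeholder; substitute this dataset's actual path on the Hub):
+
+ ```python
+ from collections import Counter
+
+ from datasets import load_dataset
+
+ # Placeholder repo id -- replace with this dataset's actual Hub path.
+ ds = load_dataset("your-username/hate-speech-tweets")
+
+ for split in ("train", "validation", "test"):
+     print(split, len(ds[split]), dict(Counter(ds[split]["label"])))
+ # Expected sizes: train=2500, validation=150, test=100.
+ ```
+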
+ ## Dataset Fields
+
+ Each example in the dataset consists of the following fields:
+
+ * **`text`**: (`string`) The text content of the tweet.
+ * **`label`**: (`int`) The label for the tweet, with the following mapping:
+   * `0`: Not Hate Speech
+   * `1`: Hate Speech
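+
+ For fine-tuning a chat model such as Qwen2.5-1.5B-Instruct, each example can be rendered into a chat-style message pair. The sketch below shows one possible formatting; the prompt wording and the repo id are illustrative, not part of the dataset or an official training recipe.
+
+ ```python
+ from datasets import load_dataset
+
+ # Placeholder repo id -- replace with this dataset's actual Hub path.
+ ds = load_dataset("your-username/hate-speech-tweets")
+
+ LABELS = {0: "Not Hate Speech", 1: "Hate Speech"}
+
+ def to_messages(example):
+     """Turn a (text, label) pair into a chat-style fine-tuning example."""
+     return {
+         "messages": [
+             {"role": "user",
+              "content": "Classify the following tweet as 'Hate Speech' or "
+                         f"'Not Hate Speech':\n\n{example['text']}"},
+             {"role": "assistant", "content": LABELS[example["label"]]},
+         ]
+     }
+
+ train_chat = ds["train"].map(to_messages, remove_columns=["text", "label"])
+ ```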