
Cyberbullying Dataset

Overview

This dataset combines five public datasets (tdavidson, OLID, Stormfront, Gab Hate Corpus, and HateXplain) to create a comprehensive resource for training and evaluating binary text classification models that detect cyberbullying. It contains ~38,000 balanced text samples labeled as "bully" (hate speech or offensive language) or "normal" (non-offensive), sourced from Twitter, Gab, and Stormfront forums.

Dataset Structure

  • Splits:
    • Train: 30k samples (80%)
    • Validation: 4k samples (10%)
    • Test: 4k samples (10%)
  • Columns:
    • cleaned_text: Preprocessed text (lowercase, mentions/URLs/newlines removed, basic punctuation kept, numbers/emojis dropped, max 50 words).
    • label: Binary label ("bully" or "normal").
  • Class Balance: Equal number of "bully" and "normal" samples in each split.

Preprocessing

  • Combined from tdavidson, OLID, Stormfront, Gab Hate Corpus, and HateXplain.
  • Unified labels: "hate"/"offensive" mapped to "bully", "no_hate"/"normal" to "normal".
  • Applied consistent cleaning: removed mentions, URLs, newlines; converted to lowercase; kept basic punctuation; capped at 50 words.
  • Removed duplicate samples and balanced the two classes in every split.
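The cleaning steps above can be approximated with a short function. This is a sketch of the described pipeline, not the exact script used to build the dataset; the regular expressions and the order of operations are assumptions.

```python
import re

def clean_text(text, max_words=50):
    """Approximate the dataset's cleaning pipeline (a sketch, not the
    original preprocessing script)."""
    text = text.lower()                        # lowercase
    text = re.sub(r"@\w+", "", text)           # drop @mentions
    text = re.sub(r"https?://\S+", "", text)   # drop URLs
    text = text.replace("\n", " ")             # drop newlines
    text = re.sub(r"\d+", "", text)            # drop numbers
    text = re.sub(r"[^a-z\s.,!?']", "", text)  # keep letters + basic punctuation
    words = text.split()                       # collapse whitespace
    return " ".join(words[:max_words])         # cap at 50 words
```

For example, `clean_text("Hello @user check https://x.com now!")` returns `"hello check now!"`.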

Usage

Ideal for fine-tuning LLMs for binary text classification (e.g., detecting cyberbullying). Example prompt format:

Classify this text: {cleaned_text}
Response: {label}
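A record can be turned into the prompt format above with a small helper (the function name is illustrative, not part of the dataset):

```python
def make_prompt(cleaned_text, label):
    """Render one dataset row in the prompt format shown above."""
    return f"Classify this text: {cleaned_text}\nResponse: {label}"
```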

Load with Hugging Face datasets:

from datasets import load_dataset

dataset = load_dataset("cike-dev/cyberbullying_dataset")
print(dataset)  # shows the train/validation/test splits and their columns

Sources and Citations

This dataset aggregates the following sources:

  • tdavidson: Davidson, T., Warmsley, D., Macy, M., & Weber, I. (2017). Automated hate speech detection and the problem of offensive language. In Proceedings of the 11th International AAAI Conference on Web and Social Media (ICWSM ’17) (pp. 512–515). Montreal, Canada.
  • OLID: Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N., & Kumar, R. (2019). Predicting the type and target of offensive posts in social media. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL).
  • Stormfront: de Gibert, O., Perez, N., García-Pablos, A., & Cuadros, M. (2018, October). Hate speech dataset from a white supremacy forum. In Proceedings of the 2nd Workshop on Abusive Language Online (ALW2) (pp. 11–20). Association for Computational Linguistics. https://doi.org/10.18653/v1/W18-5102
  • Gab Hate Corpus: Kennedy, B., Atari, M., Davani, A. M., Yeh, L., Omrani, A., Kim, Y., Coombs, K., Portillo-Wightman, G., Havaldar, S., Gonzalez, E., et al. (2022, April). The Gab Hate Corpus. OSF. https://doi.org/10.17605/OSF.IO/EDUA3
  • HateXplain: Mathew, B., Saha, P., Yimam, S. M., Biemann, C., Goyal, P., & Mukherjee, A. (2021). HateXplain: A benchmark dataset for explainable hate speech detection. Proceedings of the AAAI Conference on Artificial Intelligence, 35(17), 14867–14875.

License

The dataset is released under CC-BY 4.0, respecting the licenses of the original datasets. Please cite the sources above when using this dataset.

Contact

For issues or questions, open an issue on the Hugging Face repository or contact the maintainer.
