---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 74636783061
    num_examples: 13847707
  download_size: 41512074295
  dataset_size: 74636783061
---
# Wikipedia and OSCAR Turkish Dataset
Welcome to the "Wikipedia and OSCAR Turkish" Hugging Face repo!

This repo contains a Turkish-language dataset created by merging Turkish Wikipedia with the OSCAR cleaned Common Crawl corpus. The dataset contains over 13 million examples with a single feature, `text`.

This dataset can be useful for natural language processing tasks in Turkish, such as pretraining or fine-tuning language models.
To download the dataset, you can use the Hugging Face Datasets library. Here's some sample code to get started:

```python
from datasets import load_dataset

dataset = load_dataset("musabg/wikipedia-oscar-tr")
```
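Since the full download is about 41.5 GB, the Datasets library's streaming mode may be more practical for quick exploration: it fetches examples lazily instead of downloading everything up front. Below is a minimal sketch; the `take_first` helper is illustrative (not part of the library) and works on any iterable, including a streamed dataset.

```python
from itertools import islice

def take_first(examples, n):
    """Collect the first n examples from any (possibly streamed) iterable."""
    return list(islice(examples, n))

# Hedged usage sketch (requires the `datasets` package and network access):
# from datasets import load_dataset
# dataset = load_dataset("musabg/wikipedia-oscar-tr", split="train", streaming=True)
# batch = take_first(dataset, 5)  # each item is a dict with a "text" field
```

This lets you inspect a handful of examples in seconds rather than waiting for the full corpus to download.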
Have fun exploring this dataset and training language models on it!