---
language:
  - en
license: cc-by-sa-3.0
tags:
  - treecorpus
  - wikipedia
  - encyclopedia
  - knowledge-base
  - factual-knowledge
  - training-data
  - conversational-ai
  - nlp
  - language-model
  - text-corpus
  - qa-dataset
  - structured-data
  - large-scale
pretty_name: 'TreeCorpus: Wikipedia Knowledge for AI Models'
size_categories:
  - 1M<n<10M
---

# TreeCorpus

TreeCorpus is a comprehensive, structured dataset derived from the latest Wikipedia dumps, specially processed to serve as high-quality training data for conversational AI models. This dataset transforms Wikipedia's encyclopedic knowledge into a format optimized for natural language understanding and generation tasks.

## Dataset Statistics

- **Size:** 26.27 GB (26,272,580,250 bytes)
- **Examples:** 2,882,766 articles
- **Download size:** 13.33 GB (13,326,529,312 bytes)
- **Language:** English

## Data Structure

Each entry in the dataset contains the following fields (see the loading example after this list):

- `id` (string): Unique Wikipedia article identifier
- `title` (string): Article title
- `text` (string): Clean, processed text content
- `url` (string): Source Wikipedia URL
- `timestamp` (string): Processing timestamp
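
A record can be inspected with the Hugging Face `datasets` library. This is a minimal sketch; the repository id `akkiisfrommars/TreeCorpus` is an assumption based on this card and may need adjusting:

```python
from datasets import load_dataset

# NOTE: repository id assumed from this card; adjust if the dataset
# is hosted under a different namespace.
ds = load_dataset("akkiisfrommars/TreeCorpus", split="train")

record = ds[0]
for field in ("id", "title", "url", "timestamp"):
    print(f"{field}: {record[field]}")
print(record["text"][:200])  # first 200 characters of the article body
```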

## Key Features

- **Clean, Structured Content:** Meticulously processed to remove markup, templates, references, and other non-content elements while preserving the informational value of Wikipedia articles.
- **Rich Metadata:** Each entry includes article ID, title, clean text content, source URL, and timestamp.
- **Comprehensive Coverage:** Incorporates the full breadth of Wikipedia's knowledge base, spanning nearly 2.9 million articles across a wide range of topics.
- **Conversational Optimization:** Content is processed specifically to support training of dialogue systems, conversational agents, and knowledge-grounded language models.
- **Regular Updates:** Built from the latest Wikipedia dumps to ensure current information.

## Usage

This dataset is ideal for:

- Training large language models that require broad knowledge bases (see the streaming sketch after this list)
- Fine-tuning conversational agents for knowledge-intensive tasks
- Question-answering systems that need factual grounding
- Research in knowledge representation and retrieval in natural language
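
For large-scale training pipelines, the corpus can also be streamed so the full ~13 GB download is not materialized up front. A sketch, using the same assumed repository id as above:

```python
from itertools import islice

from datasets import load_dataset

# streaming=True iterates over remote shards instead of
# downloading the whole dataset first.
stream = load_dataset("akkiisfrommars/TreeCorpus", split="train", streaming=True)

for article in islice(stream, 3):  # peek at the first three articles
    print(article["title"])
```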

## License and Citation

TreeCorpus is derived from Wikipedia content, which is available under the CC BY-SA 3.0 license, so the dataset is distributed under the same share-alike terms. When using this dataset, please provide appropriate attribution to both this dataset and Wikipedia.

## Dataset Configuration

The dataset is configured with a single default split (see the loading sketch after this list):

- Split name: `train`
- Data files pattern: `data/train-*`
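
If the repository has been cloned locally, the split can be loaded straight from the shard files. This sketch assumes the `data/train-*` shards are Parquet files, as the naming pattern suggests:

```python
from datasets import load_dataset

# ASSUMPTION: shards under data/ are Parquet; change the builder
# name if the repository uses a different file format.
ds = load_dataset("parquet", data_files={"train": "data/train-*"}, split="train")
print(ds.num_rows)
```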

## Creation Process

TreeCorpus was created using a specialized pipeline that:

  1. Downloads the latest Wikipedia dumps
  2. Processes XML content to extract articles
3. Cleans and standardizes text by removing markup, templates, and non-content elements (see the sketch after this list)
  4. Structures data in a consistent, machine-readable format
  5. Filters out redirects, stubs, and non-article content

For more details on the methodology and processing pipeline, please see the accompanying code documentation.