---
license: mit
task_categories:
  - text-classification
language:
  - en
---

# Dataset Card for Book_Stitch

This dataset contains books from Project Gutenberg, tokenized into 1020-token chunks with markers that indicate the section and book unique identifier (UID). These markers serve as both prefix and suffix for the sections, ensuring that the sequential nature of each book is preserved and facilitating later text reconstruction. The book_stitch dataset is designed for training AI models to handle long texts in sections, retaining context for tasks like summarization, text stitching, and document analysis.


## Dataset Details

### Dataset Description

The book_stitch dataset is part of a series designed to teach AI models how to handle large documents, including stitching and unstitching sections of text. Each book is tokenized into fixed-size chunks of 1020 tokens, with markers appended to indicate the beginning and end of each section. This dataset works in conjunction with the context_stitch and train_stitch datasets to allow models to maintain long-range context across different sections of a document, enabling comprehensive text analysis and reassembly.

- **Curated by:** Robert McNarland, McNarland Software Consultation Inc.
- **Funded by:** None
- **Shared by:** None
- **Language(s) (NLP):** English (books from Project Gutenberg)
- **License:** MIT

### Dataset Sources

- **Repository:** R3troR0b/book_stitch
- **Paper:** [More Information Needed]
- **Demo:** [More Information Needed]

## Uses

### Direct Use

- **Document Classification and Reconstruction:** The book_stitch dataset is key for teaching models to classify books and reconstruct them from tokenized chunks. The stitched-together sections allow for complex tasks like document-level classification and content retrieval.

- **Text Stitching and Unstitching:** Models can learn to stitch and unstitch text segments, with markers indicating where sections begin and end (a marker-parsing sketch follows this list), supporting tasks like reassembling fragmented documents or summarizing long-form text.

- **Long-Document Modeling:** The dataset helps train models to process long texts efficiently, maintaining contextual understanding across multiple sections by using section markers and UIDs.

- **Contextual Inference:** By identifying relationships between text sections, models can better infer meaning and connections in lengthy documents, supporting tasks such as question answering, summarization, and complex search.
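
Most of these uses start by recovering the section number and book UID from the stitch markers. The following is a minimal parsing sketch; the regular expression and helper name are illustrative and based only on the marker format shown in this card.

```python
import re

# Matches markers of the form [/SEC:<section>;B-<book_uid>], as shown in the
# examples on this card; adjust if the published marker format differs.
MARKER_RE = re.compile(r"\[/SEC:(\d+);B-(\d+)\]")

def parse_markers(label: str):
    """Return the (section_number, book_uid) pairs found in a label string."""
    return [(int(sec), int(uid)) for sec, uid in MARKER_RE.findall(label)]

# Example using the record shown under "Dataset Structure":
print(parse_markers("[/SEC:1;B-5] [/SEC:2;B-5]"))  # [(1, 5), (2, 5)]
```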

### Out-of-Scope Use

The dataset is not intended for use cases unrelated to document classification, text reassembly, or long-range context handling. Because it contains only English text, it is not suitable for non-English applications.

## Dataset Structure

Each entry in the dataset consists of:

- **Label:** Both the prefix and suffix stitch markers, indicating the section number and the book's UID (e.g., `[/SEC:1;B-5]`).
- **Text:** A 1020-token chunk of the book's content.

Example:

```json
{
  "label": "[/SEC:1;B-5] [/SEC:2;B-5]",
  "text": "CHAPTER I: START OF THE PROJECT GUTENBERG EBOOK..."
}
```
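
The dataset can be loaded with the Hugging Face `datasets` library. A minimal sketch, assuming the default configuration, a `train` split, and the `label` and `text` fields shown above:

```python
from datasets import load_dataset

# Load the dataset from the Hub; the split name is an assumption and may need
# to be adjusted to match the published configuration.
ds = load_dataset("R3troR0b/book_stitch", split="train")

record = ds[0]
print(record["label"])       # e.g. "[/SEC:1;B-5] [/SEC:2;B-5]"
print(record["text"][:200])  # first characters of the 1020-token chunk
```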

## Dataset Creation

### Curation Rationale

The book_stitch dataset was created to help AI models understand the structure of long-form text. By breaking books into consistent tokenized chunks with markers, models can be trained to stitch and unstitch sections, allowing for sophisticated text handling.

### Source Data

#### Data Collection and Processing

The dataset was generated using a custom-built tokenizer that splits Project Gutenberg books into fixed 1020-token chunks, adding section numbers and book UIDs as markers. The final chunk of each book may contain fewer than 1020 tokens. The table of contents and appendices are treated as part of the text.
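
The custom tokenizer itself is not published here, so the sketch below only illustrates the chunking scheme described above. Tokens are approximated by whitespace splitting, and the handling of the final chunk's suffix marker is an assumption.

```python
CHUNK_SIZE = 1020  # tokens per section, as described above

def chunk_book(text: str, book_uid: int):
    """Split a book into fixed-size chunks and attach prefix/suffix stitch markers.

    Whitespace splitting stands in for the custom tokenizer, so real chunk
    boundaries will differ from those in the published dataset.
    """
    tokens = text.split()
    entries = []
    for sec, start in enumerate(range(0, len(tokens), CHUNK_SIZE), start=1):
        chunk = " ".join(tokens[start:start + CHUNK_SIZE])
        prefix = f"[/SEC:{sec};B-{book_uid}]"
        suffix = f"[/SEC:{sec + 1};B-{book_uid}]"  # assumed: suffix names the next section
        entries.append({"label": f"{prefix} {suffix}", "text": chunk})
    return entries
```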

#### Who are the source data producers?

Titles in the book_stitch dataset are from Project Gutenberg's English collection.

### Annotations

#### Annotation process

This dataset does not include any additional annotations beyond the section markers and UIDs.

#### Who are the annotators?

There are no annotators for this dataset, as it relies solely on automated tokenization and marking.

#### Personal and Sensitive Information

This dataset does not contain personal, sensitive, or private information. All books are sourced from Project Gutenberg, a collection of public domain works.

## Bias, Risks, and Limitations

### Recommendations

Users should be aware that:

- The dataset is limited to English books from Project Gutenberg and may not generalize well to non-English or non-literary domains.
- Since these books are public domain, the texts may reflect biases inherent in historical works.

## Citation

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

## Glossary

- **Book UID (BUID):** A unique identifier assigned to each book.
- **Stitch Markers:** Markers added as a prefix and suffix to each text section to indicate the section number and the book's UID (e.g., `[/SEC:1;B-5]`).
- **Contextual Stitching:** The process of stitching together sections of text while maintaining continuity.
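
As an illustration of contextual stitching, chunks sharing a book UID can be reassembled by sorting on the section number taken from each chunk's first marker. This is a rough sketch under the label format documented above, assuming well-formed labels and the `label`/`text` field names shown earlier.

```python
import re
from collections import defaultdict

MARKER_RE = re.compile(r"\[/SEC:(\d+);B-(\d+)\]")

def stitch_books(records):
    """Group chunks by book UID and rejoin their text in section order.

    `records` is an iterable of dicts with "label" and "text" fields; the first
    marker in each label is taken as that chunk's own section number.
    """
    books = defaultdict(list)
    for rec in records:
        sec, uid = (int(x) for x in MARKER_RE.findall(rec["label"])[0])
        books[uid].append((sec, rec["text"]))
    return {uid: " ".join(text for _, text in sorted(chunks))
            for uid, chunks in books.items()}
```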

## More Information

[More Information Needed]

## Dataset Card Authors

Robert McNarland, McNarland Software Consultation Inc.

## Dataset Card Contact

[email protected]