  - split: test
    path: queries/test-*
---
# ConTEB - NarrativeQA

This dataset is part of *ConTEB* (Context-aware Text Embedding Benchmark), designed for evaluating the capabilities of contextual embedding models. It stems from the widely used [NarrativeQA](https://huggingface.co/datasets/deepmind/narrativeqa) dataset.

## Dataset Summary

NarrativeQA (literature) consists of long documents, each associated with an existing set of question-answer pairs. To build the corpus, we start from the pre-existing collection of documents, extract the text, and chunk it (using [LangChain](https://github.com/langchain-ai/langchain)'s `RecursiveCharacterTextSplitter` with a threshold of 1000 characters). Since chunking is done a posteriori, without considering the questions, chunks are not always self-contained, and eliciting document-wide context can help build meaningful representations. We use GPT-4o to annotate which chunk within the gold document best contains the information needed to answer the query.

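The core idea of recursive character splitting can be sketched in plain Python (a simplified illustration, not LangChain's actual implementation; the separator order and the 1000-character limit mirror the description above, and the real splitter's merging of small adjacent pieces back up to the size limit is omitted):

```python
def recursive_split(text, max_chars=1000, separators=("\n\n", "\n", " ", "")):
    """Sketch of recursive character splitting: try the coarsest separator
    first, then fall back to finer ones for pieces that are still too long.
    Separators are dropped from the output, unlike in the real splitter."""
    if len(text) <= max_chars:
        return [text] if text else []
    sep, *rest = separators
    if sep == "":
        # Last resort: hard cut at the character limit.
        return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]
    chunks = []
    for piece in text.split(sep):
        if len(piece) <= max_chars:
            chunks.append(piece)
        else:
            chunks.extend(recursive_split(piece, max_chars, tuple(rest)))
    return [c for c in chunks if c]
```

A short paragraph stays whole, while an oversized run of text falls through to progressively finer separators until every chunk fits the limit.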
This dataset provides a focused benchmark for contextualized embeddings. It includes a set of original documents, the chunks stemming from them, and queries.

* **Number of Documents:** 355
* **Number of Chunks:** 1750
* **Number of Queries:** 8575
* **Average Number of Tokens per Chunk:** 151.9

## Dataset Structure (Hugging Face Datasets)

The dataset is structured into the following columns:

* **`documents`**: Contains chunk information:
  * `"chunk_id"`: The ID of the chunk, of the form `doc-id_chunk-id`, where `doc-id` is the ID of the original document and `chunk-id` is the position of the chunk within that document.
  * `"chunk"`: The text of the chunk.
* **`queries`**: Contains query information:
  * `"query"`: The text of the query.
  * `"answer"`: The answer relevant to the query, from the original dataset.
  * `"chunk_id"`: The ID of the chunk that the query relates to, of the form `doc-id_chunk-id`, where `doc-id` is the ID of the original document and `chunk-id` is the position of the chunk within that document.

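As a small illustration of the ID scheme above, a helper (hypothetical, not part of the dataset tooling) can split a `chunk_id` back into its document ID and chunk position. It assumes the position is the final underscore-separated field, so document IDs that themselves contain underscores survive intact:

```python
from collections import defaultdict

def parse_chunk_id(chunk_id: str) -> tuple[str, int]:
    """Split a `doc-id_chunk-id` identifier into (document id, chunk position).

    Assumes the chunk position is everything after the LAST underscore, so
    document ids containing underscores are preserved whole."""
    doc_id, _, position = chunk_id.rpartition("_")
    return doc_id, int(position)

def group_by_document(chunk_ids):
    """Map each document id to its chunk positions, sorted by position."""
    docs = defaultdict(list)
    for cid in chunk_ids:
        doc, pos = parse_chunk_id(cid)
        docs[doc].append(pos)
    return {doc: sorted(positions) for doc, positions in docs.items()}
```

Grouping by document in this way is handy for reassembling a document's chunks in reading order before computing document-wide context.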
## Usage

Use the `train` split for training, and the `test` split for evaluation.
We will upload a Quickstart evaluation snippet soon.

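Until the Quickstart snippet lands, a minimal sketch of the intended evaluation loop, scoring whether each query's most similar chunk is its annotated gold chunk, might look as follows. The toy vectors and the `top1_accuracy` helper are illustrative assumptions; in practice you would embed the `queries` and `documents` columns with the model under evaluation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def top1_accuracy(query_embs, query_gold_ids, chunk_embs, chunk_ids):
    """Fraction of queries whose most similar chunk is the annotated gold chunk."""
    hits = 0
    for q_emb, gold in zip(query_embs, query_gold_ids):
        best = max(range(len(chunk_embs)), key=lambda i: cosine(q_emb, chunk_embs[i]))
        hits += chunk_ids[best] == gold
    return hits / len(query_embs)

# Toy example: two chunks, two queries whose embeddings point at their gold chunk.
chunk_ids = ["doc0_0", "doc0_1"]
chunk_embs = [[1.0, 0.0], [0.0, 1.0]]
acc = top1_accuracy([[0.9, 0.1], [0.2, 0.8]], ["doc0_0", "doc0_1"],
                    chunk_embs, chunk_ids)
```

Swapping in real embeddings and the full `test` queries turns this into a basic retrieval benchmark over the chunk corpus.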
## Citation

We will add the corresponding citation soon.

## Acknowledgments

This work is partially supported by [ILLUIN Technology](https://www.illuin.tech/), and by a grant from ANRT France.

## Copyright

All rights are reserved to the original authors of the documents.