sumuks committed · verified · Commit 3c7da8a · Parent: dd9d380

Update README.md

Files changed (1): README.md (+171, -1)

# Tempora
![Tempora Logo](assets/tempora_logo.jpg)

> A contemporary dataset of 7,368 real-world documents published **after March 1, 2025**, curated for testing the temporal grounding of Large Language Models.

---

## Table of Contents
1. [Dataset Overview](#dataset-overview)
2. [Why a Contemporary Dataset?](#why-a-contemporary-dataset)
3. [Scope & Diversity](#scope--diversity)
4. [Evaluating Parametric vs. Contextual Knowledge](#evaluating-parametric-vs-contextual-knowledge)
5. [Methodological Longevity](#methodological-longevity)
6. [Dataset Structure](#dataset-structure)
   - [Available Configurations](#available-configurations)
   - [Data Fields](#data-fields)
   - [Splits and Statistics](#splits-and-statistics)
7. [Usage](#usage)
   - [Loading with `datasets`](#loading-with-datasets)
   - [Dataset Example](#dataset-example)
8. [Licensing](#licensing)
9. [Citation](#citation)
10. [Acknowledgments](#acknowledgments)

---

## Dataset Overview

Recent advances in large language models (LLMs) have exposed a critical gap in testing temporal and factual grounding: models are often pretrained on massive (and sometimes outdated) corpora, making it difficult to discern whether they rely on newly provided textual evidence or fall back on stale, memorized facts. **Tempora-0325** addresses this challenge with a set of **7,368 documents** published after **March 1, 2025**, ensuring that the vast majority of pretrained models have not seen this data during training.

<p align="center">
  <img src="assets/content_lengths.png" alt="Distribution of Character Lengths in Tempora-0325" width="60%"><br>
  <em>Figure: Distribution of character lengths within Tempora-0325</em>
</p>

---

## Why a Contemporary Dataset?

When LLMs are prompted with documents containing up-to-date facts, regulations, or events, it becomes crucial to separate genuine, context-grounded outputs from those derived purely from parametric memory. **Tempora-0325** focuses on this objective:

- **Temporal testing**: Provides data published exclusively after March 1, 2025.
- **Unseen textual evidence**: Ensures that most existing models’ pretraining does not include these documents.
- **Detection of stale knowledge**: Encourages models to rely on newly provided information, or risk inconsistencies that reveal outdated parametric knowledge.

---

## Scope & Diversity

We collected **7,368** publicly available documents from:
- Government and corporate announcements
- Legal and medical reports
- Sports updates, news articles, and blogs
- Miscellaneous informational sites

Each source was verified to have been published after March 1, 2025, with manual checks to confirm the authenticity of time-sensitive information. Two key subsets are made available:

1. **Unbalanced Full Corpus** (Tempora-0325): Mirrors the real-world domain distribution.
2. **Balanced Subset** (Tempora-0325B): Offers uniform coverage across eight categories (government, corporate, legal, medical, sports, news, blogs, miscellaneous) for controlled experimentation (see the sketch below).
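
A quick way to check that balance is to tally categories in the balanced configuration. This is a minimal sketch, assuming the `source` field carries the category label in `tempora-0325B`; in other configurations it may hold a URL instead, as in the sample entry later in this card:

```python
from collections import Counter

from datasets import load_dataset

# Tally documents per category in the balanced subset. Assumes `source`
# holds the category label here; elsewhere it may be a source URL.
ds_b = load_dataset("sumuks/tempora", name="tempora-0325B", split="train")
print(Counter(ds_b["source"]))
```
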
---

## Evaluating Parametric vs. Contextual Knowledge

A central motivation behind **Tempora-0325** is enabling deeper analysis into how, or even whether, an LLM updates its internal knowledge state when presented with truly novel or conflicting data. By isolating content never encountered in typical pretraining corpora, the dataset can:

- Test retrieval-augmented generation: determine whether a model uses the new evidence from a document or relies on outdated internal parameters.
- Assess summarization and question generation tasks: see whether newly introduced information is processed accurately or overshadowed by memorized facts.
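
A simple closed-book vs. open-book probe makes this concrete. The sketch below is illustrative rather than part of the dataset tooling: `generate` stands in for any text-in/text-out LLM callable, and `my_llm` in the commented line is a hypothetical client:

```python
from typing import Callable

from datasets import load_dataset

def probe(generate: Callable[[str], str], question: str, document: str) -> tuple[str, str]:
    """Ask the same question closed-book and open-book and return both answers."""
    closed_book = generate(question)                             # parametric memory only
    open_book = generate(f"{document}\n\nQuestion: {question}")  # grounded in the document
    return closed_book, open_book

ds = load_dataset("sumuks/tempora", name="tempora-0325B", split="train")
# closed, grounded = probe(my_llm, "What happened, and when?", ds[0]["extracted_content"])
# Divergent answers suggest the model falls back on stale parametric
# knowledge once the document is withheld.
```
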
---

## Methodological Longevity

While **Tempora-0325** is a snapshot of post-March-2025 knowledge, the data collection methodology is **open-sourced** so that future variants (e.g., **Tempora-0727**) can be built over time. This systematic refresh keeps the dataset novel for successive generations of LLMs, preserving its effectiveness for detecting when models override new information with stale, parametric knowledge.

---

## Dataset Structure

### Available Configurations

This repository offers multiple configurations, each corresponding to a different data split or processing stage:

- **tempora-0325B**
  - Balanced subset of 250 training documents.
  - Equal coverage of 8 domains for controlled experiments.
- **tempora-0325**
  - The full, unbalanced corpus.
  - 5,599 training documents.
- **tempora-0325-raw**
  - The raw version with minimal processing, for advanced or custom use cases.
  - 7,368 total documents.

### Data Fields

Depending on the configuration, you will see some or all of the following fields:

- **id** *(string)*: A unique identifier for each document.
- **source** *(string)*: The source domain or category (e.g., `legal`, `medical`, `sports`), if available.
- **raw** *(string)*: Unprocessed text content (available in `tempora-0325-raw` only).
- **extracted_content** *(string)*: The main processed text of each document.
- **extracted_content_stage_2** *(string)*: A second content-extraction stage (only in `tempora-0325-raw`).
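
To confirm which fields a given configuration exposes, inspect the loaded dataset directly. A minimal check (the column list in the comment reflects the documented fields, not a guaranteed order):

```python
from datasets import load_dataset

# Print the columns and feature types exposed by a configuration.
ds_raw = load_dataset("sumuks/tempora", name="tempora-0325-raw", split="train")
print(ds_raw.column_names)  # expected: id, source, raw, extracted_content, extracted_content_stage_2
print(ds_raw.features)
```
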
### Splits and Statistics

| Config               | # Documents | Split | Size (approx.) |
|:---------------------|------------:|:-----:|---------------:|
| **tempora-0325**     |       5,599 | train |       ~25.9 MB |
| **tempora-0325B**    |         250 | train |        ~1.5 MB |
| **tempora-0325-raw** |       7,368 | train |       ~4.19 GB |
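
Split metadata can also be read without downloading the data, via the `datasets` builder API (a sketch; exact byte counts reported by the hub may differ from the approximate sizes above):

```python
from datasets import load_dataset_builder

# Read split names and example counts from the dataset metadata.
builder = load_dataset_builder("sumuks/tempora", name="tempora-0325-raw")
print(builder.info.splits)  # e.g. {'train': SplitInfo(num_examples=7368, ...)}
```
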
---

## Usage

Below are examples of how to load **Tempora-0325** using the [Hugging Face `datasets` library](https://github.com/huggingface/datasets). Adjust the `name` argument as needed.

### Loading with `datasets`

```python
from datasets import load_dataset

# Load the balanced subset
ds_balanced = load_dataset("sumuks/tempora", name="tempora-0325B", split="train")

# Load the main unbalanced corpus
ds_full = load_dataset("sumuks/tempora", name="tempora-0325", split="train")

# Load the raw version
ds_raw = load_dataset("sumuks/tempora", name="tempora-0325-raw", split="train")
```
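
Once loaded, the character-length distribution shown in the overview figure can be approximated in a few lines (a quick sanity check, not part of the release tooling):

```python
from datasets import load_dataset

ds_full = load_dataset("sumuks/tempora", name="tempora-0325", split="train")

# Summarize character lengths of the processed text.
lengths = [len(text) for text in ds_full["extracted_content"]]
print(f"min={min(lengths)} mean={sum(lengths) / len(lengths):.0f} max={max(lengths)}")
```
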
### Dataset Example

A sample entry from `tempora-0325` might look like this:

```python
{
  'id': 'QChCKP-ecAD',
  'source': 'https://www.theguardian.com/sport/2025/mar/09/france-captain-antoine-dupont-rugby-union-injury',
  'extracted_content': "# Antoine Dupont faces long spell out with ruptured cruciate knee ligaments\nAntoine Dupont, France’s talismanic captain and the player ..."
}
```

---

## Licensing

This dataset is released under the [**Open Data Commons Attribution License (ODC-By) v1.0**](https://opendatacommons.org/licenses/by/1-0/).
Use of this dataset is also subject to the terms and conditions laid out by each respective source from which documents were collected.

---

## Citation

If you use **Tempora-0325** in your research or application, please cite:

```
Pending! Please contact the authors!
```

---

## Acknowledgments

Special thanks to all domain experts and contributors who helped verify publication dates and authenticity. By regularly refreshing **Tempora** with new data, we hope to advance the understanding of how modern language models adapt to truly novel, time-sensitive content.

---

*(Last updated: March 17, 2025)*