---
license: apache-2.0
task_categories:
- text-retrieval
language:
- en
tags:
- information-retrieval
- reranking
- temporal-evaluation
- benchmark
size_categories:
- 1K<n<10K
pretty_name: FutureQueryEval
---

# FutureQueryEval Dataset (EMNLP 2025) 🔍

## Dataset Description

**FutureQueryEval** is a novel Information Retrieval (IR) benchmark designed to evaluate reranker performance under temporal novelty. It comprises **148 queries** with **2,938 query-document pairs** across **7 topical categories**, created specifically to test how well reranking models generalize to genuinely novel queries that were unseen during LLM pretraining.

### Key Features

- **Zero Contamination**: All queries refer to events after April 2025
- **Human Annotated**: Created by 4 expert annotators with quality control
- **Diverse Domains**: Technology, Sports, Politics, Science, Health, Business, Entertainment
- **Real Events**: Based on actual news and developments, not synthetic data
- **Temporal Novelty**: First benchmark designed to test reranker generalization on post-training events

## Dataset Statistics

| Metric | Value |
|--------|-------|
| Total Queries | 148 |
| Total Documents | 2,787 |
| Query-Document Pairs | 2,938 |
| Avg. Relevant Docs per Query | 6.54 |
| Languages | English |
| License | Apache-2.0 |

## Category Distribution

| Category | Queries | Percentage |
|----------|---------|------------|
| **Technology** | 37 | 25.0% |
| **Sports** | 31 | 20.9% |
| **Science & Environment** | 20 | 13.5% |
| **Business & Finance** | 19 | 12.8% |
| **Health & Medicine** | 16 | 10.8% |
| **World News & Politics** | 14 | 9.5% |
| **Entertainment & Culture** | 11 | 7.4% |

## Dataset Structure

The dataset consists of three main files:

### Files

- **`queries.tsv`**: Contains the query information
  - Columns: `query_id`, `query_text`, `category`
- **`corpus.tsv`**: Contains the document collection
  - Columns: `doc_id`, `title`, `text`, `url`
- **`qrels.txt`**: Contains relevance judgments in standard TREC format
  - Format: `query_id 0 doc_id relevance_score` (see the sample line below)
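
For illustration, a qrels line looks like this (the IDs here are hypothetical, not taken from the dataset):

```text
FQ001 0 D0042 2
```

Each whitespace-separated field corresponds to `query_id`, `iteration`, `doc_id`, and `relevance`, as described below.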
### Data Fields

#### Queries
- `query_id` (string): Unique identifier for each query
- `query_text` (string): The natural language query
- `category` (string): Topical category (Technology, Sports, etc.)

#### Corpus
- `doc_id` (string): Unique identifier for each document
- `title` (string): Document title
- `text` (string): Full document content
- `url` (string): Source URL of the document

#### Relevance Judgments (qrels)
- `query_id` (string): Query identifier
- `iteration` (int): Always 0 (standard TREC format)
- `doc_id` (string): Document identifier
- `relevance` (int): Relevance score (0-3, where 3 is highly relevant)

## Example Queries

**🌍 World News & Politics:**
> "What specific actions has Egypt taken to support injured Palestinians from Gaza, as highlighted during the visit of Presidents El-Sisi and Macron to Al-Arish General Hospital?"

**⚽ Sports:**
> "Which teams qualified for the 2025 UEFA European Championship playoffs in June 2025?"

**💻 Technology:**
> "What are the key features of Apple's new Vision Pro 2 announced at WWDC 2025?"

## Usage

### Loading the Dataset

The files can also be loaded via `datasets.load_dataset`, depending on how the repository's configs are set up; the sketch below reads the raw files directly, which does not depend on the config (it assumes the TSV files ship with a header row; pass explicit `names=` otherwise):

```python
import pandas as pd
from huggingface_hub import hf_hub_download

repo = "abdoelsayed/FutureQueryEval"

# Download the three raw files from the dataset repository
queries_path = hf_hub_download(repo, "queries.tsv", repo_type="dataset")
corpus_path = hf_hub_download(repo, "corpus.tsv", repo_type="dataset")
qrels_path = hf_hub_download(repo, "qrels.txt", repo_type="dataset")

queries = pd.read_csv(queries_path, sep="\t")
corpus = pd.read_csv(corpus_path, sep="\t")
qrels = pd.read_csv(qrels_path, sep=" ",
                    names=["query_id", "iteration", "doc_id", "relevance"])

# Example: inspect the first query
print(f"Query: {queries.iloc[0]['query_text']}")
print(f"Category: {queries.iloc[0]['category']}")
```
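
From these frames, a common starting point for reranking experiments is to materialize the judged query-document text pairs. A minimal sketch, reusing `queries`, `corpus`, and `qrels` from the snippet above:

```python
# Build id -> text lookups from the corpus and query frames.
doc_text = dict(zip(corpus["doc_id"], corpus["text"]))
query_text = dict(zip(queries["query_id"], queries["query_text"]))

# Materialize every judged (query, document) pair with its raw text.
pairs = [
    (row.query_id, row.doc_id, query_text[row.query_id], doc_text[row.doc_id])
    for row in qrels.itertuples(index=False)
]
print(f"Materialized {len(pairs)} query-document pairs")  # expected: 2,938
```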
### Evaluation Example

```python
import pandas as pd

# Load relevance judgments
qrels_df = pd.read_csv("qrels.txt", sep=" ",
                       names=["query_id", "iteration", "doc_id", "relevance"])

# Filter for a specific query, keeping only positively judged documents
# (relevance 0 marks judged-but-not-relevant entries)
query_rels = qrels_df[(qrels_df["query_id"] == "FQ001") &
                      (qrels_df["relevance"] > 0)]
print(f"Relevant documents for query FQ001: {len(query_rels)}")
```
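
The benchmark reports NDCG@10 (see Benchmark Results below). A minimal sketch of computing it with the `pytrec_eval` package, reusing `qrels_df` from the snippet above; the run here is a placeholder built from the qrels themselves purely to exercise the evaluator, so substitute your reranker's scores in practice:

```python
import pytrec_eval

# TREC-style qrels: query_id -> {doc_id: graded relevance (int)}
qrel = {
    qid: {doc: int(rel) for doc, rel in zip(group["doc_id"], group["relevance"])}
    for qid, group in qrels_df.groupby("query_id")
}

# A "run" maps each query to per-document scores from your reranker; we
# fake one from the qrels themselves just to show the expected shape.
run = {qid: {doc: float(rel) for doc, rel in docs.items()}
       for qid, docs in qrel.items()}

evaluator = pytrec_eval.RelevanceEvaluator(qrel, {"ndcg_cut.10"})
results = evaluator.evaluate(run)
mean_ndcg10 = sum(r["ndcg_cut_10"] for r in results.values()) / len(results)
print(f"NDCG@10: {mean_ndcg10:.4f}")
```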
## Methodology

### Data Collection Process

1. **Source Selection**: Major news outlets, official sites, sports organizations
2. **Temporal Filtering**: Events after April 2025 only
3. **Query Creation**: Manual generation by domain experts
4. **Novelty Validation**: Tested against GPT-4 knowledge cutoff
5. **Quality Control**: Multi-annotator review with senior oversight

### Annotation Guidelines

- **Highly Relevant (3)**: Document directly answers the query
- **Relevant (2)**: Document partially addresses the query
- **Marginally Relevant (1)**: Document mentions query topics but lacks detail
- **Not Relevant (0)**: Document does not address the query
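
The graded labels plug directly into NDCG. For binary metrics such as MAP or Recall, a cutoff must be chosen; a common convention (an assumption here, not part of the annotation guidelines) is to treat grade 2 and above as relevant:

```python
# Assumption: grades >= 2 ("Relevant" and "Highly Relevant") count as
# positive for binary metrics; adjust if grade 1 should also count.
qrels_df["binary_relevance"] = (qrels_df["relevance"] >= 2).astype(int)
print(qrels_df["binary_relevance"].value_counts())
```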
## Research Applications

This dataset is designed for:

- **Reranker Evaluation**: Testing generalization to novel content
- **Temporal IR Research**: Understanding time-sensitive retrieval challenges
- **Domain Robustness**: Evaluating cross-domain performance
- **Contamination Studies**: Clean evaluation on post-training data

## Benchmark Results

Top-performing methods on FutureQueryEval:

| Method | Type | NDCG@10 | Runtime (s) |
|--------|------|---------|-------------|
| Zephyr-7B | Listwise | **62.65** | 1,240 |
| MonoT5-3B | Pointwise | 60.75 | 486 |
| Flan-T5-XL | Setwise | 56.57 | 892 |
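
For context, here is a minimal sketch of producing a run for the evaluator above with an off-the-shelf cross-encoder via `sentence-transformers`. The model named below is a generic MS MARCO reranker used purely as an illustration, not one of the methods in the table; it reuses the `pairs` list built in the Usage section:

```python
from collections import defaultdict

from sentence_transformers import CrossEncoder

# Generic off-the-shelf reranker; swap in the model you want to benchmark.
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

# Score the judged pairs and assemble a pytrec_eval-style run.
scores = model.predict([(q_text, d_text) for _, _, q_text, d_text in pairs])
run = defaultdict(dict)
for (qid, doc_id, _, _), score in zip(pairs, scores):
    run[qid][doc_id] = float(score)
```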
## Dataset Updates

FutureQueryEval will be updated every 6 months with new queries about recent events to maintain temporal novelty:

- **Version 1.1** (December 2025): +100 queries from July-September 2025
- **Version 1.2** (June 2026): +100 queries from October 2025-March 2026

## Citation

If you use FutureQueryEval in your research, please cite:

```bibtex
@misc{abdallah2025good,
  title={How Good are LLM-based Rerankers? An Empirical Analysis of State-of-the-Art Reranking Models},
  author={Abdelrahman Abdallah and Bhawna Piryani and Jamshid Mozafari and Mohammed Ali and Adam Jatowt},
  year={2025},
  eprint={2508.16757},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

## Contact

- **Authors**: Abdelrahman Abdallah, Bhawna Piryani
- **Institution**: University of Innsbruck
- **Paper**: [arXiv:2508.16757](https://arxiv.org/abs/2508.16757)
- **Code**: [GitHub Repository](https://github.com/DataScienceUIBK/llm-reranking-generalization-study)

## License

This dataset is released under the Apache-2.0 License.