abdoelsayed committed
Commit da16f2f · verified · 1 Parent(s): a9c03d7

Upload folder using huggingface_hub

Files changed (2)
  1. .gitattributes +59 -59
  2. README.md +203 -203
.gitattributes CHANGED
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.lz4 filter=lfs diff=lfs merge=lfs -text
*.mds filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
# Image files - uncompressed
*.bmp filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
# Image files - compressed
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
# Video files - compressed
*.mp4 filter=lfs diff=lfs merge=lfs -text
*.webm filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
---
license: apache-2.0
task_categories:
- text-retrieval
language:
- en
tags:
- information-retrieval
- reranking
- temporal-evaluation
- benchmark
size_categories:
- 1K<n<10K
pretty_name: Reranking, Retriever
configs:
- config_name: default
  data_files:
  - split: test
    path:
    - queries.tsv
---

# FutureQueryEval Dataset (EMNLP 2025) 🔍

## Dataset Description

**FutureQueryEval** is a novel Information Retrieval (IR) benchmark designed to evaluate reranker performance on temporally novel queries. It comprises **148 queries** with **2,938 query-document pairs** across **7 topical categories**, specifically created to test how well reranking models generalize to genuinely novel queries unseen during LLM pretraining.

### Key Features

- **Zero Contamination**: All queries refer to events after April 2025
- **Human Annotated**: Created by 4 expert annotators with quality control
- **Diverse Domains**: Technology, Sports, Politics, Science, Health, Business, Entertainment
- **Real Events**: Based on actual news and developments, not synthetic data
- **Temporal Novelty**: First benchmark designed to test reranker generalization on post-training events

## Dataset Statistics

| Metric | Value |
|--------|-------|
| Total Queries | 148 |
| Total Documents | 2,787 |
| Query-Document Pairs | 2,938 |
| Avg. Relevant Docs per Query | 6.54 |
| Languages | English |
| License | Apache-2.0 |

## Category Distribution

| Category | Queries | Percentage |
|----------|---------|------------|
| **Technology** | 37 | 25.0% |
| **Sports** | 31 | 20.9% |
| **Science & Environment** | 20 | 13.5% |
| **Business & Finance** | 19 | 12.8% |
| **Health & Medicine** | 16 | 10.8% |
| **World News & Politics** | 14 | 9.5% |
| **Entertainment & Culture** | 11 | 7.4% |

## Dataset Structure

The dataset consists of three main files:

### Files

- **`queries.tsv`**: Contains the query information
  - Columns: `query_id`, `query_text`, `category`
- **`corpus.tsv`**: Contains the document collection
  - Columns: `doc_id`, `title`, `text`, `url`
- **`qrels.txt`**: Contains relevance judgments
  - Format: `query_id 0 doc_id relevance_score` (see the sample lines below)

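Each line of `qrels.txt` follows the standard TREC qrels layout shown above. The lines below are purely illustrative (the document IDs are hypothetical, not taken from the dataset); they show one query judged against a highly relevant, a marginally relevant, and a non-relevant document:

```text
FQ001 0 doc_0012 3
FQ001 0 doc_0458 1
FQ001 0 doc_0731 0
```
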
### Data Fields

#### Queries
- `query_id` (string): Unique identifier for each query
- `query_text` (string): The natural language query
- `category` (string): Topical category (Technology, Sports, etc.)

#### Corpus
- `doc_id` (string): Unique identifier for each document
- `title` (string): Document title
- `text` (string): Full document content
- `url` (string): Source URL of the document

#### Relevance Judgments (qrels)
- `query_id` (string): Query identifier
- `iteration` (int): Always 0 (standard TREC format)
- `doc_id` (string): Document identifier
- `relevance` (int): Relevance score (0-3, where 3 is highly relevant)

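Because `doc_id` is the key shared by `qrels.txt` and `corpus.tsv`, the judged documents for a query can be inspected by joining the two files. A minimal sketch, assuming `corpus.tsv` is tab-separated with a header row matching the columns above and that both files are fetched from this repo with `hf_hub_download` (the query ID is illustrative):

```python
import pandas as pd
from huggingface_hub import hf_hub_download

repo = "abdoelsayed/FutureQueryEval"

# Fetch the corpus and qrels files from the dataset repo (filenames per the Files section)
corpus_path = hf_hub_download(repo_id=repo, filename="corpus.tsv", repo_type="dataset")
qrels_path = hf_hub_download(repo_id=repo, filename="qrels.txt", repo_type="dataset")

# Assumes corpus.tsv has a header row (doc_id, title, text, url)
corpus = pd.read_csv(corpus_path, sep="\t")
qrels = pd.read_csv(qrels_path, sep=" ",
                    names=["query_id", "iteration", "doc_id", "relevance"])

# Titles of documents judged relevant (grade > 0) for one query ("FQ001" is illustrative)
relevant = qrels[(qrels["query_id"] == "FQ001") & (qrels["relevance"] > 0)]
print(relevant.merge(corpus, on="doc_id")[["doc_id", "relevance", "title"]])
```
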
## Example Queries

**🌍 World News & Politics:**
> "What specific actions has Egypt taken to support injured Palestinians from Gaza, as highlighted during the visit of Presidents El-Sisi and Macron to Al-Arish General Hospital?"

**⚽ Sports:**
> "Which teams qualified for the 2025 UEFA European Championship playoffs in June 2025?"

**💻 Technology:**
> "What are the key features of Apple's new Vision Pro 2 announced at WWDC 2025?"

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the queries; per the config above, the default config exposes
# queries.tsv as a single "test" split
dataset = load_dataset("abdoelsayed/FutureQueryEval")
queries = dataset["test"]

# corpus.tsv and qrels.txt are separate repo files and are read directly
# (see the Evaluation Example below)

# Example: get the first query
print(f"Query: {queries[0]['query_text']}")
print(f"Category: {queries[0]['category']}")
```

### Evaluation Example

```python
import pandas as pd

# Load relevance judgments (assumes qrels.txt from this repo has been
# downloaded to the working directory; space-separated TREC format)
qrels_df = pd.read_csv("qrels.txt", sep=" ",
                       names=["query_id", "iteration", "doc_id", "relevance"])

# Filter for a specific query, keeping only documents judged relevant (grade > 0)
query_rels = qrels_df[(qrels_df["query_id"] == "FQ001") & (qrels_df["relevance"] > 0)]
print(f"Relevant documents for query FQ001: {len(query_rels)}")
```

## Methodology

### Data Collection Process

1. **Source Selection**: Major news outlets, official sites, sports organizations
2. **Temporal Filtering**: Events after April 2025 only
3. **Query Creation**: Manual generation by domain experts
4. **Novelty Validation**: Tested against GPT-4 knowledge cutoff
5. **Quality Control**: Multi-annotator review with senior oversight

### Annotation Guidelines

- **Highly Relevant (3)**: Document directly answers the query
- **Relevant (2)**: Document partially addresses the query
- **Marginally Relevant (1)**: Document mentions query topics but lacks detail
- **Not Relevant (0)**: Document does not address the query

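These graded labels feed graded metrics such as NDCG directly; binary metrics (MAP, MRR, Recall) need a relevance threshold. A common convention, though not necessarily the protocol used in the paper, is to count grade >= 1 as relevant (or >= 2 for a stricter setting):

```python
import pandas as pd

# qrels loaded as in the Evaluation Example above
qrels_df = pd.read_csv("qrels.txt", sep=" ",
                       names=["query_id", "iteration", "doc_id", "relevance"])

# Illustrative binarization: grade >= 1 counts as relevant (use >= 2 for a stricter cut)
qrels_df["is_relevant"] = (qrels_df["relevance"] >= 1).astype(int)
print(qrels_df.groupby("query_id")["is_relevant"].sum().head())
```
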
## Research Applications

This dataset is designed for:

- **Reranker Evaluation**: Testing generalization to novel content
- **Temporal IR Research**: Understanding time-sensitive retrieval challenges
- **Domain Robustness**: Evaluating cross-domain performance
- **Contamination Studies**: Clean evaluation on post-training data

## Benchmark Results

Top-performing methods on FutureQueryEval:

| Method | Type | NDCG@10 | Runtime (s) |
|--------|------|---------|-------------|
| Zephyr-7B | Listwise | **62.65** | 1,240 |
| MonoT5-3B | Pointwise | **60.75** | 486 |
| Flan-T5-XL | Setwise | **56.57** | 892 |

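To score your own reranker with NDCG@10, the qrels can be fed to a standard TREC evaluator. A minimal sketch using `pytrec_eval` (an external library, not part of this repo; the IDs and run scores below are hypothetical placeholders for a reranker's output):

```python
import pytrec_eval

# Qrels and run are nested dicts: {query_id: {doc_id: value}}
# In practice, build the qrels dict from qrels.txt and the run from your reranker.
qrels = {"FQ001": {"doc_a": 3, "doc_b": 1, "doc_c": 0}}        # hypothetical judgments
run = {"FQ001": {"doc_a": 12.3, "doc_c": 9.8, "doc_b": 4.2}}   # hypothetical reranker scores

evaluator = pytrec_eval.RelevanceEvaluator(qrels, {"ndcg_cut"})
results = evaluator.evaluate(run)

# Mean NDCG@10 over all evaluated queries
mean_ndcg10 = sum(q["ndcg_cut_10"] for q in results.values()) / len(results)
print(f"NDCG@10: {mean_ndcg10:.4f}")
```
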
## Dataset Updates

FutureQueryEval will be updated every 6 months with new queries about recent events to maintain temporal novelty:

- **Version 1.1** (December 2025): +100 queries from July-September 2025
- **Version 1.2** (June 2026): +100 queries from October 2025-March 2026

## Citation

If you use FutureQueryEval in your research, please cite:

```bibtex
@misc{abdallah2025good,
  title={How Good are LLM-based Rerankers? An Empirical Analysis of State-of-the-Art Reranking Models},
  author={Abdelrahman Abdallah and Bhawna Piryani and Jamshid Mozafari and Mohammed Ali and Adam Jatowt},
  year={2025},
  eprint={2508.16757},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

## Contact

- **Authors**: Abdelrahman Abdallah, Bhawna Piryani
- **Institution**: University of Innsbruck
- **Paper**: [arXiv:2508.16757](https://arxiv.org/abs/2508.16757)
- **Code**: [GitHub Repository](https://github.com/DataScienceUIBK/llm-reranking-generalization-study)

## License

This dataset is released under the Apache-2.0 License.