Improve dataset card for FutureQueryEval: Add paper, code, project links, update metadata and content

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +187 -3
README.md CHANGED
@@ -1,3 +1,187 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: mit
+ task_categories:
+ - text-ranking
+ language:
+ - en
+ tags:
+ - information-retrieval
+ - reranking
+ - llm
+ - benchmark
+ - temporal
+ - llm-reranking
+ ---
+
+ # How Good are LLM-based Rerankers? An Empirical Analysis of State-of-the-Art Reranking Models πŸ”
+
+ This repository contains the **FutureQueryEval Dataset** presented in the paper [How Good are LLM-based Rerankers? An Empirical Analysis of State-of-the-Art Reranking Models](https://huggingface.co/papers/2508.16757).
+
+ Code: [https://github.com/DataScienceUIBK/llm-reranking-generalization-study](https://github.com/DataScienceUIBK/llm-reranking-generalization-study)
+
+ Project Page / Leaderboard: [https://rankarena.ngrok.io](https://rankarena.ngrok.io)
+
+ ## πŸŽ‰ News
+ - **[2025-08-22]** 🎯 **FutureQueryEval Dataset Released!** - The first temporal IR benchmark, with all queries drawn from events after April 2025
+ - **[2025-08-22]** πŸ”§ Comprehensive evaluation framework released - 22 reranking methods, 40 variants tested
+ - **[2025-08-22]** πŸ“Š Integrated with the [RankArena](https://arxiv.org/abs/2508.05512) leaderboard - view and interact with it at [rankarena.ngrok.io](https://rankarena.ngrok.io)
+ - **[2025-08-20]** πŸ“ Paper accepted at EMNLP Findings 2025
+
+ ## πŸ“– Introduction
+
+ We present the **most comprehensive empirical study of reranking methods** to date, systematically evaluating 22 state-of-the-art approaches across 40 variants. Our key contribution is **FutureQueryEval** - the first temporal benchmark designed to test reranker generalization on truly novel queries unseen during LLM pretraining.
+
+ <div align="center">
+ <img src="https://raw.githubusercontent.com/DataScienceUIBK/llm-reranking-generalization-study/main/figures/radar.jpg" alt="Performance Overview" width="600"/>
+ <p><em>Performance comparison across pointwise, pairwise, and listwise reranking paradigms</em></p>
+ </div>
+
+ ### Key Findings πŸ”
+ - **Temporal Performance Gap**: 5-15% performance drop on novel queries compared to standard benchmarks
+ - **Listwise Superiority**: Best generalization to unseen content (8% avg. degradation vs. 12-15% for other paradigms)
+ - **Efficiency Trade-offs**: Comprehensive runtime analysis reveals optimal speed-accuracy combinations
+ - **Domain Vulnerabilities**: All methods struggle with argumentative and informal content
+
+ # πŸ“„ FutureQueryEval Dataset
+
+ ## Overview
+ **FutureQueryEval** is a novel IR benchmark comprising **148 queries** with **2,938 query-document pairs** across **7 topical categories**, designed to evaluate reranker performance on temporally novel queries.
+
+ ### 🎯 Why FutureQueryEval?
+ - **Zero Contamination**: All queries refer to events after April 2025
+ - **Human Annotated**: 4 expert annotators with quality control
+ - **Diverse Domains**: Technology, Sports, Politics, Science, Health, Business, Entertainment
+ - **Real Events**: Based on actual news and developments, not synthetic data
+
+ ### πŸ“Š Dataset Statistics
+ | Metric | Value |
+ |--------|-------|
+ | Total Queries | 148 |
+ | Total Documents | 2,787 |
+ | Query-Document Pairs | 2,938 |
+ | Avg. Relevant Docs per Query | 6.54 |
+ | Languages | English |
+ | License | MIT |
+
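+ ### πŸ’» Quick Start
+ A minimal sketch of loading the data with πŸ€— `datasets`. The repo id and field names below are assumptions - adjust them to this repository's actual id and file layout:
+
+ ```python
+ from datasets import load_dataset
+
+ # Hypothetical repo id - replace with this dataset's actual id on the Hub.
+ ds = load_dataset("DataScienceUIBK/FutureQueryEval")
+ print(ds)
+
+ # Field names are illustrative; inspect one record to see the real schema.
+ first_split = list(ds.keys())[0]
+ print(ds[first_split][0])
+ ```
+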
+ ### 🌍 Category Distribution
+ - **Technology**: 25.0% (37 queries)
+ - **Sports**: 20.9% (31 queries)
+ - **Science & Environment**: 13.5% (20 queries)
+ - **Business & Finance**: 12.8% (19 queries)
+ - **Health & Medicine**: 10.8% (16 queries)
+ - **World News & Politics**: 9.5% (14 queries)
+ - **Entertainment & Culture**: 7.4% (11 queries)
+
+ ### πŸ“ Example Queries
+ ```
+ 🌍 World News & Politics:
+ "What specific actions has Egypt taken to support injured Palestinians from Gaza,
+ as highlighted during the visit of Presidents El-Sisi and Macron to Al-Arish General Hospital?"
+
+ ⚽ Sports:
+ "Which teams qualified for the 2025 UEFA European Championship playoffs in June 2025?"
+
+ πŸ’» Technology:
+ "What are the key features of Apple's new Vision Pro 2 announced at WWDC 2025?"
+ ```
+
+ ## Data Collection Methodology
+ 1. **Source Selection**: Major news outlets, official sites, sports organizations
+ 2. **Temporal Filtering**: Events after April 2025 only
+ 3. **Query Creation**: Manual generation by domain experts
+ 4. **Novelty Validation**: Tested against the GPT-4 knowledge cutoff (see the sketch below)
+ 5. **Quality Control**: Multi-annotator review with senior oversight
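+
+ Step 4 is described only at a high level; the following is a minimal sketch of one way such a contamination probe could look. The client, model name, prompt wording, and pass criterion are all illustrative assumptions, not the authors' exact procedure:
+
+ ```python
+ from openai import OpenAI
+
+ client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
+
+ def looks_novel(query: str) -> bool:
+     """Heuristic: if the model cannot answer, the underlying event
+     likely postdates its knowledge cutoff."""
+     resp = client.chat.completions.create(
+         model="gpt-4o",  # illustrative choice of model
+         messages=[{
+             "role": "user",
+             "content": "Answer factually, or reply exactly UNKNOWN if you "
+                        "have no knowledge of this event: " + query,
+         }],
+     )
+     return "UNKNOWN" in resp.choices[0].message.content
+
+ print(looks_novel("What are the key features of Apple's new Vision Pro 2 announced at WWDC 2025?"))
+ ```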
+
+ # πŸ“Š Evaluation Results
+
+ ## Top Performers on FutureQueryEval
+
+ | Method Category | Best Model | NDCG@10 | Runtime (s) |
+ |----------------|------------|---------|-------------|
+ | **Listwise** | Zephyr-7B | **62.65** | 1,240 |
+ | **Pointwise** | MonoT5-3B | **60.75** | 486 |
+ | **Setwise** | Flan-T5-XL | **56.57** | 892 |
+ | **Pairwise** | EchoRank-XL | **54.97** | 2,158 |
+ | **Tournament** | TourRank-GPT4o | **62.02** | 3,420 |
+
+ ## Performance Insights
+ - πŸ† **Best Overall**: Zephyr-7B (62.65 NDCG@10)
+ - ⚑ **Best Efficiency**: FlashRank-MiniLM (55.43 NDCG@10, 195s)
+ - 🎯 **Best Balance**: MonoT5-3B (60.75 NDCG@10, 486s)
+
+ <div align="center">
+ <img src="https://raw.githubusercontent.com/DataScienceUIBK/llm-reranking-generalization-study/main/figures/efficiency_tradeoff.png.jpg" alt="Efficiency Analysis" width="700"/>
+ <p><em>Runtime vs. performance trade-offs across reranking methods</em></p>
+ </div>
+
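+ All scores above are NDCG@10. For reference, a minimal sketch of computing NDCG@10 for a run against qrels with `pytrec_eval` (the toy ids below are illustrative; in practice load the dataset's qrels and your reranker's output):
+
+ ```python
+ import pytrec_eval
+
+ # Toy qrels (query -> doc -> graded relevance) and run (query -> doc -> score).
+ qrels = {"q1": {"d1": 1, "d2": 0, "d3": 1}}
+ run = {"q1": {"d1": 2.3, "d2": 1.1, "d3": 0.4}}
+
+ evaluator = pytrec_eval.RelevanceEvaluator(qrels, {"ndcg_cut.10"})
+ results = evaluator.evaluate(run)
+ print(results["q1"]["ndcg_cut_10"])
+ ```
+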
+ # πŸ”§ Supported Methods
+
+ We evaluate **22 reranking approaches** across multiple paradigms:
+
+ ### Pointwise Methods
+ - MonoT5, RankT5, InRanker, TWOLAR
+ - FlashRank, Transformer Rankers
+ - UPR, MonoBERT, ColBERT
+
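+ A minimal sketch of the pointwise paradigm above, using the public MonoT5 recipe (score = probability of generating "true" vs. "false"; the `castorini/monot5-base-msmarco-10k` checkpoint is one commonly used variant, and this is an illustration, not the paper's exact harness):
+
+ ```python
+ import torch
+ from transformers import T5ForConditionalGeneration, T5Tokenizer
+
+ name = "castorini/monot5-base-msmarco-10k"
+ tok = T5Tokenizer.from_pretrained(name)
+ model = T5ForConditionalGeneration.from_pretrained(name).eval()
+
+ def score(query: str, doc: str) -> float:
+     # MonoT5 prompt format; relevance = P("true") among {"true", "false"}.
+     inputs = tok(f"Query: {query} Document: {doc} Relevant:",
+                  return_tensors="pt", truncation=True)
+     start = torch.full((1, 1), model.config.decoder_start_token_id)
+     with torch.no_grad():
+         logits = model(**inputs, decoder_input_ids=start).logits[0, 0]
+     true_id, false_id = tok.encode("true")[0], tok.encode("false")[0]
+     return torch.softmax(logits[[true_id, false_id]], dim=0)[0].item()
+
+ docs = ["Apple unveiled Vision Pro 2 at WWDC 2025.", "Pasta recipes for summer."]
+ print(sorted(docs, key=lambda d: score("Apple Vision Pro 2 features", d), reverse=True))
+ ```
+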
+ ### Listwise Methods
+ - RankGPT, ListT5, Zephyr, Vicuna
+ - LiT5-Distill, InContext Rerankers
+
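+ For the listwise paradigm above, the core mechanic is packing the query and a numbered candidate list into one prompt and parsing the returned permutation. A minimal RankGPT-style prompt builder and permutation parser (prompt wording is illustrative; the LLM call itself is omitted):
+
+ ```python
+ import re
+
+ def build_listwise_prompt(query: str, docs: list[str]) -> str:
+     # Number candidates so the model can answer with e.g. "[2] > [1] > [3]".
+     passages = "\n".join(f"[{i + 1}] {d}" for i, d in enumerate(docs))
+     return (f"Rank the passages below by relevance to the query.\n"
+             f"Query: {query}\n{passages}\n"
+             f"Answer with the ranking only, e.g. [2] > [1] > [3].")
+
+ def parse_permutation(response: str, num_docs: int) -> list[int]:
+     # Deduplicate, drop out-of-range indices, append anything omitted.
+     seen = [int(m) - 1 for m in re.findall(r"\[(\d+)\]", response)]
+     order = list(dict.fromkeys(i for i in seen if 0 <= i < num_docs))
+     return order + [i for i in range(num_docs) if i not in order]
+
+ print(parse_permutation("[2] > [1] > [3]", 3))  # -> [1, 0, 2]
+ ```
+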
+ ### Pairwise Methods
+ - PRP (Pairwise Ranking Prompting)
+ - EchoRank
+
+ ### Advanced Methods
+ - Setwise (Flan-T5 variants)
+ - TourRank (Tournament-based)
+ - RankLLaMA (Task-specific fine-tuned)
+
+ # πŸ”„ Dataset Updates
+
+ **FutureQueryEval will be updated every 6 months** with new queries about recent events to maintain temporal novelty. Subscribe to releases for notifications!
+
+ ## Upcoming Updates
+ - **Version 1.1** (December 2025): +100 queries from July-September 2025 events
+ - **Version 1.2** (June 2026): +100 queries from October 2025-March 2026 events
+
+ # πŸ“‹ Leaderboard
+
+ Submit your reranking method's results to appear on our leaderboard! See [SUBMISSION.md](https://github.com/DataScienceUIBK/llm-reranking-generalization-study/blob/main/SUBMISSION.md) for guidelines.
+
+ Current standings are available at [RankArena](https://rankarena.ngrok.io).
+
+ # 🀝 Contributing
+
+ We welcome contributions! See [CONTRIBUTING.md](https://github.com/DataScienceUIBK/llm-reranking-generalization-study/blob/main/CONTRIBUTING.md) for:
+ - Adding new reranking methods
+ - Improving evaluation metrics
+ - Dataset quality improvements
+ - Bug fixes and optimizations
+
+ # 🎈 Citation
+
+ If you use FutureQueryEval or our evaluation framework, please cite:
+
+ ```bibtex
+ @misc{abdallah2025howgoodarellmbasedrerankers,
+   title={How Good are LLM-based Rerankers? An Empirical Analysis of State-of-the-Art Reranking Models},
+   author={Abdelrahman Abdallah and Bhawna Piryani},
+   year={2025},
+   eprint={2508.16757},
+   archivePrefix={arXiv},
+   primaryClass={cs.IR}
+ }
+ ```
+
+ # πŸ“ž Contact
+
+ - **Authors**: [Abdelrahman Abdallah](mailto:[email protected]), [Bhawna Piryani](mailto:[email protected])
+ - **Institution**: University of Innsbruck
+ - **Issues**: Please use GitHub Issues for bug reports and feature requests
+
+ ---
+
+ <div align="center">
+ <p>⭐ Star this repo if you find it helpful! ⭐</p>
+ <p>πŸ“§ Questions? Open an issue or contact the authors</p>
+ </div>