Datasets:

Improve dataset card for FutureQueryEval: Add paper, code, project links, update metadata and content
#1 by nielsr (HF Staff) - opened

README.md CHANGED (@@ -1,3 +1,187 @@)
---
license: mit
task_categories:
- text-ranking
language:
- en
tags:
- information-retrieval
- reranking
- llm
- benchmark
- temporal
- llm-reranking
---

# How Good are LLM-based Rerankers? An Empirical Analysis of State-of-the-Art Reranking Models

This repository contains the **FutureQueryEval Dataset** presented in the paper [How Good are LLM-based Rerankers? An Empirical Analysis of State-of-the-Art Reranking Models](https://huggingface.co/papers/2508.16757).

Code: [https://github.com/DataScienceUIBK/llm-reranking-generalization-study](https://github.com/DataScienceUIBK/llm-reranking-generalization-study)

Project Page / Leaderboard: [https://rankarena.ngrok.io](https://rankarena.ngrok.io)

## News
- **[2025-08-22]** **FutureQueryEval Dataset Released!** The first temporal IR benchmark, with queries about events from April 2025 onward
- **[2025-08-22]** Comprehensive evaluation framework released: 22 reranking methods, 40 variants tested
- **[2025-08-22]** Integrated with the [RankArena](https://arxiv.org/abs/2508.05512) leaderboard, which you can view and interact with at [https://rankarena.ngrok.io](https://rankarena.ngrok.io)
- **[2025-08-20]** Paper accepted at EMNLP Findings 2025

## Introduction

We present the **most comprehensive empirical study of reranking methods** to date, systematically evaluating 22 state-of-the-art approaches across 40 variants. Our key contribution is **FutureQueryEval**, the first temporal benchmark designed to test reranker generalization on truly novel queries unseen during LLM pretraining.

<div align="center">
<img src="https://github.com/DataScienceUIBK/llm-reranking-generalization-study/blob/main/figures/radar.jpg" alt="Performance Overview" width="600"/>
<p><em>Performance comparison across pointwise, pairwise, and listwise reranking paradigms</em></p>
</div>

### Key Findings
- **Temporal Performance Gap**: 5-15% performance drop on novel queries compared to standard benchmarks
- **Listwise Superiority**: Best generalization to unseen content (8% avg. degradation vs. 12-15% for other paradigms)
- **Efficiency Trade-offs**: Comprehensive runtime analysis reveals optimal speed-accuracy combinations
- **Domain Vulnerabilities**: All methods struggle with argumentative and informal content

# FutureQueryEval Dataset

## Overview
**FutureQueryEval** is a novel IR benchmark comprising **148 queries** with **2,938 query-document pairs** across **7 topical categories**, designed to evaluate reranker performance on temporally novel queries.

### Why FutureQueryEval?
- **Zero Contamination**: All queries refer to events after April 2025
- **Human Annotated**: 4 expert annotators with quality control
- **Diverse Domains**: Technology, Sports, Politics, Science, Health, Business, Entertainment
- **Real Events**: Based on actual news and developments, not synthetic data

### Dataset Statistics
| Metric | Value |
|--------|-------|
| Total Queries | 148 |
| Total Documents | 2,787 |
| Query-Document Pairs | 2,938 |
| Avg. Relevant Docs per Query | 6.54 |
| Languages | English |
| License | MIT |
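
Benchmarks like this are typically distributed as queries plus graded relevance judgments. As a minimal sketch, assuming a TREC-style qrels layout (the exact on-disk format of FutureQueryEval is not specified on this card, so the field order here is an assumption):

```python
# Minimal sketch of working with TREC-style relevance judgments
# ("qrels"). NOTE: the exact file layout of FutureQueryEval is an
# assumption; adapt the field order to the released files.
from collections import defaultdict

def parse_qrels(lines):
    """Parse 'query_id iteration doc_id relevance' lines into
    {query_id: {doc_id: relevance}}."""
    qrels = defaultdict(dict)
    for line in lines:
        qid, _, docid, rel = line.split()
        qrels[qid][docid] = int(rel)
    return qrels

sample = [
    "q1 0 d1 1",
    "q1 0 d2 0",
    "q1 0 d3 2",
    "q2 0 d1 1",
]
qrels = parse_qrels(sample)
# Count relevant (rel > 0) documents per query.
relevant_per_query = {q: sum(r > 0 for r in d.values()) for q, d in qrels.items()}
print(relevant_per_query)  # {'q1': 2, 'q2': 1}
```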

### Category Distribution
- **Technology**: 25.0% (37 queries)
- **Sports**: 20.9% (31 queries)
- **Science & Environment**: 13.5% (20 queries)
- **Business & Finance**: 12.8% (19 queries)
- **Health & Medicine**: 10.8% (16 queries)
- **World News & Politics**: 9.5% (14 queries)
- **Entertainment & Culture**: 7.4% (11 queries)

### Example Queries
```
World News & Politics:
"What specific actions has Egypt taken to support injured Palestinians from Gaza,
as highlighted during the visit of Presidents El-Sisi and Macron to Al-Arish General Hospital?"

Sports:
"Which teams qualified for the 2025 UEFA European Championship playoffs in June 2025?"

Technology:
"What are the key features of Apple's new Vision Pro 2 announced at WWDC 2025?"
```

## Data Collection Methodology
1. **Source Selection**: Major news outlets, official sites, sports organizations
2. **Temporal Filtering**: Events from April 2025 onward only
3. **Query Creation**: Manual generation by domain experts
4. **Novelty Validation**: Tested against GPT-4 knowledge cutoff
5. **Quality Control**: Multi-annotator review with senior oversight
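
The temporal filtering step above amounts to a simple date cutoff. A minimal sketch (the record structure is hypothetical, not the pipeline's actual code):

```python
# Sketch of the temporal filter: keep only events dated on or after
# the April 2025 cutoff. The event record structure is hypothetical.
from datetime import date

CUTOFF = date(2025, 4, 1)

def is_temporally_novel(event):
    """True if the event happened on or after the cutoff date."""
    return date.fromisoformat(event["date"]) >= CUTOFF

events = [
    {"title": "Older event", "date": "2025-01-15"},
    {"title": "Novel event", "date": "2025-06-02"},
]
novel = [e for e in events if is_temporally_novel(e)]
print([e["title"] for e in novel])  # ['Novel event']
```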

# Evaluation Results

## Top Performers on FutureQueryEval

| Method Category | Best Model | NDCG@10 | Runtime (s) |
|----------------|------------|---------|-------------|
| **Listwise** | Zephyr-7B | **62.65** | 1,240 |
| **Pointwise** | MonoT5-3B | **60.75** | 486 |
| **Setwise** | Flan-T5-XL | **56.57** | 892 |
| **Pairwise** | EchoRank-XL | **54.97** | 2,158 |
| **Tournament** | TourRank-GPT4o | **62.02** | 3,420 |

## Performance Insights
- **Best Overall**: Zephyr-7B (62.65 NDCG@10)
- **Best Efficiency**: FlashRank-MiniLM (55.43 NDCG@10, 195s)
- **Best Balance**: MonoT5-3B (60.75 NDCG@10, 486s)
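
NDCG@10, the metric reported above, rewards placing highly relevant documents near the top. One standard formulation (linear gain; some implementations use 2^rel - 1 instead, and this is not necessarily the repository's exact evaluation code):

```python
# Standard NDCG@k: DCG of the produced ranking divided by the DCG of
# the ideal (relevance-sorted) ranking. Linear-gain variant.
import math

def dcg(relevances, k=10):
    """Discounted cumulative gain over the top-k positions."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg(ranked_relevances, k=10):
    """Normalize DCG by the ideal ordering's DCG."""
    ideal = dcg(sorted(ranked_relevances, reverse=True), k)
    return dcg(ranked_relevances, k) / ideal if ideal > 0 else 0.0

# A perfect ranking scores 1.0; demoting relevant items lowers the score.
print(ndcg([3, 2, 1, 0]))  # 1.0
print(ndcg([0, 2, 1, 3]) < 1.0)  # True
```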

<div align="center">
<img src="https://github.com/DataScienceUIBK/llm-reranking-generalization-study/blob/main/figures/efficiency_tradeoff.png.jpg" alt="Efficiency Analysis" width="700"/>
<p><em>Runtime vs. performance trade-offs across reranking methods</em></p>
</div>

# Supported Methods

We evaluate **22 reranking approaches** across multiple paradigms:

### Pointwise Methods
- MonoT5, RankT5, InRanker, TWOLAR
- FlashRank, Transformer Rankers
- UPR, MonoBERT, ColBERT

### Listwise Methods
- RankGPT, ListT5, Zephyr, Vicuna
- LiT5-Distill, InContext Rerankers

### Pairwise Methods
- PRP (Pairwise Ranking Prompting)
- EchoRank

### Advanced Methods
- Setwise (Flan-T5 variants)
- TourRank (Tournament-based)
- RankLLaMA (Task-specific fine-tuned)
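
The paradigms differ in how the underlying scorer is queried: pointwise methods score each document independently, while pairwise methods compare documents head to head. An illustrative sketch using a toy word-overlap scorer (not any of the models listed above):

```python
# Illustrative contrast between pointwise and pairwise reranking,
# using a toy word-overlap scorer (not any model listed above).

def score(query, doc):
    """Toy pointwise relevance score: query/document word overlap."""
    return len(set(query.split()) & set(doc.split()))

def pointwise_rerank(query, docs):
    """Score each document independently, then sort by score."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)

def pairwise_rerank(query, docs):
    """Compare documents in pairs; rank by number of wins."""
    wins = {d: 0 for d in docs}
    for i, a in enumerate(docs):
        for b in docs[i + 1:]:
            if score(query, a) >= score(query, b):
                wins[a] += 1
            else:
                wins[b] += 1
    return sorted(docs, key=lambda d: wins[d], reverse=True)

query = "apple vision pro features"
docs = ["apple vision pro 2 features", "uefa playoffs june", "apple earnings"]
print(pointwise_rerank(query, docs)[0])  # apple vision pro 2 features
print(pairwise_rerank(query, docs)[0])   # apple vision pro 2 features
```

Pairwise methods make O(n^2) scorer calls versus O(n) for pointwise, which is consistent with the higher runtimes reported for pairwise rerankers in the results table.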

# Dataset Updates

**FutureQueryEval will be updated every 6 months** with new queries about recent events to maintain temporal novelty. Subscribe to releases for notifications!

## Upcoming Updates
- **Version 1.1** (December 2025): +100 queries from July-September 2025 events
- **Version 1.2** (June 2026): +100 queries from October 2025-March 2026 events

# Leaderboard

Submit your reranking method's results to appear on our leaderboard! See [SUBMISSION.md](https://github.com/DataScienceUIBK/llm-reranking-generalization-study/blob/main/SUBMISSION.md) for guidelines.

Current standings are available at [RankArena](https://rankarena.ngrok.io).

# Contributing

We welcome contributions! See [CONTRIBUTING.md](https://github.com/DataScienceUIBK/llm-reranking-generalization-study/blob/main/CONTRIBUTING.md) for:
- Adding new reranking methods
- Improving evaluation metrics
- Dataset quality improvements
- Bug fixes and optimizations

# Citation

If you use FutureQueryEval or our evaluation framework, please cite:

```bibtex
@misc{abdallah2025howgoodarellmbasedrerankers,
  title={How Good are LLM-based Rerankers? An Empirical Analysis of State-of-the-Art Reranking Models},
  author={Abdelrahman Abdallah and Bhawna Piryani},
  year={2025},
  eprint={2508.16757},
  archivePrefix={arXiv},
  primaryClass={cs.IR}
}
```

# Contact

- **Authors**: [Abdelrahman Abdallah](mailto:[email protected]), [Bhawna Piryani](mailto:[email protected])
- **Institution**: University of Innsbruck
- **Issues**: Please use GitHub Issues for bug reports and feature requests

---

<div align="center">
<p>Star this repo if you find it helpful!</p>
<p>Questions? Open an issue or contact the authors</p>
</div>