---
license: apache-2.0
task_categories:
  - text-retrieval
language:
  - en
tags:
  - information-retrieval
  - reranking
  - temporal-evaluation
  - benchmark
size_categories:
  - 1K<n<10K
pretty_name: Reranking, Retriever
---

# FutureQueryEval Dataset (EMNLP 2025) 🔍

## Dataset Description

FutureQueryEval is a novel Information Retrieval (IR) benchmark designed to evaluate reranker performance on temporal novelty. It comprises 148 queries with 2,938 query-document pairs across 7 topical categories, specifically created to test how well reranking models generalize to truly novel queries that were unseen during LLM pretraining.

### Key Features

- **Zero Contamination**: All queries refer to events after April 2025
- **Human-Annotated**: Created by 4 expert annotators with quality control
- **Diverse Domains**: Technology, Sports, Politics, Science, Health, Business, Entertainment
- **Real Events**: Based on actual news and developments, not synthetic data
- **Temporal Novelty**: First benchmark designed to test reranker generalization on post-training events

## Dataset Statistics

| Metric | Value |
|---|---|
| Total Queries | 148 |
| Total Documents | 2,787 |
| Query-Document Pairs | 2,938 |
| Avg. Relevant Docs per Query | 6.54 |
| Languages | English |
| License | Apache-2.0 |

### Category Distribution

| Category | Queries | Percentage |
|---|---|---|
| Technology | 37 | 25.0% |
| Sports | 31 | 20.9% |
| Science & Environment | 20 | 13.5% |
| Business & Finance | 19 | 12.8% |
| Health & Medicine | 16 | 10.8% |
| World News & Politics | 14 | 9.5% |
| Entertainment & Culture | 11 | 7.4% |

## Dataset Structure

The dataset consists of three main files; a loading sketch follows the list:

### Files

- `queries.tsv`: Contains the query information
  - Columns: `query_id`, `query_text`, `category`
- `corpus.tsv`: Contains the document collection
  - Columns: `doc_id`, `title`, `text`, `url`
- `qrels.txt`: Contains relevance judgments
  - Format: `query_id 0 doc_id relevance_score`
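
A minimal loading sketch using pandas. It assumes the two `.tsv` files are tab-separated with header rows matching the columns above; adjust `header`/`names` if your copy differs:

```python
import pandas as pd

# Tab-separated query and document files
queries_df = pd.read_csv("queries.tsv", sep="\t")  # query_id, query_text, category
corpus_df = pd.read_csv("corpus.tsv", sep="\t")    # doc_id, title, text, url

# qrels.txt is whitespace-separated TREC format with no header row
qrels_df = pd.read_csv("qrels.txt", sep=" ",
                       names=["query_id", "iteration", "doc_id", "relevance"])

print(len(queries_df), len(corpus_df), len(qrels_df))  # expected: 148 2787 2938
```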

### Data Fields

#### Queries

- `query_id` (string): Unique identifier for each query
- `query_text` (string): The natural language query
- `category` (string): Topical category (Technology, Sports, etc.)

#### Corpus

- `doc_id` (string): Unique identifier for each document
- `title` (string): Document title
- `text` (string): Full document content
- `url` (string): Source URL of the document

#### Relevance Judgments (qrels)

- `query_id` (string): Query identifier
- `iteration` (int): Always 0 (standard TREC format)
- `doc_id` (string): Document identifier
- `relevance` (int): Relevance score (0-3, where 3 is highly relevant); a sample line follows
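
For illustration, one line of `qrels.txt` in this format would look like the following (the document ID `D0234` is hypothetical; `FQ001` is the query ID used in the evaluation example below):

```
FQ001 0 D0234 2
```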

## Example Queries

๐ŸŒ World News & Politics:

"What specific actions has Egypt taken to support injured Palestinians from Gaza, as highlighted during the visit of Presidents El-Sisi and Macron to Al-Arish General Hospital?"

⚽ **Sports:**

"Which teams qualified for the 2025 UEFA European Championship playoffs in June 2025?"

💻 **Technology:**

"What are the key features of Apple's new Vision Pro 2 announced at WWDC 2025?"

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("abdoelsayed/FutureQueryEval")

# Access different splits
queries = dataset["queries"]
corpus = dataset["corpus"]
qrels = dataset["qrels"]

# Example: Get first query
print(f"Query: {queries[0]['query_text']}")
print(f"Category: {queries[0]['category']}")
```

### Evaluation Example

```python
import pandas as pd

# Load relevance judgments
qrels_df = pd.read_csv("qrels.txt", sep=" ",
                       names=["query_id", "iteration", "doc_id", "relevance"])

# Filter for a specific query
query_rels = qrels_df[qrels_df["query_id"] == "FQ001"]
print(f"Relevant documents for query FQ001: {len(query_rels)}")
```

## Methodology

### Data Collection Process

1. **Source Selection**: Major news outlets, official sites, sports organizations
2. **Temporal Filtering**: Events after April 2025 only
3. **Query Creation**: Manual generation by domain experts
4. **Novelty Validation**: Tested against GPT-4 knowledge cutoff
5. **Quality Control**: Multi-annotator review with senior oversight

### Annotation Guidelines

- **Highly Relevant (3)**: Document directly answers the query
- **Relevant (2)**: Document partially addresses the query
- **Marginally Relevant (1)**: Document mentions query topics but lacks detail
- **Not Relevant (0)**: Document does not address the query

## Research Applications

This dataset is designed for:

- **Reranker Evaluation**: Testing generalization to novel content
- **Temporal IR Research**: Understanding time-sensitive retrieval challenges
- **Domain Robustness**: Evaluating cross-domain performance
- **Contamination Studies**: Clean evaluation on post-training data

## Benchmark Results

Top-performing methods on FutureQueryEval:

| Method | Type | NDCG@10 | Runtime (s) |
|---|---|---|---|
| Zephyr-7B | Listwise | 62.65 | 1,240 |
| MonoT5-3B | Pointwise | 60.75 | 486 |
| Flan-T5-XL | Setwise | 56.57 | 892 |

## Dataset Updates

FutureQueryEval will be updated every 6 months with new queries about recent events to maintain temporal novelty:

- **Version 1.1 (December 2025)**: +100 queries from July-September 2025
- **Version 1.2 (June 2026)**: +100 queries from October 2025-March 2026

## Citation

If you use FutureQueryEval in your research, please cite:

```bibtex
@misc{abdallah2025good,
    title={How Good are LLM-based Rerankers? An Empirical Analysis of State-of-the-Art Reranking Models},
    author={Abdelrahman Abdallah and Bhawna Piryani and Jamshid Mozafari and Mohammed Ali and Adam Jatowt},
    year={2025},
    eprint={2508.16757},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

## License

This dataset is released under the Apache-2.0 License.