---
license: mit
language:
- en
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
pipeline_tag: text-generation
---
|
|
|
# 🧠 LLM-QE: Improving Query Expansion by Aligning Large Language Models with Ranking Preferences
|
|
|
This is the official model for **[LLM-QE: Improving Query Expansion by Aligning Large Language Models with Ranking Preferences](https://arxiv.org/abs/2502.17057)**.
|
|
|
LLM-QE enhances **query expansion** in **information retrieval** tasks by leveraging **Large Language Models (LLMs)** and aligning the expansions they generate with **ranking preferences**.
|
|
|
---

## Paper

For a detailed explanation of the methodology and experiments, please refer to our paper:

[**LLM-QE: Improving Query Expansion by Aligning Large Language Models with Ranking Preferences**](https://arxiv.org/abs/2502.17057)
|
|
|
---
|
|
|
## Reproduce the Results

To reproduce the experiments and benchmarks from the paper, follow the instructions in the official GitHub repository: [NEUIR/LLM-QE](https://github.com/NEUIR/LLM-QE).
|
|
|
## Model Details

- Model name: LLM-QE-DPO
- Architecture: Meta-Llama-3-8B-Instruct, with query expansion aligned to ranking preferences via Direct Preference Optimization (DPO)
|
|
|
## Usage

You can use this model for query expansion tasks, particularly in information retrieval systems that benefit from expansions aligned with ranking preferences. A minimal usage sketch is shown below.
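
The following snippet is a minimal sketch of generating an expansion with the Transformers library. The repo id (`NEUIR/LLM-QE-DPO` is a placeholder), the prompt wording, and the generation settings are illustrative assumptions rather than the exact setup from the paper; see the GitHub repository above for the prompts and evaluation pipeline used in the experiments.

```python
# Minimal query-expansion sketch with Hugging Face Transformers.
# NOTE: "NEUIR/LLM-QE-DPO" is a placeholder repo id -- substitute this model's actual id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NEUIR/LLM-QE-DPO"  # placeholder (assumption)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

query = "what causes the aurora borealis"

# Illustrative prompt: ask the model to write a passage that answers the query.
messages = [
    {"role": "user", "content": f"Write a passage that answers the following query: {query}"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=False)
expansion = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(expansion)
```

In a retrieval pipeline, the generated expansion is typically concatenated with the original query before the combined text is passed to a sparse or dense retriever.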
|
|
|
## Citation

If you use LLM-QE in your work, please consider citing our paper:
|
```
@misc{yao2025llmqeimprovingqueryexpansion,
  title={LLM-QE: Improving Query Expansion by Aligning Large Language Models with Ranking Preferences},
  author={Sijia Yao and Pengcheng Huang and Zhenghao Liu and Yu Gu and Yukun Yan and Shi Yu and Ge Yu},
  year={2025},
  eprint={2502.17057},
  archivePrefix={arXiv},
  primaryClass={cs.IR},
  url={https://arxiv.org/abs/2502.17057},
}
```