📢 Update: On September 4, 2025, we merged the LoRA parameters of ReasonRank (32B) into the model's checkpoint shards, so the model can now be loaded directly from the shards without attaching a LoRA adapter.

Introduction

This is the model trained in our paper: ReasonRank: Empowering Passage Ranking with Strong Reasoning Ability (📝arXiv). Please refer to our 🧩GitHub repository for usage instructions for ReasonRank-32B.
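Because the LoRA weights are already merged into the checkpoint shards, loading the model is a plain `from_pretrained` call with Hugging Face `transformers`; no PEFT adapter step is needed. A minimal loading sketch is below (the generation/prompt format is not shown here; the official input format and ranking pipeline are documented in the GitHub repository):

```python
# Minimal loading sketch for liuwenhan/reasonrank-32B.
# The LoRA parameters were merged into the shards (Sept 4, 2025 update),
# so no PEFT/adapter loading is required.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "liuwenhan/reasonrank-32B"


def load_model():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # checkpoint is stored in BF16
        device_map="auto",           # shard across available GPUs
    )
    return tokenizer, model


if __name__ == "__main__":
    tokenizer, model = load_model()
```

Loading a 32B-parameter model in BF16 requires roughly 66 GB of GPU memory, so `device_map="auto"` is used to spread the shards across multiple devices.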

Model Performance

(Figure: benchmark performance of ReasonRank-32B.)

🌹 If you use this model, please ✨star our GitHub repository to support us. Your star means a lot!

Model size: 32.8B parameters (Safetensors), BF16.

Model tree for liuwenhan/reasonrank-32B

Base model: Qwen/Qwen2.5-32B (this model is a finetune of it).
