---
license: cc-by-sa-4.0
language:
- en
---
# Dataset Card
## Dataset Details
This dataset contains a set of candidate documents for second-stage re-ranking on MS MARCO
(dev and test splits in [BEIR](https://huggingface.co/BeIR)). The candidate documents consist of hard negatives mined with
[gtr-t5-xl](https://huggingface.co/sentence-transformers/gtr-t5-xl) as the Stage 1 ranker,
together with ground-truth documents known to be relevant to the query. This is a release from our paper
[Policy-Gradient Training of Language Models for Ranking](https://gao-g.github.io/), so
please cite it if you use this dataset.
## Direct Use
You can load the dataset by:
```python
from datasets import load_dataset
dataset = load_dataset("NeuralPGRank/msmarco-hard-negatives")
```
Each example is a dictionary:
```python
>>> dataset['test'][0]
{
    "qid" : ...,    # query ID
    "topk" : {
        doc ID: ..., # document ID as the key; None or a score as the value
        doc ID: ...,
        ...
    },
}
```
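As a sketch of how one might turn an example into an ordered candidate list for Stage 2 re-ranking, the snippet below sorts the `topk` document IDs by their Stage 1 score, placing unscored (`None`) entries last. The record shown is hypothetical and only mirrors the schema above; real IDs and scores come from the loaded dataset.

```python
# Hypothetical record following the schema shown above.
record = {
    "qid": "1048585",
    "topk": {"D59221": 1.37, "D8701": 0.92, "D3432": None},
}

# Candidate doc IDs, highest Stage 1 score first; unscored (None) entries last.
candidates = sorted(
    record["topk"],
    key=lambda doc_id: (
        record["topk"][doc_id] is None,          # False (scored) sorts before True
        -(record["topk"][doc_id] or 0.0),        # higher score first
    ),
)
print(candidates)  # ['D59221', 'D8701', 'D3432']
```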
## Citation
```
@inproceedings{Gao2023PolicyGradientTO,
title={Policy-Gradient Training of Language Models for Ranking},
author={Ge Gao and Jonathan D. Chang and Claire Cardie and Kiant{\'e} Brantley and Thorsten Joachims},
  booktitle={Conference on Neural Information Processing Systems (Foundation Models for Decision Making Workshop)},
year={2023},
url={https://arxiv.org/pdf/2310.04407}
}
```
## Dataset Card Author and Contact
[Ge Gao](https://gao-g.github.io/) |