---
license: cc-by-nc-4.0
viewer: false
---
# Baidu ULTR Dataset - UvA BERT-12l-12h
Query-document vectors and clicks for a subset of the [Baidu Unbiased Learning to Rank
dataset](https://arxiv.org/abs/2207.03051).
The query-document vectors (768 dimensions) were computed by a 12-layer BERT cross-encoder trained on masked language modeling (MLM) and click-through-rate (CTR) prediction.
The model is available at: https://huggingface.co/philipphager/baidu-ultr_uva-bert_naive-pointwise
## Setup
1. Install huggingface [datasets](https://huggingface.co/docs/datasets/installation)
2. Install [pandas](https://github.com/pandas-dev/pandas) and [pyarrow](https://arrow.apache.org/docs/python/index.html): `pip install pandas pyarrow`
3. Optionally, if you cannot install `pyarrow >= 14.0.1`, install the [pyarrow-hotfix](https://github.com/pitrou/pyarrow-hotfix) package (see the sketch after this list)
4. You can now use the dataset as described below.
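If you installed the hotfix in step 3, it only needs to be imported once before loading any data; a minimal sketch:

```Python
# Apply the pyarrow hotfix (only needed for pyarrow < 14.0.1, see step 3):
import pyarrow_hotfix  # noqa: F401
```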
## Load train / test click dataset:
```Python
from datasets import load_dataset
dataset = load_dataset(
    "philipphager/baidu-ultr_uva-mlm-ctr",
    name="clicks",
    split="train",  # ["train", "test"]
    cache_dir="~/.cache/huggingface",
)

dataset.set_format("torch")  # [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```
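As a quick sanity check, each entry holds one query together with all of its documents; a minimal sketch for inspecting the first entry (shapes follow the feature tables below):

```Python
# Inspect the first query of the click dataset:
sample = dataset[0]
print(sample["query_id"])
print(sample["query_document_embedding"].shape)  # (number of documents, 768)
print(sample["click"])  # one click label per document
```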
## Load expert annotations:
```Python
from datasets import load_dataset
dataset = load_dataset(
    "philipphager/baidu-ultr_uva-mlm-ctr",
    name="annotations",
    split="test",
    cache_dir="~/.cache/huggingface",
)

dataset.set_format("torch")  # [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```
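Each annotated query carries a `frequency_bucket` (see the feature table below), so you can, for example, restrict evaluation to high-frequency queries using the standard `datasets` filter API; a minimal sketch:

```Python
# Keep only the most frequent queries (bucket 0 is the highest-frequency bucket):
frequent_queries = dataset.filter(lambda x: int(x["frequency_bucket"]) == 0)
```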
## Available features
Each row of the click / annotation dataset contains the following attributes. Use a custom `collate_fn` to select specific features (see below):
### Click dataset
| name | dtype | description |
|------------------------------|----------------|-------------|
| query_id | string | Baidu query_id |
| query_md5 | string | MD5 hash of query text |
| query | List[int32] | List of query tokens |
| query_length | int32 | Number of query tokens |
| n | int32 | Number of documents for current query, useful for padding |
| url_md5 | List[string] | MD5 hash of document URL, most reliable document identifier |
| text_md5 | List[string] | MD5 hash of document title and abstract |
| title | List[List[int32]] | List of tokens for document titles |
| abstract | List[List[int32]] | List of tokens for document abstracts |
| query_document_embedding     | Tensor[Tensor[float16]] | BERT CLS token embedding per query-document pair (768 dims) |
| click | Tensor[int32] | Click / no click on a document |
| position | Tensor[int32] | Position in ranking (does not always match original item position) |
| media_type                   | Tensor[int32]  | Document type (label encoding recommended as IDs do not occupy a continuous integer range; see the sketch after this table) |
| displayed_time | Tensor[float32]| Seconds a document was displayed on the screen |
| serp_height | Tensor[int32] | Pixel height of a document on the screen |
| slipoff_count_after_click | Tensor[int32] | Number of times a document was scrolled off the screen after previously clicking on it |
| bm25 | Tensor[float32] | BM25 score for documents |
| bm25_title | Tensor[float32] | BM25 score for document titles |
| bm25_abstract | Tensor[float32] | BM25 score for document abstracts |
| tf_idf | Tensor[float32] | TF-IDF score for documents |
| tf | Tensor[float32] | Term frequency for documents |
| idf | Tensor[float32] | Inverse document frequency for documents |
| ql_jelinek_mercer_short | Tensor[float32] | Query likelihood score for documents using Jelinek-Mercer smoothing (alpha = 0.1) |
| ql_jelinek_mercer_long | Tensor[float32] | Query likelihood score for documents using Jelinek-Mercer smoothing (alpha = 0.7) |
| ql_dirichlet | Tensor[float32] | Query likelihood score for documents using Dirichlet smoothing (lambda = 128) |
| document_length | Tensor[int32] | Length of documents |
| title_length | Tensor[int32] | Length of document titles |
| abstract_length | Tensor[int32] | Length of document abstracts |
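As noted for `media_type` above, the raw type IDs do not form a continuous range, so a label encoding is recommended before, e.g., feeding them into an embedding layer. A minimal sketch, assuming `dataset` is the click split loaded above (the annotation split has no `media_type`; all names are illustrative):

```Python
import torch

# Map the non-contiguous raw media type IDs to a dense 0..K-1 range.
# For large splits, consider computing this vocabulary on a sample instead.
unique_ids = torch.unique(torch.cat([sample["media_type"] for sample in dataset]))
media_type_to_idx = {int(raw_id): idx for idx, raw_id in enumerate(unique_ids.tolist())}

def encode_media_type(media_type: torch.Tensor) -> torch.Tensor:
    # Replace raw IDs by their dense label encoding:
    return torch.tensor([media_type_to_idx[int(raw_id)] for raw_id in media_type])
```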
### Expert annotation dataset
| name | dtype | description |
|------------------------------|----------------|-------------|
| query_id | string | Baidu query_id |
| query_md5 | string | MD5 hash of query text |
| query | List[int32] | List of query tokens |
| query_length | int32 | Number of query tokens |
| frequency_bucket | int32 | Monthly frequency of query (bucket) from 0 (high frequency) to 9 (low frequency) |
| n | int32 | Number of documents for current query, useful for padding |
| url_md5 | List[string] | MD5 hash of document URL, most reliable document identifier |
| text_md5 | List[string] | MD5 hash of document title and abstract |
| title | List[List[int32]] | List of tokens for document titles |
| abstract | List[List[int32]] | List of tokens for document abstracts |
| query_document_embedding     | Tensor[Tensor[float16]] | BERT CLS token embedding per query-document pair (768 dims) |
| label | Tensor[int32] | Relevance judgments on a scale from 0 (bad) to 4 (excellent) |
| bm25 | Tensor[float32] | BM25 score for documents |
| bm25_title | Tensor[float32] | BM25 score for document titles |
| bm25_abstract | Tensor[float32] | BM25 score for document abstracts |
| tf_idf | Tensor[float32] | TF-IDF score for documents |
| tf | Tensor[float32] | Term frequency for documents |
| idf | Tensor[float32] | Inverse document frequency for documents |
| ql_jelinek_mercer_short | Tensor[float32] | Query likelihood score for documents using Jelinek-Mercer smoothing (alpha = 0.1) |
| ql_jelinek_mercer_long | Tensor[float32] | Query likelihood score for documents using Jelinek-Mercer smoothing (alpha = 0.7) |
| ql_dirichlet | Tensor[float32] | Query likelihood score for documents using Dirichlet smoothing (lambda = 128) |
| document_length | Tensor[int32] | Length of documents |
| title_length | Tensor[int32] | Length of document titles |
| abstract_length | Tensor[int32] | Length of document abstracts |
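The 0-4 relevance labels make this split suitable for offline evaluation. As an illustration, here is a minimal sketch of standard DCG@k (not necessarily the metric of the official benchmark), ranking the documents of the first annotated query by their `bm25` score:

```Python
import torch

def dcg_at_k(labels: torch.Tensor, scores: torch.Tensor, k: int = 10) -> torch.Tensor:
    # Sort documents by score, then apply exponential gain and log2 discount:
    ranking = torch.argsort(scores, descending=True)[:k]
    gains = 2.0 ** labels[ranking].float() - 1.0
    discounts = torch.log2(torch.arange(2, len(ranking) + 2).float())
    return (gains / discounts).sum()

sample = dataset[0]
print(dcg_at_k(sample["label"], sample["bm25"]))
```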
## Example PyTorch collate function
Each sample in the dataset is a single query with multiple documents.
The following example demonstrates how to create a batch containing multiple queries with varying numbers of documents by applying padding (it assumes the click split from above is loaded as `dataset`):
```Python
import torch
from typing import List
from collections import defaultdict
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader
def collate_clicks(samples: List):
    batch = defaultdict(list)

    for sample in samples:
        batch["query_document_embedding"].append(sample["query_document_embedding"])
        batch["position"].append(sample["position"])
        batch["click"].append(sample["click"])
        batch["n"].append(sample["n"])

    return {
        "query_document_embedding": pad_sequence(
            batch["query_document_embedding"], batch_first=True
        ),
        "position": pad_sequence(batch["position"], batch_first=True),
        "click": pad_sequence(batch["click"], batch_first=True),
        "n": torch.tensor(batch["n"]),
    }

loader = DataLoader(dataset, collate_fn=collate_clicks, batch_size=16)
```
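When consuming these batches, the `n` entry marks how many documents per query are real; a minimal usage sketch that masks out the padding:

```Python
# Iterate over batches and build a padding mask from `n`:
for batch in loader:
    embeddings = batch["query_document_embedding"]  # (batch_size, max_docs, 768)
    # True for real documents, False for padded positions:
    mask = torch.arange(embeddings.shape[1]).unsqueeze(0) < batch["n"].unsqueeze(1)
    # ... score documents and apply the mask in your loss ...
    break
```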