---
license: apache-2.0
task_categories:
- tabular-classification
- tabular-regression
- graph-ml
# - recommendation
# - retrieval
# - ranking
# - user-modeling
- other
tags:
- recommendation
- recsys
- short-video
- clips
- retrieval
- ranking
- user-modeling
- industrial
- real-world
size_categories:
- 10B<n<100B
language:
- en
pretty_name: VK-LSVD
---
# VK-LSVD: Large Short-Video Dataset
**VK-LSVD** is the largest open industrial short-video recommendation dataset with real-world interactions:
- **40B** unique user–item interactions with rich feedback (`timespent`, `like`, `dislike`, `share`, `bookmark`, `click_on_author`, `open_comments`) and
context (`place`, `platform`, `agent`);
- **10M** users (with `age`, `gender`, `geo`);
- **20M** short videos (with `duration`, `author_id`, content `embedding`);
- **Global Temporal Ordering** across **six consecutive months** of user interactions.
**Why short video?** Users often watch dozens of clips per session, producing dense, time-ordered signals well suited for modeling.
Unlike music, podcasts, or long-form video, which are often consumed in the background, short videos are foreground by design. They also do not exhibit repeat exposure.
Even without explicit feedback, signals such as skips, completions, and replays yield strong implicit labels.
Single-item feeds also simplify attribution and reduce confounding compared with multi-item layouts.
---
> **Note:** The test set will be released after the upcoming challenge.
---
[📊 Basic Statistics](#basic-statistics) • [🧱 Data Description](#data-description) • [⚡ Quick Start](#quick-start) • [🧩 Configurable Subsets](#configurable-subsets)
---
## Basic Statistics
- Users: **10,000,000**
- Items: **19,627,601**
- Unique interactions: **40,774,024,903**
- Interaction density: **0.0208%**
- Total watch time: **858,160,100,084 s**
- Likes: **1,171,423,458**
- Dislikes: **11,860,138**
- Shares: **262,734,328**
- Bookmarks: **40,124,463**
- Clicks on author: **84,632,666**
- Comment opens: **481,251,593**
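The density figure is simply the unique interaction count divided by the full user–item grid; a quick check:
```python
# Density = unique interactions / (users * items)
40_774_024_903 / (10_000_000 * 19_627_601)  # ≈ 2.077e-4 ≈ 0.0208%
```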
---
## Data Description
**Privacy-preserving taxonomy** — all categorical metadata (`user_id`, `geo`, `item_id`, `author_id`, `place`, `platform`, `agent`) is anonymized into stable integer IDs (consistent across splits; no reverse mapping provided).
### Interactions
[interactions](https://huggingface.co/datasets/deepvk/VK-LSVD/tree/main/interactions)
Each row is one observation (a short video shown to a user) with feedback and context. There are no repeated exposures of the same user–item pair.
**Global Temporal Split (GTS):** `train` / `validation` / `test` preserve time order — train on the past, validate/test on the future.
**Chronology:** Files are organized by week (e.g., `week_XX.parquet`); rows within each file are in increasing timestamp order.
| Field | Type | Description |
|-----|----|-----------|
|`user_id`|uint32|User identifier|
|`item_id`|uint32|Video identifier|
|`place`|uint8|Place: feed/search/group/… (24 ids)|
|`platform`|uint8|Platform: Android/Web/TV/… (11 ids) |
|`agent`|uint8|Agent/client: browser/app (29 ids)|
|`timespent`|uint8|Watch time (0–255 seconds)|
|`like`|boolean|User liked the video|
|`dislike`|boolean|User disliked the video|
|`share`|boolean|User shared the video|
|`bookmark`|boolean|User bookmarked the video|
|`click_on_author`|boolean|User opened author page|
|`open_comments`|boolean|User opened the comments section |
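As a minimal sketch of working with this schema (assuming one weekly file has already been downloaded, as in the Quick Start below), per-user feedback can be aggregated lazily:
```python
import polars as pl

# Aggregate per-user feedback from a single weekly file.
week = pl.scan_parquet('VK-LSVD/interactions/train/week_00.parquet')
per_user = (week
            .group_by('user_id')
            .agg(pl.len().alias('views'),
                 pl.col('timespent').sum().alias('total_timespent'),
                 pl.col('like').sum().alias('likes'))
            .collect(engine='streaming'))
```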
### Users metadata
[users_metadata.parquet](metadata/users_metadata.parquet)
| Field | Type | Description |
|-----|----|-----------|
|`user_id`|uint32|User identifier|
|`age`|uint8|Age (18-70 years)|
|`gender`|uint8|Gender|
|`geo`|uint8|Most frequent user location (80 ids)|
|`train_interactions_rank`|uint32|Popularity rank for sampling (lower = more interactions)|
### Items metadata
[items_metadata.parquet](metadata/items_metadata.parquet)
| Field | Type | Description |
|-----|----|-----------|
|`item_id`|uint32|Video identifier|
|`author_id`|uint32|Author identifier|
|`duration`|uint8|Video duration (seconds)|
|`train_interactions_rank`|uint32|Popularity rank for sampling (lower = more interactions)|
### Embeddings: variable width
**Embeddings are trained strictly on content** (video/description/audio, etc.) — no collaborative signal mixed in.
**Components are ordered**: the _dot product_ of the first `n` components approximates the _cosine_ similarity of the original production embeddings.
This lets researchers pick any dimensionality (**1…64**) to trade quality for speed and memory.
[item_embeddings.npz](metadata/item_embeddings.npz)
| Field | Type | Description |
|-----|----|-----------|
|`item_id`|uint32|Video identifier|
|`embedding`|float16[64]|Item content embedding with ordered components|
---
## Quick Start
### Load a small subsample
```python
from huggingface_hub import hf_hub_download
import polars as pl
import numpy as np

subsample_name = 'up0.001_ip0.001'
content_embedding_size = 32

# Weeks 00-24 form the train split; week 25 is the validation split.
train_interactions_files = [f'subsamples/{subsample_name}/train/week_{i:02}.parquet'
                            for i in range(25)]
val_interactions_file = [f'subsamples/{subsample_name}/validation/week_25.parquet']
metadata_files = ['metadata/users_metadata.parquet',
                  'metadata/items_metadata.parquet',
                  'metadata/item_embeddings.npz']

# Download all files into a local VK-LSVD directory.
for file in (train_interactions_files +
             val_interactions_file +
             metadata_files):
    hf_hub_download(
        repo_id='deepvk/VK-LSVD', repo_type='dataset',
        filename=file, local_dir='VK-LSVD'
    )

# Scan the weekly train files lazily, then materialize with the
# streaming engine to keep peak memory low.
train_interactions = pl.concat([pl.scan_parquet(f'VK-LSVD/{file}')
                                for file in train_interactions_files])
train_interactions = train_interactions.collect(engine='streaming')
val_interactions = pl.read_parquet(f'VK-LSVD/{val_interactions_file[0]}')

train_users = train_interactions.select('user_id').unique()
train_items = train_interactions.select('item_id').unique()

# Load the content embeddings once and keep only items seen in train.
embeddings_npz = np.load('VK-LSVD/metadata/item_embeddings.npz')
item_ids = embeddings_npz['item_id']
item_embeddings = embeddings_npz['embedding']
mask = np.isin(item_ids, train_items.to_numpy())
item_ids = item_ids[mask]
item_embeddings = item_embeddings[mask]
# Components are ordered, so truncating to the first n dims is valid.
item_embeddings = item_embeddings[:, :content_embedding_size]

# Attach metadata (and embeddings) to the users/items present in train.
users_metadata = pl.read_parquet('VK-LSVD/metadata/users_metadata.parquet')
items_metadata = pl.read_parquet('VK-LSVD/metadata/items_metadata.parquet')
users_metadata = users_metadata.join(train_users, on='user_id')
items_metadata = items_metadata.join(train_items, on='item_id')
items_metadata = items_metadata.join(pl.DataFrame({'item_id': item_ids,
                                                   'embedding': item_embeddings}),
                                     on='item_id')
```
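With the small `up0.001_ip0.001` subsample, the collected frames should fit in memory on a typical workstation; for larger subsets, keep the interactions as lazy frames and push filters into the scan before calling `collect`.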
---
## Configurable Subsets
We provide several ready-made slices and simple utilities to compose your own subset that matches your task, data budget, and hardware.
You can control density via popularity quantiles (`train_interactions_rank`), draw random users,
or pick specific time windows — while preserving the Global Temporal Split.
Representative subsamples are provided for quick experiments:
| Subset | Users | Items | Interactions | Density |
|-----|----:|-----------:|-----------:|-----------:|
|`whole`|10,000,000|19,627,601|40,774,024,903|0.0208%|
|`ur0.1`|1,000,000|18,701,510|4,066,457,259|0.0217%|
|`ur0.01`|100,000|12,467,302|407,854,360|0.0327%|
|`ur0.01_ir0.01`|90,178|125,018|4,044,900|0.0359%|
|`up0.01_ir0.01`|100,000|171,106|38,404,921|0.2245%|
|`ur0.01_ip0.01`|99,893|196,277|191,625,941|0.9774%|
|`up0.01_ip0.01`|100,000|196,277|1,417,906,344|7.2240%|
|`up0.001_ip0.001`|10,000|19,628|47,976,280|24.4428%|
|`up-0.9_ip-0.9`|8,939,432|17,654,817|2,861,937,212|0.0018%|
- `urX` / `irX` — X fraction of **r**andom **u**sers / **i**tems (e.g., `ur0.01` = a random 1% of users).
- `upX` / `ipX` — X fraction of the most **p**opular **u**sers / **i**tems (by `train_interactions_rank`).
- Negative X denotes the least-popular fraction (e.g., `-0.9` → bottom 90%).
For example, to get [ur0.01_ip0.01](https://huggingface.co/datasets/deepvk/VK-LSVD/tree/main/subsamples/ur0.01_ip0.01) (1% of **r**andom **u**sers, 1% of the most **p**opular **i**tems), use the snippet below.
```python
import polars as pl

def get_sample(entries: pl.LazyFrame, split_column: str, fraction: float) -> pl.LazyFrame:
    """Keep the top `fraction` of rows by `split_column` quantile;
    a negative fraction keeps the bottom `|fraction|` instead."""
    if fraction >= 0:
        entries = entries.filter(pl.col(split_column) <=
                                 pl.col(split_column).quantile(fraction,
                                                               interpolation='midpoint'))
    else:
        entries = entries.filter(pl.col(split_column) >=
                                 pl.col(split_column).quantile(1 + fraction,
                                                               interpolation='midpoint'))
    return entries

# Anonymized `user_id`s carry no activity signal, so a `user_id` quantile
# serves as a random user sample; `train_interactions_rank` selects the
# most popular items (lower rank = more train interactions).
users = pl.scan_parquet('VK-LSVD/metadata/users_metadata.parquet')
users_sample = get_sample(users, 'user_id', 0.01).select(['user_id'])
items = pl.scan_parquet('VK-LSVD/metadata/items_metadata.parquet')
items_sample = get_sample(items, 'train_interactions_rank', 0.01).select(['item_id'])

# Inner joins keep only interactions of sampled users on sampled items,
# preserving the chronological row order of the interactions file.
interactions = pl.scan_parquet('VK-LSVD/interactions/validation/week_25.parquet')
interactions = interactions.join(users_sample, on='user_id', maintain_order='left')
interactions = interactions.join(items_sample, on='item_id', maintain_order='left')
interactions_sample = interactions.collect(engine='streaming')
```
To get [up-0.9_ip-0.9](https://huggingface.co/datasets/deepvk/VK-LSVD/tree/main/subsamples/up-0.9_ip-0.9) (the 90% least **p**opular **u**sers and the 90% least **p**opular **i**tems), replace the user and item sampling lines with:
```python
users_sample = get_sample(users, 'train_interactions_rank', -0.9).select(['user_id'])
items_sample = get_sample(items, 'train_interactions_rank', -0.9).select(['item_id'])
```
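Time windows work the same way: since files are weekly and the split is temporal, a window is just a contiguous range of week files. A minimal sketch (week numbering as in the Quick Start, where weeks 00-24 are train):
```python
import polars as pl

# Restrict training to the last 8 weeks before the validation week,
# preserving the Global Temporal Split (validation is week 25).
first_week, last_week = 17, 24
window_files = [f'VK-LSVD/interactions/train/week_{i:02}.parquet'
                for i in range(first_week, last_week + 1)]
train_window = pl.concat(
    [pl.scan_parquet(f) for f in window_files]
).collect(engine='streaming')
```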