---
license: apache-2.0
task_categories:
- tabular-classification
- tabular-regression
- graph-ml
# - recommendation
# - retrieval
# - ranking
# - user-modeling
- other
tags:
- recommendation
- recsys
- short-video
- clips
- retrieval
- ranking
- user-modeling
- industrial
- real-world
size_categories:
- 10B<n<100B
language:
- en
pretty_name: VK-LSVD
---

# VK-LSVD: Large Short-Video Dataset
**VK-LSVD** is the largest open industrial short-video recommendation dataset with real-world interactions:
- **40B** unique user–item interactions with rich feedback (`timespent`, `like`, `dislike`, `share`, `bookmark`, `click_on_author`, `open_comments`) and context (`place`, `platform`, `agent`);
- **10M** users (with `age`, `gender`, `geo`);
- **20M** short videos (with `duration`, `author_id`, content `embedding`);
- **Global Temporal Ordering** across **six consecutive months** of user interactions.

**Why short video?** Users often watch dozens of clips per session, producing dense, time-ordered signals well suited for modeling.
Unlike music, podcasts, or long-form video, which are often consumed in the background, short videos are foreground by design, and the same user–item pair is not exposed twice.
Even without explicit feedback, signals such as skips, completions, and replays yield strong implicit labels.
Single-item feeds also simplify attribution and reduce confounding compared with multi-item layouts.

---

> **Note:** The test set will be released after the upcoming challenge.

---

[📊 Basic Statistics](#basic-statistics) • [🧱 Data Description](#data-description) • [⚡ Quick Start](#quick-start) • [🧩 Configurable Subsets](#configurable-subsets)

---

## Basic Statistics
- Users: **10,000,000**
- Items: **19,627,601**
- Unique interactions: **40,774,024,903**
- Interaction density: **0.0208%**
- Total watch time: **858,160,100,084 s**
- Likes: **1,171,423,458**
- Dislikes: **11,860,138**
- Shares: **262,734,328**
- Clicks on author: **84,632,666**
- Comment opens: **481,251,593**

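The density figure follows directly from the counts above: unique interactions divided by the number of possible user–item pairs. A quick sanity check:

```python
# Interaction density = unique interactions / (users * items).
users = 10_000_000
items = 19_627_601
interactions = 40_774_024_903

density = interactions / (users * items)
print(f'{density:.4%}')  # ≈ 0.0208%
```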
---

## Data Description
**Privacy-preserving taxonomy** — all categorical metadata (`user_id`, `geo`, `item_id`, `author_id`, `place`, `platform`, `agent`) is anonymized into stable integer IDs (consistent across splits; no reverse mapping is provided).

### Interactions
[interactions](https://huggingface.co/datasets/deepvk/VK-LSVD/tree/main/interactions)
Each row is one observation (a short video shown to a user) with feedback and context. There are no repeated exposures of the same user–item pair.
**Global Temporal Split (GTS):** `train` / `validation` / `test` preserve time order — train on the past, validate and test on the future.
**Chronology:** files are organized by week (e.g., `week_XX.parquet`); rows within each file are in increasing timestamp order.

> The test set will be published after the challenge.

---
| Field | Type | Description |
|-----|----|-----------|
|`user_id`|uint32|User identifier|
|`item_id`|uint32|Video identifier|
|`place`|uint8|Place: feed/search/group/… (24 ids)|
|`platform`|uint8|Platform: Android/Web/TV/… (11 ids)|
|`agent`|uint8|Agent/client: browser/app (29 ids)|
|`timespent`|uint8|Watch time (0–255 seconds)|
|`like`|boolean|User liked the video|
|`dislike`|boolean|User disliked the video|
|`share`|boolean|User shared the video|
|`bookmark`|boolean|User bookmarked the video|
|`click_on_author`|boolean|User opened the author's page|
|`open_comments`|boolean|User opened the comments section|

### Users metadata
[users_metadata.parquet](metadata/users_metadata.parquet)
| Field | Type | Description |
|-----|----|-----------|
|`user_id`|uint32|User identifier|
|`age`|uint8|Age (18–70 years)|
|`gender`|uint8|Gender|
|`geo`|uint8|Most frequent user location (80 ids)|
|`train_interactions_rank`|uint32|Popularity rank for sampling (lower = more interactions)|

### Items metadata
[items_metadata.parquet](metadata/items_metadata.parquet)

| Field | Type | Description |
|-----|----|-----------|
|`item_id`|uint32|Video identifier|
|`author_id`|uint32|Author identifier|
|`duration`|uint8|Video duration (seconds)|
|`train_interactions_rank`|uint32|Popularity rank for sampling (lower = more interactions)|

### Embeddings: variable width
**Embeddings are trained strictly on content** (video, description, audio, etc.) — no collaborative signal is mixed in.
**Components are ordered**: the _dot product_ of the first n components approximates the _cosine_ similarity of the original production embeddings.
This lets researchers pick any dimensionality (**1…64**) to trade quality for speed and memory.

[item_embeddings.npz](metadata/item_embeddings.npz)

| Field | Type | Description |
|-----|----|-----------|
|`item_id`|uint32|Video identifier|
|`embedding`|float16[64]|Item content embedding with ordered components|

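Because the components are ordered, choosing a smaller width is just a slice along the last axis. A sketch with random stand-in vectors (the real embeddings live in `metadata/item_embeddings.npz`):

```python
import numpy as np

# Random stand-ins for the 64-dim float16 item embeddings.
rng = np.random.default_rng(0)
emb64 = rng.standard_normal((1000, 64)).astype(np.float16)

# Pick a smaller width; the leading components carry the most signal.
n = 16
emb_n = emb64[:, :n].astype(np.float32)

# Dot products of the truncated vectors approximate cosine similarity
# of the original production embeddings.
scores = emb_n @ emb_n[0]
```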
## Quick Start

### Load a small subsample

```python
from huggingface_hub import hf_hub_download
import numpy as np
import polars as pl

subsample_name = 'up0.001_ip0.001'
content_embedding_size = 32

train_interactions_files = [f'subsamples/{subsample_name}/train/week_{i:02}.parquet'
                            for i in range(25)]
val_interactions_file = [f'subsamples/{subsample_name}/validation/week_25.parquet']

metadata_files = ['metadata/users_metadata.parquet',
                  'metadata/items_metadata.parquet',
                  'metadata/item_embeddings.npz']

for file in (train_interactions_files +
             val_interactions_file +
             metadata_files):
    hf_hub_download(
        repo_id='deepvk/VK-LSVD', repo_type='dataset',
        filename=file, local_dir='VK-LSVD'
    )

train_interactions = pl.concat([pl.scan_parquet(f'VK-LSVD/{file}')
                                for file in train_interactions_files])
train_interactions = train_interactions.collect(engine='streaming')

val_interactions = pl.read_parquet(f'VK-LSVD/{val_interactions_file[0]}')

train_users = train_interactions.select('user_id').unique()
train_items = train_interactions.select('item_id').unique()

# Load the embeddings archive once, then keep only items seen in train
# and truncate to the chosen dimensionality.
embeddings_npz = np.load('VK-LSVD/metadata/item_embeddings.npz')
item_ids = embeddings_npz['item_id']
item_embeddings = embeddings_npz['embedding']

mask = np.isin(item_ids, train_items.to_numpy())
item_ids = item_ids[mask]
item_embeddings = item_embeddings[mask, :content_embedding_size]

users_metadata = pl.read_parquet('VK-LSVD/metadata/users_metadata.parquet')
items_metadata = pl.read_parquet('VK-LSVD/metadata/items_metadata.parquet')

users_metadata = users_metadata.join(train_users, on='user_id')
items_metadata = items_metadata.join(train_items, on='item_id')
items_metadata = items_metadata.join(pl.DataFrame({'item_id': item_ids,
                                                   'embedding': item_embeddings}),
                                     on='item_id')
```

---

## Configurable Subsets

We provide several ready-made slices and simple utilities to compose your own subset that matches your task, data budget, and hardware.
You can control density via popularity quantiles (`train_interactions_rank`), draw random users,
or pick specific time windows — all while preserving the Global Temporal Split.

Representative subsamples are provided for quick experiments:

| Subset | Users | Items | Interactions | Density |
|-----|----:|-----------:|-----------:|-----------:|
|`whole`|10,000,000|19,627,601|40,774,024,903|0.0208%|
|`ur0.1`|1,000,000|18,701,510|4,066,457,259|0.0217%|
|`ur0.01`|100,000|12,467,302|407,854,360|0.0327%|
|`ur0.01_ir0.01`|90,178|125,018|4,044,900|0.0359%|
|`ur0.01_ip0.01`|99,893|196,277|191,625,941|0.9774%|
|`up0.01_ip0.01`|100,000|196,277|1,417,906,344|7.2240%|
|`up0.001_ip0.001`|10,000|19,628|47,976,280|24.4428%|
|`up-0.9_ip-0.9`|8,939,432|17,654,817|2,861,937,212|0.0018%|

- `urX` / `irX` — X fraction of **r**andom **u**sers / **i**tems (e.g., `ur0.01` = 1% of users).
- `upX` / `ipX` — X fraction of the most **p**opular **u**sers / **i**tems (by `train_interactions_rank`).
- Negative X denotes the least-popular fraction (e.g., `-0.9` → bottom 90%).

For example, to get [ur0.01_ip0.01](https://huggingface.co/datasets/deepvk/VK-LSVD/tree/main/subsamples/ur0.01_ip0.01) (1% of **r**andom **u**sers, 1% of the most **p**opular **i**tems), use the snippet below.
```python
import polars as pl

def get_sample(entries: pl.LazyFrame, split_column: str, fraction: float) -> pl.LazyFrame:
    # Positive fraction keeps the bottom quantile of split_column
    # (for ranks: the most popular); negative keeps the top quantile.
    if fraction >= 0:
        entries = entries.filter(pl.col(split_column) <=
                                 pl.col(split_column).quantile(fraction,
                                                               interpolation='midpoint'))
    else:
        entries = entries.filter(pl.col(split_column) >=
                                 pl.col(split_column).quantile(1 + fraction,
                                                               interpolation='midpoint'))
    return entries

users = pl.scan_parquet('VK-LSVD/metadata/users_metadata.parquet')
users_sample = get_sample(users, 'user_id', 0.01).select(['user_id'])

items = pl.scan_parquet('VK-LSVD/metadata/items_metadata.parquet')
items_sample = get_sample(items, 'train_interactions_rank', 0.01).select(['item_id'])

interactions = pl.scan_parquet('VK-LSVD/interactions/validation/week_25.parquet')
interactions = interactions.join(users_sample, on='user_id', maintain_order='left')
interactions = interactions.join(items_sample, on='item_id', maintain_order='left')

interactions_sample = interactions.collect(engine='streaming')
```

To get [up-0.9_ip-0.9](https://huggingface.co/datasets/deepvk/VK-LSVD/tree/main/subsamples/up-0.9_ip-0.9) (the 90% least popular **u**sers and the 90% least popular **i**tems), replace the user and item sampling lines with:
```python
users_sample = get_sample(users, 'train_interactions_rank', -0.9).select(['user_id'])
items_sample = get_sample(items, 'train_interactions_rank', -0.9).select(['item_id'])
```