---
license: apache-2.0
task_categories:
  - tabular-classification
  - tabular-regression
  - graph-ml
  - other
tags:
  - recommendation
  - recsys
  - short-video
  - clips
  - retrieval
  - ranking
  - user-modeling
  - industrial
  - real-world
size_categories:
  - 10B<n<100B
language:
  - en
pretty_name: VK-LSVD
---

# VK-LSVD: Large Short-Video Dataset

VK-LSVD is the largest open industrial short-video recommendation dataset with real-world interactions:

- **40B unique user–item interactions** with rich feedback (`timespent`, `like`, `dislike`, `share`, `bookmark`, `click_on_author`, `open_comments`) and context (`place`, `platform`, `agent`);
- **10M users** (with age, gender, geo);
- **20M short videos** (with duration, `author_id`, content embedding);
- **Global Temporal Ordering** across six consecutive months of user interactions.

Why short video? Users often watch dozens of clips per session, producing dense, time-ordered signals well suited for modeling. Unlike music, podcasts, or long-form video, which are often consumed in the background, short videos are foreground by design. They also do not exhibit repeat exposure. Even without explicit feedback, signals such as skips, completions, and replays yield strong implicit labels. Single-item feeds also simplify attribution and reduce confounding compared with multi-item layouts.


> **Note:** The test set will be released after the upcoming challenge.


📊 [Basic Statistics](#basic-statistics) · 🧱 [Data Description](#data-description) · ⚡ [Quick Start](#quick-start) · 🧩 [Configurable Subsets](#configurable-subsets)


## Basic Statistics

- Users: 10,000,000
- Items: 19,627,601
- Unique interactions: 40,774,024,903
- Interaction density: 0.0208%
- Total watch time: 858,160,100,084 s
- Likes: 1,171,423,458
- Dislikes: 11,860,138
- Shares: 262,734,328
- Bookmarks: 40,124,463
- Clicks on author: 84,632,666
- Comment opens: 481,251,593
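
These headline numbers are plain aggregations over the interaction logs. As a sanity check, the sketch below recomputes a few of them on a single week of the `up0.001_ip0.001` subsample (assuming it has already been downloaded as in the Quick Start; paths are illustrative):

```python
import polars as pl

# Recompute aggregate feedback counts for one training week (illustrative path).
week = pl.scan_parquet('VK-LSVD/subsamples/up0.001_ip0.001/train/week_00.parquet')

stats = week.select(
    pl.len().alias('interactions'),
    pl.col('timespent').cast(pl.UInt64).sum().alias('watch_time_s'),
    pl.col('like').sum().alias('likes'),
    pl.col('dislike').sum().alias('dislikes'),
    pl.col('share').sum().alias('shares'),
).collect()
print(stats)
```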

## Data Description

**Privacy-preserving taxonomy:** all categorical metadata (`user_id`, `geo`, `item_id`, `author_id`, `place`, `platform`, `agent`) is anonymized into stable integer IDs (consistent across splits; no reverse mapping is provided).
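
Because IDs are stable across splits, interaction and metadata files can be joined directly. A minimal sanity-check sketch (assuming every interacting user has a metadata row, and the local layout from the Quick Start below):

```python
import polars as pl

# Every user_id seen in a validation week should have a row in users_metadata
# (illustrative paths; 'anti' keeps interactions whose user_id has no match).
val = pl.scan_parquet('VK-LSVD/subsamples/up0.001_ip0.001/validation/week_25.parquet')
meta = pl.scan_parquet('VK-LSVD/metadata/users_metadata.parquet')

unmatched = val.join(meta, on='user_id', how='anti').select(pl.len()).collect()
assert unmatched.item() == 0
```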

### Interactions

`interactions/`

Each row is one observation (a short video shown to a user) with feedback and context. There are no repeated exposures of the same user–item pair.

**Global Temporal Split (GTS):** train / validation / test preserve time order: train on the past, validate and test on the future.

**Chronology:** files are organized by weeks (e.g., `week_XX.parquet`); rows within each file are in increasing timestamp order.

| Field | Type | Description |
|---|---|---|
| user_id | uint32 | User identifier |
| item_id | uint32 | Video identifier |
| place | uint8 | Place: feed/search/group/… (24 ids) |
| platform | uint8 | Platform: Android/Web/TV/… (11 ids) |
| agent | uint8 | Agent/client: browser/app (29 ids) |
| timespent | uint8 | Watch time (0–255 seconds) |
| like | boolean | User liked the video |
| dislike | boolean | User disliked the video |
| share | boolean | User shared the video |
| bookmark | boolean | User bookmarked the video |
| click_on_author | boolean | User opened author page |
| open_comments | boolean | User opened the comments section |
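
The context fields make it easy to slice engagement by surface. A small sketch (illustrative paths, following the Quick Start layout) computing per-platform feedback rates:

```python
import polars as pl

# Like rate and mean watch time per platform for one week (illustrative path).
week = pl.scan_parquet('VK-LSVD/subsamples/up0.001_ip0.001/train/week_00.parquet')

per_platform = (
    week.group_by('platform')
        .agg(
            pl.len().alias('views'),
            pl.col('like').mean().alias('like_rate'),
            pl.col('timespent').mean().alias('mean_timespent_s'),
        )
        .sort('views', descending=True)
        .collect()
)
print(per_platform)
```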

### Users metadata

`users_metadata.parquet`

| Field | Type | Description |
|---|---|---|
| user_id | uint32 | User identifier |
| age | uint8 | Age (18–70 years) |
| gender | uint8 | Gender |
| geo | uint8 | Most frequent user location (80 ids) |
| train_interactions_rank | uint32 | Popularity rank for sampling (lower = more interactions) |
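
For a quick feel of the user base, a sketch (illustrative path) that aggregates the demographic fields:

```python
import polars as pl

# Number of users and mean age per gender id (illustrative path).
users = pl.scan_parquet('VK-LSVD/metadata/users_metadata.parquet')
by_gender = (
    users.group_by('gender')
         .agg(pl.len().alias('users'), pl.col('age').mean().alias('mean_age'))
         .collect()
)
print(by_gender)
```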

### Items metadata

`items_metadata.parquet`

| Field | Type | Description |
|---|---|---|
| item_id | uint32 | Video identifier |
| author_id | uint32 | Author identifier |
| duration | uint8 | Video duration (seconds) |
| train_interactions_rank | uint32 | Popularity rank for sampling (lower = more interactions) |
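
`duration` pairs naturally with `timespent` to derive implicit labels such as completions and replays (see "Why short video?" above). A minimal sketch, assuming a downloaded week file and metadata (illustrative paths):

```python
import polars as pl

# Derive a watch-completion signal by joining interactions with item duration.
week = pl.scan_parquet('VK-LSVD/subsamples/up0.001_ip0.001/train/week_00.parquet')
items = pl.scan_parquet('VK-LSVD/metadata/items_metadata.parquet')

completion = (
    week.join(items.select('item_id', 'duration'), on='item_id')
        .with_columns(
            (pl.col('timespent') / pl.col('duration').clip(lower_bound=1))
            .alias('watch_ratio')  # values > 1 can indicate replays
        )
        .collect(engine='streaming')
)
```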

### Embeddings: variable width

Embeddings are trained strictly on content (video/description/audio, etc.); no collaborative signal is mixed in.
Components are ordered: the dot product of the first n components approximates the cosine similarity of the original production embeddings.
This lets researchers pick any dimensionality (1…64) to trade quality for speed and memory.

`item_embeddings.npz`

| Field | Type | Description |
|---|---|---|
| item_id | uint32 | Video identifier |
| embedding | float16[64] | Item content embedding with ordered components |
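
To illustrate the ordered components, the sketch below truncates embeddings to the first `n` components and ranks items by dot product (assuming `item_embeddings.npz` is in the Quick Start location; the query item is arbitrary):

```python
import numpy as np

data = np.load('VK-LSVD/metadata/item_embeddings.npz')
item_ids = data['item_id']
embeddings = data['embedding'].astype(np.float32)  # float16 -> float32 for stable math

n = 16  # any width in 1..64; larger n better approximates the original similarity
truncated = embeddings[:, :n]

# Dot product over the first n components approximates the cosine similarity
# of the original production embeddings.
query = truncated[0]
scores = truncated @ query
nearest = item_ids[np.argsort(-scores)[:10]]  # 10 most similar items to item 0
```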

## Quick Start

### Load a small subsample

```python
from huggingface_hub import hf_hub_download
import polars as pl
import numpy as np

subsample_name = 'up0.001_ip0.001'
content_embedding_size = 32

# Training weeks 00-24 and validation week 25 (Global Temporal Split).
train_interactions_files = [f'subsamples/{subsample_name}/train/week_{i:02}.parquet'
                            for i in range(25)]
val_interactions_file = [f'subsamples/{subsample_name}/validation/week_25.parquet']

metadata_files = ['metadata/users_metadata.parquet',
                  'metadata/items_metadata.parquet',
                  'metadata/item_embeddings.npz']

# Download all files into a local 'VK-LSVD' directory.
for file in (train_interactions_files +
             val_interactions_file +
             metadata_files):
    hf_hub_download(
        repo_id='deepvk/VK-LSVD', repo_type='dataset',
        filename=file, local_dir='VK-LSVD'
    )

# Lazily scan all training weeks, then materialize with the streaming engine.
train_interactions = pl.concat([pl.scan_parquet(f'VK-LSVD/{file}')
                                for file in train_interactions_files])
train_interactions = train_interactions.collect(engine='streaming')

val_interactions = pl.read_parquet(f'VK-LSVD/{val_interactions_file[0]}')

train_users = train_interactions.select('user_id').unique()
train_items = train_interactions.select('item_id').unique()

# Load embeddings once; keep only items seen in training and truncate
# to the first `content_embedding_size` ordered components.
embeddings_npz = np.load('VK-LSVD/metadata/item_embeddings.npz')
item_ids = embeddings_npz['item_id']
item_embeddings = embeddings_npz['embedding']

mask = np.isin(item_ids, train_items.to_numpy())
item_ids = item_ids[mask]
item_embeddings = item_embeddings[mask, :content_embedding_size]

users_metadata = pl.read_parquet('VK-LSVD/metadata/users_metadata.parquet')
items_metadata = pl.read_parquet('VK-LSVD/metadata/items_metadata.parquet')

# Restrict metadata to training users/items and attach the embeddings.
users_metadata = users_metadata.join(train_users, on='user_id')
items_metadata = items_metadata.join(train_items, on='item_id')
items_metadata = items_metadata.join(pl.DataFrame({'item_id': item_ids,
                                                   'embedding': item_embeddings}),
                                     on='item_id')
```

## Configurable Subsets

We provide several ready-made slices and simple utilities to compose your own subset to match your task, data budget, and hardware. You can control density via popularity quantiles (`train_interactions_rank`), draw random users, or pick specific time windows (see the sketch at the end of this section), all while preserving the Global Temporal Split.

Representative subsamples are provided for quick experiments:

| Subset | Users | Items | Interactions | Density |
|---|---|---|---|---|
| whole | 10,000,000 | 19,627,601 | 40,774,024,903 | 0.0208% |
| ur0.1 | 1,000,000 | 18,701,510 | 4,066,457,259 | 0.0217% |
| ur0.01 | 100,000 | 12,467,302 | 407,854,360 | 0.0327% |
| ur0.01_ir0.01 | 90,178 | 125,018 | 4,044,900 | 0.0359% |
| up0.01_ir0.01 | 100,000 | 171,106 | 38,404,921 | 0.2245% |
| ur0.01_ip0.01 | 99,893 | 196,277 | 191,625,941 | 0.9774% |
| up0.01_ip0.01 | 100,000 | 196,277 | 1,417,906,344 | 7.2240% |
| up0.001_ip0.001 | 10,000 | 19,628 | 47,976,280 | 24.4428% |
| up-0.9_ip-0.9 | 8,939,432 | 17,654,817 | 2,861,937,212 | 0.0018% |
- `urX` / `irX`: X fraction of random users / items (e.g., `ur0.01` = 1% of users);
- `upX` / `ipX`: X fraction of the most popular users / items (by `train_interactions_rank`);
- negative X denotes the least-popular fraction (e.g., `-0.9` = bottom 90%).

For example, to get `ur0.01_ip0.01` (1% of random users, 1% of the most popular items), use the snippet below.

```python
import polars as pl

def get_sample(entries: pl.LazyFrame, split_column: str, fraction: float) -> pl.LazyFrame:
    # Positive fraction: keep rows at or below the `fraction` quantile of
    # `split_column`; negative fraction: keep rows at or above the
    # complementary (1 + fraction) quantile, i.e. the bottom |fraction|.
    if fraction >= 0:
        entries = entries.filter(pl.col(split_column) <=
                                 pl.col(split_column).quantile(fraction,
                                                               interpolation='midpoint'))
    else:
        entries = entries.filter(pl.col(split_column) >=
                                 pl.col(split_column).quantile(1 + fraction,
                                                               interpolation='midpoint'))
    return entries

# 1% of random users: user IDs are anonymized, so a quantile over user_id
# acts as a random sample.
users = pl.scan_parquet('VK-LSVD/metadata/users_metadata.parquet')
users_sample = get_sample(users, 'user_id', 0.01).select(['user_id'])

# 1% of the most popular items (lower rank = more interactions).
items = pl.scan_parquet('VK-LSVD/metadata/items_metadata.parquet')
items_sample = get_sample(items, 'train_interactions_rank', 0.01).select(['item_id'])

# Inner joins keep only interactions between sampled users and sampled items.
interactions = pl.scan_parquet('VK-LSVD/interactions/validation/week_25.parquet')
interactions = interactions.join(users_sample, on='user_id', maintain_order='left')
interactions = interactions.join(items_sample, on='item_id', maintain_order='left')

interactions_sample = interactions.collect(engine='streaming')
```

To get `up-0.9_ip-0.9` (90% of least popular users, 90% of least popular items), replace the user and item sampling lines with:

```python
users_sample = get_sample(users, 'train_interactions_rank', -0.9).select(['user_id'])
items_sample = get_sample(items, 'train_interactions_rank', -0.9).select(['item_id'])
```
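
Time windows compose the same way while preserving the Global Temporal Split: simply restrict the set of week files you scan. A minimal sketch (assuming the full `interactions/` layout) keeping only the last five training weeks:

```python
import polars as pl

# Last five training weeks (weeks 20-24); rows within each file stay in
# chronological order, so concatenating in week order preserves the GTS.
last_weeks = [f'VK-LSVD/interactions/train/week_{i:02}.parquet' for i in range(20, 25)]
recent_train = pl.concat([pl.scan_parquet(f) for f in last_weeks]).collect(engine='streaming')
```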