---
dataset_info:
  features:
    - name: t
      list: int64
    - name: target
      list: float64
    - name: axis
      dtype: string
    - name: track_id
      dtype: int64
    - name: timestamp
      dtype: timestamp[ns]
    - name: prefix
      dtype: string
    - name: time_length
      dtype: int64
    - name: target_length
      dtype: int64
    - name: unique_id
      dtype: int64
  splits:
    - name: train
      num_bytes: 200807240
      num_examples: 6130
  download_size: 106717441
  dataset_size: 200807240
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: mit
task_categories:
  - time-series-forecasting
size_categories:
  - 1K<n<10K
---

# REAL-V-TSFM Dataset

REAL-V-TSFM is a novel time series dataset derived entirely from real-world video data using optical flow methods. It was created to evaluate the generalization capabilities of Time Series Foundation Models (TSFMs) on realistic temporal dynamics, bridging the gap between synthetic benchmarks and real data.
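
A minimal loading sketch with the Hugging Face `datasets` library; the repo id below is assumed from this card's location.

```python
from datasets import load_dataset

# Load the single default config / train split.
ds = load_dataset("Volavion/real-v-tsfm", split="train")

row = ds[0]
print(row["prefix"], row["track_id"], row["axis"], len(row["target"]))
```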

## Description

### Dataset Overview

- **Extraction Method:** The Lucas-Kanade optical flow algorithm tracks pixel trajectories at detected keypoints in each video; the x and y coordinates of every track form separate univariate time series.
- **Source Videos:** Mainly sourced from the LaSOT dataset, which contains long videos primarily featuring humans and animals.
- **Number of Time Series:** 6,130 time series from 609 distinct objects, providing substantial categorical diversity.
- **Sequence Length:** Average length of 2,043 time steps; lengths range from 1,000 to 8,000 time steps.
- **Data Characteristics:** Approximately 44% of the series are stationary according to the Augmented Dickey-Fuller test, and the average information entropy is 3.88 bits, indicating complexity and diversity (a sketch of both statistics follows this list).
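
Both statistics are straightforward to reproduce. Below is a minimal sketch using NumPy and `statsmodels`; the histogram bin count for the entropy estimate is an assumption, since the card does not state how entropy was computed.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def is_stationary(series, alpha=0.05):
    """Augmented Dickey-Fuller test: reject the unit-root null at level alpha."""
    pvalue = adfuller(np.asarray(series, dtype=float))[1]
    return pvalue < alpha

def shannon_entropy(series, bins=64):
    """Histogram-based Shannon entropy in bits (bin count is an assumption)."""
    counts, _ = np.histogram(np.asarray(series, dtype=float), bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())
```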

### Dataset Construction Pipeline

  1. Select videos and extract frame-by-frame images.
  2. Perform foreground detection using Mixture of Gaussians 2 (MOG2) to mask background pixels.
  3. Detect corners in foreground objects with the Shi–Tomasi corner detection algorithm.
  4. Track keypoints across frames using pyramidal Lucas-Kanade optical flow and apply forward-backward consistency checks to filter unstable tracks (see the sketch after this list).
  5. Interpolate tracks to the longest sequence length per video; keep the five least-correlated tracks to ensure diversity and reduce noise.
  6. Store the x and y coordinates of each track as separate time series in the dataset.
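
A minimal sketch of steps 2-4 with OpenCV follows. The calls (`createBackgroundSubtractorMOG2`, `goodFeaturesToTrack`, `calcOpticalFlowPyrLK`) are standard OpenCV APIs, but every parameter value (corner quality, forward-backward threshold, and so on) is an illustrative assumption, not the authors' setting.

```python
import cv2
import numpy as np

def extract_tracks(video_path, fb_threshold=1.0, max_corners=200):
    """Sketch of pipeline steps 2-4; all parameter values are assumptions."""
    cap = cv2.VideoCapture(video_path)
    mog2 = cv2.createBackgroundSubtractorMOG2()
    ok, frame = cap.read()
    if not ok:
        return []
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fg = mog2.apply(frame)  # step 2: MOG2 foreground mask (no warm-up here)
    pts = cv2.goodFeaturesToTrack(
        prev_gray, maxCorners=max_corners, qualityLevel=0.01,
        minDistance=7, mask=(fg > 0).astype(np.uint8) * 255)  # step 3: Shi-Tomasi
    if pts is None:
        return []
    tracks = [[tuple(p.ravel())] for p in pts]  # one (x, y) list per keypoint
    alive = list(range(len(tracks)))            # indices of still-tracked points
    while True:
        ok, frame = cap.read()
        if not ok or len(alive) == 0:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        mog2.apply(frame)  # keep the background model updated
        # step 4: forward flow, then backward flow for the consistency check
        fwd, st_f, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        bwd, st_b, _ = cv2.calcOpticalFlowPyrLK(gray, prev_gray, fwd, None)
        fb_err = np.linalg.norm((pts - bwd).reshape(-1, 2), axis=1)
        good = (st_f.ravel() == 1) & (st_b.ravel() == 1) & (fb_err < fb_threshold)
        for k, idx in enumerate(alive):
            if good[k]:
                tracks[idx].append(tuple(fwd[k].ravel()))
        alive = [idx for k, idx in enumerate(alive) if good[k]]
        pts = fwd[good].reshape(-1, 1, 2)
        prev_gray = gray
    return tracks  # each track yields two series: its x and its y coordinates
```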

## Evaluation and Benchmarking

- State-of-the-art TSFMs were evaluated under zero-shot forecasting on REAL-V-TSFM and compared against the M4 dataset.
- Metrics used: Mean Absolute Percentage Error (MAPE), Symmetric MAPE (sMAPE), Aggregate Relative Weighted Quantile Loss (WQL), and Aggregate Relative Mean Absolute Scaled Error (MASE); the first two are sketched below.
- Results show performance degradation on REAL-V-TSFM compared to M4, indicating that current TSFMs have limited generalizability to real-world, video-derived time series.
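
The card does not give exact formulations, so the sketch below uses the standard MAPE definition and the M4-style sMAPE; the aggregate relative WQL and MASE are omitted.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent (standard definition)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

def smape(y_true, y_pred):
    """Symmetric MAPE, in percent (the variant used in the M4 competition)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(2.0 * np.abs(y_pred - y_true)
                         / (np.abs(y_true) + np.abs(y_pred))) * 100.0)
```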

## Dataset Columns

The dataset contains six primary columns (the schema above also lists auxiliary `time_length`, `target_length`, and `unique_id` fields), illustrated by the sketch after this list:

- `t`: temporal index of the time series,
- `target`: the corresponding value of the time series at time `t`,
- `axis`: indicates the spatial axis (x or y) represented by the time series,
- `track_id`: the identifier assigned during optical flow tracking, which is not guaranteed to be unique,
- `timestamp`: a field we introduced to comply with specific model input requirements (e.g., Google's TimesFM (Time Series Foundation Model)), though it has no intrinsic semantic meaning,
- `prefix`: denotes the source video in the LaSOT dataset from which the series is derived.
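
Continuing the loading sketch near the top of this card, the parallel `t` and `target` lists of one record can be combined into a pandas Series; the naming scheme below is purely illustrative.

```python
import pandas as pd

row = ds[0]  # `ds` from the loading sketch above
series = pd.Series(row["target"], index=pd.Index(row["t"], name="t"),
                   name=f"{row['prefix']}/{row['track_id']}/{row['axis']}")
print(series.describe())
```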