# RPMC-L2

This description was generated by Grok 3.

Dataset download link: RPMC-L2

## Dataset Structure

### Data Instances
Each instance is an HDF5 file (`.h5`) containing synchronized music and lighting data for a specific live performance. The dataset is split into multiple parts (`RPMC_L2_part_aa`, `RPMC_L2_part_ab`, etc.) that can be merged into a single `RPMC_L2.h5` file (see the merging sketch after the feature list below). Each file is organized into two main groups, `music` and `light`, with the following features:
- Music Features: Audio-related features stored as `np.ndarray` arrays with shape `(X, L)`, where `L` is the sequence length.
- Light Features: Lighting-related data stored as `np.ndarray` arrays, primarily threshold data with shape `(F, 3, 256)`.
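
The split parts can typically be restored by byte-level concatenation before opening the merged file with `h5py`. The following is a minimal sketch, not an official loader: it assumes the parts were produced with the standard `split` utility, that `h5py` is installed, and that each top-level hash group contains the `music` and `light` subgroups described above.

```python
import glob
import shutil

import h5py

# Concatenate RPMC_L2_part_aa, RPMC_L2_part_ab, ... back into a single file.
with open("RPMC_L2.h5", "wb") as merged:
    for part in sorted(glob.glob("RPMC_L2_part_*")):
        with open(part, "rb") as src:
            shutil.copyfileobj(src, merged)  # stream copy; avoids loading a whole part into RAM

# Inspect the merged file. Top-level keys are file hashes according to the card;
# adjust the access pattern if your copy nests the groups differently.
with h5py.File("RPMC_L2.h5", "r") as data:
    print(len(data.keys()))              # number of top-level entries (file hashes)
    first = data[next(iter(data))]
    print(list(first.keys()))            # expected: ['light', 'music']
```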
### Data Fields
Group | Feature | Shape | Description |
---|---|---|---|
music | openl3 | (512, L) | OpenL3 deep audio embedding |
music | mel_spectrogram | (128, L) | Mel spectrogram |
music | mel_spectrogram_db | (128, L) | Mel spectrogram in decibels |
music | cqt | (84, L) | Constant-Q transform (CQT) |
music | stft | (1025, L) | Short-time Fourier transform (STFT) |
music | mfcc | (128, L) | Mel-frequency cepstral coefficients |
music | chroma_stft | (12, L) | Chroma features from STFT |
music | chroma_cqt | (12, L) | Chroma features from CQT |
music | chroma_cens | (12, L) | Chroma Energy Normalized Statistics |
music | spectral_centroids | (1, L) | Spectral centroid |
music | spectral_bandwidth | (1, L) | Spectral bandwidth |
music | spectral_contrast | (7, L) | Spectral contrast |
music | spectral_rolloff | (1, L) | Spectral rolloff frequency |
music | zero_crossing_rate | (1, L) | Zero-crossing rate |
light | threshold | (F, 3, 256) | Frame-specific light threshold data (Hue: 0–179, Saturation: 0–255, Value: 0–255) |
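
As a hedged illustration of how the shapes listed above map onto arrays, the sketch below loads one performance's mel spectrogram, MFCCs, and light thresholds. It assumes the merged `RPMC_L2.h5` file and the hash → `music`/`light` nesting described earlier; verify against your copy of the data.

```python
import h5py

with h5py.File("RPMC_L2.h5", "r") as data:
    key = next(iter(data))                              # one file hash
    mel = data[key]["music"]["mel_spectrogram"][...]    # np.ndarray, shape (128, L)
    mfcc = data[key]["music"]["mfcc"][...]              # np.ndarray, shape (128, L)
    thresholds = data[key]["light"]["threshold"][...]   # np.ndarray, shape (F, 3, 256)

# Axis 1 of `threshold` is (Hue, Saturation, Value); Hue spans 0-179, the others 0-255.
print(mel.shape, mfcc.shape, thresholds.shape)
```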
### Data Splits
The dataset consists of 699 files, organized by file hashes (the top-level keys in the HDF5 file). There are no predefined splits; users can process the merged `RPMC_L2.h5` file to create custom train/validation/test splits based on their research needs.
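
Since no splits are predefined, one straightforward option is to split on the top-level hash keys. A minimal sketch, assuming the merged `RPMC_L2.h5` file and an illustrative 80/10/10 ratio:

```python
import random

import h5py

with h5py.File("RPMC_L2.h5", "r") as data:
    keys = sorted(data.keys())        # 699 file hashes

random.Random(42).shuffle(keys)       # fixed seed for a reproducible split
n_train, n_val = int(0.8 * len(keys)), int(0.1 * len(keys))
train = keys[:n_train]
val = keys[n_train:n_train + n_val]
test = keys[n_train + n_val:]
print(len(train), len(val), len(test))
```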
## Dataset Creation

### Curation Rationale
The dataset was created to facilitate research on the relationship between music characteristics and lighting effects in live performance venues, enabling applications in automated lighting design, audio-visual synchronization, and immersive live experiences.
### Source Data

#### Initial Data Collection
Data was collected from professional live performance venues hosting Rock, Punk, Metal, and Core music genres. Music features were extracted from audio recordings, and lighting data was captured as threshold values (Hue, Saturation, Value) synchronized with the audio.
- Total Size: ~40 GB
- Collection Method: Professional live performance venues
- File Format: HDF5 (`.h5`)
#### Who are the source data producers?

The data was collected by researchers or professionals in live music venues, capturing synchronized audio and lighting data.
### Annotations
- Annotation Process: The dataset includes music features (e.g., mel spectrogram, MFCC) and lighting data (threshold values for Hue, Saturation, Value) automatically extracted and synchronized during data collection. No manual annotations were provided.
- Who are the annotators? The dataset creators processed and organized the data using automated feature extraction tools.
## Citation

```bibtex
@article{zhao2025automatic,
  title={Automatic Stage Lighting Control: Is it a Rule-Driven Process or Generative Task?},
  author={Zhao, Zijian and Jin, Dian and Zhou, Zijing and Zhang, Xiaoyu},
  journal={arXiv preprint arXiv:2506.01482},
  year={2025}
}
```