---
license: cc-by-4.0
dataset_info:
  features:
    - name: track_name
      dtype: string
    - name: start_time
      dtype: int64
    - name: embedding
      dtype:
        array2_d:
          shape:
            - 240
            - 4800
          dtype: float32
  splits:
    - name: train
      num_bytes: 4166537771
      num_examples: 904
  download_size: 4171864391
  dataset_size: 4166537771
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

Jukebox Embeddings for the URMP Dataset

Repo with the Colab notebook used to extract the embeddings.

Overview

This dataset extends the University of Rochester Multi-Modal Music Performance (URMP) Dataset by providing embeddings for each audio file.

Original URMP Dataset

Link to official site

The URMP dataset was created to facilitate audio-visual analysis of musical performances. It comprises multiple simple multi-instrument musical pieces assembled from coordinated but separately recorded performances of individual tracks.

Jukebox Embeddings

Embeddings are derived from OpenAI's Jukebox model, following the approach described in Castellon et al. (2021) with some modifications introduced in Spotify's LLark paper:

  • Source: Output of the 36th layer of the Jukebox encoder
  • Original Jukebox encoding: 4800-dimensional vectors at 345Hz
  • Audio is chunked into 25-second clips, the maximum input length Jukebox accepts; clips shorter than 25 seconds are padded before being passed through Jukebox
  • Approach: Mean-pooling within 100ms frames, resulting in:
    • Downsampled frequency: 10Hz
    • Embedding size: 1.2 × 10^6 for a 25s audio clip
    • For a 25s audio clip, the resulting 2D array has shape [250, 4800]
  • This method retains temporal information while greatly reducing the embedding size; a sketch of the pooling step follows this list
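
To make the pooling concrete, here is a minimal NumPy sketch of the mean-pooling step. It is illustrative only, not the exact notebook code; the real extraction may handle window boundaries differently (note the stored arrays in this dataset have 240 rows rather than 250):

import numpy as np

def mean_pool(acts, input_rate=345, target_rate=10):
    # acts: [n_frames, 4800] Jukebox layer-36 activations for one clip.
    # Split the 345Hz frame sequence into 100ms windows (10 per second)
    # and average within each window.
    n_windows = round(acts.shape[0] / input_rate * target_rate)
    return np.stack([w.mean(axis=0) for w in np.array_split(acts, n_windows)])

# A 25s clip at 345Hz has 8625 frames; pooling yields 250 windows of 4800 dims
pooled = mean_pool(np.random.randn(8625, 4800).astype(np.float32))
print(pooled.shape)  # (250, 4800)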

Why Jukebox? Are these embeddings state-of-the-art as of September 2024?

Determining the optimal location to extract embeddings from large models typically requires extensive probing. This involves testing various activations or extracted layers of the model on different classification tasks through a process of trial and error. Additional fine-tuning is often done to optimise embeddings across these tasks.

The two largest publicly available music generation and music continuation (i.e. able to take audio as input) models are Jukebox and MusicGen. According to this paper on probing MusicGen, embeddings extracted from Jukebox appear to outperform MusicGen's on average across their classification tasks.
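
As an illustration of what probing looks like in practice, the sketch below trains a simple logistic-regression probe on frozen clip-level embeddings; the data here is random placeholder data, not drawn from this dataset:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder stand-ins: one 4800-dim clip-level embedding per example,
# with made-up labels (e.g. instrument classes)
X = np.random.randn(200, 4800)
y = np.random.randint(0, 5, size=200)

# Cross-validated probe accuracy is a proxy for how much task-relevant
# information the frozen embeddings encode
probe = LogisticRegression(max_iter=1000)
print(cross_val_score(probe, X, y, cv=5).mean())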

Dataset Features

This extension to the URMP dataset includes:

  1. File name of each WAV file in the URMP dataset (track_name)
  2. Start time of each chunk within the track (start_time)
  3. Jukebox embedding for each 25-second audio chunk (embedding)

There are embeddings for both the full mixes and separated instruments.
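
If track_name preserves the original URMP file names, where mixes are prefixed AuMix and separated stems AuSep (an assumption about this dataset's naming, so verify before relying on it), the two subsets can be selected with a filter:

from datasets import load_dataset

ds = load_dataset("jonflynn/urmp_jukebox_embeddings", split="train")

# Assumes URMP naming is preserved in track_name; adjust if the names differ
mixes = ds.filter(lambda ex: ex["track_name"].startswith("AuMix"))
stems = ds.filter(lambda ex: ex["track_name"].startswith("AuSep"))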

Applications

This extended dataset can be used for various tasks, including but not limited to:

  • Music source separation
  • Transcription
  • Performance analysis
  • Multi-modal information retrieval

Usage

from datasets import load_dataset

dataset = load_dataset("jonflynn/urmp_jukebox_embeddings")

# There's only one split: train
train_dataset = dataset['train']
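
To inspect a single row and load its embedding as a NumPy array (feature names follow the dataset metadata above):

import numpy as np

row = train_dataset[0]
print(row["track_name"], row["start_time"])

# array2_d features are returned as nested lists; convert for array ops
embedding = np.array(row["embedding"], dtype=np.float32)
print(embedding.shape)  # (240, 4800) per the dataset metadata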

Citation

If you use this dataset in your research, please cite the original URMP paper and this extension:

@article{li2018creating,
  title={Creating a multi-track classical music performance dataset for multi-modal music analysis: Challenges, insights, and applications},
  author={Li, Bochen and Liu, Xinzhao and Dinesh, Karthik and Duan, Zhiyao and Sharma, Gaurav},
  journal={IEEE Transactions on Multimedia},
  year={2018},
  publisher={IEEE}
}

@dataset{flynn2024urmpjukebox,
  author       = {Jon Flynn},
  title        = {Jukebox Embeddings for the URMP Dataset},
  year         = {2024},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/jonflynn/urmp_jukebox_embeddings}},
}