Dataset Viewer

The interactive dataset viewer table (auto-converted to Parquet) is omitted here. Its column schema: audio (clip duration roughly 0.52–1 s in the previewed rows), technique (class label, 9 classes), microphone (class label, 5 classes), string (string, 4 values), note (string, 36 values).

pretty_name: VADS

Dataset Card for VADS

Dataset Summary

VADS (Violin Audio Dataset for Sound Synthesis) is a dataset for violin technique classification and sound synthesis research. It consists of over 10,000 violin samples spanning nine playing techniques, recorded with five different microphones.

Languages

The dataset does not contain any spoken language; it consists of audio recordings of violin sounds.

Dataset Structure

Data Instances

Each data instance represents a single violin audio recording and includes information about the technique used, the microphone, the string played, and the note.
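
For illustration, a single instance can be loaded and inspected with the datasets library as sketched below; the repository ID is a placeholder, and the exact array shapes and sampling rates are assumptions rather than values documented in this card.

```python
from datasets import load_dataset

# Placeholder repository ID -- substitute the actual Hub ID for VADS.
ds = load_dataset("user/vads", split="train")

sample = ds[0]
audio = sample["audio"]             # dict with "array", "sampling_rate", "path"
print(audio["array"].shape)         # raw waveform as a NumPy array
print(audio["sampling_rate"])       # sampling rate in Hz
print(sample["technique"])          # technique label for this recording
print(sample["microphone"])         # microphone label
print(sample["string"], sample["note"])
```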

Data Fields

  • audio: The audio recording itself, represented as a waveform.
  • technique: The violin technique used in the recording (e.g., 'détaché', 'pizzicato', 'tremolo'), stored as a class label (see the sketch after this list).
  • microphone: The microphone used for the recording (e.g., 'close', 'room').
  • string: The violin string played (e.g., 'G', 'D', 'A', 'E').
  • note: The musical note played.
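
Since technique and microphone are class-label columns, the following is a minimal sketch of mapping their integer values back to human-readable names (again with a placeholder repository ID):

```python
from datasets import load_dataset, ClassLabel

ds = load_dataset("user/vads", split="train")  # placeholder repository ID

print(ds.features)  # shows the Audio feature and the label columns

# If `technique` is stored as a ClassLabel feature (as the viewer schema
# indicates, with 9 classes), its integers can be converted to names:
technique_feature = ds.features["technique"]
if isinstance(technique_feature, ClassLabel):
    print(technique_feature.names)                        # all technique names
    print(technique_feature.int2str(ds[0]["technique"]))  # name for the first sample
```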

Data Splits

The dataset is split into train, validation, and test sets.
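
A minimal sketch of loading all three splits and checking their sizes, assuming the standard split names and a placeholder repository ID:

```python
from datasets import load_dataset

dataset = load_dataset("user/vads")  # placeholder repository ID; returns a DatasetDict

for split_name in ("train", "validation", "test"):
    if split_name in dataset:
        print(split_name, len(dataset[split_name]))
```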

Dataset Creation

Curation Rationale

The dataset was created to provide a comprehensive resource for research on violin sound synthesis and technique classification.

Source Data

Initial Data Collection and Normalization

The audio recordings were collected by Politeles and normalized to a consistent dynamic range.
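
The exact normalization procedure is not documented in this card. Purely as an illustration of what normalizing to a target dynamic range can mean, the sketch below applies simple peak normalization; the -1 dBFS target is an assumption, not the curators' setting.

```python
import numpy as np

def peak_normalize(waveform: np.ndarray, target_dbfs: float = -1.0) -> np.ndarray:
    """Scale a waveform so its peak amplitude reaches target_dbfs (illustrative only)."""
    peak = np.max(np.abs(waveform))
    if peak == 0.0:
        return waveform
    target_amplitude = 10.0 ** (target_dbfs / 20.0)
    return waveform * (target_amplitude / peak)
```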

Annotation process

The recordings were annotated with the corresponding technique, microphone, string, and note information.

Who are the annotators?

The annotations were provided by Politeles.

Personal and Sensitive Information

The dataset does not contain any personal or sensitive information.

Considerations for Using the Data

Social Impact of Dataset

The dataset can contribute to research in music information retrieval, sound synthesis, and music education.

Discussion of Biases

The dataset may contain biases related to the specific violin, the performer, or the recording environment.

Other Known Limitations

The dataset is limited to violin sounds and does not include other instruments or musical styles.

Additional Information

Dataset Curators

Politeles

Licensing Information

The dataset is licensed under Creative Commons Attribution 4.0 International.

Citation Information
