|
--- |
|
license: other |
|
task_categories: |
|
- audio-classification |
|
language: |
|
- en |
|
tags: |
|
- biology |
|
- bioacoustics |
|
- audio-classification |
|
- multimodal |
|
- Audio-Text-to-Text |
|
pretty_name: NatureLM-audio-training |
|
size_categories: |
|
- 10M<n<100M |
|
configs: |
|
- config_name: NatureLM-audio-training |
|
features: |
|
- name: file_name |
|
dtype: string |
|
- name: metadata |
|
dtype: string |
|
- name: audio |
|
dtype: |
|
audio: |
|
sample_rate: 16000 |
|
decode: true |
|
- name: source_dataset |
|
dtype: string |
|
- name: id |
|
dtype: string |
|
- name: license |
|
dtype: string |
|
- name: instruction |
|
dtype: string |
|
- name: instruction_text |
|
dtype: string |
|
- name: output |
|
dtype: string |
|
- name: task |
|
dtype: string |
|
data_files: |
|
- split: train |
|
path: "train/part*/shard*" |
|
|
|
--- |
|
|
|
# Dataset card for NatureLM-audio-training |
|
|
|
|
|
## Overview |
|
|
|
NatureLM-audio-training is a large and diverse **audio-language dataset** designed for training bioacoustic models that generate natural-language answers to natural-language queries about a reference bioacoustic audio recording.
|
For example, for an in-the-wild audio recording of a bird species, a relevant query might be "What is the common name for the focal species in the audio?" to which an audio-language model trained on this dataset may respond with "Common yellowthroat". |
|
|
|
It consists of over 26 million audio-text pairs drawn from diverse sources, including animal vocalizations, insect sounds, human speech, music, and environmental sounds:
|
|
|
| Task | Dataset | # Hours | # Samples | |
|
|------|---------|---------|-----------| |
|
| CAP | WavCaps (Mei et al., 2023) | 7,568 | 402k | |
|
| CAP | AudioCaps (Kim et al., 2019) | 145 | 52k | |
|
| CLS | NSynth (Engel et al., 2017) | 442 | 300k | |
|
| CLS | LibriSpeechTTS (Zen et al., 2019), VCTK (Yamagishi et al. 2019) | 689 | 337k | |
|
| CAP | Clotho (Drossos et al. 2020) | 25 | 4k | |
|
| CLS, DET, CAP | Xeno-canto (Vellinga & Planque, 2015) | 10,416 | 607k | |
|
| CLS, DET, CAP | iNaturalist (iNaturalist) | 1,539 | 320k | |
|
| CLS, DET, CAP | Watkins (Sayigh et al., 2016) | 27 | 15k | |
|
| CLS, DET | Animal Sound Archive (Museum für Naturkunde Berlin) | 78 | 16k | |
|
| DET | Sapsucker Woods (Kahl et al., 2022a) | 285 | 342k | |
|
| CLS, DET | Barkley Canyon (Kanes, 2021) | 876 | 309k | |
|
| CLS | UrbanSound (Salamon & Jacoby, 2014) | 10 | 2k | |
|
|
|
CLS = classification, DET = detection, CAP = captioning; # Samples = number of audio files.
|
|
|
Introduced in the paper [NatureLM-audio: An Audio-Language Foundation Model for Bioacoustics](https://arxiv.org/pdf/2411.07186), this dataset aggregates data from sources such as Xeno-canto, iNaturalist, and the Animal Sound Archive, among others.
|
The accompanying model trained on this dataset is available at [EarthSpeciesProject/NatureLM-audio](https://huggingface.co/EarthSpeciesProject/NatureLM-audio).
|
|
|
* Developed by: David Robinson, Marius Miron, Masato Hagiwara, Milad Alizadeh, Gagan Narula, Sara Keen, Benno Weck, Matthieu Geist, Olivier Pietquin (Earth Species Project) |
|
* Funded by: More info at https://www.earthspecies.org/about-us#support |
|
* Shared by: Earth Species Project |
|
* Language(s) (NLP): English |
|
|
|
### Coverage of taxonomic groups |
|
The NatureLM-audio-training dataset, though diverse, still leans toward bird species. |
|
|
|
 |
|
|
|
## Usage |
|
```python |
|
from datasets import load_dataset |
|
|
|
dataset = load_dataset("EarthSpeciesProject/NatureLM-audio-training", split="train") |
|
print(dataset) |
|
``` |
|
|
|
### Example data |
|
```python |
|
|
|
|
# Inspect the first example (continuing from the `load_dataset` call above)
x = dataset[0]
|
audio = x["audio"]["array"] |
|
print(audio.shape) |
|
# (503808,) |
|
|
|
print(x["instruction"]) |
|
# '<Audio><AudioHere></Audio> What is the taxonomic name of the focal species in the audio?' |
|
|
|
print(x["output"]) |
|
# 'Chordata Aves Passeriformes Passerellidae Atlapetes fuscoolivaceus' |
|
|
|
print(x["task"]) |
|
# 'taxonomic-classification' |
|
|
|
import json |
|
metadata = json.loads(x["metadata"]) |
|
print(metadata) |
|
# {'recordist': 'Peter Boesman', |
|
# 'url': 'https://xeno-canto.org/sounds/uploaded/OOECIWCSWV/XC693334-LS_58842%20Dusky-headed%20brushfinch%20song%20B.mp3', |
|
# 'source': 'Xeno-canto', |
|
# 'duration': 31.488, |
|
# 'class': 'Aves', |
|
# 'family': 'Passerellidae', |
|
# 'genus': 'Atlapetes', |
|
# 'species': 'Atlapetes fuscoolivaceus', |
|
# 'phylum': 'Chordata', |
|
# 'order': 'Passeriformes', |
|
# 'subspecies': '', |
|
# 'data_category': 'animal', |
|
# 'text': None, |
|
# 'sample_rate': 16000} |
|
``` |
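As a sanity check, the decoded array length, the fixed 16 kHz sample rate, and the `duration` field of the metadata are mutually consistent. Using the numbers from the example above:

```python
# Sanity check: clip duration equals array length divided by the sample rate.
# The numbers below are taken from the example above.
n_samples = 503_808      # audio.shape[0]
sample_rate = 16_000     # fixed for all clips in this dataset

duration_s = n_samples / sample_rate
print(duration_s)  # 31.488, matching metadata["duration"]
```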
|
### Example prompts |
|
|
|
Prompt: What is the common name for the focal species in the audio? |
|
Answer: Humpback Whale |
|
|
|
Prompt: Which of these, if any, are present in the audio recording? Single pulse gibbon call, Multiple pulse gibbon call, Gibbon duet, None. |
|
Answer: Gibbon duet |
|
|
|
Prompt: What is the common name for the focal species in the audio? |
|
Answer: Spectacled Tetraka |
|
|
|
Prompt: What is the life stage of the focal species in the audio? |
|
Answer: Juvenile |
|
|
|
Prompt: What type of vocalization is heard from the focal species in the audio? Answer with either 'call' or 'song'.
|
|
|
Prompt: Caption the audio, using the common name for any animal species. |
|
|
|
## Dataset Composition |
|
|
|
NatureLM-audio-training combines data from several well-known sources. There are a total of 26,440,512 samples (examples). |
|
The data are organized in shards of 2,500 samples each. We additionally provide an `annotations.jsonl` file that contains taxonomic information for each sample (family, genus, species, common name) and other relevant metadata; it can be queried to build custom data mixes. The `id` column of `annotations.jsonl` matches the `id` field in the dataset, and the `shard_id` column gives the shard containing the corresponding sample.
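As a minimal sketch of how `annotations.jsonl` could be queried to build a data mix (the records below are illustrative stand-ins, not real annotation lines; field names follow the description above):

```python
import json

# Illustrative stand-ins for lines of annotations.jsonl; the field names
# (id, shard_id, family) follow the description above.
jsonl_lines = [
    '{"id": "a1", "shard_id": 17, "family": "Passerellidae"}',
    '{"id": "b2", "shard_id": 3,  "family": "Balaenopteridae"}',
]

annotations = [json.loads(line) for line in jsonl_lines]

# Keep only samples from a family of interest, then note which shards to load.
subset = [a for a in annotations if a["family"] == "Passerellidae"]
shards_needed = sorted({a["shard_id"] for a in subset})
print(shards_needed)  # [17]
```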
|
|
|
|
|
## Tasks and Applications |
|
|
|
The dataset includes several types of bioacoustically relevant tasks. These are designed to be flexible and support a variety of applications, including (but not limited to):
|
- *taxonomic-classification* |
|
- *species-sci-detection-hard* |
|
- *genus-detection* |
|
- *call-type* |
|
- *caption-scientific-rich* |
|
- *open-ended question* |
|
- *call-type-with-common-name* |
|
- *lifestage-with-common-name* |
|
... |
|
|
|
These tasks are particularly useful for exploring zero-shot learning applications in bioacoustics. |
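For instance, a task-specific subset can be selected by filtering on the `task` field. With the `datasets` library this would be `dataset.filter(lambda x: x["task"] == "call-type")`; the sketch below uses illustrative stand-in rows:

```python
# Sketch: select examples for one task type. The rows below are illustrative
# stand-ins for real dataset rows, which carry the same "task" field.
rows = [
    {"id": "1", "task": "taxonomic-classification"},
    {"id": "2", "task": "call-type"},
    {"id": "3", "task": "call-type"},
]

call_type_rows = [r for r in rows if r["task"] == "call-type"]
print(len(call_type_rows))  # 2
```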
|
|
|
## Data Fields |
|
The following fields are present in each example: |
|
- `source_dataset` (str): One of the source datasets mentioned above |
|
- `audio` (Audio): The audio data in float32 format. |
|
- `id` (str): Sample uuid. |
|
- `metadata` (str): A JSON string of extra per-sample data, such as the URL of the original audio, the recordist, duration, sample rate, and taxonomic information (family, genus, species, common name, etc.).
|
- `file_name` (str): The sample's file name.
|
- `instruction` (str): A prompt (query) corresponding to the audio, with a placeholder for audio tokens, e.g. `<Audio><AudioHere></Audio> What is the scientific name for the focal species in the audio?`
|
- `instruction_text` (str): Same as `instruction` but without the placeholder for audio tokens. |
|
- `output` (str): The expected output from the model.
|
- `task` (str): The task type, e.g. 'taxonomic-classification', 'caption-common', 'lifestage', 'speech-nspeakers'.
|
- `license` (str): The license of the dataset. For example, 'CC-BY-NC' or 'CC0'. |
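One way a training pipeline might consume the `instruction` field is to split the text around the audio placeholder, so the surrounding text can be tokenized separately and audio embeddings spliced in between. The snippet below is an illustrative sketch, not the NatureLM-audio training code:

```python
# Illustrative sketch (not the NatureLM-audio training code): split an
# instruction around the audio placeholder so the two text halves can be
# tokenized separately and audio embeddings inserted in between.
AUDIO_TAG = "<AudioHere>"

instruction = (
    "<Audio><AudioHere></Audio> "
    "What is the scientific name for the focal species in the audio?"
)

before, after = instruction.split(AUDIO_TAG, 1)
print(before)  # <Audio>
print(after)   # </Audio> What is the scientific name for the focal species in the audio?
```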
|
|
|
## Licensing |
|
|
|
Due to its composite nature, `NatureLM-audio-training` is subject to multiple licenses. Each sample's `license` field indicates the license that applies to that sample. The dataset is not intended for commercial use, and users should adhere to the licenses of the individual source datasets.
|
|
|
## Citation |
|
|
|
If you use NatureLM-audio-training, please cite the following: |
|
|
|
```bibtex |
|
@misc{naturelm-audio,
  title={NatureLM-audio: An Audio-Language Foundation Model for Bioacoustics},
  author={Robinson, David and Miron, Marius and Hagiwara, Masato and Alizadeh, Milad and Narula, Gagan and Keen, Sara and Weck, Benno and Geist, Matthieu and Pietquin, Olivier},
  url={https://arxiv.org/pdf/2411.07186},
  note={Preprint},
  year={2024}
}
|
``` |
|
|
|
|
|
## Contact |
|
|
|
For questions, comments, or contributions, please contact: |
|
- D. Robinson (david at earthspecies dot org) |
|
- M. Hagiwara (masato at earthspecies dot org) |
|
- M. Miron (marius at earthspecies dot org) |
|
- G. Narula (gagan at earthspecies dot org) |
|
- M. Alizadeh (milad at earthspecies dot org) |
|
|