Formats: parquet
Size: 10K - 100K
Update README.md
README.md CHANGED
@@ -137,18 +137,6 @@ configs:
 
 **Bird3M** is the first synchronized, multi-modal, multi-individual dataset designed for comprehensive behavioral analysis of freely interacting birds, specifically zebra finches, in naturalistic settings. It addresses the critical need for benchmark datasets that integrate precisely synchronized multi-modal recordings to support tasks such as 3D pose estimation, multi-animal tracking, sound source localization, and vocalization attribution. The dataset facilitates research in machine learning, neuroscience, and ethology by enabling the development of robust, unified models for long-term tracking and interpretation of complex social behaviors.
 
-### Key Features
-- **Duration**: 22.2 hours of synchronized multi-modal recordings, including a fully annotated subset with 4,420 video frames and 2.5 hours of contextual audio and sensor data.
-- **Modalities**:
-  - **Multi-view video**: Three orthogonal color cameras (top, side, back) at 47 fps, supplemented by a monochrome nest camera.
-  - **Multi-channel audio**: Wall-mounted microphones (16 kHz) and body-mounted accelerometers (24,414 Hz, down-sampled to 16 kHz).
-  - **Radio signals**: FM radio phases and magnitudes from four orthogonal antennas.
-- **Annotations**:
-  - **Visual**: 57,396 3D keypoints (5 per bird: beak tip, head center, backpack center, tail base, tail end) across 4,420 frames, with 2D keypoints, visibility labels, and bounding boxes.
-  - **Audio**: 4,902 vocalization segments with onset/offset times and vocalizer identities, linked across microphone and accelerometer channels.
-- **Experimental Setup**: Data from 15 experiments (2–8 birds each) conducted in the **Birdpark** system, with sessions lasting 4–120 days.
-- **Applications**: Supports 3D localization, pose estimation, multi-animal tracking, sound source localization/separation, and cross-modal behavioral analyses (e.g., vocalization directness).
-
 ### Purpose
 Bird3M bridges the gap in publicly available datasets for multi-modal animal behavior analysis by providing:
 1. A benchmark for unified machine learning models tackling multiple behavioral tasks.
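Since the card lists parquet as the distribution format, the dataset should load directly through the `datasets` library. Below is a minimal sketch; the repository ID is a placeholder, not the confirmed hub path, and the split name is an assumption:

```python
# Minimal loading sketch -- the repo ID below is hypothetical; substitute
# the actual Bird3M dataset identifier on the Hub.
from datasets import load_dataset

ds = load_dataset("bird3m/Bird3M")   # hypothetical repo ID
print(ds)                            # shows available splits and columns
row = ds["train"][0]                 # assumes a "train" split exists
print(row.keys())                    # inspect the per-record schema
```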
137 |
|
138 |
**Bird3M** is the first synchronized, multi-modal, multi-individual dataset designed for comprehensive behavioral analysis of freely interacting birds, specifically zebra finches, in naturalistic settings. It addresses the critical need for benchmark datasets that integrate precisely synchronized multi-modal recordings to support tasks such as 3D pose estimation, multi-animal tracking, sound source localization, and vocalization attribution. The dataset facilitates research in machine learning, neuroscience, and ethology by enabling the development of robust, unified models for long-term tracking and interpretation of complex social behaviors.
|
139 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
140 |
### Purpose
|
141 |
Bird3M bridges the gap in publicly available datasets for multi-modal animal behavior analysis by providing:
|
142 |
1. A benchmark for unified machine learning models tackling multiple behavioral tasks.
|
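The annotation layout described above (5 named keypoints per bird, annotated per frame) suggests a simple reshape when consuming the 3D poses. The field name `keypoints_3d` in this sketch is an illustrative assumption, not the card's documented schema:

```python
import numpy as np

# Hypothetical per-frame record: a flat list of 3D coordinates covering
# every bird in the frame, 5 keypoints each (beak tip, head center,
# backpack center, tail base, tail end). Field names are assumptions.
KEYPOINTS_PER_BIRD = 5

def poses_from_record(record: dict) -> np.ndarray:
    """Reshape a flat coordinate list into (birds, 5 keypoints, xyz)."""
    flat = np.asarray(record["keypoints_3d"], dtype=np.float64)  # assumed field
    n_birds = flat.size // (KEYPOINTS_PER_BIRD * 3)
    return flat.reshape(n_birds, KEYPOINTS_PER_BIRD, 3)

# Example with dummy data for two birds:
dummy = {"keypoints_3d": np.random.rand(2 * KEYPOINTS_PER_BIRD * 3).tolist()}
print(poses_from_record(dummy).shape)  # (2, 5, 3)
```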