---
tags:
- computer-vision
- audio
- keypoint-detection
- animal-behavior
- multi-modal
- jsonl
---

# Bird3m Dataset

## Dataset Description

This dataset contains multi-modal data for bird tracking and behavior analysis, primarily focused on zebra finches (based on category names in the source data). Each data entry corresponds to a specific bird instance within a video frame.

The data includes:

* 3D keypoints derived from multi-view reconstruction.
* 2D keypoints and bounding boxes for top, side, and back camera views.
* Information about the video frame and associated processed audio/radio data files.
* Metadata about the bird and the experimental setup.
* Linked vocalization events associated with the specific bird in the frame.

This dataset is designed to facilitate research in areas such as multi-view 3D reconstruction, multi-modal data fusion, and fine-grained animal behavior analysis.

[More details about the dataset origin, collection process, and purpose can be added here.]

## Dataset Structure

The dataset is structured into splits based on the `split` field in the original data. The standard splits are `train`, `validation` (mapped from `val`), and `test`.

Each split is a standard Hugging Face `Dataset` object. Each row in the dataset corresponds to a single detected bird instance in a single frame, with associated multi-modal data.

```python
# Example of accessing splits
from datasets import load_dataset

dataset = load_dataset("anonymous-submission000/bird3m")  # Replace with your actual repo_id

train_dataset = dataset["train"]
validation_dataset = dataset["validation"]  # Mapped from the original "val" split
test_dataset = dataset["test"]
```

## Dataset Fields

Each example in the dataset has the following fields:

* `bird_id` (`string`): A unique identifier for the specific bird instance within the context of its frame (e.g., "bird_1", "bird_2").
* `back_bbox_2d` (`Sequence[float64]`): 2D bounding box coordinates for the bird instance in the **back** camera view. Format is likely `[x, y, width, height]` or `[x_min, y_min, x_max, y_max]`.
* `back_keypoints_2d` (`Sequence[float64]`): 2D keypoint coordinates and visibility flags for the bird instance in the **back** camera view. Format is likely `[x1, y1, v1, x2, y2, v2, ...]`, where `v` is the visibility status (e.g., 0: not labeled, 1: labeled but not visible, 2: visible and labeled). See the parsing sketch after this list.
* `back_view_boundary` (`Sequence[int64]`): Boundary coordinates defining the relevant area for the **back** view within the full image dimensions. Format is likely `[x, y, width, height]`.
* `bird_name` (`string`): The biological or specific identifier assigned to the bird (e.g., "b13k20_f", "b13o15_m", "dead", "2U7a_j").
* `video_name` (`string`): Identifier for the original video file the frame belongs to (e.g., "BP_2020-10-13_19-44-38_564726_0240000").
* `frame_name` (`string`): Filename of the individual frame (e.g., "img00961.png").
* `frame_path` (`Image`): Path to the image file (`.png`) for this frame. The `datasets` library loads the image automatically when this field is accessed.
* `keypoints_3d` (`Sequence[Sequence[float64]]`): 3D keypoint coordinates for the bird instance. Each inner sequence is a 3D point, likely `[x, y, z]`.
* `radio_path` (`Value(dtype='binary')`): Contents of an associated radio data file (`.npz`), stored as raw bytes. Use an external library such as `numpy` to parse the `.npz` content (see the example below).
* `reprojection_error` (`Sequence[float64]`): Reprojection error values, likely one per 3D keypoint.
* `side_bbox_2d` (`Sequence[float64]`): 2D bounding box for the **side** camera view (format as for `back_bbox_2d`).
* `side_keypoints_2d` (`Sequence[float64]`): 2D keypoint coordinates and visibility for the **side** camera view (format as for `back_keypoints_2d`).
* `side_view_boundary` (`Sequence[int64]`): View boundary coordinates for the **side** view (format as for `back_view_boundary`).
* `backpack_color` (`string`): Color of the backpack tag on the bird (e.g., "purple", "yellow", "red").
* `experiment_id` (`string`): Simplified experiment identifier (e.g., "copExpBP03", "juvExpBP05").
* `split` (`string`): Dataset split for this example ("train", "validation", or "test").
* `top_bbox_2d` (`Sequence[float64]`): 2D bounding box for the **top** camera view (format as for `back_bbox_2d`).
* `top_keypoints_2d` (`Sequence[float64]`): 2D keypoint coordinates and visibility for the **top** camera view (format as for `back_keypoints_2d`).
* `top_view_boundary` (`Sequence[int64]`): View boundary coordinates for the **top** view (format as for `back_view_boundary`).
* `video_path` (`Video`): Path to the video clip file (`.mp4`) containing this frame. The video object is loaded automatically when this field is accessed.
* `acc_ch_map` (`{'0': string, ...}`): Dictionary mapping accelerometer channel indices (as strings) to bird identifiers or descriptions.
* `acc_sr` (`float64`): Accelerometer sampling rate in Hz.
* `has_overlap` (`bool`): Whether the accelerometer event overlaps with the vocalization event (derived during preprocessing).
* `mic_ch_map` (`{'0': string, ...}`): Dictionary mapping microphone channel indices (as strings) to microphone names or descriptions.
* `mic_sr` (`float64`): Microphone sampling rate in Hz.
* `acc_path` (`Audio`): Path to the processed accelerometer audio file (`.wav`). The audio signal is loaded automatically when this field is accessed.
* `mic_path` (`Audio`): Path to the processed microphone audio file (`.wav`). The audio signal is loaded automatically when this field is accessed.
* `vocalization` (`Sequence[Dict]`): A list of dictionaries, one per vocalization event attributed to this bird in this frame. Each dictionary has the following fields:
  * `overlap_type` (`string`): Type or confidence of the overlap/attribution (mapped from `attribution_confidence_step2`).
  * `has_bird` (`bool`): Whether this vocalization event was attributed to a bird (mapped from `is_attributed_to_bird`).
  * `2ddistance` (`bool`): Whether the 2D keypoint match distance for this attribution was within a threshold (likely < 20 px; mapped from `keypoint_match_is_close_lt_20px`).
  * `small_2ddistance` (`float64`): The minimum 2D keypoint match distance, in pixels, for this attribution (mapped from `keypoint_match_min_distance_px`).
  * `voc_metadata` (`Sequence[float64]`): Likely the onset and offset times of the vocalization event within the associated audio clip (e.g., `[onset_sec, offset_sec]`; mapped from `vocalization_onset_offset_sec_in_clip`).

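The flat keypoint lists are easiest to work with as `(N, 3)` arrays. Below is a minimal parsing sketch; it assumes the `[x1, y1, v1, x2, y2, v2, ...]` triplet layout described above, which is not confirmed by the source, so verify it against a few examples first:

```python
import numpy as np

def parse_keypoints_2d(flat_keypoints):
    """Reshape a flat [x1, y1, v1, x2, y2, v2, ...] list into an (N, 3) array.

    Assumes the triplet layout described above -- verify against your data.
    """
    return np.asarray(flat_keypoints, dtype=np.float64).reshape(-1, 3)

# Hypothetical usage on one loaded example:
# kps = parse_keypoints_2d(example["top_keypoints_2d"])
# visible = kps[kps[:, 2] == 2]  # keep keypoints flagged "visible and labeled"
# print(f"{len(visible)}/{len(kps)} keypoints visible in the top view")
```
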
## How to Use

```python
from io import BytesIO

from datasets import load_dataset
import numpy as np  # Needed to parse the .npz radio data

# Load the dataset from the Hub
dataset = load_dataset("anonymous-submission000/bird3m")  # Replace with your actual repo_id

# Access a split
train_data = dataset["train"]

# Access an example
example = train_data[0]

# Access fields
bird_id = example["bird_id"]
keypoints_3d = example["keypoints_3d"]
top_bbox = example["top_bbox_2d"]
vocalizations = example["vocalization"]  # A list of dicts

# Access multimedia files (they are lazy-loaded)
image = example["frame_path"]  # PIL Image
video = example["video_path"]  # Video object
mic_audio = example["mic_path"]  # Audio dict with 'array' and 'sampling_rate'
acc_audio = example["acc_path"]  # Audio dict with 'array' and 'sampling_rate'

# Access the audio arrays and sampling rates
mic_array = mic_audio["array"]
mic_sr = mic_audio["sampling_rate"]
acc_array = acc_audio["array"]
acc_sr_actual = acc_audio["sampling_rate"]  # From the file metadata; compare with example["acc_sr"]

# Parse the binary radio data (assuming it is a NumPy .npz archive)
radio_bytes = example["radio_path"]  # Raw bytes of the .npz file
try:
    radio_data = np.load(BytesIO(radio_bytes))
    print("Radio data keys:", list(radio_data.keys()))  # Access arrays via radio_data['some_key']
except Exception as e:
    print(f"Could not load radio data: {e}")

print(f"Bird ID: {bird_id}")
print(f"Number of 3D keypoints: {len(keypoints_3d)}")
print(f"Top bounding box: {top_bbox}")
print(f"Number of vocalization events: {len(vocalizations)}")

if vocalizations:
    first_vocal = vocalizations[0]
    print(f"First vocal event metadata: {first_vocal.get('voc_metadata')}")
    print(f"First vocal event overlap type: {first_vocal.get('overlap_type')}")

# Slice the microphone clip to the first vocalization event using voc_metadata
if vocalizations and mic_sr is not None:
    onset, offset = vocalizations[0]["voc_metadata"]
    onset_sample = int(onset * mic_sr)
    offset_sample = int(offset * mic_sr)
    vocal_audio_clip = mic_array[onset_sample:offset_sample]
    print(f"Duration of first vocal clip: {offset - onset:.3f} seconds")
    print(f"Shape of first vocal audio clip: {vocal_audio_clip.shape}")
```
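
To sanity-check the annotations, it can help to overlay a 2D box and keypoints on the frame. Below is a minimal `matplotlib` sketch; it assumes the `[x, y, width, height]` box format and the `[x, y, v]` keypoint triplet layout described under "Dataset Fields" (both formats are unconfirmed, so verify them against a few frames first):

```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from datasets import load_dataset

dataset = load_dataset("anonymous-submission000/bird3m")  # Replace with your actual repo_id
example = dataset["train"][0]
image = example["frame_path"]  # PIL Image

# Assumed formats (see "Dataset Fields"); adjust if your data differs
kps = np.asarray(example["top_keypoints_2d"], dtype=np.float64).reshape(-1, 3)
x, y, w, h = example["top_bbox_2d"]  # assumed [x, y, width, height]

fig, ax = plt.subplots()
ax.imshow(image)
ax.add_patch(patches.Rectangle((x, y), w, h, fill=False, edgecolor="red"))
visible = kps[kps[:, 2] == 2]  # keep keypoints flagged as visible and labeled
ax.scatter(visible[:, 0], visible[:, 1], s=10, c="yellow")
ax.set_title(f"{example['bird_id']} ({example['bird_name']})")
plt.show()

# The *_view_boundary fields (assumed [x, y, width, height]) locate each camera
# view inside the full frame; e.g., to crop the top view:
# vx, vy, vw, vh = example["top_view_boundary"]
# top_view = image.crop((vx, vy, vx + vw, vy + vh))
```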