---
license: mit
tags:
- video-understanding
---
Dataset of the paper [Interacted Object Grounding in Spatio-Temporal Human-Object Interactions](https://huggingface.co/papers/2412.19542)
Code: https://github.com/DirtyHarryLYL/HAKE-AVA
# Preparing Dataset for HAKE-GIO + HAKE-AVA-PaSta (h box + b box & class + action + PaSta)
1. Dataset downloading steps
    1. Download the AVA dataset (following [SlowFast](https://github.com/facebookresearch/SlowFast)):
```
./script/download_AVA_dataset.sh
```
    2. Download the annotations
    The annotations are contained in the GIO_annotation package.
    Please download it to the ava folder and extract the package.
    3. Structure of the downloaded data
```
GIO
|_ GIO_annotation
| |_ GIO_test.csv
| |_ GIO_train.csv
|_ frames
| |_ [video name 0]
| | |_ [video name 0]_000001.jpg
| | |_ [video name 0]_000002.jpg
| | |_ ...
| |_ [video name 1]
| |_ [video name 1]_000001.jpg
| |_ [video name 1]_000002.jpg
| |_ ...
|_ frame_lists
| |_ train.csv
| |_ val.csv
```
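Once extracted, you can verify that the layout matches the tree above. This is a hypothetical sanity-check sketch (the `GIO_ROOT` path and `check_gio_layout` helper are not part of the dataset tooling; adjust the root to wherever you placed the data):

```python
import os

# Root of the extracted dataset; adjust to your local path.
GIO_ROOT = "GIO"

def check_gio_layout(root):
    """Return a list of expected files/directories that are missing."""
    expected = [
        os.path.join(root, "GIO_annotation", "GIO_test.csv"),
        os.path.join(root, "GIO_annotation", "GIO_train.csv"),
        os.path.join(root, "frames"),
        os.path.join(root, "frame_lists", "train.csv"),
        os.path.join(root, "frame_lists", "val.csv"),
    ]
    return [p for p in expected if not os.path.exists(p)]

missing = check_gio_layout(GIO_ROOT)
if missing:
    print("Missing paths:", missing)
```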
2. Annotation format
Files in the GIO_annotation folder contain the per-frame annotations, including the human/object boxes, action, object name, etc.
Example:
| video | frame | h_x1 | h_y1 | h_x2 | h_y2 | o_x1 | o_y1 | o_x2 | o_y2 | action | object_name | human_id | object_id |
| ----------- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ------ | ----------- | -------- | --------- |
| -5KQ66BBWC4 | 905 | 0.392 | 0.033 | 0.556 | 0.618 | 0.37 | 0.019 | 0.432 | 0.608 | 6 | stick | 12 | 0 |
| -5KQ66BBWC4 | 906 | 0.408 | 0.008 | 0.586 | 0.639 | 0.37 | 0.036 | 0.457 | 0.678 | 6 | stick | 12 | 0 |
| -5KQ66BBWC4 | 907 | 0.42 | 0.115 | 0.616 | 0.883 | 0.371 | 0.143 | 0.466 | 0.878 | 6 | stick | 12 | 0 |
The meaning of each column:
- video: name of the video
- frame: timestamp (in seconds) of the frame
- h_x1~h_y2: the top-left and bottom-right corners of the human box
- o_x1~o_y2: the top-left and bottom-right corners of the object box
- action: the action label of the person in the human box
- object_name: name of the object
- human_id: ID of the person performing the action
- object_id: category ID of the object
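The columns above can be loaded with a short parser. This is a minimal sketch, assuming the CSV rows follow the column order of the table above with no header row (the `load_gio_annotations` helper and that assumption are illustrative, not part of the released tooling):

```python
import csv

# Column order as documented above.
COLUMNS = ["video", "frame", "h_x1", "h_y1", "h_x2", "h_y2",
           "o_x1", "o_y1", "o_x2", "o_y2",
           "action", "object_name", "human_id", "object_id"]

def load_gio_annotations(path):
    """Parse a GIO annotation CSV into a list of dicts.

    Box coordinates are kept as floats; frame, action, and the
    ID fields are parsed as ints.
    """
    rows = []
    with open(path, newline="") as f:
        for record in csv.reader(f):
            row = dict(zip(COLUMNS, record))
            for key in COLUMNS[2:10]:          # the eight box coordinates
                row[key] = float(row[key])
            for key in ("frame", "action", "human_id", "object_id"):
                row[key] = int(row[key])
            rows.append(row)
    return rows
```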