---
license: cc-by-4.0
pretty_name: XS-VID
size_categories:
- 10K<n<100K
---

# XS-VID: An Extremely Small Video Object Detection Dataset

## Dataset Description

XS-VID is a benchmark dataset for extremely small video object detection. It is intended to evaluate video object detection models, with a particular focus on efficiency and effectiveness in resource-limited settings, and it includes a variety of videos and scenarios to comprehensively assess model capabilities.

**[News]**: XS-VIDv2 is coming soon! The upcoming release will significantly expand the dataset with many new videos and scenarios. Stay tuned for updates!

To access the XS-VID benchmark, visit **https://gjhhust.github.io/XS-VID/**.

## Dataset Download

### Using Command Line

This section provides instructions for downloading and extracting the XS-VID dataset from Hugging Face using command-line tools on both Linux and Windows.

#### Prerequisites

*   **Python and pip:** Ensure Python and pip are installed on your system.
*   **`huggingface_hub` library:** Install it using pip:

    ```bash
    pip install huggingface_hub
    ```

#### Download and Extract Dataset

**Linux Command:**

```bash
pip install huggingface_hub && \
huggingface-cli download lanlanlan23/XS-VID --repo-type dataset --local-dir ./XS-VID && \
mkdir -p ./XS-VID/{annotations,images} && \
unzip -o ./XS-VID/annotations.zip -d ./XS-VID/annotations && \
find ./XS-VID -name 'videos_subset_*.zip' -exec unzip -o {} -d ./XS-VID/images \; && \
rm -f ./XS-VID/*.zip
```

**Windows Command (CMD):**

```cmd
pip install huggingface_hub && ^
huggingface-cli download lanlanlan23/XS-VID --repo-type dataset --local-dir .\XS-VID && ^
mkdir ".\XS-VID\annotations" && mkdir ".\XS-VID\images" && ^
powershell -Command "Expand-Archive -Path '.\XS-VID\annotations.zip' -DestinationPath '.\XS-VID\annotations' -Force" && ^
(for /r ".\XS-VID" %f in (videos_subset_*.zip) do powershell -Command "Expand-Archive -Path '%f' -DestinationPath '.\XS-VID\images' -Force") && ^
del /f /q ".\XS-VID\*.zip"
```
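
If you prefer Python over shell commands, the same download-and-extract flow can be sketched with the `huggingface_hub` API. This is a minimal cross-platform sketch, not part of the official tooling; the archive names match the layout used by the commands above.

```python
# Minimal sketch: download the dataset repo with huggingface_hub and extract
# the archives with Python's zipfile module (cross-platform).
import zipfile
from pathlib import Path

from huggingface_hub import snapshot_download

root = Path(snapshot_download(repo_id="lanlanlan23/XS-VID",
                              repo_type="dataset",
                              local_dir="./XS-VID"))

(root / "annotations").mkdir(exist_ok=True)
(root / "images").mkdir(exist_ok=True)

# annotations.zip -> annotations/
with zipfile.ZipFile(root / "annotations.zip") as zf:
    zf.extractall(root / "annotations")

# every videos_subset_*.zip -> images/, then remove the archive
for zip_path in sorted(root.rglob("videos_subset_*.zip")):
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(root / "images")
    zip_path.unlink()
```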

### Expected Folder Structure

After running the download and extraction commands, the XS-VID dataset folder should have the following structure:

```
./XS-VID/
├── annotations/    # Annotation files
└── images/         # Video frames (extracted from videos_subset_*.zip)
```
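
The evaluation tool below works with COCO-style JSON annotations, so a quick sanity check after extraction is to load one annotation file and confirm that a referenced frame exists on disk. This is a minimal sketch; the annotation filenames inside `annotations/` and the exact layout of `images/` are assumptions, so adjust the paths to whatever extraction produced.

```python
# Minimal sanity check: load a COCO-style annotation file and look up one frame.
# The glob pattern and the images/<file_name> layout are assumptions; adjust as needed.
import json
from pathlib import Path

root = Path("./XS-VID")
ann_file = next(root.glob("annotations/*.json"))  # pick any extracted annotation file

with open(ann_file) as f:
    coco = json.load(f)

print(f"{ann_file.name}: {len(coco['images'])} frames, "
      f"{len(coco['annotations'])} boxes, "
      f"{len(coco['categories'])} categories")

first = coco["images"][0]
frame_path = root / "images" / first["file_name"]
print(first["file_name"], "exists:", frame_path.exists())
```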

### Notes

*   The download commands above delete the ZIP files after successful extraction.
*   Ensure you have sufficient disk space available (approximately the size of the ZIP files plus the extracted content).

## Evaluation Tool Usage

To evaluate your models on the XS-VID dataset, please follow these steps:

1.  **Clone the repository:** Obtain the evaluation tool files, including `eval_tool.py`, `cocoeval.py`, and `mask.py` from the main branch of the XS-VID repository.
2.  **Set JSON paths:** In `eval_tool.py`, configure the paths to your test COCO JSON annotation file and your prediction JSON file (a standard COCO results file; see the sketch after these steps).
3.  **Run evaluation:** Execute the evaluation script using the command:

    ```bash
    python eval_tool.py
    ```
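
The prediction JSON referenced in step 2 is a standard COCO results file: a flat list of detections, one entry per predicted box. Below is a minimal sketch of writing such a file; the ids, boxes, and scores are placeholders that should come from your model and from the ids in the test annotation JSON.

```python
# Minimal sketch of a COCO-format prediction file for eval_tool.py.
# All values below are placeholders.
import json

predictions = [
    {
        "image_id": 1,                    # id from the "images" list of the test JSON
        "category_id": 1,                 # id from the "categories" list
        "bbox": [10.0, 20.0, 8.0, 6.0],   # [x, y, width, height] in pixels
        "score": 0.87,                    # detection confidence
    },
    # ... one dict per detection ...
]

with open("predictions.json", "w") as f:
    json.dump(predictions, f)
```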

## Citation

If you utilize the XS-VID dataset in your research or applications, please cite the following paper:

```
@article{guo2024XSVID,
  title={XS-VID: An Extremely Small Video Object Detection Dataset},
  author={Jiahao Guo and Ziyang Xu and Lianjun Wu and Fei Gao and Wenyu Liu and Xinggang Wang},
  journal={arXiv preprint arXiv:2407.18137},
  year={2024}
}
```

## Support and Contact

For any questions or issues regarding the XS-VID benchmark, please feel free to contact us at gjh[email protected].