# Audio-Visual Instance Segmentation
[arXiv](https://arxiv.org/abs/2412.03069) | [Project Page](https://ruohaoguo.github.io/avis/) | [Dataset](https://1drv.ms/u/c/3c9af704fb61931d/EVOs609SGMxLsbvVzVJHAa4Bmnu4GVZGjqYHQxDz0NKTew?e=WQU2Uf)
Ruohao Guo, Xianghua Ying*, Yaru Chen, Dantong Niu, Guangyao Li, Liao Qu, Yanyu Qi, Jinxing Zhou, Bowei Xing, Wenzhen Yue, Ji Shi, Qixun Wang, Peiliang Zhang, Buwen Liang
## 📰 News
🔥 **2025.03.01**: Code and checkpoints are released!
🔥 **2025.02.27**: AVIS was accepted to **CVPR 2025**! 🎉🎉🎉
🔥 **2024.11.12**: Our [project page](https://ruohaoguo.github.io/avis/) is now available!
🔥 **2024.11.11**: The AVISeg dataset has been uploaded to [OneDrive](https://1drv.ms/u/c/3c9af704fb61931d/EVOs609SGMxLsbvVzVJHAa4Bmnu4GVZGjqYHQxDz0NKTew?e=WQU2Uf). Feel free to download and use it!
## 🍿 Introduction
In this paper, we propose a new multi-modal task, termed audio-visual instance segmentation (AVIS), which aims to simultaneously identify, segment, and track individual sounding object instances in audible videos. To facilitate this research, we introduce a high-quality benchmark named AVISeg, containing over 90K instance masks from 26 semantic categories in 926 long videos. Additionally, we propose a strong baseline model for this task. Our model first localizes the sound sources within each frame and condenses object-specific contexts into concise tokens. It then builds long-range audio-visual dependencies among these tokens using window-based attention and tracks sounding objects across the entire video sequence. A toy sketch of the windowed-attention idea is given after the figure below.
<div align='center'>
<img src="./assets/teaser_figure.png" class="interpolation-image" alt="Teaser figure." height="50%" width="100%" />
</div>
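As a rough illustration of the baseline's second stage, the sketch below condenses each frame's sounding objects into tokens and links them across time with window-based self-attention. This is our own toy sketch, not the authors' implementation: the module names, shapes, feature dimension, and window size are all assumptions.

```python
import torch
import torch.nn as nn


class WindowedTokenAttention(nn.Module):
    """Toy version of linking per-frame object tokens with window-based attention."""

    def __init__(self, dim: int = 256, num_heads: int = 8, window: int = 6):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (T, N, C) = frames x condensed object tokens per frame x channels
        T, N, C = tokens.shape
        w = self.window
        pad = (w - T % w) % w  # zero-pad so T splits evenly into windows
        if pad:
            tokens = torch.cat([tokens, tokens.new_zeros(pad, N, C)], dim=0)
        # group consecutive frames into windows: (num_windows, w * N, C)
        x = tokens.reshape(-1, w * N, C)
        # every token attends to all tokens of all frames in its window
        x, _ = self.attn(x, x, x)
        return x.reshape(-1, N, C)[:T]  # ungroup windows and drop the padding


# toy usage: 10 frames, 5 sounding-object tokens per frame, 256-d features
frame_tokens = torch.randn(10, 5, 256)
print(WindowedTokenAttention()(frame_tokens).shape)  # torch.Size([10, 5, 256])
```

Grouping frames into fixed-size windows keeps the attention cost roughly linear in video length while still letting tokens exchange information across neighboring frames.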
## ⚙️ Installation
```bash
conda create --name avism python=3.8 -y
conda activate avism
conda install pytorch==1.9.0 torchvision==0.10.0 cudatoolkit=11.1 -c pytorch -c nvidia
pip install -U opencv-python
cd ./AVISM
git clone https://github.com/facebookresearch/detectron2
cd detectron2
pip install -e .
cd ../
pip install -r requirements.txt
cd mask2former/modeling/pixel_decoder/ops
sh make.sh
```
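After the build finishes, a quick import check helps confirm the environment is wired up. This is a minimal sketch; the compiled op name `MultiScaleDeformableAttention` follows the usual Mask2Former/Deformable-DETR build convention and is an assumption here.

```python
# Sanity-check the installation: PyTorch, CUDA, detectron2, and the compiled op.
import torch
import detectron2

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("detectron2:", detectron2.__version__)

try:
    import MultiScaleDeformableAttention  # built by make.sh above (assumed name)
    print("MultiScaleDeformableAttention op: OK")
except ImportError as e:
    print("Deformable attention op not built:", e)
```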
## 🔧 Setup
### Datasets
Download and unzip the datasets from [OneDrive](https://1drv.ms/u/c/3c9af704fb61931d/EVOs609SGMxLsbvVzVJHAa4Bmnu4GVZGjqYHQxDz0NKTew?e=WQU2Uf) and put them in `./datasets`.
### Pretrained Backbones
Download and unzip the pre-trained backbones from [OneDrive](https://1drv.ms/u/c/3c9af704fb61931d/ETDDliQ8zZFGmYxlLVPyi3sBis_fdjX0w8mJhyQnYVSdXA?e=Wt7pUb) and put them in `./pre_models`.
### Checkpoints
Download the following checkpoints and put them in `./checkpoints`.
<table>
<tr>
<th style="width: 150px;">Backbone</th>
<th>Pre-trained Datasets</th>
<th>FSLA</th>
<th>HOTA</th>
<th>mAP</th>
<th>Model Weight</th>
</tr>
<tr>
<td align="center">ResNet-50</td>
<td align="center">ImageNet</td>
<td align="center">42.78</td>
<td align="center">61.73</td>
<td align="center">40.57</td>
<td align="center"><a href="https://1drv.ms/u/c/3c9af704fb61931d/EYyAuCNpRjxDqEohJfoDLO0BYgw0lbwKqQ1lwVXe_kIPVQ?e=PeRlyx">AVISM_R50_IN.pth</a></td>
</tr>
<tr>
<td align="center">ResNet-50</td>
<td align="center">ImageNet & COCO</td>
<td align="center">44.42</td>
<td align="center">64.52</td>
<td align="center">45.04</td>
<td align="center"><a href="https://1drv.ms/u/c/3c9af704fb61931d/EX0snZsxQwdBswQFdG4sc9kBd-Bd7lw5zaTGR6FvrSxinQ?e=bdZF5G">AVISM_R50_COCO.pth</a></td>
</tr>
<tr>
<td align="center">Swin-L</td>
<td align="center">ImageNet</td>
<td align="center">49.15</td>
<td align="center">68.81</td>
<td align="center">49.06</td>
<td align="center"><a href="https://1drv.ms/u/c/3c9af704fb61931d/EV4V5Bh5AqVBhLVMM1ucdN0BuOZgHu17W3JDGjKDMLZ1bg?e=hF8umh">AVISM_SwinL_IN.pth</a></td>
</tr>
<tr>
<td align="center">Swin-L</td>
<td align="center">ImageNet & COCO</td>
<td align="center">52.49</td>
<td align="center">71.13</td>
<td align="center">53.46</td>
<td align="center"><a href="https://1drv.ms/u/c/3c9af704fb61931d/EXuM4cUxPTpEk1M7FoPqtNEBi47L7uR-ZlnqDCJscmNsiA?e=7prFiN">AVISM_SwinL_COCO.pth</a></td>
</tr>
</table>
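Before training or evaluation, it can be worth verifying that a downloaded checkpoint deserializes cleanly. This is a minimal sketch; the nesting under a `"model"` key follows the usual detectron2 checkpoint format and is an assumption.

```python
# Sanity-check a downloaded checkpoint (file name from the table above).
import torch

ckpt = torch.load("checkpoints/AVISM_R50_IN.pth", map_location="cpu")
state = ckpt.get("model", ckpt)  # detectron2 checkpoints usually nest under "model"
n_params = sum(v.numel() for v in state.values() if torch.is_tensor(v))
print(f"{len(state)} entries, {n_params:,} parameters")
```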
## 🚀 Getting Started
### Training
```bash
python train_net.py --num-gpus 2 --config-file configs/avism/R50/avism_R50_IN.yaml
```
### Evaluation
```bash
python train_net.py --config-file configs/avism/R50/avism_R50_IN.yaml --eval-only MODEL.WEIGHTS checkpoints/AVISM_R50_IN.pth
```
### Demo
```bash
python demo_video/demo.py --config-file configs/avism/R50/avism_R50_IN.yaml --opts MODEL.WEIGHTS checkpoints/AVISM_R50_IN.pth
```
## 🙏 Acknowledgement
We thank the authors of [Detectron2](https://github.com/facebookresearch/detectron2), [Mask2Former](https://github.com/facebookresearch/MaskFormer), and [VITA](https://github.com/sukjunhwang/VITA) for their great work, on which this project builds.
## 📚 Citation
If our work assists your research, feel free to give us a star ⭐ or cite us using:
```bibtex
@article{guo2023audio,
  title={Audio-Visual Instance Segmentation},
  author={Guo, Ruohao and Ying, Xianghua and Chen, Yaru and Niu, Dantong and Li, Guangyao and Qu, Liao and Qi, Yanyu and Zhou, Jinxing and Xing, Bowei and Yue, Wenzhen and Shi, Ji and Wang, Qixun and Zhang, Peiliang and Liang, Buwen},
  journal={arXiv preprint arXiv:2310.18709},
  year={2023}
}
```