gengyuanmax committed on
Commit 56eeb07 · verified · 1 Parent(s): 2777b98

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -17,7 +17,7 @@ A dataset for Localizing Events in Videos with Multimodal Queries (Reference ima
 
  Video understanding is a pivotal task in the digital era, yet the dynamic and multievent nature of videos makes them labor-intensive and computationally demanding to process. Thus, localizing a specific event given a semantic query has gained importance in both user-oriented applications like video search and academic research into video foundation models. A significant limitation in current research is that semantic queries are typically in natural language that depicts the semantics of the target event. This setting overlooks the potential for multimodal semantic queries composed of images and texts. To address this gap, we introduce a new benchmark, ICQ, for localizing events in videos with multimodal queries, along with a new evaluation dataset ICQ-Highlight. Our new benchmark aims to evaluate how well models can localize an event given a multimodal semantic query that consists of a reference image, which depicts the event, and a refinement text to adjust the images' semantics. To systematically benchmark model performance, we include 4 styles of reference images and 5 types of refinement texts, allowing us to explore model performance across different domains. We propose 3 adaptation methods that tailor existing models to our new setting and evaluate 10 SOTA models, ranging from specialized to large-scale foundation models. We believe this benchmark is an initial step toward investigating multimodal queries in video event localization. Our project can be found at https://icq-benchmark.github.io/.
 
 
- - **Curated by:** Gengyuan Zhang, Ada Mang Ling Fok
+ - **Curated by:** Gengyuan Zhang, Mang Ling Ada Fok
  - **Funded by [optional]:**
  - Munich Center of Machine Learning
  - LMU Munich
@@ -145,7 +145,7 @@ Users should be made aware of the risks, biases and limitations of the dataset.
 
  ## Dataset Card Authors [optional]
  - Gengyuan Zhang
- - Ada Mang Ling Fok
+ - Mang Ling Ada Fok
 
  ## Dataset Card Contact
  [email: Gengyuan Zhang]([email protected])
 