---
license: mit
datasets:
- mjjung/ActivityNet-VTune
language:
- en
---

# TimeChat-7B-ActivityNet-VTune Model

## Model details

We trained [TimeChat](https://arxiv.org/abs/2312.02051) with VTune, an instruction-tuning method specifically designed to account for consistency in temporal comprehension.

For tuning, we used 10K training videos from ActivityNet-Captions with 205K automatically generated annotations.
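To inspect the training annotations, the dataset listed in this card's metadata can be loaded from the Hub. A minimal sketch using the `datasets` library (the split and field layout are assumptions; check the dataset card):

```python
from datasets import load_dataset

# Dataset named in this card's metadata; splits and fields may differ,
# so print the object first to see what is actually available.
ds = load_dataset("mjjung/ActivityNet-VTune")
print(ds)              # available splits and features
print(ds["train"][0])  # one annotation, assuming a "train" split exists
```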

## Evaluation
We evaluated the model on ActivityNet-CON and ActivityNet-Captions.

- ActivityNet-CON

| Metric          | Value       |
|-----------------|-------------|
| Ground          | 37.4        |
| R-Ground        | 28.3 (75.6) |
| S-Ground        | 10.6 (28.3) |
| H-Verify        | 19.6 (52.3) |
| C-Verify        | 19.5 (51.5) |

- ActivityNet-Captions

| Metric          | Value   |
|-----------------|---------|
| R@1 IoU=0.3     | 57.74   |
| R@1 IoU=0.5     | 41.05   |
| R@1 IoU=0.7     | 23.72   |
| mIoU            | 40.89   |
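The ActivityNet-Captions numbers follow the standard moment-retrieval protocol: R@1 at an IoU threshold is the fraction of queries whose top-1 predicted span overlaps the ground-truth span with at least that temporal IoU, and mIoU averages the IoU over all queries. A minimal sketch of that computation (function names are ours, not from the released code):

```python
import numpy as np

def temporal_iou(pred, gt):
    """Temporal IoU between two (start, end) spans in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def evaluate(preds, gts, thresholds=(0.3, 0.5, 0.7)):
    """R@1 at each IoU threshold, plus mIoU, over top-1 predictions."""
    ious = np.array([temporal_iou(p, g) for p, g in zip(preds, gts)])
    recall = {t: float((ious >= t).mean()) for t in thresholds}
    return recall, float(ious.mean())

# Toy example: two top-1 predictions against their ground-truth spans.
recall, miou = evaluate([(1.0, 5.0), (10.0, 20.0)], [(2.0, 6.0), (12.0, 18.0)])
print(recall, miou)  # {0.3: 1.0, 0.5: 1.0, 0.7: 0.0} 0.6
```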

For more details, see the [Paper](https://arxiv.org/abs/2411.12951) and [Code](https://github.com/minjoong507/consistency-of-video-llm).

## Citation
If you find our research and code useful, please consider starring our repository and citing our paper:

```
@article{jung2024consistency,
  title={On the Consistency of Video Large Language Models in Temporal Comprehension},
  author={Jung, Minjoon and Xiao, Junbin and Zhang, Byoung-Tak and Yao, Angela},
  journal={arXiv preprint arXiv:2411.12951},
  year={2024}
}
```