fazliimam committed · verified · Commit 32e644c · 1 Parent(s): 52aa96a

Update README.md

Files changed (1)
  1. README.md +116 -41
README.md CHANGED
@@ -1,41 +1,116 @@
- ---
- license: apache-2.0
- dataset_info:
- - config_name: temporal_order
-   features:
-   - name: image_1
-     dtype: image
-   - name: image_2
-     dtype: image
-   - name: label
-     dtype: string
-   splits:
-   - name: test
-     num_bytes: 211564460.0
-     num_examples: 720
-   download_size: 202986206
-   dataset_size: 211564460.0
- - config_name: timelapse_estimation
-   features:
-   - name: image_1
-     dtype: image
-   - name: image_2
-     dtype: image
-   - name: label
-     dtype: string
-   splits:
-   - name: test
-     num_bytes: 48450099.0
-     num_examples: 125
-   download_size: 48184050
-   dataset_size: 48450099.0
- configs:
- - config_name: temporal_order
-   data_files:
-   - split: test
-     path: temporal_order/test-*
- - config_name: timelapse_estimation
-   data_files:
-   - split: test
-     path: timelapse_estimation/test-*
- ---
+ ---
+ license: apache-2.0
+ dataset_info:
+ - config_name: temporal_order
+   features:
+   - name: image_1
+     dtype: image
+   - name: image_2
+     dtype: image
+   - name: label
+     dtype: string
+   splits:
+   - name: test
+     num_bytes: 211564460.0
+     num_examples: 720
+   download_size: 202986206
+   dataset_size: 211564460.0
+ - config_name: timelapse_estimation
+   features:
+   - name: image_1
+     dtype: image
+   - name: image_2
+     dtype: image
+   - name: label
+     dtype: string
+   splits:
+   - name: test
+     num_bytes: 48450099.0
+     num_examples: 125
+   download_size: 48184050
+   dataset_size: 48450099.0
+ configs:
+ - config_name: temporal_order
+   data_files:
+   - split: test
+     path: temporal_order/test-*
+ - config_name: timelapse_estimation
+   data_files:
+   - split: test
+     path: timelapse_estimation/test-*
+ ---
+
+
+ ### **Dataset Description**
+ The Temporal-VQA dataset is a challenging benchmark designed to evaluate the temporal reasoning capabilities of Multimodal Large Language Models (MLLMs) on tasks requiring visual temporal understanding. It emphasizes real-world temporal dynamics through two core evaluation tasks:
+ - **Temporal Order Understanding:** MLLMs are presented with temporally consecutive frames from video sequences and must determine the correct order of events, assessing their ability to follow event progression over time.
+ - **Time-Lapse Estimation:** MLLMs are shown pairs of images taken at varying time intervals and must estimate the time elapsed between them by selecting from multiple-choice options that span from seconds to years.
+
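+ Each configuration ships a single `test` split with two images (`image_1`, `image_2`) and a string `label` (see the metadata above). The snippet below is a minimal sketch of loading both configurations and inspecting one sample; the exact set of label strings is not documented here, so it only prints whatever the first example contains.
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the two test-only configurations of the benchmark.
+ temporal_order = load_dataset('fazliimam/temporal-vqa', 'temporal_order', split='test')
+ timelapse = load_dataset('fazliimam/temporal-vqa', 'timelapse_estimation', split='test')
+
+ print(temporal_order)  # 720 examples: image_1, image_2, label
+ print(timelapse)       # 125 examples: image_1, image_2, label
+
+ # Images decode to PIL.Image objects; the label is a plain string.
+ sample = temporal_order[0]
+ print(sample['image_1'].size, sample['image_2'].size, sample['label'])
+ ```
+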
+ ### **GPT-4o Usage**
+ - The __Temporal Order Understanding__ task contains 720 image pairs, of which 360 are unique pairs (the remaining 360 are the same pairs with the image order reversed), created by sampling frames from copyright-free videos.
+ - The __Timelapse Estimation__ task contains 125 image pairs compiled from copyright-free sources; _image_A_ refers to the image taken first and _image_B_ to the image taken later.
+
+ The example below queries GPT-4o with an image pair from each task through the OpenAI Chat Completions API (an `API_KEY` environment variable is assumed):
+
+ ```python
+ from datasets import load_dataset
+ import base64
+ import requests
+ import os
+ from io import BytesIO
+
+ API_KEY = os.environ.get("API_KEY")
+
+ def encode_image(image):
+     # Encode a PIL image as a base64 JPEG string for the OpenAI vision API.
+     image = image.convert("RGB")  # ensure 3-channel RGB so JPEG encoding succeeds
+     buffer = BytesIO()
+     image.save(buffer, format="JPEG")
+     return base64.b64encode(buffer.getvalue()).decode('utf-8')
+
+ def get_gpt_response(image1, image2, query):
+     # Send the text query plus both base64-encoded images to GPT-4o via the Chat Completions API.
+     headers = {
+         "Content-Type": "application/json",
+         "Authorization": f"Bearer {API_KEY}"
+     }
+
+     payload = {
+         "model": "gpt-4o",
+         "messages": [
+             {
+                 "role": "user",
+                 "content": [
+                     {"type": "text", "text": query},
+                     {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image1}"}},
+                     {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image2}"}}
+                 ]
+             }
+         ],
+         "max_tokens": 512
+     }
+
+     response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload)
+     return response.json()
+
+ ### TASK 1: temporal order understanding
+ dataset = load_dataset('fazliimam/temporal-vqa', 'temporal_order', split='test')
+ image1 = encode_image(dataset[0]['image_1'])
+ image2 = encode_image(dataset[0]['image_2'])
+
+ # prompt_2 is an alternative phrasing of the same question.
+ prompt_1 = "Did the event in the first image happen before the event in the second image? Provide your answer in dictionary format: {'Answer':'True or False', 'Reasoning':'Brief explanation of your choice'}"
+ prompt_2 = "Between these two images, which one depicts the event that happened first? Provide your answer in dictionary format: {'Answer':'First image or Second image', 'Reasoning':'Brief explanation of your choice'}"
+
+ response = get_gpt_response(image1, image2, prompt_1)
+ print(response)
+
+ ### TASK 2: time-lapse estimation
+ dataset = load_dataset('fazliimam/temporal-vqa', 'timelapse_estimation', split='test')
+ image1 = encode_image(dataset[0]['image_1'])
+ image2 = encode_image(dataset[0]['image_2'])
+
+ prompt = "In the given image, estimate the time that has passed between the first image (left) and the second image (right). Choose one of the following options: A. Less than 15 seconds B. Between 2 minutes to 15 minutes C. Between 1 hour to 12 hours D. Between 2 days to 30 days E. Between 4 months to 12 months F. More than 3 years. Provide your answer in dictionary format: {'Answer':'Selected option', 'Reasoning':'Brief explanation of your choice'}"
+
+ response = get_gpt_response(image1, image2, prompt)
+ print(response)
+ ```
+
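+ The prompts above ask GPT-4o to reply with a Python-style dictionary. A minimal sketch for pulling that dictionary out of the raw Chat Completions response is shown below; it assumes the model actually followed the requested format and returns `None` otherwise.
+
+ ```python
+ import ast
+ import re
+
+ def extract_answer(api_response):
+     # Grab the assistant's text from the Chat Completions response.
+     text = api_response["choices"][0]["message"]["content"]
+     # Find the first {...} block and parse it as a Python literal.
+     match = re.search(r"\{.*\}", text, re.DOTALL)
+     if match is None:
+         return None
+     try:
+         return ast.literal_eval(match.group(0))
+     except (ValueError, SyntaxError):
+         return None
+
+ parsed = extract_answer(response)
+ if parsed is not None:
+     print(parsed.get("Answer"), "-", parsed.get("Reasoning"))
+ ```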