|
|
--- |
|
|
license: mit |
|
|
--- |
|
|
|
|
|
|
|
|
# Dataset Card for TimeIT |
|
|
|
|
|
TimeIT encompasses 6 longstanding timestamp-related video tasks and incorporates 12 specific datasets derived from different domains. |
|
|
|
|
|
## Dataset Description |
|
|
|
|
|
|
|
|
- **Homepage:** https://huggingface.co/datasets/ShuhuaiRen/TimeIT
|
|
- **Repository:** https://huggingface.co/datasets/ShuhuaiRen/TimeIT
|
|
- **Paper:** https://arxiv.org/abs/2312.02051
|
|
- **Leaderboard:** |
|
|
- **Point of Contact:** |
|
|
|
|
|
## Dataset Statistics |
|
|
|
|
|
Our dataset compiles diverse tasks of time-sensitive long video understanding, including Dense Video Captioning, Temporal Video Grounding, Video Summarization, Video Highlight Detection, Step Localization, and Transcribed Speech Generation.
|
|
|
|
|
### Instruction Statistics |
|
|
|
|
|
| Task | #Instructions | |
|
|
|-------------------------------|---------------| |
|
|
| Dense Video Captioning | | |
|
|
| Temporal Video Grounding | | |
|
|
| Video Summarization | | |
|
|
| Video Highlight Detection | | |
|
|
| Step Localization | | |
|
|
| Transcribed Speech Generation | | |
|
|
| Total | | |
|
|
|
|
|
### Task Statistics |
|
|
|
|
|
| Task                          | Description                                                                                                             | #Train | #Val | #Test |
|-------------------------------|-------------------------------------------------------------------------------------------------------------------------|--------|------|-------|
| Dense Video Captioning        | detect a series of events in the given video and output the corresponding timestamps and descriptions                  |        |      |       |
| Temporal Video Grounding      | predict a timestamp boundary, including the start and end time, in the video given a natural language query            |        |      |       |
| Video Summarization           | create a compressed set of frames or clip shots to represent the most informative content of the given video           |        |      |       |
| Video Highlight Detection     | identify the most exciting, impressive, or emotional moments, which may not cover the full scope of the original video |        |      |       |
| Step Localization             | segment and describe significant steps in a long untrimmed video                                                       |        |      |       |
| Transcribed Speech Generation | predict the speech content and its corresponding start and end timestamps based on visual signals in the video         |        |      |       |
| Total                         | -                                                                                                                       |        |      |       |
|
|
|
|
|
### Detailed Dataset Statistics |
|
|
|
|
|
| Task | Dataset | #Train | #Val | #Test | |
|
|
|-------------------------------|------------------------|---------|--------|-------| |
|
|
| Dense Video Captioning | `ActivityNet Captions` | | | | |
|
|
| | `ViTT` | 97,765 | 13,965 | 0 | |
|
|
| | `YouCook2` | 14,575 | 2,487 | 2,489 | |
|
|
| Temporal Video Grounding | `DiDeMo` | 30,000 | 2,000 | 0 | |
|
|
| | `QuerYD` | 118,312 | 27,550 | 0 | |
|
|
| | `HiREST_grounding` | 30,000 | 50,000 | 0 | |
|
|
| | `Charades-STA` | 30,000 | 5,000 | 5,000 | |
|
|
| Video Summarization | `TVSum` | 30,000 | 30,000 | 0 | |
|
|
| | `SumMe` | 13,568 | 1,024 | 1,024 | |
|
|
| Video Highlight Detection | `QVHighlights` | 9,009 | 5,046 | 0 | |
|
|
| Step Localization | `COIN` | 30,000 | 2,000 | 0 | |
|
|
| | `HiREST_step` | 29,372 | 2,000 | 0 | |
|
|
| Transcribed Speech Generation | `YT-Temporal` | 5,000 | 4,315 | 4,350 | |
|
|
|
|
|
## Dataset Structure |
|
|
|
|
|
### HuggingFace Login (Optional) |
|
|
|
|
|
```python |
|
|
# Alternatively, run `huggingface-cli login` in a terminal
|
|
from huggingface_hub import login |
|
|
|
|
|
hf_token = "hf_xxx" # TODO: set a valid HuggingFace access token for loading datasets/models |
|
|
login(token=hf_token) |
|
|
``` |
|
|
|
|
|
### Data Loading |
|
|
|
|
|
```python |
|
|
from datasets import load_dataset |
|
|
|
|
|
ds_name = "youcook2" # change the dataset name here |
|
|
dataset = load_dataset("ShuhuaiRen/TimeIT", ds_name) |
|
|
``` |
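Once loaded, the returned `DatasetDict` can be inspected to see which splits and fields a given configuration provides. A minimal sketch, reusing the same `youcook2` configuration as above:

```python
from datasets import load_dataset

ds_name = "youcook2"  # change the dataset name here
dataset = load_dataset("ShuhuaiRen/TimeIT", ds_name)

# List the available splits with their column names and sizes
for split_name, split in dataset.items():
    print(split_name, split.column_names, split.num_rows)
```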
|
|
|
|
|
### Data Splits |
|
|
|
|
|
```python |
|
|
from datasets import load_dataset |
|
|
|
|
|
ds_name = "youcook2" # change the dataset name here |
|
|
dataset = load_dataset("ShuhuaiRen/TimeIT", ds_name) |
|
|
train_set = dataset["train"] |
|
|
``` |
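Not every configuration provides validation or test data (several datasets in the tables above list 0 test examples), so it is safer to check which splits exist before indexing into them. A minimal sketch, assuming the splits are named `train`, `validation`, and `test` (run `print(dataset)` to confirm the actual names):

```python
from datasets import load_dataset

ds_name = "youcook2"  # change the dataset name here
dataset = load_dataset("ShuhuaiRen/TimeIT", ds_name)

# Access optional splits defensively; the split names below are an assumption
train_set = dataset["train"]
val_set = dataset["validation"] if "validation" in dataset else None
test_set = dataset["test"] if "test" in dataset else None

# Report the size of every split that is actually present
print({name: len(split) for name, split in dataset.items()})
```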
|
|
|
|
|
### Data Instances |
|
|
|
|
|
```python |
|
|
from datasets import load_dataset |
|
|
|
|
|
|
|
ds_name = "youcook2" # change the dataset name here |
|
|
dataset = load_dataset("ShuhuaiRen/TimeIT", ds_name) |
|
|
train_set = dataset["train"] |
|
|
|
|
|
for train_instance in train_set: |
|
|
question = train_instance["QA"][0]['q'] # str |
|
|
answer = train_instance["QA"][0]['a'] # str |
|
|
video_path = train_instance["video"] # str |
|
|
``` |
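For instruction tuning, it is often convenient to export the question-answer pairs together with the video paths. The sketch below is a minimal, unofficial example that writes the training split to a JSON Lines file; the output filename and record keys are arbitrary choices, not part of the dataset.

```python
import json

from datasets import load_dataset

ds_name = "youcook2"  # change the dataset name here
train_set = load_dataset("ShuhuaiRen/TimeIT", ds_name)["train"]

# Minimal sketch: dump (video, question, answer) triples to a JSONL file
with open(f"timeit_{ds_name}_train.jsonl", "w") as f:
    for instance in train_set:
        record = {
            "video": instance["video"],          # path to the source video
            "question": instance["QA"][0]["q"],  # timestamp-related instruction
            "answer": instance["QA"][0]["a"],    # answer with timestamps
        }
        f.write(json.dumps(record) + "\n")
```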
|
|
|
|
|
### Data Fields |
|
|
|
|
|
```python |
|
|
import datasets |
|
|
|
|
|
# Each example contains a video path and a list of timestamp-related QA pairs,
# mirroring the access pattern shown in "Data Instances" above.
features = datasets.Features(
    {
        "video": datasets.Value("string"),  # path to the video file
        "QA": [
            {
                "q": datasets.Value("string"),  # question / instruction
                "a": datasets.Value("string"),  # answer with timestamps and descriptions
            }
        ],
    }
)
|
|
``` |
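To verify the schema of the configuration you loaded, inspect the `features` attribute of a split; a minimal check:

```python
from datasets import load_dataset

ds_name = "youcook2"  # change the dataset name here
train_set = load_dataset("ShuhuaiRen/TimeIT", ds_name)["train"]

# Print the declared features and the keys of the first example
print(train_set.features)
print(train_set[0].keys())
```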
|
|
|
|
|
## Dataset Creation |
|
|
|
|
|
### Curation Rationale |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
### Source Data |
|
|
|
|
|
| Task | Dataset [Citation] | Source | |
|
|
|---------------------------|----------------------------------|------------------------------------------------------------------------------------| |
|
|
| Image Captioning | `coco` [1] | [Source](https://cocodataset.org/#home) | |
|
|
| | `textcap` [2] | [Source](https://textvqa.org/textcaps/) | |
|
|
| | `image-paragraph-captioning` [3] | [Source](https://cs.stanford.edu/people/ranjaykrishna/im2p/index.html) | |
|
|
| Classification | `coco-goi` [1] | [Source](https://cocodataset.org/#home) | |
|
|
| | `coco-text` [4] | [Source](https://bgshih.github.io/cocotext/) | |
|
|
| | `imagenet` [5] | [Source](https://www.image-net.org/) | |
|
|
| | `coco-itm` [1] | [Source](https://cocodataset.org/#home) | |
|
|
| | `snli-ve` [6] | [Source](https://github.com/necla-ml/SNLI-VE) | |
|
|
| | `mocheg` [7] | [Source](https://github.com/VT-NLP/Mocheg) | |
|
|
| | `iqa` [8] | [Source](https://github.com/icbcbicc/IQA-Dataset) | |
|
|
| Visual Question Answering | `vqa-v2` [9] | [Source](https://visualqa.org/) | |
|
|
| | `shapes` [10] | [Source](https://github.com/ronghanghu/n2nmn) | |
|
|
| | `docvqa` [11] | [Source](https://www.docvqa.org/) | |
|
|
| | `ocr-vqa` [12] | [Source](https://ocr-vqa.github.io/) | |
|
|
| | `st-vqa` [13] | [Source](https://rrc.cvc.uab.es/?ch=11) | |
|
|
| | `text-vqa` [14] | [Source](https://textvqa.org/) | |
|
|
| | `gqa` [15] | [Source](https://cs.stanford.edu/people/dorarad/gqa/about.html) | |
|
|
| Knowledgeable Visual QA | `okvqa` [16] | [Source](https://okvqa.allenai.org/) | |
|
|
| | `a-okvqa` [17] | [Source](https://allenai.org/project/a-okvqa/home) | |
|
|
| | `science-qa` [18] | [Source](https://scienceqa.github.io/) | |
|
|
| | `viquae` [19] | [Source](https://github.com/PaulLerner/ViQuAE) | |
|
|
| Reasoning | `clevr` [20] | [Source](https://cs.stanford.edu/people/jcjohns/clevr/) | |
|
|
| | `nlvr` [21] | [Source](https://lil.nlp.cornell.edu/nlvr/) | |
|
|
| | `vcr` [22] | [Source](https://visualcommonsense.com/) | |
|
|
| | `visual-mrc` [23] | [Source](https://github.com/nttmdlab-nlp/VisualMRC) | |
|
|
| | `winoground` [24] | [Source](https://huggingface.co/datasets/facebook/winoground) | |
|
|
| Generation | `vist` [25] | [Source](https://visionandlanguage.net/VIST/) | |
|
|
| | `visual-dialog` [26] | [Source](https://visualdialog.org/) | |
|
|
| | `multi30k` [27] | [Source](https://github.com/multi30k/dataset) | |
|
|
| Chinese | `fm-iqa` [28] | [Source](https://paperswithcode.com/dataset/fm-iqa) | |
|
|
| | `coco-cn` [29] | [Source](https://github.com/li-xirong/coco-cn) | |
|
|
| | `flickr8k-cn` [30] | [Source](https://github.com/li-xirong/flickr8kcn) | |
|
|
| | `chinese-food` [31] | [Source](https://sites.google.com/view/chinesefoodnet) | |
|
|
| | `mmchat` [32] | [Source](https://github.com/silverriver/MMChat) | |
|
|
| Video | `ss` [33] | [Source](https://developer.qualcomm.com/software/ai-datasets/something-something) | |
|
|
| | `ivqa` [34] | [Source](https://antoyang.github.io/just-ask.html) | |
|
|
| | `msvd-qa` [35] | [Source](https://paperswithcode.com/dataset/msvd) | |
|
|
| | `activitynet-qa` [36] | [Source](https://github.com/MILVLG/activitynet-qa) | |
|
|
| | `msrvtt` [35] | [Source](https://paperswithcode.com/dataset/msr-vtt) | |
|
|
| | `msrvtt-qa` [37] | [Source](https://paperswithcode.com/sota/visual-question-answering-on-msrvtt-qa-1) | |
|
|
|
|
|
|
|
|
|
|
|
### Annotations |
|
|
|
|
|
#### Annotation process |
|
|
|
|
|
To build a high-quality multimodal instruction dataset, we rewrite the source datasets into a multimodal-to-text dialog format. The annotation process consists of four stages:
|
|
|
|
|
- **Stage I: Instruction Writing**: writing instructions for each task;
- **Stage II: Data Format Unification**: structuring videos and texts into a unified schema;
- **Stage III: Quality Check**: checking the overall dataset quality;
- **Stage IV: Key Datasets Translation**: building multilingual sets.
|
|
|
|
|
#### Who are the annotators? |
|
|
|
|
|
Three authors of this work served as human annotators; each is a graduate student familiar with the relevant literature.
|
|
|
|
|
|
|
|
## Additional Information |
|
|
|
|
|
### Licensing Information |
|
|
|
|
|
The content of each original dataset retains its original license.
For tasks with an Unknown or Custom license, we suggest checking the original project or contacting the dataset owner for detailed license information.
|
|
|
|
|
Our annotated instruction data is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). |
|
|
|
|
|
|
|
|
### Citation Information |
|
|
```bibtex |
|
|
@article{Ren2023TimeChatAT, |
|
|
title={TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding}, |
|
|
author={Shuhuai Ren and Linli Yao and Shicheng Li and Xu Sun and Lu Hou}, |
|
|
journal={ArXiv}, |
|
|
year={2023}, |
|
|
volume={abs/2312.02051}, |
|
|
} |
|
|
``` |
|
|
### Contributions |
|
|
|
|
|
TimeIT is a video-centric instruction-tuning dataset involving timestamps, designed to enable the development of general-purpose video agents.
|
|
|
|
|
## References |
|
|
|
|
|
- [1] Microsoft COCO: Common Objects in Context |
|
|
- [2] TextCaps: a dataset for image captioning with reading comprehension |
|
|
- [3] A Hierarchical Approach for Generating Descriptive Image Paragraphs |
|
|
- [4] COCO-Text: Dataset and benchmark for text detection and recognition in natural images |
|
|
- [5] Imagenet large scale visual recognition challenge |
|
|
- [6] E-ViL: A Dataset and Benchmark for Natural Language Explanations in Vision-Language Tasks |
|
|
- [7] End-to-End Multimodal Fact-Checking and Explanation Generation: A Challenging Dataset and Models |
|
|
- [8] Quantifying visual image quality: A Bayesian view |
|
|
- [9] Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering |
|
|
- [10] Neural Module Networks |
|
|
- [11] DocVQA: A Dataset for VQA on Document Images
|
|
- [12] OCR-VQA: Visual Question Answering by Reading Text in Images |
|
|
- [13] Scene Text Visual Question Answering |
|
|
- [14] Towards VQA Models That Can Read |
|
|
- [15] GQA: A new dataset for real-world visual reasoning and compositional question answering |
|
|
- [16] OK-VQA: A Visual Question Answering Benchmark Requiring External Knowledge |
|
|
- [17] A-OKVQA: A Benchmark for Visual Question Answering using World Knowledge |
|
|
- [18] Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering |
|
|
- [19] ViQuAE: a dataset for knowledge-based visual question answering about named entities |
|
|
- [20] CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning |
|
|
- [21] A Corpus of Natural Language for Visual Reasoning |
|
|
- [22] From recognition to cognition: Visual Commonsense Reasoning |
|
|
- [23] VisualMRC: Machine reading comprehension on document images |
|
|
- [24] WinoGround: Probing vision and language models for visio-linguistic compositionality |
|
|
- [25] Visual Storytelling |
|
|
- [26] Visual Dialog |
|
|
- [27] Multi30k: Multilingual english-german image descriptions |
|
|
- [28] Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question |
|
|
- [29] COCO-CN for cross-lingual image tagging, captioning, and retrieval |
|
|
- [30] Adding Chinese Captions to Images |
|
|
- [31] ChineseFoodNet: A large-scale image dataset for chinese food recognition |
|
|
- [32] MMChat: Multi-Modal Chat Dataset on Social Media |
|
|
- [33] The "Something Something" Video Database for Learning and Evaluating Visual Common Sense |
|
|
- [34] Just Ask: Learning to answer questions from millions of narrated videos |
|
|
- [35] Video Question Answering via Gradually Refined Attention over Appearance and Motion |
|
|
- [36] ActivityNet-qa: A dataset for understanding complex web videos via question answering |
|
|
- [37] MSR-VTT: A large video description dataset for bridging video and language |