VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking
Paper • 2303.16727 • Published
- OpenGVLab/VideoMAEv2-Base • Video Classification • Updated • 555 • 1
- OpenGVLab/VideoMAEv2-Large • Video Classification • Updated • 774
- OpenGVLab/VideoMAEv2-Huge • Video Classification • Updated • 39
OpenGVLab
AI & ML interests: Computer Vision
Welcome to OpenGVLab! We are a research group from Shanghai AI Lab focused on vision-centric AI research. The GV in our name stands for General Vision: a general understanding of vision, so that little effort is needed to adapt to new vision-based tasks.
Models
- InternVL: a pioneering open-source alternative to GPT-4V.
- InternImage: a large-scale vision foundation model built on deformable convolutions.
- InternVideo: large-scale video foundation models for multimodal understanding.
- VideoChat: an end-to-end chat assistant for video comprehension.
- All-Seeing-Project: towards panoptic visual recognition and understanding of the open world.
Datasets
- ShareGPT4o: a large-scale resource that we plan to open-source, comprising 200K meticulously annotated images, 10K videos with highly descriptive captions, and 10K audio files with detailed descriptions.
- InternVid: a large-scale video-text dataset for multimodal understanding and generation.
- MMPR: a high-quality, large-scale multimodal preference dataset.
Benchmarks
- MVBench: a comprehensive benchmark for multimodal video understanding.
- CRPE: a benchmark covering all elements of the relation triplets (subject, predicate, object), providing a systematic platform for the evaluation of relation comprehension ability.
- MM-NIAH: a comprehensive benchmark for comprehension of long multimodal documents.
- GMAI-MMBench: a comprehensive multimodal evaluation benchmark towards general medical AI.
Collections (19)

Faster and more powerful VideoChat.
- OpenGVLab/VideoChat-Flash-Qwen2_5-2B_res448 • Video-Text-to-Text • Updated • 1.7k • 11
- OpenGVLab/VideoChat-Flash-Qwen2-7B_res224 • Video-Text-to-Text • Updated • 888 • 3
- OpenGVLab/VideoChat-Flash-Qwen2-7B_res448 • Video-Text-to-Text • Updated • 833 • 7
- VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling • Paper • 2501.00574 • Published • 5
Spaces (11)

Models (144)
- OpenGVLab/VisionLLMv2 • Updated • 34 • 3
- OpenGVLab/InternImage • Updated • 14
- OpenGVLab/InternVL_2_5_HiCo_R64 • Video-Text-to-Text • Updated • 89
- OpenGVLab/InternVL_2_5_HiCo_R16 • Video-Text-to-Text • Updated • 55 • 1
- OpenGVLab/InternVideo2_5_Chat_8B • Video-Text-to-Text • Updated • 563 • 6
- OpenGVLab/InternOmni • Image-Text-to-Text • Updated • 14 • 6
- OpenGVLab/PIIP • Object Detection • Updated • 4
Datasets (30)
- OpenGVLab/MMPR-v1.1 • Preview • Updated • 1.09k • 37
- OpenGVLab/MMPR • Preview • Updated • 427 • 45
- OpenGVLab/GMAI-MMBench • Preview • Updated • 214 • 14
- OpenGVLab/V2PE-Data • Preview • Updated • 433 • 6
- OpenGVLab/InternVL-Domain-Adaptation-Data • Preview • Updated • 193 • 7
- OpenGVLab/GUI-Odyssey • Viewer • Updated • 7.74k • 20.7k • 10
- OpenGVLab/OmniCorpus-YT • Updated • 862 • 10
- OpenGVLab/OmniCorpus-CC-210M • Viewer • Updated • 208M • 186 • 19
- OpenGVLab/OmniCorpus-CC • Viewer • Updated • 986M • 35k • 12
- OpenGVLab/MVBench • Viewer • Updated • 4k • 12.8k • 29