OneEncoder: A Unified Text, Image & Video Model
OneEncoder is a lightweight framework for cross-modal alignment, focusing on efficiently integrating text, image and video (with future extensions to other modalities). Unlike traditional methods relying on massive modality-specific encoders, OneEncoder progressively aligns different data types, making it cost-effective and performant even on small paired datasets.
Key Features
- Multimodal Alignment: initially supports text, image, and video, with extensions to other modalities.
- Lightweight & Efficient: avoids full retraining when adding new modalities.
- Superior Performance: outperforms models that require large specialized datasets.
Applications
- Visual Question Answering (VQA)
- Image-Text-X-ray Retrieval
- Multimodal Content Understanding
Research Paper
- arXiv: OneEncoder: Progressive Cross-Modal Alignment

Resources
- GitHub Repo: OneEncoder
- Hugging Face Demo: OneEncoder Retriever
- Demo Notebook: OneEncoder Demos
- OneEncoder for Text, Image: HF Model
- OneEncoder for Text, Image & Audio: HF Model
- OneEncoder for Text, Image & X-ray: HF Model
Authors
- Bilal FAYE, Hanane AZZAG, Mustapha LEBBAH, Djamel BOUCHAFFRA
Note: this model was trained with temperature=2.5 and addition as the fusion operation.
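To make the note above concrete, here is a minimal sketch of a temperature-scaled contrastive (InfoNCE-style) alignment objective with element-wise-addition fusion. This is a hypothetical reconstruction, not the actual OneEncoder training code: only the temperature value (2.5) and addition fusion come from this card; the loss form and the `fuse` helper are assumptions for illustration.

```python
import numpy as np

def contrastive_loss(text_emb, visual_emb, temperature=2.5):
    """Symmetric InfoNCE-style loss over a batch of paired embeddings.

    temperature=2.5 matches the value noted above; the exact loss form
    is an assumption, not taken from the OneEncoder code.
    """
    # L2-normalize so the dot products below are cosine similarities.
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = visual_emb / np.linalg.norm(visual_emb, axis=1, keepdims=True)
    logits = t @ v.T / temperature  # temperature-scaled similarities

    def xent(l):
        # Cross-entropy where the i-th text matches the i-th visual input.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Symmetric: text-to-visual and visual-to-text directions.
    return (xent(logits) + xent(logits.T)) / 2

def fuse(emb_a, emb_b):
    # "Addition" fusion from the note: element-wise sum of two
    # modality embeddings of the same dimensionality.
    return emb_a + emb_b
```

Under this objective, a batch of correctly matched text/visual pairs should score a lower loss than the same batch with shuffled pairings.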
Model: bilalfaye/OneEncoder-text-image-video
Base model: MCG-NJU/videomae-base