---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
language:
- en
base_model:
- google/vit-base-patch16-224
- google-bert/bert-base-uncased
- MCG-NJU/videomae-base
---

# 🖼️📝 OneEncoder: A Unified Text, Image & Video Model

**OneEncoder** is a lightweight framework for cross-modal alignment that efficiently integrates **text, image, and video** (with future extensions to other modalities). Unlike traditional methods that rely on massive modality-specific encoders, OneEncoder progressively aligns different data types, making it cost-effective and performant even on small paired datasets.

## 🚀 Key Features

✅ **Multimodal Alignment**: Supports **text, image & video**, with extensions to other modalities.
✅ **Lightweight & Efficient**: Avoids full retraining when adding new modalities.
✅ **Superior Performance**: Outperforms models that require large specialized datasets.

## 🎯 Applications

- **Visual Question Answering (VQA)**
- **Text-Image-Video Retrieval**
- **Multimodal Content Understanding**

## 📄 Research Paper

📜 **arXiv**: [OneEncoder: Progressive Cross-Modal Alignment](https://arxiv.org/abs/2409.11059)

## 📌 Resources

🔗 **GitHub Repo**: [OneEncoder](https://github.com/b-faye/OneEncoder)
🚀 **Hugging Face Demo**: [OneEncoder Retriever](https://huggingface.co/spaces/bilalfaye/OneEncoder-retriever)
📓 **Demo Notebook**: [OneEncoder Demos](https://github.com/b-faye/OneEncoder/tree/main/demo)
🔊 **OneEncoder for Text & Image**: [HF Model](https://huggingface.co/bilalfaye/OneEncoder-text-image)
🔊 **OneEncoder for Text, Image & Audio**: [HF Model](https://huggingface.co/bilalfaye/OneEncoder-text-image-audio)
🔊 **OneEncoder for Text, Image & X-ray**: [HF Model](https://huggingface.co/bilalfaye/OneEncoder-text-image-xray)

## 📝 Authors

📌 **Bilal FAYE**, Hanane AZZAG, Mustapha LEBBAH, Djamel BOUCHAFFRA

**Note: This model was trained with temperature = 2.5 and addition as the fusion operation.**
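To make the training note concrete, here is a minimal numpy sketch of the two ingredients it mentions: a symmetric contrastive (InfoNCE-style) alignment loss with the temperature applied as a logit scale, and addition as the fusion operation. This is an illustrative assumption, not the authors' exact implementation — the function names (`info_nce_loss`, `fuse`) and the multiplicative use of temperature = 2.5 are hypothetical choices for the sketch; see the GitHub repo for the real training code.

```python
import numpy as np

def info_nce_loss(a, b, temperature=2.5):
    """Symmetric contrastive alignment loss between two batches of
    embeddings; row i of `a` and row i of `b` are the positive pair."""
    # L2-normalize so the logits are cosine similarities.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    # Temperature = 2.5 used as a multiplicative logit scale (assumption).
    logits = (a @ b.T) * temperature
    targets = np.arange(len(a))  # positives sit on the diagonal

    def xent(l):
        # Cross-entropy over rows, computed in log-space for stability.
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(l)), targets].mean()

    # Average the two retrieval directions (a -> b and b -> a).
    return 0.5 * (xent(logits) + xent(logits.T))

def fuse(token_embeddings, modality_token):
    """Addition as the fusion operation: add a (hypothetical) modality
    context token to every token embedding before the shared encoder."""
    return token_embeddings + modality_token
```

With matched pairs the loss is low; shuffling one side so the positives no longer line up raises it, which is the signal that drives the progressive alignment described above.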