---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
language:
- en
base_model:
- google/vit-base-patch16-224
- google-bert/bert-base-uncased
- MCG-NJU/videomae-base
---

# 🖼️🎥 OneEncoder: A Unified Text, Image & Video Model

**OneEncoder** is a lightweight framework for cross-modal alignment, focused on efficiently integrating **text, image, and video** (with future extensions to other modalities). Unlike traditional methods that rely on massive modality-specific encoders, OneEncoder progressively aligns different data types, making it cost-effective and performant even on small paired datasets.
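Because this card is tagged `pytorch_model_hub_mixin`, the model class presumably inherits `PyTorchModelHubMixin` from `huggingface_hub`, which adds `from_pretrained` and `push_to_hub` to a plain `nn.Module`. A minimal sketch of how such a class is wired — `TinyEncoder` and its single linear layer are illustrative placeholders, not OneEncoder's actual architecture:

```python
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin


class TinyEncoder(nn.Module, PyTorchModelHubMixin):
    """Toy module: the mixin adds from_pretrained/push_to_hub for free."""

    def __init__(self, dim: int = 16):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        return self.proj(x)


model = TinyEncoder(dim=16)
out = model(torch.randn(2, 16))
print(out.shape)  # torch.Size([2, 16])

# Loading a published checkpoint would then look like (repo id from this card):
# model = TinyEncoder.from_pretrained("bilalfaye/OneEncoder-text-image-video")
```

The mixin serializes the `__init__` arguments to a `config.json` alongside the weights, so `from_pretrained` can rebuild the module without extra loading code.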
## 🚀 Key Features
✅ **Multimodal Alignment**: Initially supports **text, image, and video**, with extensions to other modalities planned.
✅ **Lightweight & Efficient**: Avoids full retraining when adding new modalities.
✅ **Superior Performance**: Outperforms models that require large specialized datasets.

## 🎯 Applications
- **Visual Question Answering (VQA)**
- **Image-Text-Video Retrieval**
- **Multimodal Content Understanding**

## 📄 Research Paper
📕 **arXiv**: [OneEncoder: Progressive Cross-Modal Alignment](https://arxiv.org/abs/2409.11059)

## 🔗 Resources
📌 **GitHub Repo**: [OneEncoder](https://github.com/b-faye/OneEncoder)
📌 **Hugging Face Demo**: [OneEncoder Retriever](https://huggingface.co/spaces/bilalfaye/OneEncoder-retriever)
📌 **Demo Notebook**: [OneEncoder Demos](https://github.com/b-faye/OneEncoder/tree/main/demo)
📌 **OneEncoder for Text & Image**: [HF Model](https://huggingface.co/bilalfaye/OneEncoder-text-image)
📌 **OneEncoder for Text, Image & Audio**: [HF Model](https://huggingface.co/bilalfaye/OneEncoder-text-image-audio)
📌 **OneEncoder for Text, Image & X-ray**: [HF Model](https://huggingface.co/bilalfaye/OneEncoder-text-image-xray)

## 👥 Authors
👤 **Bilal FAYE**, Hanane AZZAG, Mustapha LEBBAH, Djamel BOUCHAFFRA

**Note:** This model was trained with temperature = 2.5 and addition as the fusion operation.
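To make the note above concrete, here is a toy sketch of what "addition as the fusion operation" and a temperature of 2.5 plausibly mean: modality embeddings are combined by element-wise sum, and similarity logits are divided by the temperature before the contrastive softmax. The helper names are illustrative and not taken from the released code:

```python
import math


def fuse_add(a, b):
    """Addition fusion: element-wise sum of two equal-length embeddings."""
    return [x + y for x, y in zip(a, b)]


def scaled_sim(u, v, temperature=2.5):
    """Cosine similarity divided by temperature, a common contrastive-loss convention."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(x * x for x in v))
    return (dot / (norm_u * norm_v)) / temperature


text_emb = [1.0, 0.0]
image_emb = [0.0, 1.0]
fused = fuse_add(text_emb, image_emb)  # [1.0, 1.0]
print(scaled_sim(fused, [1.0, 1.0]))   # cosine = 1.0, scaled -> 0.4
```

A temperature above 1 softens the similarity distribution, which can stabilize training on small paired datasets.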