---
library_name: transformers
license: apache-2.0
language:
- en
base_model:
- microsoft/Phi-3-mini-4k-instruct
pipeline_tag: image-text-to-text
---

# pretrain_dsg_OLA-VLM-CLIP-ViT-Phi3-4k-mini Model Card

> Note: This is the pretrained model used for [OLA-VLM-CLIP-ViT-Phi3-4k-mini](https://huggingface.co/shi-labs/OLA-VLM-CLIP-ViT-Phi3-4k-mini).

OLA-VLM distills target visual information from a set of target encoders into the intermediate representations of the LLM. It adopts a predictive embedding optimization approach at selected LLM layers during training, minimizing embedding losses alongside the next-token prediction (NTP) objective, resulting in a vision-centric approach to training the Multimodal Large Language Model.

- **GitHub Repo:** [https://github.com/SHI-Labs/OLA-VLM](https://github.com/SHI-Labs/OLA-VLM)
- **Project Page:** [https://praeclarumjj3.github.io/ola_vlm/](https://praeclarumjj3.github.io/ola_vlm/)
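
For intuition only, the sketch below illustrates how a combined objective of this kind could be written: an NTP loss plus embedding losses computed at selected intermediate LLM layers against features from frozen target encoders. This is not the released training code; the function and argument names, the choice of smooth-L1 as the embedding loss, and the token pooling are all assumptions — see the [GitHub repo](https://github.com/SHI-Labs/OLA-VLM) for the actual implementation.

```python
# Hypothetical sketch of an OLA-VLM-style objective: NTP loss + predictive
# embedding losses at selected LLM layers. Names and loss choices are
# illustrative assumptions, not the official implementation.
import torch
import torch.nn.functional as F

def ola_vlm_style_loss(llm_outputs, labels, probe_heads, target_features, embed_weight=1.0):
    """llm_outputs: output with .logits and .hidden_states (one tensor per layer).
    probe_heads: dict {layer_idx: nn.Module} projecting LLM states to encoder space.
    target_features: dict {layer_idx: Tensor} of features from frozen target encoders."""
    # Standard next-token prediction over shifted tokens.
    logits = llm_outputs.logits[:, :-1, :]
    ntp_loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        labels[:, 1:].reshape(-1),
        ignore_index=-100,
    )

    # Embedding losses at the selected intermediate layers (assumed smooth-L1
    # on mean-pooled tokens purely for illustration).
    embed_loss = 0.0
    for layer_idx, head in probe_heads.items():
        predicted = head(llm_outputs.hidden_states[layer_idx])
        target = target_features[layer_idx]
        embed_loss = embed_loss + F.smooth_l1_loss(predicted.mean(dim=1), target.mean(dim=1))

    return ntp_loss + embed_weight * embed_loss
```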