---
license: llama2
pipeline_tag: text-generation
tags:
- cortex.cpp
- multimodal
- vicuna
- vision-language
---

## Overview

**LLaVA** (Large Language and Vision Assistant) is an open-source chatbot trained for multimodal instruction-following tasks. It is fine-tuned from **Vicuna-7B** and processes both **text and image** inputs. As an auto-regressive language model built on the **transformer architecture**, it supports research in **computer vision, natural language processing, machine learning, and artificial intelligence**.

LLaVA-v1.6-Vicuna-7B is the latest iteration, trained in **December 2023** and optimized for improved instruction-following performance in multimodal settings.

## Variants

| No | Variant | Cortex CLI command |
| --- | --- | --- |
| 1 | [llava-v1.6-vicuna-7b-f16](https://huggingface.co/cortexso/llava-v1.6/tree/gguf-f16) | `cortex run llava-v1.6:gguf-f16` |
| 2 | [llava-v1.6-vicuna-7b-q4_km](https://huggingface.co/cortexso/llava-v1.6/tree/gguf-q4-km) | `cortex run llava-v1.6:gguf-q4-km` |

To fetch the GGUF files directly instead of through Cortex, see the sketch in the Examples section below.

## Use it with Jan (UI)

1. Install **Jan** using the [Quickstart](https://jan.ai/docs/quickstart)
2. Use in Jan model Hub:
    ```bash
    cortexso/llava-v1.6
    ```

## Use it with Cortex (CLI)

1. Install **Cortex** using the [Quickstart](https://cortex.jan.ai/docs/quickstart)
2. Run the model with the command:
    ```bash
    cortex run llava-v1.6
    ```

An example request against the running model is sketched in the Examples section below.

## Credits

- **Author:** LLaVA Research Team
- **Converter:** [Homebrew](https://www.homebrew.ltd/)
- **Original License:** [LLAMA 2 Community License](https://github.com/facebookresearch/llama/blob/main/LICENSE)
- **Paper:** [LLaVA-v1.6: Enhancing Large Multimodal Models](https://llava-vl.github.io/)
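
## Examples

The branches linked in the Variants table can also be downloaded straight from the Hub without Cortex. A minimal sketch, assuming the `gguf-q4-km` revision name shown in the table; the target directory is an arbitrary example:

```bash
# Download one quantized variant directly from the Hub.
# The revision matches the branch name linked in the Variants table;
# ./llava-v1.6-gguf-q4-km is an arbitrary example directory.
huggingface-cli download cortexso/llava-v1.6 \
  --revision gguf-q4-km \
  --local-dir ./llava-v1.6-gguf-q4-km
```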
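
Once `cortex run llava-v1.6` has the model running, Cortex exposes an OpenAI-compatible HTTP API that any HTTP client can call. A minimal sketch of a text-only request; the port (39281) is an assumed default and image inputs may use a different message format, so check the Cortex docs for your installation:

```bash
# Query the running model through Cortex's OpenAI-compatible endpoint.
# Port 39281 is an assumed default; adjust to your installation.
curl http://127.0.0.1:39281/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llava-v1.6",
    "messages": [
      {"role": "user", "content": "Describe this model in one sentence."}
    ]
  }'
```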