---
license: llama2
pipeline_tag: text-generation
tags:
- cortex.cpp
- multimodal
- vicuna
- vision-language
---
## Overview
**LLaVA** (Large Language and Vision Assistant) is an open-source chatbot trained for multimodal instruction-following tasks. It is a fine-tune of **Vicuna-7B** designed to process both **text and image** inputs. This auto-regressive, transformer-based language model is intended for research on **computer vision, natural language processing, machine learning, and artificial intelligence**.
LLaVA-v1.6-Vicuna-7B is the latest iteration, trained in **December 2023** and optimized for improved instruction-following performance in multimodal settings.
## Variants
| No | Variant | Cortex CLI command |
| --- | --- | --- |
| 1 | [llava-v1.6-vicuna-7b-f16](https://huggingface.co/cortexso/llava-v1.6/tree/gguf-f16) | `cortex run llava-v1.6:gguf-f16` |
| 2 | [llava-v1.6-vicuna-7b-q4_km](https://huggingface.co/cortexso/llava-v1.6/tree/gguf-q4-km) | `cortex run llava-v1.6:gguf-q4-km` |
## Use it with Jan (UI)
1. Install **Jan** using the [Quickstart](https://jan.ai/docs/quickstart) guide.
2. In the Jan Model Hub, search for the model with the following ID:
```bash
cortexso/llava-v1.6
```
## Use it with Cortex (CLI)
1. Install **Cortex** using the [Quickstart](https://cortex.jan.ai/docs/quickstart) guide.
2. Run the model with the following command:
```bash
cortex run llava-v1.6
```
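Once the model is running, Cortex also serves an OpenAI-compatible HTTP API. Below is a minimal sketch of a multimodal request, assuming the default local endpoint and the standard OpenAI chat-completions payload shape; the port (`39281`), model id, and image URL are assumptions for illustration, so check your local Cortex configuration:
```bash
# Sketch: query the running LLaVA model through Cortex's
# OpenAI-compatible chat-completions endpoint.
# Assumptions: default local port 39281, model id "llava-v1.6",
# and a placeholder image URL.
curl http://127.0.0.1:39281/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llava-v1.6",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://example.com/sample.png"}}
        ]
      }
    ]
  }'
```
The response follows the chat-completions schema, so the generated text can be read from `choices[0].message.content`.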
## Credits
- **Author:** LLaVA Research Team
- **Converter:** [Homebrew](https://www.homebrew.ltd/)
- **Original License:** [LLAMA 2 Community License](https://github.com/facebookresearch/llama/blob/main/LICENSE)
- **Paper:** [LLaVA-v1.6: Enhancing Large Multimodal Models](https://llava-vl.github.io/)