---
license: llama3.2
pipeline_tag: text-generation
tags:
- cortex.cpp
- featured
---
## Overview
Meta developed and released the [Meta Llama 3.2](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) family of large language models (LLMs), a collection of pretrained and instruction-tuned generative text models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned, text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks, and they outperform many of the available open-source and closed chat models on common industry benchmarks.
## Variants
| No | Variant | Cortex CLI command |
| --- | --- | --- |
| 1 | [Llama3.2-1b](https://huggingface.co/cortexso/llama3.2/tree/1b) | `cortex run llama3.2:1b` |
| 2 | [Llama3.2-3b](https://huggingface.co/cortexso/llama3.2/tree/3b) | `cortex run llama3.2:3b` |
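If you prefer to download a variant ahead of time instead of fetching it on first run, a sketch like the one below should work. The subcommand names are an assumption based on recent Cortex CLI builds; check `cortex --help` if your install differs.

```bash
# Assumed workflow: download the 3B variant first, then start it.
# Subcommand names are based on recent Cortex CLI builds; verify with `cortex --help`.
cortex pull llama3.2:3b
cortex run llama3.2:3b
```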
## Use it with Jan (UI)
1. Install **Jan** by following the [Quickstart](https://jan.ai/docs/quickstart)
2. In the Jan Model Hub, search for and use this model ID:
```bash
cortexso/llama3.2
```
## Use it with Cortex (CLI)
1. Install **Cortex** by following the [Quickstart](https://cortex.jan.ai/docs/quickstart)
2. Run the model with the following command:
```bash
cortex run llama3.2
```
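Once the model is running, you can also send it requests from other tools over the local OpenAI-compatible API that Cortex exposes. The sketch below is a minimal example; the host and port (`127.0.0.1:39281`) and the exact model ID are assumptions based on recent Cortex defaults, so adjust them to match your install.

```bash
# Hedged example: chat completion request against the local Cortex server.
# Host/port and model ID are assumptions; adjust to your setup.
curl http://127.0.0.1:39281/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2:3b",
    "messages": [
      {"role": "user", "content": "Summarize what Llama 3.2 is in one sentence."}
    ]
  }'
```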
## Credits
- **Author:** Meta
- **Converter:** [Homebrew](https://www.homebrew.ltd/)
- **Original License:** [License](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct/blob/main/LICENSE.txt)
- **Papers:** [Llama-3.2 Blog](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/)