---
pipeline_tag: text-generation
tags:
- cortex.cpp
- featured
---
## Overview
**Google** developed and released the **Gemma 3** series, featuring multiple model sizes with both pre-trained and instruction-tuned variants. The larger models are multimodal, accepting both text and image inputs and generating text outputs, which makes them versatile across a wide range of applications. Gemma 3 models are built from the same research and technology used to create the Gemini models, offering state-of-the-art capabilities in a lightweight, accessible format.
The Gemma 3 family comes in four sizes with open weights, delivering strong performance on tasks such as question answering, summarization, and reasoning while remaining efficient enough to deploy in resource-constrained environments such as laptops, desktops, or custom cloud infrastructure.
## Variants
### Gemma 3
| No | Variant | Branch | Cortex CLI command |
| -- | ------------------------------------------------------ | ------ | ----------------------------- |
| 1 | [Gemma-3-1B](https://huggingface.co/cortexso/gemma3/tree/1b) | 1b | `cortex run gemma3:1b` |
| 2 | [Gemma-3-4B](https://huggingface.co/cortexso/gemma3/tree/4b) | 4b | `cortex run gemma3:4b` |
| 3 | [Gemma-3-12B](https://huggingface.co/cortexso/gemma3/tree/12b) | 12b | `cortex run gemma3:12b` |
| 4 | [Gemma-3-27B](https://huggingface.co/cortexso/gemma3/tree/27b) | 27b | `cortex run gemma3:27b` |
Each branch contains a default quantized version.
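If you prefer to fetch the model files directly instead of going through Cortex, each branch can also be downloaded with the Hugging Face CLI. A minimal sketch, assuming the 4b branch and an arbitrary local directory:
```bash
# Download the 4b branch of cortexso/gemma3 with the Hugging Face CLI.
# The --local-dir path is only an example; any writable directory works.
huggingface-cli download cortexso/gemma3 --revision 4b --local-dir ./gemma3-4b
```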
### Key Features
- **Multimodal capabilities**: The 4B, 12B, and 27B models accept both text and image inputs; the 1B model is text-only
- **Large context window**: 128K tokens (32K for the 1B model)
- **Multilingual support**: Over 140 languages
- **Available in multiple sizes**: From 1B to 27B parameters
- **Open weights**: For both pre-trained and instruction-tuned variants
## Use it with Jan (UI)
1. Install **Jan** by following the [Quickstart guide](https://jan.ai/docs/quickstart)
2. In the Jan Model Hub, search for:
```bash
cortexso/gemma3
```
## Use it with Cortex (CLI)
1. Install **Cortex** by following the [Quickstart guide](https://cortex.jan.ai/docs/quickstart)
2. Run the model with the following command:
```bash
cortex run gemma3
```
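Once the model is running, Cortex serves an OpenAI-compatible local API, so you can query it with any HTTP client. A minimal sketch with `curl`, assuming the default server address `http://localhost:39281` and the `gemma3:4b` variant from the table above (adjust both to match your setup):
```bash
# Send a chat completion request to the local Cortex server.
# Port 39281 is the usual default; change it if your Cortex instance uses another.
curl http://localhost:39281/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemma3:4b",
    "messages": [
      {"role": "user", "content": "Summarize the Gemma 3 model family in two sentences."}
    ]
  }'
```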
## Credits
- **Author:** Google
- **Original License:** [Gemma License](https://ai.google.dev/gemma/terms)
- **Paper:** [Gemma 3 Technical Report](https://storage.googleapis.com/deepmind-media/gemma/Gemma3Report.pdf)