---
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the GGUF files of the 2B and 7B Instruct versions of the Gemma model.
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
#### Model Usage
Since this is a GGUF file, it can be run locally using:
- Ollama
- Llama.cpp
- LM Studio
- And Many More
- I have provided a [GemmaModelFile](https://huggingface.co/c2p-cmd/google_gemma_guff/blob/main/GemmaModelFile) that can be used with Ollama as follows:
- Download the model (requires `pip install huggingface_hub`):
```python
from huggingface_hub import hf_hub_download

# Download the GGUF file into a local "gemma_snapshot" directory
hf_hub_download(
    repo_id="c2p-cmd/google_gemma_guff",
    filename="gemma_snapshot/gemma-2b-it.gguf",
    local_dir="gemma_snapshot",
    local_dir_use_symlinks=False,
)
```
- Load the model file into Ollama:
```shell
ollama create gemma -f GemmaModelFile
```
- You can change the model name based on your needs; see the run example below.
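
Once the model has been created, a quick way to check that it works is to run it with Ollama. The prompt below is only an example:
```shell
# Run the model created from GemmaModelFile; pass a one-off prompt
# or omit the prompt to start an interactive chat session
ollama run gemma "Why is the sky blue?"
```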
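
If you prefer to use llama.cpp directly, a minimal invocation looks like the sketch below. It assumes you have built the `llama-cli` binary; point the model path at wherever the GGUF file was downloaded:
```shell
# -m: path to the GGUF file (adjust to your local path)
# -p: prompt, -n: maximum number of tokens to generate
./llama-cli -m gemma_snapshot/gemma-2b-it.gguf -p "Why is the sky blue?" -n 128
```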