---
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
|
|
|
# Gemma Model Card
|
|
|
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
|
|
|
This model card corresponds to the GGUF builds of the 2B and 7B Instruct versions of the Gemma model.
|
|
|
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
|
|
|
### Description
|
|
|
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
|
|
|
#### Model Usage

Since this is a GGUF file, it can be run locally using:

- Ollama
- llama.cpp
- LM Studio
- and many more
|
- I have provided a [GemmaModelFile](https://huggingface.co/c2p-cmd/google_gemma_guff/blob/main/GemmaModelFile) that can be used with Ollama as follows:
  - Download the model (install the client first with `pip install huggingface_hub`):

    ```python
    from huggingface_hub import hf_hub_download

    model_id = "c2p-cmd/google_gemma_guff"
    hf_hub_download(
        repo_id=model_id,
        filename="gemma_snapshot/gemma-2b-it.gguf",
        local_dir="gemma_snapshot",
    )
    ```

  - Load the model file into Ollama:

    ```shell
    ollama create gemma -f GemmaModelFile
    ```

  - You can change the model name (`gemma` above) as needed.
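For readers unfamiliar with Ollama Modelfiles, a minimal illustrative example might look like the following. This is a sketch only: the `FROM` path and the template are assumptions based on Gemma's published `<start_of_turn>`/`<end_of_turn>` chat format, not the contents of the actual linked GemmaModelFile.

```
FROM ./gemma_snapshot/gemma-2b-it.gguf
TEMPLATE """<start_of_turn>user
{{ .Prompt }}<end_of_turn>
<start_of_turn>model
{{ .Response }}"""
PARAMETER stop <end_of_turn>
```

Once `ollama create gemma -f GemmaModelFile` has been run, the model can be used interactively with `ollama run gemma`.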
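As a quick sanity check after downloading, note that every GGUF file begins with the 4-byte magic `GGUF`. A minimal sketch of such a check (the helper name `is_gguf` is my own, not part of any library):

```python
def is_gguf(path: str) -> bool:
    """Return True if the file at `path` starts with the GGUF magic bytes.

    Every GGUF file begins with the magic b"GGUF", so this is a cheap way
    to verify a download completed and fetched the right kind of file.
    """
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"
```

A truncated or HTML-error-page download will fail this check immediately, which is faster than waiting for Ollama or llama.cpp to reject the file.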