---
base_model: google/gemma-2-9b-it
license: gemma
language:
- tr
---
# Cere-llm-gemma-2-9b-it Model Card
Cere-llm-gemma-2-9b-it is a fine-tuned version of gemma-2-9b-it. It was trained on synthetically generated and natural Turkish preference datasets.
## Model Details
### Model Description
We fine-tuned [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) on synthetically generated and natural preference data.
- **Developed by:** Cerebrum Tech
- **Model type:** Causal Language Model
- **License:** gemma
- **Finetuned from model:** [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it)
## How to Get Started with the Model
```python
import torch
from transformers import pipeline

model_id = "Cerebrum/cere-llm-gemma-2-9b-it"

generator = pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)

# "Türkiye'nin başkenti neresidir?" — "What is the capital of Turkey?"
messages = [{"role": "user", "content": "Türkiye'nin başkenti neresidir?"}]
outputs = generator(
    messages,
    do_sample=False,
    eos_token_id=[
        generator.tokenizer.convert_tokens_to_ids("<end_of_turn>"),
        generator.tokenizer.eos_token_id,
    ],
    max_new_tokens=200,
)
# With chat-style input, generated_text holds the full conversation;
# the last message is the model's reply.
print(outputs[0]["generated_text"][-1]["content"])
```
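For reference, the Gemma-2 chat format wraps each turn in `<start_of_turn>` / `<end_of_turn>` markers, which is why `<end_of_turn>` is passed as a stop token above. The sketch below builds the prompt string by hand purely as an illustration; in practice `tokenizer.apply_chat_template` should be used, since it also handles special tokens such as `<bos>`.

```python
def build_gemma_prompt(messages):
    """Illustrative sketch of the Gemma-2 turn format (simplified).

    Each turn becomes: <start_of_turn>{role}\n{content}<end_of_turn>\n
    and the prompt ends with an opened 'model' turn for generation.
    """
    parts = []
    for m in messages:
        parts.append(f"<start_of_turn>{m['role']}\n{m['content']}<end_of_turn>\n")
    parts.append("<start_of_turn>model\n")
    return "".join(parts)

prompt = build_gemma_prompt(
    [{"role": "user", "content": "Türkiye'nin başkenti neresidir?"}]
)
print(prompt)
```

Generation stops cleanly when the model emits `<end_of_turn>` to close its own turn, which matches the `eos_token_id` list used in the pipeline example.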