# Cere-llm-gemma-2-9b-it Model Card

Cere-llm-gemma-2-9b-it is a fine-tuned version of google/gemma-2-9b-it, trained on synthetically generated and natural preference datasets.
## Model Details

### Model Description

We fine-tuned google/gemma-2-9b-it on the preference datasets described above.
- Developed by: Cerebrum Tech
- Model type: Causal Language Model
- License: gemma
- Finetuned from model: google/gemma-2-9b-it
## How to Get Started with the Model

```python
import torch
from transformers import pipeline

model_id = "Cerebrum/cere-llm-gemma-2-9b-it"

generator = pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)

outputs = generator(
    # "What is the capital of Türkiye?"
    [{"role": "user", "content": "Türkiye'nin başkenti neresidir?"}],
    do_sample=False,
    eos_token_id=[
        generator.tokenizer.convert_tokens_to_ids("<end_of_turn>"),
        generator.tokenizer.eos_token_id,
    ],
    max_new_tokens=200,
)
# With chat-style input, generated_text holds the full conversation;
# the last message is the assistant's reply.
print(outputs[0]["generated_text"][-1]["content"])
```
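Passing `<end_of_turn>` as an extra `eos_token_id` makes generation stop at Gemma's turn delimiter. Under the hood, the pipeline formats chat messages with the tokenizer's chat template before generation; the hand-rolled helper below is only an illustrative sketch of the Gemma-2 single-turn layout (in practice, use `tokenizer.apply_chat_template()`), shown here to clarify where `<start_of_turn>` and `<end_of_turn>` appear:

```python
def build_gemma_prompt(user_message: str) -> str:
    """Illustrative sketch of the Gemma-2 single-turn chat layout.

    This is not the pipeline's implementation; the canonical template
    ships with the tokenizer and is applied via apply_chat_template().
    """
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

# The model generates its reply after "<start_of_turn>model" and
# signals completion by emitting "<end_of_turn>".
prompt = build_gemma_prompt("Türkiye'nin başkenti neresidir?")
print(prompt)
```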