CEREBRUM LLM

Cere-llm-gemma-2-9b-it Model Card

Cere-llm-gemma-2-9b-it is a fine-tuned version of google/gemma-2-9b-it, trained on synthetically generated and natural preference datasets.

Model Details

Model Description

We fine-tuned google/gemma-2-9b-it on synthetically generated and natural preference datasets.

  • Developed by: Cerebrum Tech
  • Model type: Causal Language Model
  • License: gemma
  • Finetuned from model: google/gemma-2-9b-it

How to Get Started with the Model

import torch
from transformers import pipeline

# Repo id as listed in this card's model tree
model_id = "CerebrumTech/cere-gemma-2-9b-tr"

generator = pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)

outputs = generator(
    [{"role": "user", "content": "Türkiye'nin başkenti neresidir?"}],  # "What is the capital of Türkiye?"
    do_sample=False,
    # Stop at Gemma's end-of-turn marker as well as the regular EOS token
    eos_token_id=[
        generator.tokenizer.convert_tokens_to_ids("<end_of_turn>"),
        generator.tokenizer.eos_token_id,
    ],
    max_new_tokens=200,
)
print(outputs[0]["generated_text"])
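When the pipeline is given a chat-style list of messages, `generated_text` contains the whole conversation rather than just the model's answer. A small helper (hypothetical, not part of transformers) can pull out only the final assistant turn; the mocked output below mirrors the shape the snippet above produces:

```python
def last_assistant_reply(generated_text):
    """Return the content of the final message when the pipeline was
    called with chat-format input; pass plain strings through unchanged."""
    if isinstance(generated_text, list):  # chat-format output
        return generated_text[-1]["content"]
    return generated_text  # plain-string output


# Mocked pipeline output shaped like the example above (the assistant
# text is illustrative, not an actual model generation):
outputs = [{"generated_text": [
    {"role": "user", "content": "Türkiye'nin başkenti neresidir?"},
    {"role": "assistant", "content": "Türkiye'nin başkenti Ankara'dır."},
]}]
print(last_assistant_reply(outputs[0]["generated_text"]))
# → Türkiye'nin başkenti Ankara'dır.
```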
GGUF

  • Model size: 9.24B params
  • Architecture: gemma2
  • Quantizations: 4-bit, 5-bit, 8-bit, 16-bit


Model tree for CerebrumTech/cere-gemma-2-9b-tr

  • Base model: google/gemma-2-9b
  • Quantized versions of this model: 143