Gemma Ko 7B Instruct v0.50

  • Eval Loss: 1.08372
  • Train Loss: 1.09816
  • lr: 1.5e-5
  • optimizer: adamw
  • lr_scheduler_type: cosine
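The cosine scheduler decays the learning rate from its peak (1.5e-5) toward zero along a half-cosine curve over training. A minimal sketch of that schedule; the minimum LR and the absence of warmup are assumptions, as the card does not state them:

```python
import math

def cosine_lr(step: int, total_steps: int,
              peak_lr: float = 1.5e-5, min_lr: float = 0.0) -> float:
    """Cosine decay: returns peak_lr at step 0 and min_lr at total_steps."""
    progress = min(step, total_steps) / total_steps
    return min_lr + 0.5 * (peak_lr - min_lr) * (1.0 + math.cos(math.pi * progress))
```

In practice this is what `transformers` applies when `lr_scheduler_type` is set to `cosine`, typically combined with a linear warmup phase.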

Model Details

Model Description

The Gemma Ko 7B Instruct v0.50 model is designed for generating human-like text in the Korean language. It can be used for a variety of natural language processing tasks, such as language translation, text summarization, question answering, and conversation generation. This model is particularly well-suited for applications that require high-quality, coherent, and contextually relevant Korean text generation.
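For conversation generation, Gemma-family models expect a turn-based prompt format. A minimal sketch of building such a prompt by hand, assuming this fine-tune keeps the base Gemma chat template (check the tokenizer's chat template to confirm; in practice `tokenizer.apply_chat_template` from `transformers` handles this):

```python
def build_gemma_prompt(user_message: str) -> str:
    """Wrap a user message in Gemma's turn markers so the model
    continues generation with the assistant ("model") turn."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

# Example: a Korean instruction prompt
prompt = build_gemma_prompt("한국어로 자기소개를 해줘.")
```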

Limitations and Ethical Considerations

Because Gemma Ko 7B was trained on extensive web data, biases present in that data may be reflected in the model. It may also generate sentences containing errors or factually incorrect information. Outputs should therefore be reviewed critically rather than trusted blindly.

Model Specifications

  • Model size: 8.54B params
  • Tensor type: BF16
  • Format: Safetensors

Model Tree

  • Base model: beomi/gemma-ko-7b
  • This model: lemon-mint/gemma-ko-7b-instruct-v0.50 (fine-tuned from the base model)