Llama3.2-1B-Bulgarian-tokenizer
- Developed by: petkopetkov
- License: apache-2.0
- Finetuned from model: unsloth/llama-3.2-1b-bnb-4bit
Llama3.2-1B finetuned on datasets translated into Bulgarian, using a tokenizer trained on Bulgarian text (a tokenizer-only sketch follows the list below):
- MMLU: multiple-choice questions from various branches of knowledge
- Winogrande challenge: testing commonsense reasoning and world knowledge
- Hellaswag: testing sentence completion
- ARC Easy/Challenge: testing logical reasoning
- GSM-8K: grade-school math word problems
- MathQA: math word problems
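Because the tokenizer was retrained on Bulgarian text, it can also be loaded and inspected on its own. A minimal sketch (the printed token pieces and vocabulary size depend on the released vocabulary):

from transformers import AutoTokenizer

# Load only the Bulgarian tokenizer shipped with the model
tokenizer = AutoTokenizer.from_pretrained("petkopetkov/Llama3.2-1B-bg-tokenizer")

text = "Колко е 2 + 2?"  # "How much is 2 + 2?"
print(tokenizer.tokenize(text))  # subword pieces from the Bulgarian vocabulary
print(len(tokenizer))            # vocabulary size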
Usage
First, install the Transformers library with:
pip install -U transformers
Run with the pipeline API
import torch
from transformers import pipeline

# Load the model as a text-generation pipeline (bfloat16 weights, automatic device placement)
pipe = pipeline(
    "text-generation",
    model="petkopetkov/Llama3.2-1B-bg-tokenizer",
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

prompt = "Колко е 2 + 2?"  # "How much is 2 + 2?"
print(pipe(prompt)[0]['generated_text'])
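If you prefer loading the model and tokenizer explicitly instead of using the pipeline, a minimal sketch along the same lines (generation settings such as max_new_tokens are illustrative, not values recommended by the author):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "petkopetkov/Llama3.2-1B-bg-tokenizer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

prompt = "Колко е 2 + 2?"  # "How much is 2 + 2?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))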