Meet WiroAI/wiroai-turkish-llm-8b, a robust language model with enhanced support for the Turkish language and culture!
Key Features
- Fine-tuned on 500,000+ high-quality Turkish instructions
- Fine-tuned with the LoRA method, without quantization
- Adapted to Turkish culture and local context
- Built on Meta's Llama architecture
Model Details
The model is the Turkish-speaking member of Meta's Llama model family. It was trained with Supervised Fine-Tuning (SFT) on carefully curated, high-quality Turkish instructions and demonstrates strong performance on Turkish language processing tasks. (A hedged sketch of what such a fine-tuning setup might look like follows the Technical Specifications below.)
Technical Specifications
- Architecture: Decoder-only transformer
- Base Model: Meta Llama 3.1 8B
- Training Data: 500,000+ specially selected Turkish instructions
- Language Support: Turkish (with comprehensive local context understanding), plus other widely spoken languages
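To make the training description above more concrete, here is a minimal, hypothetical sketch of a LoRA-based SFT run using the TRL and PEFT libraries. The exact base checkpoint, dataset, hyperparameters, and output path are illustrative assumptions, not the values actually used to train this model.

```python
# Hypothetical sketch of LoRA SFT on a Llama 3.1 8B base (no quantization, as stated in the card).
# Dataset, hyperparameters, and paths below are assumptions for illustration only.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

base_model = "meta-llama/Llama-3.1-8B"  # exact base checkpoint is an assumption

# Placeholder instruction data; SFTTrainer expects a "messages" or "text" column.
dataset = load_dataset("json", data_files="turkish_instructions.jsonl", split="train")

lora_config = LoraConfig(  # LoRA adapters on the attention projections
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

training_args = SFTConfig(
    output_dir="wiroai-turkish-llm-8b-sft",  # illustrative output path
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,
    num_train_epochs=1,
    bf16=True,
)

trainer = SFTTrainer(
    model=base_model,
    args=training_args,
    train_dataset=dataset,
    peft_config=lora_config,
)
trainer.train()
```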
Use Cases
- Text Generation and Editing
- Question Answering
- Summarization
- Analysis and Reasoning
- Content Transformation
- Turkish Natural Language Processing Tasks
- Turkish culture-related questions
Advantages
- Local Understanding: Ability to comprehend Turkish culture, idioms, and current events
- Resource Efficiency: Effective operation even with limited hardware resources (a hedged quantized-loading sketch follows this list)
- Flexible Deployment: Usable on desktop, laptop, or custom cloud infrastructure
- Open Model: Transparent and customizable architecture
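The card does not prescribe a deployment recipe, but on memory-constrained hardware one common option is 4-bit quantized loading via bitsandbytes. The snippet below is a minimal sketch under that assumption (a CUDA GPU with bitsandbytes installed); it is not an official recommendation from the model authors.

```python
# Minimal sketch: loading the model in 4-bit to reduce memory use.
# Assumes a CUDA GPU and bitsandbytes; not part of the official card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "WiroAI/wiroai-turkish-llm-8b"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # run compute in bfloat16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```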
Performance and Limitations
While the model demonstrates high performance in Turkish language tasks, users should consider the following:
- Use clear and structured instructions for best results.
- Verify model outputs for critical applications.
- Evaluate resource requirements before deployment.
- Be aware that the benchmark results below were obtained under specific conditions; the evaluation setup is described below the table so that results can be replicated.
Benchmark Scores
| Models | MMLU TR | TruthfulQA TR | ARC TR | HellaSwag TR | GSM8K TR | WinoGrande TR | Average |
|---|---|---|---|---|---|---|---|
| WiroAI/wiroai-turkish-llm-9b | 59.8 | 49.9 | 53.7 | 57.0 | 66.8 | 60.6 | 58.0 |
| selimc/OrpoGemma-2-9B-TR | 53.0 | 54.3 | 52.4 | 52.0 | 64.8 | 58.9 | 55.9 |
| Metin/Gemma-2-9b-it-TR-DPO-V1 | 51.3 | 54.7 | 52.6 | 51.2 | 67.1 | 55.2 | 55.4 |
| CohereForAI/aya-expanse-8b | 52.3 | 52.8 | 49.3 | 56.7 | 61.3 | 59.2 | 55.3 |
| ytu-ce-cosmos/Turkish-Llama-8b-DPO-v0.1 | 52.0 | 57.6 | 51.0 | 53.0 | 59.8 | 58.0 | 55.2 |
| google/gemma-2-9b-it | 51.8 | 53.0 | 52.2 | 51.5 | 63.0 | 56.2 | 54.6 |
| Eurdem/Defne-llama3.1-8B | 52.9 | 51.2 | 47.1 | 51.6 | 59.9 | 57.5 | 53.4 |
| WiroAI/wiroai-turkish-llm-8b | 52.4 | 49.5 | 50.1 | 54.0 | 57.5 | 57.0 | 53.4 |
| meta-llama/Meta-Llama-3-8B-Instruct | 52.2 | 49.2 | 44.2 | 49.2 | 56.0 | 56.7 | 51.3 |
Benchmarks were run with:

```bash
lm_eval --model_args pretrained=<model_path> --tasks mmlu_tr_v0.2,arc_tr-v0.2,gsm8k_tr-v0.2,hellaswag_tr-v0.2,truthfulqa_v0.2,winogrande_tr-v0.2
```

Please see https://github.com/malhajar17/lm-evaluation-harness_turkish for the evaluation harness, and note that we use the default language-inference setting, which is the same approach as OpenLLMLeaderboard v2.0.
Usage
Transformers Pipeline
```python
import torch
import transformers

model_id = "WiroAI/wiroai-turkish-llm-8b"

# Build a text-generation pipeline in bfloat16 with automatic device placement.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
pipeline.model.eval()

messages = [
    # "You are a Turkish-speaking language model trained by Wiro AI."
    {"role": "system", "content": "Sen Wiro AI tarafından eğitilmiş Türkçe konuşan bir dil modelisin."},
    # "Could you prepare a social media post about Istanbul for me?"
    {"role": "user", "content": "Bana İstanbul ile alakalı bir sosyal medya postu hazırlar mısın?"},
]

# Stop generation at either the EOS token or the Llama 3 end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    messages,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.9,
)
print(outputs[0]["generated_text"][-1]["content"])
```
Example output:

```
İstanbul'un büyüsüne kapılın! :city_sunset:
Halk arasında "dünyanın masalı şehri" olarak bilinen İstanbul, her köşesinde tarih, kültür ve modern yaşamın bir araya geldiği eşsiz bir şehir.
Yüzyıllardır farklı medeniyetlerin izlerini taşıyan İstanbul, tarihi mekanlarından, müzelerinden, çarşılarından ve restoranlarından oluşan zengin kültürel mirasa sahiptir.
Boğaz'ın eşsiz manzarasında tekne turu yapmak, Topkapı Sarayı'nı ziyaret etmek, Grand Bazaar'da alışveriş yapmak, Mısır Çarşısı'nın canlı atmosferinde kaybolmak, Galata Kulesi'nden muhteşem bir manzara deneyimlemek veya Beyoğlu'nun hareketli sokaklarında yürüyüş yapmak İstanbul'da unutulmaz anılar yaratmak için fırsatlar sunar.
İstanbul'un büyülü atmosferini kendiniz yaşamak için hemen planınızı yapın! :flag-tr: #İstanbul #Türkiye #Seyahat #Tarih #Kültür #Gezi
```
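If you prefer not to use the pipeline wrapper, the same chat can be run by loading the model and tokenizer directly and applying the chat template yourself. This is a minimal sketch assuming the standard transformers chat-template API; the prompts and sampling settings mirror the pipeline example above.

```python
# Minimal sketch: direct generation without the pipeline wrapper.
# Assumes the standard transformers chat-template API; settings mirror the pipeline example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WiroAI/wiroai-turkish-llm-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "Sen Wiro AI tarafından eğitilmiş Türkçe konuşan bir dil modelisin."},
    {"role": "user", "content": "Bana İstanbul ile alakalı bir sosyal medya postu hazırlar mısın?"},
]

# Render the chat template, adding the assistant generation prompt.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.9,
    eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")],
)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```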
License and Usage
This model is provided under the Apache 2.0 license. Please review and accept the license terms before use.
Contact and Support
For questions, suggestions, and feedback, please open an issue on Hugging Face or contact us directly through our website.
Citation

```bibtex
@article{WiroAI,
  title={WiroAI/wiroai-turkish-llm-8b},
  author={Abdullah Bezir and Furkan Burhan Türkay and Cengiz Asmazoğlu},
  year={2024},
  url={https://huggingface.co/WiroAI/wiroai-turkish-llm-8b}
}
```
Evaluation results (self-reported)
- MMLU_TR_V0.2 (5-shot): 0.524
- Truthful_QA_V0.2 (0-shot): 0.495
- ARC_TR_V0.2 (25-shot): 0.501
- HellaSwag_TR_V0.2 (10-shot): 0.540
- GSM8K_TR_V0.2 (5-shot): 0.575
- Winogrande_TR_V0.2 (5-shot): 0.570