Update README.md
# LLAMA-3.1 8B Chat Turkish Model

- **Developed by:** inetnuc
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-bnb-4bit

This LLAMA-3.1 model was finetuned to enhance text-generation capabilities for nuclear-related topics. Training was accelerated with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library, making finetuning roughly 2x faster.

## Finetuning Process

The model was finetuned using the Unsloth library, leveraging its efficient training capabilities. The process included the following steps (a code sketch follows the list):

1. **Data Preparation:** Loaded and preprocessed nuclear-related data.
2. **Model Loading:** Utilized `unsloth/Meta-Llama-3.1-8B-bnb-4bit` as the base model.
3. **LoRA Patching:** Applied LoRA (Low-Rank Adaptation) for efficient training.
4. **Training:** Finetuned the model using Hugging Face's TRL library with optimized hyperparameters.
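
The exact training script and dataset are not published with this card, so the following is only a minimal sketch of the workflow described in the steps above, using the standard Unsloth + TRL pattern. The dataset path and the hyperparameters (`r`, learning rate, step count, batch size) are illustrative placeholders rather than the values used for this model, and the `SFTTrainer` keyword arguments may differ slightly depending on your `trl` version.

```python
from datasets import load_dataset
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

# 1. Data preparation: placeholder path; the nuclear-related corpus is not released.
dataset = load_dataset("json", data_files="nuclear_corpus.jsonl", split="train")

# 2. Model loading: the 4-bit base model, loaded through Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# 3. LoRA patching: attach low-rank adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# 4. Training: supervised finetuning with TRL's SFTTrainer.
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        max_steps=60,
        fp16=True,
        logging_steps=10,
        output_dir="outputs",
    ),
)
trainer.train()
```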

## Model Details

- **Base Model:** `unsloth/Meta-Llama-3.1-8B-bnb-4bit`
- **Language:** Turkish (`tr`), English (`en`)
- **License:** Apache-2.0

## Author

**MUSTAFA UMUT OZBEK**

**https://www.linkedin.com/in/mustafaumutozbek/**
**https://x.com/m_umut_ozbek**

## Usage

### Loading the Model

You can load the model and tokenizer using the following code snippet:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("inetnuc/Turkish-Llama-3.1-8B-4bit-lora")
model = AutoModelForCausalLM.from_pretrained("inetnuc/Turkish-Llama-3.1-8B-4bit-lora")

# Example of generating text (the Turkish prompt asks: "Should nuclear energy investments in Türkiye be increased? What do you think?")
inputs = tokenizer("Türkiye'de nükleer enerji yatırımları artırılmalı mı, ne düşünüyorsun?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
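
The repository name ends in `-lora`, which suggests it may contain LoRA adapter weights rather than a fully merged checkpoint. If the plain `AutoModelForCausalLM` call above does not load the finetuned weights for you, the sketch below shows the assumed alternative of attaching the adapter to the 4-bit base model with the `peft` library; this is an inference from the naming, not something the card states.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# The base repo ships pre-quantized 4-bit (bitsandbytes) weights, so no extra
# quantization config is needed here.
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Meta-Llama-3.1-8B-bnb-4bit",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("inetnuc/Turkish-Llama-3.1-8B-4bit-lora")

# Attach the LoRA adapter weights from this repository; generation then works
# exactly as in the snippet above.
model = PeftModel.from_pretrained(base, "inetnuc/Turkish-Llama-3.1-8B-4bit-lora")
```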