---
base_model: unsloth/gemma-7b-bnb-4bit
library_name: peft
---

|
# Gemma 7B Fine-Tuned LoRA Model

|
## Overview

This is a **LoRA (Low-Rank Adaptation)** fine-tuned model based on the **`unsloth/gemma-7b-bnb-4bit`** base model. It has been adapted for a **tipification (legal classification) task**, similar to the Llama-3.2-3B-Instruct LoRA fine-tuning, in which the model classifies text into categories such as **"ESTAFA," "ROBO," "HURTO,"** and their **"TENTATIVA DE"** (attempted) variations.

|
During fine-tuning, only the LoRA adapter layers were trained (~50 million parameters), while the rest of the base model was kept frozen. This parameter-efficient approach significantly reduces memory use and computational cost compared to full fine-tuning.

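Concretely, this corresponds to wrapping the 4-bit base model with a PEFT `LoraConfig` along the lines of the sketch below. The parameter values mirror the Key Features section; the snippet is illustrative and is not the original training script.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Illustrative sketch: values mirror the Key Features section below;
# this is not the original training script.
lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["gate_proj", "up_proj", "down_proj",
                    "k_proj", "q_proj", "o_proj", "v_proj"],
)

base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/gemma-7b-bnb-4bit",  # 4-bit base model (requires bitsandbytes)
    device_map="auto",
)
base_model = prepare_model_for_kbit_training(base_model)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # ~50M trainable parameters
```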
|
---

## Key Features

|
- **Base Model**: `unsloth/gemma-7b-bnb-4bit`
- **Task Type**: Causal Language Modeling (`CAUSAL_LM`)
- **LoRA Parameters**:
  - `r`: 16
  - `lora_alpha`: 16
  - `lora_dropout`: 0.0
- **Target Modules**:
  - `gate_proj`, `up_proj`, `down_proj`, `k_proj`, `q_proj`, `o_proj`, `v_proj`
- **Number of Trainable Parameters**: **50,003,968**
- **Training Loss & Validation Loss**:
  - Observed over **117 steps** (1 epoch).
  - See the table in the Training Details section for step-by-step values.

|
---

## Dataset Distribution

This model was fine-tuned on the **same dataset** as the Llama-3.2-3B-Instruct LoRA version, with the following category distribution:

|
| **Category**        | **Count** | **Percentage** |
|---------------------|-----------|----------------|
| ESTAFA              | 4610      | 47.3%          |
| ROBO                | 2307      | 23.7%          |
| HURTO               | 2141      | 22.0%          |
| TENTATIVA DE ESTAFA | 306       | 3.1%           |
| TENTATIVA DE ROBO   | 272       | 2.8%           |
| TENTATIVA DE HURTO  | 113       | 1.2%           |
| **Total**           | **9749**  | **100%**       |

|
The table above summarizes roughly 10K examples; the fine-tuning run itself used an extended version of the dataset (~15K examples).

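The percentages above follow directly from the raw counts (count / 9,749). A quick way to reproduce the summary, using only the counts from the table (the raw dataset itself is not distributed with this adapter):

```python
import pandas as pd

# Reconstructed from the counts in the table above; the raw dataset
# is not part of this repository.
counts = pd.Series({
    "ESTAFA": 4610,
    "ROBO": 2307,
    "HURTO": 2141,
    "TENTATIVA DE ESTAFA": 306,
    "TENTATIVA DE ROBO": 272,
    "TENTATIVA DE HURTO": 113,
})

percentages = (counts / counts.sum() * 100).round(1)
print(pd.DataFrame({"Count": counts, "Percentage": percentages}))
print("Total:", counts.sum())  # 9749
```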
|
---

## Training Details

|
- **Hardware**: Single NVIDIA A100 GPU (40 GB)
- **Num Examples**: ~15,000
- **Epochs**: 1
- **Batch Size per Device**: 32
- **Gradient Accumulation Steps**: 4
- **Effective Total Batch Size**: 128 (32 × 4; see the training-arguments sketch after this list)
- **Total Steps**: 117 (≈ 15,000 examples / 128)
- **Number of Trainable Parameters**: 50,003,968

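These hyperparameters map onto a Hugging Face `TrainingArguments` configuration roughly like the sketch below. The output directory is a placeholder, and settings that were not reported (learning rate, optimizer, scheduler) are left at library defaults.

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; unreported settings
# (learning rate, optimizer, scheduler) are left at library defaults.
training_args = TrainingArguments(
    output_dir="gemma-7b-lora-tipification",  # placeholder path
    num_train_epochs=1,
    per_device_train_batch_size=32,
    gradient_accumulation_steps=4,  # effective batch size: 32 * 4 = 128
    logging_steps=10,               # matches the loss table below
)
```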
|
### Training and Validation Loss

Below is a snapshot of how the training and validation losses evolved during the single epoch (117 steps):

|
| **Step** | **Training Loss** | **Validation Loss** |
|----------|-------------------|---------------------|
| 10       | 2.974900          | 4.242294            |
| 20       | 5.451000          | 4.526450            |
| 30       | 4.150400          | 3.632928            |
| 40       | 3.036100          | 2.615031            |
| 50       | 2.492900          | 2.178700            |
| 60       | 2.095400          | 1.886430            |
| 70       | 2.099200          | 1.548187            |
| 80       | 1.983100          | 2.104600            |
| 90       | 2.020900          | 1.526225            |
| 100      | 1.727700          | 1.699223            |
| 110      | 1.868300          | 1.716561            |
| ...      | ...               | ...                 |

|
Training concluded at **step 117**. After an initial spike early in training, both the training and validation losses trend downward, with some fluctuation (e.g. the validation loss at step 80), indicating that the model was still converging over the single epoch.

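To visualize the trend, the logged values can be plotted directly. A minimal matplotlib sketch using the values from the table above:

```python
import matplotlib.pyplot as plt

# Values copied from the loss table above (steps 10-110).
steps = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110]
train_loss = [2.9749, 5.4510, 4.1504, 3.0361, 2.4929, 2.0954,
              2.0992, 1.9831, 2.0209, 1.7277, 1.8683]
val_loss = [4.242294, 4.526450, 3.632928, 2.615031, 2.178700, 1.886430,
            1.548187, 2.104600, 1.526225, 1.699223, 1.716561]

plt.plot(steps, train_loss, marker="o", label="Training loss")
plt.plot(steps, val_loss, marker="s", label="Validation loss")
plt.xlabel("Step")
plt.ylabel("Loss")
plt.title("Gemma 7B LoRA fine-tuning (1 epoch, 117 steps)")
plt.legend()
plt.show()
```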
|
---

## Deployment Instructions

This repository contains LoRA adapter weights rather than a full standalone model. You can use them with the Hugging Face Transformers and PEFT libraries by loading the 4-bit base model and attaching the adapter. Below is an example of how to load and run the model for text generation or classification-like tasks:

|
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model_id = "unsloth/gemma-7b-bnb-4bit"
adapter_id = "Petermoyano/unsloth-gemma-7b-bnb-4bit-LoRA-Tipification-CausalLM-16R-16Alpha-1Epoch"

# Load the 4-bit quantized base model, then attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

# Run a short generation with a category-style prompt
input_text = "TENTATIVA DE ESTAFA:"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
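The prompt and generation settings above are only illustrative. For classification-style use, format the input the same way the fine-tuning data was formatted and parse the predicted category (ESTAFA, ROBO, HURTO, or one of their TENTATIVA DE variants) from the generated continuation; adjust `max_new_tokens` and the decoding parameters as needed.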