Fine-tuning Parameters
- Learning rate: 2e-4
- Optimizer: adamw_8bit
- Weight decay: 0.01
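The `adamw_8bit` optimizer is an 8-bit-state variant of AdamW, which applies weight decay decoupled from the gradient-based update. As a reference, the full-precision AdamW update implied by these settings (learning rate 2e-4, weight decay 0.01; the default betas and epsilon shown here are assumptions, not stated on this card) can be sketched for a single scalar parameter:

```python
import math

def adamw_step(theta, grad, state, lr=2e-4, betas=(0.9, 0.999),
               eps=1e-8, weight_decay=0.01):
    """One decoupled-weight-decay AdamW update on a scalar parameter.
    Illustrative sketch only; the 8-bit variant quantizes the m/v state."""
    state["t"] += 1
    t = state["t"]
    # Exponential moving averages of the gradient and its square.
    state["m"] = betas[0] * state["m"] + (1 - betas[0]) * grad
    state["v"] = betas[1] * state["v"] + (1 - betas[1]) * grad ** 2
    # Bias correction for the zero-initialized moment estimates.
    m_hat = state["m"] / (1 - betas[0] ** t)
    v_hat = state["v"] / (1 - betas[1] ** t)
    # Decoupled weight decay: shrinks the parameter directly,
    # instead of being folded into the gradient.
    return theta - lr * (m_hat / (math.sqrt(v_hat) + eps) + weight_decay * theta)

state = {"t": 0, "m": 0.0, "v": 0.0}
theta = adamw_step(1.0, 1.0, state)  # one step from theta=1.0 with grad=1.0
```

In practice these hyperparameters would be passed to the training framework (e.g. TRL's trainer arguments) rather than implemented by hand; the sketch only makes the update rule concrete.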
Evaluation Scores
- SacreBLEU Score: 38.05
- TER Score: 46.61 (lower is better)
- chrF++ Score: 57.84
- METEOR Score: 0.585
- COMET Score: 0.608
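TER (Translation Edit Rate) measures the number of edits needed to turn the hypothesis into the reference, normalized by reference length; a score near 46.6 means roughly 46.6 edits per 100 reference words. A simplified sketch of the idea, using a plain word-level edit distance (real TER additionally counts block shifts as single edits, which this hypothetical helper omits):

```python
def ter_sketch(hyp, ref):
    """Simplified TER: word-level insert/delete/substitute edit distance
    divided by reference length, as a percentage. Real TER also counts
    phrase shifts as single edits; this sketch ignores them."""
    h, r = hyp.split(), ref.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        d[i][0] = i
    for j in range(len(r) + 1):
        d[0][j] = j
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            cost = 0 if h[i - 1] == r[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 100.0 * d[len(h)][len(r)] / len(r)

score = ter_sketch("the cat sat", "the cat sat on the mat")  # 3 edits / 6 words
```

For the scores reported on this card, the `sacrebleu` package (which implements BLEU, chrF++, and TER) would normally be used rather than a hand-rolled metric.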
Uploaded model
- Developed by: NairaRahim
- License: apache-2.0
- Finetuned from model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
Model tree for NairaRahim/fine_tuned900_llama3.1
- Base model: meta-llama/Llama-3.1-8B
- Finetuned: meta-llama/Llama-3.1-8B-Instruct