# sst2_mnli_qqp_FullFT_

Fine-tuned LLaMA model on the SST2_MNLI_QQP dataset.

- **Method**: Full fine-tuning (no LoRA)
- **LoRA Rank**: N/A
- **Tasks**: SST2_MNLI_QQP
- **Base Model**: LLaMA 1B
- **Optimizer**: AdamW
- **Batch Size**: 4

Trained using the 🤗 Transformers `Trainer` API.
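
Below is a minimal sketch of a comparable full fine-tuning setup with the `Trainer` API. The base checkpoint name, number of labels, epoch count, and the toy dataset are assumptions for illustration only; the optimizer (AdamW), batch size (4), and the absence of LoRA adapters follow the settings listed above.

```python
# Illustrative full fine-tuning sketch; checkpoint name and dataset are placeholders.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_checkpoint = "meta-llama/Llama-3.2-1B"  # assumed 1B LLaMA base; substitute the actual checkpoint

tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers ship without a pad token

model = AutoModelForSequenceClassification.from_pretrained(
    base_checkpoint,
    num_labels=2,  # illustrative; the label space of the mixed SST2/MNLI/QQP setup is not stated on the card
)
model.config.pad_token_id = tokenizer.pad_token_id

# Toy stand-in for the tokenized SST2/MNLI/QQP training mixture.
toy_data = Dataset.from_dict(
    {"text": ["a genuinely moving film", "a tedious, lifeless film"], "label": [1, 0]}
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

train_dataset = toy_data.map(tokenize, batched=True)

training_args = TrainingArguments(
    output_dir="sst2_mnli_qqp_FullFT_",
    per_device_train_batch_size=4,  # batch size from the card
    optim="adamw_torch",            # AdamW optimizer from the card
    num_train_epochs=1,             # assumed; not stated on the card
    report_to="none",
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()
```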