# sst2_mnli_qqp_LoRA_8

Fine-tuned LLaMA model on the SST2, MNLI, and QQP datasets.

- **LoRA**: Enabled
- **LoRA Rank**: 8
- **Tasks**: SST2, MNLI, QQP
- **Base Model**: [LLaMA 1B (Meta)](https://huggingface.co/meta-llama/Llama-3.2-1B)
- **Optimizer**: AdamW
- **Batch Size**: 128
- **Max Sequence Length**: 128 tokens
- **Tokenizer**: LLaMA-1B tokenizer

Trained using the 🤗 Transformers `Trainer` API.

## **Usage**

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("emirhanboge/LLaMA_1B_sst2_mnli_qqp_LoRA_8")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
```
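
As a minimal inference sketch building on the snippet above: the example sentence, the truncation settings, and the interpretation of the predicted class id are assumptions for illustration, not details stated in this model card.

```python
import torch

# Illustrative SST2-style input (assumed example text, not from the model card).
inputs = tokenizer(
    "This movie was surprisingly good!",
    return_tensors="pt",
    truncation=True,
    max_length=128,  # matches the 128-token max sequence length listed above
)

with torch.no_grad():
    logits = model(**inputs).logits

# The highest logit gives the predicted class id; how ids map to label names
# depends on the model's config (assumption).
predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)
```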