
# sst2_mnli_qqp_LoRA_32

LLaMA 1B fine-tuned with LoRA (rank 32) on the SST-2, MNLI, and QQP GLUE tasks.

  • LoRA: Enabled
  • LoRA Rank: 32
  • Tasks: SST2, MNLI, QQP
  • Base Model: LLaMA 1B (Meta)
  • Optimizer: AdamW
  • Batch Size: 128
  • Max Sequence Length: 128 tokens
  • Tokenizer: LLaMA-1B tokenizer

Trained using the 🤗 Transformers Trainer API.

## Usage

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned classification model from the Hub.
model = AutoModelForSequenceClassification.from_pretrained("emirhanboge/LLaMA_1B_sst2_mnli_qqp_LoRA_32")

# The model uses the base LLaMA tokenizer.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
```
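After running the classifier, the predicted class id must be mapped back to a task label. A minimal sketch, assuming the head follows the standard GLUE label order for each task (verify against `model.config.id2label` before relying on it):

```python
# Standard GLUE label orders (assumption: the fine-tuned head matches these;
# check model.config.id2label to confirm for this checkpoint).
GLUE_LABELS = {
    "sst2": ["negative", "positive"],
    "mnli": ["entailment", "neutral", "contradiction"],
    "qqp": ["not_duplicate", "duplicate"],
}

def decode_prediction(task: str, class_id: int) -> str:
    """Return the human-readable label for a predicted class id."""
    labels = GLUE_LABELS[task]
    if not 0 <= class_id < len(labels):
        raise ValueError(f"class id {class_id} out of range for task {task!r}")
    return labels[class_id]
```

For example, after `class_id = model(**inputs).logits.argmax(-1).item()`, calling `decode_prediction("sst2", class_id)` yields the sentiment label.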