# sst2_mnli_qqp_LoRA_4

LLaMA 1B fine-tuned on the SST2, MNLI, and QQP tasks using LoRA (rank 4).
- LoRA: Enabled
- LoRA Rank: 4
- Tasks: SST2, MNLI, QQP
- Base Model: LLaMA 1B (`meta-llama/Llama-3.2-1B`, Meta)
- Optimizer: AdamW
- Batch Size: 128
- Max Sequence Length: 128 tokens
- Tokenizer: LLaMA-1B tokenizer
Trained using the 🤗 Transformers `Trainer` API; a rough configuration sketch is shown below.
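The following is a minimal sketch of how the setup above could be reproduced with the `peft` library. The training script itself is not published with this card, so the target modules, label count, and other arguments are assumptions; only the LoRA rank, optimizer, and batch size come from the list above.

```python
# Hypothetical reconstruction of the training setup; not the author's script.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-3.2-1B",
    num_labels=2,  # assumption: SST2/QQP use 2 labels, MNLI would need 3
)

lora_config = LoraConfig(
    r=4,                                  # LoRA rank from the card
    task_type="SEQ_CLS",
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
)
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=128,  # batch size from the card
    optim="adamw_torch",              # AdamW optimizer from the card
)
# A Trainer would then be constructed with these args and the tokenized
# SST2/MNLI/QQP datasets (capped at 128 tokens per the card).
```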
## Usage

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("emirhanboge/LLaMA_1B_sst2_mnli_qqp_LoRA_4")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
```
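A minimal inference sketch, assuming the checkpoint exposes a standard sequence-classification head; the example sentence is illustrative, and the predicted index must be interpreted against the task-specific label mapping:

```python
import torch

# Classify a sentence; truncate to the 128-token limit used in training.
inputs = tokenizer(
    "A thoroughly enjoyable film.",
    return_tensors="pt",
    truncation=True,
    max_length=128,
)
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)  # index into model.config.id2label (task-dependent)
```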