# sst2_mnli_qqp_FullFT_

Fine-tuned LLaMA 1B model on the SST-2, MNLI, and QQP datasets.
- Method: Full Fine-Tuning (no LoRA)
- LoRA Rank: N/A
- Tasks: SST2, MNLI, QQP
- Base Model: LLaMA 3.2 1B (meta-llama/Llama-3.2-1B)
- Optimizer: AdamW
- Batch Size: 32
- Max Sequence Length: 128 tokens
- Tokenizer: LLaMA-1B tokenizer
Trained using the 🤗 Transformers Trainer API.
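As a rough illustration, a Trainer setup consistent with the hyperparameters above might look like the sketch below. It is simplified to SST-2 alone (the actual checkpoint was trained on SST-2, MNLI, and QQP jointly), and the learning rate and epoch count are assumptions, not values from this card.

```python
# Hedged sketch of a Trainer setup matching the hyperparameters above.
# Simplification: trains on SST-2 only; learning rate and epochs are assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
model = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-3.2-1B", num_labels=2  # 2 labels for SST-2 in this simplification
)

# LLaMA tokenizers ship without a pad token; reuse EOS so batching works.
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.eos_token_id

def tokenize(batch):
    # Max sequence length of 128 tokens, as stated on the card.
    return tokenizer(batch["sentence"], truncation=True, max_length=128)

train_dataset = load_dataset("glue", "sst2")["train"].map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="sst2_mnli_qqp_FullFT_",
    per_device_train_batch_size=32,  # batch size from the card
    optim="adamw_torch",             # AdamW, as stated above
    learning_rate=2e-5,              # assumption
    num_train_epochs=3,              # assumption
)

Trainer(model=model, args=args, train_dataset=train_dataset, tokenizer=tokenizer).train()
```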
## Usage

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned classification checkpoint and the base LLaMA tokenizer.
model = AutoModelForSequenceClassification.from_pretrained("emirhanboge/LLaMA_1B_sst2_mnli_qqp_FullFT_")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
```
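A minimal inference sketch follows, assuming the checkpoint's classification head is used as-is and that the tokenizer needs a pad token set (LLaMA tokenizers ship without one). The example sentence is arbitrary, and the id-to-label mapping is not documented on this card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("emirhanboge/LLaMA_1B_sst2_mnli_qqp_FullFT_")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")

# Assumption: the tokenizer has no pad token by default; reuse EOS.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
    model.config.pad_token_id = tokenizer.eos_token_id

inputs = tokenizer(
    "a gripping, well-acted thriller.",  # arbitrary SST-2-style input
    truncation=True, max_length=128, return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(pred)  # class index; label names are not documented on this card
```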