# jfforero/distilbert-base-uncased-BERT-POET4
This model is a fine-tuned version of distilbert-base-uncased for sentiment analysis.
## Model description
This model is trained for sentiment classification with three labels: positive, neutral, and negative. It is based on DistilBERT, a lighter and faster version of BERT.
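To illustrate how a three-label classification head produces a prediction, here is a minimal, standard-library-only sketch of mapping raw logits to a label via softmax. The label order shown is an assumption for illustration; check the model's `id2label` config for the actual mapping.

```python
import math

# Assumed label mapping for illustration; verify against the model config.
id2label = {0: "negative", 1: "neutral", 2: "positive"}

def softmax(logits):
    """Convert raw scores to probabilities that sum to 1."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [-1.2, 0.3, 2.1]  # hypothetical raw scores from the classifier head
probs = softmax(logits)
pred = id2label[probs.index(max(probs))]
print(pred)  # → positive
```

The prediction is simply the argmax of the probabilities; the softmax scores themselves can be reported as confidence values.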
## Intended uses & limitations

### Intended Use
- Sentiment classification of short texts, such as product reviews or social media posts.
- Designed for English-language input.
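For the use cases above, a usage sketch with the Transformers `pipeline` API (this assumes network access to the Hugging Face Hub on first call; label names come from the model's config):

```python
from transformers import pipeline

def classify(texts):
    """Score short English texts (reviews, social media posts) as
    positive / neutral / negative using this model.

    Returns a list of dicts like [{"label": ..., "score": ...}].
    """
    classifier = pipeline(
        "text-classification",
        model="jfforero/distilbert-base-uncased-BERT-POET4",
    )
    return classifier(texts)
```

For example, `classify(["I really enjoyed this product!"])` returns the predicted label and its softmax score for that review.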
### Limitations
- May struggle with sarcasm, irony, and other figurative language.
- Performance depends on the quality and coverage of the training data; texts from domains unlike the training set may be misclassified.
## Training and evaluation data
- Fine-tuned on [Dataset Name] (if available, add link).
- Contains X training samples and Y validation samples.
## Training procedure

### Training hyperparameters
- Optimizer: AdamW
- Learning Rate: 2e-5
- Batch Size: X
- Epochs: Y
### Training results

| Train Loss | Validation Loss | Epoch |
|---|---|---|
| X.XXXX | Y.YYYY | Z |
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.15.2