# BERT-ASTD Balanced
An Arabic BERT model fine-tuned on the balanced version of the ASTD dataset to identify Twitter sentiment in Modern Standard Arabic (MSA).
## Data
The model was fine-tuned on ~1,330 Arabic-language tweets.
## Results
| Class | Precision | Recall | F1-score | Support |
|---|---|---|---|---|
| 0 | 0.9328 | 0.9398 | 0.9363 | 133 |
| 1 | 0.9394 | 0.9323 | 0.9358 | 133 |
| Accuracy | | | 0.9361 | 266 |
## How to use
You can use this model with either PyTorch or TensorFlow and the Hugging Face `transformers` library. Load it directly like this:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "mofawzy/BERT-ASTD"
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
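For example, here is a minimal PyTorch inference sketch; the example tweet and the interpretation of the two class ids are assumptions, since the card does not document the label mapping:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "mofawzy/BERT-ASTD"
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model.eval()

# Hypothetical example tweet ("The service was excellent and wonderful").
text = "الخدمة كانت ممتازة ورائعة"

# Tokenize and run a forward pass without tracking gradients.
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities and take the highest-scoring class id (0 or 1).
probs = torch.softmax(logits, dim=-1)[0]
predicted_class = int(probs.argmax())
print(predicted_class, probs.tolist())
```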