
Model Card for BERT-base Sentiment Analysis Model

Model Details

This model is a fine-tuned version of BERT-base for sentiment analysis tasks.

Training Data

The model was fine-tuned on the Rotten Tomatoes movie review dataset, which consists of short English movie reviews labeled as positive or negative.

Training Procedure

  • Learning Rate: 2e-5
  • Epochs: 3
  • Batch Size: 16

These hyperparameters are listed so that others can reproduce the results with the same BERT-base model and the Rotten Tomatoes dataset; a sketch of the corresponding fine-tuning setup is shown below.
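
The exact fine-tuning script was not released with this card. The following is a minimal sketch consistent with the hyperparameters above, assuming the Hugging Face Trainer and the rotten_tomatoes dataset from the Hub; the maximum sequence length (128), the output directory name, and the use of the validation split for evaluation are assumptions, not details from the card.

from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

# Sketch only: learning rate, epochs, and batch size match the card; other settings are assumptions.
dataset = load_dataset("rotten_tomatoes")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # Pad/truncate to a fixed length (128 is an assumed value, not stated in the card).
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

args = TrainingArguments(
    output_dir="bert-base-rotten-tomatoes",  # assumed directory name
    learning_rate=2e-5,
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
print(trainer.evaluate())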

How to Use

The snippet below shows how to load the model with the Hugging Face Transformers library and run it on a single review.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Replace "bert-base-uncased" with this model's repository id to load the fine-tuned
# weights; the base checkpoint alone has a randomly initialized classification head.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

input_text = "The movie was fantastic with a gripping storyline!"
inputs = tokenizer(input_text, return_tensors="pt")

# Run inference without tracking gradients.
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits)
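
Continuing from the snippet above, the logits can be turned into a class prediction with a softmax. The label order below (0 = negative, 1 = positive) matches the Rotten Tomatoes dataset but is an assumption, since the card does not state an id2label mapping.

import torch

# Convert logits to probabilities and pick the most likely class.
probs = torch.softmax(outputs.logits, dim=-1)
pred_id = probs.argmax(dim=-1).item()

labels = ["negative", "positive"]  # assumed label order
print(labels[pred_id], probs[0, pred_id].item())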

Evaluation Results

  • Accuracy: 81.97%
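
The card does not say how this accuracy figure was computed. A minimal sketch of how it could be reproduced on the Rotten Tomatoes test split is shown below; "path/to/fine-tuned-checkpoint" is a placeholder, since the card does not give the fine-tuned model's repository id.

import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "path/to/fine-tuned-checkpoint"  # placeholder for the fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
model.eval()

test_set = load_dataset("rotten_tomatoes", split="test")

correct = 0
for example in test_set:
    inputs = tokenizer(example["text"], return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    correct += int(logits.argmax(dim=-1).item() == example["label"])

print(f"Accuracy: {correct / len(test_set):.2%}")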

Limitations

The model may produce biased or unreliable predictions because of biases in its training data, and since it was trained only on English movie reviews it may not generalize well to other domains. It is recommended to use the model with caution and to apply appropriate checks or filters in downstream applications.

Ethical Considerations

  • Bias: The model may inherit biases present in the training data.
  • Misuse: The model's predictions could be misused, for example to lend false credibility to misleading or harmful claims about content.

Copyright and License

This model is licensed under the MIT License.
