---
license: apache-2.0
datasets:
- AyoubChLin/20NewsGroup-AgNews-CnnNews
- AyoubChLin/CNN_News_Articles_2011-2022
- ag_news
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
widget:
- text: money in the pocket
- text: no one can win this cup in Qatar
- text: >-
    new transformer architectures can build a large language model with low
    resources
---
# Model Card for news-ESG-Bert
<!-- Provide a quick summary of what the model is/does. -->
This model is a version of [ESG-BERT](https://huggingface.co/nbroad/ESG-BERT) fine-tuned for news text classification on a combined 20NewsGroup / AG News / CNN News dataset.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [CHERGUELAINE Ayoub](https://www.linkedin.com/in/ayoub-cherguelaine/) & [BOUBEKRI Faycal](https://www.linkedin.com/in/faycal-boubekri-832848199/)
- **Shared by [optional]:** Hugging Face
- **Model type:** Language model (BERT-based text classifier)
- **Language(s) (NLP):** English
- **License:** apache-2.0
- **Finetuned from model [optional]:** [ESG-Bert](https://huggingface.co/nbroad/ESG-BERT)
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://huggingface.co/nbroad/ESG-BERT
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
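A minimal sketch, assuming the model is published on the Hub under the repo ID `AyoubChLin/news-ESG-Bert` (inferred from this card's location; verify the exact ID on the Hub):

```python
from transformers import pipeline

# Repo ID assumed from this card's location on the Hub.
MODEL_ID = "AyoubChLin/news-ESG-Bert"

def load_classifier(model_id: str = MODEL_ID):
    """Return a text-classification pipeline for the fine-tuned model."""
    return pipeline("text-classification", model=model_id)

# Example usage (downloads the model weights on first call):
# classifier = load_classifier()
# classifier("new transformer architectures can build large language models with low resources")
```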
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[AyoubChLin/20NewsGroup-AgNews-CnnNews](https://huggingface.co/datasets/AyoubChLin/20NewsGroup-AgNews-CnnNews)
#### Fine-tuning hyper-parameters
- learning_rate = 4e-5
- batch_size = 8
- max_seq_length = 256
- num_train_epochs = 2.0
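The hyper-parameters above can be collected into a single config dict (a sketch; where possible the keys mirror `transformers.TrainingArguments` argument names):

```python
# Fine-tuning configuration as reported in this card.
FINETUNE_CONFIG = {
    "learning_rate": 4e-5,
    "per_device_train_batch_size": 8,  # batch_size
    "max_seq_length": 256,             # tokenizer truncation length
    "num_train_epochs": 2.0,
}
```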
#### Testing Data
- [CNN-NEWS-Article](https://huggingface.co/datasets/AyoubChLin/CNN_News_Articles_2011-2022)
- [ag_news](https://huggingface.co/datasets/ag_news)
#### Metrics
Accuracy
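Accuracy here is the fraction of test articles whose predicted label exactly matches the reference label; a minimal sketch:

```python
def accuracy(predictions, references):
    """Fraction of predictions that exactly match the reference labels."""
    if not references:
        raise ValueError("empty reference list")
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

accuracy([0, 1, 1, 2], [0, 1, 2, 2])  # -> 0.75
```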
### Results
| Dataset | Accuracy | Loss |
| --- | --- | --- |
| [CNN-NEWS-Article](https://huggingface.co/datasets/AyoubChLin/CNN_News_Articles_2011-2022) | 0.957791 | 0.197338 |
| [ag_news](https://huggingface.co/datasets/ag_news) | 0.9417105 | 0.25715 |
#### Summary