---
license: apache-2.0
datasets:
- AyoubChLin/CNN_News_Articles_2011-2022
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- news classification
---
13
# Fine-Tuned BART Model for Text Classification on CNN News Articles

[![Hugging Face Model](https://img.shields.io/huggingface/model/IT-community/Bart_News_text_classification?color=blue&logo=huggingface)](https://huggingface.co/IT-community/Bart_News_text_classification)
[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://www.apache.org/licenses/LICENSE-2.0)

This is a fine-tuned BART (Bidirectional and Auto-Regressive Transformers) model for text classification on CNN news articles. It was fine-tuned on a dataset of CNN news articles labeled by topic, using a batch size of 32, a learning rate of 6e-5, and one training epoch.

## How to Use

### Install

```bash
pip install transformers torch
```

### Example Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("IT-community/Bart_News_text_classification")
model = AutoModelForSequenceClassification.from_pretrained("IT-community/Bart_News_text_classification")

# Tokenize the input text
text = "This is an example CNN news article about politics."
inputs = tokenizer(text, padding=True, truncation=True, max_length=512, return_tensors="pt")

# Run the model and take the highest-scoring class
with torch.no_grad():
    outputs = model(**inputs)
predicted_id = torch.argmax(outputs.logits, dim=-1).item()

print(model.config.id2label[predicted_id])
```
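If per-class probabilities are wanted rather than just the top label, the logits can be passed through a softmax. A minimal, self-contained sketch using a dummy logits tensor (in practice this tensor comes from `outputs.logits` above; the values and the six-class shape here are illustrative only):

```python
import torch

# Dummy logits standing in for outputs.logits (shape: batch_size x num_labels).
# These values are illustrative only, not real model output.
logits = torch.tensor([[2.1, -0.3, 0.5, -1.2, 0.9, -0.7]])

# Softmax turns logits into probabilities that sum to 1 per example
probs = torch.softmax(logits, dim=-1)
predicted_id = torch.argmax(probs, dim=-1).item()

print(predicted_id, round(probs[0, predicted_id].item(), 4))
```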

## Evaluation

The model achieved the following performance metrics on the test set:

- Accuracy: 0.9591836734693877
- F1-score: 0.958301875401112
- Recall: 0.9591836734693877
- Precision: 0.9579673040369542

## Contact

This work was done by the IT-community club.