suriya7 committed on
Commit 486dcb3 · verified · 1 Parent(s): 71b3868

Update README.md

Files changed (1)
  1. README.md +27 -25
README.md CHANGED
@@ -19,34 +19,36 @@ This model is based on the Facebook BART (Bidirectional and Auto-Regressive Transformers)
  ## Usage:

  ### Installation:
  You can install the necessary libraries using pip:
- ```bash
  pip install transformers
  pip install datasets
  pip install evaluate
  pip install rouge_score

- ### Inference
- ```bash
- # Load model directly
- from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
-
- tokenizer = AutoTokenizer.from_pretrained("suriya7/text_summarize")
- model = AutoModelForSeq2SeqLM.from_pretrained("suriya7/text_summarize")
-
- def generate_summary(text):
-
-     inputs = tokenizer([text], max_length=1024, return_tensors='pt', truncation=True)
-
-     summary_ids = model.generate(inputs['input_ids'], max_new_tokens=100, do_sample=False)
-
-     summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
-     return summary
-
- text_to_summarize = "Now, there is no doubt that one of the most important aspects of any Pixel phone is its camera.
- And there might be good news for all camera lovers. Rumours have suggested that the Pixel 9 could come with a telephoto lens,
- improving its photography capabilities even further. Google will likely continue to focus on using AI to
- enhance its camera performance, in order to make sure that Pixel phones remain top contenders in the world of mobile photography"
- summary = generate_summary(text_to_summarize)
-
 
  ## Usage:

  ### Installation:
+
  You can install the necessary libraries using pip:
+
+ ```bash
  pip install transformers
  pip install datasets
  pip install evaluate
  pip install rouge_score
+ ```

+ ## Example Usage
+
+ Here's an example of how to use this model for text summarization:
+
+ ```python
+ # Load model and tokenizer
+ from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+
+ tokenizer = AutoTokenizer.from_pretrained("suriya7/text_summarize")
+ model = AutoModelForSeq2SeqLM.from_pretrained("suriya7/text_summarize")
+
+ # Input text to be summarized
+ text_to_summarize = "Insert your text to summarize here..."
+
+ # Generate summary
+ inputs = tokenizer([text_to_summarize], max_length=1024, return_tensors='pt', truncation=True)
+ summary_ids = model.generate(inputs['input_ids'], max_length=100, num_beams=4, early_stopping=True)
+ summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
+
+ # Print the generated summary
+ print("Input Text:")
+ print(text_to_summarize)
+ print("\nGenerated Summary:")
+ print(summary)
+ ```
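
One caveat with the example above: the tokenizer call truncates anything past `max_length=1024` tokens, so longer documents are silently cut off. A common workaround is to split long input into chunks and summarize each chunk separately. The `chunk_text` helper below is a minimal sketch (hypothetical, not part of this repository), using word count as a rough stand-in for BART's token count, so the margin is kept generous:

```python
# Hypothetical helper: split a long document into word-based chunks so each
# piece stays within the model's input limit. Word count only approximates
# the tokenizer's token count, hence the conservative default.
def chunk_text(text, max_words=700):
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

# Each chunk can then be fed through the summarization snippet above,
# and the per-chunk summaries joined into one overview.
chunks = chunk_text("word " * 1500, max_words=700)
print(len(chunks))             # → 3
print(len(chunks[0].split()))  # → 700
```

The per-chunk summaries can be concatenated as-is, or passed through the model once more to produce a single condensed overview.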