Update README.md
README.md
CHANGED
@@ -19,34 +19,36 @@ This model is based on the Facebook BART (Bidirectional and Auto-Regressive Transformers)
Previous content (lines 19–52):

## Usage:

### Installation:
You can install the necessary libraries using pip:

pip install transformers
pip datasets
pip evaluate
pip rouge_score

(lines 29–52: blank lines, removed by this change)
Updated content (lines 19–54):

## Usage:

### Installation:

You can install the necessary libraries using pip:

```bash
pip install transformers
pip install datasets
pip install evaluate
pip install rouge_score
```

## Example Usage

Here's an example of how to use this model for text summarization:

```python
# Load the fine-tuned model and its tokenizer
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("your_model_name")
model = AutoModelForSeq2SeqLM.from_pretrained("your_model_name")

# Input text to be summarized
text_to_summarize = "Insert your text to summarize here..."

# Tokenize the input and generate a summary with beam search
inputs = tokenizer([text_to_summarize], max_length=1024, return_tensors='pt', truncation=True)
summary_ids = model.generate(inputs['input_ids'], max_length=100, num_beams=4, early_stopping=True)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)

# Print the input text and the generated summary
print("Input Text:")
print(text_to_summarize)
print("\nGenerated Summary:")
print(summary)
```