---
license: apache-2.0
language:
- en
metrics:
- rouge
base_model: google/pegasus-cnn_dailymail
---

### Pegasus-based Text Summarization Model
Model Name: `pegsus-text-summarization`

### Model Description
This model is a fine-tuned version of `google/pegasus-cnn_dailymail`, adapted for abstractive text summarization. It was trained on the SAMSum dataset, which consists of messenger-style conversations paired with human-written summaries.
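SAMSum-style inputs are dialogues with one `Speaker: utterance` turn per line, so conversational text should be formatted that way before being passed to the model. A minimal sketch of such formatting (the helper name, speakers, and messages here are illustrative, not part of the dataset or model API):

```python
def format_dialogue(turns):
    """Join (speaker, utterance) pairs into SAMSum-style dialogue text,
    one 'Speaker: utterance' line per turn."""
    return "\n".join(f"{speaker}: {utterance}" for speaker, utterance in turns)

dialogue = format_dialogue([
    ("Anna", "Are we still on for lunch tomorrow?"),
    ("Ben", "Yes! 12:30 at the usual place?"),
    ("Anna", "Perfect, see you then."),
])
print(dialogue)
```

The resulting multi-line string can then be used as the `text` input in the snippet below.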

### Usage
This model can be used to generate concise summaries of input text, particularly for conversational text or dialogue-based inputs.

### How to Use
You can use this model with the Hugging Face transformers library. Below is an example code snippet:

```python

from transformers import PegasusForConditionalGeneration, PegasusTokenizer

# Load the pre-trained model and tokenizer
model_name = "ailm/pegsus-text-summarization"
model = PegasusForConditionalGeneration.from_pretrained(model_name)
tokenizer = PegasusTokenizer.from_pretrained(model_name)

# Define the input text
text = "Your input text here"

# Tokenize the input text
tokens = tokenizer(text, truncation=True, padding="longest", return_tensors="pt")

# Generate the summary token IDs
summary_ids = model.generate(**tokens)

# Decode and print the summary
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```