Update README.md
README.md CHANGED
@@ -79,7 +79,7 @@ its implementation and the article from which it originated.
 - The only language supported by the model is English
 - For texts summarized by transformers, the size of the original text is limited by the maximum number of tokens supported by the transformer
 - No specific training is done for the application of the model and only pretrained transformers are used (e.g. BART is trained with CNN Corpus and Pegasus with XSum)
-- There is a difference
+- There is a difference in the quality of the results depending on the sort of text which is being summarized. BART, for example, having been trained with a dataset in the news domain, will be better at summarizing news in comparison to scientific articles

 ### How to use
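A minimal sketch of how these constraints surface in practice, assuming the standard Hugging Face `transformers` summarization pipeline and the commonly used checkpoints `facebook/bart-large-cnn` and `google/pegasus-xsum` that match the training data named above (the repository may load different checkpoints); the sample text is a placeholder:

```python
# Sketch only, under the assumptions named above -- not the repository's actual code.
from transformers import AutoTokenizer, pipeline

# Assumed checkpoints matching the training data mentioned in the README:
# BART fine-tuned on news (CNN-style) data, Pegasus fine-tuned on XSum.
CHECKPOINTS = {
    "bart": "facebook/bart-large-cnn",
    "pegasus": "google/pegasus-xsum",
}

# Placeholder English input; other languages are not supported by these models.
text = (
    "The city council approved a new public transport plan on Monday. "
    "Officials said the plan will add bus lanes and extend service hours, "
    "with the first changes expected to take effect next year."
)

for name, model_id in CHECKPOINTS.items():
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # The input length is capped by the transformer's maximum token count;
    # anything beyond it has to be truncated before summarization.
    print(f"{name}: max input tokens = {tokenizer.model_max_length}")

    summarizer = pipeline("summarization", model=model_id, tokenizer=tokenizer)
    summary = summarizer(text, truncation=True)[0]["summary_text"]
    print(f"{name}: {summary}")
```

Because each checkpoint is used as-is, the output quality follows the domain of its training data: the BART checkpoint will generally do better on news-like input such as the placeholder above than on, say, scientific articles.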