---
language: en
tags:
- Summarization
license: apache-2.0
datasets:
- scientific_papers
- big_patent
- cnn_corpus
- cnn_dailymail
- xsum
- MCTI_data
thumbnail: https://github.com/Marcosdib/S2Query/Classification_Architecture_model.png
---

![MCTIimg](https://antigo.mctic.gov.br/mctic/export/sites/institucional/institucional/entidadesVinculadas/conselhos/pag-old/RODAPE_MCTI.png)


# MCTI Automatic Text Summarization Task (uncased) DRAFT

Disclaimer: 

## Abstract

Text classification is a traditional problem in Natural Language Processing (NLP). Most of the state-of-the-art implementations
require high-quality, voluminous, labeled data. Pre-trained models on large corpora have shown to be beneficial for text
classification and other NLP tasks, but they can only take a limited amount of symbols as input. This is a real case study that
explores different machine learning strategies to classify a small amount of long, unstructured, and uneven data to find a proper
method with good performance. The collected data includes texts of financing opportunities that international R&D funding
organizations provided on their websites. The main goal is to find international R&D funding eligible for Brazilian researchers,
sponsored by the Ministry of Science, Technology and Innovation. We use pre-training and word embedding solutions to learn the
relationship of the words from other datasets with considerable similarity and larger scale. Then, using the acquired features,
based on the available dataset from MCTI, we apply transfer learning plus deep learning models to improve the comprehension of
each sentence. Compared to the baseline accuracy rate of 81%, based on the available datasets, and the 85% accuracy rate achieved
through a Transformer-based approach, the Word2Vec-based approach improved the accuracy rate to 88%. The research results serve as
a successful case of artificial intelligence in a federal government application.

This model focuses on a more specific problem: creating a Research Financing Products Portfolio (FPP) outside of the Union budget,
supported by the Brazilian Ministry of Science, Technology, and Innovation (MCTI). It was introduced in ["Using transfer learning to classify long unstructured texts with small amounts of labeled data"](https://www.scitepress.org/Link.aspx?doi=10.5220/0011527700003318) and first released in
[this repository](https://huggingface.co/unb-lamfo-nlp-mcti). This model is uncased: it does not make a difference between english
and English.

## Model description

This Automatic Text Summarization (ATS) model was developed in the Python language to be applied to the Research Financing Products
Portfolio (FPP) of the Brazilian Ministry of Science, Technology and Innovation. It was produced in parallel with the writing of a
Systematic Literature Review paper, which discusses many summarization methods, datasets, and evaluators, as well as giving a brief
overview of the nature of the task itself and the state of the art of its implementation.

The input of the model can be either a single text, a dataframe, or a CSV file containing multiple texts (in English), and its
outputs are the summarized texts and their evaluation metrics. As an optional (although recommended) input, the model accepts
gold-standard summaries for the texts, i.e., human-written (or extracted) summaries which are considered good representations of
their contents. Evaluators like ROUGE, which in its many variations is the most widely used for this task, require gold-standard
summaries as inputs. There are, however, evaluation methods that do not depend on the existence of a gold-standard summary (e.g.,
the cosine similarity method and the Kullback-Leibler divergence method), which is why an evaluation can be made even when only
the text itself is given as input to the model.
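
As an illustration, here is a minimal sketch of reference-free evaluation via cosine similarity, assuming scikit-learn is available; the evaluation code shipped with this model may differ in its exact vectorization choices.

```python
# A minimal sketch of reference-free summary evaluation, assuming
# scikit-learn; the model's own evaluation code may differ.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def cosine_score(text: str, summary: str) -> float:
    """Cosine similarity between the TF-IDF vectors of text and summary."""
    vectors = TfidfVectorizer().fit_transform([text, summary])
    return cosine_similarity(vectors[0], vectors[1])[0, 0]

print(cosine_score("The cat sat on the mat. Dogs bark loudly.",
                   "A cat sat on a mat."))
```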

The output text is produced by a chosen ATS method, which can be extractive (built from the most relevant sentences of the source
document) or abstractive (written from scratch in an abstractive manner). The latter is achieved by means of transformers; the ones
present in the model are the already existing and widely applied BART-Large CNN, Pegasus-XSUM, and mT5 Multilingual XLSUM. The
extractive methods are taken from the Sumy Python library and include SumyRandom, SumyLuhn, SumyLsa, SumyLexRank, SumyTextRank,
SumySumBasic, SumyKL, and SumyReduction. Each of the methods used for text summarization is described individually in the
following sections.
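
For instance, a minimal sketch of extractive summarization with Sumy's LexRank implementation; the SumyLexRank wrapper named above presumably builds on calls like these.

```python
# A minimal sketch of extractive summarization with the Sumy library;
# the SumyLexRank wrapper mentioned above presumably wraps calls like these.
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.lex_rank import LexRankSummarizer

text = "Long source document goes here. It has several sentences. ..."
parser = PlaintextParser.from_string(text, Tokenizer("english"))
summarizer = LexRankSummarizer()

# Keep the 2 highest-ranked sentences as the extractive summary.
for sentence in summarizer(parser.document, sentences_count=2):
    print(sentence)
```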


![architecture](https://github.com/marcosdib/S2Query/Classification_Architecture_model.png)

## Methods

| Method                 | Kind of ATS | Description | Documentation |
|:----------------------:|:-----------:|:------------|:-------------:|
| SumyRandom             | Extractive  | Selects sentences at random; serves as a lower-bound baseline | [Sumy](https://github.com/miso-belica/sumy) |
| SumyLuhn               | Extractive  | Luhn's classic heuristic, scoring sentences by clusters of significant (frequent) words | [Sumy](https://github.com/miso-belica/sumy) |
| SumyLsa                | Extractive  | Latent Semantic Analysis: selects sentences from the main topics found via singular value decomposition | [Sumy](https://github.com/miso-belica/sumy) |
| SumyLexRank            | Extractive  | Graph-based ranking of sentences by eigenvector centrality on a cosine-similarity graph | [Sumy](https://github.com/miso-belica/sumy) |
| SumyTextRank           | Extractive  | Graph-based ranking of sentences inspired by PageRank over sentence-similarity edges | [Sumy](https://github.com/miso-belica/sumy) |
| SumySumBasic           | Extractive  | Frequency-driven selection with word-probability updates to reduce redundancy | [Sumy](https://github.com/miso-belica/sumy) |
| SumyKL                 | Extractive  | Greedily selects sentences that minimize the KL divergence between summary and document word distributions | [Sumy](https://github.com/miso-belica/sumy) |
| SumyReduction          | Extractive  | Graph-based method scoring each sentence by the sum of its similarity edges | [Sumy](https://github.com/miso-belica/sumy) |
| BART-Large CNN         | Abstractive | BART large fine-tuned on the CNN/DailyMail dataset | [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn)                    |
| Pegasus-XSUM           | Abstractive | PEGASUS fine-tuned on the XSum dataset | [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum)                            |
| mT5 Multilingual XLSUM | Abstractive | Multilingual T5 fine-tuned on the XL-Sum dataset | [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum)|
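
The abstractive models in the table can be loaded directly through the Hugging Face `pipeline` API; a sketch with BART-Large CNN follows (generation parameters here are illustrative, not a fixed configuration of this model).

```python
# Illustrative use of one of the abstractive models listed above via the
# Hugging Face pipeline API; generation parameters are only examples.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
text = "Long source document goes here. ..."
result = summarizer(text, max_length=130, min_length=30, do_sample=False)
print(result[0]["summary_text"])
```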


## Model variations

With the motivation to increase the accuracy obtained with the baseline implementation, we implemented a transfer learning
strategy under the assumption that the small amount of data available for training was insufficient for adequate embedding
training. In this context, we considered two approaches:

   i) pre-training word embeddings using similar datasets for text classification (see the sketch after this list);
   ii) using transformers and attention mechanisms (Longformer) to create contextualized embeddings.
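
A minimal sketch of approach (i), assuming gensim is available; the corpus and hyperparameters below are placeholders, not the values used in the paper.

```python
# A minimal sketch of pre-training word embeddings with gensim's Word2Vec;
# the corpus and hyperparameters are placeholders, not the paper's values.
from gensim.models import Word2Vec

# Tokenized sentences from a similar, larger-scale dataset.
corpus = [
    ["funding", "opportunity", "for", "research", "projects"],
    ["call", "for", "proposals", "in", "science", "and", "technology"],
]

model = Word2Vec(corpus, vector_size=100, window=5, min_count=1, workers=4)
print(model.wv["research"])  # learned 100-dimensional embedding
```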

XXXX was originally released in base and large variations, for cased and uncased input text. The uncased models
also strip out accent markers. Chinese and multilingual uncased and cased versions followed shortly after.
Modified preprocessing with whole word masking replaced subpiece masking in a follow-up work, with the release of
two models.

Another 24 smaller models were released afterward.

The detailed release history can be found [here](https://huggingface.co/unb-lamfo-nlp-mcti).

| Model                        | #params | Language |
|------------------------------|---------|----------|
| [`mcti-base-uncased`]        | 110M    | English  |
| [`mcti-large-uncased`]       | 340M    | English  |
| [`mcti-base-cased`]          | 110M    | English  |
| [`mcti-large-cased`]         | 110M    | Chinese  |
| [`-base-multilingual-cased`] | 110M    | Multiple |

| Dataset            | Compatibility to base* |
|--------------------|------------------------|
| Labeled MCTI       | 100%                   |
| Full MCTI          | 100%                   |
| BBC News Articles  | 56.77%                 |
| New unlabeled MCTI | 75.26%                 |


## Intended uses

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://www.google.com) to look for
fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text
generation you should look at models like XXX.

### How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")

[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
  'score': 0.1073106899857521,
  'token': 4827,
  'token_str': 'fashion'},
 {'sequence': "[CLS] hello i'm a role model. [SEP]",
  'score': 0.08774490654468536,
  'token': 2535,
  'token_str': 'role'},
 {'sequence': "[CLS] hello i'm a new model. [SEP]",
  'score': 0.05338378623127937,
  'token': 2047,
  'token_str': 'new'},
 {'sequence': "[CLS] hello i'm a super model. [SEP]",
  'score': 0.04667217284440994,
  'token': 3565,
  'token_str': 'super'},
 {'sequence': "[CLS] hello i'm a fine model. [SEP]",
  'score': 0.027095865458250046,
  'token': 2986,
  'token_str': 'fine'}]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("The man worked as a [MASK].")

[{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
  'score': 0.09747550636529922,
  'token': 10533,
  'token_str': 'carpenter'},
 {'sequence': '[CLS] the man worked as a waiter. [SEP]',
  'score': 0.0523831807076931,
  'token': 15610,
  'token_str': 'waiter'},
 {'sequence': '[CLS] the man worked as a barber. [SEP]',
  'score': 0.04962705448269844,
  'token': 13362,
  'token_str': 'barber'},
 {'sequence': '[CLS] the man worked as a mechanic. [SEP]',
  'score': 0.03788609802722931,
  'token': 15893,
  'token_str': 'mechanic'},
 {'sequence': '[CLS] the man worked as a salesman. [SEP]',
  'score': 0.037680890411138535,
  'token': 18968,
  'token_str': 'salesman'}]

>>> unmasker("The woman worked as a [MASK].")

[{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
  'score': 0.21981462836265564,
  'token': 6821,
  'token_str': 'nurse'},
 {'sequence': '[CLS] the woman worked as a waitress. [SEP]',
  'score': 0.1597415804862976,
  'token': 13877,
  'token_str': 'waitress'},
 {'sequence': '[CLS] the woman worked as a maid. [SEP]',
  'score': 0.1154729500412941,
  'token': 10850,
  'token_str': 'maid'},
 {'sequence': '[CLS] the woman worked as a prostitute. [SEP]',
  'score': 0.037968918681144714,
  'token': 19215,
  'token_str': 'prostitute'},
 {'sequence': '[CLS] the woman worked as a cook. [SEP]',
  'score': 0.03042375110089779,
  'token': 5660,
  'token_str': 'cook'}]
```

This bias will also affect all fine-tuned versions of this model.

## Training data

The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).


## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in
the other cases, sentence B is another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.

The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
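
A minimal sketch of this 80/10/10 masking scheme over a token list; illustrative only, as the original pretraining code differs in details such as whole-word handling.

```python
# A minimal sketch of the 80/10/10 masking scheme described above;
# illustrative only -- the original pretraining code differs in details.
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    masked = list(tokens)
    for i in range(len(masked)):
        if random.random() < mask_prob:  # 15% of tokens are selected
            r = random.random()
            if r < 0.8:                  # 80%: replace with [MASK]
                masked[i] = "[MASK]"
            elif r < 0.9:                # 10%: replace with a random token
                masked[i] = random.choice([t for t in vocab if t != tokens[i]])
            # remaining 10%: leave the token unchanged
    return masked

vocab = ["hello", "world", "model", "text", "summary"]
print(mask_tokens(["hello", "world", "text"], vocab))
```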

### Pretraining

The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
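
These hyperparameters map onto standard library calls; below is a sketch using PyTorch's AdamW and the Transformers linear-warmup schedule, an approximation of the setup described rather than the original TPU training code.

```python
# A sketch of the optimizer and schedule described above, using PyTorch and
# the Transformers helpers; an approximation, not the original TPU code.
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained("bert-base-uncased")
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=1_000_000
)
```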

## Evaluation results

### Model training with Word2Vec embeddings

Now we have a pre-trained model of word2vec embeddings that has already learned meanings relevant for our classification problem.
We can couple it to our classification models (Fig. 4), realizing transfer learning, and then train the model with the labeled
data in a supervised manner. The new coupled model can be seen in Figure 5 under word2vec model training. Table 1 below shows the
obtained results with related metrics. With this implementation, we achieved new levels of accuracy: 86% for the CNN
architecture and 88% for the LSTM architecture.
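
A minimal sketch of this coupling, assuming Keras and a pre-trained embedding matrix; the layer sizes below are placeholders, not the paper's architecture.

```python
# A minimal sketch of coupling pre-trained word2vec embeddings to an LSTM
# classifier, as described above; layer sizes are placeholders.
import numpy as np
from tensorflow import keras

vocab_size, embed_dim = 10_000, 100
embedding_matrix = np.random.rand(vocab_size, embed_dim)  # from word2vec

model = keras.Sequential([
    keras.layers.Embedding(
        vocab_size, embed_dim,
        embeddings_initializer=keras.initializers.Constant(embedding_matrix),
        trainable=False),  # transfer learning: embeddings stay frozen
    keras.layers.LSTM(64),
    keras.layers.Dense(1, activation="sigmoid"),  # binary: eligible or not
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```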


Table 1: Results from Pre-trained WE + ML models.

| ML Model |  Accuracy | F1 Score  | Precision |   Recall  |
|:--------:|:---------:|:---------:|:---------:|:---------:|
| NN       |  0.8269   |  0.8545   |  0.8392   |  0.8712   |
| DNN      |  0.7115   |  0.7794   |  0.7255   |  0.8485   |
| CNN      |  0.8654   |  0.9083   |  0.8486   |  0.9773   |
| LSTM     |  0.8846   |  0.9139   |  0.9056   |  0.9318   |

### Transformer-based implementation

Another way we used pre-trained vector representations was by means of a Longformer (Beltagy et al., 2020). We chose it because
of a limitation of the first generation of transformers and BERT-based architectures involving sentence size: a maximum of
512 tokens. The reason behind that limitation is that the self-attention mechanism scales quadratically with the input sequence
length, O(n²) (Beltagy et al., 2020). The Longformer allows the processing of sequences thousands of tokens long without facing
the memory bottleneck of BERT-like architectures, and it achieved state-of-the-art results on several benchmarks.

For our text length distribution in Figure 3, if we used a BERT-based architecture with a maximum length of 512, 99 sentences
would have to be truncated and would probably miss some critical information. By comparison, with the Longformer's maximum
length of 4096, only eight sentences would have their information shortened.

To apply the Longformer, we used a pre-trained base (available at the link) that was previously trained on a combination
of vast datasets as input to the model, as shown in Figure 5 under Longformer model training. After coupling it to our
classification models, we performed supervised training of the whole model. At this point, only transfer learning was applied,
since more computational power would be needed to fine-tune the weights. The results with related metrics can be viewed in
Table 2 below. This approach achieved adequate accuracy scores, above 82% in all implementation architectures.
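
A sketch of loading a pre-trained Longformer base for feature extraction follows; the checkpoint name is the publicly available allenai one, and whether it is the exact base used here is an assumption.

```python
# A sketch of extracting contextualized embeddings with a pre-trained
# Longformer; the allenai checkpoint is an assumption about the base used.
from transformers import LongformerTokenizer, LongformerModel

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

text = "A long funding-opportunity text that may exceed 512 tokens ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096)
outputs = model(**inputs)
features = outputs.last_hidden_state  # contextualized token embeddings
```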


Table 2: Results from Pre-trained Longformer + ML models.

| ML Model |  Accuracy | F1 Score  | Precision |   Recall  |
|:--------:|:---------:|:---------:|:---------:|:---------:|
| NN       |  0.8269   |  0.8754   |  0.7950   |  0.9773   |
| DNN      |  0.8462   |  0.8776   |  0.8474   |  0.9123   |
| CNN      |  0.8462   |  0.8776   |  0.8474   |  0.9123   |
| LSTM     |  0.8269   |  0.8801   |  0.8571   |  0.9091   |



### BibTeX entry and citation info

```bibtex
@conference{webist22,
  author    = {Daniel O. Cajueiro and Maísa {Kely de Melo} and Arthur G. Nery and Silvia A. dos Reis and Igor Tavares
               and Li Weigang and Victor R. R. Celestino},
  title     = {A comprehensive review of automatic text summarization techniques: method, data, evaluation and coding},
  booktitle = {Proceedings of the 18th International Conference on Web Information Systems and Technologies - WEBIST},
  year      = {2022},
}
```

<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
	<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>