---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
model-index:
- name: summarize
  results: []
---

# summarize

This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unspecified dataset (the Trainer did not record the dataset name).
It achieves the following results on the evaluation set:
- Loss: 2.6935
- Evaluation runtime: 28.52 seconds (33.31 samples/s, 33.31 steps/s)
- ROUGE (rounded): rouge1 0.1705, rouge2 0.0588, rougeL 0.1354, rougeLsum 0.1355
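
The rounded ROUGE values above have the shape produced by the `evaluate` library's `rouge` metric with four-decimal rounding; a minimal sketch of that computation, with placeholder predictions and references (the actual evaluation data is not documented in this card):

```python
import evaluate  # requires: pip install evaluate rouge_score

rouge = evaluate.load("rouge")

# Placeholder data; the real evaluation set is not documented in this card.
predictions = ["the cat sat on the mat"]
references = ["a cat was sitting on the mat"]

scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
print({k: round(v, 4) for k, v in scores.items()})
# -> {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
```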

## Model description

`summarize` is [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) (a ~60M-parameter encoder-decoder Transformer) fine-tuned with the standard sequence-to-sequence cross-entropy objective for abstractive text summarization.

## Intended uses & limitations

The model is intended for short abstractive summarization (T5-small is primarily an English model). Because the fine-tuning dataset is not documented, domain coverage and potential biases are unknown, and the modest ROUGE scores above suggest generated summaries should be treated as drafts rather than faithful abstracts. A minimal inference sketch follows.
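
A minimal sketch, assuming the placeholder path is replaced with this checkpoint's actual directory or Hub id; T5 checkpoints conventionally use the `summarize: ` task prefix, which the summarization pipeline applies automatically when it is present in the model config:

```python
from transformers import pipeline

# "path/to/summarize" is a placeholder; point it at this checkpoint's actual location.
summarizer = pipeline("summarization", model="path/to/summarize")

article = "Replace this with the long text you want summarized."
result = summarizer(article, max_length=128, min_length=16, do_sample=False)
print(result[0]["summary_text"])
```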

## Training and evaluation data

Not recorded; the Trainer reported the dataset as `None`, so the training and evaluation splits are unknown.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
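
A hedged reproduction sketch using `Seq2SeqTrainingArguments` (Transformers 4.38, per the versions below); `train_dataset`, `eval_dataset`, and `compute_metrics` are placeholders, since the data pipeline is not documented in this card. The Adam betas and epsilon listed above are the optimizer defaults, so they need no explicit arguments:

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-small")

args = Seq2SeqTrainingArguments(
    output_dir="summarize",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                    # "Native AMP" mixed precision
    evaluation_strategy="epoch",  # matches the per-epoch rows in the results table
    predict_with_generate=True,   # generate summaries during eval so ROUGE can be computed
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,      # placeholder: the dataset is not documented here
    eval_dataset=eval_dataset,        # placeholder: the dataset is not documented here
    tokenizer=tokenizer,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    compute_metrics=compute_metrics,  # placeholder: e.g. the ROUGE sketch above
)
trainer.train()
```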

### Training results

| Training Loss | Epoch | Step | Validation Loss | Eval Runtime (s) | Samples/s | Steps/s | Rouge1 | Rouge2 | RougeL | RougeLsum |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:---------:|:-------:|:------:|:------:|:------:|:---------:|
| 3.1701        | 1.0   | 500  | 2.8229          | 30.27            | 31.38     | 31.38   | 0.1615 | 0.0525 | 0.1280 | 0.1281    |
| 2.9661        | 2.0   | 1000 | 2.7672          | 28.88            | 32.89     | 32.89   | 0.1676 | 0.0567 | 0.1326 | 0.1327    |
| 2.9128        | 3.0   | 1500 | 2.7414          | 28.79            | 33.00     | 33.00   | 0.1693 | 0.0575 | 0.1342 | 0.1343    |
| 2.8783        | 4.0   | 2000 | 2.7240          | 28.76            | 33.04     | 33.04   | 0.1694 | 0.0581 | 0.1343 | 0.1344    |
| 2.8548        | 5.0   | 2500 | 2.7137          | 30.05            | 31.61     | 31.61   | 0.1710 | 0.0591 | 0.1354 | 0.1354    |
| 2.8353        | 6.0   | 3000 | 2.7047          | 29.38            | 32.34     | 32.34   | 0.1703 | 0.0587 | 0.1350 | 0.1350    |
| 2.8229        | 7.0   | 3500 | 2.6996          | 27.38            | 34.70     | 34.70   | 0.1714 | 0.0592 | 0.1357 | 0.1357    |
| 2.8154        | 8.0   | 4000 | 2.6958          | 27.41            | 34.66     | 34.66   | 0.1700 | 0.0587 | 0.1351 | 0.1352    |
| 2.8068        | 9.0   | 4500 | 2.6943          | 27.38            | 34.70     | 34.70   | 0.1702 | 0.0588 | 0.1352 | 0.1353    |
| 2.8000        | 10.0  | 5000 | 2.6935          | 28.52            | 33.31     | 33.31   | 0.1705 | 0.0588 | 0.1354 | 0.1355    |


### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2