---
library_name: transformers
license: mit
base_model: pierreguillou/bert-base-cased-squad-v1.1-portuguese
tags:
- generated_from_trainer
model-index:
- name: ibama_29102024_20241029175942
  results: []
---

# ibama_29102024_20241029175942

This model is a fine-tuned version of [pierreguillou/bert-base-cased-squad-v1.1-portuguese](https://huggingface.co/pierreguillou/bert-base-cased-squad-v1.1-portuguese) for extractive question answering in Portuguese.

## Model description

Dataset with 1,750 records; average context length: 2,467.44 characters.

- `train`: 1,421 records
- `test`: 329 records

Scores on the full `test` split:

- exact_match: 6.99
- f1: 41.36
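
A minimal inference sketch using the `transformers` question-answering pipeline; the repository id and the example question/context below are placeholders, not part of the card:

```python
from transformers import pipeline

# Repository id assumed to match this card's model name; adjust as needed.
qa = pipeline("question-answering", model="ibama_29102024_20241029175942")

# Hypothetical example inputs.
result = qa(
    question="Qual foi o valor da multa aplicada?",
    context="O IBAMA lavrou auto de infração e aplicou multa de R$ 50.000,00.",
)
print(result["answer"], result["score"])
```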



## Results

### `test` records filtered to contexts of at most 6,697 characters

| Model | exact_match | f1 |
|-------|-------------|-----|
| ibama_29102024_20241029175942 | 3.98 | 38.43 |
| pierreguillou/bert-base-cased-squad-v1.1-portuguese | 6.42 | 37.48 |
| neuralmind/bert-base-portuguese-cased | 0.00 | 21.52 |

### `test` records filtered to contexts of at most 512 characters

| Model | exact_match | f1 |
|-------|-------------|-----|
| ibama_29102024_20241029175942 | 12.68 | 70.77 |
| pierreguillou/bert-base-cased-squad-v1.1-portuguese | 1.41 | 38.42 |
| neuralmind/bert-base-portuguese-cased | 0.00 | 15.26 |
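
A sketch of how the length-filtered evaluations above might be reproduced, assuming a SQuAD-style `test` split with `id`, `question`, `context`, and `answers` fields (the dataset itself is not published with this card):

```python
import evaluate
from transformers import pipeline

squad_metric = evaluate.load("squad")
qa = pipeline(
    "question-answering",
    model="pierreguillou/bert-base-cased-squad-v1.1-portuguese",
)

def evaluate_filtered(test_split, max_chars):
    # Keep only examples whose context fits under the character budget.
    subset = [ex for ex in test_split if len(ex["context"]) <= max_chars]
    predictions, references = [], []
    for ex in subset:
        pred = qa(question=ex["question"], context=ex["context"])
        predictions.append({"id": ex["id"], "prediction_text": pred["answer"]})
        references.append({"id": ex["id"], "answers": ex["answers"]})
    return squad_metric.compute(predictions=predictions, references=references)

# e.g. evaluate_filtered(dataset["test"], max_chars=512)
```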

### Training results

It achieves the following results on the evaluation set:

- Loss: 4.1817

| Epoch | Training Loss | Validation Loss |
|-------|---------------|-----------------|
| 1 | No log | 4.5987 |
| 2 | No log | 4.2668 |
| 3 | No log | 4.2254 |
| 4 | No log | 4.1817 |

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
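
A `TrainingArguments` sketch mirroring the hyperparameters listed above (the output directory is an assumption; Adam betas and epsilon are the library defaults, which match the values given):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ibama_29102024_20241029175942",  # assumed output path
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    fp16=True,  # "Native AMP" mixed-precision training
)
```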

### Framework versions

- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1

### Notebook

The training notebook is available at https://colab.research.google.com/drive/1q1tZ7qkcjsNYrt3VLbrJ6C72mZihFzGm