abarbosa committed
Commit d8cc9c0 · 1 Parent(s): d586147

add albertina
Files changed (38)
  1. bootstrap_confidence_intervals-00000-of-00001.parquet +2 -2
  2. create_parquet_files.py +2 -0
  3. evaluation_results-00000-of-00001.parquet +2 -2
  4. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/.hydra/config.yaml +41 -0
  5. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/.hydra/hydra.yaml +157 -0
  6. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/.hydra/overrides.yaml +1 -0
  7. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/bootstrap_confidence_intervals.csv +2 -0
  8. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/evaluation_results.csv +2 -0
  9. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only_inference_results.jsonl +0 -0
  10. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/run_inference_experiment.log +199 -0
  11. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only/.hydra/config.yaml +41 -0
  12. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only/.hydra/hydra.yaml +157 -0
  13. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only/.hydra/overrides.yaml +1 -0
  14. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only/bootstrap_confidence_intervals.csv +2 -0
  15. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only/evaluation_results.csv +2 -0
  16. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only_inference_results.jsonl +0 -0
  17. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only/run_inference_experiment.log +199 -0
  18. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only/.hydra/config.yaml +41 -0
  19. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only/.hydra/hydra.yaml +157 -0
  20. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only/.hydra/overrides.yaml +1 -0
  21. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only/bootstrap_confidence_intervals.csv +2 -0
  22. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only/evaluation_results.csv +2 -0
  23. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only_inference_results.jsonl +0 -0
  24. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only/run_inference_experiment.log +199 -0
  25. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only/.hydra/config.yaml +41 -0
  26. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only/.hydra/hydra.yaml +157 -0
  27. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only/.hydra/overrides.yaml +1 -0
  28. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only/bootstrap_confidence_intervals.csv +2 -0
  29. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only/evaluation_results.csv +2 -0
  30. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only_inference_results.jsonl +0 -0
  31. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only/run_inference_experiment.log +199 -0
  32. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only/.hydra/config.yaml +41 -0
  33. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only/.hydra/hydra.yaml +157 -0
  34. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only/.hydra/overrides.yaml +1 -0
  35. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only/bootstrap_confidence_intervals.csv +2 -0
  36. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only/evaluation_results.csv +2 -0
  37. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only_inference_results.jsonl +0 -0
  38. runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only/run_inference_experiment.log +199 -0
bootstrap_confidence_intervals-00000-of-00001.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:74dcbf8cc574efe06dda34ff9b1ccb6d2a07e4f9cb21d0f66d1e4dc0e7e95f6b
- size 29581
+ oid sha256:074e9f9d7094de3d194528f70edbd86e929dd3303cb3beb55fc4a4fec07f7d46
+ size 28701
create_parquet_files.py CHANGED
@@ -104,6 +104,8 @@ def simplify_experiment_name(name):
         name = name.replace('bert-base-multilingual-cased', 'mbert-base')
     elif 'bert-large-portuguese-cased' in name:
         name = name.replace('bert-large-portuguese-cased', 'bertimbau-large')
+    elif 'albertina-1b5-portuguese-ptbr-encoder' in name:
+        name = name.replace('albertina-1b5-portuguese-ptbr-encoder', 'albertina-1b5-ptbr')
 
     # Handle Llama variants
     elif 'Llama-3.1-8B-llama31_classification_lora' in name:
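For context, a minimal sketch of the renaming helper this hunk extends. Only the branches visible in the diff are reproduced, and the surrounding if/elif structure is an assumption; the authoritative version lives in create_parquet_files.py.

def simplify_experiment_name(name: str) -> str:
    # Sketch: only the branches shown in the hunk above; the real helper
    # in create_parquet_files.py covers more model families.
    if 'bert-base-multilingual-cased' in name:
        name = name.replace('bert-base-multilingual-cased', 'mbert-base')
    elif 'bert-large-portuguese-cased' in name:
        name = name.replace('bert-large-portuguese-cased', 'bertimbau-large')
    elif 'albertina-1b5-portuguese-ptbr-encoder' in name:
        # Added in this commit: shorten the Albertina checkpoint name.
        name = name.replace('albertina-1b5-portuguese-ptbr-encoder', 'albertina-1b5-ptbr')
    return name

# Example:
# simplify_experiment_name('jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only')
# -> 'jbcs2025_albertina-1b5-ptbr-encoder_classification-C1-essay_only'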
evaluation_results-00000-of-00001.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0fca8588cb67b7a43ce4d331b7c998443115ff62ed9e462adb1d33eb032426a8
- size 70007
+ oid sha256:314a1e842e3c233857b23955ffe06d109c0b465905994de4ea68fcfe40a92c84
+ size 68039
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/.hydra/config.yaml ADDED
@@ -0,0 +1,41 @@
+ cache_dir: /tmp/
+ dataset:
+ name: kamel-usp/aes_enem_dataset
+ split: JBCS2025
+ training_params:
+ seed: 42
+ num_train_epochs: 20
+ logging_steps: 100
+ metric_for_best_model: QWK
+ bf16: true
+ bootstrap:
+ enabled: true
+ n_bootstrap: 10000
+ bootstrap_seed: 42
+ metrics:
+ - QWK
+ - Macro_F1
+ - Weighted_F1
+ post_training_results:
+ model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
+ experiments:
+ model:
+ name: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only
+ type: encoder_classification
+ num_labels: 6
+ output_dir: ./results/
+ logging_dir: ./logs/
+ best_model_dir: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only
+ tokenizer:
+ name: PORTULAN/albertina-1b5-portuguese-ptbr-encoder
+ dataset:
+ grade_index: 0
+ use_full_context: false
+ training_params:
+ weight_decay: 0.01
+ warmup_ratio: 0.1
+ learning_rate: 5.0e-05
+ train_batch_size: 4
+ eval_batch_size: 4
+ gradient_accumulation_steps: 4
+ gradient_checkpointing: false
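The config above is all that is needed to rerun C1 inference by hand. As a hedged illustration (not the repository's run_inference_experiment script), loading the checkpoint named in the config with Hugging Face transformers might look like the sketch below; the essay string is a placeholder.

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

checkpoint = "kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only"
tokenizer = AutoTokenizer.from_pretrained("PORTULAN/albertina-1b5-portuguese-ptbr-encoder", cache_dir="/tmp/")
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, cache_dir="/tmp/").eval()

# Essays are truncated to the encoder's 512-token window (see the run log below).
inputs = tokenizer("texto do ensaio ...", truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(-1).item()
grade = model.config.id2label[pred]  # class indices map to ENEM grades 0, 40, ..., 200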
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/.hydra/hydra.yaml ADDED
@@ -0,0 +1,157 @@
+ hydra:
+ run:
+ dir: inference_output/2025-07-11/01-30-40
+ sweep:
+ dir: multirun/${now:%Y-%m-%d}/${now:%H-%M-%S}
+ subdir: ${hydra.job.num}
+ launcher:
+ _target_: hydra._internal.core_plugins.basic_launcher.BasicLauncher
+ sweeper:
+ _target_: hydra._internal.core_plugins.basic_sweeper.BasicSweeper
+ max_batch_size: null
+ params: null
+ help:
+ app_name: ${hydra.job.name}
+ header: '${hydra.help.app_name} is powered by Hydra.
+
+ '
+ footer: 'Powered by Hydra (https://hydra.cc)
+
+ Use --hydra-help to view Hydra specific help
+
+ '
+ template: '${hydra.help.header}
+
+ == Configuration groups ==
+
+ Compose your configuration from those groups (group=option)
+
+
+ $APP_CONFIG_GROUPS
+
+
+ == Config ==
+
+ Override anything in the config (foo.bar=value)
+
+
+ $CONFIG
+
+
+ ${hydra.help.footer}
+
+ '
+ hydra_help:
+ template: 'Hydra (${hydra.runtime.version})
+
+ See https://hydra.cc for more info.
+
+
+ == Flags ==
+
+ $FLAGS_HELP
+
+
+ == Configuration groups ==
+
+ Compose your configuration from those groups (For example, append hydra/job_logging=disabled
+ to command line)
+
+
+ $HYDRA_CONFIG_GROUPS
+
+
+ Use ''--cfg hydra'' to Show the Hydra config.
+
+ '
+ hydra_help: ???
+ hydra_logging:
+ version: 1
+ formatters:
+ simple:
+ format: '[%(asctime)s][HYDRA] %(message)s'
+ handlers:
+ console:
+ class: logging.StreamHandler
+ formatter: simple
+ stream: ext://sys.stdout
+ root:
+ level: INFO
+ handlers:
+ - console
+ loggers:
+ logging_example:
+ level: DEBUG
+ disable_existing_loggers: false
+ job_logging:
+ version: 1
+ formatters:
+ simple:
+ format: '[%(asctime)s][%(name)s][%(levelname)s] - %(message)s'
+ handlers:
+ console:
+ class: logging.StreamHandler
+ formatter: simple
+ stream: ext://sys.stdout
+ file:
+ class: logging.FileHandler
+ formatter: simple
+ filename: ${hydra.runtime.output_dir}/${hydra.job.name}.log
+ root:
+ level: INFO
+ handlers:
+ - console
+ - file
+ disable_existing_loggers: false
+ env: {}
+ mode: RUN
+ searchpath: []
+ callbacks: {}
+ output_subdir: .hydra
+ overrides:
+ hydra:
+ - hydra.run.dir=inference_output/2025-07-11/01-30-40
+ - hydra.mode=RUN
+ task:
+ - experiments=temp_inference/kamel-usp_jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only
+ job:
+ name: run_inference_experiment
+ chdir: null
+ override_dirname: experiments=temp_inference/kamel-usp_jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only
+ id: ???
+ num: ???
+ config_name: config
+ env_set: {}
+ env_copy: []
+ config:
+ override_dirname:
+ kv_sep: '='
+ item_sep: ','
+ exclude_keys: []
+ runtime:
+ version: 1.3.2
+ version_base: '1.1'
+ cwd: /workspace/jbcs2025
+ config_sources:
+ - path: hydra.conf
+ schema: pkg
+ provider: hydra
+ - path: /workspace/jbcs2025/configs
+ schema: file
+ provider: main
+ - path: ''
+ schema: structured
+ provider: schema
+ output_dir: /workspace/jbcs2025/inference_output/2025-07-11/01-30-40
+ choices:
+ experiments: temp_inference/kamel-usp_jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only
+ hydra/env: default
+ hydra/callbacks: null
+ hydra/job_logging: default
+ hydra/hydra_logging: default
+ hydra/hydra_help: default
+ hydra/help: default
+ hydra/sweeper: basic
+ hydra/launcher: basic
+ hydra/output: default
+ verbose: false
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/.hydra/overrides.yaml ADDED
@@ -0,0 +1 @@
+ - experiments=temp_inference/kamel-usp_jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/bootstrap_confidence_intervals.csv ADDED
@@ -0,0 +1,2 @@
+ experiment_id,timestamp,QWK_mean,QWK_lower_95ci,QWK_upper_95ci,QWK_ci_width,Macro_F1_mean,Macro_F1_lower_95ci,Macro_F1_upper_95ci,Macro_F1_ci_width,Weighted_F1_mean,Weighted_F1_lower_95ci,Weighted_F1_upper_95ci,Weighted_F1_ci_width
+ jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only,2025-07-11 01:30:46,0.6816712351479063,0.5762288119767628,0.7805907228920596,0.20436191091529676,0.536327190509169,0.41269287700049234,0.6944784058515344,0.2817855288510421,0.7122266522749188,0.6365110603189872,0.7850158989286448,0.14850483860965757
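The interval columns come from a bootstrap over the test set (n_bootstrap: 10000, bootstrap_seed: 42 in the config above). A hedged sketch of a percentile bootstrap for QWK follows; the repository's own implementation may differ in detail.

import numpy as np
from sklearn.metrics import cohen_kappa_score

def bootstrap_qwk_ci(y_true, y_pred, n_bootstrap=10000, seed=42, alpha=0.05):
    # Resample the test essays with replacement and recompute QWK each time.
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    scores = []
    for _ in range(n_bootstrap):
        idx = rng.integers(0, n, size=n)
        scores.append(cohen_kappa_score(y_true[idx], y_pred[idx], weights="quadratic"))
    lower, upper = np.percentile(scores, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return float(np.mean(scores)), float(lower), float(upper)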
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/evaluation_results.csv ADDED
@@ -0,0 +1,2 @@
+ accuracy,RMSE,QWK,HDIV,Macro_F1,Micro_F1,Weighted_F1,TP_0,TN_0,FP_0,FN_0,TP_1,TN_1,FP_1,FN_1,TP_2,TN_2,FP_2,FN_2,TP_3,TN_3,FP_3,FN_3,TP_4,TN_4,FP_4,FN_4,TP_5,TN_5,FP_5,FN_5,timestamp,id
+ 0.7028985507246377,25.931906372573962,0.6826328310864394,0.007246376811594235,0.49688027796692447,0.7028985507246377,0.7116672747290037,0,137,0,1,0,138,0,0,6,122,6,4,45,65,7,21,40,71,16,11,6,116,12,4,2025-07-11 01:30:46,jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only
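The per-class TP/TN/FP/FN columns are consistent with the headline numbers; for example, accuracy equals the summed true positives over the 138 test essays:

tp = [0, 0, 6, 45, 40, 6]   # TP_0 .. TP_5 from the row above
print(sum(tp) / 138)        # 0.7028985507246377, matching the accuracy column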
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only_inference_results.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/run_inference_experiment.log ADDED
@@ -0,0 +1,199 @@
+ [2025-07-11 01:30:46,572][__main__][INFO] - Starting inference experiment
+ [2025-07-11 01:30:46,574][__main__][INFO] - cache_dir: /tmp/
+ dataset:
+ name: kamel-usp/aes_enem_dataset
+ split: JBCS2025
+ training_params:
+ seed: 42
+ num_train_epochs: 20
+ logging_steps: 100
+ metric_for_best_model: QWK
+ bf16: true
+ bootstrap:
+ enabled: true
+ n_bootstrap: 10000
+ bootstrap_seed: 42
+ metrics:
+ - QWK
+ - Macro_F1
+ - Weighted_F1
+ post_training_results:
+ model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
+ experiments:
+ model:
+ name: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only
+ type: encoder_classification
+ num_labels: 6
+ output_dir: ./results/
+ logging_dir: ./logs/
+ best_model_dir: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only
+ tokenizer:
+ name: PORTULAN/albertina-1b5-portuguese-ptbr-encoder
+ dataset:
+ grade_index: 0
+ use_full_context: false
+ training_params:
+ weight_decay: 0.01
+ warmup_ratio: 0.1
+ learning_rate: 5.0e-05
+ train_batch_size: 4
+ eval_batch_size: 4
+ gradient_accumulation_steps: 4
+ gradient_checkpointing: false
+
+ [2025-07-11 01:30:46,576][__main__][INFO] - Running inference with fine-tuned HF model
+ [2025-07-11 01:30:50,657][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/config.json
+ [2025-07-11 01:30:50,660][transformers.configuration_utils][INFO] - Model config DebertaV2Config {
+ "architectures": [
+ "DebertaV2ForMaskedLM"
+ ],
+ "attention_head_size": 64,
+ "attention_probs_dropout_prob": 0.1,
+ "conv_act": "gelu",
+ "conv_kernel_size": 3,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 1536,
+ "initializer_range": 0.02,
+ "intermediate_size": 6144,
+ "layer_norm_eps": 1e-07,
+ "legacy": true,
+ "max_position_embeddings": 512,
+ "max_relative_positions": -1,
+ "model_type": "deberta-v2",
+ "norm_rel_ebd": "layer_norm",
+ "num_attention_heads": 24,
+ "num_hidden_layers": 48,
+ "pad_token_id": 0,
+ "pooler_dropout": 0,
+ "pooler_hidden_act": "gelu",
+ "pooler_hidden_size": 1536,
+ "pos_att_type": [
+ "p2c",
+ "c2p"
+ ],
+ "position_biased_input": false,
+ "position_buckets": 256,
+ "relative_attention": true,
+ "share_att_key": true,
+ "torch_dtype": "bfloat16",
+ "transformers_version": "4.53.1",
+ "type_vocab_size": 0,
+ "vocab_size": 128100
+ }
+
+ [2025-07-11 01:30:51,082][transformers.tokenization_utils_base][INFO] - loading file spm.model from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/spm.model
+ [2025-07-11 01:30:51,083][transformers.tokenization_utils_base][INFO] - loading file tokenizer.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/tokenizer.json
+ [2025-07-11 01:30:51,083][transformers.tokenization_utils_base][INFO] - loading file added_tokens.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/added_tokens.json
+ [2025-07-11 01:30:51,083][transformers.tokenization_utils_base][INFO] - loading file special_tokens_map.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/special_tokens_map.json
+ [2025-07-11 01:30:51,083][transformers.tokenization_utils_base][INFO] - loading file tokenizer_config.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/tokenizer_config.json
+ [2025-07-11 01:30:51,083][transformers.tokenization_utils_base][INFO] - loading file chat_template.jinja from cache at None
+ [2025-07-11 01:30:51,312][__main__][INFO] - Tokenizer function parameters- Padding:longest; Truncation: True; Use Full Context: False
+ [2025-07-11 01:30:51,723][__main__][INFO] -
+ Token statistics for 'train' split:
+ [2025-07-11 01:30:51,723][__main__][INFO] - Total examples: 500
+ [2025-07-11 01:30:51,723][__main__][INFO] - Min tokens: 512
+ [2025-07-11 01:30:51,723][__main__][INFO] - Max tokens: 512
+ [2025-07-11 01:30:51,723][__main__][INFO] - Avg tokens: 512.00
+ [2025-07-11 01:30:51,723][__main__][INFO] - Std tokens: 0.00
+ [2025-07-11 01:30:51,817][__main__][INFO] -
+ Token statistics for 'validation' split:
+ [2025-07-11 01:30:51,817][__main__][INFO] - Total examples: 132
+ [2025-07-11 01:30:51,817][__main__][INFO] - Min tokens: 512
+ [2025-07-11 01:30:51,817][__main__][INFO] - Max tokens: 512
+ [2025-07-11 01:30:51,817][__main__][INFO] - Avg tokens: 512.00
+ [2025-07-11 01:30:51,817][__main__][INFO] - Std tokens: 0.00
+ [2025-07-11 01:30:51,915][__main__][INFO] -
+ Token statistics for 'test' split:
+ [2025-07-11 01:30:51,916][__main__][INFO] - Total examples: 138
+ [2025-07-11 01:30:51,916][__main__][INFO] - Min tokens: 512
+ [2025-07-11 01:30:51,916][__main__][INFO] - Max tokens: 512
+ [2025-07-11 01:30:51,916][__main__][INFO] - Avg tokens: 512.00
+ [2025-07-11 01:30:51,916][__main__][INFO] - Std tokens: 0.00
+ [2025-07-11 01:30:51,916][__main__][INFO] - If token statistics are the same (max, avg, min) keep in mind that this is due to batched tokenization and padding.
+ [2025-07-11 01:30:51,916][__main__][INFO] - Model max length: 512. If it is the same as stats, then there is a high chance that sequences are being truncated.
+ [2025-07-11 01:30:51,916][__main__][INFO] - Loading model from: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only
+ [2025-07-11 01:30:51,916][__main__][INFO] - Loading model from: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only
+ [2025-07-11 01:30:53,129][__main__][INFO] - Model need ≈ 9.51 GiB to run inference and 27.02 for training
+ [2025-07-11 01:30:53,932][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--kamel-usp--jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only/snapshots/5ad996b4a4a15ce313490a7dd5f9001638938bae/config.json
+ [2025-07-11 01:30:53,933][transformers.configuration_utils][INFO] - Model config DebertaV2Config {
+ "architectures": [
+ "DebertaV2ForSequenceClassification"
+ ],
+ "attention_head_size": 64,
+ "attention_probs_dropout_prob": 0.1,
+ "conv_act": "gelu",
+ "conv_kernel_size": 3,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 1536,
+ "id2label": {
+ "0": 0,
+ "1": 40,
+ "2": 80,
+ "3": 120,
+ "4": 160,
+ "5": 200
+ },
+ "initializer_range": 0.02,
+ "intermediate_size": 6144,
+ "label2id": {
+ "0": 0,
+ "40": 1,
+ "80": 2,
+ "120": 3,
+ "160": 4,
+ "200": 5
+ },
+ "layer_norm_eps": 1e-07,
+ "legacy": true,
+ "max_position_embeddings": 512,
+ "max_relative_positions": -1,
+ "model_type": "deberta-v2",
+ "norm_rel_ebd": "layer_norm",
+ "num_attention_heads": 24,
+ "num_hidden_layers": 48,
+ "pad_token_id": 0,
+ "pooler_dropout": 0,
+ "pooler_hidden_act": "gelu",
+ "pooler_hidden_size": 1536,
+ "pos_att_type": [
+ "p2c",
+ "c2p"
+ ],
+ "position_biased_input": false,
+ "position_buckets": 256,
+ "relative_attention": true,
+ "share_att_key": true,
+ "torch_dtype": "bfloat16",
+ "transformers_version": "4.53.1",
+ "type_vocab_size": 0,
+ "vocab_size": 128100
+ }
+
+ [2025-07-11 01:31:54,528][transformers.modeling_utils][INFO] - loading weights file model.safetensors from cache at /tmp/models--kamel-usp--jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only/snapshots/5ad996b4a4a15ce313490a7dd5f9001638938bae/model.safetensors
+ [2025-07-11 01:31:54,532][transformers.modeling_utils][INFO] - Will use torch_dtype=torch.bfloat16 as defined in model's config object
+ [2025-07-11 01:31:54,532][transformers.modeling_utils][INFO] - Instantiating DebertaV2ForSequenceClassification model under default dtype torch.bfloat16.
+ [2025-07-11 01:31:56,446][transformers.modeling_utils][INFO] - All model checkpoint weights were used when initializing DebertaV2ForSequenceClassification.
+
+ [2025-07-11 01:31:56,446][transformers.modeling_utils][INFO] - All the weights of DebertaV2ForSequenceClassification were initialized from the model checkpoint at kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only.
+ If your task is similar to the task the model of the checkpoint was trained on, you can already use DebertaV2ForSequenceClassification for predictions without further training.
+ [2025-07-11 01:31:56,466][transformers.training_args][INFO] - PyTorch: setting up devices
+ [2025-07-11 01:31:56,489][transformers.training_args][INFO] - The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).
+ [2025-07-11 01:31:56,497][transformers.trainer][INFO] - You have loaded a model on multiple GPUs. `is_model_parallel` attribute will be force-set to `True` to avoid any unexpected behavior such as device placement mismatching.
+ [2025-07-11 01:31:56,522][transformers.trainer][INFO] - Using auto half precision backend
+ [2025-07-11 01:31:59,885][__main__][INFO] - Running inference on test dataset
+ [2025-07-11 01:31:59,887][transformers.trainer][INFO] - The following columns in the test set don't have a corresponding argument in `DebertaV2ForSequenceClassification.forward` and have been ignored: essay_text, prompt, grades, supporting_text, id_prompt, essay_year, reference, id. If essay_text, prompt, grades, supporting_text, id_prompt, essay_year, reference, id are not expected by `DebertaV2ForSequenceClassification.forward`, you can safely ignore this message.
+ [2025-07-11 01:31:59,900][transformers.trainer][INFO] -
+ ***** Running Prediction *****
+ [2025-07-11 01:31:59,900][transformers.trainer][INFO] - Num examples = 138
+ [2025-07-11 01:31:59,900][transformers.trainer][INFO] - Batch size = 4
+ [2025-07-11 01:32:08,637][__main__][INFO] - Inference results saved to jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only_inference_results.jsonl
+ [2025-07-11 01:32:08,638][__main__][INFO] - Computing bootstrap confidence intervals for metrics: ['QWK', 'Macro_F1', 'Weighted_F1']
+ [2025-07-11 01:34:15,233][__main__][INFO] - Bootstrap CI results saved to bootstrap_confidence_intervals.csv
+ [2025-07-11 01:34:15,233][__main__][INFO] - Bootstrap Confidence Intervals (95%):
+ [2025-07-11 01:34:15,233][__main__][INFO] - QWK: 0.6817 [0.5762, 0.7806]
+ [2025-07-11 01:34:15,233][__main__][INFO] - Macro_F1: 0.5363 [0.4127, 0.6945]
+ [2025-07-11 01:34:15,233][__main__][INFO] - Weighted_F1: 0.7122 [0.6365, 0.7850]
+ [2025-07-11 01:34:15,233][__main__][INFO] - Inference results: {'accuracy': 0.7028985507246377, 'RMSE': 25.931906372573962, 'QWK': 0.6826328310864394, 'HDIV': 0.007246376811594235, 'Macro_F1': 0.49688027796692447, 'Micro_F1': 0.7028985507246377, 'Weighted_F1': 0.7116672747290037, 'TP_0': np.int64(0), 'TN_0': np.int64(137), 'FP_0': np.int64(0), 'FN_0': np.int64(1), 'TP_1': np.int64(0), 'TN_1': np.int64(138), 'FP_1': np.int64(0), 'FN_1': np.int64(0), 'TP_2': np.int64(6), 'TN_2': np.int64(122), 'FP_2': np.int64(6), 'FN_2': np.int64(4), 'TP_3': np.int64(45), 'TN_3': np.int64(65), 'FP_3': np.int64(7), 'FN_3': np.int64(21), 'TP_4': np.int64(40), 'TN_4': np.int64(71), 'FP_4': np.int64(16), 'FN_4': np.int64(11), 'TP_5': np.int64(6), 'TN_5': np.int64(116), 'FP_5': np.int64(12), 'FN_5': np.int64(4)}
+ [2025-07-11 01:34:15,237][__main__][INFO] - Inference experiment completed
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only/.hydra/config.yaml ADDED
@@ -0,0 +1,41 @@
+ cache_dir: /tmp/
+ dataset:
+ name: kamel-usp/aes_enem_dataset
+ split: JBCS2025
+ training_params:
+ seed: 42
+ num_train_epochs: 20
+ logging_steps: 100
+ metric_for_best_model: QWK
+ bf16: true
+ bootstrap:
+ enabled: true
+ n_bootstrap: 10000
+ bootstrap_seed: 42
+ metrics:
+ - QWK
+ - Macro_F1
+ - Weighted_F1
+ post_training_results:
+ model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
+ experiments:
+ model:
+ name: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only
+ type: encoder_classification
+ num_labels: 6
+ output_dir: ./results/
+ logging_dir: ./logs/
+ best_model_dir: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only
+ tokenizer:
+ name: PORTULAN/albertina-1b5-portuguese-ptbr-encoder
+ dataset:
+ grade_index: 1
+ use_full_context: false
+ training_params:
+ weight_decay: 0.01
+ warmup_ratio: 0.1
+ learning_rate: 5.0e-05
+ train_batch_size: 4
+ eval_batch_size: 4
+ gradient_accumulation_steps: 4
+ gradient_checkpointing: false
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only/.hydra/hydra.yaml ADDED
@@ -0,0 +1,157 @@
1
+ hydra:
2
+ run:
3
+ dir: inference_output/2025-07-11/01-34-20
4
+ sweep:
5
+ dir: multirun/${now:%Y-%m-%d}/${now:%H-%M-%S}
6
+ subdir: ${hydra.job.num}
7
+ launcher:
8
+ _target_: hydra._internal.core_plugins.basic_launcher.BasicLauncher
9
+ sweeper:
10
+ _target_: hydra._internal.core_plugins.basic_sweeper.BasicSweeper
11
+ max_batch_size: null
12
+ params: null
13
+ help:
14
+ app_name: ${hydra.job.name}
15
+ header: '${hydra.help.app_name} is powered by Hydra.
16
+
17
+ '
18
+ footer: 'Powered by Hydra (https://hydra.cc)
19
+
20
+ Use --hydra-help to view Hydra specific help
21
+
22
+ '
23
+ template: '${hydra.help.header}
24
+
25
+ == Configuration groups ==
26
+
27
+ Compose your configuration from those groups (group=option)
28
+
29
+
30
+ $APP_CONFIG_GROUPS
31
+
32
+
33
+ == Config ==
34
+
35
+ Override anything in the config (foo.bar=value)
36
+
37
+
38
+ $CONFIG
39
+
40
+
41
+ ${hydra.help.footer}
42
+
43
+ '
44
+ hydra_help:
45
+ template: 'Hydra (${hydra.runtime.version})
46
+
47
+ See https://hydra.cc for more info.
48
+
49
+
50
+ == Flags ==
51
+
52
+ $FLAGS_HELP
53
+
54
+
55
+ == Configuration groups ==
56
+
57
+ Compose your configuration from those groups (For example, append hydra/job_logging=disabled
58
+ to command line)
59
+
60
+
61
+ $HYDRA_CONFIG_GROUPS
62
+
63
+
64
+ Use ''--cfg hydra'' to Show the Hydra config.
65
+
66
+ '
67
+ hydra_help: ???
68
+ hydra_logging:
69
+ version: 1
70
+ formatters:
71
+ simple:
72
+ format: '[%(asctime)s][HYDRA] %(message)s'
73
+ handlers:
74
+ console:
75
+ class: logging.StreamHandler
76
+ formatter: simple
77
+ stream: ext://sys.stdout
78
+ root:
79
+ level: INFO
80
+ handlers:
81
+ - console
82
+ loggers:
83
+ logging_example:
84
+ level: DEBUG
85
+ disable_existing_loggers: false
86
+ job_logging:
87
+ version: 1
88
+ formatters:
89
+ simple:
90
+ format: '[%(asctime)s][%(name)s][%(levelname)s] - %(message)s'
91
+ handlers:
92
+ console:
93
+ class: logging.StreamHandler
94
+ formatter: simple
95
+ stream: ext://sys.stdout
96
+ file:
97
+ class: logging.FileHandler
98
+ formatter: simple
99
+ filename: ${hydra.runtime.output_dir}/${hydra.job.name}.log
100
+ root:
101
+ level: INFO
102
+ handlers:
103
+ - console
104
+ - file
105
+ disable_existing_loggers: false
106
+ env: {}
107
+ mode: RUN
108
+ searchpath: []
109
+ callbacks: {}
110
+ output_subdir: .hydra
111
+ overrides:
112
+ hydra:
113
+ - hydra.run.dir=inference_output/2025-07-11/01-34-20
114
+ - hydra.mode=RUN
115
+ task:
116
+ - experiments=temp_inference/kamel-usp_jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only
117
+ job:
118
+ name: run_inference_experiment
119
+ chdir: null
120
+ override_dirname: experiments=temp_inference/kamel-usp_jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only
121
+ id: ???
122
+ num: ???
123
+ config_name: config
124
+ env_set: {}
125
+ env_copy: []
126
+ config:
127
+ override_dirname:
128
+ kv_sep: '='
129
+ item_sep: ','
130
+ exclude_keys: []
131
+ runtime:
132
+ version: 1.3.2
133
+ version_base: '1.1'
134
+ cwd: /workspace/jbcs2025
135
+ config_sources:
136
+ - path: hydra.conf
137
+ schema: pkg
138
+ provider: hydra
139
+ - path: /workspace/jbcs2025/configs
140
+ schema: file
141
+ provider: main
142
+ - path: ''
143
+ schema: structured
144
+ provider: schema
145
+ output_dir: /workspace/jbcs2025/inference_output/2025-07-11/01-34-20
146
+ choices:
147
+ experiments: temp_inference/kamel-usp_jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only
148
+ hydra/env: default
149
+ hydra/callbacks: null
150
+ hydra/job_logging: default
151
+ hydra/hydra_logging: default
152
+ hydra/hydra_help: default
153
+ hydra/help: default
154
+ hydra/sweeper: basic
155
+ hydra/launcher: basic
156
+ hydra/output: default
157
+ verbose: false
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only/.hydra/overrides.yaml ADDED
@@ -0,0 +1 @@
+ - experiments=temp_inference/kamel-usp_jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only/bootstrap_confidence_intervals.csv ADDED
@@ -0,0 +1,2 @@
+ experiment_id,timestamp,QWK_mean,QWK_lower_95ci,QWK_upper_95ci,QWK_ci_width,Macro_F1_mean,Macro_F1_lower_95ci,Macro_F1_upper_95ci,Macro_F1_ci_width,Weighted_F1_mean,Weighted_F1_lower_95ci,Weighted_F1_upper_95ci,Weighted_F1_ci_width
+ jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only,2025-07-11 01:34:26,0.2382531468171123,0.08558340882557128,0.386519313107836,0.3009359042822647,0.13898233480051359,0.08334403539825075,0.20313595816462865,0.1197919227663779,0.15418097819475352,0.0921235036938581,0.22086688751892972,0.1287433838250716
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only/evaluation_results.csv ADDED
@@ -0,0 +1,2 @@
+ accuracy,RMSE,QWK,HDIV,Macro_F1,Micro_F1,Weighted_F1,TP_0,TN_0,FP_0,FN_0,TP_1,TN_1,FP_1,FN_1,TP_2,TN_2,FP_2,FN_2,TP_3,TN_3,FP_3,FN_3,TP_4,TN_4,FP_4,FN_4,TP_5,TN_5,FP_5,FN_5,timestamp,id
+ 0.15942028985507245,60.72031391337771,0.24034067303697548,0.1159420289855072,0.13165647872210337,0.15942028985507245,0.1542258389739044,0,137,0,1,0,99,4,35,3,74,59,2,6,73,14,45,9,79,33,17,4,112,6,16,2025-07-11 01:34:26,jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only_inference_results.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only/run_inference_experiment.log ADDED
@@ -0,0 +1,199 @@
1
+ [2025-07-11 01:34:25,998][__main__][INFO] - Starting inference experiment
2
+ [2025-07-11 01:34:25,999][__main__][INFO] - cache_dir: /tmp/
3
+ dataset:
4
+ name: kamel-usp/aes_enem_dataset
5
+ split: JBCS2025
6
+ training_params:
7
+ seed: 42
8
+ num_train_epochs: 20
9
+ logging_steps: 100
10
+ metric_for_best_model: QWK
11
+ bf16: true
12
+ bootstrap:
13
+ enabled: true
14
+ n_bootstrap: 10000
15
+ bootstrap_seed: 42
16
+ metrics:
17
+ - QWK
18
+ - Macro_F1
19
+ - Weighted_F1
20
+ post_training_results:
21
+ model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
22
+ experiments:
23
+ model:
24
+ name: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only
25
+ type: encoder_classification
26
+ num_labels: 6
27
+ output_dir: ./results/
28
+ logging_dir: ./logs/
29
+ best_model_dir: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only
30
+ tokenizer:
31
+ name: PORTULAN/albertina-1b5-portuguese-ptbr-encoder
32
+ dataset:
33
+ grade_index: 1
34
+ use_full_context: false
35
+ training_params:
36
+ weight_decay: 0.01
37
+ warmup_ratio: 0.1
38
+ learning_rate: 5.0e-05
39
+ train_batch_size: 4
40
+ eval_batch_size: 4
41
+ gradient_accumulation_steps: 4
42
+ gradient_checkpointing: false
43
+
44
+ [2025-07-11 01:34:26,001][__main__][INFO] - Running inference with fine-tuned HF model
45
+ [2025-07-11 01:34:30,940][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/config.json
46
+ [2025-07-11 01:34:30,943][transformers.configuration_utils][INFO] - Model config DebertaV2Config {
47
+ "architectures": [
48
+ "DebertaV2ForMaskedLM"
49
+ ],
50
+ "attention_head_size": 64,
51
+ "attention_probs_dropout_prob": 0.1,
52
+ "conv_act": "gelu",
53
+ "conv_kernel_size": 3,
54
+ "hidden_act": "gelu",
55
+ "hidden_dropout_prob": 0.1,
56
+ "hidden_size": 1536,
57
+ "initializer_range": 0.02,
58
+ "intermediate_size": 6144,
59
+ "layer_norm_eps": 1e-07,
60
+ "legacy": true,
61
+ "max_position_embeddings": 512,
62
+ "max_relative_positions": -1,
63
+ "model_type": "deberta-v2",
64
+ "norm_rel_ebd": "layer_norm",
65
+ "num_attention_heads": 24,
66
+ "num_hidden_layers": 48,
67
+ "pad_token_id": 0,
68
+ "pooler_dropout": 0,
69
+ "pooler_hidden_act": "gelu",
70
+ "pooler_hidden_size": 1536,
71
+ "pos_att_type": [
72
+ "p2c",
73
+ "c2p"
74
+ ],
75
+ "position_biased_input": false,
76
+ "position_buckets": 256,
77
+ "relative_attention": true,
78
+ "share_att_key": true,
79
+ "torch_dtype": "bfloat16",
80
+ "transformers_version": "4.53.1",
81
+ "type_vocab_size": 0,
82
+ "vocab_size": 128100
83
+ }
84
+
85
+ [2025-07-11 01:34:31,458][transformers.tokenization_utils_base][INFO] - loading file spm.model from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/spm.model
86
+ [2025-07-11 01:34:31,458][transformers.tokenization_utils_base][INFO] - loading file tokenizer.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/tokenizer.json
87
+ [2025-07-11 01:34:31,458][transformers.tokenization_utils_base][INFO] - loading file added_tokens.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/added_tokens.json
88
+ [2025-07-11 01:34:31,458][transformers.tokenization_utils_base][INFO] - loading file special_tokens_map.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/special_tokens_map.json
89
+ [2025-07-11 01:34:31,458][transformers.tokenization_utils_base][INFO] - loading file tokenizer_config.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/tokenizer_config.json
90
+ [2025-07-11 01:34:31,458][transformers.tokenization_utils_base][INFO] - loading file chat_template.jinja from cache at None
91
+ [2025-07-11 01:34:31,689][__main__][INFO] - Tokenizer function parameters- Padding:longest; Truncation: True; Use Full Context: False
92
+ [2025-07-11 01:34:32,088][__main__][INFO] -
93
+ Token statistics for 'train' split:
94
+ [2025-07-11 01:34:32,088][__main__][INFO] - Total examples: 500
95
+ [2025-07-11 01:34:32,088][__main__][INFO] - Min tokens: 512
96
+ [2025-07-11 01:34:32,088][__main__][INFO] - Max tokens: 512
97
+ [2025-07-11 01:34:32,088][__main__][INFO] - Avg tokens: 512.00
98
+ [2025-07-11 01:34:32,088][__main__][INFO] - Std tokens: 0.00
99
+ [2025-07-11 01:34:32,181][__main__][INFO] -
100
+ Token statistics for 'validation' split:
101
+ [2025-07-11 01:34:32,181][__main__][INFO] - Total examples: 132
102
+ [2025-07-11 01:34:32,181][__main__][INFO] - Min tokens: 512
103
+ [2025-07-11 01:34:32,181][__main__][INFO] - Max tokens: 512
104
+ [2025-07-11 01:34:32,181][__main__][INFO] - Avg tokens: 512.00
105
+ [2025-07-11 01:34:32,181][__main__][INFO] - Std tokens: 0.00
106
+ [2025-07-11 01:34:32,281][__main__][INFO] -
107
+ Token statistics for 'test' split:
108
+ [2025-07-11 01:34:32,281][__main__][INFO] - Total examples: 138
109
+ [2025-07-11 01:34:32,281][__main__][INFO] - Min tokens: 512
110
+ [2025-07-11 01:34:32,281][__main__][INFO] - Max tokens: 512
111
+ [2025-07-11 01:34:32,281][__main__][INFO] - Avg tokens: 512.00
112
+ [2025-07-11 01:34:32,281][__main__][INFO] - Std tokens: 0.00
113
+ [2025-07-11 01:34:32,281][__main__][INFO] - If token statistics are the same (max, avg, min) keep in mind that this is due to batched tokenization and padding.
114
+ [2025-07-11 01:34:32,281][__main__][INFO] - Model max length: 512. If it is the same as stats, then there is a high chance that sequences are being truncated.
115
+ [2025-07-11 01:34:32,281][__main__][INFO] - Loading model from: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only
116
+ [2025-07-11 01:34:32,282][__main__][INFO] - Loading model from: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only
117
+ [2025-07-11 01:34:33,362][__main__][INFO] - Model need ≈ 9.51 GiB to run inference and 27.02 for training
118
+ [2025-07-11 01:34:34,304][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--kamel-usp--jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only/snapshots/adb37bfa550aaf8233d141929271f33f9a7147f1/config.json
119
+ [2025-07-11 01:34:34,304][transformers.configuration_utils][INFO] - Model config DebertaV2Config {
120
+ "architectures": [
121
+ "DebertaV2ForSequenceClassification"
122
+ ],
123
+ "attention_head_size": 64,
124
+ "attention_probs_dropout_prob": 0.1,
125
+ "conv_act": "gelu",
126
+ "conv_kernel_size": 3,
127
+ "hidden_act": "gelu",
128
+ "hidden_dropout_prob": 0.1,
129
+ "hidden_size": 1536,
130
+ "id2label": {
131
+ "0": 0,
132
+ "1": 40,
133
+ "2": 80,
134
+ "3": 120,
135
+ "4": 160,
136
+ "5": 200
137
+ },
138
+ "initializer_range": 0.02,
139
+ "intermediate_size": 6144,
140
+ "label2id": {
141
+ "0": 0,
142
+ "40": 1,
143
+ "80": 2,
144
+ "120": 3,
145
+ "160": 4,
146
+ "200": 5
147
+ },
148
+ "layer_norm_eps": 1e-07,
149
+ "legacy": true,
150
+ "max_position_embeddings": 512,
151
+ "max_relative_positions": -1,
152
+ "model_type": "deberta-v2",
153
+ "norm_rel_ebd": "layer_norm",
154
+ "num_attention_heads": 24,
155
+ "num_hidden_layers": 48,
156
+ "pad_token_id": 0,
157
+ "pooler_dropout": 0,
158
+ "pooler_hidden_act": "gelu",
159
+ "pooler_hidden_size": 1536,
160
+ "pos_att_type": [
161
+ "p2c",
162
+ "c2p"
163
+ ],
164
+ "position_biased_input": false,
165
+ "position_buckets": 256,
166
+ "relative_attention": true,
167
+ "share_att_key": true,
168
+ "torch_dtype": "bfloat16",
169
+ "transformers_version": "4.53.1",
170
+ "type_vocab_size": 0,
171
+ "vocab_size": 128100
172
+ }
173
+
174
+ [2025-07-11 01:35:30,309][transformers.modeling_utils][INFO] - loading weights file model.safetensors from cache at /tmp/models--kamel-usp--jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only/snapshots/adb37bfa550aaf8233d141929271f33f9a7147f1/model.safetensors
175
+ [2025-07-11 01:35:30,313][transformers.modeling_utils][INFO] - Will use torch_dtype=torch.bfloat16 as defined in model's config object
176
+ [2025-07-11 01:35:30,313][transformers.modeling_utils][INFO] - Instantiating DebertaV2ForSequenceClassification model under default dtype torch.bfloat16.
177
+ [2025-07-11 01:35:31,939][transformers.modeling_utils][INFO] - All model checkpoint weights were used when initializing DebertaV2ForSequenceClassification.
178
+
179
+ [2025-07-11 01:35:31,939][transformers.modeling_utils][INFO] - All the weights of DebertaV2ForSequenceClassification were initialized from the model checkpoint at kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only.
180
+ If your task is similar to the task the model of the checkpoint was trained on, you can already use DebertaV2ForSequenceClassification for predictions without further training.
181
+ [2025-07-11 01:35:31,960][transformers.training_args][INFO] - PyTorch: setting up devices
182
+ [2025-07-11 01:35:31,984][transformers.training_args][INFO] - The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).
183
+ [2025-07-11 01:35:31,991][transformers.trainer][INFO] - You have loaded a model on multiple GPUs. `is_model_parallel` attribute will be force-set to `True` to avoid any unexpected behavior such as device placement mismatching.
184
+ [2025-07-11 01:35:32,016][transformers.trainer][INFO] - Using auto half precision backend
185
+ [2025-07-11 01:35:35,310][__main__][INFO] - Running inference on test dataset
186
+ [2025-07-11 01:35:35,311][transformers.trainer][INFO] - The following columns in the test set don't have a corresponding argument in `DebertaV2ForSequenceClassification.forward` and have been ignored: essay_text, essay_year, id, id_prompt, reference, prompt, grades, supporting_text. If essay_text, essay_year, id, id_prompt, reference, prompt, grades, supporting_text are not expected by `DebertaV2ForSequenceClassification.forward`, you can safely ignore this message.
187
+ [2025-07-11 01:35:35,324][transformers.trainer][INFO] -
188
+ ***** Running Prediction *****
189
+ [2025-07-11 01:35:35,324][transformers.trainer][INFO] - Num examples = 138
190
+ [2025-07-11 01:35:35,324][transformers.trainer][INFO] - Batch size = 4
191
+ [2025-07-11 01:35:44,076][__main__][INFO] - Inference results saved to jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only_inference_results.jsonl
192
+ [2025-07-11 01:35:44,077][__main__][INFO] - Computing bootstrap confidence intervals for metrics: ['QWK', 'Macro_F1', 'Weighted_F1']
193
+ [2025-07-11 01:37:50,577][__main__][INFO] - Bootstrap CI results saved to bootstrap_confidence_intervals.csv
194
+ [2025-07-11 01:37:50,578][__main__][INFO] - Bootstrap Confidence Intervals (95%):
195
+ [2025-07-11 01:37:50,578][__main__][INFO] - QWK: 0.2383 [0.0856, 0.3865]
196
+ [2025-07-11 01:37:50,578][__main__][INFO] - Macro_F1: 0.1390 [0.0833, 0.2031]
197
+ [2025-07-11 01:37:50,578][__main__][INFO] - Weighted_F1: 0.1542 [0.0921, 0.2209]
198
+ [2025-07-11 01:37:50,578][__main__][INFO] - Inference results: {'accuracy': 0.15942028985507245, 'RMSE': 60.72031391337771, 'QWK': 0.24034067303697548, 'HDIV': 0.1159420289855072, 'Macro_F1': 0.13165647872210337, 'Micro_F1': 0.15942028985507245, 'Weighted_F1': 0.1542258389739044, 'TP_0': np.int64(0), 'TN_0': np.int64(137), 'FP_0': np.int64(0), 'FN_0': np.int64(1), 'TP_1': np.int64(0), 'TN_1': np.int64(99), 'FP_1': np.int64(4), 'FN_1': np.int64(35), 'TP_2': np.int64(3), 'TN_2': np.int64(74), 'FP_2': np.int64(59), 'FN_2': np.int64(2), 'TP_3': np.int64(6), 'TN_3': np.int64(73), 'FP_3': np.int64(14), 'FN_3': np.int64(45), 'TP_4': np.int64(9), 'TN_4': np.int64(79), 'FP_4': np.int64(33), 'FN_4': np.int64(17), 'TP_5': np.int64(4), 'TN_5': np.int64(112), 'FP_5': np.int64(6), 'FN_5': np.int64(16)}
199
+ [2025-07-11 01:37:50,578][__main__][INFO] - Inference experiment completed
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only/.hydra/config.yaml ADDED
@@ -0,0 +1,41 @@
+ cache_dir: /tmp/
+ dataset:
+ name: kamel-usp/aes_enem_dataset
+ split: JBCS2025
+ training_params:
+ seed: 42
+ num_train_epochs: 20
+ logging_steps: 100
+ metric_for_best_model: QWK
+ bf16: true
+ bootstrap:
+ enabled: true
+ n_bootstrap: 10000
+ bootstrap_seed: 42
+ metrics:
+ - QWK
+ - Macro_F1
+ - Weighted_F1
+ post_training_results:
+ model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
+ experiments:
+ model:
+ name: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only
+ type: encoder_classification
+ num_labels: 6
+ output_dir: ./results/
+ logging_dir: ./logs/
+ best_model_dir: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only
+ tokenizer:
+ name: PORTULAN/albertina-1b5-portuguese-ptbr-encoder
+ dataset:
+ grade_index: 2
+ use_full_context: false
+ training_params:
+ weight_decay: 0.01
+ warmup_ratio: 0.1
+ learning_rate: 5.0e-05
+ train_batch_size: 4
+ eval_batch_size: 4
+ gradient_accumulation_steps: 4
+ gradient_checkpointing: false
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only/.hydra/hydra.yaml ADDED
@@ -0,0 +1,157 @@
1
+ hydra:
2
+ run:
3
+ dir: inference_output/2025-07-11/01-37-55
4
+ sweep:
5
+ dir: multirun/${now:%Y-%m-%d}/${now:%H-%M-%S}
6
+ subdir: ${hydra.job.num}
7
+ launcher:
8
+ _target_: hydra._internal.core_plugins.basic_launcher.BasicLauncher
9
+ sweeper:
10
+ _target_: hydra._internal.core_plugins.basic_sweeper.BasicSweeper
11
+ max_batch_size: null
12
+ params: null
13
+ help:
14
+ app_name: ${hydra.job.name}
15
+ header: '${hydra.help.app_name} is powered by Hydra.
16
+
17
+ '
18
+ footer: 'Powered by Hydra (https://hydra.cc)
19
+
20
+ Use --hydra-help to view Hydra specific help
21
+
22
+ '
23
+ template: '${hydra.help.header}
24
+
25
+ == Configuration groups ==
26
+
27
+ Compose your configuration from those groups (group=option)
28
+
29
+
30
+ $APP_CONFIG_GROUPS
31
+
32
+
33
+ == Config ==
34
+
35
+ Override anything in the config (foo.bar=value)
36
+
37
+
38
+ $CONFIG
39
+
40
+
41
+ ${hydra.help.footer}
42
+
43
+ '
44
+ hydra_help:
45
+ template: 'Hydra (${hydra.runtime.version})
46
+
47
+ See https://hydra.cc for more info.
48
+
49
+
50
+ == Flags ==
51
+
52
+ $FLAGS_HELP
53
+
54
+
55
+ == Configuration groups ==
56
+
57
+ Compose your configuration from those groups (For example, append hydra/job_logging=disabled
58
+ to command line)
59
+
60
+
61
+ $HYDRA_CONFIG_GROUPS
62
+
63
+
64
+ Use ''--cfg hydra'' to Show the Hydra config.
65
+
66
+ '
67
+ hydra_help: ???
68
+ hydra_logging:
69
+ version: 1
70
+ formatters:
71
+ simple:
72
+ format: '[%(asctime)s][HYDRA] %(message)s'
73
+ handlers:
74
+ console:
75
+ class: logging.StreamHandler
76
+ formatter: simple
77
+ stream: ext://sys.stdout
78
+ root:
79
+ level: INFO
80
+ handlers:
81
+ - console
82
+ loggers:
83
+ logging_example:
84
+ level: DEBUG
85
+ disable_existing_loggers: false
86
+ job_logging:
87
+ version: 1
88
+ formatters:
89
+ simple:
90
+ format: '[%(asctime)s][%(name)s][%(levelname)s] - %(message)s'
91
+ handlers:
92
+ console:
93
+ class: logging.StreamHandler
94
+ formatter: simple
95
+ stream: ext://sys.stdout
96
+ file:
97
+ class: logging.FileHandler
98
+ formatter: simple
99
+ filename: ${hydra.runtime.output_dir}/${hydra.job.name}.log
100
+ root:
101
+ level: INFO
102
+ handlers:
103
+ - console
104
+ - file
105
+ disable_existing_loggers: false
106
+ env: {}
107
+ mode: RUN
108
+ searchpath: []
109
+ callbacks: {}
110
+ output_subdir: .hydra
111
+ overrides:
112
+ hydra:
113
+ - hydra.run.dir=inference_output/2025-07-11/01-37-55
114
+ - hydra.mode=RUN
115
+ task:
116
+ - experiments=temp_inference/kamel-usp_jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only
117
+ job:
118
+ name: run_inference_experiment
119
+ chdir: null
120
+ override_dirname: experiments=temp_inference/kamel-usp_jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only
121
+ id: ???
122
+ num: ???
123
+ config_name: config
124
+ env_set: {}
125
+ env_copy: []
126
+ config:
127
+ override_dirname:
128
+ kv_sep: '='
129
+ item_sep: ','
130
+ exclude_keys: []
131
+ runtime:
132
+ version: 1.3.2
133
+ version_base: '1.1'
134
+ cwd: /workspace/jbcs2025
135
+ config_sources:
136
+ - path: hydra.conf
137
+ schema: pkg
138
+ provider: hydra
139
+ - path: /workspace/jbcs2025/configs
140
+ schema: file
141
+ provider: main
142
+ - path: ''
143
+ schema: structured
144
+ provider: schema
145
+ output_dir: /workspace/jbcs2025/inference_output/2025-07-11/01-37-55
146
+ choices:
147
+ experiments: temp_inference/kamel-usp_jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only
148
+ hydra/env: default
149
+ hydra/callbacks: null
150
+ hydra/job_logging: default
151
+ hydra/hydra_logging: default
152
+ hydra/hydra_help: default
153
+ hydra/help: default
154
+ hydra/sweeper: basic
155
+ hydra/launcher: basic
156
+ hydra/output: default
157
+ verbose: false
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only/.hydra/overrides.yaml ADDED
@@ -0,0 +1 @@
+ - experiments=temp_inference/kamel-usp_jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only/bootstrap_confidence_intervals.csv ADDED
@@ -0,0 +1,2 @@
+ experiment_id,timestamp,QWK_mean,QWK_lower_95ci,QWK_upper_95ci,QWK_ci_width,Macro_F1_mean,Macro_F1_lower_95ci,Macro_F1_upper_95ci,Macro_F1_ci_width,Weighted_F1_mean,Weighted_F1_lower_95ci,Weighted_F1_upper_95ci,Weighted_F1_ci_width
+ jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only,2025-07-11 01:38:01,0.2517666669121323,0.09322536648339473,0.40469018928519745,0.3114648228018027,0.22857107700198048,0.1640445655410444,0.305809643248977,0.14176507770793262,0.30780478274990675,0.23016566290720317,0.3851747099823908,0.15500904707518762
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only/evaluation_results.csv ADDED
@@ -0,0 +1,2 @@
+ accuracy,RMSE,QWK,HDIV,Macro_F1,Micro_F1,Weighted_F1,TP_0,TN_0,FP_0,FN_0,TP_1,TN_1,FP_1,FN_1,TP_2,TN_2,FP_2,FN_2,TP_3,TN_3,FP_3,FN_3,TP_4,TN_4,FP_4,FN_4,TP_5,TN_5,FP_5,FN_5,timestamp,id
+ 0.3115942028985507,59.27093282765904,0.25441318069968977,0.09420289855072461,0.2158405391141964,0.3115942028985507,0.30799298925619395,0,137,0,1,9,78,31,20,7,103,17,11,14,67,26,31,13,81,19,25,0,129,2,7,2025-07-11 01:38:01,jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only_inference_results.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only/run_inference_experiment.log ADDED
@@ -0,0 +1,199 @@
1
+ [2025-07-11 01:38:01,376][__main__][INFO] - Starting inference experiment
2
+ [2025-07-11 01:38:01,377][__main__][INFO] - cache_dir: /tmp/
3
+ dataset:
4
+ name: kamel-usp/aes_enem_dataset
5
+ split: JBCS2025
6
+ training_params:
7
+ seed: 42
8
+ num_train_epochs: 20
9
+ logging_steps: 100
10
+ metric_for_best_model: QWK
11
+ bf16: true
12
+ bootstrap:
13
+ enabled: true
14
+ n_bootstrap: 10000
15
+ bootstrap_seed: 42
16
+ metrics:
17
+ - QWK
18
+ - Macro_F1
19
+ - Weighted_F1
20
+ post_training_results:
21
+ model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
22
+ experiments:
23
+ model:
24
+ name: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only
25
+ type: encoder_classification
26
+ num_labels: 6
27
+ output_dir: ./results/
28
+ logging_dir: ./logs/
29
+ best_model_dir: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only
30
+ tokenizer:
31
+ name: PORTULAN/albertina-1b5-portuguese-ptbr-encoder
32
+ dataset:
33
+ grade_index: 2
34
+ use_full_context: false
35
+ training_params:
36
+ weight_decay: 0.01
37
+ warmup_ratio: 0.1
38
+ learning_rate: 5.0e-05
39
+ train_batch_size: 4
40
+ eval_batch_size: 4
41
+ gradient_accumulation_steps: 4
42
+ gradient_checkpointing: false
43
+
44
+ [2025-07-11 01:38:01,380][__main__][INFO] - Running inference with fine-tuned HF model
45
+ [2025-07-11 01:38:06,302][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/config.json
46
+ [2025-07-11 01:38:06,305][transformers.configuration_utils][INFO] - Model config DebertaV2Config {
47
+ "architectures": [
48
+ "DebertaV2ForMaskedLM"
49
+ ],
50
+ "attention_head_size": 64,
51
+ "attention_probs_dropout_prob": 0.1,
52
+ "conv_act": "gelu",
53
+ "conv_kernel_size": 3,
54
+ "hidden_act": "gelu",
55
+ "hidden_dropout_prob": 0.1,
56
+ "hidden_size": 1536,
57
+ "initializer_range": 0.02,
58
+ "intermediate_size": 6144,
59
+ "layer_norm_eps": 1e-07,
60
+ "legacy": true,
61
+ "max_position_embeddings": 512,
62
+ "max_relative_positions": -1,
63
+ "model_type": "deberta-v2",
64
+ "norm_rel_ebd": "layer_norm",
65
+ "num_attention_heads": 24,
66
+ "num_hidden_layers": 48,
67
+ "pad_token_id": 0,
68
+ "pooler_dropout": 0,
69
+ "pooler_hidden_act": "gelu",
70
+ "pooler_hidden_size": 1536,
71
+ "pos_att_type": [
72
+ "p2c",
73
+ "c2p"
74
+ ],
75
+ "position_biased_input": false,
76
+ "position_buckets": 256,
77
+ "relative_attention": true,
78
+ "share_att_key": true,
79
+ "torch_dtype": "bfloat16",
80
+ "transformers_version": "4.53.1",
81
+ "type_vocab_size": 0,
82
+ "vocab_size": 128100
83
+ }
84
+
85
+ [2025-07-11 01:38:06,748][transformers.tokenization_utils_base][INFO] - loading file spm.model from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/spm.model
86
+ [2025-07-11 01:38:06,748][transformers.tokenization_utils_base][INFO] - loading file tokenizer.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/tokenizer.json
87
+ [2025-07-11 01:38:06,748][transformers.tokenization_utils_base][INFO] - loading file added_tokens.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/added_tokens.json
88
+ [2025-07-11 01:38:06,748][transformers.tokenization_utils_base][INFO] - loading file special_tokens_map.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/special_tokens_map.json
89
+ [2025-07-11 01:38:06,748][transformers.tokenization_utils_base][INFO] - loading file tokenizer_config.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/tokenizer_config.json
90
+ [2025-07-11 01:38:06,748][transformers.tokenization_utils_base][INFO] - loading file chat_template.jinja from cache at None
91
+ [2025-07-11 01:38:06,967][__main__][INFO] - Tokenizer function parameters- Padding:longest; Truncation: True; Use Full Context: False
92
+ [2025-07-11 01:38:07,435][__main__][INFO] -
93
+ Token statistics for 'train' split:
94
+ [2025-07-11 01:38:07,435][__main__][INFO] - Total examples: 500
95
+ [2025-07-11 01:38:07,435][__main__][INFO] - Min tokens: 512
96
+ [2025-07-11 01:38:07,435][__main__][INFO] - Max tokens: 512
97
+ [2025-07-11 01:38:07,435][__main__][INFO] - Avg tokens: 512.00
98
+ [2025-07-11 01:38:07,435][__main__][INFO] - Std tokens: 0.00
99
+ [2025-07-11 01:38:07,528][__main__][INFO] -
100
+ Token statistics for 'validation' split:
101
+ [2025-07-11 01:38:07,528][__main__][INFO] - Total examples: 132
102
+ [2025-07-11 01:38:07,528][__main__][INFO] - Min tokens: 512
103
+ [2025-07-11 01:38:07,528][__main__][INFO] - Max tokens: 512
104
+ [2025-07-11 01:38:07,528][__main__][INFO] - Avg tokens: 512.00
105
+ [2025-07-11 01:38:07,528][__main__][INFO] - Std tokens: 0.00
106
+ [2025-07-11 01:38:07,624][__main__][INFO] -
107
+ Token statistics for 'test' split:
108
+ [2025-07-11 01:38:07,624][__main__][INFO] - Total examples: 138
109
+ [2025-07-11 01:38:07,624][__main__][INFO] - Min tokens: 512
110
+ [2025-07-11 01:38:07,624][__main__][INFO] - Max tokens: 512
111
+ [2025-07-11 01:38:07,624][__main__][INFO] - Avg tokens: 512.00
112
+ [2025-07-11 01:38:07,624][__main__][INFO] - Std tokens: 0.00
113
+ [2025-07-11 01:38:07,625][__main__][INFO] - If token statistics are the same (max, avg, min) keep in mind that this is due to batched tokenization and padding.
114
+ [2025-07-11 01:38:07,625][__main__][INFO] - Model max length: 512. If it is the same as stats, then there is a high chance that sequences are being truncated.
115
+ [2025-07-11 01:38:07,625][__main__][INFO] - Loading model from: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only
116
+ [2025-07-11 01:38:07,625][__main__][INFO] - Loading model from: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only
117
+ [2025-07-11 01:38:08,597][__main__][INFO] - Model need ≈ 9.51 GiB to run inference and 27.02 for training
118
+ [2025-07-11 01:38:09,405][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--kamel-usp--jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only/snapshots/932551e130d45fd1107b0d1d9a6a645bfa6fb16a/config.json
119
+ [2025-07-11 01:38:09,406][transformers.configuration_utils][INFO] - Model config DebertaV2Config {
120
+ "architectures": [
121
+ "DebertaV2ForSequenceClassification"
122
+ ],
123
+ "attention_head_size": 64,
124
+ "attention_probs_dropout_prob": 0.1,
125
+ "conv_act": "gelu",
126
+ "conv_kernel_size": 3,
127
+ "hidden_act": "gelu",
128
+ "hidden_dropout_prob": 0.1,
129
+ "hidden_size": 1536,
130
+ "id2label": {
131
+ "0": 0,
132
+ "1": 40,
133
+ "2": 80,
134
+ "3": 120,
135
+ "4": 160,
136
+ "5": 200
137
+ },
138
+ "initializer_range": 0.02,
139
+ "intermediate_size": 6144,
140
+ "label2id": {
141
+ "0": 0,
142
+ "40": 1,
143
+ "80": 2,
144
+ "120": 3,
145
+ "160": 4,
146
+ "200": 5
147
+ },
148
+ "layer_norm_eps": 1e-07,
149
+ "legacy": true,
150
+ "max_position_embeddings": 512,
151
+ "max_relative_positions": -1,
152
+ "model_type": "deberta-v2",
153
+ "norm_rel_ebd": "layer_norm",
154
+ "num_attention_heads": 24,
155
+ "num_hidden_layers": 48,
156
+ "pad_token_id": 0,
157
+ "pooler_dropout": 0,
158
+ "pooler_hidden_act": "gelu",
159
+ "pooler_hidden_size": 1536,
160
+ "pos_att_type": [
161
+ "p2c",
162
+ "c2p"
163
+ ],
164
+ "position_biased_input": false,
165
+ "position_buckets": 256,
166
+ "relative_attention": true,
167
+ "share_att_key": true,
168
+ "torch_dtype": "bfloat16",
169
+ "transformers_version": "4.53.1",
170
+ "type_vocab_size": 0,
171
+ "vocab_size": 128100
172
+ }
173
+
174
+ [2025-07-11 01:39:10,302][transformers.modeling_utils][INFO] - loading weights file model.safetensors from cache at /tmp/models--kamel-usp--jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only/snapshots/932551e130d45fd1107b0d1d9a6a645bfa6fb16a/model.safetensors
175
+ [2025-07-11 01:39:10,305][transformers.modeling_utils][INFO] - Will use torch_dtype=torch.bfloat16 as defined in model's config object
176
+ [2025-07-11 01:39:10,305][transformers.modeling_utils][INFO] - Instantiating DebertaV2ForSequenceClassification model under default dtype torch.bfloat16.
177
+ [2025-07-11 01:39:11,869][transformers.modeling_utils][INFO] - All model checkpoint weights were used when initializing DebertaV2ForSequenceClassification.
178
+
179
+ [2025-07-11 01:39:11,869][transformers.modeling_utils][INFO] - All the weights of DebertaV2ForSequenceClassification were initialized from the model checkpoint at kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only.
180
+ If your task is similar to the task the model of the checkpoint was trained on, you can already use DebertaV2ForSequenceClassification for predictions without further training.
181
+ [2025-07-11 01:39:11,889][transformers.training_args][INFO] - PyTorch: setting up devices
182
+ [2025-07-11 01:39:11,916][transformers.training_args][INFO] - The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).
183
+ [2025-07-11 01:39:11,924][transformers.trainer][INFO] - You have loaded a model on multiple GPUs. `is_model_parallel` attribute will be force-set to `True` to avoid any unexpected behavior such as device placement mismatching.
184
+ [2025-07-11 01:39:11,949][transformers.trainer][INFO] - Using auto half precision backend
185
+ [2025-07-11 01:39:15,275][__main__][INFO] - Running inference on test dataset
186
+ [2025-07-11 01:39:15,276][transformers.trainer][INFO] - The following columns in the test set don't have a corresponding argument in `DebertaV2ForSequenceClassification.forward` and have been ignored: id, essay_year, essay_text, id_prompt, grades, prompt, reference, supporting_text. If id, essay_year, essay_text, id_prompt, grades, prompt, reference, supporting_text are not expected by `DebertaV2ForSequenceClassification.forward`, you can safely ignore this message.
187
+ [2025-07-11 01:39:15,289][transformers.trainer][INFO] -
188
+ ***** Running Prediction *****
189
+ [2025-07-11 01:39:15,289][transformers.trainer][INFO] - Num examples = 138
190
+ [2025-07-11 01:39:15,289][transformers.trainer][INFO] - Batch size = 4
191
+ [2025-07-11 01:39:24,057][__main__][INFO] - Inference results saved to jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only_inference_results.jsonl
192
+ [2025-07-11 01:39:24,058][__main__][INFO] - Computing bootstrap confidence intervals for metrics: ['QWK', 'Macro_F1', 'Weighted_F1']
193
+ [2025-07-11 01:41:31,078][__main__][INFO] - Bootstrap CI results saved to bootstrap_confidence_intervals.csv
194
+ [2025-07-11 01:41:31,079][__main__][INFO] - Bootstrap Confidence Intervals (95%):
195
+ [2025-07-11 01:41:31,079][__main__][INFO] - QWK: 0.2518 [0.0932, 0.4047]
196
+ [2025-07-11 01:41:31,079][__main__][INFO] - Macro_F1: 0.2286 [0.1640, 0.3058]
197
+ [2025-07-11 01:41:31,079][__main__][INFO] - Weighted_F1: 0.3078 [0.2302, 0.3852]
198
+ [2025-07-11 01:41:31,079][__main__][INFO] - Inference results: {'accuracy': 0.3115942028985507, 'RMSE': 59.27093282765904, 'QWK': 0.25441318069968977, 'HDIV': 0.09420289855072461, 'Macro_F1': 0.2158405391141964, 'Micro_F1': 0.3115942028985507, 'Weighted_F1': 0.30799298925619395, 'TP_0': np.int64(0), 'TN_0': np.int64(137), 'FP_0': np.int64(0), 'FN_0': np.int64(1), 'TP_1': np.int64(9), 'TN_1': np.int64(78), 'FP_1': np.int64(31), 'FN_1': np.int64(20), 'TP_2': np.int64(7), 'TN_2': np.int64(103), 'FP_2': np.int64(17), 'FN_2': np.int64(11), 'TP_3': np.int64(14), 'TN_3': np.int64(67), 'FP_3': np.int64(26), 'FN_3': np.int64(31), 'TP_4': np.int64(13), 'TN_4': np.int64(81), 'FP_4': np.int64(19), 'FN_4': np.int64(25), 'TP_5': np.int64(0), 'TN_5': np.int64(129), 'FP_5': np.int64(2), 'FN_5': np.int64(7)}
199
+ [2025-07-11 01:41:31,079][__main__][INFO] - Inference experiment completed
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only/.hydra/config.yaml ADDED
@@ -0,0 +1,41 @@
1
+ cache_dir: /tmp/
2
+ dataset:
3
+ name: kamel-usp/aes_enem_dataset
4
+ split: JBCS2025
5
+ training_params:
6
+ seed: 42
7
+ num_train_epochs: 20
8
+ logging_steps: 100
9
+ metric_for_best_model: QWK
10
+ bf16: true
11
+ bootstrap:
12
+ enabled: true
13
+ n_bootstrap: 10000
14
+ bootstrap_seed: 42
15
+ metrics:
16
+ - QWK
17
+ - Macro_F1
18
+ - Weighted_F1
19
+ post_training_results:
20
+ model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
21
+ experiments:
22
+ model:
23
+ name: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only
24
+ type: encoder_classification
25
+ num_labels: 6
26
+ output_dir: ./results/
27
+ logging_dir: ./logs/
28
+ best_model_dir: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only
29
+ tokenizer:
30
+ name: PORTULAN/albertina-1b5-portuguese-ptbr-encoder
31
+ dataset:
32
+ grade_index: 3
33
+ use_full_context: false
34
+ training_params:
35
+ weight_decay: 0.01
36
+ warmup_ratio: 0.1
37
+ learning_rate: 5.0e-05
38
+ train_batch_size: 4
39
+ eval_batch_size: 4
40
+ gradient_accumulation_steps: 4
41
+ gradient_checkpointing: false
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only/.hydra/hydra.yaml ADDED
@@ -0,0 +1,157 @@
1
+ hydra:
2
+ run:
3
+ dir: inference_output/2025-07-11/01-41-35
4
+ sweep:
5
+ dir: multirun/${now:%Y-%m-%d}/${now:%H-%M-%S}
6
+ subdir: ${hydra.job.num}
7
+ launcher:
8
+ _target_: hydra._internal.core_plugins.basic_launcher.BasicLauncher
9
+ sweeper:
10
+ _target_: hydra._internal.core_plugins.basic_sweeper.BasicSweeper
11
+ max_batch_size: null
12
+ params: null
13
+ help:
14
+ app_name: ${hydra.job.name}
15
+ header: '${hydra.help.app_name} is powered by Hydra.
16
+
17
+ '
18
+ footer: 'Powered by Hydra (https://hydra.cc)
19
+
20
+ Use --hydra-help to view Hydra specific help
21
+
22
+ '
23
+ template: '${hydra.help.header}
24
+
25
+ == Configuration groups ==
26
+
27
+ Compose your configuration from those groups (group=option)
28
+
29
+
30
+ $APP_CONFIG_GROUPS
31
+
32
+
33
+ == Config ==
34
+
35
+ Override anything in the config (foo.bar=value)
36
+
37
+
38
+ $CONFIG
39
+
40
+
41
+ ${hydra.help.footer}
42
+
43
+ '
44
+ hydra_help:
45
+ template: 'Hydra (${hydra.runtime.version})
46
+
47
+ See https://hydra.cc for more info.
48
+
49
+
50
+ == Flags ==
51
+
52
+ $FLAGS_HELP
53
+
54
+
55
+ == Configuration groups ==
56
+
57
+ Compose your configuration from those groups (For example, append hydra/job_logging=disabled
58
+ to command line)
59
+
60
+
61
+ $HYDRA_CONFIG_GROUPS
62
+
63
+
64
+ Use ''--cfg hydra'' to Show the Hydra config.
65
+
66
+ '
67
+ hydra_help: ???
68
+ hydra_logging:
69
+ version: 1
70
+ formatters:
71
+ simple:
72
+ format: '[%(asctime)s][HYDRA] %(message)s'
73
+ handlers:
74
+ console:
75
+ class: logging.StreamHandler
76
+ formatter: simple
77
+ stream: ext://sys.stdout
78
+ root:
79
+ level: INFO
80
+ handlers:
81
+ - console
82
+ loggers:
83
+ logging_example:
84
+ level: DEBUG
85
+ disable_existing_loggers: false
86
+ job_logging:
87
+ version: 1
88
+ formatters:
89
+ simple:
90
+ format: '[%(asctime)s][%(name)s][%(levelname)s] - %(message)s'
91
+ handlers:
92
+ console:
93
+ class: logging.StreamHandler
94
+ formatter: simple
95
+ stream: ext://sys.stdout
96
+ file:
97
+ class: logging.FileHandler
98
+ formatter: simple
99
+ filename: ${hydra.runtime.output_dir}/${hydra.job.name}.log
100
+ root:
101
+ level: INFO
102
+ handlers:
103
+ - console
104
+ - file
105
+ disable_existing_loggers: false
106
+ env: {}
107
+ mode: RUN
108
+ searchpath: []
109
+ callbacks: {}
110
+ output_subdir: .hydra
111
+ overrides:
112
+ hydra:
113
+ - hydra.run.dir=inference_output/2025-07-11/01-41-35
114
+ - hydra.mode=RUN
115
+ task:
116
+ - experiments=temp_inference/kamel-usp_jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only
117
+ job:
118
+ name: run_inference_experiment
119
+ chdir: null
120
+ override_dirname: experiments=temp_inference/kamel-usp_jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only
121
+ id: ???
122
+ num: ???
123
+ config_name: config
124
+ env_set: {}
125
+ env_copy: []
126
+ config:
127
+ override_dirname:
128
+ kv_sep: '='
129
+ item_sep: ','
130
+ exclude_keys: []
131
+ runtime:
132
+ version: 1.3.2
133
+ version_base: '1.1'
134
+ cwd: /workspace/jbcs2025
135
+ config_sources:
136
+ - path: hydra.conf
137
+ schema: pkg
138
+ provider: hydra
139
+ - path: /workspace/jbcs2025/configs
140
+ schema: file
141
+ provider: main
142
+ - path: ''
143
+ schema: structured
144
+ provider: schema
145
+ output_dir: /workspace/jbcs2025/inference_output/2025-07-11/01-41-35
146
+ choices:
147
+ experiments: temp_inference/kamel-usp_jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only
148
+ hydra/env: default
149
+ hydra/callbacks: null
150
+ hydra/job_logging: default
151
+ hydra/hydra_logging: default
152
+ hydra/hydra_help: default
153
+ hydra/help: default
154
+ hydra/sweeper: basic
155
+ hydra/launcher: basic
156
+ hydra/output: default
157
+ verbose: false
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only/.hydra/overrides.yaml ADDED
@@ -0,0 +1 @@
+ - experiments=temp_inference/kamel-usp_jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only/bootstrap_confidence_intervals.csv ADDED
@@ -0,0 +1,2 @@
+ experiment_id,timestamp,QWK_mean,QWK_lower_95ci,QWK_upper_95ci,QWK_ci_width,Macro_F1_mean,Macro_F1_lower_95ci,Macro_F1_upper_95ci,Macro_F1_ci_width,Weighted_F1_mean,Weighted_F1_lower_95ci,Weighted_F1_upper_95ci,Weighted_F1_ci_width
+ jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only,2025-07-11 01:41:41,0.5396531933398239,0.4204347375953364,0.6511327204944922,0.23069798289915583,0.34029020389270115,0.2300104953539315,0.4995002696501232,0.2694897742961917,0.5926251278756054,0.5065006889621307,0.6785371638323412,0.17203647487021045
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only/evaluation_results.csv ADDED
@@ -0,0 +1,2 @@
+ accuracy,RMSE,QWK,HDIV,Macro_F1,Micro_F1,Weighted_F1,TP_0,TN_0,FP_0,FN_0,TP_1,TN_1,FP_1,FN_1,TP_2,TN_2,FP_2,FN_2,TP_3,TN_3,FP_3,FN_3,TP_4,TN_4,FP_4,FN_4,TP_5,TN_5,FP_5,FN_5,timestamp,id
+ 0.6014492753623188,27.662562992324986,0.543046357615894,0.007246376811594235,0.2974558904383466,0.6014492753623188,0.5920471313301213,0,137,0,1,0,137,0,1,3,122,7,6,60,36,26,16,18,78,14,28,2,125,8,3,2025-07-11 01:41:41,jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only_inference_results.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only/run_inference_experiment.log ADDED
@@ -0,0 +1,199 @@
1
+ [2025-07-11 01:41:41,920][__main__][INFO] - Starting inference experiment
2
+ [2025-07-11 01:41:41,921][__main__][INFO] - cache_dir: /tmp/
3
+ dataset:
4
+ name: kamel-usp/aes_enem_dataset
5
+ split: JBCS2025
6
+ training_params:
7
+ seed: 42
8
+ num_train_epochs: 20
9
+ logging_steps: 100
10
+ metric_for_best_model: QWK
11
+ bf16: true
12
+ bootstrap:
13
+ enabled: true
14
+ n_bootstrap: 10000
15
+ bootstrap_seed: 42
16
+ metrics:
17
+ - QWK
18
+ - Macro_F1
19
+ - Weighted_F1
20
+ post_training_results:
21
+ model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
22
+ experiments:
23
+ model:
24
+ name: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only
25
+ type: encoder_classification
26
+ num_labels: 6
27
+ output_dir: ./results/
28
+ logging_dir: ./logs/
29
+ best_model_dir: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only
30
+ tokenizer:
31
+ name: PORTULAN/albertina-1b5-portuguese-ptbr-encoder
32
+ dataset:
33
+ grade_index: 3
34
+ use_full_context: false
35
+ training_params:
36
+ weight_decay: 0.01
37
+ warmup_ratio: 0.1
38
+ learning_rate: 5.0e-05
39
+ train_batch_size: 4
40
+ eval_batch_size: 4
41
+ gradient_accumulation_steps: 4
42
+ gradient_checkpointing: false
43
+
44
+ [2025-07-11 01:41:41,924][__main__][INFO] - Running inference with fine-tuned HF model
45
+ [2025-07-11 01:41:46,936][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/config.json
46
+ [2025-07-11 01:41:46,940][transformers.configuration_utils][INFO] - Model config DebertaV2Config {
47
+ "architectures": [
48
+ "DebertaV2ForMaskedLM"
49
+ ],
50
+ "attention_head_size": 64,
51
+ "attention_probs_dropout_prob": 0.1,
52
+ "conv_act": "gelu",
53
+ "conv_kernel_size": 3,
54
+ "hidden_act": "gelu",
55
+ "hidden_dropout_prob": 0.1,
56
+ "hidden_size": 1536,
57
+ "initializer_range": 0.02,
58
+ "intermediate_size": 6144,
59
+ "layer_norm_eps": 1e-07,
60
+ "legacy": true,
61
+ "max_position_embeddings": 512,
62
+ "max_relative_positions": -1,
63
+ "model_type": "deberta-v2",
64
+ "norm_rel_ebd": "layer_norm",
65
+ "num_attention_heads": 24,
66
+ "num_hidden_layers": 48,
67
+ "pad_token_id": 0,
68
+ "pooler_dropout": 0,
69
+ "pooler_hidden_act": "gelu",
70
+ "pooler_hidden_size": 1536,
71
+ "pos_att_type": [
72
+ "p2c",
73
+ "c2p"
74
+ ],
75
+ "position_biased_input": false,
76
+ "position_buckets": 256,
77
+ "relative_attention": true,
78
+ "share_att_key": true,
79
+ "torch_dtype": "bfloat16",
80
+ "transformers_version": "4.53.1",
81
+ "type_vocab_size": 0,
82
+ "vocab_size": 128100
83
+ }
84
+
85
+ [2025-07-11 01:41:47,798][transformers.tokenization_utils_base][INFO] - loading file spm.model from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/spm.model
86
+ [2025-07-11 01:41:47,798][transformers.tokenization_utils_base][INFO] - loading file tokenizer.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/tokenizer.json
87
+ [2025-07-11 01:41:47,798][transformers.tokenization_utils_base][INFO] - loading file added_tokens.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/added_tokens.json
88
+ [2025-07-11 01:41:47,798][transformers.tokenization_utils_base][INFO] - loading file special_tokens_map.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/special_tokens_map.json
89
+ [2025-07-11 01:41:47,798][transformers.tokenization_utils_base][INFO] - loading file tokenizer_config.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/tokenizer_config.json
90
+ [2025-07-11 01:41:47,798][transformers.tokenization_utils_base][INFO] - loading file chat_template.jinja from cache at None
91
+ [2025-07-11 01:41:48,068][__main__][INFO] - Tokenizer function parameters- Padding:longest; Truncation: True; Use Full Context: False
92
+ [2025-07-11 01:41:48,553][__main__][INFO] -
93
+ Token statistics for 'train' split:
94
+ [2025-07-11 01:41:48,554][__main__][INFO] - Total examples: 500
95
+ [2025-07-11 01:41:48,554][__main__][INFO] - Min tokens: 512
96
+ [2025-07-11 01:41:48,554][__main__][INFO] - Max tokens: 512
97
+ [2025-07-11 01:41:48,554][__main__][INFO] - Avg tokens: 512.00
98
+ [2025-07-11 01:41:48,554][__main__][INFO] - Std tokens: 0.00
99
+ [2025-07-11 01:41:48,646][__main__][INFO] -
100
+ Token statistics for 'validation' split:
101
+ [2025-07-11 01:41:48,646][__main__][INFO] - Total examples: 132
102
+ [2025-07-11 01:41:48,646][__main__][INFO] - Min tokens: 512
103
+ [2025-07-11 01:41:48,646][__main__][INFO] - Max tokens: 512
104
+ [2025-07-11 01:41:48,647][__main__][INFO] - Avg tokens: 512.00
105
+ [2025-07-11 01:41:48,647][__main__][INFO] - Std tokens: 0.00
106
+ [2025-07-11 01:41:48,746][__main__][INFO] -
107
+ Token statistics for 'test' split:
108
+ [2025-07-11 01:41:48,747][__main__][INFO] - Total examples: 138
109
+ [2025-07-11 01:41:48,747][__main__][INFO] - Min tokens: 512
110
+ [2025-07-11 01:41:48,747][__main__][INFO] - Max tokens: 512
111
+ [2025-07-11 01:41:48,747][__main__][INFO] - Avg tokens: 512.00
112
+ [2025-07-11 01:41:48,747][__main__][INFO] - Std tokens: 0.00
113
+ [2025-07-11 01:41:48,747][__main__][INFO] - If token statistics are the same (max, avg, min) keep in mind that this is due to batched tokenization and padding.
114
+ [2025-07-11 01:41:48,747][__main__][INFO] - Model max length: 512. If it is the same as stats, then there is a high chance that sequences are being truncated.
115
+ [2025-07-11 01:41:48,747][__main__][INFO] - Loading model from: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only
116
+ [2025-07-11 01:41:48,747][__main__][INFO] - Loading model from: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only
117
+ [2025-07-11 01:41:49,700][__main__][INFO] - Model need ≈ 9.51 GiB to run inference and 27.02 for training
118
+ [2025-07-11 01:41:50,879][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--kamel-usp--jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only/snapshots/6873e152b4a5203d81466439ffd264a562da1f0b/config.json
119
+ [2025-07-11 01:41:50,880][transformers.configuration_utils][INFO] - Model config DebertaV2Config {
120
+ "architectures": [
121
+ "DebertaV2ForSequenceClassification"
122
+ ],
123
+ "attention_head_size": 64,
124
+ "attention_probs_dropout_prob": 0.1,
125
+ "conv_act": "gelu",
126
+ "conv_kernel_size": 3,
127
+ "hidden_act": "gelu",
128
+ "hidden_dropout_prob": 0.1,
129
+ "hidden_size": 1536,
130
+ "id2label": {
131
+ "0": 0,
132
+ "1": 40,
133
+ "2": 80,
134
+ "3": 120,
135
+ "4": 160,
136
+ "5": 200
137
+ },
138
+ "initializer_range": 0.02,
139
+ "intermediate_size": 6144,
140
+ "label2id": {
141
+ "0": 0,
142
+ "40": 1,
143
+ "80": 2,
144
+ "120": 3,
145
+ "160": 4,
146
+ "200": 5
147
+ },
148
+ "layer_norm_eps": 1e-07,
149
+ "legacy": true,
150
+ "max_position_embeddings": 512,
151
+ "max_relative_positions": -1,
152
+ "model_type": "deberta-v2",
153
+ "norm_rel_ebd": "layer_norm",
154
+ "num_attention_heads": 24,
155
+ "num_hidden_layers": 48,
156
+ "pad_token_id": 0,
157
+ "pooler_dropout": 0,
158
+ "pooler_hidden_act": "gelu",
159
+ "pooler_hidden_size": 1536,
160
+ "pos_att_type": [
161
+ "p2c",
162
+ "c2p"
163
+ ],
164
+ "position_biased_input": false,
165
+ "position_buckets": 256,
166
+ "relative_attention": true,
167
+ "share_att_key": true,
168
+ "torch_dtype": "bfloat16",
169
+ "transformers_version": "4.53.1",
170
+ "type_vocab_size": 0,
171
+ "vocab_size": 128100
172
+ }
173
+
174
+ [2025-07-11 01:42:46,329][transformers.modeling_utils][INFO] - loading weights file model.safetensors from cache at /tmp/models--kamel-usp--jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only/snapshots/6873e152b4a5203d81466439ffd264a562da1f0b/model.safetensors
175
+ [2025-07-11 01:42:46,334][transformers.modeling_utils][INFO] - Will use torch_dtype=torch.bfloat16 as defined in model's config object
176
+ [2025-07-11 01:42:46,334][transformers.modeling_utils][INFO] - Instantiating DebertaV2ForSequenceClassification model under default dtype torch.bfloat16.
177
+ [2025-07-11 01:42:47,880][transformers.modeling_utils][INFO] - All model checkpoint weights were used when initializing DebertaV2ForSequenceClassification.
178
+
179
+ [2025-07-11 01:42:47,881][transformers.modeling_utils][INFO] - All the weights of DebertaV2ForSequenceClassification were initialized from the model checkpoint at kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only.
180
+ If your task is similar to the task the model of the checkpoint was trained on, you can already use DebertaV2ForSequenceClassification for predictions without further training.
181
+ [2025-07-11 01:42:47,901][transformers.training_args][INFO] - PyTorch: setting up devices
182
+ [2025-07-11 01:42:47,932][transformers.training_args][INFO] - The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).
183
+ [2025-07-11 01:42:47,939][transformers.trainer][INFO] - You have loaded a model on multiple GPUs. `is_model_parallel` attribute will be force-set to `True` to avoid any unexpected behavior such as device placement mismatching.
184
+ [2025-07-11 01:42:47,964][transformers.trainer][INFO] - Using auto half precision backend
185
+ [2025-07-11 01:42:51,283][__main__][INFO] - Running inference on test dataset
186
+ [2025-07-11 01:42:51,285][transformers.trainer][INFO] - The following columns in the test set don't have a corresponding argument in `DebertaV2ForSequenceClassification.forward` and have been ignored: grades, prompt, id, essay_text, id_prompt, supporting_text, essay_year, reference. If grades, prompt, id, essay_text, id_prompt, supporting_text, essay_year, reference are not expected by `DebertaV2ForSequenceClassification.forward`, you can safely ignore this message.
187
+ [2025-07-11 01:42:51,297][transformers.trainer][INFO] -
188
+ ***** Running Prediction *****
189
+ [2025-07-11 01:42:51,297][transformers.trainer][INFO] - Num examples = 138
190
+ [2025-07-11 01:42:51,297][transformers.trainer][INFO] - Batch size = 4
191
+ [2025-07-11 01:42:59,939][__main__][INFO] - Inference results saved to jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only_inference_results.jsonl
192
+ [2025-07-11 01:42:59,940][__main__][INFO] - Computing bootstrap confidence intervals for metrics: ['QWK', 'Macro_F1', 'Weighted_F1']
193
+ [2025-07-11 01:45:06,056][__main__][INFO] - Bootstrap CI results saved to bootstrap_confidence_intervals.csv
194
+ [2025-07-11 01:45:06,057][__main__][INFO] - Bootstrap Confidence Intervals (95%):
195
+ [2025-07-11 01:45:06,057][__main__][INFO] - QWK: 0.5397 [0.4204, 0.6511]
196
+ [2025-07-11 01:45:06,057][__main__][INFO] - Macro_F1: 0.3403 [0.2300, 0.4995]
197
+ [2025-07-11 01:45:06,057][__main__][INFO] - Weighted_F1: 0.5926 [0.5065, 0.6785]
198
+ [2025-07-11 01:45:06,057][__main__][INFO] - Inference results: {'accuracy': 0.6014492753623188, 'RMSE': 27.662562992324986, 'QWK': 0.543046357615894, 'HDIV': 0.007246376811594235, 'Macro_F1': 0.2974558904383466, 'Micro_F1': 0.6014492753623188, 'Weighted_F1': 0.5920471313301213, 'TP_0': np.int64(0), 'TN_0': np.int64(137), 'FP_0': np.int64(0), 'FN_0': np.int64(1), 'TP_1': np.int64(0), 'TN_1': np.int64(137), 'FP_1': np.int64(0), 'FN_1': np.int64(1), 'TP_2': np.int64(3), 'TN_2': np.int64(122), 'FP_2': np.int64(7), 'FN_2': np.int64(6), 'TP_3': np.int64(60), 'TN_3': np.int64(36), 'FP_3': np.int64(26), 'FN_3': np.int64(16), 'TP_4': np.int64(18), 'TN_4': np.int64(78), 'FP_4': np.int64(14), 'FN_4': np.int64(28), 'TP_5': np.int64(2), 'TN_5': np.int64(125), 'FP_5': np.int64(8), 'FN_5': np.int64(3)}
199
+ [2025-07-11 01:45:06,058][__main__][INFO] - Inference experiment completed
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only/.hydra/config.yaml ADDED
@@ -0,0 +1,41 @@
1
+ cache_dir: /tmp/
2
+ dataset:
3
+ name: kamel-usp/aes_enem_dataset
4
+ split: JBCS2025
5
+ training_params:
6
+ seed: 42
7
+ num_train_epochs: 20
8
+ logging_steps: 100
9
+ metric_for_best_model: QWK
10
+ bf16: true
11
+ bootstrap:
12
+ enabled: true
13
+ n_bootstrap: 10000
14
+ bootstrap_seed: 42
15
+ metrics:
16
+ - QWK
17
+ - Macro_F1
18
+ - Weighted_F1
19
+ post_training_results:
20
+ model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
21
+ experiments:
22
+ model:
23
+ name: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only
24
+ type: encoder_classification
25
+ num_labels: 6
26
+ output_dir: ./results/
27
+ logging_dir: ./logs/
28
+ best_model_dir: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only
29
+ tokenizer:
30
+ name: PORTULAN/albertina-1b5-portuguese-ptbr-encoder
31
+ dataset:
32
+ grade_index: 4
33
+ use_full_context: false
34
+ training_params:
35
+ weight_decay: 0.01
36
+ warmup_ratio: 0.1
37
+ learning_rate: 5.0e-05
38
+ train_batch_size: 4
39
+ eval_batch_size: 4
40
+ gradient_accumulation_steps: 4
41
+ gradient_checkpointing: false
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only/.hydra/hydra.yaml ADDED
@@ -0,0 +1,157 @@
1
+ hydra:
2
+ run:
3
+ dir: inference_output/2025-07-11/01-45-10
4
+ sweep:
5
+ dir: multirun/${now:%Y-%m-%d}/${now:%H-%M-%S}
6
+ subdir: ${hydra.job.num}
7
+ launcher:
8
+ _target_: hydra._internal.core_plugins.basic_launcher.BasicLauncher
9
+ sweeper:
10
+ _target_: hydra._internal.core_plugins.basic_sweeper.BasicSweeper
11
+ max_batch_size: null
12
+ params: null
13
+ help:
14
+ app_name: ${hydra.job.name}
15
+ header: '${hydra.help.app_name} is powered by Hydra.
16
+
17
+ '
18
+ footer: 'Powered by Hydra (https://hydra.cc)
19
+
20
+ Use --hydra-help to view Hydra specific help
21
+
22
+ '
23
+ template: '${hydra.help.header}
24
+
25
+ == Configuration groups ==
26
+
27
+ Compose your configuration from those groups (group=option)
28
+
29
+
30
+ $APP_CONFIG_GROUPS
31
+
32
+
33
+ == Config ==
34
+
35
+ Override anything in the config (foo.bar=value)
36
+
37
+
38
+ $CONFIG
39
+
40
+
41
+ ${hydra.help.footer}
42
+
43
+ '
44
+ hydra_help:
45
+ template: 'Hydra (${hydra.runtime.version})
46
+
47
+ See https://hydra.cc for more info.
48
+
49
+
50
+ == Flags ==
51
+
52
+ $FLAGS_HELP
53
+
54
+
55
+ == Configuration groups ==
56
+
57
+ Compose your configuration from those groups (For example, append hydra/job_logging=disabled
58
+ to command line)
59
+
60
+
61
+ $HYDRA_CONFIG_GROUPS
62
+
63
+
64
+ Use ''--cfg hydra'' to Show the Hydra config.
65
+
66
+ '
67
+ hydra_help: ???
68
+ hydra_logging:
69
+ version: 1
70
+ formatters:
71
+ simple:
72
+ format: '[%(asctime)s][HYDRA] %(message)s'
73
+ handlers:
74
+ console:
75
+ class: logging.StreamHandler
76
+ formatter: simple
77
+ stream: ext://sys.stdout
78
+ root:
79
+ level: INFO
80
+ handlers:
81
+ - console
82
+ loggers:
83
+ logging_example:
84
+ level: DEBUG
85
+ disable_existing_loggers: false
86
+ job_logging:
87
+ version: 1
88
+ formatters:
89
+ simple:
90
+ format: '[%(asctime)s][%(name)s][%(levelname)s] - %(message)s'
91
+ handlers:
92
+ console:
93
+ class: logging.StreamHandler
94
+ formatter: simple
95
+ stream: ext://sys.stdout
96
+ file:
97
+ class: logging.FileHandler
98
+ formatter: simple
99
+ filename: ${hydra.runtime.output_dir}/${hydra.job.name}.log
100
+ root:
101
+ level: INFO
102
+ handlers:
103
+ - console
104
+ - file
105
+ disable_existing_loggers: false
106
+ env: {}
107
+ mode: RUN
108
+ searchpath: []
109
+ callbacks: {}
110
+ output_subdir: .hydra
111
+ overrides:
112
+ hydra:
113
+ - hydra.run.dir=inference_output/2025-07-11/01-45-10
114
+ - hydra.mode=RUN
115
+ task:
116
+ - experiments=temp_inference/kamel-usp_jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only
117
+ job:
118
+ name: run_inference_experiment
119
+ chdir: null
120
+ override_dirname: experiments=temp_inference/kamel-usp_jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only
121
+ id: ???
122
+ num: ???
123
+ config_name: config
124
+ env_set: {}
125
+ env_copy: []
126
+ config:
127
+ override_dirname:
128
+ kv_sep: '='
129
+ item_sep: ','
130
+ exclude_keys: []
131
+ runtime:
132
+ version: 1.3.2
133
+ version_base: '1.1'
134
+ cwd: /workspace/jbcs2025
135
+ config_sources:
136
+ - path: hydra.conf
137
+ schema: pkg
138
+ provider: hydra
139
+ - path: /workspace/jbcs2025/configs
140
+ schema: file
141
+ provider: main
142
+ - path: ''
143
+ schema: structured
144
+ provider: schema
145
+ output_dir: /workspace/jbcs2025/inference_output/2025-07-11/01-45-10
146
+ choices:
147
+ experiments: temp_inference/kamel-usp_jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only
148
+ hydra/env: default
149
+ hydra/callbacks: null
150
+ hydra/job_logging: default
151
+ hydra/hydra_logging: default
152
+ hydra/hydra_help: default
153
+ hydra/help: default
154
+ hydra/sweeper: basic
155
+ hydra/launcher: basic
156
+ hydra/output: default
157
+ verbose: false
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only/.hydra/overrides.yaml ADDED
@@ -0,0 +1 @@
+ - experiments=temp_inference/kamel-usp_jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only/bootstrap_confidence_intervals.csv ADDED
@@ -0,0 +1,2 @@
+ experiment_id,timestamp,QWK_mean,QWK_lower_95ci,QWK_upper_95ci,QWK_ci_width,Macro_F1_mean,Macro_F1_lower_95ci,Macro_F1_upper_95ci,Macro_F1_ci_width,Weighted_F1_mean,Weighted_F1_lower_95ci,Weighted_F1_upper_95ci,Weighted_F1_ci_width
+ jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only,2025-07-11 01:45:16,0.030023967736090135,-0.1487321238529284,0.2089807036607759,0.35771282751370426,0.15106212205759126,0.09700495581807897,0.20820622511097128,0.11120126929289231,0.1716628043898459,0.10670643853108219,0.24081069381270429,0.1341042552816221
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only/evaluation_results.csv ADDED
@@ -0,0 +1,2 @@
+ accuracy,RMSE,QWK,HDIV,Macro_F1,Micro_F1,Weighted_F1,TP_0,TN_0,FP_0,FN_0,TP_1,TN_1,FP_1,FN_1,TP_2,TN_2,FP_2,FN_2,TP_3,TN_3,FP_3,FN_3,TP_4,TN_4,FP_4,FN_4,TP_5,TN_5,FP_5,FN_5,timestamp,id
+ 0.1956521739130435,76.7453912131782,0.028976674608011566,0.16666666666666663,0.1530071129978623,0.1956521739130435,0.1712828232958547,4,106,10,18,1,93,13,31,14,58,56,10,3,104,9,22,5,91,15,27,0,127,8,3,2025-07-11 01:45:16,jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only_inference_results.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
runs/large_models/albertina/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only/run_inference_experiment.log ADDED
@@ -0,0 +1,199 @@
1
+ [2025-07-11 01:45:16,521][__main__][INFO] - Starting inference experiment
2
+ [2025-07-11 01:45:16,523][__main__][INFO] - cache_dir: /tmp/
3
+ dataset:
4
+ name: kamel-usp/aes_enem_dataset
5
+ split: JBCS2025
6
+ training_params:
7
+ seed: 42
8
+ num_train_epochs: 20
9
+ logging_steps: 100
10
+ metric_for_best_model: QWK
11
+ bf16: true
12
+ bootstrap:
13
+ enabled: true
14
+ n_bootstrap: 10000
15
+ bootstrap_seed: 42
16
+ metrics:
17
+ - QWK
18
+ - Macro_F1
19
+ - Weighted_F1
20
+ post_training_results:
21
+ model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
22
+ experiments:
23
+ model:
24
+ name: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only
25
+ type: encoder_classification
26
+ num_labels: 6
27
+ output_dir: ./results/
28
+ logging_dir: ./logs/
29
+ best_model_dir: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only
30
+ tokenizer:
31
+ name: PORTULAN/albertina-1b5-portuguese-ptbr-encoder
32
+ dataset:
33
+ grade_index: 4
34
+ use_full_context: false
35
+ training_params:
36
+ weight_decay: 0.01
37
+ warmup_ratio: 0.1
38
+ learning_rate: 5.0e-05
39
+ train_batch_size: 4
40
+ eval_batch_size: 4
41
+ gradient_accumulation_steps: 4
42
+ gradient_checkpointing: false
43
+
44
+ [2025-07-11 01:45:16,525][__main__][INFO] - Running inference with fine-tuned HF model
45
+ [2025-07-11 01:45:21,093][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/config.json
46
+ [2025-07-11 01:45:21,096][transformers.configuration_utils][INFO] - Model config DebertaV2Config {
47
+ "architectures": [
48
+ "DebertaV2ForMaskedLM"
49
+ ],
50
+ "attention_head_size": 64,
51
+ "attention_probs_dropout_prob": 0.1,
52
+ "conv_act": "gelu",
53
+ "conv_kernel_size": 3,
54
+ "hidden_act": "gelu",
55
+ "hidden_dropout_prob": 0.1,
56
+ "hidden_size": 1536,
57
+ "initializer_range": 0.02,
58
+ "intermediate_size": 6144,
59
+ "layer_norm_eps": 1e-07,
60
+ "legacy": true,
61
+ "max_position_embeddings": 512,
62
+ "max_relative_positions": -1,
63
+ "model_type": "deberta-v2",
64
+ "norm_rel_ebd": "layer_norm",
65
+ "num_attention_heads": 24,
66
+ "num_hidden_layers": 48,
67
+ "pad_token_id": 0,
68
+ "pooler_dropout": 0,
69
+ "pooler_hidden_act": "gelu",
70
+ "pooler_hidden_size": 1536,
71
+ "pos_att_type": [
72
+ "p2c",
73
+ "c2p"
74
+ ],
75
+ "position_biased_input": false,
76
+ "position_buckets": 256,
77
+ "relative_attention": true,
78
+ "share_att_key": true,
79
+ "torch_dtype": "bfloat16",
80
+ "transformers_version": "4.53.1",
81
+ "type_vocab_size": 0,
82
+ "vocab_size": 128100
83
+ }
84
+
85
+ [2025-07-11 01:45:21,511][transformers.tokenization_utils_base][INFO] - loading file spm.model from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/spm.model
86
+ [2025-07-11 01:45:21,511][transformers.tokenization_utils_base][INFO] - loading file tokenizer.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/tokenizer.json
87
+ [2025-07-11 01:45:21,511][transformers.tokenization_utils_base][INFO] - loading file added_tokens.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/added_tokens.json
88
+ [2025-07-11 01:45:21,511][transformers.tokenization_utils_base][INFO] - loading file special_tokens_map.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/special_tokens_map.json
89
+ [2025-07-11 01:45:21,511][transformers.tokenization_utils_base][INFO] - loading file tokenizer_config.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/tokenizer_config.json
90
+ [2025-07-11 01:45:21,511][transformers.tokenization_utils_base][INFO] - loading file chat_template.jinja from cache at None
91
+ [2025-07-11 01:45:21,789][__main__][INFO] - Tokenizer function parameters- Padding:longest; Truncation: True; Use Full Context: False
92
+ [2025-07-11 01:45:22,298][__main__][INFO] -
93
+ Token statistics for 'train' split:
94
+ [2025-07-11 01:45:22,298][__main__][INFO] - Total examples: 500
95
+ [2025-07-11 01:45:22,298][__main__][INFO] - Min tokens: 512
96
+ [2025-07-11 01:45:22,298][__main__][INFO] - Max tokens: 512
97
+ [2025-07-11 01:45:22,298][__main__][INFO] - Avg tokens: 512.00
98
+ [2025-07-11 01:45:22,299][__main__][INFO] - Std tokens: 0.00
99
+ [2025-07-11 01:45:22,398][__main__][INFO] -
100
+ Token statistics for 'validation' split:
101
+ [2025-07-11 01:45:22,398][__main__][INFO] - Total examples: 132
102
+ [2025-07-11 01:45:22,398][__main__][INFO] - Min tokens: 512
103
+ [2025-07-11 01:45:22,398][__main__][INFO] - Max tokens: 512
104
+ [2025-07-11 01:45:22,398][__main__][INFO] - Avg tokens: 512.00
105
+ [2025-07-11 01:45:22,398][__main__][INFO] - Std tokens: 0.00
106
+ [2025-07-11 01:45:22,498][__main__][INFO] -
107
+ Token statistics for 'test' split:
108
+ [2025-07-11 01:45:22,498][__main__][INFO] - Total examples: 138
109
+ [2025-07-11 01:45:22,498][__main__][INFO] - Min tokens: 512
110
+ [2025-07-11 01:45:22,498][__main__][INFO] - Max tokens: 512
111
+ [2025-07-11 01:45:22,498][__main__][INFO] - Avg tokens: 512.00
112
+ [2025-07-11 01:45:22,498][__main__][INFO] - Std tokens: 0.00
113
+ [2025-07-11 01:45:22,498][__main__][INFO] - If token statistics are the same (max, avg, min) keep in mind that this is due to batched tokenization and padding.
114
+ [2025-07-11 01:45:22,498][__main__][INFO] - Model max length: 512. If it is the same as stats, then there is a high chance that sequences are being truncated.
115
+ [2025-07-11 01:45:22,499][__main__][INFO] - Loading model from: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only
116
+ [2025-07-11 01:45:22,499][__main__][INFO] - Loading model from: kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only
117
+ [2025-07-11 01:45:23,468][__main__][INFO] - Model need ≈ 9.51 GiB to run inference and 27.02 for training
118
+ [2025-07-11 01:45:24,361][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--kamel-usp--jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only/snapshots/46296a27ad8fd570a2177c4bf8427be01ad50052/config.json
119
+ [2025-07-11 01:45:24,362][transformers.configuration_utils][INFO] - Model config DebertaV2Config {
120
+ "architectures": [
121
+ "DebertaV2ForSequenceClassification"
122
+ ],
123
+ "attention_head_size": 64,
124
+ "attention_probs_dropout_prob": 0.1,
125
+ "conv_act": "gelu",
126
+ "conv_kernel_size": 3,
127
+ "hidden_act": "gelu",
128
+ "hidden_dropout_prob": 0.1,
129
+ "hidden_size": 1536,
130
+ "id2label": {
131
+ "0": 0,
132
+ "1": 40,
133
+ "2": 80,
134
+ "3": 120,
135
+ "4": 160,
136
+ "5": 200
137
+ },
138
+ "initializer_range": 0.02,
139
+ "intermediate_size": 6144,
140
+ "label2id": {
141
+ "0": 0,
142
+ "40": 1,
143
+ "80": 2,
144
+ "120": 3,
145
+ "160": 4,
146
+ "200": 5
147
+ },
148
+ "layer_norm_eps": 1e-07,
149
+ "legacy": true,
150
+ "max_position_embeddings": 512,
151
+ "max_relative_positions": -1,
152
+ "model_type": "deberta-v2",
153
+ "norm_rel_ebd": "layer_norm",
154
+ "num_attention_heads": 24,
155
+ "num_hidden_layers": 48,
156
+ "pad_token_id": 0,
157
+ "pooler_dropout": 0,
158
+ "pooler_hidden_act": "gelu",
159
+ "pooler_hidden_size": 1536,
160
+ "pos_att_type": [
161
+ "p2c",
162
+ "c2p"
163
+ ],
164
+ "position_biased_input": false,
165
+ "position_buckets": 256,
166
+ "relative_attention": true,
167
+ "share_att_key": true,
168
+ "torch_dtype": "bfloat16",
169
+ "transformers_version": "4.53.1",
170
+ "type_vocab_size": 0,
171
+ "vocab_size": 128100
172
+ }
173
+
174
+ [2025-07-11 01:46:19,417][transformers.modeling_utils][INFO] - loading weights file model.safetensors from cache at /tmp/models--kamel-usp--jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only/snapshots/46296a27ad8fd570a2177c4bf8427be01ad50052/model.safetensors
175
+ [2025-07-11 01:46:19,420][transformers.modeling_utils][INFO] - Will use torch_dtype=torch.bfloat16 as defined in model's config object
176
+ [2025-07-11 01:46:19,420][transformers.modeling_utils][INFO] - Instantiating DebertaV2ForSequenceClassification model under default dtype torch.bfloat16.
177
+ [2025-07-11 01:46:20,985][transformers.modeling_utils][INFO] - All model checkpoint weights were used when initializing DebertaV2ForSequenceClassification.
178
+
179
+ [2025-07-11 01:46:20,986][transformers.modeling_utils][INFO] - All the weights of DebertaV2ForSequenceClassification were initialized from the model checkpoint at kamel-usp/jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only.
180
+ If your task is similar to the task the model of the checkpoint was trained on, you can already use DebertaV2ForSequenceClassification for predictions without further training.
181
+ [2025-07-11 01:46:21,006][transformers.training_args][INFO] - PyTorch: setting up devices
182
+ [2025-07-11 01:46:21,030][transformers.training_args][INFO] - The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).
183
+ [2025-07-11 01:46:21,037][transformers.trainer][INFO] - You have loaded a model on multiple GPUs. `is_model_parallel` attribute will be force-set to `True` to avoid any unexpected behavior such as device placement mismatching.
184
+ [2025-07-11 01:46:21,062][transformers.trainer][INFO] - Using auto half precision backend
185
+ [2025-07-11 01:46:24,415][__main__][INFO] - Running inference on test dataset
186
+ [2025-07-11 01:46:24,417][transformers.trainer][INFO] - The following columns in the test set don't have a corresponding argument in `DebertaV2ForSequenceClassification.forward` and have been ignored: reference, prompt, essay_year, id_prompt, essay_text, supporting_text, grades, id. If reference, prompt, essay_year, id_prompt, essay_text, supporting_text, grades, id are not expected by `DebertaV2ForSequenceClassification.forward`, you can safely ignore this message.
187
+ [2025-07-11 01:46:24,430][transformers.trainer][INFO] -
188
+ ***** Running Prediction *****
189
+ [2025-07-11 01:46:24,430][transformers.trainer][INFO] - Num examples = 138
190
+ [2025-07-11 01:46:24,430][transformers.trainer][INFO] - Batch size = 4
191
+ [2025-07-11 01:46:33,202][__main__][INFO] - Inference results saved to jbcs2025_albertina-1b5-portuguese-ptbr-encoder-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only_inference_results.jsonl
192
+ [2025-07-11 01:46:33,203][__main__][INFO] - Computing bootstrap confidence intervals for metrics: ['QWK', 'Macro_F1', 'Weighted_F1']
193
+ [2025-07-11 01:48:41,802][__main__][INFO] - Bootstrap CI results saved to bootstrap_confidence_intervals.csv
194
+ [2025-07-11 01:48:41,802][__main__][INFO] - Bootstrap Confidence Intervals (95%):
195
+ [2025-07-11 01:48:41,802][__main__][INFO] - QWK: 0.0300 [-0.1487, 0.2090]
196
+ [2025-07-11 01:48:41,802][__main__][INFO] - Macro_F1: 0.1511 [0.0970, 0.2082]
197
+ [2025-07-11 01:48:41,802][__main__][INFO] - Weighted_F1: 0.1717 [0.1067, 0.2408]
198
+ [2025-07-11 01:48:41,802][__main__][INFO] - Inference results: {'accuracy': 0.1956521739130435, 'RMSE': 76.7453912131782, 'QWK': 0.028976674608011566, 'HDIV': 0.16666666666666663, 'Macro_F1': 0.1530071129978623, 'Micro_F1': 0.1956521739130435, 'Weighted_F1': 0.1712828232958547, 'TP_0': np.int64(4), 'TN_0': np.int64(106), 'FP_0': np.int64(10), 'FN_0': np.int64(18), 'TP_1': np.int64(1), 'TN_1': np.int64(93), 'FP_1': np.int64(13), 'FN_1': np.int64(31), 'TP_2': np.int64(14), 'TN_2': np.int64(58), 'FP_2': np.int64(56), 'FN_2': np.int64(10), 'TP_3': np.int64(3), 'TN_3': np.int64(104), 'FP_3': np.int64(9), 'FN_3': np.int64(22), 'TP_4': np.int64(5), 'TN_4': np.int64(91), 'FP_4': np.int64(15), 'FN_4': np.int64(27), 'TP_5': np.int64(0), 'TN_5': np.int64(127), 'FP_5': np.int64(8), 'FN_5': np.int64(3)}
199
+ [2025-07-11 01:48:41,803][__main__][INFO] - Inference experiment completed