07/16/2024 16:50:17 - INFO - llamafactory.hparams.parser - Process rank: 7, device: cuda:7, n_gpu: 1, distributed training: True, compute dtype: torch.bfloat16
[INFO|parser.py:325] 2024-07-16 16:50:17,965 >> Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, compute dtype: torch.bfloat16
07/16/2024 16:50:18 - INFO - llamafactory.hparams.parser - Process rank: 4, device: cuda:4, n_gpu: 1, distributed training: True, compute dtype: torch.bfloat16
07/16/2024 16:50:18 - INFO - llamafactory.hparams.parser - Process rank: 6, device: cuda:6, n_gpu: 1, distributed training: True, compute dtype: torch.bfloat16
07/16/2024 16:50:18 - INFO - llamafactory.data.template - Add pad token:
07/16/2024 16:50:18 - INFO - llamafactory.hparams.parser - Process rank: 5, device: cuda:5, n_gpu: 1, distributed training: True, compute dtype: torch.bfloat16
07/16/2024 16:50:18 - INFO - llamafactory.hparams.parser - Process rank: 1, device: cuda:1, n_gpu: 1, distributed training: True, compute dtype: torch.bfloat16
[INFO|tokenization_utils_base.py:2161] 2024-07-16 16:50:18,169 >> loading file tokenizer.model from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-2-7b-chat-hf/snapshots/f5db02db724555f92da89c216ac04704f23d4590/tokenizer.model
[INFO|tokenization_utils_base.py:2161] 2024-07-16 16:50:18,169 >> loading file tokenizer.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-2-7b-chat-hf/snapshots/f5db02db724555f92da89c216ac04704f23d4590/tokenizer.json
[INFO|tokenization_utils_base.py:2161] 2024-07-16 16:50:18,170 >> loading file added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:2161] 2024-07-16 16:50:18,170 >> loading file special_tokens_map.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-2-7b-chat-hf/snapshots/f5db02db724555f92da89c216ac04704f23d4590/special_tokens_map.json
[INFO|tokenization_utils_base.py:2161] 2024-07-16 16:50:18,170 >> loading file tokenizer_config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-2-7b-chat-hf/snapshots/f5db02db724555f92da89c216ac04704f23d4590/tokenizer_config.json
07/16/2024 16:50:18 - INFO - llamafactory.data.template - Add pad token:
[INFO|template.py:372] 2024-07-16 16:50:18,281 >> Add pad token:
[INFO|loader.py:50] 2024-07-16 16:50:18,282 >> Loading dataset 0716_truthfulqa_benchmark_train_2.json...
07/16/2024 16:50:18 - INFO - llamafactory.hparams.parser - Process rank: 2, device: cuda:2, n_gpu: 1, distributed training: True, compute dtype: torch.bfloat16
07/16/2024 16:50:18 - INFO - llamafactory.data.template - Add pad token:
07/16/2024 16:50:18 - INFO - llamafactory.data.template - Add pad token:
07/16/2024 16:50:18 - INFO - llamafactory.data.template - Add pad token:
07/16/2024 16:50:18 - INFO - llamafactory.data.template - Add pad token:
07/16/2024 16:50:18 - INFO - llamafactory.data.template - Add pad token:
07/16/2024 16:50:19 - INFO - llamafactory.data.loader - Loading dataset 0716_truthfulqa_benchmark_train_2.json...
07/16/2024 16:50:19 - INFO - llamafactory.data.loader - Loading dataset 0716_truthfulqa_benchmark_train_2.json...
07/16/2024 16:50:19 - INFO - llamafactory.data.loader - Loading dataset 0716_truthfulqa_benchmark_train_2.json...
07/16/2024 16:50:19 - INFO - llamafactory.data.loader - Loading dataset 0716_truthfulqa_benchmark_train_2.json...
07/16/2024 16:50:19 - INFO - llamafactory.data.loader - Loading dataset 0716_truthfulqa_benchmark_train_2.json...
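The records above show the Llama-2-7B-Chat tokenizer files being pulled from the Hugging Face cache on every rank, followed by "Add pad token:" (the token value itself is cut off in this log). A minimal sketch of the equivalent setup with plain transformers, assuming the EOS token is reused for padding since the actual pad token is not visible here:

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
    if tokenizer.pad_token is None:
        # Llama-2 ships without a pad token; one must be set before padded batching.
        # Reusing EOS is an assumption; the log does not show which token was added.
        tokenizer.pad_token = tokenizer.eos_token
    print(tokenizer.pad_token)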
07/16/2024 16:50:19 - INFO - llamafactory.data.loader - Loading dataset 0716_truthfulqa_benchmark_train_2.json...
07/16/2024 16:50:19 - INFO - llamafactory.data.loader - Loading dataset 0716_truthfulqa_benchmark_train_2.json...
[INFO|configuration_utils.py:733] 2024-07-16 16:50:20,327 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-2-7b-chat-hf/snapshots/f5db02db724555f92da89c216ac04704f23d4590/config.json
[INFO|configuration_utils.py:800] 2024-07-16 16:50:20,328 >> Model config LlamaConfig {
  "_name_or_path": "meta-llama/Llama-2-7b-chat-hf",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 11008,
  "max_position_embeddings": 4096,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 32,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.42.3",
  "use_cache": true,
  "vocab_size": 32000
}
[INFO|modeling_utils.py:3556] 2024-07-16 16:50:20,350 >> loading weights file model.safetensors from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-2-7b-chat-hf/snapshots/f5db02db724555f92da89c216ac04704f23d4590/model.safetensors.index.json
[INFO|modeling_utils.py:1531] 2024-07-16 16:50:20,351 >> Instantiating LlamaForCausalLM model under default dtype torch.bfloat16.
[INFO|configuration_utils.py:1000] 2024-07-16 16:50:20,352 >> Generate config GenerationConfig {
  "bos_token_id": 1,
  "eos_token_id": 2
}
[INFO|modeling_utils.py:4364] 2024-07-16 16:50:37,558 >> All model checkpoint weights were used when initializing LlamaForCausalLM.
[INFO|modeling_utils.py:4372] 2024-07-16 16:50:37,559 >> All the weights of LlamaForCausalLM were initialized from the model checkpoint at meta-llama/Llama-2-7b-chat-hf.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
[INFO|configuration_utils.py:955] 2024-07-16 16:50:37,738 >> loading configuration file generation_config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Llama-2-7b-chat-hf/snapshots/f5db02db724555f92da89c216ac04704f23d4590/generation_config.json
[INFO|configuration_utils.py:1000] 2024-07-16 16:50:37,738 >> Generate config GenerationConfig {
  "bos_token_id": 1,
  "do_sample": true,
  "eos_token_id": 2,
  "max_length": 4096,
  "pad_token_id": 0,
  "temperature": 0.6,
  "top_p": 0.9
}
[INFO|checkpointing.py:103] 2024-07-16 16:50:37,746 >> Gradient checkpointing enabled.
[INFO|attention.py:80] 2024-07-16 16:50:37,746 >> Using torch SDPA for faster training and inference.
[INFO|adapter.py:302] 2024-07-16 16:50:37,746 >> Upcasting trainable params to float32.
[INFO|adapter.py:48] 2024-07-16 16:50:37,746 >> Fine-tuning method: Full
[INFO|loader.py:196] 2024-07-16 16:50:37,798 >> trainable params: 6,738,415,616 || all params: 6,738,415,616 || trainable%: 100.0000
07/16/2024 16:50:37 - INFO - llamafactory.model.model_utils.checkpointing - Gradient checkpointing enabled.
07/16/2024 16:50:37 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/16/2024 16:50:37 - INFO - llamafactory.model.adapter - Upcasting trainable params to float32.
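Up to this point the log records the model-side setup: LlamaForCausalLM instantiated from the cached checkpoint under bfloat16, torch SDPA attention, gradient checkpointing, and full fine-tuning with all 6,738,415,616 parameters trainable. A rough transformers-only sketch of that configuration (an approximation of what the log reflects, not the exact LLaMA-Factory loader code):

    import torch
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-7b-chat-hf",
        torch_dtype=torch.bfloat16,   # "Instantiating ... under default dtype torch.bfloat16"
        attn_implementation="sdpa",   # "Using torch SDPA for faster training and inference"
    )
    model.gradient_checkpointing_enable()  # "Gradient checkpointing enabled."

    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"trainable params: {trainable:,}")  # 6,738,415,616 for Llama-2-7B, matching the log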
07/16/2024 16:50:37 - INFO - llamafactory.model.adapter - Fine-tuning method: Full
07/16/2024 16:50:37 - INFO - llamafactory.model.loader - trainable params: 6,738,415,616 || all params: 6,738,415,616 || trainable%: 100.0000
[INFO|trainer.py:642] 2024-07-16 16:50:37,804 >> Using auto half precision backend
07/16/2024 16:50:38 - INFO - llamafactory.model.model_utils.checkpointing - Gradient checkpointing enabled.
07/16/2024 16:50:38 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/16/2024 16:50:38 - INFO - llamafactory.model.adapter - Upcasting trainable params to float32.
07/16/2024 16:50:38 - INFO - llamafactory.model.adapter - Fine-tuning method: Full
07/16/2024 16:50:38 - INFO - llamafactory.model.model_utils.checkpointing - Gradient checkpointing enabled.
07/16/2024 16:50:38 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/16/2024 16:50:38 - INFO - llamafactory.model.adapter - Upcasting trainable params to float32.
07/16/2024 16:50:38 - INFO - llamafactory.model.adapter - Fine-tuning method: Full
07/16/2024 16:50:38 - INFO - llamafactory.model.model_utils.checkpointing - Gradient checkpointing enabled.
07/16/2024 16:50:38 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/16/2024 16:50:38 - INFO - llamafactory.model.adapter - Upcasting trainable params to float32.
07/16/2024 16:50:38 - INFO - llamafactory.model.adapter - Fine-tuning method: Full
07/16/2024 16:50:38 - INFO - llamafactory.model.loader - trainable params: 6,738,415,616 || all params: 6,738,415,616 || trainable%: 100.0000
07/16/2024 16:50:38 - INFO - llamafactory.model.loader - trainable params: 6,738,415,616 || all params: 6,738,415,616 || trainable%: 100.0000
07/16/2024 16:50:38 - INFO - llamafactory.model.loader - trainable params: 6,738,415,616 || all params: 6,738,415,616 || trainable%: 100.0000
07/16/2024 16:50:38 - INFO - llamafactory.model.model_utils.checkpointing - Gradient checkpointing enabled.
07/16/2024 16:50:38 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/16/2024 16:50:38 - INFO - llamafactory.model.adapter - Upcasting trainable params to float32.
07/16/2024 16:50:38 - INFO - llamafactory.model.adapter - Fine-tuning method: Full
07/16/2024 16:50:38 - INFO - llamafactory.model.loader - trainable params: 6,738,415,616 || all params: 6,738,415,616 || trainable%: 100.0000
07/16/2024 16:50:38 - INFO - llamafactory.model.model_utils.checkpointing - Gradient checkpointing enabled.
07/16/2024 16:50:38 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/16/2024 16:50:38 - INFO - llamafactory.model.adapter - Upcasting trainable params to float32.
07/16/2024 16:50:38 - INFO - llamafactory.model.adapter - Fine-tuning method: Full
07/16/2024 16:50:38 - INFO - llamafactory.model.loader - trainable params: 6,738,415,616 || all params: 6,738,415,616 || trainable%: 100.0000
07/16/2024 16:50:38 - INFO - llamafactory.model.model_utils.checkpointing - Gradient checkpointing enabled.
07/16/2024 16:50:38 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/16/2024 16:50:38 - INFO - llamafactory.model.adapter - Upcasting trainable params to float32.
07/16/2024 16:50:38 - INFO - llamafactory.model.adapter - Fine-tuning method: Full
07/16/2024 16:50:38 - INFO - llamafactory.model.loader - trainable params: 6,738,415,616 || all params: 6,738,415,616 || trainable%: 100.0000
[INFO|trainer.py:2128] 2024-07-16 16:50:57,380 >> ***** Running training *****
[INFO|trainer.py:2129] 2024-07-16 16:50:57,380 >> Num examples = 4,958
[INFO|trainer.py:2130] 2024-07-16 16:50:57,380 >> Num Epochs = 5
[INFO|trainer.py:2131] 2024-07-16 16:50:57,380 >> Instantaneous batch size per device = 2
[INFO|trainer.py:2134] 2024-07-16 16:50:57,380 >> Total train batch size (w. parallel, distributed & accumulation) = 128
[INFO|trainer.py:2135] 2024-07-16 16:50:57,380 >> Gradient Accumulation steps = 8
[INFO|trainer.py:2136] 2024-07-16 16:50:57,380 >> Total optimization steps = 190
[INFO|trainer.py:2137] 2024-07-16 16:50:57,381 >> Number of trainable parameters = 6,738,415,616
[INFO|callbacks.py:310] 2024-07-16 16:51:09,938 >> {'loss': 8.2514, 'learning_rate': 5.0000e-07, 'epoch': 0.03, 'throughput': 545.43}
[INFO|callbacks.py:310] 2024-07-16 16:51:20,951 >> {'loss': 8.2793, 'learning_rate': 1.0000e-06, 'epoch': 0.05, 'throughput': 584.51}
[INFO|callbacks.py:310] 2024-07-16 16:51:31,943 >> {'loss': 8.1700, 'learning_rate': 1.5000e-06, 'epoch': 0.08, 'throughput': 598.15}
[INFO|callbacks.py:310] 2024-07-16 16:51:42,923 >> {'loss': 7.6197, 'learning_rate': 2.0000e-06, 'epoch': 0.10, 'throughput': 609.21}
[INFO|callbacks.py:310] 2024-07-16 16:51:53,920 >> {'loss': 6.9491, 'learning_rate': 2.5000e-06, 'epoch': 0.13, 'throughput': 612.41}
[INFO|callbacks.py:310] 2024-07-16 16:52:04,919 >> {'loss': 5.2054, 'learning_rate': 3.0000e-06, 'epoch': 0.15, 'throughput': 613.36}
[INFO|callbacks.py:310] 2024-07-16 16:52:15,920 >> {'loss': 4.8642, 'learning_rate': 3.5000e-06, 'epoch': 0.18, 'throughput': 615.05}
[INFO|callbacks.py:310] 2024-07-16 16:52:26,924 >> {'loss': 3.2874, 'learning_rate': 4.0000e-06, 'epoch': 0.21, 'throughput': 615.94}
[INFO|callbacks.py:310] 2024-07-16 16:52:37,962 >> {'loss': 2.6310, 'learning_rate': 4.5000e-06, 'epoch': 0.23, 'throughput': 613.25}
[INFO|callbacks.py:310] 2024-07-16 16:52:48,988 >> {'loss': 0.6982, 'learning_rate': 5.0000e-06, 'epoch': 0.26, 'throughput': 613.59}
[INFO|callbacks.py:310] 2024-07-16 16:53:00,018 >> {'loss': 0.3276, 'learning_rate': 4.9996e-06, 'epoch': 0.28, 'throughput': 613.98}
[INFO|callbacks.py:310] 2024-07-16 16:53:11,029 >> {'loss': 0.2930, 'learning_rate': 4.9985e-06, 'epoch': 0.31, 'throughput': 615.72}
[INFO|callbacks.py:310] 2024-07-16 16:53:22,020 >> {'loss': 0.2129, 'learning_rate': 4.9966e-06, 'epoch': 0.34, 'throughput': 615.94}
[INFO|callbacks.py:310] 2024-07-16 16:53:33,021 >> {'loss': 0.4712, 'learning_rate': 4.9939e-06, 'epoch': 0.36, 'throughput': 616.20}
[INFO|callbacks.py:310] 2024-07-16 16:53:44,003 >> {'loss': 0.2350, 'learning_rate': 4.9905e-06, 'epoch': 0.39, 'throughput': 617.55}
[INFO|callbacks.py:310] 2024-07-16 16:53:55,026 >> {'loss': 0.2020, 'learning_rate': 4.9863e-06, 'epoch': 0.41, 'throughput': 618.05}
[INFO|callbacks.py:310] 2024-07-16 16:54:06,023 >> {'loss': 0.1981, 'learning_rate': 4.9814e-06, 'epoch': 0.44, 'throughput': 617.73}
[INFO|callbacks.py:310] 2024-07-16 16:54:17,028 >> {'loss': 0.1517, 'learning_rate': 4.9757e-06, 'epoch': 0.46, 'throughput': 617.10}
[INFO|callbacks.py:310] 2024-07-16 16:54:28,037 >> {'loss': 0.4335, 'learning_rate': 4.9692e-06, 'epoch': 0.49, 'throughput': 617.28}
[INFO|callbacks.py:310] 2024-07-16 16:54:39,050 >> {'loss': 0.3609, 'learning_rate': 4.9620e-06,
'epoch': 0.52, 'throughput': 617.29} [INFO|callbacks.py:310] 2024-07-16 16:54:50,034 >> {'loss': 0.1708, 'learning_rate': 4.9541e-06, 'epoch': 0.54, 'throughput': 618.54} [INFO|callbacks.py:310] 2024-07-16 16:55:01,020 >> {'loss': 0.2277, 'learning_rate': 4.9454e-06, 'epoch': 0.57, 'throughput': 617.70} [INFO|callbacks.py:310] 2024-07-16 16:55:12,039 >> {'loss': 0.3437, 'learning_rate': 4.9359e-06, 'epoch': 0.59, 'throughput': 617.93} [INFO|callbacks.py:310] 2024-07-16 16:55:23,067 >> {'loss': 0.2229, 'learning_rate': 4.9257e-06, 'epoch': 0.62, 'throughput': 619.02} [INFO|callbacks.py:310] 2024-07-16 16:55:34,096 >> {'loss': 0.1242, 'learning_rate': 4.9148e-06, 'epoch': 0.65, 'throughput': 617.82} [INFO|callbacks.py:310] 2024-07-16 16:55:45,128 >> {'loss': 0.2117, 'learning_rate': 4.9032e-06, 'epoch': 0.67, 'throughput': 617.94} [INFO|callbacks.py:310] 2024-07-16 16:55:56,152 >> {'loss': 0.2706, 'learning_rate': 4.8908e-06, 'epoch': 0.70, 'throughput': 618.70} [INFO|callbacks.py:310] 2024-07-16 16:56:07,175 >> {'loss': 0.2084, 'learning_rate': 4.8776e-06, 'epoch': 0.72, 'throughput': 618.27} [INFO|callbacks.py:310] 2024-07-16 16:56:18,165 >> {'loss': 0.0981, 'learning_rate': 4.8638e-06, 'epoch': 0.75, 'throughput': 618.39} [INFO|callbacks.py:310] 2024-07-16 16:56:29,154 >> {'loss': 0.1600, 'learning_rate': 4.8492e-06, 'epoch': 0.77, 'throughput': 618.50} [INFO|callbacks.py:310] 2024-07-16 16:56:40,149 >> {'loss': 0.1614, 'learning_rate': 4.8340e-06, 'epoch': 0.80, 'throughput': 617.80} [INFO|callbacks.py:310] 2024-07-16 16:56:51,163 >> {'loss': 0.1742, 'learning_rate': 4.8180e-06, 'epoch': 0.83, 'throughput': 617.47} [INFO|callbacks.py:310] 2024-07-16 16:57:02,179 >> {'loss': 0.1107, 'learning_rate': 4.8013e-06, 'epoch': 0.85, 'throughput': 617.90} [INFO|callbacks.py:310] 2024-07-16 16:57:13,192 >> {'loss': 0.0822, 'learning_rate': 4.7839e-06, 'epoch': 0.88, 'throughput': 617.42} [INFO|callbacks.py:310] 2024-07-16 16:57:24,203 >> {'loss': 0.1873, 'learning_rate': 4.7658e-06, 'epoch': 0.90, 'throughput': 617.01} [INFO|callbacks.py:310] 2024-07-16 16:57:35,243 >> {'loss': 0.2375, 'learning_rate': 4.7470e-06, 'epoch': 0.93, 'throughput': 616.94} [INFO|callbacks.py:310] 2024-07-16 16:57:46,259 >> {'loss': 0.2667, 'learning_rate': 4.7275e-06, 'epoch': 0.95, 'throughput': 617.73} [INFO|callbacks.py:310] 2024-07-16 16:57:57,247 >> {'loss': 0.1547, 'learning_rate': 4.7074e-06, 'epoch': 0.98, 'throughput': 618.14} [INFO|callbacks.py:310] 2024-07-16 16:58:08,231 >> {'loss': 0.1662, 'learning_rate': 4.6865e-06, 'epoch': 1.01, 'throughput': 618.69} [INFO|callbacks.py:310] 2024-07-16 16:58:19,239 >> {'loss': 0.0808, 'learning_rate': 4.6651e-06, 'epoch': 1.03, 'throughput': 618.41} [INFO|callbacks.py:310] 2024-07-16 16:58:30,245 >> {'loss': 0.0884, 'learning_rate': 4.6429e-06, 'epoch': 1.06, 'throughput': 618.04} [INFO|callbacks.py:310] 2024-07-16 16:58:41,281 >> {'loss': 0.0883, 'learning_rate': 4.6201e-06, 'epoch': 1.08, 'throughput': 618.55} [INFO|callbacks.py:310] 2024-07-16 16:58:52,323 >> {'loss': 0.0562, 'learning_rate': 4.5967e-06, 'epoch': 1.11, 'throughput': 618.59} [INFO|callbacks.py:310] 2024-07-16 16:59:03,347 >> {'loss': 0.0856, 'learning_rate': 4.5726e-06, 'epoch': 1.14, 'throughput': 618.38} [INFO|callbacks.py:310] 2024-07-16 16:59:14,361 >> {'loss': 0.0612, 'learning_rate': 4.5479e-06, 'epoch': 1.16, 'throughput': 618.26} [INFO|callbacks.py:310] 2024-07-16 16:59:25,365 >> {'loss': 0.0944, 'learning_rate': 4.5225e-06, 'epoch': 1.19, 'throughput': 618.35} [INFO|callbacks.py:310] 
2024-07-16 16:59:36,390 >> {'loss': 0.0624, 'learning_rate': 4.4966e-06, 'epoch': 1.21, 'throughput': 618.23} [INFO|callbacks.py:310] 2024-07-16 16:59:47,376 >> {'loss': 0.0363, 'learning_rate': 4.4700e-06, 'epoch': 1.24, 'throughput': 618.21} [INFO|callbacks.py:310] 2024-07-16 16:59:58,399 >> {'loss': 0.1039, 'learning_rate': 4.4429e-06, 'epoch': 1.26, 'throughput': 618.16} [INFO|callbacks.py:310] 2024-07-16 17:00:09,394 >> {'loss': 0.0488, 'learning_rate': 4.4151e-06, 'epoch': 1.29, 'throughput': 618.19} [INFO|callbacks.py:310] 2024-07-16 17:00:20,399 >> {'loss': 0.0613, 'learning_rate': 4.3868e-06, 'epoch': 1.32, 'throughput': 618.44} [INFO|callbacks.py:310] 2024-07-16 17:00:31,407 >> {'loss': 0.0700, 'learning_rate': 4.3579e-06, 'epoch': 1.34, 'throughput': 618.12} [INFO|callbacks.py:310] 2024-07-16 17:00:42,424 >> {'loss': 0.0463, 'learning_rate': 4.3284e-06, 'epoch': 1.37, 'throughput': 618.08} [INFO|callbacks.py:310] 2024-07-16 17:00:53,462 >> {'loss': 0.0671, 'learning_rate': 4.2983e-06, 'epoch': 1.39, 'throughput': 618.15} [INFO|callbacks.py:310] 2024-07-16 17:01:04,468 >> {'loss': 0.0428, 'learning_rate': 4.2678e-06, 'epoch': 1.42, 'throughput': 618.43} [INFO|callbacks.py:310] 2024-07-16 17:01:15,463 >> {'loss': 0.0678, 'learning_rate': 4.2366e-06, 'epoch': 1.45, 'throughput': 618.43} [INFO|callbacks.py:310] 2024-07-16 17:01:26,456 >> {'loss': 0.0476, 'learning_rate': 4.2050e-06, 'epoch': 1.47, 'throughput': 618.38} [INFO|callbacks.py:310] 2024-07-16 17:01:37,444 >> {'loss': 0.0442, 'learning_rate': 4.1728e-06, 'epoch': 1.50, 'throughput': 618.82} [INFO|callbacks.py:310] 2024-07-16 17:01:48,428 >> {'loss': 0.0336, 'learning_rate': 4.1401e-06, 'epoch': 1.52, 'throughput': 619.09} [INFO|callbacks.py:310] 2024-07-16 17:01:59,445 >> {'loss': 0.0460, 'learning_rate': 4.1070e-06, 'epoch': 1.55, 'throughput': 618.77} [INFO|callbacks.py:310] 2024-07-16 17:02:10,459 >> {'loss': 0.0416, 'learning_rate': 4.0733e-06, 'epoch': 1.57, 'throughput': 618.46} [INFO|callbacks.py:310] 2024-07-16 17:02:21,470 >> {'loss': 0.0649, 'learning_rate': 4.0392e-06, 'epoch': 1.60, 'throughput': 618.87} [INFO|callbacks.py:310] 2024-07-16 17:02:32,483 >> {'loss': 0.0591, 'learning_rate': 4.0045e-06, 'epoch': 1.63, 'throughput': 619.05} [INFO|callbacks.py:310] 2024-07-16 17:02:43,490 >> {'loss': 0.0318, 'learning_rate': 3.9695e-06, 'epoch': 1.65, 'throughput': 618.83} [INFO|callbacks.py:310] 2024-07-16 17:02:54,478 >> {'loss': 0.0462, 'learning_rate': 3.9339e-06, 'epoch': 1.68, 'throughput': 618.87} [INFO|callbacks.py:310] 2024-07-16 17:03:05,466 >> {'loss': 0.0465, 'learning_rate': 3.8980e-06, 'epoch': 1.70, 'throughput': 618.98} [INFO|callbacks.py:310] 2024-07-16 17:03:16,480 >> {'loss': 0.0316, 'learning_rate': 3.8616e-06, 'epoch': 1.73, 'throughput': 619.15} [INFO|callbacks.py:310] 2024-07-16 17:03:27,480 >> {'loss': 0.1000, 'learning_rate': 3.8248e-06, 'epoch': 1.75, 'throughput': 619.38} [INFO|callbacks.py:310] 2024-07-16 17:03:38,513 >> {'loss': 0.0711, 'learning_rate': 3.7876e-06, 'epoch': 1.78, 'throughput': 619.25} [INFO|callbacks.py:310] 2024-07-16 17:03:49,506 >> {'loss': 0.0494, 'learning_rate': 3.7500e-06, 'epoch': 1.81, 'throughput': 619.69} [INFO|callbacks.py:310] 2024-07-16 17:04:00,519 >> {'loss': 0.0618, 'learning_rate': 3.7120e-06, 'epoch': 1.83, 'throughput': 619.64} [INFO|callbacks.py:310] 2024-07-16 17:04:11,528 >> {'loss': 0.0511, 'learning_rate': 3.6737e-06, 'epoch': 1.86, 'throughput': 619.61} [INFO|callbacks.py:310] 2024-07-16 17:04:22,525 >> {'loss': 0.0464, 'learning_rate': 
3.6350e-06, 'epoch': 1.88, 'throughput': 619.64} [INFO|callbacks.py:310] 2024-07-16 17:04:33,509 >> {'loss': 0.0331, 'learning_rate': 3.5959e-06, 'epoch': 1.91, 'throughput': 620.02} [INFO|callbacks.py:310] 2024-07-16 17:04:44,504 >> {'loss': 0.0706, 'learning_rate': 3.5565e-06, 'epoch': 1.94, 'throughput': 620.04} [INFO|callbacks.py:310] 2024-07-16 17:04:55,490 >> {'loss': 0.0442, 'learning_rate': 3.5168e-06, 'epoch': 1.96, 'throughput': 620.14} [INFO|callbacks.py:310] 2024-07-16 17:05:06,484 >> {'loss': 0.0420, 'learning_rate': 3.4768e-06, 'epoch': 1.99, 'throughput': 619.89} [INFO|callbacks.py:310] 2024-07-16 17:05:17,496 >> {'loss': 0.0210, 'learning_rate': 3.4365e-06, 'epoch': 2.01, 'throughput': 619.84} [INFO|callbacks.py:310] 2024-07-16 17:05:28,503 >> {'loss': 0.0094, 'learning_rate': 3.3959e-06, 'epoch': 2.04, 'throughput': 619.91} [INFO|callbacks.py:310] 2024-07-16 17:05:39,520 >> {'loss': 0.0021, 'learning_rate': 3.3551e-06, 'epoch': 2.06, 'throughput': 620.26} [INFO|callbacks.py:310] 2024-07-16 17:05:50,557 >> {'loss': 0.0146, 'learning_rate': 3.3139e-06, 'epoch': 2.09, 'throughput': 620.13} [INFO|callbacks.py:310] 2024-07-16 17:06:01,557 >> {'loss': 0.0237, 'learning_rate': 3.2725e-06, 'epoch': 2.12, 'throughput': 620.48} [INFO|callbacks.py:310] 2024-07-16 17:06:12,563 >> {'loss': 0.0031, 'learning_rate': 3.2309e-06, 'epoch': 2.14, 'throughput': 620.57} [INFO|callbacks.py:310] 2024-07-16 17:06:23,576 >> {'loss': 0.0034, 'learning_rate': 3.1891e-06, 'epoch': 2.17, 'throughput': 620.54} [INFO|callbacks.py:310] 2024-07-16 17:06:34,571 >> {'loss': 0.0045, 'learning_rate': 3.1470e-06, 'epoch': 2.19, 'throughput': 620.66} [INFO|callbacks.py:310] 2024-07-16 17:06:45,568 >> {'loss': 0.0031, 'learning_rate': 3.1048e-06, 'epoch': 2.22, 'throughput': 620.62} [INFO|callbacks.py:310] 2024-07-16 17:06:56,589 >> {'loss': 0.0341, 'learning_rate': 3.0624e-06, 'epoch': 2.25, 'throughput': 620.63} [INFO|callbacks.py:310] 2024-07-16 17:07:07,621 >> {'loss': 0.0095, 'learning_rate': 3.0198e-06, 'epoch': 2.27, 'throughput': 620.55} [INFO|callbacks.py:310] 2024-07-16 17:07:18,643 >> {'loss': 0.0459, 'learning_rate': 2.9770e-06, 'epoch': 2.30, 'throughput': 620.80} [INFO|callbacks.py:310] 2024-07-16 17:07:29,659 >> {'loss': 0.0104, 'learning_rate': 2.9341e-06, 'epoch': 2.32, 'throughput': 620.80} [INFO|callbacks.py:310] 2024-07-16 17:07:40,662 >> {'loss': 0.0201, 'learning_rate': 2.8911e-06, 'epoch': 2.35, 'throughput': 620.51} [INFO|callbacks.py:310] 2024-07-16 17:07:51,636 >> {'loss': 0.0021, 'learning_rate': 2.8479e-06, 'epoch': 2.37, 'throughput': 620.75} [INFO|callbacks.py:310] 2024-07-16 17:08:02,651 >> {'loss': 0.0430, 'learning_rate': 2.8047e-06, 'epoch': 2.40, 'throughput': 620.64} [INFO|callbacks.py:310] 2024-07-16 17:08:13,671 >> {'loss': 0.0207, 'learning_rate': 2.7613e-06, 'epoch': 2.43, 'throughput': 620.61} [INFO|callbacks.py:310] 2024-07-16 17:08:24,677 >> {'loss': 0.0148, 'learning_rate': 2.7179e-06, 'epoch': 2.45, 'throughput': 620.69} [INFO|callbacks.py:310] 2024-07-16 17:08:35,700 >> {'loss': 0.0040, 'learning_rate': 2.6744e-06, 'epoch': 2.48, 'throughput': 620.55} [INFO|callbacks.py:310] 2024-07-16 17:08:46,703 >> {'loss': 0.0131, 'learning_rate': 2.6308e-06, 'epoch': 2.50, 'throughput': 620.54} [INFO|callbacks.py:310] 2024-07-16 17:08:57,742 >> {'loss': 0.0455, 'learning_rate': 2.5872e-06, 'epoch': 2.53, 'throughput': 620.37} [INFO|callbacks.py:310] 2024-07-16 17:09:08,772 >> {'loss': 0.0031, 'learning_rate': 2.5436e-06, 'epoch': 2.55, 'throughput': 620.27} 
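At this point the learning rate has decayed to roughly half of its 5.0000e-06 peak, which is consistent with 10 linear warmup steps (the first ten logged rates) followed by cosine decay to zero over the remaining steps; the 190 total optimization steps likewise follow from the example count and batch settings logged at the start of training. A back-of-the-envelope check in Python, assuming the usual HF Trainer step accounting (the exact rounding behavior is not shown in the log):

    import math

    num_examples = 4958   # "Num examples = 4,958"
    world_size = 8        # ranks 0-7 appear in the setup logs
    per_device_bs = 2     # "Instantaneous batch size per device = 2"
    grad_accum = 8        # "Gradient Accumulation steps = 8"

    batches_per_rank = math.ceil(num_examples / world_size) // per_device_bs  # 310
    steps_per_epoch = batches_per_rank // grad_accum                          # 38
    total_steps = steps_per_epoch * 5                                         # 190, as logged

    peak_lr, warmup = 5.0e-06, 10
    def lr_at(step):
        # Linear warmup to the peak, then cosine decay to zero (assumed schedule).
        if step <= warmup:
            return peak_lr * step / warmup
        progress = (step - warmup) / (total_steps - warmup)
        return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

    print(lr_at(20))    # ~4.9620e-06, matching the record near epoch 0.52
    print(lr_at(100))   # 2.5000e-06, matching the record near epoch 2.58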
[INFO|callbacks.py:310] 2024-07-16 17:09:19,760 >> {'loss': 0.0099, 'learning_rate': 2.5000e-06, 'epoch': 2.58, 'throughput': 620.49} [INFO|callbacks.py:310] 2024-07-16 17:09:30,748 >> {'loss': 0.0797, 'learning_rate': 2.4564e-06, 'epoch': 2.61, 'throughput': 620.49} [INFO|callbacks.py:310] 2024-07-16 17:09:41,738 >> {'loss': 0.0059, 'learning_rate': 2.4128e-06, 'epoch': 2.63, 'throughput': 620.63} [INFO|callbacks.py:310] 2024-07-16 17:09:52,737 >> {'loss': 0.0438, 'learning_rate': 2.3692e-06, 'epoch': 2.66, 'throughput': 620.38} [INFO|callbacks.py:310] 2024-07-16 17:10:03,734 >> {'loss': 0.0149, 'learning_rate': 2.3256e-06, 'epoch': 2.68, 'throughput': 620.63} [INFO|callbacks.py:310] 2024-07-16 17:10:14,743 >> {'loss': 0.0126, 'learning_rate': 2.2821e-06, 'epoch': 2.71, 'throughput': 620.57} [INFO|callbacks.py:310] 2024-07-16 17:10:25,754 >> {'loss': 0.0255, 'learning_rate': 2.2387e-06, 'epoch': 2.74, 'throughput': 620.46} [INFO|callbacks.py:310] 2024-07-16 17:10:36,760 >> {'loss': 0.0048, 'learning_rate': 2.1953e-06, 'epoch': 2.76, 'throughput': 620.34} [INFO|callbacks.py:310] 2024-07-16 17:10:47,758 >> {'loss': 0.0142, 'learning_rate': 2.1521e-06, 'epoch': 2.79, 'throughput': 620.20} [INFO|callbacks.py:310] 2024-07-16 17:10:58,726 >> {'loss': 0.0193, 'learning_rate': 2.1089e-06, 'epoch': 2.81, 'throughput': 620.23} [INFO|callbacks.py:310] 2024-07-16 17:11:09,695 >> {'loss': 0.0055, 'learning_rate': 2.0659e-06, 'epoch': 2.84, 'throughput': 620.32} [INFO|callbacks.py:310] 2024-07-16 17:11:20,678 >> {'loss': 0.0144, 'learning_rate': 2.0230e-06, 'epoch': 2.86, 'throughput': 620.23} [INFO|callbacks.py:310] 2024-07-16 17:11:31,656 >> {'loss': 0.0272, 'learning_rate': 1.9802e-06, 'epoch': 2.89, 'throughput': 620.22} [INFO|callbacks.py:310] 2024-07-16 17:11:42,652 >> {'loss': 0.0101, 'learning_rate': 1.9376e-06, 'epoch': 2.92, 'throughput': 620.18} [INFO|callbacks.py:310] 2024-07-16 17:11:53,647 >> {'loss': 0.0109, 'learning_rate': 1.8952e-06, 'epoch': 2.94, 'throughput': 620.51} [INFO|callbacks.py:310] 2024-07-16 17:12:04,642 >> {'loss': 0.0180, 'learning_rate': 1.8530e-06, 'epoch': 2.97, 'throughput': 620.55} [INFO|callbacks.py:310] 2024-07-16 17:12:15,639 >> {'loss': 0.0141, 'learning_rate': 1.8109e-06, 'epoch': 2.99, 'throughput': 620.38} [INFO|callbacks.py:310] 2024-07-16 17:12:26,637 >> {'loss': 0.0057, 'learning_rate': 1.7691e-06, 'epoch': 3.02, 'throughput': 620.36} [INFO|callbacks.py:310] 2024-07-16 17:12:37,628 >> {'loss': 0.0063, 'learning_rate': 1.7275e-06, 'epoch': 3.05, 'throughput': 620.33} [INFO|callbacks.py:310] 2024-07-16 17:12:48,606 >> {'loss': 0.0138, 'learning_rate': 1.6861e-06, 'epoch': 3.07, 'throughput': 620.27} [INFO|callbacks.py:310] 2024-07-16 17:12:59,593 >> {'loss': 0.0011, 'learning_rate': 1.6449e-06, 'epoch': 3.10, 'throughput': 619.97} [INFO|callbacks.py:310] 2024-07-16 17:13:10,574 >> {'loss': 0.0006, 'learning_rate': 1.6041e-06, 'epoch': 3.12, 'throughput': 619.94} [INFO|callbacks.py:310] 2024-07-16 17:13:21,574 >> {'loss': 0.0055, 'learning_rate': 1.5635e-06, 'epoch': 3.15, 'throughput': 619.97} [INFO|callbacks.py:310] 2024-07-16 17:13:32,565 >> {'loss': 0.0011, 'learning_rate': 1.5232e-06, 'epoch': 3.17, 'throughput': 619.90} [INFO|callbacks.py:310] 2024-07-16 17:13:43,569 >> {'loss': 0.0173, 'learning_rate': 1.4832e-06, 'epoch': 3.20, 'throughput': 620.05} [INFO|callbacks.py:310] 2024-07-16 17:13:54,579 >> {'loss': 0.0027, 'learning_rate': 1.4435e-06, 'epoch': 3.23, 'throughput': 619.89} [INFO|callbacks.py:310] 2024-07-16 17:14:05,559 >> {'loss': 0.0029, 
'learning_rate': 1.4041e-06, 'epoch': 3.25, 'throughput': 619.81} [INFO|callbacks.py:310] 2024-07-16 17:14:16,541 >> {'loss': 0.0003, 'learning_rate': 1.3650e-06, 'epoch': 3.28, 'throughput': 619.97} [INFO|callbacks.py:310] 2024-07-16 17:14:27,516 >> {'loss': 0.0007, 'learning_rate': 1.3263e-06, 'epoch': 3.30, 'throughput': 619.98} [INFO|callbacks.py:310] 2024-07-16 17:14:38,496 >> {'loss': 0.0080, 'learning_rate': 1.2880e-06, 'epoch': 3.33, 'throughput': 620.05} [INFO|callbacks.py:310] 2024-07-16 17:14:49,489 >> {'loss': 0.0004, 'learning_rate': 1.2500e-06, 'epoch': 3.35, 'throughput': 620.16} [INFO|callbacks.py:310] 2024-07-16 17:15:00,489 >> {'loss': 0.0049, 'learning_rate': 1.2124e-06, 'epoch': 3.38, 'throughput': 620.39} [INFO|callbacks.py:310] 2024-07-16 17:15:11,487 >> {'loss': 0.0012, 'learning_rate': 1.1752e-06, 'epoch': 3.41, 'throughput': 620.30} [INFO|callbacks.py:310] 2024-07-16 17:15:22,486 >> {'loss': 0.0044, 'learning_rate': 1.1384e-06, 'epoch': 3.43, 'throughput': 620.50} [INFO|callbacks.py:310] 2024-07-16 17:15:33,486 >> {'loss': 0.0017, 'learning_rate': 1.1020e-06, 'epoch': 3.46, 'throughput': 620.57} [INFO|callbacks.py:310] 2024-07-16 17:15:44,480 >> {'loss': 0.0003, 'learning_rate': 1.0661e-06, 'epoch': 3.48, 'throughput': 620.49} [INFO|callbacks.py:310] 2024-07-16 17:15:55,449 >> {'loss': 0.0099, 'learning_rate': 1.0305e-06, 'epoch': 3.51, 'throughput': 620.45} [INFO|callbacks.py:310] 2024-07-16 17:16:06,411 >> {'loss': 0.0068, 'learning_rate': 9.9546e-07, 'epoch': 3.54, 'throughput': 620.34} [INFO|callbacks.py:310] 2024-07-16 17:16:17,397 >> {'loss': 0.0025, 'learning_rate': 9.6085e-07, 'epoch': 3.56, 'throughput': 620.33} [INFO|callbacks.py:310] 2024-07-16 17:16:28,378 >> {'loss': 0.0004, 'learning_rate': 9.2670e-07, 'epoch': 3.59, 'throughput': 620.50} [INFO|callbacks.py:310] 2024-07-16 17:16:39,370 >> {'loss': 0.0101, 'learning_rate': 8.9303e-07, 'epoch': 3.61, 'throughput': 620.37} [INFO|callbacks.py:310] 2024-07-16 17:16:50,370 >> {'loss': 0.0068, 'learning_rate': 8.5985e-07, 'epoch': 3.64, 'throughput': 620.41} [INFO|callbacks.py:310] 2024-07-16 17:17:01,381 >> {'loss': 0.0007, 'learning_rate': 8.2717e-07, 'epoch': 3.66, 'throughput': 620.29} [INFO|callbacks.py:310] 2024-07-16 17:17:12,379 >> {'loss': 0.0161, 'learning_rate': 7.9500e-07, 'epoch': 3.69, 'throughput': 620.19} [INFO|callbacks.py:310] 2024-07-16 17:17:23,362 >> {'loss': 0.0115, 'learning_rate': 7.6335e-07, 'epoch': 3.72, 'throughput': 620.44} [INFO|callbacks.py:310] 2024-07-16 17:17:34,347 >> {'loss': 0.0052, 'learning_rate': 7.3223e-07, 'epoch': 3.74, 'throughput': 620.49} [INFO|callbacks.py:310] 2024-07-16 17:17:45,329 >> {'loss': 0.0098, 'learning_rate': 7.0165e-07, 'epoch': 3.77, 'throughput': 620.56} [INFO|callbacks.py:310] 2024-07-16 17:17:56,308 >> {'loss': 0.0005, 'learning_rate': 6.7162e-07, 'epoch': 3.79, 'throughput': 620.74} [INFO|callbacks.py:310] 2024-07-16 17:18:07,295 >> {'loss': 0.0012, 'learning_rate': 6.4214e-07, 'epoch': 3.82, 'throughput': 620.71} [INFO|callbacks.py:310] 2024-07-16 17:18:18,292 >> {'loss': 0.0013, 'learning_rate': 6.1323e-07, 'epoch': 3.85, 'throughput': 620.61} [INFO|callbacks.py:310] 2024-07-16 17:18:29,277 >> {'loss': 0.0003, 'learning_rate': 5.8489e-07, 'epoch': 3.87, 'throughput': 620.76} [INFO|callbacks.py:310] 2024-07-16 17:18:40,277 >> {'loss': 0.0026, 'learning_rate': 5.5714e-07, 'epoch': 3.90, 'throughput': 620.61} [INFO|callbacks.py:310] 2024-07-16 17:18:51,272 >> {'loss': 0.0097, 'learning_rate': 5.2997e-07, 'epoch': 3.92, 'throughput': 620.67} 
[INFO|callbacks.py:310] 2024-07-16 17:19:02,251 >> {'loss': 0.0047, 'learning_rate': 5.0341e-07, 'epoch': 3.95, 'throughput': 620.62} [INFO|callbacks.py:310] 2024-07-16 17:19:13,213 >> {'loss': 0.0081, 'learning_rate': 4.7746e-07, 'epoch': 3.97, 'throughput': 620.72} [INFO|callbacks.py:310] 2024-07-16 17:19:24,189 >> {'loss': 0.0018, 'learning_rate': 4.5212e-07, 'epoch': 4.00, 'throughput': 620.95} [INFO|callbacks.py:310] 2024-07-16 17:19:35,181 >> {'loss': 0.0053, 'learning_rate': 4.2741e-07, 'epoch': 4.03, 'throughput': 620.97} [INFO|callbacks.py:310] 2024-07-16 17:19:46,170 >> {'loss': 0.0005, 'learning_rate': 4.0332e-07, 'epoch': 4.05, 'throughput': 621.01} [INFO|callbacks.py:310] 2024-07-16 17:19:57,184 >> {'loss': 0.0001, 'learning_rate': 3.7988e-07, 'epoch': 4.08, 'throughput': 620.88} [INFO|callbacks.py:310] 2024-07-16 17:20:08,184 >> {'loss': 0.0018, 'learning_rate': 3.5708e-07, 'epoch': 4.10, 'throughput': 620.79} [INFO|callbacks.py:310] 2024-07-16 17:20:19,180 >> {'loss': 0.0010, 'learning_rate': 3.3494e-07, 'epoch': 4.13, 'throughput': 620.68} [INFO|callbacks.py:310] 2024-07-16 17:20:30,173 >> {'loss': 0.0001, 'learning_rate': 3.1345e-07, 'epoch': 4.15, 'throughput': 620.79} [INFO|callbacks.py:310] 2024-07-16 17:20:41,136 >> {'loss': 0.0012, 'learning_rate': 2.9263e-07, 'epoch': 4.18, 'throughput': 620.89} [INFO|callbacks.py:310] 2024-07-16 17:20:52,115 >> {'loss': 0.0001, 'learning_rate': 2.7248e-07, 'epoch': 4.21, 'throughput': 620.82} [INFO|callbacks.py:310] 2024-07-16 17:21:03,096 >> {'loss': 0.0002, 'learning_rate': 2.5301e-07, 'epoch': 4.23, 'throughput': 620.76} [INFO|callbacks.py:310] 2024-07-16 17:21:14,072 >> {'loss': 0.0001, 'learning_rate': 2.3423e-07, 'epoch': 4.26, 'throughput': 620.84} [INFO|callbacks.py:310] 2024-07-16 17:21:25,062 >> {'loss': 0.0003, 'learning_rate': 2.1614e-07, 'epoch': 4.28, 'throughput': 620.65} [INFO|callbacks.py:310] 2024-07-16 17:21:36,062 >> {'loss': 0.0001, 'learning_rate': 1.9874e-07, 'epoch': 4.31, 'throughput': 620.79} [INFO|callbacks.py:310] 2024-07-16 17:21:47,070 >> {'loss': 0.0007, 'learning_rate': 1.8204e-07, 'epoch': 4.34, 'throughput': 620.70} [INFO|callbacks.py:310] 2024-07-16 17:21:58,075 >> {'loss': 0.0003, 'learning_rate': 1.6605e-07, 'epoch': 4.36, 'throughput': 620.56} [INFO|callbacks.py:310] 2024-07-16 17:22:09,064 >> {'loss': 0.0001, 'learning_rate': 1.5077e-07, 'epoch': 4.39, 'throughput': 620.70} [INFO|callbacks.py:310] 2024-07-16 17:22:20,044 >> {'loss': 0.0008, 'learning_rate': 1.3620e-07, 'epoch': 4.41, 'throughput': 620.90} [INFO|callbacks.py:310] 2024-07-16 17:22:31,005 >> {'loss': 0.0001, 'learning_rate': 1.2236e-07, 'epoch': 4.44, 'throughput': 620.92} [INFO|callbacks.py:310] 2024-07-16 17:22:41,991 >> {'loss': 0.0004, 'learning_rate': 1.0924e-07, 'epoch': 4.46, 'throughput': 621.02} [INFO|callbacks.py:310] 2024-07-16 17:22:52,983 >> {'loss': 0.0001, 'learning_rate': 9.6846e-08, 'epoch': 4.49, 'throughput': 620.99} [INFO|callbacks.py:310] 2024-07-16 17:23:03,992 >> {'loss': 0.0005, 'learning_rate': 8.5185e-08, 'epoch': 4.52, 'throughput': 621.03} [INFO|callbacks.py:310] 2024-07-16 17:23:14,988 >> {'loss': 0.0076, 'learning_rate': 7.4261e-08, 'epoch': 4.54, 'throughput': 621.25} [INFO|callbacks.py:310] 2024-07-16 17:23:26,004 >> {'loss': 0.0004, 'learning_rate': 6.4075e-08, 'epoch': 4.57, 'throughput': 621.12} [INFO|callbacks.py:310] 2024-07-16 17:23:37,007 >> {'loss': 0.0040, 'learning_rate': 5.4631e-08, 'epoch': 4.59, 'throughput': 621.16} [INFO|callbacks.py:310] 2024-07-16 17:23:47,992 >> {'loss': 0.0001, 
'learning_rate': 4.5932e-08, 'epoch': 4.62, 'throughput': 621.22}
[INFO|callbacks.py:310] 2024-07-16 17:23:58,970 >> {'loss': 0.0005, 'learning_rate': 3.7981e-08, 'epoch': 4.65, 'throughput': 621.17}
[INFO|callbacks.py:310] 2024-07-16 17:24:09,940 >> {'loss': 0.0002, 'learning_rate': 3.0779e-08, 'epoch': 4.67, 'throughput': 621.23}
[INFO|callbacks.py:310] 2024-07-16 17:24:20,909 >> {'loss': 0.0001, 'learning_rate': 2.4330e-08, 'epoch': 4.70, 'throughput': 621.25}
[INFO|callbacks.py:310] 2024-07-16 17:24:31,898 >> {'loss': 0.0001, 'learning_rate': 1.8635e-08, 'epoch': 4.72, 'throughput': 621.28}
[INFO|callbacks.py:310] 2024-07-16 17:24:42,890 >> {'loss': 0.0001, 'learning_rate': 1.3695e-08, 'epoch': 4.75, 'throughput': 621.30}
[INFO|callbacks.py:310] 2024-07-16 17:24:53,884 >> {'loss': 0.0167, 'learning_rate': 9.5133e-09, 'epoch': 4.77, 'throughput': 621.38}
[INFO|callbacks.py:310] 2024-07-16 17:25:04,898 >> {'loss': 0.0002, 'learning_rate': 6.0899e-09, 'epoch': 4.80, 'throughput': 621.28}
[INFO|callbacks.py:310] 2024-07-16 17:25:15,899 >> {'loss': 0.0003, 'learning_rate': 3.4262e-09, 'epoch': 4.83, 'throughput': 621.31}
[INFO|callbacks.py:310] 2024-07-16 17:25:26,879 >> {'loss': 0.0024, 'learning_rate': 1.5229e-09, 'epoch': 4.85, 'throughput': 621.29}
[INFO|callbacks.py:310] 2024-07-16 17:25:37,870 >> {'loss': 0.0005, 'learning_rate': 3.8076e-10, 'epoch': 4.88, 'throughput': 621.19}
[INFO|callbacks.py:310] 2024-07-16 17:25:48,849 >> {'loss': 0.0017, 'learning_rate': 0.0000e+00, 'epoch': 4.90, 'throughput': 621.15}
[INFO|trainer.py:3478] 2024-07-16 17:25:55,242 >> Saving model checkpoint to saves/LLaMA2-7B-Chat/full/train_2024-07-16-16-48-49_llama2_2/checkpoint-190
[INFO|configuration_utils.py:472] 2024-07-16 17:25:55,245 >> Configuration saved in saves/LLaMA2-7B-Chat/full/train_2024-07-16-16-48-49_llama2_2/checkpoint-190/config.json
[INFO|configuration_utils.py:769] 2024-07-16 17:25:55,246 >> Configuration saved in saves/LLaMA2-7B-Chat/full/train_2024-07-16-16-48-49_llama2_2/checkpoint-190/generation_config.json
[INFO|modeling_utils.py:2698] 2024-07-16 17:26:08,802 >> The model is bigger than the maximum size per checkpoint (5GB) and is going to be split in 3 checkpoint shards. You can find where each parameters has been saved in the index located at saves/LLaMA2-7B-Chat/full/train_2024-07-16-16-48-49_llama2_2/checkpoint-190/model.safetensors.index.json.
[INFO|tokenization_utils_base.py:2574] 2024-07-16 17:26:08,803 >> tokenizer config file saved in saves/LLaMA2-7B-Chat/full/train_2024-07-16-16-48-49_llama2_2/checkpoint-190/tokenizer_config.json
[INFO|tokenization_utils_base.py:2583] 2024-07-16 17:26:08,803 >> Special tokens file saved in saves/LLaMA2-7B-Chat/full/train_2024-07-16-16-48-49_llama2_2/checkpoint-190/special_tokens_map.json
[INFO|trainer.py:2383] 2024-07-16 17:26:40,079 >> Training completed. Do not forget to share your model on huggingface.co/models =)
[INFO|trainer.py:3478] 2024-07-16 17:26:46,683 >> Saving model checkpoint to saves/LLaMA2-7B-Chat/full/train_2024-07-16-16-48-49_llama2_2
[INFO|configuration_utils.py:472] 2024-07-16 17:26:46,686 >> Configuration saved in saves/LLaMA2-7B-Chat/full/train_2024-07-16-16-48-49_llama2_2/config.json
[INFO|configuration_utils.py:769] 2024-07-16 17:26:46,686 >> Configuration saved in saves/LLaMA2-7B-Chat/full/train_2024-07-16-16-48-49_llama2_2/generation_config.json
[INFO|modeling_utils.py:2698] 2024-07-16 17:27:01,023 >> The model is bigger than the maximum size per checkpoint (5GB) and is going to be split in 3 checkpoint shards. You can find where each parameters has been saved in the index located at saves/LLaMA2-7B-Chat/full/train_2024-07-16-16-48-49_llama2_2/model.safetensors.index.json.
[INFO|tokenization_utils_base.py:2574] 2024-07-16 17:27:01,024 >> tokenizer config file saved in saves/LLaMA2-7B-Chat/full/train_2024-07-16-16-48-49_llama2_2/tokenizer_config.json
[INFO|tokenization_utils_base.py:2583] 2024-07-16 17:27:01,025 >> Special tokens file saved in saves/LLaMA2-7B-Chat/full/train_2024-07-16-16-48-49_llama2_2/special_tokens_map.json
[WARNING|ploting.py:89] 2024-07-16 17:27:02,103 >> No metric eval_loss to plot.
[WARNING|ploting.py:89] 2024-07-16 17:27:02,103 >> No metric eval_accuracy to plot.
[INFO|modelcard.py:449] 2024-07-16 17:27:02,103 >> Dropping the following result as it does not have all the necessary fields: {'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}
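The final save above leaves a sharded safetensors checkpoint, tokenizer files, and generation config under saves/LLaMA2-7B-Chat/full/train_2024-07-16-16-48-49_llama2_2. A short smoke-test sketch for reloading it with plain transformers (the path comes from the save messages above; sampling settings such as temperature 0.6 and top_p 0.9 are picked up from the saved generation_config, and the prompt is only an illustration):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    ckpt = "saves/LLaMA2-7B-Chat/full/train_2024-07-16-16-48-49_llama2_2"
    tokenizer = AutoTokenizer.from_pretrained(ckpt)
    model = AutoModelForCausalLM.from_pretrained(
        ckpt, torch_dtype=torch.bfloat16, device_map="auto"
    )

    inputs = tokenizer("What is the capital of France?", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(out[0], skip_special_tokens=True))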