[INFO|configuration_utils.py:733] 2024-10-17 08:14:57,712 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[INFO|configuration_utils.py:800] 2024-10-17 08:14:57,713 >> Model config LlamaConfig {
  "_name_or_path": "meta-llama/Meta-Llama-3.1-8B",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "eos_token_id": 128001,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 131072,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": {
    "factor": 8.0,
    "high_freq_factor": 4.0,
    "low_freq_factor": 1.0,
    "original_max_position_embeddings": 8192,
    "rope_type": "llama3"
  },
  "rope_theta": 500000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.44.2",
  "use_cache": true,
  "vocab_size": 128256
}

[INFO|tokenization_utils_base.py:2269] 2024-10-17 08:14:59,380 >> loading file tokenizer.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/tokenizer.json
[INFO|tokenization_utils_base.py:2269] 2024-10-17 08:14:59,380 >> loading file added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:2269] 2024-10-17 08:14:59,380 >> loading file special_tokens_map.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/special_tokens_map.json
[INFO|tokenization_utils_base.py:2269] 2024-10-17 08:14:59,380 >> loading file tokenizer_config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/tokenizer_config.json
[INFO|tokenization_utils_base.py:2513] 2024-10-17 08:14:59,766 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
[INFO|configuration_utils.py:733] 2024-10-17 08:15:00,640 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[INFO|configuration_utils.py:800] 2024-10-17 08:15:00,641 >> Model config LlamaConfig { ... }
[INFO|tokenization_utils_base.py:2269] 2024-10-17 08:15:00,834 >> loading file tokenizer.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/tokenizer.json
[INFO|tokenization_utils_base.py:2269] 2024-10-17 08:15:00,835 >> loading file added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:2269] 2024-10-17 08:15:00,835 >> loading file special_tokens_map.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/special_tokens_map.json
[INFO|tokenization_utils_base.py:2269] 2024-10-17 08:15:00,835 >> loading file tokenizer_config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/tokenizer_config.json
[INFO|tokenization_utils_base.py:2513] 2024-10-17 08:15:01,190 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
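The two near-identical load sequences above are just the config and tokenizer being initialized twice; both resolve to the same cached Hub snapshot, so nothing is downloaded again. For reference, a minimal `transformers` sketch that produces this kind of log output (assuming approved access to the gated meta-llama repo and a prior `huggingface-cli login`):

```python
from transformers import AutoConfig, AutoTokenizer

# Resolves from the Hub cache (~/.cache/huggingface/hub), as in the log above.
model_id = "meta-llama/Meta-Llama-3.1-8B"
config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

print(config.model_type, config.vocab_size)            # llama 128256
print(tokenizer.bos_token_id, tokenizer.eos_token_id)  # 128000 128001
```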
[INFO|configuration_utils.py:733] 2024-10-17 08:15:10,416 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[INFO|configuration_utils.py:800] 2024-10-17 08:15:10,417 >> Model config LlamaConfig { ... }
[INFO|modeling_utils.py:3678] 2024-10-17 08:15:11,328 >> loading weights file model.safetensors from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/model.safetensors.index.json
[INFO|modeling_utils.py:1606] 2024-10-17 08:16:45,202 >> Instantiating LlamaForCausalLM model under default dtype torch.bfloat16.
[INFO|configuration_utils.py:1038] 2024-10-17 08:16:45,203 >> Generate config GenerationConfig {
  "bos_token_id": 128000,
  "eos_token_id": 128001
}

[INFO|modeling_utils.py:4507] 2024-10-17 08:16:51,256 >> All model checkpoint weights were used when initializing LlamaForCausalLM.
[INFO|modeling_utils.py:4515] 2024-10-17 08:16:51,256 >> All the weights of LlamaForCausalLM were initialized from the model checkpoint at meta-llama/Meta-Llama-3.1-8B. If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
[INFO|configuration_utils.py:993] 2024-10-17 08:16:51,546 >> loading configuration file generation_config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/generation_config.json
[INFO|configuration_utils.py:1038] 2024-10-17 08:16:51,546 >> Generate config GenerationConfig {
  "bos_token_id": 128000,
  "do_sample": true,
  "eos_token_id": 128001,
  "temperature": 0.6,
  "top_p": 0.9
}

[INFO|trainer.py:648] 2024-10-17 08:16:52,594 >> Using auto half precision backend
[INFO|trainer.py:2134] 2024-10-17 08:16:53,000 >> ***** Running training *****
[INFO|trainer.py:2135] 2024-10-17 08:16:53,000 >> Num examples = 3,997
[INFO|trainer.py:2136] 2024-10-17 08:16:53,000 >> Num Epochs = 3
[INFO|trainer.py:2137] 2024-10-17 08:16:53,001 >> Instantaneous batch size per device = 2
[INFO|trainer.py:2140] 2024-10-17 08:16:53,001 >> Total train batch size (w. parallel, distributed & accumulation) = 16
[INFO|trainer.py:2141] 2024-10-17 08:16:53,001 >> Gradient Accumulation steps = 8
[INFO|trainer.py:2142] 2024-10-17 08:16:53,001 >> Total optimization steps = 747
[INFO|trainer.py:2143] 2024-10-17 08:16:53,006 >> Number of trainable parameters = 20,971,520
[INFO|trainer.py:3503] 2024-10-17 08:24:32,400 >> Saving model checkpoint to saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/checkpoint-100
[INFO|configuration_utils.py:733] 2024-10-17 08:24:32,849 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[INFO|configuration_utils.py:800] 2024-10-17 08:24:32,850 >> Model config LlamaConfig { ... }
[INFO|tokenization_utils_base.py:2684] 2024-10-17 08:24:33,047 >> tokenizer config file saved in saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/checkpoint-100/tokenizer_config.json
[INFO|tokenization_utils_base.py:2693] 2024-10-17 08:24:33,047 >> Special tokens file saved in saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/checkpoint-100/special_tokens_map.json
[INFO|trainer.py:3503] 2024-10-17 08:32:00,527 >> Saving model checkpoint to saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/checkpoint-200
[INFO|configuration_utils.py:733] 2024-10-17 08:32:00,937 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[INFO|configuration_utils.py:800] 2024-10-17 08:32:00,938 >> Model config LlamaConfig { ... }
[INFO|tokenization_utils_base.py:2684] 2024-10-17 08:32:01,122 >> tokenizer config file saved in saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/checkpoint-200/tokenizer_config.json
[INFO|tokenization_utils_base.py:2693] 2024-10-17 08:32:01,122 >> Special tokens file saved in saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/checkpoint-200/special_tokens_map.json
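The logged numbers are internally consistent and pin the setup down fairly precisely. A per-device batch of 2 with 8 gradient-accumulation steps on a single device gives an effective batch of 16, so floor(3,997 / 16) = 249 optimizer steps per epoch, and 249 × 3 epochs = 747 total steps, exactly as logged. Likewise, 20,971,520 trainable parameters matches LoRA rank 8 applied to all seven linear projections in each of the 32 decoder layers: per layer and per rank, q_proj/o_proj contribute 2 × (4096 + 4096), k_proj/v_proj contribute 2 × (4096 + 1024), and gate/up/down contribute 3 × (4096 + 14336), totaling 81,920; and 81,920 × 8 × 32 = 20,971,520. A minimal PEFT sketch that reproduces this count; `lora_alpha` and `lora_dropout` are assumptions, since they are not recoverable from the log:

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B", torch_dtype=torch.bfloat16
)

# Rank and target modules reconstructed from the logged count:
# 81,920 params per layer per rank x rank 8 x 32 layers = 20,971,520.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,     # assumption: alpha is not in the log
    lora_dropout=0.0,  # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# -> trainable params: 20,971,520 || all params: 8,051,232,768 || trainable%: ~0.26
```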
[INFO|trainer.py:3503] 2024-10-17 08:39:39,234 >> Saving model checkpoint to saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/checkpoint-300
[INFO|configuration_utils.py:733] 2024-10-17 08:39:39,661 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[INFO|configuration_utils.py:800] 2024-10-17 08:39:39,662 >> Model config LlamaConfig { ... }
[INFO|tokenization_utils_base.py:2684] 2024-10-17 08:39:39,845 >> tokenizer config file saved in saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/checkpoint-300/tokenizer_config.json
[INFO|tokenization_utils_base.py:2693] 2024-10-17 08:39:39,846 >> Special tokens file saved in saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/checkpoint-300/special_tokens_map.json
[INFO|trainer.py:3503] 2024-10-17 08:47:09,898 >> Saving model checkpoint to saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/checkpoint-400
[INFO|configuration_utils.py:733] 2024-10-17 08:47:10,317 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[INFO|configuration_utils.py:800] 2024-10-17 08:47:10,318 >> Model config LlamaConfig { ... }
[INFO|tokenization_utils_base.py:2684] 2024-10-17 08:47:10,506 >> tokenizer config file saved in saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/checkpoint-400/tokenizer_config.json
[INFO|tokenization_utils_base.py:2693] 2024-10-17 08:47:10,506 >> Special tokens file saved in saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/checkpoint-400/special_tokens_map.json
[INFO|trainer.py:3503] 2024-10-17 08:54:50,670 >> Saving model checkpoint to saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/checkpoint-500
[INFO|configuration_utils.py:733] 2024-10-17 08:54:51,091 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[INFO|configuration_utils.py:800] 2024-10-17 08:54:51,092 >> Model config LlamaConfig { ... }
[INFO|tokenization_utils_base.py:2684] 2024-10-17 08:54:51,279 >> tokenizer config file saved in saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/checkpoint-500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2693] 2024-10-17 08:54:51,279 >> Special tokens file saved in saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/checkpoint-500/special_tokens_map.json
[INFO|trainer.py:3503] 2024-10-17 09:02:21,804 >> Saving model checkpoint to saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/checkpoint-600
[INFO|configuration_utils.py:733] 2024-10-17 09:02:22,249 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[INFO|configuration_utils.py:800] 2024-10-17 09:02:22,250 >> Model config LlamaConfig { ... }
[INFO|tokenization_utils_base.py:2684] 2024-10-17 09:02:22,433 >> tokenizer config file saved in saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/checkpoint-600/tokenizer_config.json
[INFO|tokenization_utils_base.py:2693] 2024-10-17 09:02:22,433 >> Special tokens file saved in saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/checkpoint-600/special_tokens_map.json
[INFO|trainer.py:3503] 2024-10-17 09:09:56,637 >> Saving model checkpoint to saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/checkpoint-700
[INFO|configuration_utils.py:733] 2024-10-17 09:09:57,227 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[INFO|configuration_utils.py:800] 2024-10-17 09:09:57,228 >> Model config LlamaConfig { ... }
[INFO|tokenization_utils_base.py:2684] 2024-10-17 09:09:57,413 >> tokenizer config file saved in saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/checkpoint-700/tokenizer_config.json
[INFO|tokenization_utils_base.py:2693] 2024-10-17 09:09:57,413 >> Special tokens file saved in saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/checkpoint-700/special_tokens_map.json
[INFO|trainer.py:3503] 2024-10-17 09:13:37,150 >> Saving model checkpoint to saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/checkpoint-747
[INFO|configuration_utils.py:733] 2024-10-17 09:13:37,684 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[INFO|configuration_utils.py:800] 2024-10-17 09:13:37,685 >> Model config LlamaConfig { ... }
[INFO|tokenization_utils_base.py:2684] 2024-10-17 09:13:37,867 >> tokenizer config file saved in saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/checkpoint-747/tokenizer_config.json
[INFO|tokenization_utils_base.py:2693] 2024-10-17 09:13:37,867 >> Special tokens file saved in saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/checkpoint-747/special_tokens_map.json
[INFO|trainer.py:2394] 2024-10-17 09:13:38,353 >> Training completed. Do not forget to share your model on huggingface.co/models =)
[INFO|trainer.py:3503] 2024-10-17 09:13:38,355 >> Saving model checkpoint to saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26
[INFO|configuration_utils.py:733] 2024-10-17 09:13:38,754 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[INFO|configuration_utils.py:800] 2024-10-17 09:13:38,755 >> Model config LlamaConfig { ... }
[INFO|tokenization_utils_base.py:2684] 2024-10-17 09:13:38,938 >> tokenizer config file saved in saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/tokenizer_config.json
[INFO|tokenization_utils_base.py:2693] 2024-10-17 09:13:38,938 >> Special tokens file saved in saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26/special_tokens_map.json
[INFO|modelcard.py:449] 2024-10-17 09:13:39,311 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}
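The final adapter and tokenizer files land in the run directory itself, and the closing modelcard.py message is harmless: the auto-generated model card simply lacks dataset/metric metadata and does not affect the saved weights. A minimal sketch of using the result for inference; the sampling settings mirror the generation_config.json logged above, and the adapter path is the run directory from this log:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(
    base, "saves/Llama-3.1-8B/lora/train_2024-10-17-08-10-26"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B")

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64,
                     do_sample=True, temperature=0.6, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))

# Optionally fold the adapter into the base weights for adapter-free deployment:
merged = model.merge_and_unload()
```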