---
library_name: transformers
license: other
base_model: TinyLlama/TinyLlama_v1.1
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: newData-progressive-yoco-tiny-llama-CDL-18
  results: []
---

# newData-progressive-yoco-tiny-llama-CDL-18

This model was fine-tuned from the local checkpoint `/ephemeral/hossein/output/newData-progressive-yoco-tiny-llama-CDL-19/checkpoint-50` (base model: [TinyLlama/TinyLlama_v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1)) on the alpaca_reformatted, UltraInteract_sft_reformatted, reformatted_ultrachat_200k, reformatted_MathInstruct, and small_slim_pajama datasets.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 58
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 1856 (58 per device × 8 GPUs × 4 accumulation steps)
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.005
- training_steps: 50

### Training results

### Framework versions

- Transformers 4.45.2
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
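
As a minimal sketch only (the exact LLaMA-Factory invocation is not recorded in this card), the hyperparameters above map onto Hugging Face `TrainingArguments` roughly as follows. The `output_dir` is hypothetical, and the Adam betas/epsilon listed above are the Trainer defaults, so they need no explicit arguments:

```python
from transformers import TrainingArguments

# Sketch of the training configuration reported above; not the actual
# LLaMA-Factory config file. output_dir is a placeholder.
args = TrainingArguments(
    output_dir="newData-progressive-yoco-tiny-llama-CDL-18",  # hypothetical
    learning_rate=2e-5,
    per_device_train_batch_size=58,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=4,   # with 8 GPUs: 58 * 8 * 4 = 1856 total
    lr_scheduler_type="cosine",
    warmup_ratio=0.005,
    max_steps=50,                    # training_steps: 50
)
```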
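
For completeness, here is a hedged example of loading the model for inference with `transformers`. The repository id below is a placeholder, since the card does not state where the weights are published:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder id: replace with the actual hub repo or local checkpoint path.
model_id = "path/to/newData-progressive-yoco-tiny-llama-CDL-18"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain the Pythagorean theorem in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```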