practica_2_kangaroo

This model is a fine-tuned version of hustvl/yolos-tiny on an unspecified dataset (listed as None in the card metadata). It achieves the following result on the evaluation set:

  • Loss: 0.6938

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 100
  • mixed_precision_training: Native AMP
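With lr_scheduler_type set to linear, the learning rate decays from 1e-05 toward 0 over the course of training. A minimal sketch of that schedule, assuming no warmup (the card does not list any warmup steps) and 1900 total steps taken from the results table below:

```python
# Hypothetical sketch of the linear learning-rate schedule, not the exact
# Trainer internals. Assumes zero warmup steps, which this card does not list.

TOTAL_STEPS = 1900   # 100 epochs x 19 steps/epoch, from the results table
BASE_LR = 1e-05      # learning_rate hyperparameter above

def linear_lr(step: int, base_lr: float = BASE_LR, total_steps: int = TOTAL_STEPS) -> float:
    """Linearly decay the learning rate from base_lr at step 0 to 0 at total_steps."""
    remaining = max(0, total_steps - step)
    return base_lr * remaining / total_steps

print(linear_lr(0))     # full base learning rate
print(linear_lr(950))   # halfway through training (epoch 50)
print(linear_lr(1900))  # fully decayed
```

At step 0 the rate is the full 1e-05, at step 950 (epoch 50) it has halved to 5e-06, and at step 1900 it reaches 0.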

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 19   | 0.9975          |
| No log        | 2.0   | 38   | 0.8337          |
| 0.8733        | 3.0   | 57   | 0.9003          |
| 0.8733        | 4.0   | 76   | 0.7992          |
| 0.8733        | 5.0   | 95   | 0.7225          |
| 0.577         | 6.0   | 114  | 0.8095          |
| 0.577         | 7.0   | 133  | 0.8329          |
| 0.4498        | 8.0   | 152  | 0.7701          |
| 0.4498        | 9.0   | 171  | 0.7072          |
| 0.4498        | 10.0  | 190  | 0.7774          |
| 0.3697        | 11.0  | 209  | 0.7421          |
| 0.3697        | 12.0  | 228  | 0.6773          |
| 0.3697        | 13.0  | 247  | 0.6309          |
| 0.3348        | 14.0  | 266  | 0.7009          |
| 0.3348        | 15.0  | 285  | 0.7800          |
| 0.2907        | 16.0  | 304  | 0.7364          |
| 0.2907        | 17.0  | 323  | 0.6137          |
| 0.2907        | 18.0  | 342  | 0.6721          |
| 0.2595        | 19.0  | 361  | 0.6353          |
| 0.2595        | 20.0  | 380  | 0.6392          |
| 0.2595        | 21.0  | 399  | 0.6280          |
| 0.244         | 22.0  | 418  | 0.5759          |
| 0.244         | 23.0  | 437  | 0.5613          |
| 0.2154        | 24.0  | 456  | 0.6886          |
| 0.2154        | 25.0  | 475  | 0.6181          |
| 0.2154        | 26.0  | 494  | 0.6223          |
| 0.1989        | 27.0  | 513  | 0.5730          |
| 0.1989        | 28.0  | 532  | 0.6037          |
| 0.1848        | 29.0  | 551  | 0.7125          |
| 0.1848        | 30.0  | 570  | 0.6218          |
| 0.1848        | 31.0  | 589  | 0.5871          |
| 0.1686        | 32.0  | 608  | 0.6126          |
| 0.1686        | 33.0  | 627  | 0.6017          |
| 0.1686        | 34.0  | 646  | 0.7448          |
| 0.1667        | 35.0  | 665  | 0.6713          |
| 0.1667        | 36.0  | 684  | 0.7800          |
| 0.1584        | 37.0  | 703  | 0.7249          |
| 0.1584        | 38.0  | 722  | 0.6830          |
| 0.1584        | 39.0  | 741  | 0.6575          |
| 0.1424        | 40.0  | 760  | 0.6051          |
| 0.1424        | 41.0  | 779  | 0.6029          |
| 0.1424        | 42.0  | 798  | 0.6182          |
| 0.1399        | 43.0  | 817  | 0.5813          |
| 0.1399        | 44.0  | 836  | 0.6202          |
| 0.1312        | 45.0  | 855  | 0.6301          |
| 0.1312        | 46.0  | 874  | 0.7338          |
| 0.1312        | 47.0  | 893  | 0.7173          |
| 0.1278        | 48.0  | 912  | 0.6548          |
| 0.1278        | 49.0  | 931  | 0.7101          |
| 0.1166        | 50.0  | 950  | 0.6286          |
| 0.1166        | 51.0  | 969  | 0.5544          |
| 0.1166        | 52.0  | 988  | 0.6381          |
| 0.1108        | 53.0  | 1007 | 0.7138          |
| 0.1108        | 54.0  | 1026 | 0.6907          |
| 0.1108        | 55.0  | 1045 | 0.7450          |
| 0.1097        | 56.0  | 1064 | 0.7085          |
| 0.1097        | 57.0  | 1083 | 0.6120          |
| 0.1063        | 58.0  | 1102 | 0.6301          |
| 0.1063        | 59.0  | 1121 | 0.6081          |
| 0.1063        | 60.0  | 1140 | 0.5714          |
| 0.1025        | 61.0  | 1159 | 0.6341          |
| 0.1025        | 62.0  | 1178 | 0.5742          |
| 0.1025        | 63.0  | 1197 | 0.6593          |
| 0.1017        | 64.0  | 1216 | 0.6832          |
| 0.1017        | 65.0  | 1235 | 0.6422          |
| 0.0931        | 66.0  | 1254 | 0.6032          |
| 0.0931        | 67.0  | 1273 | 0.6909          |
| 0.0931        | 68.0  | 1292 | 0.6501          |
| 0.0888        | 69.0  | 1311 | 0.6737          |
| 0.0888        | 70.0  | 1330 | 0.7715          |
| 0.0888        | 71.0  | 1349 | 0.5660          |
| 0.0801        | 72.0  | 1368 | 0.5877          |
| 0.0801        | 73.0  | 1387 | 0.6078          |
| 0.0848        | 74.0  | 1406 | 0.5911          |
| 0.0848        | 75.0  | 1425 | 0.6001          |
| 0.0848        | 76.0  | 1444 | 0.7010          |
| 0.0827        | 77.0  | 1463 | 0.5590          |
| 0.0827        | 78.0  | 1482 | 0.5833          |
| 0.0767        | 79.0  | 1501 | 0.5435          |
| 0.0767        | 80.0  | 1520 | 0.5577          |
| 0.0767        | 81.0  | 1539 | 0.6186          |
| 0.0724        | 82.0  | 1558 | 0.6701          |
| 0.0724        | 83.0  | 1577 | 0.6461          |
| 0.0724        | 84.0  | 1596 | 0.5634          |
| 0.0707        | 85.0  | 1615 | 0.7126          |
| 0.0707        | 86.0  | 1634 | 0.6726          |
| 0.0707        | 87.0  | 1653 | 0.5629          |
| 0.0707        | 88.0  | 1672 | 0.6799          |
| 0.0707        | 89.0  | 1691 | 0.6672          |
| 0.0707        | 90.0  | 1710 | 0.7435          |
| 0.0707        | 91.0  | 1729 | 0.6398          |
| 0.0707        | 92.0  | 1748 | 0.6162          |
| 0.0802        | 93.0  | 1767 | 0.5773          |
| 0.0802        | 94.0  | 1786 | 0.6004          |
| 0.0659        | 95.0  | 1805 | 0.6375          |
| 0.0659        | 96.0  | 1824 | 0.6713          |
| 0.0659        | 97.0  | 1843 | 0.7374          |
| 0.0651        | 98.0  | 1862 | 0.6655          |
| 0.0651        | 99.0  | 1881 | 0.7368          |
| 0.0624        | 100.0 | 1900 | 0.6938          |
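Note that the final validation loss of 0.6938 is not the lowest this run reached: validation loss bottoms out at 0.5435 at epoch 79 (step 1501) while training loss keeps falling, suggesting the later epochs mostly overfit. A small sketch of scanning the log for the best checkpoint, using a hand-picked subset of rows from the table above (the local minima and the final row):

```python
# Subset of (epoch, step, validation_loss) rows from the results table above.
# Epoch 79 holds the overall minimum across the full table.
rows = [
    (23, 437, 0.5613),
    (51, 969, 0.5544),
    (77, 1463, 0.5590),
    (79, 1501, 0.5435),
    (87, 1653, 0.5629),
    (100, 1900, 0.6938),
]

# Pick the row with the lowest validation loss.
best_epoch, best_step, best_loss = min(rows, key=lambda r: r[2])
print(f"best checkpoint: epoch {best_epoch}, step {best_step}, val loss {best_loss}")
```

In practice this selection can be automated with the Trainer's `load_best_model_at_end` and `metric_for_best_model` options rather than a post-hoc scan.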

Framework versions

  • Transformers 4.48.3
  • PyTorch 2.5.1+cu124
  • Datasets 3.3.1
  • Tokenizers 0.21.0