---
library_name: transformers
license: apache-2.0
base_model: c14kevincardenas/beit-large-patch16-384-limb
tags:
- image-regression
- human-movement
- vision
- generated_from_trainer
model-index:
- name: limbxy_pose_2heads_2layers_32embeddim
  results: []
---
# limbxy_pose_2heads_2layers_32embeddim

This model is a fine-tuned version of [c14kevincardenas/beit-large-patch16-384-limb](https://huggingface.co/c14kevincardenas/beit-large-patch16-384-limb) on the c14kevincardenas/beta_caller_284_limbxy_pose dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1417
- RMSE: 0.3765
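The reported RMSE is the square root of the validation loss, which suggests the loss is mean squared error over the predicted coordinates. A quick sanity check:

```python
import math

mse = 0.1417           # final validation loss reported above
print(math.sqrt(mse))  # ~0.3764, matching the reported RMSE up to rounding
```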
## Model description

This checkpoint adapts the BEiT-large vision transformer (patch size 16, 384×384 input) to image regression: given an image of human movement, it predicts limb (x, y) coordinates. As the model name encodes, the regression head uses 2 attention heads, 2 layers, and an embedding dimension of 32 on top of the fine-tuned backbone.
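A minimal inference sketch follows. The exact loading class for the custom regression head is not documented here, so `AutoModelForImageClassification` with `trust_remote_code=True` is an assumption; adjust to however this checkpoint is actually packaged.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "c14kevincardenas/limbxy_pose_2heads_2layers_32embeddim"  # assumed repo id

processor = AutoImageProcessor.from_pretrained(repo)
# Assumption: the custom regression head loads this way; the real class may differ.
model = AutoModelForImageClassification.from_pretrained(repo, trust_remote_code=True)
model.eval()

image = Image.open("frame.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits)  # assumed to hold the regressed (x, y) limb coordinates
```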
## Intended uses & limitations

The model is intended for regressing limb (x, y) coordinates from single images of human movement, in the setting covered by the c14kevincardenas/beta_caller_284_limbxy_pose dataset. No evaluation beyond that dataset is reported, so behaviour on other camera setups, subjects, or movement types is unknown.
## Training and evaluation data

Training and evaluation use the c14kevincardenas/beta_caller_284_limbxy_pose dataset. Per the training log below, one epoch is 89 steps at a batch size of 64, i.e. roughly 5,700 training examples; the split definitions are not otherwise documented here.
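A minimal loading sketch (the split and column names are assumptions, since the dataset schema is not documented in this card):

```python
from datasets import load_dataset

# Inspect the returned DatasetDict to confirm the actual splits and columns.
ds = load_dataset("c14kevincardenas/beta_caller_284_limbxy_pose")
print(ds)
```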
## Training procedure
### Training hyperparameters

The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 2014
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 20.0
- mixed_precision_training: Native AMP
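For reproduction, the list above maps onto `transformers.TrainingArguments` roughly as follows. This is a sketch, not the author's exact training script; `output_dir` is a placeholder and the AMP flag is inferred from "Native AMP":

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="limbxy_pose_2heads_2layers_32embeddim",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=2014,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=250,
    num_train_epochs=20.0,
    fp16=True,  # "Native AMP" mixed precision
)
```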
### Training results

| Training Loss | Epoch | Step | Validation Loss | RMSE   |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3414        | 1.0   | 89   | 0.3311          | 0.5754 |
| 0.3313        | 2.0   | 178  | 0.3311          | 0.5754 |
| 0.2277        | 3.0   | 267  | 0.1668          | 0.4085 |
| 0.1756        | 4.0   | 356  | 0.1516          | 0.3893 |
| 0.1761        | 5.0   | 445  | 0.1556          | 0.3945 |
| 0.1632        | 6.0   | 534  | 0.1460          | 0.3821 |
| 0.1488        | 7.0   | 623  | 0.1453          | 0.3812 |
| 0.1502        | 8.0   | 712  | 0.1449          | 0.3807 |
| 0.1512        | 9.0   | 801  | 0.1509          | 0.3885 |
| 0.1491        | 10.0  | 890  | 0.1451          | 0.3809 |
| 0.1615        | 11.0  | 979  | 0.1451          | 0.3810 |
| 0.1549        | 12.0  | 1068 | 0.1453          | 0.3812 |
| 0.1511        | 13.0  | 1157 | 0.1451          | 0.3810 |
| 0.146         | 14.0  | 1246 | 0.1448          | 0.3806 |
| 0.1495        | 15.0  | 1335 | 0.1442          | 0.3797 |
| 0.1463        | 16.0  | 1424 | 0.1433          | 0.3785 |
| 0.1455        | 17.0  | 1513 | 0.1428          | 0.3779 |
| 0.144         | 18.0  | 1602 | 0.1422          | 0.3772 |
| 0.1461        | 19.0  | 1691 | 0.1427          | 0.3778 |
| 0.1425        | 20.0  | 1780 | 0.1417          | 0.3765 |
### Framework versions

- Transformers 4.45.2
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1