image-captioning-output

This model is a fine-tuned version of an unspecified base model on the coco_dataset_script dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3319
  • Rouge1: 21.9307
  • Rouge2: 4.1909
  • Rougel: 20.068
  • Rougelsum: 19.9653
  • Gen Len: 12.0625
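
The ROUGE scores above are typically produced by the `rouge_score` package. As a self-contained illustration of what ROUGE-1 measures (not the exact implementation used here: this sketch uses plain whitespace tokenization and no stemming, both simplifications):

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """Unigram-overlap F1 between a generated caption and a reference."""
    pred = Counter(prediction.lower().split())
    ref = Counter(reference.lower().split())
    # Clipped overlap: each unigram counts at most as often as it
    # appears in the reference.
    overlap = sum((pred & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge1_f1("a cat on a mat", "a cat sat on the mat")` yields 8/11 ≈ 0.727: four shared unigrams out of five predicted and six reference tokens. The reported Rouge1 of 21.9307 is this quantity (as computed by the standard library, with its own preprocessing) scaled to a 0-100 range and averaged over the evaluation set.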

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3.0
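
With lr_scheduler_type: linear, the learning rate decays linearly from 5e-05 to 0 over the 60 optimizer steps recorded in the results table (3 epochs × 20 steps per epoch). A minimal sketch of that schedule follows; treating warmup as zero is an assumption, since no warmup_steps value is reported in this card:

```python
def linear_lr(step: int, total_steps: int = 60, base_lr: float = 5e-05,
              warmup_steps: int = 0) -> float:
    """Linear decay to zero after an optional linear warmup.

    total_steps=60 and warmup_steps=0 are taken from / assumed for
    this training run; they are not general defaults.
    """
    if step < warmup_steps:
        # Ramp up from 0 to base_lr during warmup.
        return base_lr * step / max(1, warmup_steps)
    remaining = total_steps - step
    # Decay linearly from base_lr at the end of warmup to 0 at total_steps.
    return base_lr * max(0.0, remaining / max(1, total_steps - warmup_steps))
```

So the learning rate is 5e-05 at step 0, halves to 2.5e-05 by step 30 (mid-epoch 2), and reaches 0 at step 60.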

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2 | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 20   | 0.3359          | 18.236  | 0.5556 | 18.2694 | 18.255    | 7.0     |
| No log        | 2.0   | 40   | 0.3315          | 19.2924 | 3.6258 | 18.3375 | 18.3568   | 14.1875 |
| No log        | 3.0   | 60   | 0.3319          | 21.9307 | 4.1909 | 20.068  | 19.9653   | 12.0625 |

Framework versions

  • Transformers 4.35.2
  • PyTorch 2.1.0+cu121
  • Datasets 2.16.1
  • Tokenizers 0.15.0

Model weights

  • Format: Safetensors
  • Model size: 239M params
  • Tensor type: F32