---
license: bsd-3-clause
base_model: Salesforce/codet5p-220m
tags:
  - generated_from_trainer
model-index:
  - name: SolCoderFuncs
    results: []
---

# SolCoderFuncs

This model is a fine-tuned version of [Salesforce/codet5p-220m](https://huggingface.co/Salesforce/codet5p-220m) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.5510
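
Since this is a CodeT5+-based seq2seq checkpoint, it should load through the standard `transformers` API. Below is a minimal usage sketch; the repo id `Pipper/SolCoderFuncs`, the Solidity comment-to-code task, and the prompt format are assumptions, since the card does not document the training inputs.

```python
# Minimal usage sketch. The repo id and prompt format below are assumptions;
# the card does not document the exact input format used during fine-tuning.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "Pipper/SolCoderFuncs"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Generate code from a natural-language comment (assumed task).
comment = "/// @notice returns the sum of two unsigned integers"
inputs = tokenizer(comment, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```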

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a matching `Seq2SeqTrainingArguments` sketch follows the list):

- learning_rate: 0.0001
- train_batch_size: 37
- eval_batch_size: 37
- seed: 100
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 148
- total_eval_batch_size: 148
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
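
These hyperparameters map onto `Seq2SeqTrainingArguments` roughly as in the sketch below. This is a reconstruction, not the original training script; the per-device batch size of 37 across 4 GPUs yields the reported total of 148, and per-epoch evaluation is assumed from the results table.

```python
# Sketch reconstructing the training configuration above; not the original script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="SolCoderFuncs",
    learning_rate=1e-4,
    per_device_train_batch_size=37,  # 37 x 4 GPUs = 148 total
    per_device_eval_batch_size=37,
    seed=100,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",     # assumption: validation loss logged per epoch
)
```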

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8793        | 1.0   | 3600  | 0.7881          |
| 0.7622        | 2.0   | 7200  | 0.7190          |
| 0.7077        | 3.0   | 10800 | 0.6769          |
| 0.659         | 4.0   | 14400 | 0.6518          |
| 0.6212        | 5.0   | 18000 | 0.6300          |
| 0.589         | 6.0   | 21600 | 0.6119          |
| 0.562         | 7.0   | 25200 | 0.6014          |
| 0.5361        | 8.0   | 28800 | 0.5905          |
| 0.5171        | 9.0   | 32400 | 0.5799          |
| 0.4973        | 10.0  | 36000 | 0.5747          |
| 0.4772        | 11.0  | 39600 | 0.5666          |
| 0.4619        | 12.0  | 43200 | 0.5610          |
| 0.4443        | 13.0  | 46800 | 0.5588          |
| 0.4335        | 14.0  | 50400 | 0.5571          |
| 0.4192        | 15.0  | 54000 | 0.5534          |
| 0.4062        | 16.0  | 57600 | 0.5512          |
| 0.3977        | 17.0  | 61200 | 0.5513          |
| 0.3864        | 18.0  | 64800 | 0.5515          |
| 0.3791        | 19.0  | 68400 | 0.5507          |
| 0.3718        | 20.0  | 72000 | 0.5510          |

### Framework versions

- Transformers 4.33.0
- Pytorch 2.1.0+cu121
- Datasets 2.11.0
- Tokenizers 0.13.3