---
license: apache-2.0
library_name: peft
tags:
  - generated_from_trainer
base_model: upstage/SOLAR-10.7B-v1.0
model-index:
  - name: outputs
    results: []
datasets:
  - sr5434/CodegebraGPT_data
---

# CodegebraGPT-10b

This model is a fine-tuned version of upstage/SOLAR-10.7B-v1.0 on the text-only 100k-sample subset of the sr5434/CodegebraGPT_data dataset.

## Model description

It can chat with you about science, engineering, math, or coding.
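
Below is a minimal inference sketch. It assumes the adapter weights are published under the repo id `sr5434/CodegebraGPT-10b` and that the tokenizer is taken from the base model; the prompt format is illustrative only:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "upstage/SOLAR-10.7B-v1.0"
adapter_id = "sr5434/CodegebraGPT-10b"  # assumption: adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
# Attach the PEFT adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Explain the quadratic formula."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```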

## Intended uses & limitations

This model has not been fine-tuned with RLHF and is not intended for production use.

## Training and evaluation data

The model was trained on the CodegebraGPT 100k text dataset (sr5434/CodegebraGPT_data).
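
The data can be loaded directly from the Hub with the `datasets` library. The card does not specify how the 100k text-only subset was selected, so the slice below is purely illustrative:

```python
from datasets import load_dataset

# Load the training split from the Hub.
ds = load_dataset("sr5434/CodegebraGPT_data", split="train")

# Illustrative 100k-sample cut; the actual subset selection is unspecified.
subset = ds.select(range(min(100_000, len(ds))))
print(subset)
```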

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
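
As a rough reproduction sketch, these settings map onto Hugging Face `TrainingArguments` as follows. The output directory is an assumption based on the model-index name, and the trainer's default AdamW optimizer already uses betas (0.9, 0.999) and epsilon 1e-08:

```python
from transformers import TrainingArguments

# Sketch mirroring the listed hyperparameters; not the exact training script.
args = TrainingArguments(
    output_dir="outputs",  # assumption: taken from the model-index name
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    # Defaults already match: adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8
)
```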

### Framework versions

- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.0
- Tokenizers 0.15.0