Training Hardware

This model was trained on Intel GPU hardware:

  • GPU: Intel(R) Data Center GPU Max 1100
  • CPU: Intel(R) Xeon(R) Platinum 8480+
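
The +cxx11.abi PyTorch build listed under Framework versions is the Intel Extension for PyTorch (IPEX) distribution, which exposes Intel GPUs through the XPU backend. Below is a minimal device-selection sketch under that assumption; it is illustrative, not the actual training script.

```python
import torch
import intel_extension_for_pytorch as ipex  # importing registers the "xpu" device with PyTorch

# Prefer the Data Center GPU Max when the XPU backend is available; fall back to CPU.
device = torch.device("xpu" if torch.xpu.is_available() else "cpu")
print(f"Training device: {device}")
```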

phiverse-1_5-lora-tuned-dolly.sft

This model is a LoRA fine-tuned version of microsoft/phi-1_5 on the Dolly 15k dataset from Databricks (databricks/databricks-dolly-15k). It achieves the following results on the evaluation set:

  • Loss: 2.4899
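
For reference, if this is the mean token-level cross-entropy, it corresponds to a perplexity of exp(2.4899) ≈ 12.1 on the evaluation set.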

Intended uses

Text generation tasks, including chatbots, content generation, and other instruction-following NLP applications.
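
A minimal usage sketch, assuming the standard PEFT adapter workflow (the repository id matches this card; the prompt and generation settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer, then attach this LoRA adapter on top.
base = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")
model = PeftModel.from_pretrained(base, "deveshreddy27/phiverse-1_5-lora-tuned-dolly.sft")
model.eval()

prompt = "Give three tips for writing a good summary."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128)  # illustrative length
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```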

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 2
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.05
  • training_steps: 296
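
As a hedged reconstruction, these values map onto a Hugging Face TrainingArguments object roughly as follows; the field names mirror the list above, while the output directory is a placeholder assumption, not taken from the card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="phiverse-1_5-lora-tuned-dolly",  # placeholder path, not from the card
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,  # 2 per device x 8 steps = total train batch size 16
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    max_steps=296,
)
```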

Training results

Training Loss | Epoch  | Step | Validation Loss
2.7682        | 1.6129 | 100  | 2.5802
2.5575        | 3.2258 | 200  | 2.4899

Framework versions

  • PEFT 0.11.1
  • Transformers 4.41.2
  • PyTorch 2.1.0.post0+cxx11.abi
  • Datasets 2.19.1
  • Tokenizers 0.19.1
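
To confirm a local environment matches these versions, a quick check (assumes the packages above are installed):

```python
import datasets, peft, tokenizers, torch, transformers

# Print each installed version for comparison against the list above.
for pkg in (peft, transformers, torch, datasets, tokenizers):
    print(pkg.__name__, pkg.__version__)
```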

Model tree for deveshreddy27/phiverse-1_5-lora-tuned-dolly.sft

  • Base model: microsoft/phi-1_5
  • This model: LoRA adapter for the base model