# sft-zephyr-7b-beta-v1
This model is a fine-tuned version of HuggingFaceH4/zephyr-7b-beta on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.4927
## Model description
More information needed
## Intended uses & limitations
More information needed
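No usage details have been published yet. Pending them, the model should load like any other transformers causal LM. The snippet below is a generic sketch, not an official example: the repo id comes from this card, while the prompt, dtype, and generation settings are illustrative assumptions.

```python
# Generic loading sketch (assumption: standard transformers causal-LM API).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hllj/sft-zephyr-7b-beta-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 7B parameters; half precision keeps memory manageable
    device_map="auto",           # requires accelerate to be installed
)

prompt = "Explain what fine-tuning means in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```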
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 1000
- mixed_precision_training: Native AMP
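The configuration below is a minimal sketch, not the original training script: it shows how the hyperparameters above map onto `transformers.TrainingArguments`. The output directory is hypothetical, model and dataset loading are omitted, and multi-GPU distribution is assumed to come from the launcher (e.g. `torchrun` or `accelerate launch`).

```python
# Minimal sketch (assumption: standard transformers Trainer API, v4.35);
# only the hyperparameters listed above are pinned, everything else is default.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sft-zephyr-7b-beta-v1",  # hypothetical output path
    learning_rate=3e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    adam_beta1=0.9,                      # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    max_steps=1000,
    fp16=True,                           # "Native AMP" mixed precision
)
```

With a model, tokenizer, and dataset in hand, these arguments would be passed to `transformers.Trainer` (or plausibly `trl.SFTTrainer`, given the `sft-` prefix), though the card does not say which was used.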
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0538        | 0.19  | 50   | 1.1364          |
| 0.7744        | 0.37  | 100  | 0.7777          |
| 0.5936        | 0.56  | 150  | 0.6507          |
| 0.5449        | 0.74  | 200  | 0.6087          |
| 0.501         | 0.93  | 250  | 0.5840          |
| 0.5752        | 1.12  | 300  | 0.5552          |
| 0.4542        | 1.3   | 350  | 0.5419          |
| 0.5115        | 1.49  | 400  | 0.5243          |
| 0.4224        | 1.67  | 450  | 0.5188          |
| 0.4486        | 1.86  | 500  | 0.5055          |
| 0.3865        | 2.04  | 550  | 0.5038          |
| 0.4193        | 2.23  | 600  | 0.5048          |
| 0.4294        | 2.42  | 650  | 0.4995          |
| 0.4077        | 2.6   | 700  | 0.5014          |
| 0.4667        | 2.79  | 750  | 0.4985          |
| 0.4226        | 2.97  | 800  | 0.4937          |
| 0.4195        | 3.16  | 850  | 0.4920          |
| 0.338         | 3.35  | 900  | 0.4923          |
| 0.3943        | 3.53  | 950  | 0.4926          |
| 0.3953        | 3.72  | 1000 | 0.4927          |
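Validation loss flattens out around 0.49 from roughly step 850 onward, suggesting the model had largely converged within the 1000-step budget.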
### Framework versions
- Transformers 4.35.2
- PyTorch 2.1.0
- Datasets 2.15.0
- Tokenizers 0.15.0
## Model tree for hllj/sft-zephyr-7b-beta-v1
mistralai/Mistral-7B-v0.1 → HuggingFaceH4/zephyr-7b-beta → hllj/sft-zephyr-7b-beta-v1