
saqr-7b-instruct

This model is a fine-tuned version of tiiuae/falcon-7b, trained on the ultrachat_200k, UltraFeedback, and gsm8k datasets.

Model description

This model was produced by supervised fine-tuning (SFT) of tiiuae/falcon-7b on nearly the same datasets as those used for Zephyr-7B-beta.
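
Below is a minimal inference sketch, not the authors' own script. It assumes the checkpoint is the PEFT adapter published as Menouar/saqr-7b-instruct (per the model tree) and that the tokenizer was saved alongside the adapter; if not, load the tokenizer from the tiiuae/falcon-7b base model instead.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "Menouar/saqr-7b-instruct"

# AutoPeftModelForCausalLM reads the adapter config, loads the
# tiiuae/falcon-7b base model, and applies the adapter weights on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,  # assumption: pick a dtype your hardware supports
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

prompt = "What is 12 * 7? Think step by step."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```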

Training and evaluation data

The evaluation carried out during training can be found here.

The full evaluation results are available on the Hugging Face Leaderboard here.

Training procedure

The training procedure can be found here.

Training hyperparameters

The following hyperparameters were used during training (a hedged configuration sketch follows the list):

  • learning_rate: 0.0002
  • train_batch_size: 7
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 14
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: constant
  • lr_scheduler_warmup_ratio: 0.03
  • training_steps: 5000
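
As a rough illustration, these values map onto transformers.TrainingArguments along the following lines. This is a hedged reconstruction, not the actual training script (which is linked above); output_dir is a placeholder.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="saqr-7b-instruct",   # placeholder path
    learning_rate=2e-4,
    per_device_train_batch_size=7,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # 7 x 2 = 14 total train batch size
    optim="adamw_torch",             # Adam with betas=(0.9, 0.999), eps=1e-8 (the defaults)
    lr_scheduler_type="constant",    # note: a plain "constant" schedule ignores warmup
    warmup_ratio=0.03,
    max_steps=5000,
)
```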

Training results

Framework versions

  • PEFT 0.8.2
  • Transformers 4.38.0.dev0
  • PyTorch 2.1.0+cu121
  • Datasets 2.17.0
  • Tokenizers 0.15.1
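
To check that a local environment matches these pins, something like the snippet below works. Note that 4.38.0.dev0 was a development snapshot of Transformers, so matching it exactly may require installing Transformers from source.

```python
import datasets
import peft
import tokenizers
import torch
import transformers

# Versions listed in this model card.
expected = {
    "PEFT": (peft, "0.8.2"),
    "Transformers": (transformers, "4.38.0.dev0"),
    "PyTorch": (torch, "2.1.0+cu121"),
    "Datasets": (datasets, "2.17.0"),
    "Tokenizers": (tokenizers, "0.15.1"),
}
for name, (module, want) in expected.items():
    have = module.__version__
    status = "OK" if have == want else f"expected {want}"
    print(f"{name}: {have} ({status})")
```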