This page provides a step-by-step guide to fine-tuning the deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B model using the SFTTrainer. By following these steps, you can adapt the model to perform specific tasks more effectively.
Supervised Fine-Tuning (SFT) is a critical process for adapting pre-trained language models to specific tasks or domains. While pre-trained models have impressive general capabilities, they often need to be customized to excel at particular use cases. SFT bridges this gap by further training the model on relevant datasets with human-validated examples.
The supervised structure of the task enables models to learn specific output formats and behaviors. For example, SFT can teach a model to consistently use chat templates or follow domain-specific guidelines. The decision to use Supervised Fine-Tuning depends on two primary factors:

1. Template control: SFT allows precise control over the model's output structure. This is particularly valuable when you need the model to generate responses in a specific chat template format, follow strict output schemas, or maintain consistent styling across responses.

2. Domain adaptation: When working in specialized domains, SFT helps align the model with domain-specific requirements by teaching domain terminology and concepts, enforcing professional standards, and handling specialized queries appropriately.

Before starting SFT, weigh whether careful prompting of an existing model can already meet these requirements; this evaluation will help determine if SFT is the right approach for your needs.
The supervised fine-tuning process requires a task-specific dataset structured with input-output pairs. Each pair should consist of an input prompt, the expected model response, and any additional context or metadata. The data format must be compatible with your model's chat template. Here's an example of the kind of dataset suitable for supervised fine-tuning:
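In the conversational format used by most chat templates, each example is a list of role-tagged messages. The snippet below is a minimal illustrative sketch; the message contents are placeholders, not drawn from an actual dataset:

```python
# One training example in the conversational format: the trainer renders
# these messages with the tokenizer's chat template before tokenization.
example = {
    "messages": [
        {"role": "user", "content": "What is supervised fine-tuning?"},
        {
            "role": "assistant",
            "content": "Supervised fine-tuning continues training a "
            "pre-trained model on labeled input-output pairs for a task.",
        },
    ]
}
```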
The SFTTrainer configuration requires consideration of several parameters that control the training process:
| Parameter | Description |
| --- | --- |
| num_train_epochs | The total number of training epochs to run (e.g., 1-3 epochs) |
| per_device_train_batch_size | The number of training examples processed per GPU in one forward/backward pass (typically 2-8 for large models) |
| gradient_accumulation_steps | Number of batches to accumulate gradients over before performing an optimizer update, effectively increasing batch size |
| learning_rate | The step size for model weight updates during training (typically 2e-4 for fine-tuning) |
| gradient_checkpointing | Memory optimization technique that trades computation for memory by recomputing intermediate activations |
| warmup_ratio | Portion of training steps used for learning rate warmup (e.g., 0.03 = 3% of steps) |
| logging_steps | Frequency of logging training metrics and progress (e.g., every 10 steps) |
| save_strategy | When to save model checkpoints (e.g., "epoch" saves after each epoch, "steps" saves every N steps) |
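To see how the batch-size parameters interact: the effective batch size per optimizer step is the per-device batch size times the number of gradient-accumulation steps times the number of GPUs. A quick sketch with illustrative values:

```python
# Effective batch size seen by the optimizer at each weight update.
per_device_train_batch_size = 4   # examples per GPU per forward/backward pass
gradient_accumulation_steps = 8   # batches accumulated before one update
num_gpus = 1                      # assumption: single-GPU training

effective_batch_size = (
    per_device_train_batch_size * gradient_accumulation_steps * num_gpus
)
print(effective_batch_size)  # 32
```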
Training Duration Parameters:

- num_train_epochs: Controls total training duration
- max_steps: Alternative to epochs, sets maximum number of training steps

Batch Size Parameters:

- per_device_train_batch_size: Determines memory usage and training stability
- gradient_accumulation_steps: Enables larger effective batch sizes

Learning Rate Parameters:

- learning_rate: Controls size of weight updates
- warmup_ratio: Portion of training used for learning rate warmup

Monitoring Parameters:

- logging_steps: Frequency of metric logging
- eval_steps: How often to evaluate on validation data
- save_steps: Frequency of model checkpoint saves

We will use the SFTTrainer class from the Transformers Reinforcement Learning (TRL) library, which is built on top of the transformers library. Here's a complete example using the TRL library:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer
import torch

# Set device
device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the model and tokenizer (the checkpoint this page fine-tunes)
model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load dataset ("all" selects the full mixture; smoltalk ships several configs)
dataset = load_dataset("HuggingFaceTB/smoltalk", "all")

# Configure trainer
training_args = SFTConfig(
    output_dir="./sft_output",
    max_steps=1000,
    per_device_train_batch_size=4,
    learning_rate=5e-5,
    logging_steps=10,
    save_steps=100,
    evaluation_strategy="steps",  # renamed to eval_strategy in recent transformers
    eval_steps=50,
)

# Initialize trainer
trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,  # recent TRL versions take processing_class= instead
)

# Start training
trainer.train()
```
Training loss typically follows three distinct phases: an initial sharp drop as the model rapidly adapts to the new data distribution, a gradual stabilization as learning slows, and finally convergence, where loss values level off as training completes.
Effective monitoring involves tracking quantitative metrics and qualitatively evaluating the model's outputs. The key quantitative metrics to watch are training loss, validation loss, learning rate, and gradient norm.
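One way to inspect these quantitative metrics is the trainer's log history, which records each logged step as a dictionary; a minimal sketch, assuming the trainer object from the example above:

```python
# log_history is a list of dicts such as {"loss": ..., "step": ...} for
# training logs and {"eval_loss": ..., "step": ...} for evaluation logs.
eval_logs = [e for e in trainer.state.log_history if "eval_loss" in e]

for entry in eval_logs:
    print(f"step {entry['step']}: eval_loss = {entry['eval_loss']:.4f}")
```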
As training progresses, the loss curve should gradually stabilize. The key indicator of healthy training is a small gap between training and validation loss, suggesting the model is learning generalizable patterns rather than memorizing specific examples. The absolute loss values will vary depending on your task and dataset.
The graph above shows a typical training progression. Notice how both training and validation loss decrease sharply at first, then gradually level off. This pattern indicates the model is learning effectively while maintaining generalization ability.
Several patterns in the loss curves can indicate potential issues:
If the validation loss starts increasing while training loss continues to decrease, your model is likely overfitting to the training data. Consider reducing the number of training steps, increasing the size and diversity of your dataset, or validating your data quality. One programmatic guard is sketched below.
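As one option, transformers ships an EarlyStoppingCallback that halts training once the validation metric stops improving; it assumes the training arguments enable periodic evaluation and set load_best_model_at_end=True with a metric_for_best_model:

```python
from transformers import EarlyStoppingCallback

# Stop training if eval_loss fails to improve for 3 consecutive evaluations
# (requires load_best_model_at_end=True and metric_for_best_model="eval_loss"
# in the training arguments).
trainer.add_callback(EarlyStoppingCallback(early_stopping_patience=3))
```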
If the loss doesn't show significant improvement, the model might be learning too slowly (try increasing the learning rate), struggling with the task (check your data quality and task complexity), or hitting architecture limitations (consider a different model).
Extremely low loss values could suggest memorization rather than learning. This is particularly concerning if the loss drops much faster than expected, the model performs poorly on new but similar examples, or its outputs lack diversity.
Monitor both the loss values and the model's actual outputs during training. Sometimes the loss can look good while the model develops unwanted behaviors. Regular qualitative evaluation of the model's responses helps catch issues that metrics alone might miss.
In section 11.4 we will learn how to evaluate the model using benchmark datasets. For now, we will focus on the qualitative evaluation of the model.
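A simple qualitative check is to generate responses to held-out prompts and read them; a minimal sketch, assuming the model and tokenizer loaded earlier (the prompt is a placeholder):

```python
# Render a chat prompt with the model's template and generate a reply.
messages = [{"role": "user", "content": "Explain gradient accumulation briefly."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```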
After completing SFT, consider these follow-up actions: evaluate the fine-tuned model on held-out data, save and share the checkpoint, and explore preference-alignment techniques such as Direct Preference Optimization (DPO) to further refine the model's behavior.
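Saving and sharing can use the trainer's built-in helpers; a minimal sketch (the output path is illustrative, and pushing to the Hub assumes you are authenticated):

```python
# Save the final model and tokenizer to a local directory (path illustrative).
trainer.save_model("./sft_output/final")

# Optionally upload to the Hugging Face Hub (requires prior authentication,
# e.g. via `huggingface-cli login`).
trainer.push_to_hub()
```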
You've learned how to fine-tune models using SFT! To continue your learning, explore the TRL documentation for more advanced training options.