---
license: mit
datasets:
- Replete-AI/code_bagel
language:
- en
tags:
- code
pipeline_tag: text-generation
---
### Base model
[microsoft/Phi-3-medium-128k-instruct](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct)
### Datasets
[Replete-AI/code_bagel](https://huggingface.co/datasets/Replete-AI/code_bagel)
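The split and preprocessing applied before SFT are not described in this card; as a minimal sketch, the dataset can be pulled with the `datasets` library:

```python
# Minimal sketch only: assumes the dataset exposes a default "train" split.
from datasets import load_dataset

ds = load_dataset("Replete-AI/code_bagel", split="train")
print(ds)
```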
### SFT training code
[LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)
### Train loss & train state
- Trainable params: 27852800 || all params: 13988090880 || trainable%: 0.1991
- Total training duration: 69h 18m 17s

```json
{
  "epoch": 0.9999679800589659,
  "total_flos": 1.446273483573748e+20,
  "train_loss": 0.44412665014957775,
  "train_runtime": 249497.725,
  "train_samples_per_second": 13.018,
  "train_steps_per_second": 0.102
}
```
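The card does not state the LoRA rank, alpha, dropout, or target modules behind the 0.1991% trainable-parameter figure, so the following is only a rough PEFT sketch with placeholder values showing how such a count is typically produced:

```python
# Hedged sketch: the exact LoRA hyperparameters used for this model are NOT
# stated in this card; the values below are illustrative placeholders only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-medium-128k-instruct",
    torch_dtype="auto",
    trust_remote_code=True,
)

lora_config = LoraConfig(
    r=16,                                   # assumed rank, not confirmed by the card
    lora_alpha=32,                          # assumed
    lora_dropout=0.05,                      # assumed
    target_modules=["qkv_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(base, lora_config)
# Prints a line of the same form as the one reported above:
# "trainable params: ... || all params: ... || trainable%: ..."
peft_model.print_trainable_parameters()
```

With the actual configuration used for this run, `print_trainable_parameters()` would report the 27852800 / 13988090880 split quoted above.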
### Sample inference code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

torch.random.manual_seed(0)

# Path to the model weights on the author's machine; replace with your own
# local path or a Hub repository id.
model_id = "/home/models/phi3/Phi-3-medium-128k-instruct/"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
    {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
    {"role": "user", "content": "Write a python code to train llm mode by lora and sft ?"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

# Greedy decoding: do_sample=False, so the temperature setting is effectively ignored.
generation_args = {
    "max_new_tokens": 4096,
    "return_full_text": False,
    "temperature": 0.0,
    "do_sample": False,
}

output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
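The snippet above loads weights from a local directory. If the merged fine-tuned weights are published under the repository id used elsewhere in this card (an assumption on my part), `model_id` can presumably point at the Hub instead:

```python
# Assumption: the merged fine-tuned weights are available on the Hub under
# this repository id (taken from this card); otherwise keep a local path.
model_id = "REILX/Phi-3-medium-128k-code-instruct"
```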
### Outputs by microsoft/Phi-3-medium-128k-instruct
To train a language model using Lora and SFT (Supervised Fine-tuning), you can follow these steps:

1. Install the required libraries:
```python
!pip install transformers
```
- Import the necessary libraries:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Trainer, TrainingArguments
```
- Load the pre-trained Lora model and tokenizer:
```python
tokenizer = AutoTokenizer.from_pretrained("microsoft/llama-65b-lora")
model = AutoModelForCausalLM.from_pretrained("microsoft/llama-65b-lora")
```
- Load the SFT dataset:
```python
# Replace with the path to your SFT dataset
train_dataset =...
```
- Define the training arguments:
```python
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    warmup_steps=500,
    weight_decay=0.01,
    logging_dir='./logs',
    logging_steps=10,
)
```
- Create a custom Trainer for Lora and SFT:
```python
class LoraSFTTrainer(Trainer):
    def compute_loss(self, model, inputs):
        labels = inputs.pop("labels")
        outputs = model(**inputs, use_cache=False)
        lm_logits = outputs.logits
        loss_fct = torch.nn.CrossEntropyLoss()
        masked_lm_loss = loss_fct(lm_logits.view(-1, self.model.config.vocab_size), labels.view(-1))
        return masked_lm_loss
```
- Initialize the trainer and train the model:
```python
trainer = LoraSFTTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=None,
)
trainer.train()
```
### Outputs by REILX/Phi-3-medium-128k-code-instruct
```python
import torch
from transformers import RobertaForCausalLM, RobertaTokenizer

# Load the model and tokenizer
model = RobertaForCausalLM.from_pretrained('roberta-base')
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')

# Load the data
data = [
    "This is a sample sentence.",
    "Another sample sentence."
]

# Tokenize the data
input_ids = [tokenizer.encode(sentence, add_special_tokens=True) for sentence in data]

# Train the model
model.train()
for input_id in input_ids:
    outputs = model(input_id, labels=input_id)
    loss = outputs.loss
    loss.backward()
    optimizer.step()

# Save the model
model.save_pretrained('my_model')
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1200
- num_epochs: 1.0
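For reference, these settings map roughly onto a Hugging Face `TrainingArguments` configuration like the sketch below. With 8 devices, a per-device batch size of 1 and 16 gradient-accumulation steps give the total train batch size of 128 (1 × 16 × 8). Anything not listed above (output directory, precision mode) is an assumption on my part, not something the card specifies.

```python
# Hedged sketch: reconstructs the listed hyperparameters as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./phi3-medium-code-sft",  # hypothetical output directory
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,       # 1 * 16 * 8 GPUs = 128 total train batch size
    num_train_epochs=1.0,
    lr_scheduler_type="cosine",
    warmup_steps=1200,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    bf16=True,                            # assumed; precision is not stated in the card
)
```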