---
library_name: peft
tags:
  - sfttrainer
base_model: NousResearch/Llama-2-7b-chat-hf
hub-id: Akil15/finetune_llama_v_0.1
license: apache-2.0
datasets:
  - Akil15/evol_20k_filter
language:
  - en
pipeline_tag: text-generation
---

Model Card for Akil15/finetune_llama_v_0.1:

This is a supervised PEFT (Parameter-Efficient Fine-Tuning) adaptation of the base conversational Llama-2-7b-chat model into a code-focused chatbot, trained with the SFT Trainer on an Alpaca-style instruction dataset.
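
PEFT here means training a small set of adapter weights on top of the frozen base model instead of updating all of its parameters. Below is a minimal sketch of such a setup, assuming a LoRA adapter; the rank, alpha, and dropout values are illustrative and not stated on this card.

```python
# Sketch only: LoRA is assumed as the PEFT method; r, lora_alpha and
# lora_dropout below are illustrative values, not taken from this card.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("NousResearch/Llama-2-7b-chat-hf")
lora_config = LoraConfig(
    r=64,               # assumed adapter rank
    lora_alpha=16,      # assumed scaling factor
    lora_dropout=0.1,   # assumed dropout
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```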

Training:

The model was trained for under one epoch with the SFT Trainer, for up to 200 steps, with stopping guided by the observed step-wise training loss.

Training Args:

{ "num_train_epochs": 1, "fp16": false, "bf16": false, "per_device_train_batch_size": 4, "per_device_eval_batch_size": 4, "gradient_accumulation_steps": 4, "gradient_checkpointing": true, "max_grad_norm": 0.3, "learning_rate": 2e-4, "weight_decay": 0.001, "optim": "paged_adamw_32bit", "lr_scheduler_type": "cosine", "max_steps": -1, "warmup_ratio": 0.03, "group_by_length": true, "save_steps": 0, "logging_steps": 25, "base_lrs": [0.0002, 0.0002], "last_epoch": 199, "verbose": false, "_step_count": 200, "_get_lr_called_within_step": false, "_last_lr": [0.00019143163189119916, 0.00019143163189119916], "lr_lambdas": [{}, {}] }

Usage:

The trained adapter weights are injected into the base model using the PeftModel.from_pretrained() method, as shown in the sketch below.
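
A minimal loading and generation sketch, assuming the adapter weights are those published under Akil15/finetune_llama_v_0.1 (the hub id above); the prompt and generation settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "NousResearch/Llama-2-7b-chat-hf"
adapter_id = "Akil15/finetune_llama_v_0.1"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id, torch_dtype=torch.float16, device_map="auto"
)
# Inject the trained PEFT adapter weights into the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

prompt = "Write a Python function that reverses a string."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```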

Git-Repos:

Refer to this GitHub repo for notebooks: https://github.com/mr-nobody15/codebot_llama/tree/main

Framework versions:

  • PEFT 0.7.1