Overview

This is an experimental LoRA for Llama-3 8B or Llama-3 8B Instruct. The goal of this LoRA is to bring back some of the expressive prose and writing style of the base model, and to shift the rather dry style of the 8B Instruct.
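
If you want to try the adapter, here is a minimal loading sketch using 🤗 `peft`. The adapter repo id is taken from this page; the base model id `meta-llama/Meta-Llama-3-8B` is assumed (swap in the Instruct variant if you prefer that base):

```python
# Minimal sketch: load this LoRA on top of the base model with peft.
# Base model id is an assumption; use Meta-Llama-3-8B-Instruct if desired.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B"
adapter_id = "Blackroot/Llama3-RP-Lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the adapter

prompt = "The rain had not stopped for three days."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```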

Data Processing

Raw Data to Custom Data

  • Started with ~40GB of raw data
  • Aggressively selected for writing style (see the filtering sketch after this list)
  • Cleaned multiple times, both automatically and by hand
  • Final dataset size: 78.4MB
  • No synthetic data is present in the final dataset
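
The actual cleaning code has not been released yet (see below), so the following is only a hypothetical illustration of the kind of heuristic style filter such a pipeline might use. Every threshold and helper here is an invented assumption, not the author's method:

```python
# Hypothetical sketch only: NOT the released cleaning code. Shows one kind
# of heuristic filter a style-focused pipeline might apply.
import re

def keep_sample(text: str) -> bool:
    """Crude style heuristics; all thresholds are invented for illustration."""
    if len(text) < 500:                      # drop very short fragments
        return False
    words = text.split()
    if len(set(words)) / len(words) < 0.3:   # drop highly repetitive text
        return False
    if re.search(r"https?://|<\w+>", text):  # drop link/markup-heavy scrapes
        return False
    return True

raw_samples = ["...load raw shards here..."]
cleaned = [t for t in raw_samples if keep_sample(t)]
```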

Data Cleaning Code

Once training is verified to be beneficial (and the cleaning is therefore likely to have been correct), the data cleaning code will be released.

Training Procedure

Training Framework

Training was done QLoRA-style via Axolotl. The full training script, along with the data processing scripts, will likewise be released once the procedure is verified to benefit the model in a useful way.

Training Parameters

  • Base Model: Llama 3 8B (non-Instruct)
  • r: 4
  • alpha: 8
  • dropout: 0
  • warmup: 45 steps
  • epochs: 2
  • lr schedule: constant with warmup
  • optimizer: AdamW (torch fused)
  • weight decay: 0.1
  • adam_b1: 0.9
  • adam_b2: 0.999
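
For reference, a rough `peft`/`transformers` equivalent of the settings above. The actual run used Axolotl, so anything not listed on this card (target modules, learning rate value, precision) is an assumption:

```python
# Rough peft equivalent of the LoRA hyperparameters listed above.
# The actual run used Axolotl; values marked "assumed" are not from the card.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=4,                       # from the card
    lora_alpha=8,              # from the card
    lora_dropout=0.0,          # from the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="llama3-rp-lora",
    num_train_epochs=2,                        # from the card
    warmup_steps=45,                           # from the card
    lr_scheduler_type="constant_with_warmup",  # from the card
    learning_rate=2e-4,                        # assumed; not stated on the card
    optim="adamw_torch_fused",                 # from the card
    weight_decay=0.1,                          # from the card
    adam_beta1=0.9,                            # from the card
    adam_beta2=0.999,                          # from the card
    bf16=True,                                 # assumed
)
```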