This repo is my fine-tuned LoRA of LLaMA, trained on the first four volumes of *The Eminence in Shadow* and *KonoSuba* to test its ability to retain new information. Training used alpaca-lora on an RTX 3090 for 10 hours with the following settings (a rough sketch of the equivalent training call follows the list):

- micro batch size: 2
- batch size: 64
- epochs: 35
- learning rate: 3e-4
- LoRA rank: 256
- LoRA alpha: 512
- LoRA dropout: 0.05
- cutoff length: 352
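
Roughly, those settings map to the training entry point in tloen/alpaca-lora's `finetune.py` as sketched below. This is an approximation, not the exact command used: `base_model` and `data_path` are placeholders, and the dataset file name is hypothetical.

```python
# Rough sketch of the training run via tloen/alpaca-lora's finetune.py.
# base_model and data_path are placeholders, not the actual files used for this LoRA.
from finetune import train  # finetune.py from the alpaca-lora repo

train(
    base_model="decapoda-research/llama-7b-hf",  # placeholder: substitute the actual base LLaMA weights
    data_path="./eminence_konosuba.json",        # placeholder: instruction-formatted novel excerpts
    output_dir="./lora-out",
    batch_size=64,
    micro_batch_size=2,
    num_epochs=35,
    learning_rate=3e-4,
    cutoff_len=352,
    lora_r=256,
    lora_alpha=512,
    lora_dropout=0.05,
)
```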
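
For reference, a minimal sketch of loading the adapter for inference with transformers and peft; the base model ID and adapter path below are placeholders rather than the actual repo IDs.

```python
# Minimal inference sketch using transformers + peft.
# Both IDs below are placeholders; point them at the real base model and this adapter.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_model_id = "decapoda-research/llama-7b-hf"  # placeholder base weights
adapter_path = "./lora-out"                      # placeholder: this repo's adapter files

tokenizer = LlamaTokenizer.from_pretrained(base_model_id)
model = LlamaForCausalLM.from_pretrained(
    base_model_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_path)  # attach the LoRA weights
model.eval()

# Sample prompt probing recall of the training material.
prompt = "Who leads Shadow Garden?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```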