---
library_name: transformers
license: apache-2.0
datasets:
  - nbeerbower/GreatFirewall-DPO
  - nbeerbower/Schule-DPO
  - nbeerbower/Purpura-DPO
  - nbeerbower/Arkhaios-DPO
  - jondurbin/truthy-dpo-v0.1
  - antiven0m/physical-reasoning-dpo
  - flammenai/Date-DPO-NoAsterisks
  - flammenai/Prude-Phi3-DPO
  - Atsunori/HelpSteer2-DPO
  - jondurbin/gutenberg-dpo-v0.1
  - nbeerbower/gutenberg2-dpo
  - nbeerbower/gutenberg-moderne-dpo
base_model:
  - nbeerbower/EVA-abliterated-Qwen2.5-7B
---


🧪 **Part of an Experiment**

This model is meant to investigate the effects of changing LoRA rank while keeping the rest of the fine-tuning recipe identical.

# Dumpling-Qwen2.5-7B-1k-r32

nbeerbower/EVA-abliterated-Qwen2.5-7B finetuned on:

- nbeerbower/GreatFirewall-DPO
- nbeerbower/Schule-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Arkhaios-DPO
- jondurbin/truthy-dpo-v0.1
- antiven0m/physical-reasoning-dpo
- flammenai/Date-DPO-NoAsterisks
- flammenai/Prude-Phi3-DPO
- Atsunori/HelpSteer2-DPO
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo

## Method

QLoRA ORPO tune with 2x RTX 3090 for 2 epochs.

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# Compute dtype for 4-bit quantization (bfloat16 assumed; not specified in the card)
torch_dtype = torch.bfloat16

# QLoRA config
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch_dtype,
    bnb_4bit_use_double_quant=True,
)

# LoRA config (rank 32, all attention and MLP projections targeted)
peft_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=['up_proj', 'down_proj', 'gate_proj', 'k_proj', 'q_proj', 'v_proj', 'o_proj']
)
```