---
license: llama3.2
language:
- en
datasets:
- PJMixers-Dev/HailMary-v0.1-KTO
base_model:
- PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B
---
[PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B](https://huggingface.co/PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B) was further trained with KTO (using the `apo_zero_unpaired` loss type) on a mix of instruct, RP, and story-generation datasets. I created the rejected samples by running the SFT model with deliberately bad settings (including logit bias) for every model turn.
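
As an illustration only, that kind of "bad settings" generation could look like the sketch below; the actual prompts, sampling settings, and biased tokens are not documented here, so every value is a placeholder.

```python
# Hypothetical illustration only: the real sampling settings, prompts, and
# biased tokens used to build the rejected turns are not documented here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

sft_id = "PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B"
tokenizer = AutoTokenizer.from_pretrained(sft_id)
model = AutoModelForCausalLM.from_pretrained(
    sft_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Any prompt from the source conversations would go here.
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Write a short story about a lighthouse keeper."}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Deliberately degraded sampling plus a crude logit bias toward one token,
# producing a low-quality "rejected" completion for the same prompt.
bias_token = tokenizer(" the", add_special_tokens=False).input_ids[0]
rejected = model.generate(
    input_ids,
    do_sample=True,
    temperature=3.0,                     # far too hot
    top_p=1.0,
    max_new_tokens=512,
    sequence_bias={(bias_token,): 8.0},  # push the model toward " the"
)
print(tokenizer.decode(rejected[0, input_ids.shape[1]:], skip_special_tokens=True))
```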

The model was only trained at `max_length=6144`, and the run is nowhere near a full epoch because it eventually crashed. So think of this as a test of a test.
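
For reference, a minimal KTO run with TRL that matches the stated loss type and `max_length` might look like the sketch below. Only `loss_type` and `max_length` come from this card; the dataset column handling and every other hyperparameter are assumptions, not the actual recipe.

```python
# Minimal sketch, not the actual training recipe: only `loss_type` and
# `max_length` come from this card; everything else is a placeholder.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

base_id = "PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# KTO uses unpaired data: each row has a prompt, a completion, and a boolean
# label marking it as chosen (True) or rejected (False).
train_dataset = load_dataset("PJMixers-Dev/HailMary-v0.1-KTO", split="train")

training_args = KTOConfig(
    output_dir="LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B",
    loss_type="apo_zero_unpaired",   # APO-zero loss on unpaired samples
    max_length=6144,                 # the only sequence length trained at
    per_device_train_batch_size=1,   # placeholder
    gradient_accumulation_steps=8,   # placeholder
    learning_rate=5e-7,              # placeholder
    num_train_epochs=1,
    bf16=True,
)

trainer = KTOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,      # `tokenizer=` in older TRL releases
)
trainer.train()
```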

---

![train/rewards/chosen/rejected](https://huggingface.co/PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B/resolve/main/images/train_rewards_chosen_rejected.png)
![train/rewards/margins](https://huggingface.co/PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B/resolve/main/images/train_rewards_margins.png)
![train/logits/chosen/rejected](https://huggingface.co/PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B/resolve/main/images/train_logits_chosen_rejected.png)
![train/logps/chosen/rejected](https://huggingface.co/PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B/resolve/main/images/train_logps_chosen_rejected.png)
![train/loss](https://huggingface.co/PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B/resolve/main/images/train_loss.png)
![train/grad_norm](https://huggingface.co/PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B/resolve/main/images/train_grad_norm.png)