Commit History
78c5b19  add gptneox embeddings, fix phi2 inputs, also fix the casting (#1083)
2f2582e  additional logging to get maximum token length of a sequence in the dataset (#1066) [skip ci]
0ce1a65  update sharegpt conversations when chatml chat template is set (#1075) [skip ci]
0f10080  be more robust about checking embedding modules for lora finetunes (#1074) [skip ci]
090c24d  Add: mlflow for experiment tracking (#1059) [skip ci]
c3e8165  fix: torch_dtype mistral default to fp32 (#1050)
732851f  Phi2 rewrite (#1058)
553c80f  streaming multipack for pretraining dataset (#959)
bdfefaf  feature: better device mapping for large models (#918)
f243c21  RL/DPO (#935)
bcc78d8  bump transformers and update attention class map name (#1023)
74532dd  chore(config): clean up old log for Qwen (#1034)
f8ae59b  Adds chat templates (#1022)
4f4d638  [WandB] Push axolotl config to top level wandb files (#1014)
70b46ca  remove landmark attn and xpos rope implementations (#1010)
1ffa386  Feat: Warns to add to modules_to_save when adding tokens or switching special_tokens (#787)
ef24342  fix: switch to using the HuggingFace Transformers NEFT implementation (#941)