Commit History
6840381  Add desc to map/filter (#1162)
cda52dc  support for explicit test_dataset definition for evals (#786)
e799e08  Falcon embeddings (#1149) [skip docker]
32580c1  Vram fix attempt (#1164) [skip ci]
802f966  improve vram use w gradient checkpointing (#1167) [skip ci]
b8e5603  Add mlflow callback for pushing config to mlflow artifacts (#1125) (JohanWork)
eaaeefc  jupyter lab fixes (#1139) [skip ci]
f5a828a  Qwen2 (#1166)
fccb542  make sure the model config loader respects the model_revision too (#1160) [skip-ci]
2ce5c0d  Deprecate max packed sequence len (#1141)
3db5f2f  feat(dataset): add config to keep processed dataset in memory (#1152)
6910e6a  Multipack simplify for Mixtral (#1142)
317fa25  fix bf16 check when preprocessing data (#1140)
1e56b88  fix(preprocess): Make sure dataset not loaded from cache when using preprocess cli (#1136)
7570446  Preprocess dataset size fix (#1131)
8487b97  Add `layers_to_transform` for `lora_config` (#1118) (xzuyn)
0865613  Enable or disable bf16 support based on availability (#1116) (Simon Hällqvist)
da97285  keep gate in fp32 for 16 bit loras (#1105)
78c5b19  add gptneox embeddings, fix phi2 inputs, also fix the casting (#1083)
2f2582e  additional logging to get maximum token length of a sequence in the dataset (#1066) [skip ci]
0ce1a65  update sharegpt conversations when chatml chat template is set (#1075) [skip ci]
0f10080  be more robust about checking embedding modules for lora finetunes (#1074) [skip ci]
090c24d  Add: mlflow for experiment tracking (#1059) [skip ci]
c3e8165  fix: torch_dtype mistral default to fp32 (#1050)
732851f  Phi2 rewrite (#1058)
553c80f  streaming multipack for pretraining dataset (#959)
bdfefaf  feature: better device mapping for large models (#918)
f243c21  RL/DPO (#935)
bcc78d8  bump transformers and update attention class map name (#1023)
74532dd  chore(config): clean up old log for Qwen (#1034)
f8ae59b  Adds chat templates (#1022)
4f4d638  [WandB] Push axolotl config to top level wandb files (#1014)
70b46ca  remove landmark attn and xpos rope implementations (#1010)
1ffa386  Feat: Warns to add to modules_to_save when adding tokens or switching special_tokens (#787)
ef24342  fix: switch to using the HuggingFace Transformers NEFT implementation (#941) (kallewoof)