Commit History
c3e8165  fix: torch_dtype mistral default to fp32 (#1050)  (Nanobit)
732851f  Phi2 rewrite (#1058)  (winglian)
553c80f  streaming multipack for pretraining dataset (#959)
bdfefaf  feature: better device mapping for large models (#918)
f243c21  RL/DPO (#935)  (winglian)
bcc78d8  bump transformers and update attention class map name (#1023)  (winglian)
74532dd  chore(config): clean up old log for Qwen (#1034)  (Nanobit)
f8ae59b  Adds chat templates (#1022)  (mhenrichsen)
4f4d638  [WandB] Push axolotl config to top level wandb files (#1014)  (hamel)
70b46ca  remove landmark attn and xpos rope implementations (#1010)  (winglian)
1ffa386  Feat: Warns to add to modules_to_save when adding tokens or switching special_tokens (#787)  (Nanobit)
ef24342  fix: switch to using the HuggingFace Transformers NEFT implementation (#941)  (kallewoof)
5ea3aa3  Fix Deepspeed loading (#950)  (winglian)
f1f60cb  Flash attn hotfix (#951)  (winglian)
5f79b82  new evals_per_epoch and saves_per_epoch to make things cleaner (#944)  (winglian)
7fabc4d  Mixtral official (#942)  (winglian)
68b227a  Mixtral multipack (#928)  (winglian)
40a6362  support for mamba (#915)  (winglian)
fde091c  fix(tokenizer): handle fast tokenizer properly for bos/eos (#914)  (Nanobit)
992e742  Support device_map=sequential & max_memory config parameters (#903)
a1da39c  Feat(wandb): Refactor to be more flexible (#767)  (Nanobit)
58ec8b1  feature: loss watchdog for terminating training runs that are failing (#899)
3e3229e  fix for qwen w lora (#906)  (winglian)
1115c50  Feat: Add Qwen (#894)  (Nanobit)
7ee3c4c  fix: warning should not show if eval_batch_size not provided (#896)  (Nanobit)
fb12895  Feat: Add warmup_ratio (#893)  (Nanobit)
575a082  fix: revert local dir dataset load (#878)  (Nanobit)
9bf854e  Phi update 202311 (#876)  (winglian)
797f3dd  don't train if eval split is too small (#873)  (winglian)
3cc67d2  Feat: Add dataset loading from S3, GCS (#765)  (Nanobit)
1bc1186  allow overriding of model_config parameters from the YML (#853)  (winglian)
0c2a630  multipack len should use max, not min (#863)  (winglian)
1470650  various bugfixes (#856)  (winglian)
1a6309c  cleanup the old multipack dataloader (#841)  (winglian)
641e6f7  multipack w batch sampler (#795)  (winglian)
b2430ce  use accelerate logging for zero/main logging only  (winglian)
4c834bf  cleanup verbosity a bit  (winglian)
cdc71f7  update table for rwkv4 support, fix process count for dataset (#822)  (winglian)
964d858  fix model parallel (#816)  (winglian)
10388a8  fix(tokenizer): update log order after update (#806)  (Nanobit)
637ed09  fix(config): Set eos/bos to tokenizer if different (#801)  (Nanobit)
827ec3d  refactor neft patch to be more re-usable similar to trl's impl (#796)  (winglian)
e50ab07  Create preprocess CLI (#785)  (casperhansen)