Commit History
chore(readme): update instruction to set config to load from cache (#1030)
b31038a
unverified
added tiny llama examples for lora and qlora (#1027)
c75f916
unverified
committed by Tim Dolan
use recommended setting for use_reentrant w gradient checkpointing (#1021)
4d2e842
unverified
Fix: bf16 support for inference (#981)
3678a6c
unverified
Adds chat templates (#1022)
f8ae59b
unverified
[WandB] Push axolotl config to top level wandb files (#1014)
4f4d638
unverified
add ultrachat prompt strategies (#996)
ba043a3
unverified
feat: remove need to add load_in* during merge (#1017)
f6ecf14
unverified
[Docs] Nit: Remind people to auth to wandb if they are going to use it (#1013)
dec66d7
unverified
Update README.md (#1012)
76357dc
unverified
remove landmark attn and xpos rope implementations (#1010)
70b46ca
unverified
add config to model card (#1005)
85dd4d5
unverified
Set eval_sample_packing to false in mistral config.yaml (#1003)
384b817
unverified
committed by Kevin Sydney
FEAT: add tagging support to axolotl (#1004)
db9094d
unverified
Add an example config for finetuning a 34B model on a 24GB GPU (#1000)
6ef46f8
unverified
committed by Evan Griffiths
set output_router_logits for mixtral config (#995)
628b754
unverified
support for cuda 12.1 (#989)
37820f6
unverified
chore: Update transformers to latest (#986)
7d4185f
unverified
change val size (#992)
93ebec1
unverified
Add tests to Docker (#993)
2e61dc3
unverified
Feat: Warns to add to modules_to_save when adding tokens or switching special_tokens (#787)
1ffa386
unverified
bump actions versions
62ba160
fix mistral prompt assembly (#982)
7bbaac9
unverified
Dockerfile torch fix (#987)
161bcb6
unverified
Update README.md (#966)
d25c34c
unverified
fix: add lr scheduler kwargs to Trainer (#972)
13e9381
unverified
fix for build for nccl in dockerfile (#970)
85de004
unverified
update to latest nccl in docker image (#965)
80ec7af
unverified
update transformers to fix checkpoint saving (#963)
f28e755
unverified
committed by dumpmemory
Fix prompt assembly for llama (#952)
5ada140
unverified
fix: switch to using the HuggingFace Transformers NEFT implementation (#941)
ef24342
unverified
committed by kallewoof
Fix Deepspeed loading (#950)
5ea3aa3
unverified
Flash attn hotfix (#951)
f1f60cb
unverified
fix: remove excessive newlines in system prompt(s) for alpaca (#936)
450e04d
unverified
committed by kallewoof
More hints on what to do with CUDA Out of memory errors (#925)
b0cf397
unverified
committed by Juraj Bednar