Commit History
11d1d60  chore: refactor truthy check and fix mypy (#780)
6c81c61  refactor setup trainer so we can add more hooks (#773)
9b43e7e  disable eval table w sample packing in examples (#778)
2d8def6  simplify by removing duplicate base_model_config (#772)
44c9d01  Fix: Warn when fullfinetune without adapter (#770)
ca84cca  convert exponential notation lr to floats (#771)
32eeeb5  Hotfix for not saving correctly (#762)
afedc47  Fix: Cannot tokenize with bf16 and on cpu (#766)
9923b72  Fix: eval table conflict with eval_sample_packing (#769)
21cf09b  remove lora fused packing test (#758)
15d3a65  Implement fused modules (#747)
a21935f  add to docs (#703)
8966a6f  chore: bump transformers to v4.34.1 to fix tokenizer issue (#745)
e4d1585  Fix DeepSpeed Zero 3 Saving (#709)
70157cc  add a latest tag for regular axolotl image, cleanup extraneous print statement (#746)
3a99495  improve: Enhance code readability of prompt_tokenizers.py (#707)
440c3ab  Fix(model): Linear detected and added to target module with rope linear (#738)
992d57f  catch ConnectionError when checking dataset from HuggingFace (#743)  [Napuh]
91a016f  badge (#739)
a045db0  Mistral: Sliding Window Attention with Flash Attention and Sample Packing (#732)
e1b214c  Clarify custom format example (#729)
3553172  fixes for alpaca w chatml, and don't include attention_mask w mistral for flash attention (#728)
7f2027d  tweak for xformers install w pytorch 2.1.0 (#727)
8d288a2  workaround for installing xformers w torch 2.1.0 (#725)
f30afe4  misc sharegpt fixes (#723)
bfbdba8  pin xformers >= 0.0.22 (#724)
3bd9528  add noisy embedding (#721)  [Maxime]
2aa1f71  fix pytorch 2.1.0 build, add multipack docs (#722)
1c412c7  improve handling of the prepared ds path and other cfg defaults (#701)
490923f  Save Axolotl config as WandB artifact (#716)  [Jan Philipp Harries]
5855dde  fix(doc): update default doc according to arg (#714)
ace70b3  Fix: lowercase `True` values in config (#713)  [atgctg]
11c48c5  fix(doc): Add note on inference w sample packing (#712)
295b266  Get qlora mistral-7b fine tuning working on a single 4090 (#708)  [lukemarsden]
77c84e0  Update README with some explanations (#700)
f91db19  fix unneeded space (#699)
7f2618b  add docker images for pytorch 2.10 (#697)
aca0398  apex not needed as amp is part of pytorch (#696)
29b8f46  Merge pull request #693 from OpenAccess-AI-Collective/update-mistral-example
83a950b  lint
de87ea6  fix multiline for docker (#694)
4c8ddf2  new lr, sample pack
669f1d0  Fix: Higher vram usage for mistral and sample_packing (#691)
d4a88e4  Adding qlora config for Mistral (#675)  [Abhishek Mishra]