Commit History
7f17eff Fix the wrong adapter in qwen2-moe-qlora example (#1501) [skip ci]
6086be8 qwen2_moe support with multipack (#1455)
05b398a fix some of the edge cases for Jamba (#1452)
02af082 Jamba (#1451)
c19d060 turn sample_packing on for training (#1438) [skip ci]
f1ebaa0 chore(config): refactor old mistral config (#1435)
2a1589f strip out hacky qlora-fsdp workarounds now that qlora-fsdp fixes are upstreamed (#1428)
6366b0c Fix Gemma 7b qlora.yml (#1405)
05bcc9e Train parameters exclusively in specific ranges (#1390)
9b6ee83 FSDP + QLoRA (#1378)
8984bf1 Update tinyllama lora.yml to fix eval packing issue (#1362)
170d4d7 chore: enable sample_packing for Gemma (#1351)
f30d062 Add StableLM 2 Example Scripts (#1327) [skip ci]
2752d5f multipack for gemma (#1313)
9e300ac Adding Google's Gemma model (#1312)
6ab69ec Add instructions for playing with qlora model to colab example (#1290)
a7a9a14 fix(examples): remove is_*_derived as it's parsed automatically (#1297)
5a5d474 Add seq2seq eval benchmark callback (#1274)
fac2d98 Add MPS support (#1264)
1c7ed26 lock pytorch (#1247) [skip ci] (JohanWork)
c7cf381 Pretrain transforms (#1261)
4cb7900 PEFT LoftQ (#1222)
5407ddd Update qlora.yml - remove `max_packed_sequence_len` (#1210) [skip ci]
ee0b5f6 add colab example (#1196) [skip ci] (JohanWork)
54d2ac1 Mixtral fixes 20240124 (#1192) [skip ci]
814aee6 Phi2 multipack (#1173)
cc25039 Fine-Tuning Mistral-7b for Real-World Chatbot Applications Using Axolotl (LoRA used) (#1155)
e799e08 Falcon embeddings (#1149) [skip docker]
c1b741d pin model_revision for phi2 (#1123)
732851f Phi2 rewrite (#1058)
553c80f streaming multipack for pretraining dataset (#959)
8ba27f3 fix: lint (#1037)
c75f916 added tiny llama examples for lora and qlora (#1027) (Tim Dolan)
384b817 Set eval_sample_packing to false in mistral config.yaml (#1003) (Kevin Sydney)
6ef46f8 Add an example config for finetuning a 34B model on a 24GB GPU (#1000) (Evan Griffiths)