Commit History

fac2d98 · Add MPS support (#1264) · Maxime, winglian
5698943 · simplify handling for newer multipack patches so they can be added in a single place (#1270) · winglian
73f1bda · Fix bug preventing model_kwargs being injected (#1262) · Zac Brannelly
8c2e05a · relora: magnitude pruning of the optimizer (#1245) · winglian
2d65f47 · fix(model): apply gate fp32 only for mixtral (#1241) · Nanobit, winglian

00568c1 · support for true batches with multipack (#1230) · winglian
c67fb71 · Peft deepspeed resume (#1227) · winglian
25e037f · Support for additional_special_tokens (#1221) [skip ci] · DreamGenX, winglian
8608d80 · Fix typo (#1231) [skip ci] · xhedit
4cb7900 · Peft lotfq (#1222) · winglian

8da1633 · Revert "run PR e2e docker CI tests in Modal" (#1220) [skip ci] · winglian
36d053f · run PR e2e docker CI tests in Modal (#1217) [skip ci] · winglian
e923e62 · more checks and fixes for deepspeed and fsdp (#1208) [skip ci] · winglian
98b4762 · Feat/chatml add system message (#1117) · mhenrichsen (Mads Henrichsen), winglian

08719b9 · fix(log): improve warning to clarify that lora_modules_to_save expect a list (#1197) · Nanobit
54d2ac1 · Mixtral fixes 20240124 (#1192) [skip ci] · winglian
814aee6 · Phi2 multipack (#1173) · winglian
7523d1f · DPO cleanup (#1126) · winglian, plaguss

e799e08 · Falcon embeddings (#1149) [skip docker] · winglian
eaaeefc · jupyter lab fixes (#1139) [skip ci] · winglian
f5a828a · Qwen2 (#1166) · winglian
fccb542 · make sure the model config loader respects the model_revision too (#1160) [skip-ci] · winglian

2ce5c0d · Deprecate max packed sequence len (#1141) · winglian
6910e6a · Multipack simplify for Mixtral (#1142) · winglian
1d70f24 · Add shifted sparse attention (#973) [skip-ci] · jrc (joecummings), winglian
8487b97 · Add `layers_to_transform` for `lora_config` (#1118) · xzuyn

da97285 · keep gate in fp32 for 16 bit loras (#1105) · winglian
78c5b19 · add gptneox embeddings, fix phi2 inputs, also fix the casting (#1083) · winglian
0f10080 · be more robust about checking embedding modules for lora finetunes (#1074) [skip ci] · winglian
c3e8165 · fix: torch_dtype mistral default to fp32 (#1050) · Nanobit

732851f · Phi2 rewrite (#1058) · winglian
bdfefaf · feature: better device mapping for large models (#918) · kallewoof (Karl-Johan Alm), winglian
f243c21 · RL/DPO (#935) · winglian
bcc78d8 · bump transformers and update attention class map name (#1023) · winglian

f8ae59b · Adds chat templates (#1022) · mhenrichsen
41353d2 · feat: expose bnb kwargs (#1018) · Nanobit, hamel
70b46ca · remove landmark attn and xpos rope implementations (#1010) · winglian
1ffa386 · Feat: Warns to add to modules_to_save when adding tokens or switching special_tokens (#787) · Nanobit

5ea3aa3 · Fix Deepspeed loading (#950) · winglian
f1f60cb · Flash attn hotfix (#951) · winglian
7fabc4d · Mixtral official (#942) · winglian
68b227a · Mixtral multipack (#928) · winglian

40a6362 · support for mamba (#915) · winglian
fde091c · fix(tokenizer): handle fast tokenizer properly for bos/eos (#914) · Nanobit
a581e9f · feat: add check for quantized model (#913) · Nanobit, winglian
992e742 · Support device_map=sequential & max_memory config parameters (#903) · Bryan Thornbury, winglian

3e3229e · fix for qwen w lora (#906) · winglian
1115c50 · Feat: Add Qwen (#894) · Nanobit
9bf854e · Phi update 202311 (#876) · winglian
1bc1186 · allow overriding of model_config parameters from the YML (#853) · winglian