Commit History
don't use mask expansion for inference (#392)
1687be6
Feat(doc): Add max_steps to readme (#389)
41ecb45
Feat(config): add max steps (#387)
3c2ad00
Added "epoch" evaluation_strategy (#388)
5d48a10
Feat(config): Add hub_strategy (#386)
73a0b6e
Error msg for sharegpt if conv has less than 2 msg (#379)
63fdb5a
new llama-2 default settings (#370)
fdffef5
don't pass rope_scaling kwarg if it's None (#383)
919246f
bump flash-attn to 2.0.4 for the base docker image (#382)
ffac902
Fix crash when running without CUDA
15f6e57
Feat(doc): Improve sharegpt doc (#378)
729c299
save tokenizer before training starts (#380)
86a91e2
try to detect accelerate and only use device_map=None in that case (#373)
094fc2c
Create FUNDING.yml
2dafa73
fix check for flash attn branching (#377)
343ac84
remove unnecessary local variable
0c96727
simplify `load_tokenizer`
efb3b2c
improve GPU logging to break out pytorch cache and system mem
7b55fe6
quiet noise from llama tokenizer by setting pad token earlier
e029ab3
extract module for working with cfg
8cec513
fix DefaultDict.__or__
a13e45d
revert previous change and build ax images w docker on gpu (#371)
918f1b0
attempt to run non-base docker builds on regular cpu hosts (#369)
c3fde36
Attention mask and position id fixes for packing (#285)
2bb0b78
Fix(save): Save as safetensors (#363)
a276c9c
Add wandb_entity to wandb options, update example configs, update README (#361)
7019509
Fix(model loading): Warn when model revision is passed to gptq (#364)
96bd6ae
Fix(message): Improve error message for bad format (#365)
e37d935
Feat: Add rope scaling (#343)
b521206
feat(merge): save tokenizer on merge (#362)
289d5c4
Merge pull request #355 from tmm1/bitsandbytes-fixes
35c8b90
Update README.md on pretraining_dataset (#360)
fae6ed8
Clarify pre-tokenize before multigpu (#359)
94d03c8
Merge pull request #356 from tmm1/load_model-args
11ddccb
Merge pull request #354 from tmm1/gpu-util
9643121
simplify load_model signature
7181022
Merge pull request #350 from tmm1/group-len-false-examples
f5c11f8
bump to latest bitsandbytes release with major bug fixes
fce40aa
use newer pynvml package
9c31410
log GPU memory usage
e303d64
note pattern when using groups
b4d1d22
update comment for group_by_length
9f99104
set group_by_length to false in examples
36fefcf
ensure enable_input_require_grads is called on model before getting the peft model (#345)
176b888
experimental llama 2 chat support (#296)
3392270
add a basic ds zero3 config (#347)
bb53a16
Update XFormers Attention Monkeypatch to handle Llama-2 70B (GQA) (#339)
10405b9
Added Orca Mini prompt strategy (#263)
c93655c