Commit History
Add support for GPTQ using native transformers/peft (#468)
3355706
add eval benchmark callback (#441)
7657632
customizable ascii art (#506)
548787d
Fix missing 'packaging' wheel (#482)
c500d02
Maxime committed
allow newer deps
c29117a
flash attn pip install (#426)
cf66547
adds color (#425)
0a22847
remove extra accelerate in requirements (#430)
82e111a
Attention mask and position id fixes for packing (#285)
2bb0b78
Merge pull request #355 from tmm1/bitsandbytes-fixes
35c8b90
bump to latest bitsandbytes release with major bug fixes
fce40aa
use newer pynvml package
9c31410
log GPU memory usage
e303d64
pin accelerate so it works with llama2 (#330)
6c9a87c
latest HEAD of accelerate causes 0 loss immediately with FSDP (#321)
9f69c4d
add hf_transfer to requirements for faster hf upload
6dd2e7d
Update requirements.txt
273b3a3
Teknium committed