Commit History
don't pass rope_scaling kwarg if it's None (#383)
919246f
try to detect accelerate and only use device_map=None in that case (#373)
094fc2c
remove unnecessary local variable
0c96727
simplify `load_tokenizer`
efb3b2c
improve GPU logging to break out pytorch cache and system mem
7b55fe6
quiet noise from llama tokenizer by setting pad token earlier
e029ab3
Attention mask and position id fixes for packing (#285)
2bb0b78
Feat: Add rope scaling (#343)
b521206
Merge pull request #356 from tmm1/load_model-args
11ddccb
simplify load_model signature
7181022
log GPU memory usage
e303d64
ensure enable_input_require_grads is called on model before getting the peft model (#345)
176b888
fix typo
2eda9e0
scope flash-attn+qlora fix correctly, scope to llama, add comment
78b9efb
move flash-attn monkey patch alongside the others
312a9fa
ensure flash-attn fixes happen in both adapter/lora modes, and use torch_dtype
248bf90
qlora w flash attention fixes (#333)
77085ea
add peft install back since it doesn't get installed by setup.py (#331)
db2a358
don't resize embeddings to multiples of 32x by default
1066751
Adding logging enhancement
553a86b
support for loading a model by git revision
69a2350
skip explicit model type too if using trust_remote_code
d69da99
don't use llama if trust_remote_code is set since that needs to use AutoModel path
66afb76
optionally define whether to use_fast tokenizer
47d601f
add float16 docs and tweak typehints
88e17ff
style correction
136522f
committed by maciej.karasek
issue #205 bugfix
556fe40
committed by maciej.karasek
Merge branch 'main' into flash-optimum
fd2c981
Merge pull request #187 from OpenAccess-AI-Collective/strip-peft-device-map
93dacba
Merge pull request #177 from NanoCode012/fix/landmark-patch
8002ffb
Merge branch 'main' into strip-peft-device-map
5e616d9
Merge pull request #159 from AngainorDev/patch-1
8e568bb
add check for attr
c9a149f
Fix strict and Lint
b565ecf
match up gradient checkpointing when using lora w config
fe0b768
Fix undefined LlamaForCausalLM and del try except
563b6d8
peft no longer needs device_map
cd0a6f6
Refactor landmark attention patch
919727b
Fix missing cfg.
a808bf9
committed by Angainor Development
Merge pull request #182 from OpenAccess-AI-Collective/fix-llama-ref
0124825
more gpt-neox long ctx fixes
ab5cd28
more tweaks to do pre-training with bettertransformers
1210dc8
add support for optimum bettertransformers
1edc30c
fix for local variable 'LlamaForCausalLM' referenced before assignment
14163c1
Merge branch 'main' into patch-1
79e2a6f
committed by Angainor Development