Commit History
new prompters, misc fixes for missing output dir when using FSDP, and changing max seq len
4ac9e25
Merge pull request #124 from OpenAccess-AI-Collective/xformers-fix
2d0ba3b
remove unused import and update readme
e3c494c
copy xformers attn from ooba since we removed dep on alpaca_lora_4bit
6cb2310
fix up tokenizer config, isort fix
39a208c
split up llama model loading so config can be loaded from base config and models can be loaded from a path
2520ecd
Fix incorrect rebase
594e72b
fix relative path for fixtures
cfcc549
Apply isort then black
37293dc
Fix mypy typing
e9650d3
Lint models.py
f4e5d86
fix relative path for fixtures
e65aeed
refactor: fix previous refactors
56f9ca5
Refactor to use DictDefault instead
8bd7a49
Convert attrdict to addict
bdfe7c9
Merge pull request #67 from OpenAccess-AI-Collective/refactor-tokenizer-load
0d4a7f4
Merge branch 'main' into refactor/rename-4b-to-gptq
147241c
fix automatic linear module detection for LoRA when none are set already
4c90633
refactor(param): rename load_4bit config param to gptq
dd00657
Thytu committed
load the tokenizer separately from the model
32e6fe9
Add cfg.lora_target_linear
9196237
qlora and 4bit check so we are able to merge and unload
1987e5c
fix merge conflict failure, black format
7b5e762
fixes to make qlora actually work
34c99f9
fix tokenizer loading, got openllama 3b working
e396654
stray s
f523a08
cfg.cfg fix, also de-dupe lora module list
676d7da
fix tuple add to list
a8771b0
attempt to find linear modules for qlora
ffd1043
apply black formatting
ce34d64
Merge branch 'main' of github.com:OpenAccess-AI-Collective/axolotl into dev
ce694e2
remove unneeded code, add validation
1f5d83e
fix: handle AutoTokenizer from untrusted source
88ad05d
Valentin De Matos committed