fsdp requires params to be the same dtype too (#493) 98bf76e winglian committed on Aug 28, 2023
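FSDP flattens the parameters it wraps into a single buffer, so a model with mixed-precision params (e.g. fp32 norm layers next to bf16 linears) fails to wrap. A minimal sketch of the kind of cast this entails, using a hypothetical `cast_params_to_dtype` helper rather than the repo's actual code:

```python
import torch
from torch import nn

def cast_params_to_dtype(model: nn.Module, dtype: torch.dtype = torch.bfloat16) -> nn.Module:
    """Hypothetical helper: FSDP flattens params into one flat buffer,
    so every floating-point param it wraps must share a single dtype."""
    for param in model.parameters():
        if param.dtype.is_floating_point and param.dtype != dtype:
            param.data = param.data.to(dtype)
    return model
```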
Fix(tokenizer): Make sure to add pad token for CodeLlamaTokenizer (#489) 4c37bd0 Nanobit committed on Aug 28, 2023
fix: finetuned model inference needs the dtype fix to work with flash-attn f311df9 Maxime committed on Aug 26, 2023
Fix(tokenizer): Fix condition to add pad token (#477) 71bd062 Nanobit committed on Aug 25, 2023
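Both tokenizer fixes above concern making sure a pad token actually exists before padding is used. A hedged sketch of the usual transformers pattern (the exact condition in the commit may differ):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

# Llama-family tokenizers often ship without a pad token, so batch padding
# fails or silently reuses eos. Add one only when it is actually missing.
if tokenizer.pad_token is None:
    tokenizer.add_special_tokens({"pad_token": "<pad>"})
    # the model's embedding table must then be resized to match:
    # model.resize_token_embeddings(len(tokenizer))
```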
ReLoRA implementation (with quantization) (#322) bde3c5a chargoddard, winglian committed on Aug 24, 2023
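ReLoRA periodically folds the trained low-rank adapter into the base weights and restarts the adapter, so successive merges can accumulate a higher-rank total update than a single LoRA run allows. An illustrative single-layer sketch (not axolotl's implementation; `scaling` is the usual alpha/r factor):

```python
import torch
from torch import nn

@torch.no_grad()
def merge_and_reset(base: nn.Linear, lora_A: nn.Linear, lora_B: nn.Linear, scaling: float) -> None:
    """One ReLoRA interval boundary for a single linear layer (illustrative):
    fold the low-rank update into the base weight, then restart the adapter."""
    # W <- W + scaling * (B @ A); repeating this across intervals lets the
    # summed update exceed the rank of any single adapter.
    base.weight += scaling * (lora_B.weight @ lora_A.weight)
    nn.init.kaiming_uniform_(lora_A.weight)  # fresh low-rank direction (init choice varies)
    nn.init.zeros_(lora_B.weight)            # B = 0, so the merged model's output is unchanged
```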
workaround so training doesn't hang when packed dataloader batches aren't even (#461) c69faee winglian committed on Aug 23, 2023
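When ranks end up with different numbers of batches, distributed collectives desynchronize and training hangs. A hedged illustration of the simplest mitigation, dropping the ragged final batch (the commit's actual workaround may differ):

```python
from torch.utils.data import DataLoader

dataset = list(range(100))  # stand-in for a packed dataset

# With drop_last=True every rank sees the same number of full batches, so no
# rank is left waiting on a collective for a batch that never arrives.
loader = DataLoader(dataset, batch_size=8, drop_last=True)
```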
recast lora layers, norm, lm_head + embed token weights per original qlora (#393) 96deb6b winglian committed on Aug 21, 2023
support user-defined prompters, pretokenized datasets in config, local parquet, local arrow files (#348) d2e7f27 winglian committed on Aug 20, 2023
use save_strategy from config if available (#434) b3f5e00 winglian committed on Aug 19, 2023
Fix(config): Update handling of deepspeed config (#404) c01015f Nanobit committed on Aug 15, 2023
use context manager to run things on rank 0 before others (#397) fc2d6be winglian committed on Aug 15, 2023
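The rank0-first pattern lets rank 0 do one-time work (downloading or preprocessing a dataset) while the other ranks wait, then releases them to read from the warm cache. A sketch of the assumed shape of such a context manager, not the repo's code verbatim:

```python
import contextlib
import torch.distributed as dist

@contextlib.contextmanager
def zero_first(local_rank: int):
    """Run the body on rank 0 first; all other ranks wait, then follow."""
    if local_rank != 0:
        dist.barrier()  # wait here until rank 0 has finished the body
    yield
    if local_rank == 0:
        dist.barrier()  # rank 0 is done: release the waiting ranks

# usage:
# with zero_first(local_rank):
#     dataset = load_and_cache_dataset()
```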
don't pass rope_scaling kwarg if it's None (#383) 919246f winglian committed on Aug 13, 2023
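Some transformers versions reject an explicit rope_scaling=None, so the kwarg should only be forwarded when it is actually set. A minimal sketch of the pattern (`cfg_rope_scaling` is an assumed name for the config value):

```python
from transformers import AutoModelForCausalLM

cfg_rope_scaling = None  # e.g. {"type": "linear", "factor": 2.0} from the YAML config

# only forward rope_scaling when it is actually configured
model_kwargs = {}
if cfg_rope_scaling is not None:
    model_kwargs["rope_scaling"] = cfg_rope_scaling

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", **model_kwargs)
```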
try to detect accelerate and only use device_map=None in that case (#373) 094fc2c tmm1 committed on Aug 13, 2023
Attention mask and position id fixes for packing (#285) 2bb0b78 winglian committed on Aug 12, 2023
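With sample packing, several short sequences share one row, so position ids must restart at every boundary and the attention mask must keep the sub-sequences distinct. An illustrative sketch, assuming the mask labels each packed sequence with a distinct integer (0 = padding); this is a simplified model of the idea, not the commit's code:

```python
import torch

def packed_position_ids(attention_mask: torch.Tensor) -> torch.Tensor:
    """Rebuild position ids for a packed row: each sub-sequence
    (marked 1, 2, 3, ... in the mask) restarts counting from 0."""
    position_ids = torch.zeros_like(attention_mask)
    for seq_id in attention_mask.unique():
        if seq_id == 0:
            continue  # padding positions stay 0
        sel = attention_mask == seq_id
        position_ids[sel] = torch.arange(int(sel.sum()), device=attention_mask.device)
    return position_ids

mask = torch.tensor([1, 1, 1, 2, 2, 3, 3, 3, 3, 0])
print(packed_position_ids(mask))  # tensor([0, 1, 2, 0, 1, 0, 1, 2, 3, 0])
```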
Add wandb_entity to wandb options, update example configs, update README (#361) 7019509 Morgan McGuire, winglian committed on Aug 12, 2023
Fix(model loading): Warn when model revision is passed to gptq (#364) 96bd6ae Nanobit committed on Aug 12, 2023