Generalizing the chat_template prompt strategy (#1660) [skip ci] (cc11c6b) committed by Keith Stevens on May 28, 2024
Switch to parallel FFD bin packing algorithm. (#1619) (367b2e8) committed by winglian and daaave on May 23, 2024
support for custom messages field in sharegpt (#1651) (bbfed31) committed by winglian on May 23, 2024
enable loraplus setting for dpo trainer (#1646) (a27d5e1) committed by thepowerfuldeez on May 22, 2024
Fix llama3 chat_template (extra <|eot_id|> on last turn) (#1635) (7c2bf30) committed by leonardlin and winglian on May 21, 2024
FIX: max_length and max_prompt_length were not being sent to ORPOTrainer (#1584) (1e1921b) committed by Ali Mosavian and winglian on May 14, 2024
feat: Add LLaMA-3 instruct prompt strategies for fine-tuning (#1553) (50421c8) committed by Ram and winglian on May 11, 2024
adding llama3 fastchat conversation monkeypatch (#1539) (b32c08f) committed by Antoni-Joan Solergibert and winglian on May 10, 2024
ignore the fsdp_config section too (#1606) [skip ci] (fff06af) committed by winglian on May 9, 2024
make sure to save the lora adapter at the end of RL/dpo training (#1573) (796a085) committed by winglian on May 8, 2024
Pass deepspeed and fsdp as None explicitly when merging adapters to allow custom device_map (#1575) (9e1480e) committed by chiragjn on May 7, 2024
Gradio configuration parameters (#1591) (3367fca) committed by Marijn Stollenga and winglian on May 6, 2024
Pass weakref to model in the SIGINT handler to free up model post train function (#1581) (dde02fc) committed by chiragjn and winglian on May 3, 2024
FIX: TRL trainer preprocessing step was running in one process (#1583) (b9bb169) committed by Ali Mosavian on May 3, 2024
Add debug option for RL dataset preprocessing (#1404) (cc5d31e) committed by abhinand and Nanobit on Apr 30, 2024
make sure everything stays in the same dtype when using dpo + FSDP (#1559) (68601ec) committed by winglian on Apr 22, 2024
Add support for Gemma chat template (#1530) (60f5ce0) committed by Haoxiang-Wang and winglian on Apr 21, 2024
wrap prepared_ds_path in str() to avoid TypeError in fsspec package (#1548) (7477a53) committed by Frank Ruis and winglian on Apr 21, 2024
Update SaveAxolotlConfigtoWandBCallback to use artifact instead of save (#1483) (5ed2939) committed by tcapelle and winglian on Apr 9, 2024
use locale agnostic separator to make large nums easier to read (#1503) (da9b1a3) committed by winglian on Apr 9, 2024
WIP: Support table logging for mlflow, too (#1506) (057fa44) committed by Dave Farago and winglian on Apr 9, 2024
Correctly handle splits for datasets.arrow_dataset.Dataset objects (#1504) (8fa0785) committed by scottifer8 and winglian on Apr 9, 2024
add field to sft dataset pydantic for completion support (#1497) (ff01c45) committed by winglian on Apr 9, 2024
ignore issues with calculating # params when printing (#1493) (2fa65b9) committed by winglian on Apr 8, 2024
drop empty token from beginning if tokenizer has no bos_token (in the case of qwen) (#1490) (934fc85) committed by winglian on Apr 7, 2024
feat: validate sample packing requires flash_attention (#1465) (bf4cd67) committed by Nanobit on Apr 5, 2024
don't use deepspeed or fsdp when merging loras (#1479) (87ca3f9) committed by winglian on Apr 5, 2024
refactor utils.data module for line count linter (#1476) (e0fcef4) committed by winglian on Apr 4, 2024