silly-v0.2
Finetune of Mistral-Nemo-Base-2407 designed to emulate the writing style of character.ai models.
- 2 epochs of SFT on RP data, then about an hour of PPO on 8xH100 with POLAR-7B RFT
- Kind of wonky; if you're dealing with longer messages, you may need to decrease your temperature
- ChatML chat format (see the usage sketch after this list)
- Reviews:
  - "It's typically good at writing, very good for a 12B, coherent in RP, follows context and starts conversations well."
  - "I do legit like it; it feels good to use. When it gives me stable output, the output is high quality and on task. It's got small-model stupidity where basic logic holds but it invents things or forgets them (feels like a small effective context window, maybe?), which, to be clear, is perfectly fine. Very good at synthesizing and inferring information provided in context on a higher level."
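
Below is a minimal sketch of chatting with the model through its ChatML template via `transformers`, assuming the ChatML template is baked into the tokenizer config; the repo id `silly-v0.2` is a placeholder for wherever you load the weights from, and the sampling settings are just a starting point (per the note above, drop the temperature if longer replies get wonky).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "silly-v0.2"  # placeholder: substitute the actual repo id or local path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are {{char}}, chatting with {{user}}."},
    {"role": "user", "content": "Hey! What are you up to?"},
]

# Render the ChatML-style prompt the finetune expects.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Lower temperature tends to keep longer replies coherent.
outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```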
This is mostly a proof-of-concept, showcasing that POLAR reward models can be very useful for "out of distribution" tasks like roleplaying. If you're working on your own roleplay finetunes, please consider using POLAR!