Experimental de-slopped, de-aligned, EQ-tuned model, trained via ORPO on 4k synthetic preference pairs for 3 epochs on a single A100; inspired by Gutenberg-DPO.
Despite success on the de-slopping front, I seem to have totalled the model's prefrontal cortex in the process. So it goes. Training data is everything.
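For reference, a minimal sketch of the kind of ORPO run described above, using TRL's `ORPOTrainer`. The model ID, dataset path, and hyperparameters below are placeholders, not the actual training config for this checkpoint.

```python
# Minimal ORPO fine-tuning sketch with TRL. Model ID, dataset path, and
# hyperparameters are illustrative placeholders, not the real run config.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "your-base-model"        # placeholder base model
dataset_path = "pairs.jsonl"        # placeholder: ~4k rows of prompt/chosen/rejected

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# ORPO expects preference pairs: each row has "prompt", "chosen", "rejected".
dataset = load_dataset("json", data_files=dataset_path, split="train")

config = ORPOConfig(
    output_dir="orpo-out",
    num_train_epochs=3,              # matches the 3-epoch run described above
    per_device_train_batch_size=2,   # placeholder; sized for a single A100
    gradient_accumulation_steps=8,
    learning_rate=8e-6,
    beta=0.1,                        # weight of the odds-ratio term in the ORPO loss
    max_length=2048,
    max_prompt_length=1024,
    bf16=True,
    logging_steps=10,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,  # newer TRL versions call this `processing_class`
)
trainer.train()
```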