I recently added a recipe to ellora that improves the reasoning capabilities of Gemma-3-1B using self-supervised learning. The model now shows step-by-step thinking in <think> tags before answering.
Logic puzzle accuracy: 61% → 84%. Three hours of training on a single GPU. 🧠
Used GRPO, where the model generates multiple responses per prompt and learns to prefer the ones with better reasoning. Works surprisingly well for making smaller models more transparent.
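The core GRPO idea can be sketched in a few lines: sample a group of completions for the same prompt, score each with a reward function, and compute group-relative advantages (reward minus the group mean, normalized by the group's standard deviation). The reward function and sample responses below are hypothetical stand-ins for illustration, not the actual code from the recipe.

```python
def reasoning_reward(response: str) -> float:
    """Toy self-reward: prefer responses that show their work in <think>
    tags and end with an explicit answer. (Assumption for illustration.)"""
    score = 0.0
    if "<think>" in response and "</think>" in response:
        score += 1.0
    if "Answer:" in response:
        score += 1.0
    return score

def grpo_advantages(rewards: list[float]) -> list[float]:
    """Group-relative advantages: (r - mean) / std over one sampled group."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5 or 1.0  # avoid division by zero when all rewards tie
    return [(r - mean) / std for r in rewards]

# One group of sampled completions for the same prompt:
group = [
    "<think>2 + 2 = 4</think> Answer: 4",  # reasons, then answers
    "Answer: 4",                            # answers without reasoning
    "four",                                 # neither
]
advantages = grpo_advantages([reasoning_reward(r) for r in group])
```

Responses that reason before answering get a positive advantage and are reinforced; the policy update itself (clipped ratio loss, as in the TRL `GRPOTrainer`) is omitted here for brevity.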
📓 Colab: https://colab.research.google.com/github/codelion/ellora/blob/main/Ellora_Recipe_2_Reasoning_LoRA_with_Self-Rewarding_GRPO.ipynb
🤗 Model: codelion/gemma-3-1b-it-reasoning-grpo-lora
💻 Code: https://github.com/codelion/ellora