---
license: apache-2.0
tags:
- unsloth
- trl
- sft
- code
- reasoning
datasets:
- nvidia/OpenCodeReasoning
language:
- en
base_model:
- Qwen/Qwen3-0.6B
pipeline_tag: text-generation
library_name: transformers
---

# Qwen3-0.6B-Code-Expert
This project performs full fine-tuning of the Qwen3-0.6B language model to enhance its code reasoning and generation capabilities. Training was conducted exclusively on the [nvidia/OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning) dataset, and the model was optimized in bfloat16 (bf16) precision.
## Training Procedure
### Dataset Preparation
- The `nvidia/OpenCodeReasoning` dataset was used.
- Each example consists of code snippets paired with detailed step-by-step reasoning in Chain-of-Thought (CoT) style (see the preparation sketch below).
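
A minimal data-preparation sketch, assuming each example exposes an `input` field (problem statement) and an `output` field (CoT solution); the actual schema and split names may differ, so consult the dataset card.

```python
from datasets import load_dataset

# Load the CoT dataset. The split name is an assumption; check the dataset
# card for the configs/splits actually published.
dataset = load_dataset("nvidia/OpenCodeReasoning", split="train")

def to_chat(example):
    # Pair each problem ("input") with its step-by-step solution ("output")
    # as a two-turn chat that SFTTrainer can render via the chat template.
    # Both field names are assumptions about the schema.
    return {
        "messages": [
            {"role": "user", "content": example["input"]},
            {"role": "assistant", "content": example["output"]},
        ]
    }

dataset = dataset.map(to_chat, remove_columns=dataset.column_names)
```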
### Model Loading and Configuration
- Qwen3-0.6B base model weights were loaded via the `unsloth` library in bf16 precision.
- Full fine-tuning (`full_finetuning=True`) was applied to all layers for optimal adaptation to code reasoning, as shown in the sketch below.
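
A minimal loading sketch under the settings above; `max_seq_length` is an assumed value, while bf16 and `full_finetuning=True` come from the description.

```python
import torch
from unsloth import FastLanguageModel

# Load the base model in bf16 with full fine-tuning enabled, as described
# above. max_seq_length is an assumed value, not a reported setting.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-0.6B",
    max_seq_length=4096,        # assumption; set to match your data
    dtype=torch.bfloat16,       # bf16 precision
    load_in_4bit=False,         # full-precision weights, no quantization
    full_finetuning=True,       # update all layers instead of adapters
)
```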
### Supervised Fine-Tuning
- Employed the Hugging Face TRL library with the Supervised Fine-Tuning (SFT) approach.
- The model was trained to generate correct code solutions along with the corresponding reasoning chains (see the training sketch below).
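
A minimal training sketch with TRL's `SFTTrainer`; every hyperparameter shown is an illustrative assumption, not a reported value.

```python
from trl import SFTConfig, SFTTrainer

# Only the use of TRL's SFT approach and bf16 training comes from the
# description above; all hyperparameters below are illustrative assumptions.
trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,
    train_dataset=dataset,       # the "messages"-formatted dataset from above
    args=SFTConfig(
        output_dir="qwen3-0.6b-code-expert",
        per_device_train_batch_size=8,    # assumption
        gradient_accumulation_steps=4,    # assumption
        learning_rate=2e-5,               # assumption
        num_train_epochs=1,               # assumption
        bf16=True,                        # matches the bf16 training above
        logging_steps=10,
    ),
)
trainer.train()
```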
## Purpose and Outcome
- The model’s capacity for understanding, reasoning about, and generating code was significantly improved through specialized, single-dataset training in bf16 precision.
- Outputs include both intermediate reasoning steps and final code solutions, enabling transparent and interpretable code generation (see the inference sketch below).
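
A minimal inference sketch with `transformers`; the repository id is a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "your-username/Qwen3-0.6B-Code-Expert" is a placeholder repository id.
model_id = "your-username/Qwen3-0.6B-Code-Expert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

messages = [
    {"role": "user",
     "content": "Write a Python function that checks whether a string is a palindrome."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=1024)

# The decoded completion contains the reasoning chain followed by the code.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```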
## License
This project is licensed under the Apache License 2.0. See the LICENSE file for details.