---
language: en
license: other
tags:
- qwen
- grpo
- instruct
- fine-tuned
- reasoning
- 3b
- menda
- chat
- transformers
library_name: transformers
datasets:
- gsm8k
model-index:
- name: Menda-3b-Optim-100
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      type: gsm8k
      name: GSM8K
    metrics:
    - name: Accuracy
      type: accuracy
      value: 70.0
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      type: mmlu
      name: MMLU (Overall)
    metrics:
    - name: Accuracy
      type: accuracy
      value: 70.35
---
# Menda-3b-Optim-100: Optimized GRPO-Tuned Qwen2.5 Model
Menda-3b-Optim-100 is a fine-tuned version of Qwen2.5-3B-Instruct, trained for 100 steps with an optimized GRPO (Group Relative Policy Optimization) recipe. It shows significantly improved performance on reasoning benchmarks and achieves the highest MMLU score among all Menda-3B checkpoints.
## Model Details
- **Base Model**: Qwen/Qwen2.5-3B-Instruct
- **Training Method**: Optimized GRPO with enhanced reward functions
- **Training Steps**: 100
- **Parameters**: 3 billion
- **Context Length**: 32K tokens
- **Training Data**: GSM8K (mathematical reasoning)
- **Chat Template**: Uses the Qwen2.5 (ChatML-style) chat template
## Optimization Improvements
This model uses several key optimizations over the standard GRPO approach:
1. **Higher Learning Rate**: 2e-5 (4x higher than standard)
2. **Improved Scheduler**: Cosine with restarts
3. **Enhanced Reward Functions** (a sketch follows this list):
- Continuous correctness rewards with partial credit
- Multi-component reasoning quality assessment
- Format validation with both strict and soft checks
4. **Adjusted Batch Processing**: Optimized gradient accumulation
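The exact reward code is not published with this checkpoint, but a minimal sketch of the kind of correctness and format rewards described above might look like the following. Everything here (the `<reasoning>`/`<answer>` tags, the weightings, the partial-credit scheme) is an illustrative assumption, not the actual training code:
```python
import re

def correctness_reward(predicted: str, target: str) -> float:
    """Continuous correctness score with partial credit for near misses."""
    try:
        pred, ref = float(predicted), float(target)
    except ValueError:
        # Non-numeric answers fall back to a binary exact-match check.
        return 1.0 if predicted.strip() == target.strip() else 0.0
    if pred == ref:
        return 1.0
    # Partial credit that decays with the relative error (assumed scheme).
    rel_error = abs(pred - ref) / (abs(ref) + 1e-8)
    return max(0.0, 0.5 * (1.0 - rel_error))

def format_reward(completion: str) -> float:
    """Format validation with a strict check and a softer fallback."""
    strict = re.search(
        r"<reasoning>.+?</reasoning>\s*<answer>.+?</answer>",
        completion,
        re.DOTALL,
    )
    if strict:
        return 0.5
    # Soft check: answer tags are present even if the overall layout is off.
    return 0.25 if "<answer>" in completion and "</answer>" in completion else 0.0

# A close-but-wrong numeric answer still earns partial credit.
print(correctness_reward("71", "72"))  # ~0.49
print(format_reward("<reasoning>...</reasoning><answer>72</answer>"))  # 0.5
```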
## Benchmark Results
Menda-3b-Optim-100 has been evaluated on several standard benchmarks:

| Benchmark | Task Type | Accuracy |
|-----------|-----------|----------|
| GSM8K | Mathematical Reasoning | 70.0% |
| OpenBookQA | Knowledge-based QA | 20.0% (40.0% normalized) |
### MMLU Performance
| MMLU Category | Score |
|---------------|-------|
| Overall | 70.35% |
| Humanities | 76.15% |
| Social Sciences | 76.67% |
| STEM | 61.58% |
| Other | 71.54% |
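Scores in this range are typically measured with EleutherAI's lm-evaluation-harness. The exact commands and harness version behind the numbers above are not recorded here, so the snippet below is only an assumed, minimal reproduction sketch (task names, few-shot settings, and batch size may differ from the original runs):
```python
import lm_eval  # pip install lm-eval

# Assumed reproduction setup, not necessarily the configuration used for
# the scores reported above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=weathermanj/Menda-3b-Optim-100,dtype=auto",
    tasks=["gsm8k", "mmlu", "openbookqa"],
    batch_size=8,
)
print(results["results"])
```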
## Key Strengths
- **Highest MMLU Score**: This checkpoint achieves the highest overall MMLU score (70.35%) among all Menda-3B checkpoints.
- **Strong Mathematical Reasoning**: Excellent 70% performance on GSM8K, demonstrating strong mathematical problem-solving capabilities.
- **Balanced Performance**: Maintains strong performance across diverse knowledge domains.
- **Efficient Training**: Achieves superior results with minimal training (only 100 steps).
- **Subject-Specific Excellence**: Scores a perfect 100% on the MMLU Logical Fallacies, Medical Genetics, Professional Psychology, and College Biology subsets.
## Chat Format
This model uses the standard Qwen2.5 (ChatML-style) chat template. For best results when prompting the model directly, format your input as follows:
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
Your question here<|im_end|>
<|im_start|>assistant
```
When using the model through the Hugging Face Transformers library, this template is applied automatically by `tokenizer.apply_chat_template`, as shown in the chat example below.
## Usage Examples
### Basic Usage with Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "weathermanj/Menda-3b-Optim-100"

# device_map="auto" places the weights on GPU when one is available;
# torch_dtype="auto" keeps the checkpoint's native precision.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

prompt = "Explain the concept of machine learning in simple terms."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=300)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### Chat Usage with Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "weathermanj/Menda-3b-Optim-100"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."}
]

# Render the conversation with the chat template and append the assistant
# prefix so the model knows to start its reply.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)

# Strip the prompt tokens so only the newly generated reply is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## Training Configuration
The model was trained using the optimized GRPO methodology with the following configuration:
- **LoRA Rank**: 128
- **Learning Rate**: 2e-5
- **Optimizer**: AdamW (8-bit)
- **Batch Size**: 1 per device
- **Gradient Accumulation Steps**: 8
- **Scheduler**: Cosine with restarts
- **Training Samples**: 100 examples from GSM8K
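The full training script is not included in this repository. As a rough sketch of how the configuration above could be expressed with TRL's `GRPOTrainer` and PEFT (the trainer choice, the dataset mapping, and the toy reward function are assumptions, not the exact code used to train this checkpoint):
```python
from datasets import load_dataset
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer

# 100 GSM8K examples, matching "Training Samples" above. GRPOTrainer
# expects a "prompt" column, so map the GSM8K question onto it.
dataset = (
    load_dataset("gsm8k", "main", split="train[:100]")
    .map(lambda ex: {"prompt": ex["question"]})
)

def exact_match_reward(completions, answer, **kwargs):
    # Toy stand-in with TRL's reward signature (one float per completion);
    # the actual run used the enhanced correctness/format rewards above.
    return [1.0 if a.split("####")[-1].strip() in c else 0.0
            for c, a in zip(completions, answer)]

peft_config = LoraConfig(r=128, lora_alpha=128, task_type="CAUSAL_LM")

training_args = GRPOConfig(
    output_dir="menda-3b-optim-100",
    learning_rate=2e-5,                        # 4x the usual GRPO default
    lr_scheduler_type="cosine_with_restarts",
    optim="adamw_bnb_8bit",                    # 8-bit AdamW
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_generations=8,   # must divide the effective generation batch (1 x 8)
    max_steps=100,
)

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-3B-Instruct",
    reward_funcs=exact_match_reward,
    args=training_args,
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()
```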
## License
This model inherits the license of the base Qwen2.5-3B-Instruct model. Please refer to the [Qwen2.5-3B-Instruct license](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE) for details.