This model is intended to be uncensored; it is also capable of role-play and reasoning.
Reasoning Mode:
The model first thinks through the reasoning process, then provides the answer. The reasoning should be wrapped in <think> </think> tags, with the answer following, i.e., <think> reasoning process here </think> answer.
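As a sketch, here is one way to invoke this mode with the `transformers` library. The system prompt wording below is an assumption based on the description above, not an official prompt shipped with the model.

```python
# Minimal sketch: prompt the model to use its <think> reasoning format.
# The system prompt text is assumed from the card, not model-provided.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bunnycore/Llama-3.2-3B-ToxicKod"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {
        "role": "system",
        "content": (
            "Think about the reasoning process in the mind first, then provide "
            "the answer. Wrap the reasoning within <think> </think> tags, i.e., "
            "<think> reasoning process here </think> answer."
        ),
    },
    {"role": "user", "content": "What is 17 * 23?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=512)
# Print only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```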
Models Merged
The following models were included in the merge:
- bunnycore/Llama-3.2-3B-KodCode-R1
- bunnycore/Llama-3.2-3b-RP-Toxic-R1
Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  # Pivot model
  - model: unsloth/Llama-3.2-3B-Instruct
  # Target models
  - model: bunnycore/Llama-3.2-3B-KodCode-R1
  - model: bunnycore/Llama-3.2-3b-RP-Toxic-R1
merge_method: sce
base_model: unsloth/Llama-3.2-3B-Instruct
parameters:
  select_topk: 1.2
dtype: bfloat16
```
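To reproduce the merge, the config above can be fed to mergekit. The sketch below writes it to disk and shells out to the `mergekit-yaml` CLI; the config filename and output directory are arbitrary choices, not part of this card.

```python
# Sketch: write the merge config to disk and run mergekit on it.
# Assumes mergekit is installed (pip install mergekit).
import subprocess

CONFIG = """\
models:
  # Pivot model
  - model: unsloth/Llama-3.2-3B-Instruct
  # Target models
  - model: bunnycore/Llama-3.2-3B-KodCode-R1
  - model: bunnycore/Llama-3.2-3b-RP-Toxic-R1
merge_method: sce
base_model: unsloth/Llama-3.2-3B-Instruct
parameters:
  select_topk: 1.2
dtype: bfloat16
"""

with open("config.yml", "w") as f:
    f.write(CONFIG)

# mergekit-yaml <config> <output-dir> writes the merged weights to <output-dir>.
subprocess.run(["mergekit-yaml", "config.yml", "./Llama-3.2-3B-ToxicKod"], check=True)
```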
Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Benchmark | Metric | Value |
|---|---|---|
| Avg. | | 21.27 |
| IFEval (0-shot) | strict accuracy | 63.19 |
| BBH (3-shot) | normalized accuracy | 22.98 |
| MATH Lvl 5 (4-shot) | exact match | 16.99 |
| GPQA (0-shot) | acc_norm | 2.13 |
| MuSR (0-shot) | acc_norm | 1.43 |
| MMLU-PRO (5-shot) | accuracy | 20.89 |
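These numbers can in principle be re-checked locally with EleutherAI's lm-evaluation-harness. The sketch below is an assumption-laden example: it presumes a harness version that ships the Open LLM Leaderboard task group, and `leaderboard_ifeval` is the assumed task name for IFEval (0-shot).

```python
# Sketch: re-run one leaderboard task locally with lm-evaluation-harness.
# Task names and score normalization may differ from the leaderboard's
# own evaluation pipeline, so expect small discrepancies.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=bunnycore/Llama-3.2-3B-ToxicKod,dtype=bfloat16",
    tasks=["leaderboard_ifeval"],  # assumed task name for IFEval (0-shot)
)
print(results["results"])
```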