This is a merge of pre-trained language models created using mergekit.
This model was merged using the TIES merge method, with nbeerbower/Llama3.1-Allades-8B as the base.
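In brief, TIES merging trims each fine-tune's delta from the base down to its largest-magnitude entries (controlled by `density`), elects a per-parameter sign by weighted majority, and averages only the contributions that agree with that sign. The toy sketch below illustrates the idea on flat vectors; the names and example values are hypothetical, and mergekit's actual implementation additionally handles per-tensor weights, normalization, and int8 masking.

```python
import numpy as np

# Toy illustration of TIES merging on flat parameter vectors.
# Hypothetical tensors; density/weight values mirror the YAML config below.

def trim(delta, density):
    """Keep only the largest-magnitude `density` fraction of entries, zero the rest."""
    k = max(1, int(round(density * delta.size)))
    threshold = np.sort(np.abs(delta))[-k]
    return np.where(np.abs(delta) >= threshold, delta, 0.0)

def ties_merge(base, deltas, weights, density):
    trimmed = [trim(d, density) * w for d, w in zip(deltas, weights)]
    stacked = np.stack(trimmed)
    # Elect a sign per parameter from the summed weighted deltas.
    elected_sign = np.sign(stacked.sum(axis=0))
    # Keep only contributions that agree with the elected sign, then average them.
    agree = np.where(np.sign(stacked) == elected_sign, stacked, 0.0)
    counts = np.maximum((agree != 0).sum(axis=0), 1)
    return base + agree.sum(axis=0) / counts

base = np.array([0.1, -0.2, 0.3, 0.0])
model_a = base + np.array([0.4, -0.1, 0.0, 0.2])   # hypothetical fine-tune A
model_b = base + np.array([-0.3, -0.2, 0.1, 0.2])  # hypothetical fine-tune B
merged = ties_merge(base, [model_a - base, model_b - base], weights=[0.5, 0.5], density=0.5)
print(merged)
```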
The following models were included in the merge:

* deepseek-ai/DeepSeek-R1-Distill-Llama-8B
* DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
    parameters:
      density: 0.5
      weight: 0.5
  - model: DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: nbeerbower/Llama3.1-Allades-8B
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
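To reproduce the merge, the configuration above can be saved to a file (e.g. `config.yaml`, a hypothetical path) and passed to mergekit. Below is a minimal sketch, assuming mergekit is installed and using its Python entry points (`MergeConfiguration`, `run_merge`, `MergeOptions`) as shown in the mergekit README; the `mergekit-yaml config.yaml ./merged-model` CLI is the simpler route.

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (saved as config.yaml — hypothetical path).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge and write the result to ./merged-model (hypothetical output path).
run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```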
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 20.65 |
| IFEval (0-Shot) | 34.60 |
| BBH (3-Shot) | 23.04 |
| MATH Lvl 5 (4-Shot) | 33.84 |
| GPQA (0-shot) | 7.38 |
| MuSR (0-shot) | 3.92 |
| MMLU-PRO (5-shot) | 21.13 |