---
base_model:
- ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3
- AiCloser/Qwen2.5-32B-AGI
- nbeerbower/Dumpling-Qwen2.5-32B-v2
- rinna/qwen2.5-bakeneko-32b-instruct
- Sao10K/32B-Qwen2.5-Kunou-v1
- allura-org/Qwen2.5-32b-RP-Ink
- EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
- TheSkullery/Q2.5-Hydroblated-R1-32B-v2.5
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method, with [TheSkullery/Q2.5-Hydroblated-R1-32B-v2.5](https://huggingface.co/TheSkullery/Q2.5-Hydroblated-R1-32B-v2.5) as the base. In broad terms, SCE keeps only the fraction of parameter elements with the highest variance across the donor models (controlled by `select_topk`, here 0.16) and fuses those into the base model.

### Models Merged

The following models were included in the merge:
* [ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3](https://huggingface.co/ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3)
* [AiCloser/Qwen2.5-32B-AGI](https://huggingface.co/AiCloser/Qwen2.5-32B-AGI)
* [nbeerbower/Dumpling-Qwen2.5-32B-v2](https://huggingface.co/nbeerbower/Dumpling-Qwen2.5-32B-v2)
* [rinna/qwen2.5-bakeneko-32b-instruct](https://huggingface.co/rinna/qwen2.5-bakeneko-32b-instruct)
* [Sao10K/32B-Qwen2.5-Kunou-v1](https://huggingface.co/Sao10K/32B-Qwen2.5-Kunou-v1)
* [allura-org/Qwen2.5-32b-RP-Ink](https://huggingface.co/allura-org/Qwen2.5-32b-RP-Ink)
* [EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: TheSkullery/Q2.5-Hydroblated-R1-32B-v2.5
merge_method: sce
dtype: float32
out_dtype: bfloat16
tokenizer_source: Qwen/Qwen2.5-32B
parameters:
  select_topk: 0.16
models:
  - model: EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
  - model: ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3
  - model: allura-org/Qwen2.5-32b-RP-Ink
  - model: Sao10K/32B-Qwen2.5-Kunou-v1
  - model: nbeerbower/Dumpling-Qwen2.5-32B-v2
  - model: rinna/qwen2.5-bakeneko-32b-instruct
  - model: AiCloser/Qwen2.5-32B-AGI
```
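
### Usage

As a rough sketch, the merged checkpoint can be loaded with `transformers` like any other Qwen2.5-32B instruct model. The repository id below is a placeholder, since the published name of this merge is not stated above; `torch_dtype` is set to `bfloat16` to match `out_dtype` in the merge config. The merge itself should be reproducible by saving the YAML above to a file and running mergekit's `mergekit-yaml` CLI on it.

```python
# Minimal inference sketch, not an official example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/your-merge"  # hypothetical placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches out_dtype in the merge config
    device_map="auto",
)

# Qwen2.5 checkpoints ship a ChatML chat template; apply_chat_template uses it.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```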