---
base_model:
- sometimesanotion/Lamarck-14B-v0.7-Fusion
- sometimesanotion/Qwenvergence-14B-v11
- prithivMLmods/Messier-Opus-14B-Elite7
- jpacifico/Chocolatine-2-14B-Instruct-v2.0b3
- prithivMLmods/Equuleus-Opus-14B-Exp
- Sakalti/Saka-14B
- Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8](https://huggingface.co/Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8) as the base model.

### Models Merged

The following models were included in the merge:

* [sometimesanotion/Lamarck-14B-v0.7-Fusion](https://huggingface.co/sometimesanotion/Lamarck-14B-v0.7-Fusion)
* [sometimesanotion/Qwenvergence-14B-v11](https://huggingface.co/sometimesanotion/Qwenvergence-14B-v11)
* [prithivMLmods/Messier-Opus-14B-Elite7](https://huggingface.co/prithivMLmods/Messier-Opus-14B-Elite7)
* [jpacifico/Chocolatine-2-14B-Instruct-v2.0b3](https://huggingface.co/jpacifico/Chocolatine-2-14B-Instruct-v2.0b3)
* [prithivMLmods/Equuleus-Opus-14B-Exp](https://huggingface.co/prithivMLmods/Equuleus-Opus-14B-Exp)
* [Sakalti/Saka-14B](https://huggingface.co/Sakalti/Saka-14B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
name: NQLSG-Qwen2.5-14B-MegaFusion-v8.7
models:
- model: jpacifico/Chocolatine-2-14B-Instruct-v2.0b3
- model: Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8
- model: prithivMLmods/Equuleus-Opus-14B-Exp
- model: prithivMLmods/Messier-Opus-14B-Elite7
- model: Sakalti/Saka-14B
- model: sometimesanotion/Lamarck-14B-v0.7-Fusion
- model: sometimesanotion/Qwenvergence-14B-v11
base_model: Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8
chat_template: auto
dtype: bfloat16
merge_method: model_stock
parameters:
  int8_mask: true
tokenizer:
  source: union
```
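Reproducing the merge should only require saving the configuration above to a file and running mergekit's `mergekit-yaml` entry point over it.

For intuition about the method itself: Model Stock averages the fine-tuned models and then pulls that average back toward the base model, with an interpolation ratio derived from the pairwise angles between the fine-tuned checkpoints' task vectors. The sketch below illustrates this for a single weight tensor under our reading of the paper; it is not mergekit's implementation, and `model_stock_merge` is a hypothetical helper name.

```python
import torch


def model_stock_merge(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Interpolate between `base` and the mean of the `finetuned` tensors.

    The ratio t follows the Model Stock paper: it grows with the mean
    pairwise cosine similarity of the task vectors (fine-tuned minus base),
    so tightly clustered fine-tunes keep more of their average, while
    divergent ones are pulled back toward the base weights.
    """
    n = len(finetuned)
    assert n >= 2, "need at least two fine-tuned tensors for pairwise angles"

    # Task vectors: per-model deltas from the base weights, flattened.
    deltas = [(w - base).flatten() for w in finetuned]

    # Mean pairwise cosine similarity between task vectors.
    cos_theta = torch.stack([
        torch.nn.functional.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i in range(n)
        for j in range(i + 1, n)
    ]).mean()

    # Interpolation ratio from the paper: t = N*cos / (1 + (N-1)*cos).
    t = n * cos_theta / (1 + (n - 1) * cos_theta)

    w_avg = torch.stack(finetuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```

In a full merge this interpolation would be applied tensor by tensor across all seven checkpoints listed in `models`.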
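## Usage

A minimal loading sketch, assuming the merged weights are published under the `name` given in the configuration (the repository id `Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8.7` is inferred, not confirmed); adjust `model_id` to the actual upload. Since the card declares `library_name: transformers` and the config sets `chat_template: auto`, the standard causal-LM API and the tokenizer's chat template should apply.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, inferred from the config's `name` field; change if the upload differs.
model_id = "Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8.7"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype
    device_map="auto",
)

messages = [{"role": "user", "content": "Briefly explain what model merging does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```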