---
base_model:
- prithivMLmods/Equuleus-Opus-14B-Exp
- sometimesanotion/LamarckInfusion-14B-v1
- Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8.8
- Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8.7
- suayptalha/Lamarckvergence-14B
- suayptalha/Lix-14B-v0.1
- wanlige/li-14b-v0.4
- Sakalti/Saka-14B
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8.7](https://huggingface.co/Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8.7) as the base.

### Models Merged

The following models were included in the merge:

* [prithivMLmods/Equuleus-Opus-14B-Exp](https://huggingface.co/prithivMLmods/Equuleus-Opus-14B-Exp)
* [sometimesanotion/LamarckInfusion-14B-v1](https://huggingface.co/sometimesanotion/LamarckInfusion-14B-v1)
* [Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8.8](https://huggingface.co/Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8.8)
* [suayptalha/Lamarckvergence-14B](https://huggingface.co/suayptalha/Lamarckvergence-14B)
* [suayptalha/Lix-14B-v0.1](https://huggingface.co/suayptalha/Lix-14B-v0.1)
* [wanlige/li-14b-v0.4](https://huggingface.co/wanlige/li-14b-v0.4)
* [Sakalti/Saka-14B](https://huggingface.co/Sakalti/Saka-14B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8.7
  - model: Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8.8
  - model: prithivMLmods/Equuleus-Opus-14B-Exp
  - model: Sakalti/Saka-14B
  - model: sometimesanotion/LamarckInfusion-14B-v1
  - model: suayptalha/Lamarckvergence-14B
  - model: suayptalha/Lix-14B-v0.1
  - model: wanlige/li-14b-v0.4
base_model: Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8.7
chat_template: auto
dtype: bfloat16
merge_method: model_stock
parameters:
  int8_mask: true
tokenizer:
  source: union
```
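For intuition, the Model Stock method roughly works per weight tensor: it forms task vectors (fine-tuned weights minus base weights), estimates their average pairwise cosine similarity, and interpolates between the plain average and the base model with a ratio derived in the paper, t = N·cosθ / (1 + (N−1)·cosθ). The sketch below is a minimal, flattened-vector illustration of that idea only; it is not mergekit's actual implementation, and the function name `model_stock_layer` is hypothetical.

```python
import numpy as np

def model_stock_layer(base, finetuned):
    """Illustrative Model Stock merge of one flattened weight tensor.

    base      -- 1-D array of base-model weights
    finetuned -- list of 1-D arrays of fine-tuned weights (same shape as base)
    """
    # Task vectors: how each fine-tuned model moved away from the base.
    deltas = [w - base for w in finetuned]
    n = len(deltas)

    # Average pairwise cosine similarity between task vectors.
    cos = np.mean([
        np.dot(deltas[i], deltas[j])
        / (np.linalg.norm(deltas[i]) * np.linalg.norm(deltas[j]))
        for i in range(n) for j in range(i + 1, n)
    ])

    # Interpolation ratio from the Model Stock paper (arXiv:2403.19522).
    t = n * cos / (1 + (n - 1) * cos)

    # Merge: move from the base toward the average of the fine-tuned models by t.
    w_avg = base + np.mean(deltas, axis=0)
    return t * w_avg + (1 - t) * base
```

When the task vectors agree (cosine near 1), t approaches 1 and the merge is close to the simple average; when they are orthogonal or conflicting, t shrinks and the merge stays near the base, which is why the choice of base model above matters.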