---
base_model:
- SicariusSicariiStuff/Negative_LLAMA_70B
- TheDrummer/Fallen-Llama-3.3-R1-70B-v1
- Sao10K/L3-70B-Euryale-v2.1
- huihui-ai/Llama-3.3-70B-Instruct-abliterated
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method, with [huihui-ai/Llama-3.3-70B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.3-70B-Instruct-abliterated) as the base model.

### Models Merged

The following models were included in the merge:
* [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B)
* [TheDrummer/Fallen-Llama-3.3-R1-70B-v1](https://huggingface.co/TheDrummer/Fallen-Llama-3.3-R1-70B-v1)
* [Sao10K/L3-70B-Euryale-v2.1](https://huggingface.co/Sao10K/L3-70B-Euryale-v2.1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: TheDrummer/Fallen-Llama-3.3-R1-70B-v1
    parameters:
      select_topk: 0.17
  - model: Sao10K/L3-70B-Euryale-v2.1
    parameters:
      select_topk: 0.17
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
    parameters:
      select_topk: 0.17
merge_method: sce
base_model: huihui-ai/Llama-3.3-70B-Instruct-abliterated
parameters:
  int8_mask: true
chat_template: llama3
tokenizer:
  source: base
dtype: bfloat16
```
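The `select_topk: 0.17` entries above control SCE's selection step: for each parameter tensor, only the roughly 17% of elements whose task-vector values vary most across the source models are considered for fusion. The sketch below illustrates that idea in plain PyTorch for a single tensor. It is not mergekit's implementation; the function name is hypothetical, and the weighting and sign-resolution details are simplified assumptions based on the SCE paper's description.

```python
# Illustrative sketch of one SCE merge step for a single parameter tensor.
# NOT mergekit's implementation; details are simplified assumptions.
import torch

def sce_merge_tensor(base: torch.Tensor, experts: list[torch.Tensor],
                     topk: float = 0.17) -> torch.Tensor:
    # Task vectors: each expert's delta from the base model.
    deltas = torch.stack([e - base for e in experts])  # (n_experts, ...)

    # Select: keep only elements with high variance across experts.
    var = deltas.var(dim=0)
    k = max(1, int(topk * var.numel()))
    threshold = var.flatten().topk(k).values.min()
    deltas = deltas * (var >= threshold).to(deltas.dtype)

    # Calculate: per-expert fusion weights from squared magnitudes.
    weights = (deltas ** 2).flatten(1).sum(dim=1)
    weights = weights / weights.sum().clamp_min(1e-12)

    # Erase: drop elements whose sign disagrees with the overall direction.
    majority_sign = torch.sign(deltas.sum(dim=0))
    deltas = deltas * (torch.sign(deltas) == majority_sign).to(deltas.dtype)

    # Fuse: weighted sum of surviving deltas, applied back onto the base.
    fused = (weights.view(-1, *[1] * (deltas.dim() - 1)) * deltas).sum(dim=0)
    return base + fused
```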
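To reproduce the merge, the YAML above can be fed to mergekit either via the `mergekit-yaml` CLI or its Python API. A minimal sketch of the latter follows, assuming the config is saved as `config.yaml` and mergekit is installed; option names may differ across mergekit versions.

```python
# Minimal sketch: reproducing this merge via mergekit's Python API.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is available
        copy_tokenizer=True,             # config sets tokenizer source: base
        lazy_unpickle=True,              # reduce peak memory while loading
    ),
)
```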
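Since the config specifies `chat_template: llama3` and `dtype: bfloat16`, the merged model can be used like any Llama 3 chat model. A minimal loading sketch with transformers, where `"path/to/merged-model"` is a placeholder for wherever the weights live:

```python
# Minimal sketch: loading the merged model with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/merged-model"  # placeholder path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's bfloat16 dtype
    device_map="auto",
)

# The llama3 chat template is baked in, so apply_chat_template works directly.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```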