---
base_model:
- allura-org/Mistral-Small-Sisyphus-24b-2503
- PocketDoc/Dans-DangerousWinds-V1.1.1-24b
- ReadyArt/Forgotten-Safeword-24B
library_name: transformers
tags:
- mergekit
- merge
---
# EXPERIMENTAL - NOT FOR PUBLIC USE - TO BE DELETED

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

An experimental 24B merge. It doesn't perform well yet; I'm still iterating on it.

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [allura-org/Mistral-Small-Sisyphus-24b-2503](https://huggingface.co/allura-org/Mistral-Small-Sisyphus-24b-2503) as the base.

### Models Merged

The following models were included in the merge:

* [PocketDoc/Dans-DangerousWinds-V1.1.1-24b](https://huggingface.co/PocketDoc/Dans-DangerousWinds-V1.1.1-24b)
* [ReadyArt/Forgotten-Safeword-24B](https://huggingface.co/ReadyArt/Forgotten-Safeword-24B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
# removing Thinker for now
models:
  - model: ReadyArt/Forgotten-Safeword-24B
    parameters:
      weight: 0.9
      density: 0.95
  - model: PocketDoc/Dans-DangerousWinds-V1.1.1-24b
    parameters:
      weight: 0.7
      density: 0.8
merge_method: ties
base_model: allura-org/Mistral-Small-Sisyphus-24b-2503
parameters:
  normalize: true
tokenizer:
  source: base
dtype: bfloat16
```
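
For reference, a minimal sketch of reproducing and loading the merge, assuming the config above is saved as `config.yml` and built with mergekit's `mergekit-yaml` CLI (e.g. `mergekit-yaml config.yml ./merged-24b`). The output path `./merged-24b` and the prompt are placeholders, not part of this repo:

```python
# Hypothetical loading sketch: "./merged-24b" is a placeholder for wherever
# mergekit wrote the merged checkpoint, not an official repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./merged-24b"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches `dtype: bfloat16` in the merge config
    device_map="auto",
)

# Quick smoke test of the merged weights.
inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Note that `device_map="auto"` requires the `accelerate` package; drop it to load on a single device.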