---
base_model:
- flammenai/Mahou-1.3-mistral-nemo-12B
library_name: transformers
tags:
- mergekit
- merge
---

# MN-Tiramisu-12B

This is a really yappity-yappy yapping model that's good for long-form RP. Tried to rein it in with Mahou and give it some more character understanding with Pantheon.

Feedback is always welcome.

## Merge Details

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

### Merge Method

This model was merged using the linear [DARE](https://arxiv.org/abs/2311.03099) merge method, with flammenai/Mahou-1.3-mistral-nemo-12B as the base.

### Models Merged

The following models were included in the merge:

* nbeerbower/mistral-nemo-gutenberg-12B-v4
* Sao10K/MN-12B-Lyra-v1
* Gryphe/Pantheon-RP-1.5-12b-Nemo
* flammenai/Mahou-1.3-mistral-nemo-12B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: flammenai/Mahou-1.3-mistral-nemo-12B
dtype: bfloat16
merge_method: dare_linear
slices:
- sources:
  - layer_range: [0, 40]
    model: Gryphe/Pantheon-RP-1.5-12b-Nemo
    parameters:
      weight: [0.45, 0.35, 0.35, 0.2, 0.2]
  - layer_range: [0, 40]
    model: Sao10K/MN-12B-Lyra-v1
    parameters:
      weight: [0.25, 0.3, 0.35, 0.3, 0.2]
  - layer_range: [0, 40]
    model: nbeerbower/mistral-nemo-gutenberg-12B-v4
    parameters:
      weight:
      - filter: mlp
        value: [0.1, 0.2, 0.1, 0.4, 0.5]
      - value: [0.1, 0.2, 0.1, 0.2, 0.2]
  - layer_range: [0, 40]
    model: flammenai/Mahou-1.3-mistral-nemo-12B
    parameters:
      weight:
      - filter: mlp
        value: [0.2, 0.15, 0.2, 0.1, 0.1]
      - value: [0.2, 0.15, 0.2, 0.3, 0.4]
tokenizer_source: union
```
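The bracketed weight lists are mergekit gradients: each list is interpolated linearly across the slice's 40 layers, so Pantheon contributes most heavily in the early layers and tapers off toward the end, while gutenberg's MLP weight ramps up in the later layers. To reproduce the merge, save the config as `config.yaml` and run `mergekit-yaml config.yaml ./MN-Tiramisu-12B` with a current mergekit install. Here's a minimal sketch of how a gradient plays out per layer (my own illustration, not mergekit's actual code):

```python
import numpy as np

def gradient_weights(gradient: list[float], num_layers: int) -> np.ndarray:
    """Linearly interpolate a mergekit-style weight gradient across layers.

    Illustration only; mergekit's internals may differ in detail.
    """
    anchors = np.linspace(0, num_layers - 1, num=len(gradient))
    return np.interp(np.arange(num_layers), anchors, gradient)

# Pantheon's weight at each of the 40 Nemo layers: ~0.45 up front, ~0.2 at the end.
print(gradient_weights([0.45, 0.35, 0.35, 0.2, 0.2], 40).round(3))
```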
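## Usage

Once downloaded, this loads like any other Mistral-Nemo-based model. A quick sketch with transformers (the repo id is a placeholder for wherever you grab the model from, and the chat template is assumed to come bundled with the tokenizer):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/MN-Tiramisu-12B"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype
    device_map="auto",
)

messages = [{"role": "user", "content": "Stay in character: you are a weary innkeeper."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

And as always, have a great day!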