---

base_model:
- ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- TheDrummer/Fallen-Llama-3.3-R1-70B-v1
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
- TheDrummer/Anubis-70B-v1
- nbeerbower/Llama-3.1-Nemotron-lorablated-70B
- Sao10K/70B-L3.3-mhnnn-x1
- SicariusSicariiStuff/Negative_LLAMA_70B
library_name: transformers
tags:
- mergekit
- merge
license: llama3.3
---

This is the expanded V2 of Dungeonmaster. I decided to move away from the R1 base here, because I feel its pros don't necessarily outweigh its cons. For V2 I went with the classic nbeerbower/Llama-3.1-Nemotron-lorablated-70B as the base instead.
Dungeonmaster Expanded features 2 extra models, bringing the total up to 7! Admittedly I was concerned about having that many models in a single merge, but you never know, so I decided to try both versions and see...

My ideal vision for Dungeonmaster was built around these 7 models:

- LatitudeGames/Wayfarer-Large-70B-Llama-3.3 - A fine-tuned model specifically designed for this very application.
- ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4 - Another fine-tune trained on RP datasets.
- Sao10K/70B-L3.3-mhnnn-x1 - For some extra unhinged creativity.
- TheDrummer/Anubis-70B-v1 - Another excellent RP fine-tune to help balance things out.
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1 - For its strong descriptive writing.
- SicariusSicariiStuff/Negative_LLAMA_70B - To assist with the darker undertones.
- TheDrummer/Fallen-Llama-3.3-R1-70B-v1 - The secret sauce, a completely unhinged thinking model that turns things up to 11.

# Mergekit

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [Linear DELLA](https://arxiv.org/abs/2406.11617) merge method using [nbeerbower/Llama-3.1-Nemotron-lorablated-70B](https://huggingface.co/nbeerbower/Llama-3.1-Nemotron-lorablated-70B) as a base.
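
For intuition, here is a rough sketch of what `della_linear` does with the parameters used below (density 0.7, epsilon 0.2, lambda 1.1, normalized weights). This is my paraphrase of the DELLA paper, not an exact restatement of mergekit's implementation: each donor model contributes a delta from the base, deltas are stochastically pruned (with keep probabilities spread over an epsilon-wide window by parameter magnitude) and rescaled, then linearly combined under the normalized weights and scaled by lambda:

```latex
% Hedged sketch of della_linear, per my reading of DELLA (arXiv:2406.11617).
% p = density (0.7), epsilon = 0.2, lambda = 1.1, w_i = per-model weight.
\delta_i = \theta_i - \theta_{\mathrm{base}}, \qquad
\tilde{\delta}_i = \frac{m_i \odot \delta_i}{p_i}, \quad
m_i \sim \mathrm{Bernoulli}(p_i), \quad
p_i \in \left[\, p - \tfrac{\epsilon}{2},\; p + \tfrac{\epsilon}{2} \,\right]

% The pruned, rescaled deltas are then combined linearly:
\theta_{\mathrm{merged}} = \theta_{\mathrm{base}}
  + \lambda \sum_i \frac{w_i}{\sum_j w_j}\, \tilde{\delta}_i
```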

### Models Merged

The following models were included in the merge:
* [ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4](https://huggingface.co/ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4)
* [EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1](https://huggingface.co/EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1)
* [TheDrummer/Fallen-Llama-3.3-R1-70B-v1](https://huggingface.co/TheDrummer/Fallen-Llama-3.3-R1-70B-v1)
* [LatitudeGames/Wayfarer-Large-70B-Llama-3.3](https://huggingface.co/LatitudeGames/Wayfarer-Large-70B-Llama-3.3)
* [TheDrummer/Anubis-70B-v1](https://huggingface.co/TheDrummer/Anubis-70B-v1)
* [Sao10K/70B-L3.3-mhnnn-x1](https://huggingface.co/Sao10K/70B-L3.3-mhnnn-x1)
* [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
    parameters:
      density: 0.7
  - model: ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
    parameters:
      density: 0.7
  - model: Sao10K/70B-L3.3-mhnnn-x1
    parameters:
      density: 0.7
  - model: LatitudeGames/Wayfarer-Large-70B-Llama-3.3
    parameters:
      density: 0.7
  - model: TheDrummer/Fallen-Llama-3.3-R1-70B-v1
    parameters:
      density: 0.7
  - model: TheDrummer/Anubis-70B-v1
    parameters:
      density: 0.7
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
    parameters:
      density: 0.7
merge_method: della_linear
base_model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
parameters:
  weight: 0.14
  epsilon: 0.2
  lambda: 1.1
  normalize: true
dtype: bfloat16
tokenizer:
  source: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
```
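
If you want to reproduce the merge, here is a minimal sketch using mergekit's Python API, assuming the YAML above is saved as `dungeonmaster-v2.yml` (a hypothetical filename) and mirroring the usage shown in the mergekit README; check your installed version, since the API may differ:

```python
# Hedged sketch: re-running this merge via mergekit's Python API.
# "dungeonmaster-v2.yml" and the output path are example names.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above.
with open("dungeonmaster-v2.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Dungeonmaster-Expanded-V2",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # copy the tokenizer source named above
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

Note that a 7-donor 70B merge needs substantial disk space and memory, since all eight source models (seven donors plus the base) have to be downloaded and read.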