---
base_model: v000000/SwallowMaid-8B-L3-SPPO-abliterated
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- llama
---

This model was converted to GGUF format from [`v000000/SwallowMaid-8B-L3-SPPO-abliterated`](https://huggingface.co/v000000/SwallowMaid-8B-L3-SPPO-abliterated) using llama.cpp.
Refer to the [original model card](https://huggingface.co/v000000/SwallowMaid-8B-L3-SPPO-abliterated) for more details on the model.

<!DOCTYPE html>
<style>

h1, a {
  color: #801ffa; /* Purple base color (fallback if gradient text is unsupported) */
  font-size: 1.25em; /* Larger font size */
  text-align: left; /* Left alignment */
  text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.5); /* Soft drop shadow */
  background: linear-gradient(90deg, #801ffa, #e9a8fb); /* Purple gradient background */
  -webkit-background-clip: text; /* Clip the background to the text */
  -webkit-text-fill-color: transparent; /* Let the gradient show through the text */
}
</style>
<html lang="en">
<head>
</head>
<body>
<h1>SwallowMaid-8B-Llama-3-SPPO-abliterated</h1>

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/oTeII_LTTN667EePQ57SI.png)

A fully uncensored (abliterated) version of "Llama-3-Instruct-8B-SPPO-Iter3" with a 35% RP-Mix infusion, added to gain roleplay capability and prose while attempting to preserve the qualities of Meta's Llama-3-Instruct finetune.

# <a>Quants</a>
* [GGUF Q8_0](https://huggingface.co/v000000/SwallowMaid-8B-L3-SPPO-abliterated-Q8_0-GGUF)

# <h1>Merge</h1>

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

# <h1>Merge Details</h1>
# <h1>Merge Method</h1>

This model was merged in three steps: a linear merge to build the RP-mix, a task-arithmetic infusion of that mix into SPPO-Iter3, and a final linear merge applying the abliteration LoRA.

# <h1>Models Merged</h1>

The following models were included in the merge:
* [grimjim/Llama-3-Instruct-abliteration-LoRA-8B](https://huggingface.co/grimjim/Llama-3-Instruct-abliteration-LoRA-8B)
* [UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3](https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3)
* [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS)
* [maldv/llama-3-fantasy-writer-8b](https://huggingface.co/maldv/llama-3-fantasy-writer-8b)
* [tokyotech-llm/Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1)
* [Nitral-AI/Hathor_Respawn-L3-8B-v0.8](https://huggingface.co/Nitral-AI/Hathor_Respawn-L3-8B-v0.8)

# <h1>Configuration</h1>

The following YAML configuration was used to produce this model:

```yaml
# Part 3, Apply abliteration (SwallowMaid-8B)
models:
  - model: sppo-rpmix-part2+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
    parameters:
      weight: 1.0
merge_method: linear
dtype: float32

# Part 2, infuse 35% swallow+rpmix to SPPO-Iter3 (sppo-rpmix-part2)
models:
  - model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
    parameters:
      weight: 1.0
  - model: rpmix-part1
    parameters:
      weight: 0.35
merge_method: task_arithmetic
base_model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
parameters:
    normalize: false
dtype: float32

# Part 1, linear merge rpmix (rpmix-part1)
models:
  - model: Nitral-AI/Hathor_Respawn-L3-8B-v0.8
    parameters:
      weight: 0.6
  - model: maldv/llama-3-fantasy-writer-8b
    parameters:
      weight: 0.1
  - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      weight: 0.4
  - model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
    parameters:
      weight: 0.15
merge_method: linear
dtype: float32
```
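
Conceptually, each stage of the configuration above reduces to simple per-element tensor arithmetic. A minimal sketch in plain Python on toy vectors (an illustration of the math, not mergekit's actual implementation; the example values are made up):

```python
def linear_merge(models, weights):
    """Weighted sum of model tensors (the `linear` method, weights as given)."""
    return [sum(w * m[i] for m, w in zip(models, weights))
            for i in range(len(models[0]))]

def task_arithmetic(base, models, weights):
    """base + sum_i w_i * (model_i - base): the `task_arithmetic` method
    with `normalize: false`, so weights are applied as given."""
    return [base[i] + sum(w * (m[i] - base[i]) for m, w in zip(models, weights))
            for i in range(len(base))]

# Part 1: rpmix-part1 = linear merge of four models (weights 0.6/0.1/0.4/0.15)
hathor, fantasy, lumimaid, swallow = [1.0, 2.0], [0.0, 1.0], [2.0, 0.0], [1.0, 1.0]
rpmix = linear_merge([hathor, fantasy, lumimaid, swallow], [0.6, 0.1, 0.4, 0.15])

# Part 2: infuse 35% of the rpmix delta into SPPO-Iter3 (the base model,
# so its own delta term is zero and only 0.35 * (rpmix - sppo) is added)
sppo = [1.5, 1.5]
part2 = task_arithmetic(sppo, [sppo, rpmix], [1.0, 0.35])
```

Part 3 then applies the abliteration LoRA on top of `part2` with weight 1.0, which mergekit handles via the `+LoRA` model syntax rather than plain tensor sums.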

# <h1>Prompt Template:</h1>
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>

```

</body>
</html>