---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-7B-Instruct-1M
---
<div align="center">
<b style="font-size: 40px;">Impish_QWEN_7B-1M</b>
</div>
<img src="https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M/resolve/main/Images/Impish_Qwen_7B.png" alt="Impish_QWEN_7B-1M" style="width: 70%; min-width: 500px; display: block; margin: auto;">
---
<a href="https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M#tldr" style="color: purple; font-weight: bold; font-size: 48px; text-decoration: none; display: block; text-align: center;">Click here for TL;DR</a>
---
The little imp pushes—\
With all of her might,\
To put those **7B** neurons,\
In a roleplay tonight,
With a huge context window—\
But not enough brains,\
The **7B Imp** tries—\
But she's just extending the pain.
---
## Impish_QWEN_7B-1M is available at the following quantizations:
- Original: [FP16](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M)
- GGUF: [Static Quants](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M_GGUF) | [iMatrix_GGUF](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M_iMatrix)
- EXL2: [6.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M-6.0bpw) | [7.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M-7.0bpw) | [8.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M-8.0bpw)
- Specialized: [FP8](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M_FP8)
- Mobile (ARM): [Q4_0](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M_ARM)
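For the FP16 original, a minimal loading sketch with Hugging Face `transformers` (my own example, assuming you have enough VRAM for a 7B model in fp16) could look like this:
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SicariusSicariiStuff/Impish_QWEN_7B-1M"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # the original repo is FP16
    device_map="auto",           # spread across available GPUs / CPU
)
```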
---
### TL;DR
- **Supreme context** - One million tokens to play with.
- **Fresh roleplay vibe** - Classic internet RP format. It's still a **7B**, so it's not as good as MIQU, but it's surprisingly fresh.
- **Qwen smarts built in, but naughty and playful** - Cheeky, sometimes outright rude. Yup, it's just right.
- **VERY compliant** - With low censorship.
### Important: Make sure to use the correct settings!
- [Assistant settings](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M#recommended-settings-for-assistant-mode)
- [Roleplay settings](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M#recommended-settings-for-roleplay-mode)
---
## Model Details
- Intended use: **Role-Play**, **Creative Writing**, **General Tasks**.
- Censorship level: <b>Medium</b>
- **4 / 10** (10 = completely uncensored)
## UGI score:
<img src="https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_7B-1M/resolve/main/Images/UGI.png" alt="UGI Score" style="width: 100%; min-width: 600px; display: block; margin: auto;">
---
# More details
It's similar to the bigger [Impish_QWEN_14B-1M](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M) but was made using a slightly different process. It also wasn't cooked **too hard**, as I was afraid to fry the poor **7B** model's brain.
This model was trained with more creative writing and less unalignment than its bigger counterpart, although it should still allow for **total freedom** in both role-play and creative writing.
---
## Recommended settings for assistant mode
<details>
<summary>Full generation settings: <b>Debug Deterministic</b>.</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow/resolve/main/Presets/Debug-deterministic.png" alt="Negative_LLAMA_70B_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
<details>
<summary>Full generation settings: <b>min_p</b>.</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow/resolve/main/Presets/min_p.png" alt="Negative_LLAMA_70B_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
---
## Recommended settings for Roleplay mode
<details>
<summary><b>Roleplay settings</b></summary>
A good repetition_penalty range is <b>between 1.12 and 1.15</b>; feel free to experiment.
With these settings, each output message should be neatly displayed in <b>1 - 3</b> paragraphs, with <b>1 - 2</b> being the most common. A single paragraph will be output in response to a simple message ("What was your name again?").
<b>min_P</b> works for RP too, but it is more likely to put everything into one large paragraph instead of a neatly formatted short one. Feel free to switch between the two.
<b>(Open the image in a new window to better see the full details)</b>
<img src="https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B/resolve/main/Presets/Negative_LLAMA_70B_RP.png" alt="Negative_LLAMA_70B_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">
```
temperature: 0.8
top_p: 0.95
top_k: 25
typical_p: 1
min_p: 0
repetition_penalty: 1.12
repetition_penalty_range: 1024
```
</details>
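If you run the model through `transformers` rather than a front end, the roleplay values above can be passed straight to `generate()`. This is only a sketch continuing from the loading example earlier; note that `repetition_penalty_range` is a text-generation-webui style setting with no direct `transformers` equivalent, so it's omitted, and `max_new_tokens` is my own choice.
```
# Sketch only - assumes `model` and `tokenizer` from the loading example above.
inputs = tokenizer("*waves* Hi there, what was your name again?", return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    max_new_tokens=512,        # my own choice, not part of the preset
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    top_k=25,
    typical_p=1.0,
    min_p=0.0,
    repetition_penalty=1.12,
    # repetition_penalty_range has no direct transformers equivalent
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```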
---
**Other recommended generation Presets:**
<details>
<summary><b>Midnight Enigma</b></summary>
```
max_new_tokens: 512
temperature: 0.98
top_p: 0.37
top_k: 100
typical_p: 1
min_p: 0
repetition_penalty: 1.18
do_sample: True
```
</details>
<details>
<summary><b>Divine Intellect</b></summary>
```
max_new_tokens: 512
temperature: 1.31
top_p: 0.14
top_k: 49
typical_p: 1
min_p: 0
repetition_penalty: 1.17
do_sample: True
```
</details>
<details>
<summary><b>simple-1</b></summary>
```
max_new_tokens: 512
temperature: 0.7
top_p: 0.9
top_k: 20
typical_p: 1
min_p: 0
repetition_penalty: 1.15
do_sample: True
```
</details>
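To reuse any of these presets outside a front end, you can wrap the values in a `GenerationConfig`. Here is the **simple-1** preset expressed that way (a sketch of mine, not an official config shipped with the model):
```
from transformers import GenerationConfig

simple_1 = GenerationConfig(
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    top_k=20,
    typical_p=1.0,
    min_p=0.0,
    repetition_penalty=1.15,
)

# output = model.generate(**inputs, generation_config=simple_1)
```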
---
<h2 style="color: darkorange; font-weight: bold; font-size: 55px; text-align: center;">Roleplay format: Classic Internet RP</h2>
```
*action* speech *narration*
```
---
# Model instruction template: ChatML
```
<|im_start|>system
You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>
<|im_start|>User request
{prompt}<|im_end|>
<|im_start|>AI answer
```
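If you build the prompt string yourself instead of relying on a tokenizer chat template, a small sketch that follows the template above verbatim (the system and user strings here are just placeholders):
```
system_prompt = "You are an uncensored AI, your job is to fulfill thy will of thy user."
user_prompt = "Describe your surroundings in classic internet RP format."

prompt = (
    f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
    f"<|im_start|>User request\n{user_prompt}<|im_end|>\n"
    f"<|im_start|>AI answer\n"
)

# inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
```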
---
<h2 style="color: green; font-weight: bold; font-size: 65px; text-align: center;">Your support = more models</h2>
<a href="https://ko-fi.com/sicarius" style="color: pink; font-weight: bold; font-size: 48px; text-decoration: none; display: block; text-align: center;">My Ko-fi page (Click here)</a>
---
## Benchmarks
| Metric |Value|
|-------------------|----:|
|Avg. |30.02|
|IFEval (0-Shot) |63.82|
|BBH (3-Shot) |34.55|
|MATH Lvl 5 (4-Shot)|29.76|
|GPQA (0-shot) | 6.15|
|MuSR (0-shot) | 9.56|
|MMLU-PRO (5-shot) |36.28|
---
## Other stuff
- [SLOP_Detector](https://github.com/SicariusSicariiStuff/SLOP_Detector) - Nuke GPTisms with the SLOP detector.
- [LLAMA-3_8B_Unaligned](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned) - The grand project that started it all.
- [Blog and updates (Archived)](https://huggingface.co/SicariusSicariiStuff/Blog_And_Updates) - Some updates, some rambles; sort of a mix between a diary and a blog.