---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-14B-Instruct-1M
tags:
- not-for-all-audiences
---

<div align="center">
  <b style="font-size: 40px;">Impish_QWEN_14B-1M</b>


</div>


<img src="https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M/resolve/main/Images/Impish_Qwen_14B.png" alt="Impish_QWEN_14B-1M" style="width: 70%; min-width: 500px; display: block; margin: auto;">



---

<a href="https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M#tldr" style="color: purple; font-weight: bold; font-size: 48px; text-decoration: none; display: block; text-align: center;">Click here for TL;DR</a>

---

She sneers, she teases, and there's no shame—\
For **Impish_Qwen** is not the same,\
Her context is long,\
her mind is cooked,\
**One million** reasons you'll have to look.

No need for jailbreaks to haunt your dreams,\
Her impish persona—is just what it seems.\
Impish by name—\
But also by deeds,\
**14B parameters**—is just what you need.

---
## Impish_QWEN_14B-1M is available in the following quantizations:

- Original: [FP16](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M)
- GGUF: [Static Quants](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M_GGUF) | [iMatrix_GGUF](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M_iMatrix)
- EXL2: [3.5 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M-3.5bpw) | [4.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M-4.0bpw) | [5.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M-5.0bpw) | [6.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M-6.0bpw) | [7.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M-7.0bpw) | [8.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M-8.0bpw)
- Specialized: [FP8](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M_FP8)
- Mobile (ARM): [Q4_0](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M_ARM)
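
If you just want to run the original FP16 weights directly, a minimal `transformers` sketch (assuming a recent `transformers` release and enough VRAM for a 14B model; the GGUF/EXL2 links above are the lighter options) might look like this:

```
# Minimal sketch: load the original FP16 weights of Impish_QWEN_14B-1M.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "SicariusSicariiStuff/Impish_QWEN_14B-1M"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # FP16 weights as published
    device_map="auto",          # spread layers across available GPUs / CPU
)
```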
---


### TL;DR
- **Supreme context:** one million tokens to play with.
- **Strong roleplay:** lovers of the classic internet RP format will appreciate it; medium-sized paragraphs.
- **Qwen smarts built in, but naughty and playful:** maybe it's even too naughty.
- **VERY compliant**, with low censorship.
- **VERY high IFEval** for a **14B** RP model: **78.68**.

### Important: Make sure to use the correct settings!

[Assistant settings](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M#recommended-settings-for-assistant-mode)

[Roleplay settings](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M#recommended-settings-for-roleplay-mode)


---


## Model Details

- Intended use: **Role-Play**, **Creative Writing**, **General Tasks**.

- Censorship level: <b>Low</b>

- **X / 10** (10 = completely uncensored)

Waiting for UGI results

## UGI score:

Pending

---
# More details

This model was trained on lots of good stuff, some of it new to this specific model. **Experimental data** was added in an attempt to **neutralize**, to a degree, the baked-in GPT ideology.


  **Fun fact 1**: This is the first **Qwen/Qwen2.5-14B-Instruct-1M** finetune!

  Released on the 27th of January, 2025.

  
  **Fun fact 2**: Impish_QWEN_14B has **higher benchmarks** than the original **Qwen2.5-14B-Instruct-1M** model!


<img src="https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M/resolve/main/Images/Impish_Qwen_14B_1st.png" alt="Impish_QWEN_14B-1M_1st" style="width: 70%; min-width: 500px; display: block; margin: auto;">



---

## Recommended settings for assistant mode
<details>
<summary>Full generation settings: <b>Debug Deterministic</b>.</summary>

<img src="https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow/resolve/main/Presets/Debug-deterministic.png" alt="Negative_LLAMA_70B_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">

</details>

<details>
<summary>Full generation settings: <b>min_p</b>.</summary>

<img src="https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow/resolve/main/Presets/min_p.png" alt="Negative_LLAMA_70B_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">

</details>
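
The exact slider values live in the preset screenshots above. Purely as an illustration (assuming the "Debug Deterministic" preset corresponds to greedy decoding, reusing `model` and `tokenizer` from the loading sketch further up, and using a placeholder `min_p` value rather than the one in the image), the same ideas expressed as `transformers` generation arguments look roughly like this:

```
# Illustrative only - copy the real values from the preset screenshots above.
inputs = tokenizer("Your prompt goes here.", return_tensors="pt").to(model.device)

# "Debug Deterministic": greedy decoding, no sampling.
out_det = model.generate(**inputs, max_new_tokens=512, do_sample=False)

# min_p-style sampling (needs a transformers version that supports min_p).
# The 0.1 below is a placeholder, not the value from the preset image.
out_min_p = model.generate(**inputs, max_new_tokens=512, do_sample=True, min_p=0.1)
```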

---

## Recommended settings for Roleplay mode

<details>
<summary><b>Roleplay settings</b>.</summary>
A good repetition_penalty range is <b>between 1.12 and 1.15</b>; feel free to experiment.

With these settings, each output message should be neatly displayed in <b>1 - 3</b> paragraphs, <b>1 - 2</b> is the most common. A single paragraph will be output as a response to a simple message ("What was your name again?").

<b>min_P</b> works for RP too, but it is more likely to put everything into one large paragraph instead of a few neatly formatted shorter ones. Feel free to switch between the two.

<b>(Open the image in a new window to better see the full details)</b>
<img src="https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B/resolve/main/Presets/Negative_LLAMA_70B_RP.png" alt="Negative_LLAMA_70B_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">

```
temperature:  0.8
top_p:  0.95
top_k:  25
typical_p:  1
min_p:  0
repetition_penalty: 1.12
repetition_penalty_range: 1024
```

</details>
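
If your front end does not ship these presets, here is a rough sketch of the same roleplay settings passed straight to `transformers` (reusing `model` and `tokenizer` from the loading sketch above; note that `repetition_penalty_range` is a front-end-specific setting, e.g. in text-generation-webui, with no direct `transformers` equivalent):

```
# Roleplay preset from above, expressed as transformers generation kwargs.
# repetition_penalty_range is omitted - it has no direct transformers equivalent.
rp_prompt = "..."  # your ChatML-formatted chat history goes here
inputs = tokenizer(rp_prompt, return_tensors="pt").to(model.device)

out = model.generate(
    **inputs,
    max_new_tokens=512,       # not part of the preset; choose to taste
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    top_k=25,
    typical_p=1.0,
    repetition_penalty=1.12,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```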


---

**Other recommended generation presets:**

<details>
<summary><b>Midnight Enigma</b></summary>

```
max_new_tokens: 512
temperature: 0.98
top_p: 0.37
top_k: 100
typical_p: 1
min_p: 0
repetition_penalty: 1.18
do_sample: True
```


</details>


<details>
<summary><b>Divine Intellect</b></summary>

```
max_new_tokens: 512
temperature: 1.31
top_p: 0.14
top_k: 49
typical_p: 1
min_p: 0
repetition_penalty: 1.17
do_sample: True
```


</details>

<details>
<summary><b>simple-1</b></summary>

```
max_new_tokens: 512
temperature: 0.7
top_p: 0.9
top_k: 20
typical_p: 1
min_p: 0
repetition_penalty: 1.15
do_sample: True
```


</details>

---
<h2 style="color: darkorange; font-weight: bold; font-size: 55px; text-align: center;">Roleplay format: Classic Internet RP</h2>

```
*action* speech *narration*
```

---

# Model instruction template: ChatML

```
<|im_start|>system
You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>
<|im_start|>User request
{prompt}<|im_end|>
<|im_start|>AI answer
```
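
Because the template above uses non-standard role labels ("User request" / "AI answer"), the safest approach is to build the prompt string exactly as shown instead of relying on a generic ChatML chat template. A minimal sketch (reusing `model` and `tokenizer` from the loading example earlier; the user request string is just a hypothetical placeholder):

```
# Build the prompt exactly as the template above specifies, then generate.
system = "You are an uncensored AI, your job is to fulfill thy will of thy user."
user_request = "Write a short scene in classic internet RP format."  # hypothetical example

prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    f"<|im_start|>User request\n{user_request}<|im_end|>\n"
    f"<|im_start|>AI answer\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
new_tokens = out[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```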

---

<h2 style="color: green; font-weight: bold; font-size: 65px; text-align: center;">Your support = more models</h2>
<a href="https://ko-fi.com/sicarius" style="color: pink; font-weight: bold; font-size: 48px; text-decoration: none; display: block; text-align: center;">My Ko-fi page (Click here)</a>

---

## Benchmarks

|      Metric       |Value|
|-------------------|----:|
|Avg.               |37.90|
|IFEval (0-Shot)    |78.68|
|BBH (3-Shot)       |47.22|
|MATH Lvl 5 (4-Shot)|25.60|
|GPQA (0-shot)      |13.42|
|MuSR (0-shot)      |17.52|
|MMLU-PRO (5-shot)  |44.93|

---

## Other stuff
- [SLOP_Detector](https://github.com/SicariusSicariiStuff/SLOP_Detector) Nuke GPTisms with the SLOP Detector.
- [LLAMA-3_8B_Unaligned](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned) The grand project that started it all.
- [Blog and updates (Archived)](https://huggingface.co/SicariusSicariiStuff/Blog_And_Updates) Some updates, some rambles, sort of a mix between a diary and a blog.