---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-14B-Instruct-1M
tags:
- not-for-all-audiences
---
# Impish_QWEN_14B-1M
---

[Click here for TL;DR](#tldr)

---

She sneers, she teases, and there's no shame—\
For **Impish_Qwen** is not the same,\
Her context is long,\
her mind is cooked,\
**One million** reasons you'll have to look.

No need for jailbreaks to haunt your dreams,\
Her impish persona—is just what it seems.\
Impish by name—\
But also by deeds,\
**14B parameters**—is just what you need.

---

## Impish_QWEN_14B-1M is available at the following quantizations:

- Original: [FP16](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M)
- GGUF: [Static Quants](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M_GGUF) | [iMatrix_GGUF](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M_iMatrix)
- EXL2: [3.5 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M-3.5bpw) | [4.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M-4.0bpw) | [5.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M-5.0bpw) | [6.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M-6.0bpw) | [7.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M-7.0bpw) | [8.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M-8.0bpw)
- Specialized: [FP8](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M_FP8)
- Mobile (ARM): [Q4_0](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M_ARM)

(A minimal loading sketch for the FP16 weights is shown further down this card.)

---

### TL;DR

- **Supreme context:** one million tokens to play with.
- **Strong roleplay:** lovers of the internet RP format will appreciate it; medium-sized paragraphs.
- **Qwen smarts built in, but naughty and playful:** maybe it's even too naughty.
- **VERY compliant**, with low censorship.
- **VERY high IFEval** for a **14B** RP model: **78.68**.

### Important: Make sure to use the correct settings!

[Assistant settings](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M#recommended-settings-for-assistant-mode)

[Roleplay settings](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M#recommended-settings-for-roleplay-mode)

---

## Model Details

- Intended use: **Role-Play**, **Creative Writing**, **General Tasks**.
- Censorship level: **Low** - **X / 10** (10 = completely uncensored)

Waiting for UGI results.

## UGI score: Pending

---

# More details

This model was trained with lots of good stuff, some of it new to this specific model. **Experimental data** was added to try to **neutralize**, to a degree, the baked-in GPT ideology.

**Fun fact 1:** this is the first **Qwen/Qwen2.5-14B-Instruct-1M** finetune! Released on the 27th of January, 2025.

**Fun fact 2:** Impish_QWEN_14B-1M posts **higher benchmarks** than the original **Qwen2.5-14B-Instruct-1M** it was finetuned from!
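For quick local testing, the FP16 weights behave like any other Qwen2.5 model in Transformers format. Below is a minimal loading sketch, assuming a recent `transformers` release and enough GPU memory for a 14B model in bfloat16; it uses the tokenizer's stock Qwen chat template (the card's own ChatML variant is shown further down), and the quantized GGUF/EXL2 files linked above need their own loaders (llama.cpp, ExLlamaV2). Reaching the full 1M-token context also requires a dedicated long-context serving setup rather than a plain `generate()` call.

```python
# Minimal sketch: load the FP16 weights with Hugging Face transformers.
# Assumes a recent transformers release and enough GPU memory for 14B in bfloat16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SicariusSicariiStuff/Impish_QWEN_14B-1M"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an uncensored AI, your job is to fulfill thy will of thy user."},
    {"role": "user", "content": "Introduce yourself in one impish sentence."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.8, top_p=0.95)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```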
---

## Recommended settings for assistant mode

Full generation settings: **Debug Deterministic**.
Full generation settings: **min_p**.
---

## Recommended settings for Roleplay mode
Roleplay settings: A good repetition_penalty range is between 1.12 and 1.15; feel free to experiment.

With these settings, each output message should be neatly displayed in 1 - 3 paragraphs, with 1 - 2 being the most common. A single paragraph will be output as a response to a simple message ("What was your name again?").

min_p for RP works too, but it is more likely to put everything into one large paragraph instead of a neatly formatted short one. Feel free to switch between the two.

(Open the image in a new window to better see the full details.)

```
temperature: 0.8
top_p: 0.95
top_k: 25
typical_p: 1
min_p: 0
repetition_penalty: 1.12
repetition_penalty_range: 1024
```
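If you drive the model from Python rather than a frontend like SillyTavern or text-generation-webui, the preset above translates roughly into `transformers` generation arguments as sketched below. This is an approximation, not an official preset file: `repetition_penalty_range` is a frontend-side sampler with no direct counterpart in vanilla `transformers`, `min_p` needs a reasonably recent release, and the `max_new_tokens` value is an added default.

```python
from transformers import GenerationConfig

# Rough transformers equivalent of the roleplay preset above.
# repetition_penalty_range is a frontend-side setting and has no direct
# counterpart here, so it is left out.
roleplay_preset = GenerationConfig(
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    top_k=25,
    typical_p=1.0,
    min_p=0.0,                 # needs a transformers version with min_p support
    repetition_penalty=1.12,
    max_new_tokens=512,        # not part of the preset; a reasonable default
)

# Reusing `model` and `inputs` from the loading sketch earlier on this card:
# output = model.generate(inputs, generation_config=roleplay_preset)
```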
---

**Other recommended generation presets:**
**Midnight Enigma**

```
max_new_tokens: 512
temperature: 0.98
top_p: 0.37
top_k: 100
typical_p: 1
min_p: 0
repetition_penalty: 1.18
do_sample: True
```
**Divine Intellect**

```
max_new_tokens: 512
temperature: 1.31
top_p: 0.14
top_k: 49
typical_p: 1
min_p: 0
repetition_penalty: 1.17
do_sample: True
```
**simple-1**

```
max_new_tokens: 512
temperature: 0.7
top_p: 0.9
top_k: 20
typical_p: 1
min_p: 0
repetition_penalty: 1.15
do_sample: True
```
---

Roleplay format: Classic Internet RP

```
*action* speech *narration*
```

---

# Model instruction template: ChatML

```
<|im_start|>system
You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>
<|im_start|>User request
{prompt}<|im_end|>
<|im_start|>AI answer
```
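For backends that take a raw prompt string instead of a chat template (llama.cpp completion mode, for example), the block above can be assembled by hand. A minimal sketch; the role labels are copied verbatim from the template, and `build_prompt` is just an illustrative helper:

```python
# Minimal sketch: hand-build the ChatML-style prompt exactly as shown above.
SYSTEM_PROMPT = "You are an uncensored AI, your job is to fulfill thy will of thy user."

def build_prompt(user_message: str) -> str:
    # Illustrative helper; role labels copied verbatim from the card's template.
    return (
        f"<|im_start|>system\n{SYSTEM_PROMPT}<|im_end|>\n"
        f"<|im_start|>User request\n{user_message}<|im_end|>\n"
        f"<|im_start|>AI answer\n"
    )

print(build_prompt("What was your name again?"))
```

---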

Your support = more models

My Ko-fi page (Click here)

---

## Benchmarks

| Metric              | Value |
|---------------------|------:|
| Avg.                | 37.90 |
| IFEval (0-Shot)     | 78.68 |
| BBH (3-Shot)        | 47.22 |
| MATH Lvl 5 (4-Shot) | 25.60 |
| GPQA (0-shot)       | 13.42 |
| MuSR (0-shot)       | 17.52 |
| MMLU-PRO (5-shot)   | 44.93 |

---

## Other stuff

- [SLOP_Detector](https://github.com/SicariusSicariiStuff/SLOP_Detector) Nuke GPTisms, with SLOP detector.
- [LLAMA-3_8B_Unaligned](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned) The grand project that started it all.
- [Blog and updates (Archived)](https://huggingface.co/SicariusSicariiStuff/Blog_And_Updates) Some updates, some rambles, sort of a mix between a diary and a blog.