Impish_QWEN_14B-1M
Impish_QWEN_14B-1M

Click here for TL;DR


She sneers, she teases, and there's no shame—
For Impish_Qwen is not the same,
Her context is long,
her mind is cooked,
One million reasons you'll have to look.

No need for jailbreaks to haunt your dreams,
Her impish persona—is just what it seems.
Impish by name—
But also by deeds,
14B parameters—is just what you need.


Impish_QWEN_14B-1M is available at the following quantizations:


TL;DR

  • Supreme context One million tokens to play with.
  • Strong Roleplay internet RP format lovers will appriciate it, medium size paragraphs.
  • Qwen smarts built-in, but naughty and playful Maybe it's even too naughty.
  • VERY compliant with low censorship.
  • VERY high IFeval for a 14B RP model: 78.68.

Important: Make sure to use the correct settings!

Assistant settings

Roleplay settings


Model Details

  • Intended use: Role-Play, Creative Writing, General Tasks.

  • Censorship level: Low

  • X / 10 (10 completely uncensored)

Waiting for UGI results

UGI score:

Pending


More details

This model was trained with lots of good stuff, some new to this specific model. Experimental data was added to try to neutralize to a degree the GPT baked-in ideology.

Fun fact 1: This is the first Qwen/Qwen2.5-14B-Instruct-1M finetune!

Released on the 27th of January, 2025.

Fun fact 2: Impish_QWEN_14B has higher bechmarks than the original Qwen2.5-14B-Instruct-1M finetune!

Impish_QWEN_14B-1M_1st

Recommended settings for assistant mode

Full generation settings: Debug Deterministic. Negative_LLAMA_70B_Settings
Full generation settings: min_p. Negative_LLAMA_70B_Settings

Recommended settings for Roleplay mode

Roleplay settings:. A good repetition_penalty range is between 1.12 - 1.15, feel free to experiment.

With these settings, each output message should be neatly displayed in 1 - 3 paragraphs, 1 - 2 is the most common. A single paragraph will be output as a response to a simple message ("What was your name again?").

min_P for RP works too but is more likely to put everything under one large paragraph, instead of a neatly formatted short one. Feel free to switch in between.

(Open the image in a new window to better see the full details) Negative_LLAMA_70B_Settings

temperature:  0.8
top_p:  0.95
top_k:  25
typical_p:  1
min_p:  0
repetition_penalty: 1.12
repetition_penalty_range: 1024

Other recommended generation Presets:

Midnight Enigma
max_new_tokens: 512
temperature: 0.98
top_p: 0.37
top_k: 100
typical_p: 1
min_p: 0
repetition_penalty: 1.18
do_sample: True
Divine Intellect
max_new_tokens: 512
temperature: 1.31
top_p: 0.14
top_k: 49
typical_p: 1
min_p: 0
repetition_penalty: 1.17
do_sample: True
simple-1
max_new_tokens: 512
temperature: 0.7
top_p: 0.9
top_k: 20
typical_p: 1
min_p: 0
repetition_penalty: 1.15
do_sample: True

Roleplay format: Classic Internet RP

*action* speech *narration*

Model instruction template: ChatML

<|im_start|>system
You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>
<|im_start|>User request
{prompt}<|im_end|>
<|im_start|>AI answer

Your support = more models

My Ko-fi page (Click here)

Benchmarks

Metric Value
Avg. 37.90
IFEval (0-Shot) 78.68
BBH (3-Shot) 47.22
MATH Lvl 5 (4-Shot) 25.60
GPQA (0-shot) 13.42
MuSR (0-shot) 17.52
MMLU-PRO (5-shot) 44.93

Other stuff

Downloads last month
76
Safetensors
Model size
14.8B params
Tensor type
FP16
·
Inference Providers NEW
This model is not currently available via any of the supported third-party Inference Providers, and HF Inference API was unable to determine this model's library.

Model tree for SicariusSicariiStuff/Impish_QWEN_14B-1M

Base model

Qwen/Qwen2.5-14B
Finetuned
(9)
this model
Merges
3 models
Quantizations
6 models

Collection including SicariusSicariiStuff/Impish_QWEN_14B-1M