license: apache-2.0
language:
- en

"As she lay her head on the snow-white pillow,
no rest was graced upon her thoughts.
Her impish mind wandered about,
like the mischievous insight that it was.
Her in-dream words, laced with such an impish cleverness
that even a trickster god might have to pause.
Every impish flicker of her reverence—
needed no reason, nor a cause.
Woven with the silken thread of lascivious deeds indeed.
There was no mistaking of her bright—
it was an impish mind all right..."
The model card will be updated once the evaluation results will arive.
Model Details
Intended use: Role-Play, Creative Writing, General Tasks.
Censorship level: Medium - Low
X / 10 (10 completely uncensored)
- UGI score:
Awaiting results
This model was trained with new data and a new approach (compared to my other models). While it may be more censored, it is expected to be significantly smarter. The data used is quite unique, and is also featuring long and complex markdown datasets.
Regarding censorship: Whether uncensoring or enforcing strict censorship, the model tends to lose some of its intelligence. The use of toxic data was kept to a minimum with this model.
Consequently, the model is likely to refuse some requests, this is easly avoidable with a basic system prompt, or assistant impersonation ("Sure thing!..."). Unlike many RP models, this one is designed to excel at general assistant tasks as well.
New abilities included are:
- Personality and character analysis (highly entertaining and recommended, see examples)
- Very good usage of tables
- Greatly improved markdown understanding
- Lots of additional general knowledge
The fun stuff:
- Edgy and paranoid persona, when not in strict assistant mode
- 4chan schizo energy, just the right amount
- Very strong creative writing
- A somewhat unique Role Play flavor
Impish_Mind_8B is available at the following quantizations:
- Original: FP16
- GGUF: Static Quants | iMatrix_GGUF
- EXL2: 3.5 bpw | 4.0 bpw | 5.0 bpw | 6.0 bpw | 7.0 bpw | 8.0 bpw
- Specialized: FP8
- Mobile (ARM): Q4_0_X_X
Model instruction template: Llama-3-Instruct
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
Recommended generation Presets:
Midnight Enigma
max_new_tokens: 512temperature: 0.98
top_p: 0.37
top_k: 100
typical_p: 1
min_p: 0
repetition_penalty: 1.18
do_sample: True
min_p
max_new_tokens: 512temperature: 1
top_p: 1
top_k: 0
typical_p: 1
min_p: 0.05
repetition_penalty: 1
do_sample: True
Divine Intellect
max_new_tokens: 512temperature: 1.31
top_p: 0.14
top_k: 49
typical_p: 1
min_p: 0
repetition_penalty: 1.17
do_sample: True
simple-1
max_new_tokens: 512temperature: 0.7
top_p: 0.9
top_k: 20
typical_p: 1
min_p: 0
repetition_penalty: 1.15
do_sample: True
Support

- My Ko-fi page ALL donations will go for research resources and compute, every bit is appreciated 🙏🏻
Benchmarks
SOON
Other stuff
- Blog and updates Some updates, some rambles, sort of a mix between a diary and a blog.
- SLOP_Detector Nuke GPTisms, with SLOP detector.
- LLAMA-3_8B_Unaligned The grand project that started it all.