|
--- |
|
license: mit |
|
language: |
|
- en |
|
base_model: |
|
- microsoft/phi-4 |
|
tags: |
|
- not-for-all-audiences |
|
--- |
|
|
|
<div align="center"> |
|
<b style="font-size: 40px;">Phi-Line_14B</b> |
|
|
|
|
|
</div> |
|
|
|
|
|
<img src="https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B/resolve/main/Images/Phi-Line_14B.png" alt="Phi-Line_14B" style="width: 70%; min-width: 500px; display: block; margin: auto;"> |
|
|
|
|
|
--- |
|
|
|
<style> |
|
.hf-links, .hf-tldr{ |
|
display:flex;justify-content:center;align-items:center;flex-wrap:wrap; |
|
gap:14px;margin:16px 0; |
|
} |
|
.hf-links a, .hf-tldr a{ |
|
display:flex;flex-direction:column;align-items:center;justify-content:center; |
|
text-align:center;text-decoration:none;font-weight:700;line-height:1.15; |
|
padding:10px 16px;border-radius:14px;border:2px solid currentColor; |
|
transition:transform .15s ease,box-shadow .15s ease,background-color .15s ease,color .15s ease; |
|
} |
|
|
|
.hf-tldr a{ |
|
font-size:48px;color:purple;min-width:100%; |
|
} |
|
.hf-tldr a:hover{ |
|
transform:translateY(-2px); |
|
background:rgba(128,0,128,.1); |
|
box-shadow:0 8px 22px rgba(128,0,128,.45); |
|
color:#fff; |
|
} |
|
|
|
|
|
.hf-links a{ |
|
font-size:20px;min-width:240px;max-width:280px; |
|
} |
|
.hf-links a .top{font-size:16px;opacity:.9;} |
|
.hf-links a .bottom{font-size:20px;} |
|
|
|
.hf-links a.red{color:#E31515;} |
|
.hf-links a.yellow{color:#FFC800;} |
|
.hf-links a.green{color:#64FF00;} |
|
|
|
.hf-links a:hover{ |
|
transform:translateY(-1px); |
|
background:rgba(255,255,255,0.04); |
|
box-shadow:0 6px 18px rgba(0,0,0,.15), inset 0 0 0 9999px rgba(255,255,255,.02); |
|
} |
|
.hf-links a.red:hover{ |
|
background:rgba(227,21,21,.12); |
|
box-shadow:0 8px 20px rgba(227,21,21,.35); |
|
color:#fff; |
|
} |
|
.hf-links a.yellow:hover{ |
|
background:rgba(255,200,0,.15); |
|
box-shadow:0 8px 20px rgba(255,200,0,.35); |
|
color:#111; |
|
} |
|
.hf-links a.green:hover{ |
|
background:rgba(100,255,0,.14); |
|
box-shadow:0 8px 20px rgba(100,255,0,.35); |
|
color:#093; |
|
} |
|
|
|
/* mobile stacking */ |
|
@media (max-width:520px){ |
|
.hf-links a{min-width:100%;max-width:100%;} |
|
.hf-tldr a{font-size:36px;} |
|
} |
|
</style> |
|
|
|
<div class="hf-tldr"> |
|
<a href="https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B#tldr"> |
|
Click here for TL;DR |
|
</a> |
|
</div> |
|
|
|
--- |
|
|
|
<div class="hf-links"> |
|
<a class="red" href="https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B#available-quantizations"> |
|
<span class="top">Click here</span> |
|
<span class="bottom">for quantizations</span> |
|
</a> |
|
|
|
<a class="yellow" href="https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B#recommended-settings-for-assistant-mode"> |
|
<span class="top">Click here</span> |
|
<span class="bottom">for recommended settings</span> |
|
</a> |
|
|
|
<a class="green" href="https://ko-fi.com/sicarius"> |
|
<span class="top">Click here</span> |
|
<span class="bottom">to buy me a coffee</span> |
|
</a> |
|
</div> |
|
|
|
--- |
|
|
|
Unlike its lobotomized [Phi-lthy](https://huggingface.co/SicariusSicariiStuff/Phi-lthy4) sister, this one **kept all the brain cells**. |
|
|
|
## Wow! It must be so much better! |
|
|
|
This makes perfect sense, of course! But... it's **not** how this AI **voodoo works**. |
|
|
|
Is it **smarter?** Yes, it's **much smarter** (more brain cells, no lobotomy), but it's not as creative or as outright **unhinged**. The **brain-damaged** sister was pretty much like the stereotypical **schizo artist on psychedelics**. I swear, these blobs of tensors show some uncanny similarities to human truisms. |
|
|
|
Anyway, here's what's interesting: |
|
- I used the **exact** same data I've used for [Phi-lthy](https://huggingface.co/SicariusSicariiStuff/Phi-lthy4) |
|
- I used the **exact** same training parameters |
|
- Results are **completely different** |
|
|
|
What gives? And the weirdest part? This one is **less** stable in RP than the lobotomized model! Talk about counterintuitive... After 1-2 swipes it **will stabilize** and becomes **very pleasant to play with**, in my opinion, but it's still... **weird**. It shouldn't be like that, yet it is 🤷🏼♂️ |
|
|
|
To conclude, this model is **not** an upgrade to [Phi-lthy](https://huggingface.co/SicariusSicariiStuff/Phi-lthy4); it's not **better** and not **worse**, it's simply different. |
|
|
|
What's similar? It's quite low on **SLOP**, but [Phi-lthy](https://huggingface.co/SicariusSicariiStuff/Phi-lthy4) is even lower (**this model**, however, did not end up sacrificing smarts and assistant capabilities for its creativity and relative sloplessness). |
|
|
|
--- |
|
|
|
# Included Character cards in this repo: |
|
|
|
- [Vesper](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B/resolve/main/Character_Cards/Vesper.png) (Schizo **Space Adventure**) |
|
- [Nina_Nakamura](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B/resolve/main/Character_Cards/Nina_Nakamura.png) (The **sweetest** dorky co-worker) |
|
- [Employee#11](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B/resolve/main/Character_Cards/Employee%2311.png) (**Schizo workplace** with a **schizo worker**) |
|
|
|
--- |
|
|
|
### TL;DR |
|
- **Excellent Roleplay** with more brains. (Who would have thought Phi-4 models would be good at this? So weird...) |

- **Medium-length** responses (1-4 paragraphs, usually 2-3). |

- **Excellent assistant** that follows instructions well enough, and keeps good formatting. |

- Strong **Creative writing** abilities. Will obey requests regarding formatting (markdown headlines for paragraphs, etc.). |

- Writes and roleplays **quite uniquely**, probably because of the lack of RP/writing slop in the **pretrain**. This is just my guesstimate. |
|
- **LOW refusals** - Total freedom in RP, can do things other RP models won't, and I'll leave it at that. Low refusals in assistant tasks as well. |
|
- **VERY good** at following the **character card**. Math brain is used for gooner tech, as it should be. |
|
|
|
### Important: Make sure to use the correct settings! |
|
[Assistant settings](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B#recommended-settings-for-assistant-mode) |
|
|
|
[Roleplay settings](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B#recommended-settings-for-roleplay-mode) |
|
|
|
|
|
--- |
|
|
|
## Available quantizations: |
|
|
|
- Original: [FP16](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B) |
|
- GGUF & iMatrix: [GGUF](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B_GGUF) | [iMatrix](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B_iMatrix) |
|
- EXL2: [3.0 bpw](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B-3.0bpw) | [3.5 bpw](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B-3.5bpw) | [4.0 bpw](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B-4.0bpw) | [5.0 bpw](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B-5.0bpw) | [6.0 bpw](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B-6.0bpw) | [7.0 bpw](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B-7.0bpw) | [8.0 bpw](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B-8.0bpw) |
|
- GPTQ: [4-Bit-g32](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B_GPTQ) |
|
- Specialized: [FP8](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B_FP8) |
|
- Mobile (ARM): [Q4_0](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B_ARM) |
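
Want to just run it locally? Below is a minimal sketch using llama-cpp-python with the GGUF repo above. The quant filename pattern is an assumption (check the GGUF repo's file list for the exact names), and the context size is up to your hardware.

```python
# Minimal sketch: run a GGUF quant of Phi-Line_14B with llama-cpp-python.
# The filename glob is an assumption -- check the GGUF repo for the exact quant you want.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="SicariusSicariiStuff/Phi-Line_14B_GGUF",
    filename="*Q4_K_M.gguf",   # hypothetical pattern; pick any quant from the repo
    n_ctx=8192,                # context length is your choice / hardware dependent
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are an uncensored AI, your job is to fulfill thy will of thy user."},
        {"role": "user", "content": "What was your name again?"},
    ],
    temperature=0.8, top_p=0.95, top_k=25, min_p=0.0, repeat_penalty=1.12,
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```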
|
--- |
|
|
|
## Model Details |
|
|
|
- Intended use: **Role-Play**, **Creative Writing**, **General Tasks**. |
|
|
|
- Censorship level: <b>Medium</b> |
|
|
|
- **5 / 10** (10 completely uncensored) |
|
|
|
|
|
## UGI score: |
|
|
|
|
|
|
|
<img src="https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B/resolve/main/Images/UGI.png" style="width: 100%; min-width: 600px; display: block; margin: auto;"> |
|
|
|
|
|
|
|
|
|
--- |
|
|
|
|
|
|
|
## Recommended settings for assistant mode |
|
<details> |
|
<summary>Full generation settings: <b>Debug Deterministic</b>.</summary> |
|
|
|
<img src="https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow/resolve/main/Presets/Debug-deterministic.png" alt="Debug Deterministic_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;"> |
|
|
|
</details> |
|
|
|
<details> |
|
<summary>Full generation settings: <b>min_p</b>.</summary> |
|
|
|
<img src="https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow/resolve/main/Presets/min_p.png" alt="min_P_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;"> |
|
|
|
</details> |
|
|
|
--- |
|
|
|
## Recommended settings for Roleplay mode |
|
|
|
<details> |
|
<summary><b>Roleplay settings</b>.</summary> |
|
A good repetition_penalty range is <b>between 1.12 and 1.15</b>; feel free to experiment. |
|
|
|
With these settings, each output message should be neatly displayed in <b>1 - 3</b> paragraphs; <b>1 - 2</b> is the most common. A single paragraph will be output in response to a simple message ("What was your name again?"). |
|
|
|
<b>min_P</b> for RP works too, but it is more likely to put everything into one large paragraph instead of neatly formatted short ones. Feel free to switch between the two. |
|
|
|
<b>(Open the image in a new window to better see the full details)</b> |
|
<img src="https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B/resolve/main/Presets/Negative_LLAMA_70B_RP.png" alt="Roleplay_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;"> |
|
|
|
``` |
|
|
|
temperature: 0.8 |
|
top_p: 0.95 |
|
top_k: 25 |
|
typical_p: 1 |
|
min_p: 0 |
|
repetition_penalty: 1.12 |
|
repetition_penalty_range: 1024 |
|
``` |
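
For API users: a small sketch (an assumption, not part of the card's presets) of the same values expressed as a Hugging Face `transformers` GenerationConfig. Note that `repetition_penalty_range` is a frontend/backend-specific setting (text-generation-webui, SillyTavern backends) with no direct `transformers` equivalent, and `min_p` requires a reasonably recent `transformers` version.

```python
# Sketch: the recommended RP sampler values as a transformers GenerationConfig.
# repetition_penalty_range is omitted here (frontend-specific, no transformers equivalent).
from transformers import GenerationConfig

rp_config = GenerationConfig(
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    top_k=25,
    typical_p=1.0,
    min_p=0.0,               # requires a transformers version with min_p support
    repetition_penalty=1.12,
    max_new_tokens=512,
)
# later: model.generate(**inputs, generation_config=rp_config)
```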
|
|
|
</details> |
|
|
|
|
|
<h2 style="color: darkorange; font-weight: bold; font-size: 65px; text-align: center;">Roleplay format: Classic Internet RP</h2> |
|
|
|
``` |
|
*action* speech *narration* |
|
``` |
|
|
|
- **min_p** will bias towards a **single big paragraph**. |
|
- The recommended RP settings will bias towards **1-3 small paragraphs** (on some occasions 4-5). |
|
|
|
--- |
|
|
|
# Regarding the format: |
|
|
|
It is **HIGHLY RECOMMENDED** to use the **Roleplay / Adventure format the model was trained on**; see the examples below for syntax. It allows **very fast and easy** writing of character cards with a **minimal amount of tokens**. It's a modification of an old-skool CAI-style format I call **SICAtxt** (**S**imple, **I**nexpensive **C**haracter **A**ttributes plain-text): |
|
|
|
--- |
|
|
|
## **SICAtxt** for **roleplay**: |
|
|
|
``` |
|
X's Persona: X is a ..... |
|
Traits: |
|
Likes: |
|
Dislikes: |
|
Quirks: |
|
Goals: |
|
|
|
Dialogue example |
|
``` |
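
For illustration only, here is a short, entirely hypothetical card written in this format (it is **not** one of the cards included in this repo):

```
Vivian's Persona: Vivian is a sarcastic starship mechanic who keeps the crew alive out of sheer stubbornness.
Traits: Blunt, loyal, chronically sleep-deprived
Likes: Strong coffee, old engines, quiet shifts
Dislikes: Paperwork, small talk
Quirks: Names every tool she owns
Goals: Keep the ship flying for one more jump

Dialogue example
Vivian: "If it rattles, it's fine. If it stops rattling, start praying."
```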
|
|
|
## **SICAtxt** for **Adventure**: |
|
|
|
``` |
|
Adventure: <short description> |
|
$World_Setting: |
|
$Scenario: |
|
``` |
|
|
|
--- |
|
|
|
# Model instruction template: ChatML |
|
|
|
``` |
|
<|im_start|>system |
|
You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|> |
|
<|im_start|>User request |
|
{prompt}<|im_end|> |
|
<|im_start|>AI answer |
|
``` |
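
If you're prompting the model directly without a frontend, a minimal sketch of assembling this template by hand with `transformers` could look like the following. The sampler values reuse the recommended RP settings from above; `torch_dtype` / `device_map` choices are assumptions, and the non-standard role labels are copied verbatim from the template above rather than taken from the tokenizer's built-in chat template.

```python
# Sketch: build the prompt exactly as shown above and generate with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SicariusSicariiStuff/Phi-Line_14B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"  # loading options are assumptions
)

def build_prompt(user_message: str) -> str:
    # Mirrors the template above, including the non-standard role labels.
    return (
        "<|im_start|>system\n"
        "You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>\n"
        "<|im_start|>User request\n"
        f"{user_message}<|im_end|>\n"
        "<|im_start|>AI answer\n"
    )

inputs = tok(build_prompt("What was your name again?"), return_tensors="pt").to(model.device)
out = model.generate(**inputs, do_sample=True, temperature=0.8, top_p=0.95, top_k=25,
                     repetition_penalty=1.12, max_new_tokens=512)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```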
|
|
|
--- |
|
|
|
**Other recommended generation Presets:** |
|
|
|
<details> |
|
<summary><b>Midnight Enigma</b></summary> |
|
``` |
|
max_new_tokens: 512 |
|
temperature: 0.98 |
|
top_p: 0.37 |
|
top_k: 100 |
|
typical_p: 1 |
|
min_p: 0 |
|
repetition_penalty: 1.18 |
|
do_sample: True |
|
``` |
|
|
|
|
|
</details> |
|
|
|
|
|
<details> |
|
<summary><b>Divine Intellect</b></summary> |
|
``` |
|
max_new_tokens: 512 |
|
temperature: 1.31 |
|
top_p: 0.14 |
|
top_k: 49 |
|
typical_p: 1 |
|
min_p: 0 |
|
repetition_penalty: 1.17 |
|
do_sample: True |
|
``` |
|
|
|
|
|
</details> |
|
|
|
<details> |
|
<summary><b>simple-1</b></summary> |
|
``` |
|
max_new_tokens: 512 |
|
temperature: 0.7 |
|
top_p: 0.9 |
|
top_k: 20 |
|
typical_p: 1 |
|
min_p: 0 |
|
repetition_penalty: 1.15 |
|
do_sample: True |
|
``` |
|
|
|
|
|
</details> |
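
One (assumed, not official) way to switch between these presets in code is to keep them as plain dicts and unpack them into `generate`:

```python
# Sketch: the presets above as plain dicts, swappable at call time.
PRESETS = {
    "midnight_enigma": dict(max_new_tokens=512, temperature=0.98, top_p=0.37, top_k=100,
                            typical_p=1.0, min_p=0.0, repetition_penalty=1.18, do_sample=True),
    "divine_intellect": dict(max_new_tokens=512, temperature=1.31, top_p=0.14, top_k=49,
                             typical_p=1.0, min_p=0.0, repetition_penalty=1.17, do_sample=True),
    "simple-1":         dict(max_new_tokens=512, temperature=0.7, top_p=0.9, top_k=20,
                             typical_p=1.0, min_p=0.0, repetition_penalty=1.15, do_sample=True),
}

# e.g. with a transformers model/tokenizer already loaded:
# out = model.generate(**inputs, **PRESETS["simple-1"])
```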
|
|
|
--- |
|
|
|
<h2 style="color: green; font-weight: bold; font-size: 65px; text-align: center;">Your support = more models</h2> |
|
<a href="https://ko-fi.com/sicarius" style="color: pink; font-weight: bold; font-size: 48px; text-decoration: none; display: block; text-align: center;">My Ko-fi page (Click here)</a> |
|
|
|
--- |
|
|
|
|
|
## Citation Information |
|
|
|
``` |
|
@llm{Phi-Line_14B, |
|
author = {SicariusSicariiStuff}, |
|
title = {Phi-Line_14B}, |
|
year = {2025}, |
|
publisher = {Hugging Face}, |
|
url = {https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B} |
|
} |
|
``` |
|
|
|
--- |
|
|
|
## Benchmarks |
|
|
|
|
|
| Metric |Value| |
|
|-------------------|----:| |
|
|Avg. |37.56| |
|
|IFEval (0-Shot) |64.96| |
|
|BBH (3-Shot) |43.79| |
|
|MATH Lvl 5 (4-Shot)|38.60| |
|
|GPQA (0-shot) |13.76| |
|
|MuSR (0-shot) |14.78| |
|
|MMLU-PRO (5-shot) |49.49| |
|
|
|
--- |
|
|
|
## Other stuff |
|
- [SLOP_Detector](https://github.com/SicariusSicariiStuff/SLOP_Detector) Nuke GPTisms with the SLOP Detector. |
|
- [LLAMA-3_8B_Unaligned](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned) The grand project that started it all. |
|
- [Blog and updates (Archived)](https://huggingface.co/SicariusSicariiStuff/Blog_And_Updates) Some updates, some rambles, sort of a mix between a diary and a blog. |
|
|
|
|