---
base_model:
  - PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
  - SicariusSicariiStuff/Redemption_Wind_24B
tags:
  - merge
  - mergekit
  - lazymergekit
  - PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
  - SicariusSicariiStuff/Redemption_Wind_24B
---

# WinterEngine-24B-Instruct

  
```
       ❄       ❄
        ❄     ❄
  ❄      ❄❄❄      ❄
    ❄❄❄❄❄❄❄❄❄
  ❄      ❄❄❄      ❄
        ❄     ❄
       ❄       ❄
```
    

## Key Details

- **BASE MODEL:** mistralai/Mistral-Small-24B-Base-2501
- **LICENSE:** apache-2.0
- **LANGUAGE:** English
- **CONTEXT LENGTH:** 32768 tokens

## Recommended Settings

- **TEMPERATURE:** 1.2
- **MIN_P:** 0.05

Leave every other sampler neutral (including the meme samplers).
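As a minimal sketch, these settings map onto a transformers `GenerationConfig` like so (assuming a reasonably recent transformers release with `min_p` support):

```python
from transformers import GenerationConfig

# Sketch only: the recommended samplers expressed as a GenerationConfig.
# Every other sampler is left at its neutral default.
generation_config = GenerationConfig(
    do_sample=True,   # sampling must be enabled for temperature/min_p to apply
    temperature=1.2,
    min_p=0.05,
)
```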

## Prompting Format

WinterEngine uses ChatML:

```
<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hello, WinterEngine!<|im_end|>
<|im_start|>assistant
Hello! How can I help you today?<|im_end|>
```
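If you load the model with transformers, the tokenizer's chat template should build this format for you. A minimal sketch, assuming the repo id `Darkknight535/WinterEngine-24B-Instruct` (substitute wherever the weights are hosted) and that the tokenizer ships a ChatML template:

```python
from transformers import AutoTokenizer

# Assumed repo id; adjust to the actual location of the weights.
tokenizer = AutoTokenizer.from_pretrained("Darkknight535/WinterEngine-24B-Instruct")

messages = [
    {"role": "system", "content": "system prompt"},
    {"role": "user", "content": "Hello, WinterEngine!"},
]

# add_generation_prompt=True appends the opening <|im_start|>assistant tag,
# so the model continues as the assistant.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```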

## Story

You can skip this if you want, but I just wanted to share something. I was trying to create a model that follows prompts well, stays uncensored, and brings a lot of creativity, especially in roleplay. I started with the base 24B Instruct model; it was decent, but felt a bit dry and overly censored. So I began testing and merging different models.

I found PersonalityEngine 24B, which followed instructions well and had solid roleplay potential, though it felt a little bland. Then I discovered Redemption Wind: much better at roleplay, but not as strong at following instructions. After trying three different merges, this pairing turned out to be the best combination.

The result? A model that follows instructions, excels at roleplay, and, for my single folks out there, works great for AI girlfriend roleplay too.

This model was merged using LazyMergekit.

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
        layer_range: [0, 40]
      - model: SicariusSicariiStuff/Redemption_Wind_24B
        layer_range: [0, 40]
merge_method: slerp
base_model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
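To reproduce the merge itself, the config above can be passed to mergekit's `mergekit-yaml` entry point.

## 💻 Usage

A minimal sketch of running the merged model with transformers and the recommended samplers. The repo id is an assumption (substitute wherever the weights are hosted), and a 24B model in bfloat16 needs roughly 48 GB of accelerator memory:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Darkknight535/WinterEngine-24B-Instruct"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Hello, WinterEngine!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=1.2, min_p=0.05
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```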