WinterEngine-24B-Instruct

  
       ❄       ❄         
        ❄     ❄          
  ❄      ❄❄❄      ❄     
     ❄❄❄❄❄❄❄❄❄      
  ❄      ❄❄❄      ❄     
        ❄     ❄          
       ❄       ❄        
    

Key Details

BASE MODEL: mistralai/Mistral-Small-24B-Base-2501
LICENSE: apache-2.0
LANGUAGE: English
CONTEXT LENGTH: 32768 tokens

Recommended Settings

TEMPERATURE: 1.2
MIN_P: 0.05
(Keep everything else neutral, including the "meme" samplers.)
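As a sketch of where these settings land, here is a minimal transformers generation call using the recommended values (the prompt is illustrative, and min_p requires a reasonably recent transformers release):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Darkknight535/WinterEngine-24B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Once upon a winter night,", return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.2,   # recommended
    min_p=0.05,        # recommended
    max_new_tokens=256,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))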

Prompting Format

<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hello, WinterEngine!<|im_end|>
<|im_start|>assistant
Hello! How can I help you today?<|im_end|>
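This is standard ChatML. If you are running the model with transformers, the tokenizer's chat template should render this layout for you, assuming the repo ships one (a minimal sketch; the messages are illustrative):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Darkknight535/WinterEngine-24B-Instruct")
messages = [
    {"role": "system", "content": "system prompt"},
    {"role": "user", "content": "Hello, WinterEngine!"},
]
# add_generation_prompt appends the opening <|im_start|>assistant tag
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)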

Quants

imatrix (i1): https://huggingface.co/mradermacher/WindEngine-24B-Instruct-i1-GGUF
Static: https://huggingface.co/mradermacher/WindEngine-24B-Instruct-GGUF

Big thanks to mradermacher for the quants.
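If you'd rather run the GGUF quants locally, here is a minimal llama-cpp-python sketch (the quant filename is an assumption; pick whichever file in the repo fits your hardware):

from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/WindEngine-24B-Instruct-GGUF",
    filename="*Q4_K_M*",  # assumed quant choice; any file in the repo works
    n_ctx=32768,          # full advertised context length
)
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "system prompt"},
        {"role": "user", "content": "Hello, WinterEngine!"},
    ],
    temperature=1.2,  # recommended settings from above
    min_p=0.05,
)
print(out["choices"][0]["message"]["content"])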

Story

You can ignore this if you want, but I just wanted to share something.
I was trying to create a model that follows prompts well, stays uncensored, and brings a lot of creativity, especially for roleplay.
I started out with the base 24B Instruct model; it was decent, but felt a bit dry and overly censored.
So I began testing and merging different models.
I then found PersonalityEngine 24B, which followed instructions well and had solid roleplay potential, though it felt a little bland.
Next I discovered Redemption Wind, which was much better at roleplay but not as strong at following instructions. After trying three different model merges, this pairing turned out to be the best combination.
The result? A model that follows instructions, excels at roleplay, and, for my single folks out there, works great for AI girlfriend roleplay, too.

Made with LazyMergekit:

🧩 Configuration

slices:
  - sources:
      # both parents contribute all 40 layers
      - model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
        layer_range: [0, 40]
      - model: SicariusSicariiStuff/Redemption_Wind_24B
        layer_range: [0, 40]
merge_method: slerp  # spherical linear interpolation between the two parents
base_model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
parameters:
  t:  # interpolation weight: 0 = base model, 1 = Redemption_Wind
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]  # attention shifts toward Redemption_Wind with depth
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]  # MLP follows the opposite curve
    - value: 0.5  # everything else is an even blend
dtype: bfloat16
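To reproduce the merge, save the config above to a YAML file and run it through mergekit's CLI, e.g. mergekit-yaml winterengine.yml ./WinterEngine-24B-Instruct (the file and output names here are placeholders).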