# What is this?

This is a model merge. I only tested it with a Q4_K_S quant, so these impressions may not reflect the final quality. Overall it's a decent model, neither great nor bad, and still good for RP/ERP if you have 16-24 GB of VRAM.

Recommended prompt formats: ChatML or Mistral V3 Instruct. Or experiment and find what works best for you. Have fun!
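
For reference, a minimal inference sketch using transformers and the model's built-in chat template (the dtype and sampling settings below are assumptions, not values from my testing):

```python
# Minimal usage sketch: load the merged model and prompt it via its chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DoppelReflEx/Mimicore-WhiteSnake-22B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # full BF16 needs ~44 GB; use a GGUF quant (e.g. Q4_K_S) on 16-24 GB cards
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a creative roleplay partner."},
    {"role": "user", "content": "Describe the tavern we just walked into."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```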

## Merge Details

### Models Merged

The following models were included in the merge:

- knifeayumu/Cydonia-v1.2-Magnum-v4-22B
- Steelskull/MSM-MS-Cydrion-22B

with TheDrummer/Cydonia-22B-v1.3 as the base model.

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: knifeayumu/Cydonia-v1.2-Magnum-v4-22B
    parameters:
      density: 0.9
      weight: 1
  - model: Steelskull/MSM-MS-Cydrion-22B
    parameters:
      density: 0.6
      weight: 0.8
merge_method: dare_ties
base_model: TheDrummer/Cydonia-22B-v1.3
tokenizer_source: base
```
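
To reproduce the merge, the config above can be fed to mergekit. A rough sketch using its Python API, assuming a recent mergekit version and that the YAML is saved as `config.yml` (the output path and option values are placeholders, not the exact settings used for this model):

```python
# Rough sketch: run the DARE-TIES merge above through mergekit's Python API.
# Assumes `pip install mergekit`; adjust paths and options for your hardware.
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Mimicore-WhiteSnake-22B",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # GPU speeds things up but is not required
        copy_tokenizer=True,             # config sets tokenizer_source: base
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```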
