ICEBLINK

Overview

An experimental GLM-4.5 Air finetune.

I had this one in the works for a while, but I was struggling to find the right hyperparameters to get this model to behave nicely. Thank you to TheDrummer for helping me out with them.

This is a creative writing and RP model, and it's pretty verbose. The intent is to keep the behavior of the original model while slightly improving writing, dialogue, and creativity.

SillyTavern Settings

Recommended Roleplay Format

> Actions: In plaintext
> Dialogue: "In quotes"
> Thoughts: *In asterisks*
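
Put together, a single message in this format looks like:

> She sets the mug down without looking up. "You're late," she mutters. *Typical.*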

Recommended Samplers

> Temp: 0.8
> MinP: 0.05
> TopP: 0.95
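
If you're running the model outside SillyTavern, these map directly onto API sampling parameters. Below is a minimal sketch assuming a local llama.cpp-style OpenAI-compatible server; the URL, port, and model name are placeholders, and `min_p` is a backend extension rather than part of the official OpenAI API:

```python
import requests

# Placeholder endpoint: llama.cpp's server (and similar local
# backends) exposes an OpenAI-compatible chat API and accepts
# min_p as an extension parameter.
URL = "http://localhost:8080/v1/chat/completions"

payload = {
    "model": "GLM-4.5-Iceblink-106B-A12B",  # placeholder model name
    "messages": [
        {"role": "system", "content": "You are a creative RP partner."},
        {"role": "user", "content": "The tavern door creaks open..."},
    ],
    # Recommended samplers from this card.
    "temperature": 0.8,
    "min_p": 0.05,
    "top_p": 0.95,
}

response = requests.post(URL, json=payload, timeout=300)
print(response.json()["choices"][0]["message"]["content"])
```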

Instruct

GLM-4.5 (no thinking): SillyTavern Preset

Quantizations

Creation Process

SFT

SFT on approximately 10 million tokens of SFW / NSFW RP, stories, and creative instruct & chat data.

MoEs are brutal to train even with a small dataset like mine, so I took a different approach than usual: a very low LR, in an effort to avoid having to apply DPO / KTO training afterwards.
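
For a sense of what that means in practice, here's a minimal sketch of a conservative low-LR SFT configuration using Hugging Face `TrainingArguments`. Every value below is an illustrative assumption, not the actual Iceblink training config:

```python
from transformers import TrainingArguments

# Illustrative values only -- not the actual Iceblink config.
# The key idea: keep the learning rate very low so the finetune
# stays close to the base model and avoids needing a DPO / KTO
# correction pass afterwards.
args = TrainingArguments(
    output_dir="iceblink-sft",
    learning_rate=5e-6,              # deliberately low LR
    num_train_epochs=1,              # small dataset (~10M tokens)
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,  # effective batch of 16 sequences
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    bf16=True,
    logging_steps=10,
)
```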

I think there's likely a better config to be found, but experimenting with the model to find it is quite draining.
