Mistral-Small-Gutenberg-Doppel-22B - EXL2 6.8bpw
This is a 6.8bpw EXL2 quant of nbeerbower/Mistral-Small-Gutenberg-Doppel-22B.
This quant was made using exllamav2-0.2.2 with its default calibration dataset.
I briefly tested this quant in a few random RPs (including some with 8k+ context) and it seems to work fine.
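For anyone running it outside a frontend, here's a minimal sketch of loading the quant with the exllamav2 Python API (patterned after the exllamav2 example code; the local path, context length, and prompt are placeholders):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Placeholder path: a local download of
# DeusImperator/Mistral-Small-Gutenberg-Doppel-22B_exl2_6.8bpw
model_dir = "./Mistral-Small-Gutenberg-Doppel-22B_exl2_6.8bpw"

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
# Allocate the cache first so the weights can be auto-split across GPUs around it
cache = ExLlamaV2Cache(model, max_seq_len=16384, lazy=True)
model.load_autosplit(cache, progress=True)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(
    prompt="[INST] Write a short scene set in a storm-lit harbor.[/INST]",
    max_new_tokens=200,
))
```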
Prompt Templates
Uses the Mistral v2/v3 instruct format.
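In case the template isn't wired up in your frontend, here's a hypothetical helper sketching that layout (my reading of the v2/v3 convention: no dedicated system role, system text folded into the first user turn, assistant replies closed with </s>; check your frontend's Mistral preset for the exact spacing):

```python
def build_mistral_prompt(history, user_message, system=""):
    """Hypothetical Mistral v2/v3-style prompt builder, not part of this repo."""
    prompt = "<s>"
    for i, (user, assistant) in enumerate(history + [(user_message, None)]):
        if i == 0 and system:
            # v2/v3 has no system role; prepend system text to the first user turn
            user = f"{system}\n\n{user}"
        prompt += f"[INST] {user}[/INST]"
        if assistant is not None:
            prompt += f" {assistant}</s>"
    return prompt

# The model is expected to continue after the final [/INST]
print(build_mistral_prompt([], "Describe the harbor at dusk.", system="You are a vivid narrator."))
```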
Original README below
Mistral-Small-Gutenberg-Doppel-22B
mistralai/Mistral-Small-Instruct-2409 finetuned on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.
Method
ORPO-tuned with an A40 on RunPod (plz sponsor me) for 3 epochs.
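For the curious, a rough sketch of what such a run looks like with trl's ORPOTrainer; the 3 epochs and the two datasets come from this card, while everything else (batch size, learning rate, beta) is illustrative, and the exact trainer kwargs depend on your trl version:

```python
import torch
from datasets import concatenate_datasets, load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "mistralai/Mistral-Small-Instruct-2409"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base)

# Keep only the preference columns ORPOTrainer expects before concatenating
columns = ["prompt", "chosen", "rejected"]
train = concatenate_datasets([
    load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train").select_columns(columns),
    load_dataset("nbeerbower/gutenberg2-dpo", split="train").select_columns(columns),
])

args = ORPOConfig(
    output_dir="mistral-small-gutenberg-doppel-orpo",
    num_train_epochs=3,              # from the card; remaining values are illustrative
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    beta=0.1,                        # weight of the odds-ratio term in the ORPO loss
)

trainer = ORPOTrainer(model=model, args=args, train_dataset=train, tokenizer=tokenizer)
trainer.train()
```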