
A simple unalignment fine-tune on ~900k tokens aiming to make the model more compliant and willing to handle user requests.

This is the same unalignment training seen in concedo/Beepo-22B, so big thanks to concedo for the dataset.

The chat template is the same as the original model's: ChatML.
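As a quick reference, here is a minimal sketch of what a ChatML prompt looks like. The helper function below is hypothetical (not part of this repo); it simply renders a list of role/content messages with the `<|im_start|>`/`<|im_end|>` markers that ChatML uses.

```python
# Hypothetical helper: render a conversation in ChatML, the template
# this model expects (same as the original Qwen2.5 Instruct models).
def to_chatml(messages):
    """Render a list of {"role": ..., "content": ...} dicts as a ChatML prompt."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # A trailing assistant header cues the model to generate its reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

In practice, `tokenizer.apply_chat_template(...)` from `transformers` produces this formatting for you when the tokenizer ships a ChatML template.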


Model tree for ReadyArt/Qwen2.5-14B-Instruct-1M-Unalign_EXL2_3.5bpw_H8

Base model: Qwen/Qwen2.5-14B → quantized → this model
