DISCLAIMER: This model is an experimental project by a beginner in fine-tuning. Output quality is not guaranteed, so please do not use it for production or professional work.

Remember to adjust the sampling temperature when calling the chat completions API.
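As a minimal sketch, once a server is running (see Usage below), a request with an explicit temperature might look like the following. The localhost URL is vLLM's default, and the sampling values are assumptions loosely following the upstream Qwen3 card's suggestions (around 0.6 for thinking mode, 0.7 for non-thinking):

# example chat completions request with an explicit temperature
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "hhzm/qwen3-14b-meow",
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.6,
    "top_p": 0.95
  }'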

Usage

pip install "vllm>=0.8.5"

Use --enable-auto-tool-choice --tool-call-parser hermes to enable tool calling.

# enable reasoning
VLLM_ALLOW_LONG_MAX_MODEL_LEN=1 vllm serve hhzm/qwen3-14b-meow --reasoning-parser qwen3 --enable-auto-tool-choice --tool-call-parser hermes 

# disable reasoning
VLLM_ALLOW_LONG_MAX_MODEL_LEN=1 vllm serve hhzm/qwen3-14b-meow --chat-template qwen3-14b-meow/qwen3_nonthinking.jinja --enable-auto-tool-choice --tool-call-parser hermes 
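With tool calling enabled as above, requests can pass tool definitions in the standard OpenAI-compatible format. The get_weather tool below is a hypothetical example for illustration, not something shipped with this model:

# example tool-calling request; get_weather is a made-up tool
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "hhzm/qwen3-14b-meow",
    "messages": [{"role": "user", "content": "What is the weather in Tokyo?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }]
  }'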

For a longer context window (>40,960 tokens), use YaRN RoPE scaling; the scaling factor is adjustable.

The environment variable VLLM_ALLOW_LONG_MAX_MODEL_LEN=1 is required to enable context lengths greater than 40,960 tokens.

# enable YaRN rope scaling
--rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max_model_len 131072 
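For reference, a full serve command combining the YaRN flags with the reasoning-enabled invocation above might look like this; the factor of 4.0 and the 131072 max length are the example values shown above, not fixed requirements:

# example: reasoning + tool calling + YaRN scaling to 131072 tokens
VLLM_ALLOW_LONG_MAX_MODEL_LEN=1 vllm serve hhzm/qwen3-14b-meow \
  --reasoning-parser qwen3 \
  --enable-auto-tool-choice --tool-call-parser hermes \
  --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' \
  --max_model_len 131072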

Expected to be compatible with older Volta- and Turing-generation GPUs, since the model was trained in FP16 with FlashAttention-2 disabled.

Model size: 14.8B params (Safetensors, F16)

Model tree for hhzm/qwen3-14b-meow

Base model: Qwen/Qwen3-14B (this model is a finetune of it)
Quantizations: 1 model

Dataset used to train hhzm/qwen3-14b-meow