# GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.2-mlx

This quantized low-bit model was converted to MLX format from [GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.2](https://huggingface.co/GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.2). Refer to the original model card for more details on the model.

## Use with mlx

Install the GreenBitAI MLX toolkit:

```bash
pip install gbx-lm
```

```python
from gbx_lm import load, generate

# Load the quantized model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.2-mlx")

# Generate a completion for a simple prompt
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
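
Since this is a chat-tuned model, raw prompts like `"hello"` will usually underperform Qwen's chat format. A minimal sketch of wrapping the input in the chat template before generation, assuming the tokenizer returned by `load` exposes the standard Hugging Face `apply_chat_template` method:

```python
from gbx_lm import load, generate

model, tokenizer = load("GreenBitAI/Qwen-1.5-7B-Chat-layer-mix-bpw-2.2-mlx")

# Assumption: the tokenizer delegates to a standard Hugging Face tokenizer,
# so apply_chat_template renders Qwen's chat format around the message.
messages = [{"role": "user", "content": "Give me a one-line summary of MLX."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
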
Model details:

- Format: Safetensors
- Model size: 1.88B params
- Tensor types: FP16, I16, U32