Overview

DeepSeek developed and released DeepSeek-R1-Distill-Qwen-1.5B, a 1.5B-parameter model that distills the reasoning capabilities of DeepSeek-R1 into a Qwen 1.5B base model. It is fine-tuned for high-quality text generation and optimized for dialogue and information-seeking tasks, balancing efficiency and accuracy in a far smaller footprint than the full DeepSeek-R1 model.

The model is designed for applications in customer support, conversational AI, and research, prioritizing both helpfulness and safety.

Variants

No  Variant                              Cortex CLI command
1   deepseek-r1-distill-qwen-1.5b:1.5b   cortex run deepseek-r1-distill-qwen-1.5b:1.5b

Use it with Jan (UI)

  1. Install Jan using Quickstart
  2. In the Jan Model Hub, search for the model ID:
    cortexso/deepseek-r1-distill-qwen-1.5b
    

Use it with Cortex (CLI)

  1. Install Cortex using Quickstart
  2. Run the model with the command:
    cortex run deepseek-r1-distill-qwen-1.5b
    

Credits

GGUF

Model size: 1.78B params
Architecture: qwen2
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
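The quantization levels trade file size for fidelity: fewer bits per weight means a smaller download but more approximation error. A rough rule of thumb is size ≈ parameter count × bits per weight ÷ 8 bytes; this is only an estimate, since real GGUF files keep some tensors at higher precision and add metadata overhead. A quick sketch for this 1.78B-parameter model:

```python
# Rough GGUF file-size estimate: params * bits_per_weight / 8 bytes.
# This is an approximation -- actual GGUF files store some tensors
# (e.g. embeddings) at higher precision and include metadata overhead.

PARAMS = 1.78e9  # parameter count reported for this model


def approx_size_gb(bits_per_weight: float, params: float = PARAMS) -> float:
    """Approximate model file size in gigabytes for a given bit width."""
    return params * bits_per_weight / 8 / 1e9


for bits in (2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_size_gb(bits):.2f} GB")
```

By this estimate the 4-bit variant is roughly 0.9 GB and the 8-bit variant roughly 1.8 GB, which is a useful guide when choosing a quantization for constrained hardware.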

