---
license: mit
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
---

# Antigma/DeepSeek-R1-Distill-Qwen-1.5B-GGUF
This model was converted to GGUF format from [`deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) for more details on the model.
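
## Example usage

The GGUF file can be loaded directly with llama.cpp or its Python bindings. Below is a minimal sketch using `llama-cpp-python`; the quantization filename pattern (`*q4_k_m.gguf`) is an assumption, since this card does not list the files published in the repo — substitute the GGUF file you actually want to use.

```python
# Minimal sketch with llama-cpp-python (requires huggingface-hub for from_pretrained).
# The filename pattern below is an assumption; adjust it to a GGUF file
# that actually exists in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Antigma/DeepSeek-R1-Distill-Qwen-1.5B-GGUF",
    filename="*q4_k_m.gguf",  # hypothetical quantization; pick your file
    n_ctx=2048,               # context window; raise for longer prompts
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF in one sentence."}]
)
print(output["choices"][0]["message"]["content"])
```

Equivalently, the file can be run with the `llama-cli` or `llama-server` binaries from llama.cpp by pointing them at the downloaded GGUF file.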