AlejandroOlmedo committed (verified) · Commit 3b78183 · Parent: 9a793e3

Update README.md

Files changed (1): README.md (+11 -3)
README.md CHANGED
@@ -13,9 +13,17 @@ tags:
 - mlx
 ---
 
-# Alejandroolmedo/DeepScaleR-1.5B-Preview-Q4-mlx
+# About:
 
-The Model [Alejandroolmedo/DeepScaleR-1.5B-Preview-Q4-mlx](https://huggingface.co/Alejandroolmedo/DeepScaleR-1.5B-Preview-Q4-mlx) was converted to MLX format from [agentica-org/DeepScaleR-1.5B-Preview](https://huggingface.co/agentica-org/DeepScaleR-1.5B-Preview) using mlx-lm version **0.20.5**.
+**A fine-tuned version of DeepSeek-R1-Distill-Qwen-1.5B that surpasses the performance of OpenAI's o1-preview with just 1.5B parameters on popular math evaluations.**
+
+*Special thanks to Agentica for fine-tuning this version of DeepSeek-R1-Distill-Qwen-1.5B. More information about it can be found here: [https://huggingface.co/agentica-org/DeepScaleR-1.5B-Preview](https://huggingface.co/agentica-org/DeepScaleR-1.5B-Preview).*
+
+I simply converted it to MLX format with 4-bit quantization for better performance on Apple Silicon Macs (M1, M2, M3, and M4 chips).
+
+# Alejandroolmedo/DeepScaleR-1.5B-Preview-4bit-mlx
+
+The Model [Alejandroolmedo/DeepScaleR-1.5B-Preview-4bit-mlx](https://huggingface.co/Alejandroolmedo/DeepScaleR-1.5B-Preview-4bit-mlx) was converted to MLX format from [agentica-org/DeepScaleR-1.5B-Preview](https://huggingface.co/agentica-org/DeepScaleR-1.5B-Preview) using mlx-lm version **0.20.5**.
 
 ## Use with mlx
 
 
@@ -26,7 +34,7 @@ pip install mlx-lm
 ```python
 from mlx_lm import load, generate
 
-model, tokenizer = load("Alejandroolmedo/DeepScaleR-1.5B-Preview-Q4-mlx")
+model, tokenizer = load("Alejandroolmedo/DeepScaleR-1.5B-Preview-4bit-mlx")
 
 prompt="hello"
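The 4-bit conversion described in the new About section is the step mlx-lm's `convert` utility performs. A minimal sketch, assuming the Python `convert` API shipped with mlx-lm 0.20.x; the local `mlx_path` directory name is illustrative, and `python -m mlx_lm.convert` exposes the same options on the command line:

```python
from mlx_lm import convert

# Download agentica-org/DeepScaleR-1.5B-Preview, quantize the weights to
# 4 bits, and write the MLX-format model to a local directory.
convert(
    "agentica-org/DeepScaleR-1.5B-Preview",
    mlx_path="DeepScaleR-1.5B-Preview-4bit-mlx",  # illustrative output path
    quantize=True,
    q_bits=4,
)
```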
 
 
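The diff truncates the example after `prompt="hello"`. For reference, the stock snippet that mlx-lm generates in these READMEs typically continues as below; the chat-template handling is the usual template code, assumed rather than taken from this commit:

```python
from mlx_lm import load, generate

# Load the 4-bit MLX weights from the Hub
model, tokenizer = load("Alejandroolmedo/DeepScaleR-1.5B-Preview-4bit-mlx")

prompt = "hello"

# If the tokenizer defines a chat template, wrap the prompt in it so the
# model sees the conversation format it was fine-tuned on.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

# Generate a completion, streaming tokens to stdout as they are produced.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```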