https://huggingface.co/agentica-org/DeepScaleR-1.5B-Preview (Base Model)

# Other Types/Sizes:

| Link | Type | Size | Notes |
|------|------|------|-------|
| [MLX](https://huggingface.co/AlejandroOlmedo/DeepScaleR-1.5B-Preview-8bit-mlx) | 8-bit | 1.90 GB | **Best Quality** |
| [MLX](https://huggingface.co/AlejandroOlmedo/DeepScaleR-1.5B-Preview-6bit-mlx) | 6-bit | 1.46 GB | Better Quality |
| [MLX](https://huggingface.co/AlejandroOlmedo/DeepScaleR-1.5B-Preview-4bit-mlx) | 4-bit | 1.01 GB | Good Quality |
I simply converted it to MLX format with 4-bit quantization for better performance on Apple Silicon Macs (M1, M2, M3, and M4 chips).
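For reference, a conversion along these lines can be reproduced with the mlx-lm conversion script. The flags shown here (`--hf-path`, `--mlx-path`, `-q`, `--q-bits`) are based on recent mlx-lm releases and may differ slightly between versions; the output directory name is only an example.

```shell
# Convert the base model to MLX format with 4-bit quantization.
# Requires an Apple Silicon Mac with mlx-lm installed (pip install mlx-lm).
# Downloads the full base model from the Hugging Face Hub.
python -m mlx_lm.convert \
    --hf-path agentica-org/DeepScaleR-1.5B-Preview \
    --mlx-path DeepScaleR-1.5B-Preview-4bit-mlx \
    -q --q-bits 4
```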
# AlejandroOlmedo/DeepScaleR-1.5B-Preview-4bit-mlx

The model [AlejandroOlmedo/DeepScaleR-1.5B-Preview-4bit-mlx](https://huggingface.co/AlejandroOlmedo/DeepScaleR-1.5B-Preview-4bit-mlx) was converted to MLX format from [agentica-org/DeepScaleR-1.5B-Preview](https://huggingface.co/agentica-org/DeepScaleR-1.5B-Preview) using mlx-lm version **0.20.5**.

## Use with mlx
```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download the model and tokenizer from the Hugging Face Hub
model, tokenizer = load("AlejandroOlmedo/DeepScaleR-1.5B-Preview-4bit-mlx")

prompt = "hello"

# Apply the chat template when the tokenizer provides one
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```