Update README.md
README.md CHANGED
@@ -35,9 +35,9 @@ model-index:
   results: []
 ---
 
-# maxrubin629/DeepHermes-3-Llama-3-8B-Preview-
+# maxrubin629/DeepHermes-3-Llama-3-8B-Preview-6bit
 
-The Model [maxrubin629/DeepHermes-3-Llama-3-8B-Preview-
+The Model [maxrubin629/DeepHermes-3-Llama-3-8B-Preview-6bit](https://huggingface.co/maxrubin629/DeepHermes-3-Llama-3-8B-Preview-6bit) was converted to MLX format from [NousResearch/DeepHermes-3-Llama-3-8B-Preview](https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-8B-Preview) using mlx-lm version **0.20.5**.
 
 ## Use with mlx
 
@@ -48,7 +48,7 @@ pip install mlx-lm
 ```python
 from mlx_lm import load, generate
 
-model, tokenizer = load("maxrubin629/DeepHermes-3-Llama-3-8B-Preview-
+model, tokenizer = load("maxrubin629/DeepHermes-3-Llama-3-8B-Preview-6bit")
 
 prompt="hello"
 
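The diff cuts the usage snippet off after `prompt="hello"`. For context, a minimal sketch of how the standard mlx-lm README snippet typically continues is shown below; it assumes the stock `load`/`generate` API and chat-template handling from mlx-lm around version 0.20.x, not the exact remainder of this README.

```python
# Sketch of the usual mlx-lm flow (assumed continuation, not taken from this diff).
from mlx_lm import load, generate

# Load the 6-bit MLX conversion from the Hugging Face Hub.
model, tokenizer = load("maxrubin629/DeepHermes-3-Llama-3-8B-Preview-6bit")

prompt = "hello"

# If the tokenizer ships a chat template, wrap the raw prompt in it so the
# model sees the conversation format it was trained on.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

# Generate a completion; verbose=True streams tokens and prints timing stats.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```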