Update README.md
README.md CHANGED
@@ -29,7 +29,7 @@ The objective, as with the other Magnum models, is to emulate the prose style an
 [Here's the rsLoRA adapter](https://huggingface.co/Doctor-Shotgun/Magnum-v4-SE-70B-LoRA) for those merge-makers out there to play with.

-Thank you to [bartowski](https://huggingface.co/bartowski) for the [GGUF quants](https://huggingface.co/bartowski/L3.3-70B-Magnum-v4-SE-GGUF).
+Thank you to [bartowski](https://huggingface.co/bartowski) for the [imatrix GGUF quants](https://huggingface.co/bartowski/L3.3-70B-Magnum-v4-SE-GGUF) and [mradermacher](https://huggingface.co/mradermacher) for the [static GGUF quants](https://huggingface.co/mradermacher/L3.3-70B-Magnum-v4-SE-GGUF).

 Thank you to [alpindale](https://huggingface.co/alpindale) for the [fp8 dynamic quant](https://huggingface.co/alpindale/L3.3-70B-Magnum-v4-SE-FP8).