auto-patch README.md
README.md CHANGED

@@ -1,5 +1,5 @@
 ---
-base_model: rombodawg/
+base_model: rombodawg/Llama-3-8B-Instruct-Coder
 language:
 - en
 library_name: transformers
@@ -20,7 +20,7 @@ tags:
 <!-- ### convert_type: hf -->
 <!-- ### vocab_type: -->
 <!-- ### tags: -->
-static quants of https://huggingface.co/rombodawg/
+static quants of https://huggingface.co/rombodawg/Llama-3-8B-Instruct-Coder
 
 <!-- provided-files -->
 weighted/imatrix quants are available at https://huggingface.co/mradermacher/Codellama-3-8B-Finetuned-Instruct-i1-GGUF
@@ -67,6 +67,6 @@ questions you might have and/or if you want some other model quantized.
 
 I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
 me use its servers and providing upgrades to my workstation to enable
-this work in my free time.
+this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
 
 <!-- end -->