Commit cbf4d7a
1 parent: 5019332
Update README.md
README.md CHANGED
@@ -27,6 +27,8 @@ It uses a mixture of the following datasets:
 - 13B (unquantized): [https://huggingface.co/ausboss/llama-13b-supercot](https://huggingface.co/ausboss/llama-13b-supercot)
 - 13B 4-bit 128g CUDA: [https://huggingface.co/ausboss/llama-13b-supercot-4bit-128g](https://huggingface.co/ausboss/llama-13b-supercot-4bit-128g)
 
+(Thanks to all the awesome anons with supercomputers)
+
 ### Compatibility
 This LoRA is compatible with any 7B, 13B or 30B 4-bit quantized LLaMa model, including ggml quantized converted bins
 
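For reference, a minimal sketch of loading the merged, unquantized 13B checkpoint linked in the hunk above with the Hugging Face `transformers` library. This is an illustrative assumption, not part of the commit: the prompt string and generation settings are placeholders, and `device_map="auto"` additionally requires the `accelerate` package.

```python
# Sketch: load the merged 13B model listed in the README diff above.
# Assumes `transformers`, `torch`, `accelerate`, and `sentencepiece` are installed
# and that enough GPU/CPU memory is available for an unquantized 13B model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ausboss/llama-13b-supercot"  # unquantized merge from the list above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory use
    device_map="auto",          # spread layers across available devices
)

prompt = "Explain what a LoRA is."  # placeholder prompt, not from this repo
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```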