Commit d72299b (parent: 230c204): Update README.md

README.md
@@ -18,10 +18,14 @@ It uses a mixture of the following datasets:
 - Alpaca GPT4
 
 ### Merged Models
+#### 30B
 - GGML 30B 4-bit: [https://huggingface.co/gozfarb/llama-30b-supercot-ggml](https://huggingface.co/gozfarb/llama-30b-supercot-ggml)
 - 30B (unquantized): [https://huggingface.co/ausboss/llama-30b-supercot](https://huggingface.co/ausboss/llama-30b-supercot)
 - 30B 4-bit 128g CUDA: [https://huggingface.co/tsumeone/llama-30b-supercot-4bit-128g-cuda](https://huggingface.co/tsumeone/llama-30b-supercot-4bit-128g-cuda)
 
+#### 13B
+- 13B (unquantized): [https://huggingface.co/ausboss/llama-13b-supercot](https://huggingface.co/ausboss/llama-13b-supercot)
+
 ### Compatibility
 This LoRA is compatible with any 7B, 13B or 30B 4-bit quantized LLaMa model, including ggml quantized converted bins
 