The provided model is a multi-layer folded model: multiple layers from the Llama 3 8B Instruct base were duplicated with mergekit to grow the model to 21B parameters. Rather than a plain passthrough merge, task arithmetic was used, and further fine-tuning was then performed to rebaseline the model's weights and inference behavior.
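The merge described above can be sketched as a mergekit configuration. This is an illustrative shape only, assuming standard mergekit syntax; the actual models, weights, dtype, and layer ranges used in this recipe are not published here.

```yaml
# Hypothetical sketch of a mergekit task-arithmetic config.
# The model identifiers, weight, and dtype below are assumptions,
# not the author's actual recipe.
merge_method: task_arithmetic
base_model: meta-llama/Meta-Llama-3-8B
models:
  - model: meta-llama/Meta-Llama-3-8B-Instruct
    parameters:
      weight: 1.0
dtype: bfloat16
```

A config like this would be run with `mergekit-yaml config.yml ./output-dir`.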
- Q3_K_S GGUF: https://huggingface.co/sydonayrex/Blackjack-Llama3-21B-Q3_K_S-GGUF
- Q4_K_M GGUF: https://huggingface.co/sydonayrex/Blackjack-Llama3-21B-Q4_K_M-GGUF
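These quantized builds can be run locally with llama.cpp. A hedged example follows; the exact `.gguf` filename inside the repo is an assumption, so check the repo's file listing first.

```shell
# Download the Q4_K_M quant from the repo linked above
# (the .gguf filename here is an assumption -- verify it in the repo).
huggingface-cli download sydonayrex/Blackjack-Llama3-21B-Q4_K_M-GGUF \
  blackjack-llama3-21b-q4_k_m.gguf --local-dir .

# Run a short generation with llama.cpp's CLI
./llama-cli -m blackjack-llama3-21b-q4_k_m.gguf -p "Hello" -n 128
```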
# Uploaded model
- **Developed by:** sydonayrex