roleplaiapp committed
Commit 9af43dc · verified · 1 Parent(s): 54f855d

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +26 -0
README.md ADDED
@@ -0,0 +1,26 @@
+ ---
+ library_name: transformers
+ pipeline_tag: text-generation
+ tags:
+ - f16
+ - gguf
+ - llama-cpp
+ - plato
+ - text-generation
+ ---
+
+ # roleplaiapp/plato-9b-f16-GGUF
+
+ **Repo:** `roleplaiapp/plato-9b-f16-GGUF`
+ **Original Model:** `plato-9b`
+ **Quantized File:** `plato-9b.f16.gguf`
+ **Quantization:** `GGUF`
+ **Quantization Method:** `f16`
+
+ ## Overview
+ This is a GGUF f16 quantized version of plato-9b.
+ ## Quantization By
+ I often have idle GPUs while building and testing for the RP app, so I put them to use quantizing models.
+ I hope the community finds these quantizations useful.
+
+ Andrew Webby @ [RolePlai](https://roleplai.app/).
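
Since the card above only names the GGUF file, here is a minimal llama-cpp-python sketch for loading it locally; the local path, context size, and prompt are illustrative assumptions, not details from the original card.

```python
# Minimal sketch, assuming llama-cpp-python is installed and that
# plato-9b.f16.gguf has already been downloaded from the
# roleplaiapp/plato-9b-f16-GGUF repo into the working directory.
from llama_cpp import Llama

# model_path and n_ctx are illustrative assumptions, not values from the card.
llm = Llama(model_path="plato-9b.f16.gguf", n_ctx=4096)

# Run a short text-generation request against the f16 model.
output = llm("Write a one-sentence greeting.", max_tokens=64)
print(output["choices"][0]["text"])
```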