TheBloke committed
Commit 2bea29a · 1 Parent(s): c831c0f

Upload README.md

Files changed (1)
  1. README.md +50 -50
README.md CHANGED
@@ -2,7 +2,7 @@
  inference: false
  language:
  - en
- license: other
+ license: llama2
  model_creator: Mikael110
  model_link: https://huggingface.co/Mikael110/llama-2-70b-guanaco-qlora
  model_name: Llama2 70B Guanaco QLoRA
@@ -14,17 +14,20 @@ tags:
  ---

  <!-- header start -->
- <div style="width: 100%;">
- <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ <!-- 200823 -->
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </div>
  <div style="display: flex; justify-content: space-between; width: 100%;">
  <div style="display: flex; flex-direction: column; align-items: flex-start;">
- <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
  </div>
  <div style="display: flex; flex-direction: column; align-items: flex-end;">
- <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
  </div>
  </div>
+ <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+ <hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
  <!-- header end -->

  # Llama2 70B Guanaco QLoRA - GGML
@@ -35,17 +38,27 @@ tags:

  This repo contains GGML format model files for [Mikael110's Llama2 70b Guanaco QLoRA](https://huggingface.co/Mikael110/llama-2-70b-guanaco-qlora).

- CUDA GPU acceleration is now available for Llama 2 70B GGML files. Metal acceleration (macOS) is not yet available. I haven't tested AMD acceleration - let me know if it works. The following clients/libraries are known to work with these files, including with CUDA GPU acceleration:
+ ### Important note regarding GGML files.
+
+ The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third-party clients and libraries are expected to still support it for a time, but many may also drop support.
+
+ Please use the GGUF models instead.
+
+ ### About GGML
+
+ GPU acceleration is now available for Llama 2 70B GGML files, with both CUDA (NVidia) and Metal (macOS). The following clients/libraries are known to work with these files, including with GPU acceleration:
  * [llama.cpp](https://github.com/ggerganov/llama.cpp), commit `e76d630` and later.
- * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI.
+ * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI.
- * [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration on both Windows and macOS (note that there is currently no macOS GPU acceleration for Llama 70B models.)
  * [KoboldCpp](https://github.com/LostRuins/koboldcpp), version 1.37 and later. A powerful GGML web UI, especially good for storytelling.
+ * [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration for both Windows and macOS. Use 0.1.11 or later for macOS GPU acceleration with 70B models.
  * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), version 0.1.77 and later. A Python library with LangChain support, and OpenAI-compatible API server.
+ * [ctransformers](https://github.com/marella/ctransformers), version 0.2.15 and later. A Python library with LangChain support, and OpenAI-compatible API server.

  ## Repositories available

  * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GPTQ)
+ * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGUF)
- * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML)
+ * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML)
  * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-fp16)
  * [Mikael110's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Mikael110/llama-2-70b-guanaco-qlora)
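For a quick test from Python, here is a minimal loading sketch using llama-cpp-python, listed above (assumptions: version 0.1.77+ installed with GPU support, the q4_K_M file from the table further below downloaded locally, and illustrative parameter values):

```python
# Minimal llama-cpp-python loading sketch; version 0.1.77+ assumed,
# and all parameter values here are illustrative, not prescriptive.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-70b-guanaco-qlora.ggmlv3.q4_K_M.bin",
    n_ctx=4096,        # context length
    n_gpu_layers=40,   # offload as many layers as your VRAM allows
    n_gqa=8,           # grouped-query attention factor, required for 70B models
    n_threads=10,      # set to your number of physical CPU cores
)
```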
 
@@ -54,12 +67,17 @@ CUDA GPU acceleration is now available for Llama 2 70B GGML files. Metal acceler
  ```
  ### Human: {prompt}
  ### Assistant:
+
  ```

  <!-- compatibility_ggml start -->
  ## Compatibility

- ### Requires llama.cpp [commit `e76d630`](https://github.com/ggerganov/llama.cpp/commit/e76d630df17e235e6b9ef416c45996765d2e36fb) or later.
+ ### Works with llama.cpp [commit `e76d630`](https://github.com/ggerganov/llama.cpp/commit/e76d630df17e235e6b9ef416c45996765d2e36fb) until August 21st, 2023
+
+ Will not work with `llama.cpp` after commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa).
+
+ For compatibility with latest llama.cpp, please use GGUF files instead.

  Or one of the other tools and libraries listed above.
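As a small worked example, the template above can be filled in Python before being passed to any of the clients listed earlier (a sketch; the question text is just a placeholder):

```python
# Fill in the Guanaco prompt template shown above.
prompt_template = "### Human: {prompt}\n### Assistant:"
full_prompt = prompt_template.format(prompt="Write a story about llamas")
print(full_prompt)
```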
 
@@ -88,68 +106,48 @@ Refer to the Provided Files table below to see what files use which methods, and
  | Name | Quant method | Bits | Size | Max RAM required | Use case |
  | ---- | ---- | ---- | ---- | ---- | ----- |
  | [llama-2-70b-guanaco-qlora.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q2_K.bin) | q2_K | 2 | 28.59 GB| 31.09 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
- | [llama-2-70b-guanaco-qlora.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 36.15 GB| 38.65 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
- | [llama-2-70b-guanaco-qlora.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 33.04 GB| 35.54 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
  | [llama-2-70b-guanaco-qlora.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 29.75 GB| 32.25 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
+ | [llama-2-70b-guanaco-qlora.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 33.04 GB| 35.54 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
+ | [llama-2-70b-guanaco-qlora.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 36.15 GB| 38.65 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
  | [llama-2-70b-guanaco-qlora.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q4_0.bin) | q4_0 | 4 | 38.87 GB| 41.37 GB | Original quant method, 4-bit. |
- | [llama-2-70b-guanaco-qlora.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q4_1.bin) | q4_1 | 4 | 43.17 GB| 45.67 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
- | [llama-2-70b-guanaco-qlora.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 41.38 GB| 43.88 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
  | [llama-2-70b-guanaco-qlora.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 38.87 GB| 41.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
+ | [llama-2-70b-guanaco-qlora.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 41.38 GB| 43.88 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
+ | [llama-2-70b-guanaco-qlora.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q4_1.bin) | q4_1 | 4 | 43.17 GB| 45.67 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
  | [llama-2-70b-guanaco-qlora.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q5_0.bin) | q5_0 | 5 | 47.46 GB| 49.96 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
- | [llama-2-70b-guanaco-qlora.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 48.75 GB| 51.25 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
  | [llama-2-70b-guanaco-qlora.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 47.46 GB| 49.96 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
+ | [llama-2-70b-guanaco-qlora.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML/blob/main/llama-2-70b-guanaco-qlora.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 48.75 GB| 51.25 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
- | llama-2-70b-guanaco-qlora.ggmlv3.q5_1.bin | q5_1 | 5 | 51.76 GB | 54.26 GB | Original quant method, 5-bit. Higher accuracy, slower inference than q5_0. |
- | llama-2-70b-guanaco-qlora.ggmlv3.q6_K.bin | q6_K | 6 | 56.59 GB | 59.09 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
- | llama-2-70b-guanaco-qlora.ggmlv3.q8_0.bin | q8_0 | 8 | 73.23 GB | 75.73 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

+ **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
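If it helps when choosing among the files above, the selection logic the table implies can be sketched as follows (the figures mirror the "Max RAM required" column; the helper itself is illustrative, not something shipped in this repo):

```python
# Pick the largest GGML quant whose "Max RAM required" figure fits a budget.
# Sizes (GB) are copied from the table above; the helper is illustrative only.
MAX_RAM_GB = {
    "q2_K": 31.09, "q3_K_S": 32.25, "q3_K_M": 35.54, "q3_K_L": 38.65,
    "q4_0": 41.37, "q4_K_S": 41.37, "q4_K_M": 43.88, "q4_1": 45.67,
    "q5_0": 49.96, "q5_K_S": 49.96, "q5_K_M": 51.25,
}

def largest_fitting_quant(budget_gb):
    fits = {name: ram for name, ram in MAX_RAM_GB.items() if ram <= budget_gb}
    return max(fits, key=fits.get) if fits else None

print(largest_fitting_quant(48.0))  # -> q4_1 (45.67 GB fits; q5_0 does not)
```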
- ### q5_1, q6_K and q8_0 files require expansion from archive
-
- **Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the q6_K and q8_0 files as multi-part ZIP files. They are not compressed, they are just for storing a .bin file in two parts.
-
- <details>
- <summary>Click for instructions regarding q5_1, q6_K and q8_0 files</summary>
-
- ### q5_1
- Please download:
- * `llama-2-70b-guanaco-qlora.ggmlv3.q5_1.zip`
- * `llama-2-70b-guanaco-qlora.ggmlv3.q5_1.z01`
-
- ### q6_K
- Please download:
- * `llama-2-70b-guanaco-qlora.ggmlv3.q6_K.zip`
- * `llama-2-70b-guanaco-qlora.ggmlv3.q6_K.z01`
-
- ### q8_0
- Please download:
- * `llama-2-70b-guanaco-qlora.ggmlv3.q8_0.zip`
- * `llama-2-70b-guanaco-qlora.ggmlv3.q8_0.z01`
-
- Then extract the .zip archive. This will expand both parts automatically. On Linux I found I had to use `7zip` - the basic `unzip` tool did not work. Example:
- ```
- sudo apt update -y && sudo apt install 7zip
- 7zz x llama-2-70b-guanaco-qlora.ggmlv3.q6_K.zip
- ```
- </details>
-
- ## How to run in `llama.cpp`
+ ## How to run in `llama.cpp`
+
+ Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.
+
+ For compatibility with latest llama.cpp, please use GGUF files instead.
  I use the following command line; adjust for your tastes and needs:

  ```
- ./main -t 10 -ngl 40 -gqa 8 -m llama-2-70b-guanaco-qlora.ggmlv3.q4_K_M.bin --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Human: Write a story about llamas\n### Assistant:"
+ ./main -t 10 -ngl 40 -gqa 8 -m llama-2-70b-guanaco-qlora.ggmlv3.q4_K_M.bin --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Human: {prompt}\n### Assistant:"
  ```
- Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
-
- Change -ngl 40 to the number of GPU layers you have VRAM for. Use -ngl 100 to offload all layers to VRAM, if you have a 48GB card, or 2 x 24GB, or similar. Otherwise you can partially offload as many as you have VRAM for, on one or more GPUs.
+ Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`. If you are fully offloading the model to GPU, use `-t 1`.
+
+ Change `-ngl 40` to the number of GPU layers you have VRAM for. Use `-ngl 100` to offload all layers to VRAM - if you have a 48GB card, or 2 x 24GB, or similar. Otherwise you can partially offload as many as you have VRAM for, on one or more GPUs.

+ If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

  Remember the `-gqa 8` argument, required for Llama 70B models.

- If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
+ Change `-c 4096` to the desired sequence length for this model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.
+
+ For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).
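If you prefer scripting to the CLI, the flags above map directly onto llama-cpp-python generation parameters (a sketch under the same assumptions as the loading example earlier; `llm` is the `Llama` instance from that sketch):

```python
# Mirror the CLI flags above: --temp 0.7 and --repeat_penalty 1.1.
# Assumes `llm` is the Llama instance from the earlier loading sketch.
output = llm(
    "### Human: Write a story about llamas\n### Assistant:",
    max_tokens=512,        # a finite cap, in place of the CLI's `-n -1`
    temperature=0.7,       # --temp 0.7
    repeat_penalty=1.1,    # --repeat_penalty 1.1
    stop=["### Human:"],   # stop before the model begins a new turn
)
print(output["choices"][0]["text"])
```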
  ## How to run in `text-generation-webui`

  Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

  <!-- footer start -->
+ <!-- 200823 -->
  ## Discord

  For further support, and discussions on these models and AI in general, join us at:
@@ -169,13 +167,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI

- **Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
+ **Special thanks to**: Aemon Algiz.

- **Patreon special mentions**: Slarti, Chadd, John Detwiler, Pieter, zynix, K, Mano Prime, ReadyPlayerEmma, Ai Maven, Leonard Tan, Edmond Seymore, Joseph William Delisle, Luke @flexchar, Fred von Graf, Viktor Bowallius, Rishabh Srivastava, Nikolai Manek, Matthew Berman, Johann-Peter Hartmann, ya boyyy, Greatston Gnanesh, Femi Adebogun, Talal Aujan, Jonathan Leane, terasurfer, David Flickinger, William Sang, Ajan Kanaga, Vadim, Artur Olbinski, Raven Klaugh, Michael Levine, Oscar Rangel, Randy H, Cory Kujawski, RoA, Dave, Alex, Alexandros Triantafyllidis, Fen Risland, Eugene Pentland, vamX, Elle, Nathan LeClaire, Khalefa Al-Ahmad, Rainer Wilmers, subjectnull, Junyu Yang, Daniel P. Andersen, SuperWojo, LangChain4j, Mandus, Kalila, Illia Dulskyi, Trenton Dambrowitz, Asp the Wyvern, Derek Yates, Jeffrey Morgan, Deep Realms, Imad Khwaja, Pyrater, Preetika Verma, biorpg, Gabriel Tamborski, Stephen Murray, Spiking Neurons AB, Iucharbius, Chris Smitley, Willem Michiel, Luke Pendergrass, Sebastain Graf, senxiiz, Will Dee, Space Cruiser, Karl Bernard, Clay Pascal, Lone Striker, transmissions 11, webtim, WelcomeToTheClub, Sam, theTransient, Pierre Kircher, chris gileta, John Villwock, Sean Connelly, Willian Hasse
+ **Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser

  Thank you to all my generous patrons and donaters!

+ And thank you again to a16z for their generous grant.
+
  <!-- footer end -->

  # Original model card: Mikael110's Llama2 70b Guanaco QLoRA
 