Update README.md
README.md CHANGED

@@ -125,13 +125,13 @@ Make sure you are using `llama.cpp` from commit [6381d4e110bd0ec02843a60bbeb8b6f
 For compatibility with older versions of llama.cpp, or for use with third-party clients and libraries, please use GGML files instead.
 
 ```
-./main -t 10 -ngl 32 -m codellama-7b-python.q4_K_M.gguf --color -c
+./main -t 10 -ngl 32 -m codellama-7b-python.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "prompt TBC"
 ```
 
 Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
 
 Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
 
-Change `-c 4096` to the desired sequence length for this model. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters should be set by llama.cpp automatically.
+Change `-c 4096` to the desired sequence length for this model. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters should be set by llama.cpp automatically.
 
 If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
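For reference, a minimal sketch of the chat-style invocation the README describes: it keeps the flags from the command added in this commit, drops `-p <PROMPT>` (whose value is left as "prompt TBC" in the diff), and substitutes `-i -ins` as the README instructs. This is an illustration assembled from the text above, not a command taken from the commit itself.

```bash
# Sketch of a chat-style run, per the README note:
# `-p <PROMPT>` is replaced with `-i -ins` (interactive instruct mode);
# all other flags are copied from the command added in this commit.
./main -t 10 -ngl 32 -m codellama-7b-python.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins
```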