TheBloke committed
Commit bad2278 · 1 Parent(s): 1f853a1

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -125,13 +125,13 @@ Make sure you are using `llama.cpp` from commit [6381d4e110bd0ec02843a60bbeb8b6f
  For compatibility with older versions of llama.cpp, or for use with third-party clients and libraries, please use GGML files instead.
 
  ```
- ./main -t 10 -ngl 32 -m codellama-7b-python.q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
+ ./main -t 10 -ngl 32 -m codellama-7b-python.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "prompt TBC"
  ```
  Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
 
  Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
 
- Change `-c 4096` to the desired sequence length for this model. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters should be set by llama.cpp automatically. If they are not, or if you need to change them manually, you can use `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.
+ Change `-c 4096` to the desired sequence length for this model. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters should be set by llama.cpp automatically.
 
  If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
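Putting the flags discussed in the hunk together: a minimal sketch of an adjusted invocation, assuming an 8-core CPU, a GPU that can hold all 32 offloaded layers, and chat-style use with `-i -ins` in place of `-p`, as the README text suggests. The model filename is the one from the diff; the specific `-t` and `-ngl` values are assumptions to tune to your hardware.

```bash
# Sketch only: -t 8 assumes 8 physical cores; drop -ngl 32 if you
# have no GPU acceleration. -c 4096 matches this model's sequence
# length, per the README text.
./main -t 8 -ngl 32 -m codellama-7b-python.q4_K_M.gguf --color \
  -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 \
  -i -ins   # interactive instruction mode instead of a one-shot -p prompt
```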
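The sentence removed by this commit covered manual RoPE overrides for builds of llama.cpp that do not set the scaling parameters automatically. As a hedged sketch of what that looked like, reusing the flag values from the dropped text and assuming a 4096-token base context:

```bash
# From the removed text: doubled context via linear RoPE scaling.
# Only relevant if an older llama.cpp build does not set these
# parameters itself (8192 assumes a 4096-token base; scale 0.5).
./main -m codellama-7b-python.q4_K_M.gguf -c 8192 \
  --rope-freq-base 10000 --rope-freq-scale 0.5 \
  --temp 0.7 --repeat_penalty 1.1 -n -1 -p "prompt TBC"
```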