Update README.md
GGUF files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).
I use the following command line; adjust it for your tastes and needs:

```
./main -t 2 -ngl 28 -m gemma-7b-it.q4_K_M.gguf -p '<start_of_turn>user\nWhat is love?\n<end_of_turn>\n<start_of_turn>model\n' --no-penalize-nl -e --color --temp 0.95 -c 1024 -n 512 --repeat_penalty 1.2 --top_p 0.95 --top_k 50
```
Change `-t 2` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
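A quick way to estimate the right `-t` value is a short script. This is a sketch: `os.cpu_count()` reports *logical* CPUs (hardware threads), and the halving below assumes 2 threads per core (SMT/Hyper-Threading enabled), which is the common case but not universal.

```python
import os

# os.cpu_count() returns the number of logical CPUs (hardware threads).
# With SMT/Hyper-Threading, physical cores are typically half that count;
# pass the physical-core estimate to llama.cpp's -t flag.
logical = os.cpu_count() or 1
physical_estimate = max(1, logical // 2)  # assumption: 2 threads per core
print(f"-t {physical_estimate}")
```

On a machine without SMT, use the logical count directly instead of halving it.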
Change `-ngl 28` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`; alternatively, you can use `--interactive-first` to start in interactive mode:

```
./main -t 2 -ngl 28 -m gemma-7b-it.q4_K_M.gguf --in-prefix '<start_of_turn>user\n' --in-suffix '<end_of_turn>\n<start_of_turn>model\n' -i -ins --no-penalize-nl -e --color --temp 0.95 -c 1024 -n 512 --repeat_penalty 1.2 --top_p 0.95 --top_k 50
```
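The `--in-prefix`/`--in-suffix` strings above wrap each user message in Gemma's instruction-tuned turn markers, the same template the `-p` prompt uses. A minimal sketch of that template (the helper name `gemma_chat_prompt` is hypothetical, not part of llama.cpp):

```python
def gemma_chat_prompt(user_message: str) -> str:
    # Wrap a single user message in Gemma's chat turn markers, matching the
    # --in-prefix and --in-suffix strings used in the command line above.
    # The model turn is left open so generation continues from there.
    return (
        "<start_of_turn>user\n"
        f"{user_message}\n"
        "<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(gemma_chat_prompt("What is love?"))
```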
## Compatibility