freefallr committed
Commit 8102ae4 · 1 Parent(s): bca02b0

Update README.md

Files changed (1)
  1. README.md +2 -1
README.md CHANGED
@@ -29,7 +29,8 @@ datasets:
  This model was created by [jphme](https://huggingface.co/jphme) and is a finetune of Meta's [Llama2 13b Chat](https://huggingface.co/meta-llama/Llama-2-13b-chat).
  This repository contains the model [jphme/Llama-2-13b-chat-german](https://huggingface.co/jphme/Llama-2-13b-chat-german) in GGUF format.
 
- ## Conversion Code
+ ## Replication Steps
+ Clone llama.cpp *(Commit: 9e20231)*, compile it, and use the provided `convert.py` script to convert the original model to GGUF with FP16 precision. The converted model is then used for further quantization.
  ```
  # Convert original model to FP16 GGUF format
  python3 llama.cpp/convert.py ./original-models/Llama-2-13b-chat-german --outtype f16 --outfile ./converted_gguf/Llama-2-13b-chat-german-GGUF.fp16.bin
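For context, the full replication flow the new section describes might look like the sketch below. Only the `convert.py` command is taken verbatim from the README; the clone/checkout/build steps, the `requirements.txt` install, and the `Q4_K_M` quantization target are illustrative assumptions not specified in this commit.

```
# Sketch of the replication steps described above (assumptions noted in comments)

# Clone llama.cpp, pin it to the commit mentioned in the README, and build it
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && git checkout 9e20231 && make && cd ..

# Install the Python dependencies that convert.py relies on
pip install -r llama.cpp/requirements.txt

# Convert the original model to FP16 GGUF (command as given in the README)
python3 llama.cpp/convert.py ./original-models/Llama-2-13b-chat-german --outtype f16 --outfile ./converted_gguf/Llama-2-13b-chat-german-GGUF.fp16.bin

# Further quantization with llama.cpp's quantize tool; Q4_K_M is just an example target
./llama.cpp/quantize ./converted_gguf/Llama-2-13b-chat-german-GGUF.fp16.bin ./converted_gguf/Llama-2-13b-chat-german-GGUF.q4_k_m.bin Q4_K_M
```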