waltervix committed
Commit 359a14e · verified · 1 Parent(s): f29ce60

Update README.md

Files changed (1):
  1. README.md +39 -39
README.md CHANGED
@@ -16,42 +16,42 @@ library_name: transformers
  This model was converted to GGUF format from [`Qwen/QwQ-32B-Preview`](https://huggingface.co/Qwen/QwQ-32B-Preview) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/Qwen/QwQ-32B-Preview) for more details on the model.

- ## Use with llama.cpp
- Install llama.cpp through brew (works on Mac and Linux):
-
- ```bash
- brew install llama.cpp
- ```
-
- Invoke the llama.cpp server or the CLI.
-
- ### CLI:
- ```bash
- llama-cli --hf-repo waltervix/QwQ-32B-Preview-Q2_K-GGUF --hf-file qwq-32b-preview-q2_k.gguf -p "The meaning to life and the universe is"
- ```
-
- ### Server:
- ```bash
- llama-server --hf-repo waltervix/QwQ-32B-Preview-Q2_K-GGUF --hf-file qwq-32b-preview-q2_k.gguf -c 2048
- ```
-
- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
-
- Step 1: Clone llama.cpp from GitHub.
- ```bash
- git clone https://github.com/ggerganov/llama.cpp
- ```
-
- Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
- ```bash
- cd llama.cpp && LLAMA_CURL=1 make
- ```
-
- Step 3: Run inference through the main binary.
- ```bash
- ./llama-cli --hf-repo waltervix/QwQ-32B-Preview-Q2_K-GGUF --hf-file qwq-32b-preview-q2_k.gguf -p "The meaning to life and the universe is"
- ```
- or
- ```bash
- ./llama-server --hf-repo waltervix/QwQ-32B-Preview-Q2_K-GGUF --hf-file qwq-32b-preview-q2_k.gguf -c 2048
- ```
 
+
+ ## Run locally with Samantha Interface Assistant
+
+ <!-- header start -->
+ <!-- 200823 -->
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.ibb.co/5WP8Sbh/samantha-ia.png" alt="Samantha_IA" style="width: 70%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <!-- header end -->
+
+ **GitHub project:** https://github.com/controlecidadao/samantha_ia/blob/main/README.md
+
+ <br>
+
+ ## 📺 Video: Intelligence Challenge with Samantha - Microsoft Phi 3.5 vs Google Gemma 2
+
+ **Video:** https://www.youtube.com/watch?v=KgicCGMSygU
+
+ <br>
+
+ ## 👟 Testing a Model in 5 Steps with Samantha
+
+ Samantha needs just a `.gguf` model file to generate text. Follow these steps to perform a simple model test:
+
+ **1)** Open Windows Task Manager by pressing `CTRL + SHIFT + ESC` and check the available memory. Close some programs if necessary to free memory.
+
+ **2)** Visit the [Hugging Face](https://huggingface.co/models?library=gguf&sort=trending&search=gguf) model listing and click on a card to open the corresponding page. Locate the _Files and versions_ tab and choose a `.gguf` file that fits in your available memory.
+
+ **3)** Right-click the model's download link icon and copy its URL.
+
+ **4)** Paste the model URL into Samantha's _Download models for testing_ field.
+
+ **5)** Insert a prompt into the _User prompt_ field and press `Enter`, keeping the `$$$` sign at the end of your prompt. The model will be downloaded and the response generated using the default deterministic settings (a script equivalent of the download step is sketched after this list). You can track this process via Windows Task Manager.
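+
+ If you prefer a script to the copy-and-paste flow, the download step could be approximated as below. This is a sketch, not code from the Samantha project; the model URL (the Q2_K file from this repo) and the `~/Downloads` path are assumptions, so adjust them for your system:
+
+ ```bash
+ # Hypothetical equivalent of Samantha's "Download models for testing" step:
+ # fetch the copied URL and save it as MODEL_FOR_TESTING.gguf in Downloads,
+ # overwriting any previously downloaded test model.
+ curl -L "https://huggingface.co/waltervix/QwQ-32B-Preview-Q2_K-GGUF/resolve/main/qwq-32b-preview-q2_k.gguf" \
+      -o ~/Downloads/MODEL_FOR_TESTING.gguf
+ ```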
+
+ Every new model downloaded via this copy-and-paste procedure replaces the previous one to save hard drive space. The download is saved as `MODEL_FOR_TESTING.gguf` in your _Downloads_ folder, so any GGUF-compatible runtime can also load it directly, as in the sketch below.
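+
+ As a rough illustration of what deterministic generation over this file can look like, the sketch below runs it with llama.cpp's `llama-cli`. The sampling flags are assumptions, not Samantha's actual configuration; `--temp 0` makes decoding greedy, so repeated runs return the same text:
+
+ ```bash
+ # Hypothetical deterministic run over the downloaded test model.
+ llama-cli -m ~/Downloads/MODEL_FOR_TESTING.gguf \
+     --temp 0 --seed 0 -n 128 \
+     -p "The meaning to life and the universe is"
+ ```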
+
+ You can also download a model and save it permanently on your computer. For more details, visit Samantha's project on GitHub.