putazon committed · Commit ba0bf15 (verified) · Parent: dd1d2db

Upload README.md with huggingface_hub

---
base_model: putazon/SearchQueryNER-6k-phi-4-4bit-v0-lora
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- llama-cpp
- gguf-my-lora
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---

# putazon/SearchQueryNER-6k-phi-4-4bit-v0-lora-F16-GGUF

This LoRA adapter was converted to GGUF format from [`putazon/SearchQueryNER-6k-phi-4-4bit-v0-lora`](https://huggingface.co/putazon/SearchQueryNER-6k-phi-4-4bit-v0-lora) using ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/putazon/SearchQueryNER-6k-phi-4-4bit-v0-lora) for more details.
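
If you prefer to reproduce the conversion locally rather than through the space, llama.cpp ships a LoRA conversion script. Below is a minimal sketch, assuming a local llama.cpp checkout and that the adapter and a compatible base model have already been downloaded; the paths are placeholders, not real locations:

```bash
# Hypothetical local conversion, roughly what the GGUF-my-lora space runs.
# ./SearchQueryNER-lora and ./phi-4-base are illustrative placeholder paths.
python llama.cpp/convert_lora_to_gguf.py ./SearchQueryNER-lora \
  --base ./phi-4-base \
  --outtype f16
```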

## Use with llama.cpp

```bash
# with cli
llama-cli -m base_model.gguf --lora SearchQueryNER-6k-phi-4-4bit-v0-lora-f16.gguf (...other args)

# with server
llama-server -m base_model.gguf --lora SearchQueryNER-6k-phi-4-4bit-v0-lora-f16.gguf (...other args)
```

For more on LoRA usage with the llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
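
Once llama-server is running with the adapter loaded, requests go to the server's standard completion endpoint. A minimal sketch, assuming the default port; the prompt string is only an illustration, so check the original adapter repository for the expected prompt format:

```bash
# Illustrative request against a locally running llama-server (default port 8080).
# The prompt below is a made-up example, not the adapter's verified template.
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Extract entities from: red nike running shoes size 10", "n_predict": 64}'
```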