slim-sentiment has been fine-tuned for **sentiment analysis** function calls, generating output consisting of a Python dictionary corresponding to specified keys.

Each slim model has a corresponding 'tool' in a separate repository, e.g., [**'slim-sentiment-tool'**](https://huggingface.co/llmware/slim-sentiment-tool), which is a 4-bit quantized GGUF version of the model, intended to be used for inference.

Inference speed and loading time are much faster with the 'tool' versions of the model, and multiple tools can be deployed concurrently and run on a local CPU-based laptop or server.
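
As a minimal sketch of that deployment path, the quantized tool can be loaded through llmware's `ModelCatalog` (this assumes the llmware package is installed and that the interface matches your installed version; check the llmware docs):

```python
# minimal sketch, assuming the llmware package (pip install llmware) and that
# the gguf tool is registered in ModelCatalog under this name -- verify
# against the llmware documentation for your version
from llmware.models import ModelCatalog

# load the 4-bit quantized gguf tool for local, CPU-based inference
model = ModelCatalog().load_model("slim-sentiment-tool")

response = model.function_call("The stock market declined yesterday ...")
print("llm response: ", response)
```
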
The intended use of SLIM models is to re-imagine traditional 'hard-coded' classifiers through the use of function calls, and to provide a flexible, natural-language tool that can be used as a decision gate and processing step in a complex LLM-based automation workflow.

## Prompt format:

All of the SLIM models use a novel prompt instruction structured as follows:

    "<human> " + {text} + "\n" +
    "<{function}> " + {keys} + "</{function}>" +
    "\n<bot>:"

For example, in this case, the prompt would be as follows:

    "<human> " + "The stock market declined yesterday ..." + "\n" + "<classify> sentiment </classify>" + "\n<bot>:"
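
As a quick illustration, the template can be assembled in plain Python (the `text`, `function`, and `keys` values below are illustrative only, not prescribed by the model card):

```python
# minimal sketch: assembling the slim prompt template in plain Python;
# the sample values are illustrative
text = "The stock market declined yesterday ..."
function = "classify"   # the function tag used by slim-sentiment
keys = "sentiment"      # the dictionary key(s) requested in the output

prompt = "<human> " + text + "\n" + \
         f"<{function}> " + keys + f" </{function}>" + "\n<bot>:"
print(prompt)
```
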
<details>
<summary><b>Getting Started Example</b></summary>

```python
import ast
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("llmware/slim-sentiment")
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-sentiment")

# note: prompt assembly and generation settings below are a representative
# reconstruction -- adjust sampling and max_new_tokens as needed
text = "The stock market declined yesterday ..."
prompt = "<human> " + text + "\n" + "<classify> sentiment </classify>" + "\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt")
start_of_input = len(inputs.input_ids[0])
outputs = model.generate(inputs.input_ids, pad_token_id=tokenizer.eos_token_id,
                         max_new_tokens=100)

output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)

print("output only: ", output_only)

# here's the fun part - convert the llm string output into a python dictionary
try:
    sentiment_dict = ast.literal_eval(output_only)
    print("success - converted to python dictionary automatically")
except (ValueError, SyntaxError):
    print("fail - could not convert to python dictionary automatically - ", output_only)

# sample output:
# {"sentiment": ["negative"]}
```

</details>
## Using as Function Call in LLMWare
We envision the slim models deployed in a pipeline/workflow/templating framework that handles the prompt packaging more elegantly.
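
A minimal sketch of that pattern, assuming the llmware package (the `function_call` signature shown here may vary across llmware versions; verify against the llmware docs):

```python
# minimal sketch, assuming the llmware package (pip install llmware); the
# params/function keyword arguments are assumptions to confirm per version
from llmware.models import ModelCatalog

model = ModelCatalog().load_model("llmware/slim-sentiment")
response = model.function_call("The stock market declined yesterday ...",
                               params=["sentiment"], function="classify")
print("llm response: ", response)
```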