Update README.md

README.md

<!-- Provide a quick summary of what the model is/does. -->

**slim-sentiment** is part of the SLIM ("**S**tructured **L**anguage **I**nstruction **M**odel") series, consisting of small, specialized decoder-based models, fine-tuned for function-calling.

slim-sentiment has been fine-tuned for **sentiment analysis** function calls, generating output consisting of a Python dictionary corresponding to specified keys, e.g.:
`{"sentiment": ["positive"]}`

SLIM models re-imagine traditional 'hard-coded' classifiers through the use of function calls to provide a flexible natural language generative model that can be used as decision gates and processing steps in a complex LLM-based automation workflow.
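
As a rough illustration of the decision-gate idea, here is a minimal, hypothetical sketch (the helper name and routing rules are illustrative, not part of this card):

```python
# Hypothetical decision gate: branch an automation workflow on the
# structured sentiment output returned by slim-sentiment.
def route_by_sentiment(sentiment_dict):
    sentiment = sentiment_dict.get("sentiment", [None])[0]
    if sentiment == "negative":
        return "escalate-for-review"    # e.g., hand off to a human or a follow-up chain
    return "continue-pipeline"          # positive/neutral passages flow through

print(route_by_sentiment({"sentiment": ["positive"]}))   # continue-pipeline
```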
Each slim model has a 'quantized tool' version, e.g., [**'slim-sentiment-tool'**](https://huggingface.co/llmware/slim-sentiment-tool).
## Prompt format:
`"<human> " + {text} + "\n" + `
`"<{function}> " + {keys} + "</{function}>"`
`+ "\n<bot>:"`
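
For illustration, a minimal sketch of assembling that prompt in Python; the `classify` function name and the sample passage are assumptions, not fixed by this card:

```python
# Assemble the prompt per the format above ("classify" and the passage
# are assumed values for illustration).
text = "The quarterly results were a big disappointment."
function = "classify"
keys = "sentiment"

prompt = "<human> " + text + "\n" + "<" + function + "> " + keys + "</" + function + ">" + "\n<bot>:"
```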
<details>
<summary><b>Transformers Script</b></summary>

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("llmware/slim-sentiment")
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-sentiment")

# ...

except:
    print("fail - could not convert to python dictionary automatically - ", llm_string_output)
```

</details>
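
The middle of that script is elided in this view. As a hedged sketch of the generate-and-parse pattern it follows (the prompt text, generation settings, and the `ast.literal_eval` conversion are assumptions, not the card's exact code):

```python
import ast

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("llmware/slim-sentiment")
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-sentiment")

# hypothetical sample passage; the prompt follows the format shown above
text = "This is the best product I have ever used."
prompt = "<human> " + text + "\n" + "<classify> sentiment</classify>" + "\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)

# decode only the tokens generated after the prompt
llm_string_output = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)

try:
    # the model emits a python-dict-style string, e.g. {"sentiment": ["positive"]}
    sentiment_dict = ast.literal_eval(llm_string_output.strip())
    print("success - converted to python dictionary: ", sentiment_dict)
except:
    print("fail - could not convert to python dictionary automatically - ", llm_string_output)
```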
<details>
<summary><b>Using as Function Call in LLMWare</b></summary>
We envision the slim models deployed in a pipeline/workflow/templating framework that handles the prompt packaging more elegantly.
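
A minimal sketch of that pattern with the llmware library; the exact API (`ModelCatalog().load_model` and `function_call`) is taken from llmware's documentation and should be treated as an assumption here:

```python
from llmware.models import ModelCatalog

# load the quantized 'tool' variant for fast local inference (assumed API)
model = ModelCatalog().load_model("slim-sentiment-tool")

# the framework handles the prompt packaging; we just pass the text
response = model.function_call("The stock fell sharply after the earnings miss.")
print(response)   # expected to contain {"sentiment": [...]}
```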