doberst committed on
Commit 5c2bd2e · verified · 1 Parent(s): 9e4c66d

Update README.md

Files changed (1):
  1. README.md +32 -5
README.md CHANGED
@@ -33,14 +33,41 @@ Inference speed and loading time is much faster with the 'tool' versions of the
 
 The intended use of SLIM models is to re-imagine traditional 'hard-coded' classifiers through the use of function calls, and to provide a natural language flexible tool that can be used as decision gates and processing steps in a complex LLM-based automation workflow.
 
-Example:
-
-text = "The stock market declined yesterday as investors worried increasingly about the slowing economy."
-
-model generation output- {"sentiment": ["negative"]}
-
-function = "classify"
-keys = "sentiment"
+<details>
+<summary><b>Getting Started</b></summary>
+
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+model = AutoModelForCausalLM.from_pretrained("llmware/slim-sentiment")
+tokenizer = AutoTokenizer.from_pretrained("llmware/slim-sentiment")
+
+function = "classify"
+params = "sentiment"
+
+# two sample passages - the second assignment is the one that gets classified
+text = "That was the worst earnings call of the year. The CEO should be fired."
+text = "The stock market declined yesterday as investors worried increasingly about the slowing economy."
+
+prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"
+
+inputs = tokenizer(prompt, return_tensors="pt")
+start_of_input = len(inputs.input_ids[0])
+
+outputs = model.generate(
+    inputs.input_ids.to('cpu'),
+    eos_token_id=tokenizer.eos_token_id,
+    pad_token_id=tokenizer.eos_token_id,
+    do_sample=True,
+    temperature=0.3,
+    max_new_tokens=100
+)
+
+output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)
+
+print("output only: ", output_only)
+</details>
+
+Sample output:
+
+{"sentiment": ["negative"]}
 
 All of the SLIM models use a novel prompt instruction structured as follows:
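
The getting-started snippet added in this commit prints the model's answer as a raw string, e.g. {"sentiment": ["negative"]}. A minimal sketch of turning that string into a Python dictionary for downstream use - the ast.literal_eval approach here is an illustrative assumption, not a documented llmware utility:

import ast

output_only = '{"sentiment": ["negative"]}'   # raw string from the generation above

try:
    # literal_eval parses the dict literal safely, without executing code
    response = ast.literal_eval(output_only.strip())
except (ValueError, SyntaxError):
    # sampling (do_sample=True) can occasionally yield malformed output
    response = {}

print(response.get("sentiment"))   # ['negative']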
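
The README describes SLIM models as decision gates in an LLM-based automation workflow. A hypothetical sketch of that pattern, branching on the parsed classifier output - route_to_analyst and archive are placeholder functions invented for illustration, not llmware APIs:

# placeholder downstream steps, assumed for this sketch
def route_to_analyst(passage):
    print("escalating for review:", passage)

def archive(passage):
    print("archiving:", passage)

def sentiment_gate(passage, response):
    # branch the workflow on the structured function-call output
    if "negative" in response.get("sentiment", []):
        route_to_analyst(passage)
    else:
        archive(passage)

sentiment_gate("The stock market declined yesterday as investors worried increasingly about the slowing economy.",
               {"sentiment": ["negative"]})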