doberst committed
Commit 94da0b0 · verified · 1 Parent(s): c4a4c08

Update README.md

Files changed (1)
  1. README.md +8 -8
README.md CHANGED
@@ -6,20 +6,20 @@ license: apache-2.0
 
  <!-- Provide a quick summary of what the model is/does. -->
 
- **slim-ner-tool** is part of the SLIM ("Structured Language Instruction Model") model series, providing a set of small, specialized decoder-based LLMs, fine-tuned for function-calling.
+ **slim-ratings-tool** is part of the SLIM ("Structured Language Instruction Model") model series, providing a set of small, specialized decoder-based LLMs, fine-tuned for function-calling.
 
- slim-ner-tool is a 4_K_M quantized GGUF version of slim-ner, providing a small, fast inference implementation.
+ slim-ratings-tool is a 4_K_M quantized GGUF version of slim-ratings, providing a small, fast inference implementation.
 
  Load in your favorite GGUF inference engine (see details in config.json to set up the prompt template), or try with llmware as follows:
 
      from llmware.models import ModelCatalog
 
      # to load the model and make a basic inference
-     ner_tool = ModelCatalog().load_model("slim-ner-tool")
-     response = ner_tool.function_call(text_sample)
+     ratings_tool = ModelCatalog().load_model("slim-ratings-tool")
+     response = ratings_tool.function_call(text_sample)
 
      # this one line will download the model and run a series of tests
-     ModelCatalog().test_run("slim-ner-tool", verbose=True)
+     ModelCatalog().test_run("slim-ratings-tool", verbose=True)
 
 
  Slim models can also be loaded even more simply as part of a multi-model, multi-step LLMfx calls:
@@ -27,8 +27,8 @@ Slim models can also be loaded even more simply as part of a multi-model, multi-
      from llmware.agents import LLMfx
 
      llm_fx = LLMfx()
-     llm_fx.load_tool("ner")
-     response = llm_fx.named_entity_extraction(text)
+     llm_fx.load_tool("ratings")
+     response = llm_fx.ratings(text)
 
 
  ### Model Description
@@ -39,7 +39,7 @@ Slim models can also be loaded even more simply as part of a multi-model, multi-
  - **Model type:** GGUF
  - **Language(s) (NLP):** English
  - **License:** Apache 2.0
- - **Quantized from model:** llmware/slim-sentiment (finetuned tiny llama)
+ - **Quantized from model:** llmware/slim-ratings (finetuned tiny llama)
 
  ## Uses
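
The README's `function_call` example returns a structured payload that downstream code will typically want to validate before use. Below is a minimal sketch of that validation step; the `{"rating": [...]}` schema and the helper name `validate_ratings_payload` are illustrative assumptions, not documented output of slim-ratings-tool.

```python
# Hedged sketch: validating a structured function-call response.
# The {"rating": [...]} schema below is an assumed example payload,
# not the documented output format of slim-ratings-tool.

def validate_ratings_payload(payload):
    """Return the rating values as ints, raising if the shape is off."""
    values = payload.get("rating")
    if not isinstance(values, list) or not values:
        raise ValueError("expected a non-empty list under 'rating'")
    return [int(v) for v in values]

# Mocked response, so no model download is needed to try the helper:
mock_payload = {"rating": ["4"]}
print(validate_ratings_payload(mock_payload))  # [4]
```

Keeping a check like this separate from the model call makes it easy to unit-test without downloading the GGUF file.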