---
license: cc-by-sa-4.0
---

# SLIM-XSUM-TOOL

<!-- Provide a quick summary of what the model is/does. -->

**slim-xsum-tool** is a 4_K_M quantized GGUF version of slim-xsum, providing a small, fast inference implementation optimized for multi-model concurrent deployment.

This model implements an 'extreme summarization' ('xsum') function: based on the parameter key "xsum", it generates LLM text output in the form of a Python dictionary, for example:

`{'xsum': ['Stock Market declines on worries of interest rates.']}`

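Since the model emits this dictionary as literal text, a common post-processing step when running the GGUF directly in another engine is to convert that string into a real Python dictionary (the llmware `function_call` shown further below is designed to return structured output for you). A minimal sketch, assuming the generation is a well-formed dictionary literal like the example above:

```python
import ast

# raw text generated by the model (the example output shown in this card)
raw_output = "{'xsum': ['Stock Market declines on worries of interest rates.']}"

# safely evaluate the literal into a Python dict; fall back to None if malformed
try:
    xsum_dict = ast.literal_eval(raw_output)
except (ValueError, SyntaxError):
    xsum_dict = None

if xsum_dict:
    print(xsum_dict["xsum"][0])  # -> 'Stock Market declines on worries of interest rates.'
```
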
The intent of SLIMs is to provide a middle ground between traditional encoder-based classifiers and open-ended API-based LLMs, through the use of function-calling and small, specialized LLMs.

[**slim-xsum**](https://huggingface.co/llmware/slim-xsum) is the PyTorch version of the model, and is suitable for fine-tuning for further domain adaptation.

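If you want to experiment with the PyTorch model directly (for example, as a starting point for fine-tuning), a minimal sketch of loading it with Hugging Face `transformers` is shown below. This assumes slim-xsum loads as a standard causal-LM checkpoint; see the slim-xsum model card for the exact prompt format and generation settings.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# assumption: slim-xsum is a standard causal language model checkpoint
model_name = "llmware/slim-xsum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float32)
model.eval()

# from here, fine-tune with your preferred trainer, or run generation using the
# prompt wrapper documented in the slim-xsum repository
```
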
To pull the model via API:

```python
from huggingface_hub import snapshot_download

snapshot_download("llmware/slim-xsum-tool", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
```

Load in your favorite GGUF inference engine, or try with llmware as follows:

```python
from llmware.models import ModelCatalog

# to load the model and make a basic inference
model = ModelCatalog().load_model("slim-xsum-tool")

text_sample = "..."  # the passage of text to summarize
response = model.function_call(text_sample)

# this one line will download the model and run a series of tests
ModelCatalog().tool_test_run("slim-xsum-tool", verbose=True)
```
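
A minimal sketch of reading the summary out of the response, assuming the parsed dictionary is returned under the "llm_response" key (inspect the response object to confirm the exact structure for your llmware version):

```python
# assumption: the parsed dictionary is returned under the "llm_response" key
xsum_dict = response["llm_response"] if isinstance(response, dict) else response
summary = xsum_dict.get("xsum", [""])[0]
print(summary)
```
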

Note: please review [**config.json**](https://huggingface.co/llmware/slim-xsum-tool/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and full test set.
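
If you prefer to run the GGUF file in another engine, a minimal sketch with `llama-cpp-python` is shown below. The model file name and the prompt wrapper are placeholders, not taken from this repository; substitute the actual GGUF file name and the prompt wrapping documented in config.json.

```python
from llama_cpp import Llama

# model_path is a placeholder -- point it at the GGUF file downloaded from this repository
llm = Llama(model_path="/path/on/your/machine/slim-xsum-tool.gguf", n_ctx=2048, verbose=False)

# hypothetical prompt wrapper, for illustration only -- use the wrapping specified in config.json
text_sample = "..."  # the passage of text to summarize
prompt = "<human>: " + text_sample + "\n<xsum>\n<bot>:"

output = llm(prompt, max_tokens=100, temperature=0.0)
print(output["choices"][0]["text"])  # expected to contain the dictionary-style xsum output
```
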

## Model Card Contact

Darren Oberst & llmware team

[Any questions? Join us on Discord](https://discord.gg/MhZn5Nc39h)