Adapter GGUFs
Adapters that can be used together with a base model GGUF.
A GGUF LoRA adapter for Llama 3.2 3B that equips it with ASCII cat generation capabilities.
```
^---^
(_='.')
//
|| |\_/|
\\ .-"""--._,' e b
\\/ \ =A/
\ \ /'
\| _|___/\ |
'-'-------'-
. .
\-"'"-'/
} ^^ {
=. - ,=
/^^^\ .
/ \ )
( Y ) |
=""'""...'Y
```
For more, see the generation examples.
For inference you need to locally clone both this adapter and the Llama 3.2 3B base model (https://huggingface.co/pookie3000/Llama-3.2-3B-GGUF). You then invoke the resulting model with an empty prompt. You can get a variety of cats by playing with temperature, top-p, and other inference parameters.
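As a rough sketch (assuming git with git-lfs is installed; the adapter URL below is a placeholder for the repo this card belongs to), cloning both repos could look like:

```bash
# Base model GGUF repo (git-lfs is needed to pull the weight files)
git clone https://huggingface.co/pookie3000/Llama-3.2-3B-GGUF

# This adapter repo -- placeholder URL, substitute the repo this card belongs to
git clone https://huggingface.co/<this-adapter-repo>
```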
More info can be found at: https://github.com/vossenwout/ascii-cat-llm-finetuning
You can also check out my Python inference notebook at: https://github.com/vossenwout/ascii-cat-llm-finetuning/blob/main/src/inference/notebooks/llama_cpp_inference.ipynb
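The linked notebook uses llama-cpp-python; a minimal sketch of the same idea is shown below. The file names match the llama.cpp example further down, and the sampling values are illustrative assumptions rather than values taken from the notebook.

```python
# Minimal sketch of adapter inference with llama-cpp-python (pip install llama-cpp-python).
# Assumes both GGUF files from this card have already been downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3.2-3B.F16.gguf",             # base model GGUF
    lora_path="Llama-3.2-3B-ascii-cats-lora.gguf",  # this ASCII-cat adapter
    n_ctx=512,
)

# The adapter is invoked with an empty prompt; temperature and top_p
# control how varied the generated cats are (illustrative values).
result = llm(
    "",
    max_tokens=256,
    temperature=0.9,
    top_p=0.95,
)
print(result["choices"][0]["text"])
```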
llama.cpp local example

```bash
./llama-cli -m Llama-3.2-3B.F16.gguf --lora Llama-3.2-3B-ascii-cats-lora.gguf --prompt ""
```
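To vary the cats from the CLI, the standard llama.cpp sampling flags can be added on top of the command above; the values here are illustrative assumptions:

```bash
./llama-cli -m Llama-3.2-3B.F16.gguf --lora Llama-3.2-3B-ascii-cats-lora.gguf \
  --prompt "" -n 256 --temp 0.9 --top-p 0.95
```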