---
tags:
  - text-generation-inference
  - transformers
  - llama
  - trl
  - llama-cpp
  - gguf-my-lora
license: apache-2.0
language:
  - en
---

GGUF LoRA adapter for Llama-3.2 3B that equips it with ASCII cat generation capabilities.

Examples of ASCII cats generated by this adapter:

```
  ^--^    
(_='.')  
//
||              |\_/|
 \\  .-"""--._,' e b 
  \\/         \   =A/
   \    \       /'
    \| _|___/\ |
     '-'-------'-
.       .         
\-"'"-'/
 } ^^ {     
=.  -  ,=   
  /^^^\  .
 /     \  )           
(   Y   ) |
=""'""...'Y 
```

For more, see the generation examples.

For inference, download both this adapter and the Llama-3.2-3B base model (https://huggingface.co/pookie3000/Llama-3.2-3B-GGUF) locally. Then invoke the resulting model with an empty prompt. You can get a variety of cats by playing with temperature, top-p, and other sampling parameters.
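As a minimal sketch, the same setup can be driven from Python via the llama-cpp-python bindings; the file paths below are illustrative and assume you have already downloaded both GGUF files:

```python
# Sketch: load the base model together with the LoRA adapter using llama-cpp-python.
# Paths are illustrative -- point them at your local copies of the GGUF files.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3.2-3B.F16.gguf",             # base model GGUF
    lora_path="Llama-3.2-3B-ascii-cats-lora.gguf",  # this adapter
)

# Empty prompt, as described above; sampling parameters control cat variety.
out = llm("", max_tokens=256, temperature=0.9, top_p=0.95)
print(out["choices"][0]["text"])
```

Raising `temperature` or `top_p` makes the sampled cats more varied; lowering them makes generations more repeatable.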

More info can be found at: https://github.com/vossenwout/ascii-cat-llm-finetuning

You can also check out my Python inference notebook at: https://github.com/vossenwout/ascii-cat-llm-finetuning/blob/main/src/inference/notebooks/llama_cpp_inference.ipynb

llama.cpp local example:

```shell
# Run with an empty prompt; vary sampling flags (e.g. --temp, --top-p) for different cats.
./llama-cli -m Llama-3.2-3B.F16.gguf --lora Llama-3.2-3B-ascii-cats-lora.gguf --prompt ""
```