---
base_model:
  - LiquidAI/LFM2-350M
---

# LFM2-350M • Quantized Version (GGUF)

A quantized GGUF version of the LiquidAI/LFM2-350M model.

- ✅ Format: GGUF
- ✅ Use with: liquid_llama.cpp
- ✅ Supported precisions: Q4_0, Q4_K, etc.

## Download

```bash
wget https://huggingface.co/yasserrmd/LFM2-350M-gguf/resolve/main/lfm2-350m.Q4_K.gguf
```

(Adjust the filename for other quant formats such as Q4_0, if available.)
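After downloading, the model can be run from the command line. The following is a minimal sketch that assumes liquid_llama.cpp has been built locally and exposes a `llama-cli` binary with the same basic flags as upstream llama.cpp (`-m`, `-p`, `-n`); check the liquid_llama.cpp documentation for the exact binary name and options.

```bash
# Minimal inference sketch -- assumes liquid_llama.cpp is built and provides
# a llama-cli binary with llama.cpp-compatible flags (an assumption, not
# confirmed by this repo):
#   -m : path to the downloaded GGUF file
#   -p : prompt text
#   -n : maximum number of tokens to generate
./llama-cli -m lfm2-350m.Q4_K.gguf \
  -p "Explain GGUF quantization in one sentence." \
  -n 128
```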

## Notes

- Only compatible with liquid_llama.cpp (not upstream llama.cpp).
- Replace `Q4_K` in the filename with your chosen quantization variant (see the example below).
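As an alternative to wget, files can also be fetched with the Hugging Face CLI. This is a minimal sketch assuming the `huggingface_hub` package (which provides the `huggingface-cli` command) is installed; substitute the filename of whichever quant variant is published in this repo.

```bash
# Install the Hugging Face CLI if needed, then download a specific GGUF file
# into the current directory. Swap lfm2-350m.Q4_K.gguf for another quant
# filename if one is available.
pip install -U huggingface_hub
huggingface-cli download yasserrmd/LFM2-350M-gguf lfm2-350m.Q4_K.gguf --local-dir .
```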