This model is for debugging purposes. It was produced with llama.cpp b4453 (Windows CUDA 12 binary) and the convert_hf_to_gguf.py script released alongside it.
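For reference, a typical GGUF conversion and smoke-test workflow with these tools looks roughly like the sketch below; the source model directory, output filename, prompt, and token count are placeholders, not taken from this repository.

```bash
# Convert a Hugging Face model directory to GGUF (placeholder paths).
python convert_hf_to_gguf.py ./source-hf-model --outfile model-f16.gguf --outtype f16

# Quick smoke test with the prebuilt llama.cpp binary.
llama-cli -m model-f16.gguf -p "Hello" -n 32
```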
MIT License