This model is for debugging purposes. It was produced with llama.cpp b4453 (Windows CUDA 12 binary release) and the `convert_hf_to_gguf.py` script from the same release.
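The conversion and quantization steps can be sketched as follows. This is a minimal sketch, not the exact commands used for this build: the model directory and output file names are assumptions, and paths to the llama.cpp tools will differ on your system.

```shell
# Convert the Hugging Face checkpoint to a 16-bit GGUF file.
# "./Phi-3.5-MoE-instruct" is a placeholder for the local model directory.
python convert_hf_to_gguf.py ./Phi-3.5-MoE-instruct \
    --outfile phimoe-f16.gguf --outtype f16

# Quantize the 16-bit GGUF down to 4-bit (Q4_K_M) with the llama.cpp
# quantization tool from the same release.
./llama-quantize phimoe-f16.gguf phimoe-q4_k_m.gguf Q4_K_M
```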

MIT License

GGUF
- Model size: 41.9B params
- Architecture: phimoe
- Quantizations: 4-bit, 16-bit
