This is a GGML version of OpenOrca-Platypus2-13B, quantized to 4-bit (q4_0).
(Link to the original model: https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B)
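Below is a minimal sketch of loading a GGML q4_0 file with the ctransformers library. The filename and the Alpaca-style prompt format are assumptions; adjust them to the actual file shipped in this repo and the prompt template of the original model.

```python
from ctransformers import AutoModelForCausalLM

# Filename is an assumption -- point this at the q4_0 GGML file from this repo.
llm = AutoModelForCausalLM.from_pretrained(
    "openorca-platypus2-13b.ggmlv3.q4_0.bin",
    model_type="llama",  # OpenOrca-Platypus2 is a Llama-2-based model
)

# Prompt template assumed to be Alpaca-style; check the original model card.
prompt = "### Instruction:\n\nExplain what 4-bit quantization does.\n\n### Response:\n"
print(llm(prompt, max_new_tokens=128))
```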