Quantized by: durga231

# durga231/PolyCoder-2.7B-GGUF

## About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF:

  • [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
  • [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
  • [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
  • [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI with GPU acceleration.
  • [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
  • [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
  • [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note: as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
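As an example of the library route, llama-cpp-python can load a GGUF file directly. A minimal sketch, assuming a quant file has already been downloaded; the filename `polycoder-2.7b.Q4_K_M.gguf` is a hypothetical quant name, so check this repo's file list for the actual names:

```python
# Minimal sketch of running a GGUF quant with llama-cpp-python
# (pip install llama-cpp-python). The filename below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="./polycoder-2.7b.Q4_K_M.gguf",  # path to the downloaded GGUF file
    n_ctx=2048,       # context window size
    n_gpu_layers=0,   # raise to offload layers to the GPU, if built with GPU support
)

# PolyCoder is a code model, so prompt it with code.
output = llm("def fibonacci(n):", max_tokens=128, stop=["\n\n"])
print(output["choices"][0]["text"])
```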

## How to download GGUF files

Note for manual downloaders: You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

  • LM Studio
  • LoLLMS Web UI
  • Faraday.dev
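To fetch a single file manually instead, the `huggingface_hub` Python library can download one quant without cloning the repo. A minimal sketch, assuming `huggingface_hub` is installed; the quant filename is hypothetical, so check the repo's file list for the actual names:

```python
# Minimal sketch: download one GGUF quant without cloning the whole repo
# (pip install huggingface_hub). The filename below is hypothetical.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="durga231/PolyCoder-2.7B-GGUF",
    filename="polycoder-2.7b.Q4_K_M.gguf",  # hypothetical quant filename
    local_dir=".",                          # save into the current directory
)
print(f"Downloaded to: {model_path}")
```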