---
license: mit
quantized_by: durga231
---

# durga231/PolyCoder-2.7B-GGUF
|
- Model creator: [NinedayWang](https://huggingface.co/NinedayWang)
- Original model: [PolyCoder-2.7B](https://huggingface.co/NinedayWang/PolyCoder-2.7B)
|
<!-- description start -->
|
## Description

This repo contains GGUF format model files for [PolyCoder-2.7B](https://huggingface.co/NinedayWang/PolyCoder-2.7B).
|
<!-- description end -->
|
<!-- README_GGUF.md-about-gguf start -->
|
### About GGUF
|
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
|
Here is an incomplete list of clients and libraries that are known to support GGUF: |
|
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
|
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
|
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
|
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
|
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server (see the usage sketch after this list).
|
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
|
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note: as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
|
<!-- README_GGUF.md-about-gguf end -->
|
<!-- repositories-available start -->
|
<!-- README_GGUF.md-how-to-download start -->
|
## How to download GGUF files
|
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file (a scripted download example follows the list below).
|
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
|
* LM Studio
|
* LoLLMS Web UI
|
* Faraday.dev
|
<!-- footer end --> |
|
<!-- original-model-card start -->