Update README.md
---
license: mit
---
# durga231/PolyCoder-2.7B-GGUF

Quantized by: durga231

- Model creator: [NinedayWang](https://huggingface.co/NinedayWang)
- Original model: [PolyCoder-2.7B](https://huggingface.co/NinedayWang/PolyCoder-2.7B)

<!-- description start -->
## Description

This repo contains GGUF format model files for [PolyCoder-2.7B](https://huggingface.co/NinedayWang/PolyCoder-2.7B).

<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
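The format is easy to recognise programmatically: every GGUF file opens with the four ASCII bytes `GGUF`, followed by a little-endian `uint32` version field. The sketch below shows a minimal header check; the helper names `is_gguf` and `gguf_version` are ours for illustration, not part of any library:

```python
import struct

GGUF_MAGIC = b"GGUF"  # the four ASCII bytes that open every GGUF file


def is_gguf(path):
    """Return True if the file at `path` starts with the GGUF magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == GGUF_MAGIC


def gguf_version(path):
    """Read the little-endian uint32 version field that follows the magic."""
    with open(path, "rb") as f:
        header = f.read(8)
    if header[:4] != GGUF_MAGIC:
        raise ValueError(f"{path} is not a GGUF file")
    return struct.unpack("<I", header[4:8])[0]
```

This is handy for sanity-checking a download before handing the file to a loader.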
Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS, with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note: as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->

<!-- repositories-available start -->

<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev
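For scripted downloads, a single file can also be fetched with the `huggingface_hub` library rather than cloning the repo. The quant filename below is a placeholder, not a real file in this repo; pick the actual `.gguf` file you want from the repository's file list:

```python
from huggingface_hub import hf_hub_download

# Fetch one quantised file instead of cloning the whole repository.
# The filename is hypothetical -- substitute a real .gguf file from
# the repo's "Files and versions" tab.
model_path = hf_hub_download(
    repo_id="durga231/PolyCoder-2.7B-GGUF",
    filename="polycoder-2.7b.Q4_K_M.gguf",  # placeholder quant name
)
print(model_path)  # local cache path of the downloaded file
```

`hf_hub_download` caches the file locally and returns its path, so repeated runs do not re-download.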
<!-- footer end -->

<!-- original-model-card start -->