---
license: bigcode-openrail-m
pipeline_tag: text-generation
library_name: gguf
base_model: cognitivecomputations/dolphincoder-starcoder2-15b
---
**NOTE**: You will need a recent build of llama.cpp (commit `494c870` or later) to run these quants.
GGUF importance matrix (imatrix) quants for https://huggingface.co/cognitivecomputations/dolphincoder-starcoder2-15b
* The importance matrix was trained for ~50K tokens (105 batches of 512 tokens) using a [general purpose imatrix calibration dataset](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).
* The [imatrix is being used on the K-quants](https://github.com/ggerganov/llama.cpp/pull/4930) as well (i.e. on all quants below Q6_K).
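For reference, quants like these are produced in two steps with llama.cpp's `imatrix` and `quantize` tools. The sketch below assumes a local F16 GGUF (converted from the original model) and a calibration text file; all file names are placeholders:

```shell
# Placeholder paths -- substitute your own files.
MODEL=dolphincoder-starcoder2-15b-f16.gguf
CALIB=calibration-data.txt

# 1) Train the importance matrix on the calibration text
#    (commented out: requires the F16 GGUF and a llama.cpp build on disk):
# ./imatrix -m "$MODEL" -f "$CALIB" -o imatrix.dat

# 2) Quantize using the imatrix, e.g. to Q4_K_M:
# ./quantize --imatrix imatrix.dat "$MODEL" dolphincoder-starcoder2-15b-Q4_K_M.gguf Q4_K_M
echo "quantizing $MODEL with imatrix trained on $CALIB"
```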
> This model is based on StarCoder2-15b and is subject to bigcode-openrail-m license.
This Dolphin is really good at coding; I trained it with a lot of coding data.

This model is uncensored. I have filtered the dataset to remove alignment and bias, which makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service, as it will be highly compliant with any request, even unethical ones. Please read my [blog post about uncensored models](https://erichartford.com/uncensored-models). You are responsible for any content you create using this model. Enjoy responsibly.
| Layers | Context | [Template](https://huggingface.co/cognitivecomputations/dolphincoder-starcoder2-15b#training) |
| --- | --- | --- |
| 40 | 16384 | \<\|im_start\|\>system<br>You are DolphinCoder, a helpful AI programming assistant.\<\|im_end\|\><br>\<\|im_start\|\>user<br>{prompt}\<\|im_end\|\><br>\<\|im_start\|\>assistant |
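Putting the table together: the template is ChatML, so an invocation with llama.cpp might look like the sketch below (the `main` binary name and flags reflect builds around commit `494c870`; the quant file name and `-ngl` value are placeholders you should adjust for your hardware):

```shell
# Assemble the ChatML prompt from the template above.
PROMPT='<|im_start|>system
You are DolphinCoder, a helpful AI programming assistant.<|im_end|>
<|im_start|>user
Write a function that reverses a string.<|im_end|>
<|im_start|>assistant
'

# Run with the full 16384-token context; -ngl offloads layers to the GPU
# (commented out: requires a quant file and a llama.cpp build on disk):
# ./main -m dolphincoder-starcoder2-15b-Q5_K_M.gguf -c 16384 -ngl 41 -p "$PROMPT"
printf '%s' "$PROMPT"
```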