Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
# Magicoder-S-DS-6.7B - GGUF
- Model creator: https://huggingface.co/ise-uiuc/
- Original model: https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Magicoder-S-DS-6.7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q2_K.gguf) | Q2_K | 2.36GB |
| [Magicoder-S-DS-6.7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.IQ3_XS.gguf) | IQ3_XS | 2.61GB |
| [Magicoder-S-DS-6.7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [Magicoder-S-DS-6.7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [Magicoder-S-DS-6.7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [Magicoder-S-DS-6.7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q3_K.gguf) | Q3_K | 3.07GB |
| [Magicoder-S-DS-6.7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [Magicoder-S-DS-6.7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [Magicoder-S-DS-6.7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [Magicoder-S-DS-6.7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q4_0.gguf) | Q4_0 | 3.56GB |
| [Magicoder-S-DS-6.7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.IQ4_NL.gguf) | IQ4_NL | 3.59GB |
| [Magicoder-S-DS-6.7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [Magicoder-S-DS-6.7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q4_K.gguf) | Q4_K | 3.8GB |
| [Magicoder-S-DS-6.7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [Magicoder-S-DS-6.7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q4_1.gguf) | Q4_1 | 3.95GB |
| [Magicoder-S-DS-6.7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q5_0.gguf) | Q5_0 | 4.33GB |
| [Magicoder-S-DS-6.7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [Magicoder-S-DS-6.7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q5_K.gguf) | Q5_K | 4.46GB |
| [Magicoder-S-DS-6.7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q5_K_M.gguf) | Q5_K_M | 4.46GB |
| [Magicoder-S-DS-6.7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q5_1.gguf) | Q5_1 | 4.72GB |
| [Magicoder-S-DS-6.7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q6_K.gguf) | Q6_K | 5.15GB |
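To download any single quant above without cloning the whole repo, the table's `blob` links can be turned into direct-download (`resolve`) URLs. A minimal sketch (the `gguf_url` helper is illustrative, not part of this repo):

```python
# Build a direct-download URL for any quant in the table above.
# Swapping "blob" for "resolve" in a Hugging Face file link yields the raw file.
REPO_ID = "RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf"

def gguf_url(quant: str) -> str:
    """Return the direct-download URL for a quant name, e.g. 'Q4_K_M'."""
    filename = f"Magicoder-S-DS-6.7B.{quant}.gguf"
    return f"https://huggingface.co/{REPO_ID}/resolve/main/{filename}"

print(gguf_url("Q4_K_M"))
```

The resulting URL can be passed to `curl -L -O` or `wget`, or you can fetch the file programmatically with `huggingface_hub.hf_hub_download`.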
Original model description:
---
license: other
library_name: transformers
datasets:
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
license_name: deepseek
pipeline_tag: text-generation
---
# 🎩 Magicoder: Source Code Is All You Need
> Refer to our GitHub repo [ise-uiuc/magicoder](https://github.com/ise-uiuc/magicoder/) for an up-to-date introduction to the Magicoder family!
* 🎩**Magicoder** is a model family empowered by 💪**OSS-Instruct**, a novel approach that uses open-source code snippets to generate *low-bias*, *high-quality* instruction data for code.
* 💪**OSS-Instruct** mitigates the *inherent bias* of LLM-synthesized instruction data by grounding generation in *a wealth of open-source references*, producing more diverse, realistic, and controllable data.
![Overview of OSS-Instruct](assets/overview.svg)
![Overview of Result](assets/result.png)
## Model Details
### Model Description
* **Developed by:**
[Yuxiang Wei](https://yuxiang.cs.illinois.edu),
[Zhe Wang](https://github.com/zhewang2001),
[Jiawei Liu](https://jiawei-site.github.io),
[Yifeng Ding](https://yifeng-ding.com),
[Lingming Zhang](https://lingming.cs.illinois.edu)
* **License:** [DeepSeek](https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL)
* **Finetuned from model:** [deepseek-coder-6.7b-base](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base)
### Model Sources
* **Repository:** <https://github.com/ise-uiuc/magicoder>
* **Paper:** <https://arxiv.org/abs/2312.02120>
* **Demo (powered by [Gradio](https://www.gradio.app)):**
<https://github.com/ise-uiuc/magicoder/tree/main/demo>
### Training Data
* [Magicoder-OSS-Instruct-75K](https://huggingface.co/datasets/ise-uiuc/Magicoder_oss_instruct_75k): generated through **OSS-Instruct** using `gpt-3.5-turbo-1106` and used to train both the Magicoder and Magicoder-S series.
* [Magicoder-Evol-Instruct-110K](https://huggingface.co/datasets/ise-uiuc/Magicoder_evol_instruct_110k): decontaminated and redistributed from [theblackcat102/evol-codealpaca-v1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1), used to further fine-tune the Magicoder series and obtain the Magicoder-S models.
## Uses
### Direct Use
Magicoder models are designed for, and best suited to, **coding tasks**.
### Out-of-Scope Use
Magicoder models may not work well on non-coding tasks.
## Bias, Risks, and Limitations
Magicoder models may sometimes make errors, produce misleading content, or struggle with tasks unrelated to coding.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.
## How to Get Started with the Model
Use the code below to get started with the model. Make sure you have installed the [transformers](https://huggingface.co/docs/transformers/index) library.
```python
from transformers import pipeline
import torch

# Prompt template expected by Magicoder models.
MAGICODER_PROMPT = """You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions.

@@ Instruction
{instruction}

@@ Response
"""

instruction = "<Your code instruction here>"

prompt = MAGICODER_PROMPT.format(instruction=instruction)
generator = pipeline(
    model="ise-uiuc/Magicoder-S-DS-6.7B",
    task="text-generation",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
result = generator(prompt, max_length=1024, num_return_sequences=1, temperature=0.0)
print(result[0]["generated_text"])
```
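Note that the `generated_text` returned by the pipeline includes the prompt itself. A small helper (hypothetical, not part of the library) can keep only the model's answer, assuming the `@@ Response` marker from the template above:

```python
def extract_response(generated_text: str) -> str:
    """Return the text after the last '@@ Response' marker, i.e. the model's answer."""
    return generated_text.rsplit("@@ Response", 1)[-1].strip()

sample = "@@ Instruction\nPrint hello.\n\n@@ Response\nprint('hello')"
print(extract_response(sample))  # prints: print('hello')
```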
## Technical Details
Refer to our GitHub repo: [ise-uiuc/magicoder](https://github.com/ise-uiuc/magicoder/).
## Citation
```bibtex
@misc{magicoder,
title={Magicoder: Source Code Is All You Need},
author={Yuxiang Wei and Zhe Wang and Jiawei Liu and Yifeng Ding and Lingming Zhang},
year={2023},
eprint={2312.02120},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Acknowledgements
* [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder): Evol-Instruct
* [DeepSeek-Coder](https://github.com/deepseek-ai/DeepSeek-Coder): Base model for Magicoder-DS
* [CodeLlama](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/): Base model for Magicoder-CL
* [StarCoder](https://arxiv.org/abs/2305.06161): Data decontamination
## Important Note
Magicoder models are trained on the synthetic data generated by OpenAI models. Please pay attention to OpenAI's [terms of use](https://openai.com/policies/terms-of-use) when using the models and the datasets. Magicoders will not compete with OpenAI's commercial products.