---
tags:
- finetuned
- quantized
- 4-bit
- AWQ
- transformers
- pytorch
- mistral
- instruct
- text-generation
- conversational
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- finetune
- chatml
- DPO
model-index:
- name: Gecko-7B-v0.1-DPO
results: []
license: apache-2.0
base_model: NeuralNovel/Gecko-7B-v0.1-DPO
datasets:
- Intel/orca_dpo_pairs
language:
- en
quantized_by: Suparious
pipeline_tag: text-generation
model_creator: NeuralNovel
model_name: Gecko 7B 0.1 DPO
inference: false
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
---
# NeuralNovel/Gecko-7B-v0.1-DPO
- Model creator: [NeuralNovel](https://huggingface.co/NeuralNovel)
- Original model: [Gecko-7B-v0.1-DPO](https://huggingface.co/NeuralNovel/Gecko-7B-v0.1-DPO)

## Model Summary
Gecko-7B-v0.1-DPO is designed to generate instructive and narrative text, with a focus on mathematics and numeracy. It is a full-parameter fine-tune (FFT) of Mistral-7B-Instruct-v0.2, released under the Apache 2.0 license: you may download and use this model for research, training, and commercial purposes, including commercial deployment.
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
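If the install succeeded, the AutoAWQ import used in the example below should resolve and CUDA should be visible to PyTorch. A quick, optional sanity check:

```bash
python -c "from awq import AutoAWQForCausalLM; import torch; print(torch.cuda.is_available())"
```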
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/Gecko-7B-v0.1-DPO-AWQ"
system_message = "You are Senzu, incarnated as a powerful AI."

# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. " \
         "You walk one mile south, one mile west and one mile north. " \
         "You end up exactly where you started. Where are you?"

tokens = tokenizer(prompt_template.format(system_message=system_message,
                                          prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```
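The `TextStreamer` prints tokens to stdout as they arrive; if you also want the finished string (for logging or post-processing), you can decode the returned tensor. A minimal sketch:

```python
# generation_output holds the prompt tokens followed by the newly generated ones
response = tokenizer.decode(generation_output[0], skip_special_tokens=True)
print(response)
```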
### About AWQ
AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows with NVIDIA GPUs only. macOS users should use GGUF models instead.
It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later supports all model types (see the sketch after this list)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
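As a minimal sketch of the vLLM route (assuming vLLM 0.2.2+ is installed with AWQ support; the prompt and sampling settings here are illustrative, not tuned):

```python
from vllm import LLM, SamplingParams

# quantization="awq" tells vLLM to load the AWQ checkpoint with its AWQ kernels
llm = LLM(model="solidrust/Gecko-7B-v0.1-DPO-AWQ", quantization="awq")

# Illustrative sampling settings
params = SamplingParams(temperature=0.7, max_tokens=256)

# ChatML-formatted prompt, matching the template below
prompt = ("<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
          "<|im_start|>user\nWhat is 17 * 23?<|im_end|>\n"
          "<|im_start|>assistant\n")

outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```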
## Prompt template: ChatML
```plaintext
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
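Because the tokenizer ships with a ChatML chat template, you can also build this prompt programmatically rather than formatting the string by hand. A minimal sketch, assuming the tokenizer's `chat_template` is set (as it typically is for ChatML fine-tunes):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("solidrust/Gecko-7B-v0.1-DPO-AWQ")

messages = [
    {"role": "system", "content": "You are Senzu, incarnated as a powerful AI."},
    {"role": "user", "content": "Where are you?"},
]

# add_generation_prompt=True appends the trailing "<|im_start|>assistant" turn
prompt = tokenizer.apply_chat_template(messages,
                                       tokenize=False,
                                       add_generation_prompt=True)
print(prompt)
```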