|
--- |
|
license: apache-2.0 |
|
pipeline_tag: text-generation |
|
tags: |
|
- finetuned |
|
inference: false |
|
base_model: mistralai/Mistral-7B-Instruct-v0.2 |
|
model_creator: Mistral AI
|
model_name: Mistral 7B Instruct v0.2 |
|
model_type: mistral |
|
prompt_template: '<s>[INST] {prompt} [/INST] |
|
' |
|
quantized_by: wenqiglantz |
|
--- |
|
|
|
# Mistral 7B Instruct v0.2 - GGUF |
|
|
|
This is a quantized model for `mistralai/Mistral-7B-Instruct-v0.2`. Two quantization methods were used:

- Q5_K_M: 5-bit, larger file with very low quality loss (recommended).

- Q4_K_M: 4-bit, smaller file with balanced quality (recommended).
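For reference, quants like these are typically produced with llama.cpp's conversion and quantization tools. The sketch below shows one plausible workflow; the script name, binary name, and local paths are assumptions based on a current llama.cpp checkout, and the original checkpoint is assumed to be downloaded locally.

```shell
# Hypothetical workflow (assumes a llama.cpp checkout and a locally
# downloaded copy of mistralai/Mistral-7B-Instruct-v0.2).

# 1. Convert the Hugging Face checkpoint to a float16 GGUF file.
python convert_hf_to_gguf.py ./Mistral-7B-Instruct-v0.2 \
  --outfile mistral-7b-instruct-v0.2.fp16.gguf

# 2. Quantize the fp16 GGUF to each method.
./llama-quantize mistral-7b-instruct-v0.2.fp16.gguf \
  mistral-7b-instruct-v0.2.Q5_K_M.gguf Q5_K_M
./llama-quantize mistral-7b-instruct-v0.2.fp16.gguf \
  mistral-7b-instruct-v0.2.Q4_K_M.gguf Q4_K_M
```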
|
|
|
<!-- description start --> |
|
## Description |
|
|
|
This repo contains GGUF format model files for [Mistral AI's Mistral 7B Instruct v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).
|
|
|
This model was quantized in Google Colab; the notebook is available [here](https://colab.research.google.com/drive/17zT5sLs_f3M404OWhEcwtnlmMKFz3FM7?usp=sharing).
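To run the GGUF files locally, a sketch along the following lines should work with the `llama-cpp-python` package. The model file name and generation settings are assumptions (adjust to the quant you downloaded); the prompt helper simply applies the template from this card's front matter, and the inference call imports `llama_cpp` lazily so the helper works on its own.

```python
# Minimal usage sketch. The model path below is an assumption -- point it
# at whichever quantized file you downloaded from this repo.

PROMPT_TEMPLATE = "<s>[INST] {prompt} [/INST]"


def build_prompt(user_message: str) -> str:
    """Wrap a user message in the Mistral [INST] instruct format."""
    return PROMPT_TEMPLATE.format(prompt=user_message)


def generate(user_message: str,
             model_path: str = "mistral-7b-instruct-v0.2.Q4_K_M.gguf") -> str:
    # Imported lazily so build_prompt works without llama-cpp-python installed.
    from llama_cpp import Llama

    llm = Llama(model_path=model_path, n_ctx=4096)
    out = llm(build_prompt(user_message), max_tokens=256)
    return out["choices"][0]["text"]


if __name__ == "__main__":
    print(build_prompt("Why is the sky blue?"))
```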
|
|