---
title: "mistral-ft-optimized-1218 Quantized in GGUF"
tags:
- GGUF
language: en
---
 | |
# Tsunemoto GGUFs of mistral-ft-optimized-1218
This is a GGUF quantization of mistral-ft-optimized-1218.
## Original Repo Link:
[Original Repository](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
## Original Model Card:
---
This model is intended to be a strong base suitable for downstream fine-tuning on a variety of tasks. Based on our internal evaluations, we believe it's one of the strongest models for most downstream tasks. You can read more about our development and evaluation process [here](https://openpipe.ai/blog/mistral-7b-fine-tune-optimized).