---
base_model: acrastt/Marx-3B-V2
datasets:
- totally-not-an-llm/EverythingLM-data-V2-sharegpt
inference: false
language:
- en
library_name: transformers
license: apache-2.0
model_creator: acrastt
model_name: Marx-3B-V2
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# afrideva/Marx-3B-V2-GGUF
Quantized GGUF model files for [Marx-3B-V2](https://huggingface.co/acrastt/Marx-3B-V2) by [acrastt](https://huggingface.co/acrastt).
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [marx-3b-v2.fp16.gguf](https://huggingface.co/afrideva/Marx-3B-V2-GGUF/resolve/main/marx-3b-v2.fp16.gguf) | fp16 | 6.85 GB |
| [marx-3b-v2.q2_k.gguf](https://huggingface.co/afrideva/Marx-3B-V2-GGUF/resolve/main/marx-3b-v2.q2_k.gguf) | q2_k | 2.15 GB |
| [marx-3b-v2.q3_k_m.gguf](https://huggingface.co/afrideva/Marx-3B-V2-GGUF/resolve/main/marx-3b-v2.q3_k_m.gguf) | q3_k_m | 2.27 GB |
| [marx-3b-v2.q4_k_m.gguf](https://huggingface.co/afrideva/Marx-3B-V2-GGUF/resolve/main/marx-3b-v2.q4_k_m.gguf) | q4_k_m | 2.58 GB |
| [marx-3b-v2.q5_k_m.gguf](https://huggingface.co/afrideva/Marx-3B-V2-GGUF/resolve/main/marx-3b-v2.q5_k_m.gguf) | q5_k_m | 2.76 GB |
| [marx-3b-v2.q6_k.gguf](https://huggingface.co/afrideva/Marx-3B-V2-GGUF/resolve/main/marx-3b-v2.q6_k.gguf) | q6_k | 3.64 GB |
| [marx-3b-v2.q8_0.gguf](https://huggingface.co/afrideva/Marx-3B-V2-GGUF/resolve/main/marx-3b-v2.q8_0.gguf) | q8_0 | 3.64 GB |
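Each file in the table resolves to a direct download URL of the form `https://huggingface.co/<repo>/resolve/main/<filename>`. As a minimal sketch (the `quant_url` helper is illustrative, not part of any library), the URL for any quant level can be built like this:

```python
# Sketch: build the direct download URL for a quant file in this repo.
# The repo id and filename pattern follow the table above; quant_url is
# a hypothetical helper, not an official API.

REPO_ID = "afrideva/Marx-3B-V2-GGUF"

def quant_url(quant: str, repo_id: str = REPO_ID) -> str:
    """Return the Hugging Face 'resolve' URL for a given quant level."""
    filename = f"marx-3b-v2.{quant}.gguf"
    return f"https://huggingface.co/{repo_id}/resolve/main/{filename}"

print(quant_url("q4_k_m"))
```

The same pattern works for any of the quant levels listed above (`q2_k`, `q3_k_m`, `q5_k_m`, `q6_k`, `q8_0`, or `fp16`).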
## Original Model Card:
<a href="https://www.buymeacoffee.com/acrastt" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
This is [OpenLLaMA 3B V2](https://huggingface.co/openlm-research/open_llama_3b_v2) fine-tuned on [EverythingLM Data V2 (ShareGPT format)](https://huggingface.co/datasets/totally-not-an-llm/EverythingLM-data-V2-sharegpt) for 2 epochs.
Prompt template:
```
### HUMAN:
{prompt}
### RESPONSE:
<leave a newline for the model to answer>
```
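Applying the template programmatically is just string formatting; a minimal sketch (the `build_prompt` helper is illustrative, not part of any library):

```python
# Sketch: wrap a user message in the HUMAN/RESPONSE template shown above.
# build_prompt is a hypothetical helper, not an official API; the trailing
# newline after "### RESPONSE:" leaves room for the model to answer.

def build_prompt(user_message: str) -> str:
    """Format a single-turn prompt in the model's expected template."""
    return f"### HUMAN:\n{user_message}\n### RESPONSE:\n"

print(build_prompt("What is GGUF?"))
```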
A q4_1 GGML quant is available [here](https://huggingface.co/NikolayKozloff/Marx-3B-V2/).
A q4_1 GGUF quant is available [here](https://huggingface.co/NikolayKozloff/Marx-3B-V2-GGUF/).