|
--- |
|
library_name: transformers |
|
tags: |
|
- mistral |
|
- quantized |
|
- text-generation-inference |
|
pipeline_tag: text-generation |
|
inference: false |
|
--- |
|
**GGUF quantizations for [ChaoticNeutrals/Prima-LelantaclesV5-7b](https://huggingface.co/ChaoticNeutrals/Prima-LelantaclesV5-7b).** |
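A minimal sketch of loading one of these quants locally with the `llama-cpp-python` package; the file name, context size, and prompt format below are illustrative assumptions, not part of this repo:

```python
# Minimal sketch: running a GGUF quant with llama-cpp-python.
# Filename, context size, and prompt format are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="Prima-LelantaclesV5-7b.Q4_K_M.gguf",  # assumed name of a downloaded quant
    n_ctx=8192,                                       # adjust to your available memory
)

output = llm(
    "### Instruction:\nWrite a short greeting.\n\n### Response:\n",  # illustrative prompt format
    max_tokens=128,
    stop=["### Instruction:"],
)
print(output["choices"][0]["text"])
```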
|
|
|
*If you want any specific quantization to be added, feel free to ask.* |
|
|
|
All credits belong to the respective creators. |
|
|
|
Quantization pipeline: `Base ⇢ GGUF(F16) ⇢ GGUF(Quants)`
|
|
|
Quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp/) release b2222. **No importance matrix (`--imatrix`) was used.**
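For reference, a hedged sketch of that pipeline driven from Python with the b2222-era llama.cpp tooling; script names, paths, and the Q4_K_M target are assumptions to adapt to your own checkout:

```python
# Sketch of the Base ⇢ GGUF(F16) ⇢ GGUF(Quants) pipeline with llama.cpp b2222 tooling.
# Paths, filenames, and the Q4_K_M target are illustrative.
import subprocess

BASE_MODEL_DIR = "Prima-LelantaclesV5-7b"          # local clone of the HF repo (assumption)
F16_GGUF = "Prima-LelantaclesV5-7b.F16.gguf"
QUANT_GGUF = "Prima-LelantaclesV5-7b.Q4_K_M.gguf"

# Step 1: convert the HF checkpoint to an F16 GGUF.
subprocess.run(
    ["python", "llama.cpp/convert.py", BASE_MODEL_DIR,
     "--outtype", "f16", "--outfile", F16_GGUF],
    check=True,
)

# Step 2: quantize the F16 GGUF (no --imatrix, matching this repo).
subprocess.run(
    ["llama.cpp/quantize", F16_GGUF, QUANT_GGUF, "Q4_K_M"],
    check=True,
)
```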
|
|
|
# Original model information: |
|
|
|
 |
|
|
|
 |
|
|
|
ST (SillyTavern) presets: https://huggingface.co/ChaoticNeutrals/Prima-LelantaclesV5-7b/tree/main/ST%20presets
|
|
|
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [Test157t/Prima-LelantaclesV4-7b-16k](https://huggingface.co/Test157t/Prima-LelantaclesV4-7b-16k) as the base model.
|
|
|
The following models were included in the merge: |
|
* [Test157t/Pasta-Lake-7b](https://huggingface.co/Test157t/Pasta-Lake-7b)
* [Test157t/Prima-LelantaclesV4-7b-16k](https://huggingface.co/Test157t/Prima-LelantaclesV4-7b-16k)
|
|
|
### Configuration |
|
|
|
The following YAML configuration was used to produce this model: |
|
|
|
```yaml
merge_method: dare_ties
base_model: Test157t/Prima-LelantaclesV4-7b-16k
parameters:
  normalize: true
models:
  - model: Test157t/Pasta-Lake-7b
    parameters:
      weight: 1
  - model: Test157t/Prima-LelantaclesV4-7b-16k
    parameters:
      weight: 1
dtype: float16
```
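To reproduce the merge, a minimal sketch using mergekit's `mergekit-yaml` CLI; it assumes the YAML above is saved as `config.yaml`, mergekit is installed, and the output directory name is illustrative:

```python
# Minimal sketch: running the merge config above with mergekit's CLI via subprocess.
# Assumes `pip install mergekit` and that the YAML above is saved as config.yaml.
import subprocess

subprocess.run(
    ["mergekit-yaml", "config.yaml", "./Prima-LelantaclesV5-merged"],  # output dir is illustrative
    check=True,
)
```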