---
quantized_by: k4yt3x
base_model:
- k4yt3x/Arynia-LLaMA-70B
library_name: transformers
tags:
- mergekit
- merge
license: llama3.3
---
This repo contains the GGUF imatrix quants for [k4yt3x/Arynia-LLaMA-70B](https://huggingface.co/k4yt3x/Arynia-LLaMA-70B).
The imatrix was computed from [bartowski](https://huggingface.co/bartowski)'s [calibration_datav3.txt](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8).
The following precisions are available:
- [IQ4_XS](https://huggingface.co/k4yt3x/Arynia-LLaMA-70B-GGUF/resolve/main/Arynia-LLaMA-70B-IQ4_XS.gguf) (37.9 GB)
- [Q4_K_M](https://huggingface.co/k4yt3x/Arynia-LLaMA-70B-GGUF/resolve/main/Arynia-LLaMA-70B-Q4_K_M.gguf) (42.5 GB)
- [Q5_K_M](https://huggingface.co/k4yt3x/Arynia-LLaMA-70B-GGUF/resolve/main/Arynia-LLaMA-70B-Q5_K_M.gguf) (49.9 GB)
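One way to fetch and run one of the quants above is with `huggingface-cli` and llama.cpp's `llama-cli`. This is a usage sketch, not an official recommendation from this repo; adjust `-ngl` (number of layers offloaded to the GPU) and the context size to fit your hardware:

```shell
# Download a single quant (IQ4_XS shown here) into the current directory
huggingface-cli download k4yt3x/Arynia-LLaMA-70B-GGUF \
    Arynia-LLaMA-70B-IQ4_XS.gguf --local-dir .

# Run it with llama.cpp (llama-cli), offloading layers to the GPU
llama-cli -m Arynia-LLaMA-70B-IQ4_XS.gguf -ngl 99 -c 8192 \
    -p "Hello, who are you?"
```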