---
language:
- en
- ko
library_name: transformers
license: cc-by-nc-4.0
pipeline_tag: text-generation
developers: Kanana LLM
training_regime: bf16 mixed precision
base_model:
- huihui-ai/kanana-nano-2.1b-instruct-abliterated
tags:
- abliterated
- uncensored
---
# Melvin56/kanana-nano-2.1b-instruct-abliterated-GGUF
Original Model : [huihui-ai/kanana-nano-2.1b-instruct-abliterated](https://huggingface.co/huihui-ai/kanana-nano-2.1b-instruct-abliterated)
All quants are made using an imatrix (importance matrix) calibration dataset.
| Model | Size (GB) |
|:-------------------------------------------------|:-------------:|
| Q2_K_S | 0.914 |
| Q2_K | 0.931 |
| Q3_K_M | 1.138 |
| Q4_K_M | 1.385 |
| Q5_K_M | 1.568 |
| Q6_K | 1.826 |
| Q8_0 | 2.223 |
| F16 | 4.177 |
| F32 | 8.342 |
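
As a minimal sketch of how one of the quants above might be downloaded and run with `llama-cpp-python` (the GGUF filename below is an assumption; check this repository's file list for the exact names):

```python
# Sketch: fetch a quant from this repo and run a chat completion locally.
# The filename is assumed for illustration; verify it against the repo's files.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="Melvin56/kanana-nano-2.1b-instruct-abliterated-GGUF",
    filename="kanana-nano-2.1b-instruct-abliterated-Q4_K_M.gguf",  # assumed name
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers if a GPU backend is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Q4_K_M (~1.39 GB per the table above) is a common middle ground: the smaller Q2/Q3 quants trade quality for memory, while Q6_K and Q8_0 stay closer to the F16 weights.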
| | CPU (AVX2) | CPU (ARM NEON) | Metal | cuBLAS | rocBLAS | SYCL | CLBlast | Vulkan | Kompute |
| :------------ | :---------: | :------------: | :---: | :----: | :-----: | :---: | :------: | :----: | :------: |
| K-quants | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ 🐢⁵ | ✅ 🐢⁵ | ❌ |
| I-quants | ✅ 🐢⁴ | ✅ 🐢⁴ | ✅ 🐢⁴ | ✅ | ✅ | Partial¹ | ❌ | ❌ | ❌ |
```
✅: feature works
🚫: feature does not work
❓: unknown, please contribute if you can test it yourself
🐢: feature is slow
¹: IQ3_S and IQ1_S, see #5886
²: Only with -ngl 0
³: Inference is 50% slower
⁴: Slower than K-quants of comparable size
⁵: Slower than cuBLAS/rocBLAS on similar cards
⁶: Only q8_0 and iq4_nl
```
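
As a small sketch tied to the backend table above: whether GPU offload works depends on which backend your `llama-cpp-python` wheel was built against (CUDA, Metal, Vulkan, ...). The low-level binding `llama_supports_gpu_offload()` used below is assumed to be present in your installed version.

```python
# Sketch: pick n_gpu_layers based on whether the installed build supports
# GPU offload at all; otherwise run fully on the CPU backend.
import llama_cpp
from llama_cpp import Llama

gpu_ok = llama_cpp.llama_supports_gpu_offload()  # assumed available in this build
print(f"GPU offload available: {gpu_ok}")

llm = Llama(
    model_path="kanana-nano-2.1b-instruct-abliterated-Q4_K_M.gguf",  # assumed name
    n_gpu_layers=-1 if gpu_ok else 0,  # offload everything only when supported
)
```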