---
license: apache-2.0
datasets:
- lars1234/story_writing_benchmark
base_model:
- lars1234/Mistral-Small-24B-Instruct-2501-writer
---
# Mistral-Small-24B-Instruct-2501-writer-AWQ
This model is a 4-bit AWQ-quantized version of [Mistral-Small-24B-Instruct-2501-writer](https://huggingface.co/lars1234/Mistral-Small-24B-Instruct-2501-writer).
- **Quantization Method**: AWQ (Activation-aware Weight Quantization)
- **Quantization Configuration**:
  - Bit Width: 4-bit
  - Group Size: 128
  - Zero Point: Enabled
  - Version: GEMM
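
The configuration listed above can be expressed as an AutoAWQ-style `quant_config` dictionary. This is an illustrative sketch assuming the model was quantized with the AutoAWQ library, whose key-name convention is shown here; it is not a statement of exactly how this checkpoint was produced:

```python
# AWQ quantization settings matching the card above
# (AutoAWQ-style key names; illustrative, assumed convention).
quant_config = {
    "w_bit": 4,           # Bit Width: 4-bit weights
    "q_group_size": 128,  # Group Size: 128 weights share one scale
    "zero_point": True,   # Zero Point: Enabled (asymmetric quantization)
    "version": "GEMM",    # Kernel variant: GEMM
}
```

A smaller group size generally tracks the original weights more closely at the cost of extra scale/zero-point storage; 128 is a common default trade-off.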