---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-7B-Instruct-1M
---
Quantized from [Qwen/Qwen2.5-7B-Instruct-1M](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-1M) to 4 bits (GEMM kernel variant).
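A minimal usage sketch with `transformers`, assuming this is an AWQ-style 4-bit GEMM checkpoint loadable via `AutoModelForCausalLM` (the repo id below is a placeholder, not the actual name of this repository):

```python
# Sketch: loading a 4-bit GEMM-quantized checkpoint with transformers.
# Assumes the autoawq and accelerate packages are installed; the repo id
# "YOUR-ORG/Qwen2.5-7B-Instruct-1M-AWQ" is a hypothetical placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "YOUR-ORG/Qwen2.5-7B-Instruct-1M-AWQ"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat prompt with the model's chat template and generate a reply.
messages = [{"role": "user", "content": "Give me a one-sentence summary of AWQ."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=64)
out_text = tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
print(out_text)
```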