FP8 dynamic activation quantization of Qwen/Qwen2.5-VL-72B-Instruct, performed with llm-compressor.
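For reference, a minimal sketch of how such an FP8 dynamic quantization run can look with llm-compressor. This is not the exact recipe used for this checkpoint: the ignore patterns for the vision tower, the output directory, and the import paths (which vary across llm-compressor versions) are assumptions.

```python
# Minimal sketch (assumed recipe, not the exact one used for this checkpoint).
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "Qwen/Qwen2.5-VL-72B-Instruct"

# Load the base model and its processor.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(MODEL_ID)

# FP8_DYNAMIC: FP8 weights with dynamic per-token FP8 activation scales.
# Skipping lm_head and the vision tower is a common choice (an assumption here).
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",
    ignore=["lm_head", "re:visual.*"],
)

# Apply the recipe; FP8_DYNAMIC needs no calibration data.
oneshot(model=model, recipe=recipe)

# Save the quantized model (and processor) in compressed-tensors format.
SAVE_DIR = MODEL_ID.split("/")[-1] + "-FP8-Dynamic"
model.save_pretrained(SAVE_DIR)
processor.save_pretrained(SAVE_DIR)
```

The resulting checkpoint can then be served with an FP8-capable runtime such as vLLM.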
Model tree for ig1/Qwen2.5-VL-72B-Instruct-FP8-Dynamic
- Base model: Qwen/Qwen2.5-VL-72B-Instruct