Looking Forward to models in GPTQModel formats like W4A16 and W8A16

#10
opened by X-SZM

While quantized models in formats such as GGUF and AWQ are already widely available in the community, models in GPTQModel formats like W4A16 and W8A16 remain notably scarce. These formats offer strong quantization quality with comparatively low precision loss. Because quantizing a model with GPTQModel requires data calibration, a process that is difficult for general users to carry out, it would be valuable for organized community groups to share models in W4A16 and W8A16 formats. Such contributions would be greatly appreciated.
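
For reference, here is a minimal sketch of what such a calibration-based quantization run looks like with GPTQModel. The model ID, output path, and calibration dataset are illustrative placeholders, and the exact API surface may differ between GPTQModel versions:

```python
from datasets import load_dataset
from gptqmodel import GPTQModel, QuantizeConfig

# Placeholder model and output path -- substitute the model you want to quantize.
model_id = "meta-llama/Llama-3.2-1B-Instruct"
quant_path = "Llama-3.2-1B-Instruct-gptqmodel-W4A16"

# Calibration data: a small slice of raw text that the quantizer runs through
# the model to measure activation statistics; ~1k samples is a common choice.
calibration_dataset = load_dataset(
    "allenai/c4",
    data_files="en/c4-train.00001-of-01024.json.gz",
    split="train",
).select(range(1024))["text"]

# bits=4 with 16-bit activations corresponds to W4A16; use bits=8 for W8A16.
quant_config = QuantizeConfig(bits=4, group_size=128)

model = GPTQModel.load(model_id, quant_config)
model.quantize(calibration_dataset, batch_size=2)  # calibration pass
model.save(quant_path)
```

This calibration step is exactly the barrier mentioned above: it needs GPU time, a suitable dataset, and some tuning, which is why pre-quantized community uploads would help.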

X-SZM changed discussion title from "Looking Forward to models in llm-compressor formats like W4A16 and W8A16" to "Looking Forward to models in GPTQModel formats like W4A16 and W8A16"
