cmarkea/Meta-Llama-3.1-70B-Instruct-4bit
Text Generation
A large model quantized to 4 bits, with post-quantization performance very close to the original model's, allowing it to run on reasonable infrastructure.
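As a minimal sketch, the quantized checkpoint could be loaded like any other Hugging Face model; the exact backend (e.g. bitsandbytes, GPTQ, or AWQ) and the generation parameters below are assumptions, and extra packages may be required depending on the quantization format.

```python
# Illustrative loading sketch (assumed usage, not an official example).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cmarkea/Meta-Llama-3.1-70B-Instruct-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",         # spread the 4-bit weights across available devices
    torch_dtype=torch.float16, # compute dtype; the quantization config ships with the model
)

# Simple chat-style generation with the instruct model.
messages = [{"role": "user", "content": "Summarize what 4-bit quantization does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```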