Llama.cpp Models
LLM models in GGUF format for inference via the llama.cpp project.

MaziyarPanahi/Mixtral-8x22B-Instruct-v0.1-GGUF
  Text Generation • 141B • Updated Apr 18, 2024 • 2.52k • 33

QuantFactory/Meta-Llama-3-8B-Instruct-GGUF
  Text Generation • 8B • Updated Sep 5, 2024 • 23k • 323

TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF
  47B • Updated Dec 14, 2023 • 32.9k • 649