Hugging Face Hub supports all file formats, but has built-in features for the GGUF format, a binary format optimized for quick loading and saving of models, which makes it highly efficient for inference. GGUF is designed for use with GGML and other executors. GGUF was developed by @ggerganov, who is also the developer of llama.cpp, a popular C/C++ LLM inference framework. Models initially developed in frameworks like PyTorch can be converted to GGUF format for use with those engines.
Unlike tensor-only file formats such as safetensors (which is also a recommended model format for the Hub), GGUF encodes both the tensors and a standardized set of metadata.
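To make that layout concrete, here is a minimal sketch in JavaScript that reads just the fixed-size header of a GGUF file. It assumes a local file named model.gguf (a placeholder) and the GGUF v2+ header layout: a 4-byte magic "GGUF", a little-endian uint32 version, then uint64 tensor and metadata key/value counts.

```javascript
// Minimal sketch: read the fixed GGUF header fields from a local file.
// "model.gguf" is a placeholder path, not a file referenced by this doc.
import { readFileSync } from "node:fs";

const buf = readFileSync("model.gguf");
const view = new DataView(buf.buffer, buf.byteOffset, buf.byteLength);

// 4-byte magic: the ASCII string "GGUF"
const magic = String.fromCharCode(buf[0], buf[1], buf[2], buf[3]);
if (magic !== "GGUF") throw new Error("Not a GGUF file");

const version = view.getUint32(4, true);             // format version
const tensorCount = view.getBigUint64(8, true);      // number of tensors
const metadataKvCount = view.getBigUint64(16, true); // number of metadata key/value pairs

console.log({ version, tensorCount, metadataKvCount });
```

Everything after these counts is the metadata key/value section itself, followed by the tensor infos and the tensor data.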
You can browse all models with GGUF files by filtering for the GGUF tag: hf.co/models?library=gguf.
For example, you can check out TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF to see GGUF files in action.
The Hub has a viewer for GGUF files that lets users inspect the metadata and tensor info (name, shape, precision). The viewer is available on the model page (example) and the files page (example).
Llama.cpp has a helper script, `scripts/hf.sh`, that makes it easy to download GGUF files from the Hugging Face Hub. You can use it with a repo and file name, or with a URL to the GGUF file entry on the Hub:
```bash
# Download by repo and file name
./main \
  -m $(./scripts/hf.sh --repo TheBloke/Mixtral-8x7B-v0.1-GGUF --file mixtral-8x7b-v0.1.Q4_K_M.gguf) \
  -p "I believe the meaning of life is" -n 64

# Download by URL (positional argument)
./main \
  -m $(./scripts/hf.sh https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q4_K_M.gguf) \
  -p "I believe the meaning of life is" -n 64

# Download by URL (explicit --url flag)
./main \
  -m $(./scripts/hf.sh --url https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q4_K_M.gguf) \
  -p "I believe the meaning of life is" -n 64
```
Find more information here.
GPT4All is an open-source LLM application developed by Nomic. Version 2.7.2 introduces a brand-new, experimental feature called Model Discovery.
Model Discovery provides a built-in way to search for and download GGUF models from the Hub. To get started, open GPT4All and click Download Models. From here, you can use the search bar to find a model.
After you have selected and downloaded a model, you can go to Settings and provide an appropriate prompt template in the GPT4All format (%1 and %2 placeholders), for example:
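A chat template in this format might look like the following, where %1 stands in for the user's message and %2 for the model's response. The surrounding Human/Assistant tags are an illustrative assumption; use the template recommended for your specific model.

```
### Human:
%1

### Assistant:
%2
```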
Then from the main page, you can select the model from the list of installed models and start a conversation.
We’ve also created a JavaScript GGUF parser that works on remotely hosted files (e.g. Hugging Face Hub).
```bash
npm install @huggingface/gguf
```
```javascript
import { gguf } from "@huggingface/gguf";

// remote GGUF file from https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF
const URL_LLAMA = "https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/resolve/191239b/llama-2-7b-chat.Q2_K.gguf";

// parse the metadata and tensor info from the remote file
const { metadata, tensorInfos } = await gguf(URL_LLAMA);
```
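From there, you can inspect the standardized metadata and the per-tensor info. A small sketch; the keys shown are standard GGUF metadata fields, and the exact values depend on the model:

```javascript
// Standardized metadata keys defined by the GGUF spec
console.log(metadata["general.architecture"]); // e.g. "llama"
console.log(metadata["general.name"]);

// One entry per tensor, including its name and shape
console.log(tensorInfos[0]);
```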
Find more information here.