To upload models to the Hub, you’ll need to create an account at Hugging Face. Models on the Hub are Git-based repositories, which give you versioning, branches, discoverability and sharing features, integration with dozens of libraries, and more! You have control over what you want to upload to your repository, which could include checkpoints, configs, and any other files.
You can link repositories with an individual user, such as osanseviero/fashion_brands_patterns, or with an organization, such as facebook/bart-large-xsum. Organizations can collect models related to a company, community, or library! If you choose an organization, the model will be featured on the organization’s page, and every member of the organization will have the ability to contribute to the repository. You can create a new organization here.
There are several ways to upload models to the Hub, described below.
These approaches give your models from_pretrained, push_to_hub, and automated download metrics capabilities, just like models in the Transformers, Diffusers, and Timm libraries. Once your model is uploaded, we suggest adding a Model Card to your repo to document your model.
To create a brand new model repository, visit huggingface.co/new. Then follow these steps:
Afterwards, click Commit changes to upload your model to the Hub!
Inspect files and history
You can check your repository to see all the recently added files!
The UI allows you to explore the model files and commits and to see the diff introduced by each commit:
You can add metadata to your model card. For example, you can specify the library the model was trained with (transformers, spaCy, etc.). Read more about model tags here.
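Model card metadata lives in a YAML block at the top of the model card's README.md. A minimal sketch (the values below are illustrative):

```yaml
---
license: mit
library_name: transformers
tags:
- text-classification
---
```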
Any repository that contains TensorBoard traces (filenames that contain tfevents) is categorized with the TensorBoard tag. As a convention, we suggest that you save traces under the runs/ subfolder. The “Training metrics” tab then makes it easy to review charts of the logged variables, like the loss or the accuracy.

Models trained with 🤗 Transformers will generate TensorBoard traces by default if tensorboard is installed.
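The categorization above amounts to a filename check, which can be sketched as follows (illustrative only, not the Hub's actual code; the file names are made up):

```python
def has_tensorboard_traces(filenames):
    # a repo is tagged TensorBoard when any file name contains "tfevents"
    return any("tfevents" in name for name in filenames)

# traces saved under the suggested runs/ subfolder
repo_files = ["config.json", "runs/events.out.tfevents.1700000000.host.0"]
print(has_tensorboard_traces(repo_files))  # True
```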
First check if your model is from a library that has built-in support to push to/load from the Hub, like Transformers, Diffusers, Timm, Asteroid, etc.: https://huggingface.co/docs/hub/models-libraries. Below we’ll show how easy this is for a library like Transformers:
from transformers import BertConfig, BertModel
config = BertConfig()
model = BertModel(config)
model.push_to_hub("nielsr/my-awesome-bert-model")
# reload
model = BertModel.from_pretrained("nielsr/my-awesome-bert-model")
In case your model is a (custom) PyTorch model, you can leverage the PyTorchModelHubMixin class available in the huggingface_hub Python library. It is a minimal class which adds from_pretrained and push_to_hub capabilities to any nn.Module, along with download metrics.

Here is how to use it (assuming you have run pip install huggingface_hub):
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin
class MyModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, config: dict):
        super().__init__()
        self.param = nn.Parameter(torch.rand(config["num_channels"], config["hidden_size"]))
        self.linear = nn.Linear(config["hidden_size"], config["num_classes"])

    def forward(self, x):
        return self.linear(x + self.param)
# create model
config = {"num_channels": 3, "hidden_size": 32, "num_classes": 10}
model = MyModel(config=config)
# save locally
model.save_pretrained("my-awesome-model", config=config)
# push to the hub
model.push_to_hub("my-awesome-model", config=config)
# reload
model = MyModel.from_pretrained("username/my-awesome-model")
As can be seen, the only thing required is to define all hyperparameters regarding the model architecture (such as hidden size, number of classes, dropout probability, etc.) in a Python dictionary, often called the config. Next, you can define a class which takes the config as a keyword argument in its __init__.
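The config round-trip that saving and reloading perform can be sketched with plain JSON (a minimal illustration, not the mixin's actual implementation; the hyperparameter values are hypothetical):

```python
import json

# hyperparameters of the (hypothetical) architecture
config = {"num_channels": 3, "hidden_size": 32, "num_classes": 10}

# saving serializes the config next to the weights as config.json
serialized = json.dumps(config)

# loading parses config.json and passes it back to __init__
reloaded = json.loads(serialized)
assert reloaded == config  # the same hyperparameters rebuild the same model
```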
This comes with automated download metrics, meaning that you’ll be able to see how many times the model is downloaded, the same way they are available for models integrated natively in the Transformers, Diffusers, or Timm libraries. With this mixin class, each separate checkpoint is stored on the Hub in a single repository consisting of 2 files:

- a pytorch_model.bin or model.safetensors file containing the weights
- a config.json file, which is a serialized version of the model configuration

This class is used for counting download metrics: every time a user calls from_pretrained to load a config.json, the count goes up by one. See this guide regarding automated download metrics.

It’s recommended to add a model card to each checkpoint so that people can read what the model is about, have a link to the paper, etc.
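A checkpoint repository produced this way therefore looks like the following (the repo name is hypothetical):

```
my-awesome-model/
├── config.json         # serialized model configuration
└── model.safetensors   # model weights (or pytorch_model.bin)
```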
Visit the huggingface_hub’s documentation to learn more.
Alternatively, you can also upload files or folders to the Hub programmatically: https://huggingface.co/docs/huggingface_hub/guides/upload.
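Programmatic uploads go through huggingface_hub's HfApi.upload_file method. Below is a minimal sketch of a helper that pushes a list of files; the repo id and file names are hypothetical, and in real use you would pass an HfApi() instance (after pip install huggingface_hub and huggingface-cli login):

```python
# Minimal sketch of a programmatic upload helper. In real use, `api` is an
# instance of huggingface_hub.HfApi; the import is left to the caller so
# this sketch stays dependency-free.

def push_files(api, repo_id, files):
    """Upload each (local_path, path_in_repo) pair to the given model repo."""
    for local_path, repo_path in files:
        # HfApi.upload_file commits the file to the repo on the Hub
        api.upload_file(
            path_or_fileobj=local_path,
            path_in_repo=repo_path,
            repo_id=repo_id,
        )

# real use (hypothetical repo id):
# from huggingface_hub import HfApi
# push_files(HfApi(), "username/my-awesome-model",
#            [("./config.json", "config.json"),
#             ("./model.safetensors", "model.safetensors")])
```

For whole directories, HfApi also offers upload_folder, which pushes every file under a local folder in a single commit.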
Finally, since model repos are just Git repositories, you can also use Git to push your model files to the Hub. Follow the guide on Getting Started with Repositories to learn about using the git CLI to commit and push your models.
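The git flow can be sketched as follows. To keep the sketch runnable offline, it pushes to a local bare repository standing in for the Hub; for a real upload you would instead clone https://huggingface.co/username/my-awesome-model (hypothetical repo id) and push there after authenticating:

```shell
# local bare repo standing in for the Hub-side repository
git init -q --bare hub-remote.git
git clone -q "$PWD/hub-remote.git" my-awesome-model

# add your model files to the working copy
echo '{"hidden_size": 32}' > my-awesome-model/config.json

# commit and push, exactly as you would against the Hub
git -C my-awesome-model add config.json
git -C my-awesome-model -c user.name=you -c user.email=you@example.com \
    commit -q -m "Add model config"
git -C my-awesome-model push -q origin HEAD
```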