At Hugging Face, our intent is to provide the AI community with free storage space for public repositories. We bill for storage space for private repositories above a free tier (see the table below).
We continuously optimize our infrastructure to scale our storage for the coming years of growth in machine learning.
We do have mitigations in place to prevent abuse of free public storage, and in general we ask users and organizations to make sure any uploaded large model or dataset is as useful to the community as possible (as represented by numbers of likes or downloads, for instance).
| Type of account | Public storage | Private storage |
| --- | --- | --- |
| Free user or org | Best-effort* 🙏 | 100GB |
| PRO | Unlimited ✅ | 1TB + pay-as-you-go |
| Enterprise Hub | Unlimited ✅ | 1TB per seat + pay-as-you-go |
💡 Enterprise Hub includes 1TB of private storage per seat in the subscription: for example, if your organization has 40 members, then you have 40TB of included private storage.
*We aim to continue providing the AI community with free storage space for public repositories; please don’t abuse it by uploading dozens of TBs of generated anime 😁. We also ask that you consider upgrading to PRO and/or Enterprise Hub whenever possible.
Private storage above the included 1TB (or 1TB per seat) in PRO and Enterprise Hub is invoiced at $25/TB/month, in 1TB increments. See our billing doc for more details.
In parallel to storage limits at the account (user or organization) level, there are some limitations to be aware of when dealing with a large amount of data in a specific repo. Given the time it takes to stream the data, getting an upload/push to fail at the end of the process or encountering a degraded experience, be it on hf.co or when working locally, can be very annoying. In the following section, we describe our recommendations on how to best structure your large repos.
We gathered a list of tips and recommendations for structuring your repo. If you are looking for more practical tips, check out this guide on how to upload a large amount of data using the Python library.
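As a quick illustration, here is a minimal sketch of such an upload using `huggingface_hub`'s `upload_large_folder` (the repo id and folder path below are placeholders; adapt them to your own project):

```python
from huggingface_hub import HfApi

api = HfApi()  # assumes you are logged in, e.g. via `huggingface-cli login`

# Hypothetical repo id and local folder, for illustration only.
repo_id = "username/my-large-dataset"
api.create_repo(repo_id, repo_type="dataset", exist_ok=True)

# upload_large_folder splits the upload into multiple commits and can
# resume if the process is interrupted, which helps with the limits below.
api.upload_large_folder(
    repo_id=repo_id,
    repo_type="dataset",
    folder_path="./data",
)
```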
| Characteristic | Recommended | Tips |
| --- | --- | --- |
| Repo size | - | contact us for large repos (TBs of data) |
| Files per repo | <100k | merge data into fewer files |
| Entries per folder | <10k | use subdirectories in repo |
| File size | <20GB | split data into chunked files |
| Commit size | <100 files* | upload files in multiple commits |
| Commits per repo | - | upload multiple files per commit and/or squash history |
* Not relevant when using `git` CLI directly
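If your data lives in many small files, one way to follow the "fewer files" and "chunked files" recommendations above is to let the `datasets` library shard it into Parquet files of a bounded size when pushing. A minimal sketch, assuming some local JSONL files and a placeholder repo id:

```python
from datasets import load_dataset

# Hypothetical example: many local JSONL files consolidated into one dataset.
ds = load_dataset("json", data_files="data/*.jsonl", split="train")

# push_to_hub writes the dataset as Parquet shards no larger than
# max_shard_size, keeping both file count and individual file sizes in check.
ds.push_to_hub("username/my-large-dataset", max_shard_size="5GB")
```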
Please read the next section to better understand those limits and how to deal with them.
What are we talking about when we say “large uploads”, and what are their associated limitations? Large uploads can be very diverse, from repositories with a few huge files (e.g. model weights) to repositories with thousands of small files (e.g. an image dataset).
Under the hood, the Hub uses Git to version the data, which has structural implications on what you can do in your repo.
If your repo crosses some of the numbers mentioned in the previous section, we strongly encourage you to check out `git-sizer`, which has very detailed documentation about the different factors that will impact your experience. Here is a TL;DR of factors to consider:
- **Entries per folder**: use subdirectories to stay under the per-folder limit. For example, a repo with 1k folders `000/` to `999/`, each containing at most 1000 files, is already enough.
- **Commits per repo**: as your history grows, you can squash it using `huggingface_hub`'s `super_squash_history`. Be aware that this is a non-revertible operation.
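A minimal sketch of such a squash (the repo id is a placeholder; make sure you really want to discard the history before running it):

```python
from huggingface_hub import HfApi

api = HfApi()  # assumes you are logged in

# Collapse the entire history of the `main` branch into a single commit.
# Non-revertible: all intermediate commits are permanently discarded.
api.super_squash_history(
    repo_id="username/my-large-model",  # hypothetical repo id
    repo_type="model",
    branch="main",
)
```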
One key way Hugging Face supports the machine learning ecosystem is by hosting datasets on the Hub, including very large ones. However, if your dataset is bigger than 300GB, you will need to ask us to grant more storage.
In this case, to ensure we can effectively support the open-source ecosystem, we require you to let us know via datasets@huggingface.co.
When you get in touch with us, please include details about your dataset and your plans for hosting it. For hosting large datasets on the Hub, we also have a few requirements that your dataset needs to meet.
Please get in touch with us if any of these requirements are difficult for you to meet because of the type of data or domain you are working in.
Similarly to datasets, if you host models bigger than 300GB or plan to upload a large number of smaller models (for instance, hundreds of automated quants) totaling more than 1TB, you will need to ask us to grant more storage.
To do so, and to ensure we can effectively support the open-source ecosystem, please send an email with details of your project to models@huggingface.co.
If you need more model or dataset storage than your allocated private storage for academic or research purposes, please reach out to us at datasets@huggingface.co or models@huggingface.co along with a proposal of how you will use the storage grant.