One of the most difficult aspects of exploring potential models to use on your machine is knowing just how large a model will fit into memory with your current graphics card (such as loading the model onto CUDA).
To help alleviate this, 🤗 Accelerate has a CLI interface through `accelerate estimate-memory`. This tutorial will walk you through using it, what to expect, and at the end link to the interactive demo hosted on the 🤗 Hub, which will even let you post those results directly on the model repo!
Currently we support searching for models that can be used in `timm` and `transformers`.
This API will load the model into memory on the `meta` device, so we are not actually downloading and loading the full weights of the model into memory, nor do we need to. As a result, it's perfectly fine to measure 8 billion parameter models (or more) without having to worry about whether your CPU can handle it!
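The CLI does all of this for you, but if you want to see the idea in code, below is a minimal sketch using Accelerate's `init_empty_weights` context manager together with `transformers`. It illustrates the general technique of instantiating a model on the `meta` device; the estimator's exact internals may differ.

```python
from accelerate import init_empty_weights
from transformers import AutoConfig, AutoModel

# Only the config is downloaded; the full weights are never materialized.
config = AutoConfig.from_pretrained("bert-base-cased")

# Inside this context, parameters are created on the `meta` device,
# so even very large models "load" without consuming real RAM.
with init_empty_weights():
    model = AutoModel.from_config(config)

# Shapes and parameter counts are still available for size estimates.
print(model.num_parameters())
```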
When using `accelerate estimate-memory`, you need to pass in the name of the model you want to use, potentially the framework that model utilizes (if it can't be found automatically), and the data types you want the model to be loaded in with.
For example, here is how we can calculate the memory footprint for `bert-base-cased`:
accelerate estimate-memory bert-base-cased
This will download the `config.json` for `bert-base-cased`, load the model on the `meta` device, and report back how much space it will use:
┌────────────────────────────────────────────────────┐
│     Memory Usage for loading `bert-base-cased`     │
├───────┬─────────────┬──────────┬───────────────────┤
│ dtype │Largest Layer│Total Size│Training using Adam│
├───────┼─────────────┼──────────┼───────────────────┤
│float32│  84.95 MB   │413.18 MB │      1.61 GB      │
│float16│  42.47 MB   │206.59 MB │     826.36 MB     │
│  int8 │  21.24 MB   │103.29 MB │     413.18 MB     │
│  int4 │  10.62 MB   │ 51.65 MB │     206.59 MB     │
└───────┴─────────────┴──────────┴───────────────────┘
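If you are curious where these numbers come from, here is a rough back-of-the-envelope sketch. It assumes an approximate parameter count of ~108.3M for `bert-base-cased` and that "Training using Adam" is roughly 4x the model size (weights, gradients, and two Adam optimizer states), which matches the figures reported above; these are illustrative assumptions, not the estimator's exact formula.

```python
# Rough sketch (assumptions: ~108.3M parameters for bert-base-cased, and training
# with Adam ~= 4x model size for weights + gradients + 2 optimizer states).
PARAMS = 108_310_272
BYTES_PER_PARAM = {"float32": 4, "float16": 2, "int8": 1, "int4": 0.5}

for dtype, nbytes in BYTES_PER_PARAM.items():
    total_mb = PARAMS * nbytes / 1024**2
    print(f"{dtype:>7}: load ~ {total_mb:8.2f} MB | train with Adam ~ {total_mb * 4:9.2f} MB")
```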
By default it will return all the supported dtypes (`int4` through `float32`), but if you are interested in specific ones, these can be filtered.
If the source library cannot be determined automatically (as it could be in the case of `bert-base-cased`), a library name can be passed in:
accelerate estimate-memory HuggingFaceM4/idefics-80b-instruct --library_name transformers
┌────────────────────────────────────────────────────────────────────┐
│   Memory Usage for loading `HuggingFaceM4/idefics-80b-instruct`    │
├───────┬─────────────┬──────────┬───────────────────────────────────┤
│ dtype │Largest Layer│Total Size│        Training using Adam        │
├───────┼─────────────┼──────────┼───────────────────────────────────┤
│float32│   3.02 GB   │297.12 GB │              1.16 TB              │
│float16│   1.51 GB   │148.56 GB │             594.24 GB             │
│  int8 │  772.52 MB  │ 74.28 GB │             297.12 GB             │
│  int4 │  386.26 MB  │ 37.14 GB │             148.56 GB             │
└───────┴─────────────┴──────────┴───────────────────────────────────┘
accelerate estimate-memory timm/resnet50.a1_in1k --library_name timm
┌────────────────────────────────────────────────────┐
│  Memory Usage for loading `timm/resnet50.a1_in1k`  │
├───────┬─────────────┬──────────┬───────────────────┤
│ dtype │Largest Layer│Total Size│Training using Adam│
├───────┼─────────────┼──────────┼───────────────────┤
│float32│   9.0 MB    │ 97.7 MB  │     390.78 MB     │
│float16│   4.5 MB    │ 48.85 MB │     195.39 MB     │
│  int8 │   2.25 MB   │ 24.42 MB │      97.7 MB      │
│  int4 │   1.12 MB   │ 12.21 MB │      48.85 MB     │
└───────┴─────────────┴──────────┴───────────────────┘
As mentioned earlier, while we return `int4` through `float32` by default, any of the dtypes `float32`, `float16`, `int8`, and `int4` can be requested.
To do so, pass them in after specifying `--dtypes`:
accelerate estimate-memory bert-base-cased --dtypes float32 float16
┌────────────────────────────────────────────────────┐
│     Memory Usage for loading `bert-base-cased`     │
├───────┬─────────────┬──────────┬───────────────────┤
│ dtype │Largest Layer│Total Size│Training using Adam│
├───────┼─────────────┼──────────┼───────────────────┤
│float32│  84.95 MB   │413.18 MB │      1.61 GB      │
│float16│  42.47 MB   │206.59 MB │     826.36 MB     │
└───────┴─────────────┴──────────┴───────────────────┘
This calculator will tell you exactly how much memory is needed to purely load the model in, not to perform inference.
When performing inference, however, you can expect to add up to an additional 20%, as found by EleutherAI. We'll be conducting research into finding a more accurate estimate for these values, and will update this calculator once done.
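As a rough worked example under that rule of thumb (an estimate, not an exact measurement): `bert-base-cased` in float32 takes about 413.18 MB to load, so a conservative inference budget would be roughly 413.18 MB × 1.2 ≈ 496 MB.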
Lastly, we invite you to try the live Gradio demo of this utility, which includes an option to post a discussion thread on a model's repository with this data. Doing so will help provide access to these numbers in the community faster and help users know what you've learned!