Inference Endpoints Version

Hugging Face Inference Endpoints comes with a default serving container that is used for all supported Transformers and Sentence-Transformers tasks as well as for custom inference handlers, and it implements batching. Below you will find information about the installed packages and the versions used, with a handler sketch shown after this paragraph.
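For context, a custom inference handler is a handler.py file in your model repository that the default container loads in place of the built-in task pipeline. The following is a minimal sketch of such a handler; the pipeline task and the pre/post-processing are illustrative assumptions, not part of this document.

```python
# handler.py -- minimal sketch of a custom inference handler.
# The EndpointHandler class with __init__/__call__ follows the custom handler
# convention; the task ("text-classification") is an assumed example.
from typing import Any, Dict, List

from transformers import pipeline


class EndpointHandler:
    def __init__(self, path: str = ""):
        # `path` points to the local copy of the model repository.
        self.pipeline = pipeline("text-classification", model=path)

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
        # The container passes the request payload as a dict with an
        # "inputs" key; return a JSON-serializable list of predictions.
        inputs = data["inputs"]
        return self.pipeline(inputs)
```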

You can always upgrade installed packages and add custom packages by adding a requirements.txt file to your model repository. Read more in Add custom Dependencies.
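As an illustration, a requirements.txt at the root of the model repository might look like the following; the package names and version pins are assumed examples, not the container's actual versions:

```
# requirements.txt -- example custom dependencies (illustrative pins)
diffusers==0.27.2
safetensors>=0.4.0
```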

Installed packages & versions

The Hugging Face Inference Runtime has separate versions for PyTorch and TensorFlow, each for CPU and GPU, which are chosen according to the framework selected when you create an Inference Endpoint. The PyTorch and TensorFlow flavors are grouped together in the list below.
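For example, the framework flavor can be set when creating an endpoint programmatically. The sketch below uses create_inference_endpoint from the huggingface_hub library; the endpoint name, repository, and instance values are illustrative assumptions.

```python
# Sketch: create an Inference Endpoint with an explicit framework flavor.
# Name, repository, vendor, region, and instance values are assumed examples.
from huggingface_hub import create_inference_endpoint

endpoint = create_inference_endpoint(
    "my-endpoint",                # assumed endpoint name
    repository="gpt2",            # assumed model repository
    framework="pytorch",          # selects the PyTorch flavor of the runtime
    task="text-generation",
    accelerator="gpu",            # pairs the flavor with a GPU image
    vendor="aws",
    region="us-east-1",
    instance_size="x1",
    instance_type="nvidia-t4",
)
endpoint.wait()  # block until the endpoint is deployed
```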

General

GPU
