Hugging Face on Google Cloud


Hugging Face collaborates with Google across open science, open source, cloud, and hardware to enable companies to build their own AI with the latest open models from Hugging Face and the latest cloud and hardware features from Google Cloud.

Hugging Face enables new experiences for Google Cloud customers. They can easily train and deploy Hugging Face models on Google Kubernetes Engine (GKE), Vertex AI, and Cloud Run, on any hardware available in Google Cloud using Hugging Face Deep Learning Containers (DLCs) or our no-code integrations.

Deploy Models on Google Cloud

With Hugging Face DLCs

For advanced scenarios, you can pull any Hugging Face DLC from the Google Cloud Artifact Registry directly into your environment. We are curating a list of notebook examples showing how to deploy models with Hugging Face DLCs in:
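As a minimal sketch of what pulling one of these images looks like: the registry path and image tag below are illustrative assumptions, so check the published list of DLC URIs before using them.

```python
# Sketch: building the `docker pull` command for a Hugging Face serving DLC.
# The registry path and image tag are illustrative assumptions -- check the
# published list of Hugging Face DLC URIs in Artifact Registry before pulling.
import shlex

REGISTRY = "us-docker.pkg.dev/deeplearning-platform-release/gcr.io"
# Hypothetical TGI serving image tag; real tags encode framework/CUDA versions.
IMAGE_TAG = "huggingface-text-generation-inference-cu121.2-2.ubuntu2204.py310"

def pull_command(registry: str, tag: str) -> str:
    """Return the shell command that pulls the given DLC image."""
    return shlex.join(["docker", "pull", f"{registry}/{tag}"])

print(pull_command(REGISTRY, IMAGE_TAG))
```

The same image URI is what you would later reference as the serving or training container in your deployment configuration.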

From the Hub Model Page

On Vertex AI or GKE

If you want to deploy a model from the Hub in your Google Cloud account on Vertex AI or GKE, you can use our no-code integrations. Below, you will find step-by-step instructions on how to deploy Gemma 2 9B:

  1. On the model page, open the “Deploy” menu, and select “Google Cloud”. This will bring you straight into the Google Cloud Console.
  2. Select Vertex AI or GKE as a deployment option.
  3. Paste a Hugging Face token with the “Read access to contents of all public gated repos you can access” permission.
  4. If Vertex AI is selected, click “Deploy”. If GKE is selected, paste the manifest code and apply it to your GKE cluster.

Alternatively, you can follow this short video.
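For the GKE path, the console generates the manifest for you. To give a sense of what such a manifest contains, here is a rough sketch of a TGI-style Deployment assembled in Python and serialized to JSON (which `kubectl apply -f` also accepts). The image URI, model ID, and resource values are illustrative assumptions, and in practice the token belongs in a Kubernetes Secret rather than a plain env var — use the manifest the console produces.

```python
# Sketch of the kind of Kubernetes Deployment manifest the GKE flow generates.
# Image URI, model ID, and resource values are illustrative assumptions.
import json

HF_TOKEN = "hf_xxx"  # placeholder -- store real tokens in a Kubernetes Secret

manifest = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "gemma-2-9b-tgi"},
    "spec": {
        "replicas": 1,
        "selector": {"matchLabels": {"app": "gemma-2-9b-tgi"}},
        "template": {
            "metadata": {"labels": {"app": "gemma-2-9b-tgi"}},
            "spec": {
                "containers": [
                    {
                        "name": "tgi",
                        # Hypothetical Hugging Face TGI DLC URI:
                        "image": (
                            "us-docker.pkg.dev/deeplearning-platform-release/gcr.io/"
                            "huggingface-text-generation-inference-cu121.2-2.ubuntu2204.py310"
                        ),
                        "env": [
                            {"name": "MODEL_ID", "value": "google/gemma-2-9b-it"},
                            {"name": "HF_TOKEN", "value": HF_TOKEN},
                        ],
                        "resources": {"limits": {"nvidia.com/gpu": "1"}},
                        "ports": [{"containerPort": 8080}],
                    }
                ]
            },
        },
    },
}

# kubectl also accepts JSON manifests: kubectl apply -f deployment.json
print(json.dumps(manifest, indent=2))
```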

On Hugging Face Inference Endpoints

If you want to deploy a model from the Hub but you don’t have a Google Cloud environment, you can use Hugging Face Inference Endpoints on Google Cloud. Below, you will find step-by-step instructions on how to deploy Gemma 2 9B:

  1. On the model page, open the “Deploy” menu, and select “Inference Endpoints (dedicated)”. This will bring you to the Inference Endpoints deployment page.
  2. Select Google Cloud Platform, scroll down and click on “Create Endpoint”.

Alternatively, you can follow this short video.
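The same deployment can also be scripted with the `huggingface_hub` client instead of the web UI. The sketch below requires `pip install huggingface_hub` and an authenticated token; the region, accelerator, and instance values are illustrative assumptions rather than the exact configuration the UI would pick.

```python
def create_gcp_endpoint():
    """Sketch: create a dedicated Inference Endpoint on Google Cloud from code.
    Requires `pip install huggingface_hub` and a Hugging Face token; the
    region, accelerator, and instance values below are illustrative
    assumptions -- pick valid ones from the endpoint creation page."""
    from huggingface_hub import create_inference_endpoint  # deferred import

    endpoint = create_inference_endpoint(
        "gemma-2-9b-it",                    # endpoint name
        repository="google/gemma-2-9b-it",  # model to deploy
        framework="pytorch",
        task="text-generation",
        vendor="gcp",                       # run on Google Cloud
        region="us-east4",                  # illustrative region
        type="protected",
        accelerator="gpu",
        instance_size="x1",                 # illustrative
        instance_type="nvidia-l4",          # illustrative
    )
    endpoint.wait()  # block until the endpoint is running
    return endpoint
```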

From Vertex AI Model Garden

On Vertex AI or GKE

If you are used to browsing models directly in Vertex AI Model Garden, we have brought more than 4,000 models from the Hugging Face Hub to it. Below, you will find step-by-step instructions on how to deploy Gemma 2 9B:

  1. On the Vertex AI Model Garden landing page, you can browse Hugging Face models:
    1. by clicking “Deploy From Hugging Face” at the top left;
    2. by scrolling down to see our curated list of 12 open source models;
    3. by clicking on “Hugging Face” in the Featured Partner section to access a catalog of 4,000+ models hosted on the Hub.
  2. Once you have found the model you want to deploy, select Vertex AI or GKE as the deployment option.
  3. Paste a Hugging Face token with the “Read access to contents of all public gated repos you can access” permission.
  4. If Vertex AI is selected, click “Deploy”. If GKE is selected, paste the manifest code and apply it to your GKE cluster.

Alternatively, you can follow this short video.
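If you prefer code over the console, the Vertex AI path can be approximated with the `google-cloud-aiplatform` SDK. This is a hedged sketch: the serving container URI, machine type, and accelerator below are illustrative assumptions, whereas the Model Garden UI fills in tested values for you.

```python
def deploy_on_vertex(project: str, location: str = "us-central1"):
    """Sketch of the programmatic equivalent of clicking "Deploy" on Vertex AI.
    Requires `pip install google-cloud-aiplatform` and GCP credentials; the
    container URI, machine type, and accelerator are illustrative assumptions."""
    from google.cloud import aiplatform  # deferred import

    aiplatform.init(project=project, location=location)
    model = aiplatform.Model.upload(
        display_name="gemma-2-9b-it",
        # Hypothetical Hugging Face TGI serving DLC URI:
        serving_container_image_uri=(
            "us-docker.pkg.dev/deeplearning-platform-release/gcr.io/"
            "huggingface-text-generation-inference-cu121.2-2.ubuntu2204.py310"
        ),
        serving_container_environment_variables={
            "MODEL_ID": "google/gemma-2-9b-it",
        },
    )
    endpoint = model.deploy(
        machine_type="g2-standard-12",  # illustrative machine type
        accelerator_type="NVIDIA_L4",   # illustrative accelerator
        accelerator_count=1,
    )
    return endpoint
```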

Train models on Google Cloud

With Hugging Face DLCs

For advanced scenarios, you can pull the containers from the Google Cloud Artifact Registry directly into your environment. We are curating a list of notebook examples showing how to train models with Hugging Face DLCs in:
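As one hedged example of the training side, a Hugging Face training DLC can serve as the custom container of a Vertex AI training job. The container URI and hardware values below are illustrative assumptions; use the URIs and machine shapes documented for your region.

```python
def run_training_job(project: str, location: str = "us-central1"):
    """Sketch: launch a fine-tuning job on Vertex AI using a Hugging Face
    training DLC as the custom container. Requires
    `pip install google-cloud-aiplatform` and GCP credentials; the container
    URI and hardware values are illustrative assumptions."""
    from google.cloud import aiplatform  # deferred import

    aiplatform.init(project=project, location=location)
    job = aiplatform.CustomContainerTrainingJob(
        display_name="gemma-2-9b-finetune",
        # Hypothetical Hugging Face PyTorch training DLC URI:
        container_uri=(
            "us-docker.pkg.dev/deeplearning-platform-release/gcr.io/"
            "huggingface-pytorch-training-cu121.2-3.transformers.4-42.ubuntu2204.py310"
        ),
    )
    job.run(
        replica_count=1,
        machine_type="g2-standard-12",  # illustrative
        accelerator_type="NVIDIA_L4",   # illustrative
        accelerator_count=1,
    )
    return job
```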

Support

If you have any issues using Hugging Face on Google Cloud, you can get community support by creating a new topic in the Forum dedicated to Google Cloud usage.

Hugging Face DLCs are open source and licensed under Apache 2.0 within the Google-Cloud-Containers repository. For premium support, our Expert Support Program gives you direct dedicated support from our team.
