PaliGemma 2 is the latest multilingual vision-language model released by Google. It combines the SigLIP vision model with the Gemma 2 language model, enabling it to process both image and text inputs and generate text outputs for various tasks, including captioning, visual question answering, and object detection. Text Generation Inference (TGI) is a toolkit developed by Hugging Face for deploying and serving LLMs, offering high-performance text generation. Google Kubernetes Engine (GKE) is a fully-managed Kubernetes service in Google Cloud that can be used to deploy and operate containerized applications at scale using Google Cloud infrastructure.
This example showcases how to deploy Google PaliGemma 2 from the Hugging Face Hub on a GKE Cluster, running a purpose-built container to deploy LLMs and VLMs in a secure and managed environment with the Hugging Face DLC for TGI. It also presents different scenarios and use cases where PaliGemma 2 can be used.
Some configuration steps, such as the installation of `gcloud`, `kubectl`, and the `gke-gcloud-auth-plugin`, are not required if running the example within the Google Cloud Shell, as it already comes with those dependencies installed and is automatically logged in with the current account and project selected on Google Cloud.
Optionally, we recommend you set the following environment variables for convenience, and to avoid duplicating the values elsewhere in the example:
export PROJECT_ID=your-project-id
export LOCATION=your-location
export CLUSTER_NAME=your-cluster-name
First, you need to install both `gcloud` and `kubectl` on your local machine, which are the command-line tools to interact with Google Cloud and Kubernetes, respectively.

- To install `gcloud`, follow the instructions at Cloud SDK Documentation - Install the gcloud CLI.
- To install `kubectl`, follow the instructions at Kubernetes Documentation - Install Tools.

Additionally, to use `kubectl` with the GKE Cluster credentials, you also need to install the `gke-gcloud-auth-plugin`, which can be installed with `gcloud` as follows:
gcloud components install gke-gcloud-auth-plugin
There are other ways to install the `gke-gcloud-auth-plugin`, which you can check in the GKE Documentation - Install kubectl and configure cluster access.
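Once installed, you can optionally verify that the plugin is available on your PATH by checking its version:

gke-gcloud-auth-plugin --version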
Then you need to log in to your Google Cloud account and set the project ID to the one you want to use for the deployment of the GKE Cluster.
gcloud auth login
gcloud auth application-default login # Required for local development
gcloud config set project $PROJECT_ID
Once you are logged in, you need to enable the necessary service APIs in Google Cloud, such as the Google Kubernetes Engine API, the Google Container Registry API, and the Google Container File System API, which are required for the deployment of the GKE Cluster and the Hugging Face DLC for TGI.
gcloud services enable container.googleapis.com
gcloud services enable containerregistry.googleapis.com
gcloud services enable containerfilesystem.googleapis.com
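Optionally, you can verify that the APIs were enabled by listing the enabled services and filtering them; a quick sketch using grep (the exact output formatting may differ depending on your gcloud version):

gcloud services list --enabled | grep -E "container(registry|filesystem)?\.googleapis\.com"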
`google/paligemma2-3b-pt-224` is a gated model, as are the rest of the official PaliGemma 2 models. In order to use any of them and be able to download the weights, you first need to accept their gating / license on one of the model cards.
Once you have been granted access to the PaliGemma 2 models on the Hub, you need to generate either a fine-grained or a read-access token. A fine-grained token allows you to scope permissions to the desired models, such as `google/paligemma2-3b-pt-224`, so you can download the weights, and is the recommended option. A read-access token grants access to all the models your account has access to. To generate access tokens for the Hugging Face Hub, you can follow the instructions at Hugging Face Hub Documentation - User access tokens.
After the access token is generated, the recommended way of setting it is via the `huggingface-cli` command-line interface that comes with the `huggingface_hub` Python SDK, which can be installed as follows:
pip install --upgrade --quiet huggingface_hub
Then log in with the generated access token that has read access over the gated / private model, as follows:
huggingface-cli login
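You can verify that the login succeeded and that the expected account is being used with:

huggingface-cli whoami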
To deploy the GKE Cluster, the “Autopilot” mode will be used, as it is the recommended one for most workloads since the underlying infrastructure is managed by Google, meaning there is no need to create node pools in advance or set up their ingress. Alternatively, you can also use the “Standard” mode, but that may require more configuration steps and more knowledge of Kubernetes.
Before creating the GKE Autopilot Cluster on a different version than the one pinned below, you should read the GKE Documentation - Optimize Autopilot Pod performance by choosing a machine series page, as not all the Kubernetes versions available on GKE support GPU accelerators (e.g. `nvidia-l4` is not supported on GKE for Kubernetes 1.28.3 or lower).
gcloud container clusters create-auto $CLUSTER_NAME \
--project=$PROJECT_ID \
--location=$LOCATION \
--release-channel=stable \
--cluster-version=1.30 \
--no-autoprovisioning-enable-insecure-kubelet-readonly-port
If you want to change the Kubernetes version running on the GKE Cluster, you can do so, but make sure to check the latest supported Kubernetes versions in the location where you want to create the cluster, with the following command:
gcloud container get-server-config \
--flatten="channels" \
--filter="channels.channel=STABLE" \
--format="yaml(channels.channel,channels.defaultVersion)" \
--location=$LOCATION
Additionally, note that you can also use the “RAPID” channel instead of the “STABLE” one if you require any Kubernetes feature not yet shipped in the latest Kubernetes version released on the “STABLE” channel, even though using the “STABLE” channel is recommended. For more information, please visit the GKE Documentation - Specifying cluster version.
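For instance, a similar query to the one above can be run against the “RAPID” channel to inspect its default and valid versions in your location (this mirrors the previous command, only changing the channel filter and the listed fields):

gcloud container get-server-config \
    --flatten="channels" \
    --filter="channels.channel=RAPID" \
    --format="yaml(channels.channel,channels.defaultVersion,channels.validVersions)" \
    --location=$LOCATION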
Once the GKE Cluster is created, you need to get the credentials to access it via `kubectl`:
gcloud container clusters get-credentials $CLUSTER_NAME --location=$LOCATION
Then you will be ready to run `kubectl` commands against the Kubernetes Cluster you just created on GKE.
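To verify that the credentials were fetched correctly and that `kubectl` can reach the cluster, you can run e.g.:

kubectl get nodes

Note that on a freshly created Autopilot cluster the node list may be empty or very short, since Autopilot provisions nodes on demand as workloads are scheduled.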
As `google/paligemma2-3b-pt-224` is a gated model and requires a Hugging Face Hub access token to download the weights, as mentioned before, you need to set a Kubernetes secret with the previously generated Hugging Face Hub token, using the following command (assuming that you have the `huggingface_hub` Python SDK installed):
kubectl create secret generic hf-secret \
--from-literal=hf_token=$(python -c "from huggingface_hub import get_token; print(get_token())") \
--dry-run=client -o yaml | kubectl apply -f -
Alternatively, even if not recommended, you can also set the access token directly by pasting it within the `kubectl` command as follows (make sure to replace `hf_***` with your own token):
kubectl create secret generic hf-secret \
    --from-literal=hf_token=hf_*** \
    --dry-run=client -o yaml | kubectl apply -f -
For more information on how to set Kubernetes secrets in a GKE Cluster, check the GKE / Kubernetes documentation on Secrets.
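You can verify that the secret was created correctly, without printing the token value, with the following command:

kubectl describe secret hf-secret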
Now you can proceed with the Kubernetes deployment of the Hugging Face DLC for TGI, serving the `google/paligemma2-3b-pt-224` model from the Hugging Face Hub. To explore all the models from the Hugging Face Hub that can be served with TGI, you can browse the models tagged with `text-generation-inference` on the Hub.
PaliGemma 2 will be deployed from the following Kubernetes Deployment Manifest (including the Service):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tgi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tgi
  template:
    metadata:
      labels:
        app: tgi
        hf.co/model: google--paligemma2-3b-pt-224
        hf.co/task: text-generation
    spec:
      containers:
        - name: tgi
          image: "us-central1-docker.pkg.dev/gcp-partnership-412108/deep-learning-images/huggingface-text-generation-inference-gpu.3.0.1"
          # image: "us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-generation-inference-cu124.3-0.ubuntu2204.py311"
          resources:
            requests:
              nvidia.com/gpu: 1
            limits:
              nvidia.com/gpu: 1
          env:
            - name: MODEL_ID
              value: "google/paligemma2-3b-pt-224"
            - name: NUM_SHARD
              value: "1"
            - name: PORT
              value: "8080"
            - name: HF_TOKEN
              valueFrom:
                secretKeyRef:
                  name: hf-secret
                  key: hf_token
          volumeMounts:
            - mountPath: /dev/shm
              name: dshm
            - mountPath: /tmp
              name: tmp
      volumes:
        - name: dshm
          emptyDir:
            medium: Memory
            sizeLimit: 1Gi
        - name: tmp
          emptyDir: {}
      nodeSelector:
        cloud.google.com/gke-accelerator: nvidia-l4
---
apiVersion: v1
kind: Service
metadata:
  name: tgi
spec:
  selector:
    app: tgi
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
You can either deploy it by copying the content above into a file named `deployment.yaml` and then applying it with the following command:
kubectl apply -f deployment.yaml
Optionally, if you also want to deploy an Ingress to e.g. expose a public IP to access the Service, then you should copy the following content into a file named `ingress.yaml`:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tgi
  # https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tgi
                port:
                  number: 8080
And then deploy it with the following command:
kubectl apply -f ingress.yaml
Alternatively, you can just clone the `huggingface/Google-Cloud-Containers` repository from GitHub and then apply the configuration, including all the Kubernetes manifests mentioned above, as follows:
git clone https://github.com/huggingface/Google-Cloud-Containers
kubectl apply -f Google-Cloud-Containers/examples/gke/deploy-paligemma-2-with-tgi/config
The Kubernetes deployment may take a few minutes to be ready, so you can check the status of the pods being deployed in the default namespace with the following command:
kubectl get pods
Alternatively, you can just wait up to 700 seconds for the deployment to be ready with the following command:
kubectl wait --for=condition=Available --timeout=700s deployment/tgi
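While waiting, you can also follow the logs of the TGI container to track the model weights download and the server startup; note that the first deployment can take a while, as the weights for `google/paligemma2-3b-pt-224` need to be downloaded first:

kubectl logs -f deployment/tgi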
To access the deployed TGI service, you have two options:
You can port-forward the deployed TGI service to port 8080 on your local machine using the following command:
kubectl port-forward service/tgi 8080:8080
This allows you to access the service via `localhost:8080`.
If you’ve configured the ingress (as defined in the `ingress.yaml` file), you can access the service using the external IP of the ingress. Retrieve the external IP with this command:
kubectl get ingress tgi -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
Finally, to make sure that the service is healthy and reachable via either `localhost` or the ingress IP (depending on how you exposed the service in the step above), you can send the following `curl` command:
curl http://localhost:8080/health
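Or, if you exposed the service via the ingress instead, you can run the same health check against the ingress external IP; a sketch assuming the ingress has already been assigned an IP (which can take a few minutes) and that the GCE load balancer serves HTTP on the default port 80:

curl http://$(kubectl get ingress tgi -o jsonpath='{.status.loadBalancer.ingress[0].ip}')/health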
And that’s it, TGI is now reachable and healthy on GKE!
Before sending the `curl` request for inference, note that the PaliGemma variant being served is `google/paligemma2-3b-pt-224`, i.e. the pre-trained variant, meaning that it is not particularly usable out of the box for arbitrary tasks, but is rather intended to transfer well to other tasks after fine-tuning. That said, it is pre-trained on a set of given tasks following the previous PaLI: A Jointly-Scaled Multilingual Language-Image Model work, so the following prompt formats will work out of the box via the `/generate` endpoint:
- `caption {lang}`: Simple captioning objective on datasets like WebLI and CC3M-35L
- `ocr`: Transcription of text on the image using a public OCR system
- `answer en {question}`: Generated VQA on CC3M-35L and object-centric questions on OpenImages
- `question {lang} {English answer}`: Generated VQG on CC3M-35L in 35 languages for given English answers
- `detect {thing} ; {thing} ; ...`: Multi-object detection on generated open-world data
- `segment {thing} ; {thing} ; ...`: Multi-object instance segmentation on generated open-world data
- `caption <ymin><xmin><ymax><xmax>`: Grounded captioning of content within a specified box

The PaliGemma and PaliGemma 2 models require the BOS token after the images and before the prefix, followed by `\n`, i.e. the line break, as the separator between the prefix (input) and the suffix (output); both are automatically included by the `transformers.PaliGemmaProcessor`, meaning that there is no need to provide them explicitly to the `/generate` endpoint in TGI.
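If you are curious about what the formatted prompt looks like, the following minimal sketch (not needed for the TGI deployment itself, and assuming you have `transformers` and Pillow installed locally plus access to the gated model; the image path is a placeholder) shows that the processor prepends the image tokens and the BOS token and appends the `\n` separator for you:

from PIL import Image
from transformers import PaliGemmaProcessor

# Load the processor for the served model (requires access to the gated repo)
processor = PaliGemmaProcessor.from_pretrained("google/paligemma2-3b-pt-224")

# Placeholder image path; replace with a local image
image = Image.open("/path/to/image.png")

# The processor builds the full prompt: <image> tokens + BOS + prefix + "\n"
inputs = processor(text="caption en", images=image, return_tensors="pt")
print(processor.batch_decode(inputs["input_ids"])[0])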
The images should be provided following the Markdown syntax for image rendering, i.e. `![](<IMAGE_URL>)`, which requires the image URL to be publicly accessible. Alternatively, you can provide images in the request using base64 encoding of the image data.
This means that the prompt formatting expected by the `/generate` endpoint is either:

- `![](<IMAGE_URL>)<PROMPT>` if the image is provided via a URL.
- `![](data:image/png;base64,<BASE64_IMAGE>)<PROMPT>` if the image is provided using base64 encoding (here assuming a PNG image).

Read more about the technical details and implementation of PaliGemma in the papers / technical reports released by Google.
Note that the `/v1/chat/completions` endpoint cannot be used and will result in a “chat template not found” error, as the model is pre-trained rather than fine-tuned for chat conversations and therefore does not have a chat template that can be applied within the `/v1/chat/completions` endpoint following the OpenAI OpenAPI specification.
To send a POST request to the TGI service using `cURL`, you can run the following command (replacing `<IMAGE_URL>` with a publicly accessible image URL):
curl http://localhost:8080/generate \
    -d '{"inputs":"![](<IMAGE_URL>)caption en","parameters":{"max_new_tokens":128,"seed":42}}' \
    -H 'Content-Type: application/json'
| Image | Input | Output |
| --- | --- | --- |
| ![]() | `caption en` | image of a man in a spacesuit |
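Similarly, if the image is not publicly hosted anywhere, it can be embedded as a base64 data URI directly in the cURL request, following the prompt format described above; a sketch assuming GNU coreutils `base64` (the `-w0` flag disables line wrapping) and a local PNG at a placeholder path:

BASE64_IMAGE=$(base64 -w0 /path/to/image.png)
curl http://localhost:8080/generate \
    -H 'Content-Type: application/json' \
    -d "{\"inputs\":\"![](data:image/png;base64,${BASE64_IMAGE})caption en\",\"parameters\":{\"max_new_tokens\":128,\"seed\":42}}"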
Alternatively, you can use the `huggingface_hub` Python SDK and its `InferenceClient`. You can install it via pip as `pip install --upgrade --quiet huggingface_hub`, and then run the following snippet to mimic the cURL command above, i.e. sending requests to the Generate API:
from huggingface_hub import InferenceClient

client = InferenceClient("http://localhost:8080", api_key="-")

generation = client.text_generation(
    # Replace <IMAGE_URL> with a publicly accessible image URL
    prompt="![](<IMAGE_URL>)caption en",
    max_new_tokens=128,
    seed=42,
)
Or, if you don’t have the image hosted at a public URL, you can also send the base64 encoding of the image read from the image file, as follows:
import base64

from huggingface_hub import InferenceClient

client = InferenceClient("http://localhost:8080", api_key="-")

# Read the local image and encode it as base64
with open("/path/to/image.png", "rb") as f:
    b64_image = base64.b64encode(f.read()).decode("utf-8")

generation = client.text_generation(
    # Embed the image as a base64 data URI before the prompt
    prompt=f"![](data:image/png;base64,{b64_image})caption en",
    max_new_tokens=128,
    seed=42,
)
Both produce the same generation, which the `/generate` endpoint returns as the following JSON:
{"generated_text": "image of a man in a spacesuit"}
Finally, once you are done using TGI on the GKE Cluster, you can safely delete the GKE Cluster to avoid incurring unnecessary costs.
gcloud container clusters delete $CLUSTER_NAME --location=$LOCATION
Alternatively, you can also scale the replicas of the deployed pod down to 0 if you want to preserve the cluster, since the default GKE Cluster deployed in GKE Autopilot mode runs just a single `e2-small` instance.
kubectl scale --replicas=0 deployment/tgi
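Then, whenever you want to resume serving, you can scale the deployment back up to one replica:

kubectl scale --replicas=1 deployment/tgi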
📍 Find the complete example on GitHub here!