This tutorial guides you through setting up and running inference on TPU using Text Generation Inference (TGI). The TGI server is compatible with the OpenAI Messages API and offers an optimized solution for serving models on TPU.
Before starting, ensure you have:
- A TPU instance you can reach over SSH
- Docker installed on that instance
- A HuggingFace account and access token, with access to the google/gemma-2b-it model
First, connect to your TPU instance via SSH.
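If your TPU was created as a Cloud TPU VM, one way to connect is with the gcloud CLI; the instance name and zone below are placeholders for your own values:

gcloud compute tpus tpu-vm ssh my-tpu-vm --zone=us-central1-a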
Install the HuggingFace Hub CLI:
pip install huggingface_hub
Log in to HuggingFace:
huggingface-cli login
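If you prefer a non-interactive login, for example in a setup script, the CLI also accepts the token directly; the sketch below assumes your token is already exported in the HF_TOKEN environment variable:

huggingface-cli login --token $HF_TOKEN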
We will use the google/gemma-2b-it model for this tutorial.
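Note that gemma-2b-it is a gated model, so your HuggingFace account must have accepted its license on the Hub. As an optional check that your token can access it, you can try downloading a single file from the repository (TGI will fetch the full model itself at startup):

huggingface-cli download google/gemma-2b-it config.json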
We will use the Optimum-TPU image, a TPU-optimized TGI image provided by HuggingFace.
docker run -p 8080:80 \
--shm-size 16GB \
--privileged \
--net host \
-e LOG_LEVEL=text_generation_router=debug \
-v ~/hf_data:/data \
-e HF_TOKEN=$(cat ~/.cache/huggingface/token) \
ghcr.io/huggingface/optimum-tpu:v0.2.3-tgi \
--model-id google/gemma-2b-it \
--max-input-length 512 \
--max-total-tokens 1024 \
--max-batch-prefill-tokens 512 \
--max-batch-total-tokens 1024
Key parameters explained:
- --shm-size 16GB --privileged --net host: Required for Docker to access the TPU
- -v ~/hf_data:/data: Volume mount for model storage
- --max-input-length: Maximum input sequence length
- --max-total-tokens: Maximum combined input and output tokens
- --max-batch-prefill-tokens: Maximum number of tokens processed during the prefill phase of a batch
- --max-batch-total-tokens: Maximum total tokens in a batch

Wait for the “Connected” message in the logs:
2025-01-11T10:40:00.256056Z INFO text_generation_router::server: router/src/server.rs:2393: Connected
Your TGI server is now ready to serve requests.
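Optionally, you can confirm the server is healthy before sending a generation request by hitting TGI's /health endpoint. This assumes the port mapping used above and should print an HTTP 200 once the model is loaded:

curl -s -o /dev/null -w "%{http_code}\n" 0.0.0.0:8080/health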
Query the server from another terminal on the TPU instance:
curl 0.0.0.0:8080/generate \
-X POST \
-d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
-H 'Content-Type: application/json'
To query from outside the TPU instance:
curl 34.174.11.242:8080/generate \
-X POST \
-d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
-H 'Content-Type: application/json'
You may need to configure GCP firewall rules with gcloud compute firewall-rules create to allow remote traffic to port 8080.
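A minimal sketch of such a rule, with a placeholder rule name and source range (restrict --source-ranges to the addresses that actually need access):

# Placeholder rule name and source range; adjust to your own network setup
gcloud compute firewall-rules create allow-tgi-8080 \
    --direction=INGRESS \
    --allow=tcp:8080 \
    --source-ranges=203.0.113.0/24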
Key parameters for inference requests:
- inputs: The prompt text
- max_new_tokens: Maximum number of tokens to generate
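Finally, because the TGI server is compatible with the OpenAI Messages API, you can also send chat-style requests to its /v1/chat/completions endpoint. A minimal sketch, assuming the same local port mapping as above (the model field is required by the API format but does not select a different model here):

curl 0.0.0.0:8080/v1/chat/completions \
    -X POST \
    -d '{"model":"tgi","messages":[{"role":"user","content":"What is Deep Learning?"}],"max_tokens":20}' \
    -H 'Content-Type: application/json'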