Text Generation Inference (TGI) is a highly optimized serving engine for Large Language Models (LLMs) that better leverages the underlying hardware, in this case Cloud TPU.
We assume the reader already has a Cloud TPU instance up and running. If this is not the case, please see our guide to deploy one here.
Optimum-TPU provides a `make tpu-tgi` command at the root level of the repository to help you build a local Docker image.
```bash
HF_TOKEN=<your_hf_token_here>
MODEL_ID=google/gemma-2b

sudo docker run --net=host \
  --privileged \
  -v $(pwd)/data:/data \
  -e HF_TOKEN=${HF_TOKEN} \
  huggingface/optimum-tpu:latest \
  --model-id ${MODEL_ID} \
  --max-concurrent-requests 4 \
  --max-input-length 32 \
  --max-total-tokens 64 \
  --max-batch-size 1
```
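Model loading can take a while, so it is useful to confirm the server is ready before sending requests. TGI exposes a `/health` route for this; the sketch below assumes the server is reachable on `localhost` as in the command above:

```shell
# Prints the HTTP status code: 200 once the model is loaded and the
# server is ready to accept requests.
curl -s -o /dev/null -w "%{http_code}\n" localhost/health
```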
You can query the model using either the `/generate` or `/generate_stream` route:
```bash
curl localhost/generate \
  -X POST \
  -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
  -H 'Content-Type: application/json'
```

```bash
curl localhost/generate_stream \
  -X POST \
  -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
  -H 'Content-Type: application/json'
```
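The same request can also be built programmatically. The sketch below constructs the JSON body used in the `curl` examples above from Python; the commented-out `requests` call at the end is an assumption about how you might send it and requires the server to be running:

```python
import json

# Request body matching the /generate example above.
payload = {
    "inputs": "What is Deep Learning?",
    "parameters": {"max_new_tokens": 20},
}

# Serialize to the JSON string passed as the POST body.
body = json.dumps(payload)
print(body)

# To actually send it (assuming the server listens on localhost):
#   import requests
#   r = requests.post("http://localhost/generate", data=body,
#                     headers={"Content-Type": "application/json"})
#   print(r.json())
```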
Jetstream PyTorch is a highly optimized PyTorch engine for serving LLMs on Cloud TPU. You can use this engine by setting the `JETSTREAM_PT=1` environment variable.
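When serving with the Docker image, environment variables are typically forwarded to the container with `-e`; a sketch based on the earlier `docker run` command (the elided flags are unchanged):

```shell
# Enable the Jetstream PyTorch engine inside the container
sudo docker run --net=host \
  ... \
  -e HF_TOKEN=${HF_TOKEN} \
  -e JETSTREAM_PT=1 \
  huggingface/optimum-tpu:latest \
  ...
```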
When using the Jetstream PyTorch engine, you can enable quantization to reduce the memory footprint and increase throughput. To enable quantization, set the `QUANTIZATION=1` environment variable.
Note: Quantization is still experimental and may produce lower quality results compared to the non-quantized version.
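The two variables can be combined; a sketch following the same `-e` pattern as before (the elided flags are unchanged from the earlier `docker run` command):

```shell
# Enable the Jetstream PyTorch engine with quantization
sudo docker run --net=host \
  ... \
  -e JETSTREAM_PT=1 \
  -e QUANTIZATION=1 \
  huggingface/optimum-tpu:latest \
  ...
```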