# TGI Configuration Reference Guide

## Required Configuration

### Required Environment Variables

### Required Command Line Arguments

#### Docker-specific parameters

These parameters are needed when running a TPU container, so that the Docker container can properly access the TPU hardware.

#### TGI-specific parameters

These parameters are used by TGI and optimum-TPU to configure the server's behavior.

## Optional Configuration

### Optional Environment Variables

Note on warmup:

You can find more options in the TGI documentation. Note that not all parameters are compatible with TPUs; CUDA-specific parameters, for example, do not apply.

Tip: most TGI parameters can be passed either as Docker environment variables or as command-line arguments. For example, you can pass either `--model-id google/gemma-2b-it` or `-e MODEL_ID=google/gemma-2b-it` to the `docker run` command.
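As an illustration, the following two invocations configure the same model; the image tag and model name are taken from the example command in this guide:

```shell
# Form 1: pass the model as a TGI command-line argument (after the image name)
docker run -p 8080:80 ghcr.io/huggingface/optimum-tpu:v0.2.3-tgi \
    --model-id google/gemma-2b-it

# Form 2: pass the same setting as a Docker environment variable (before the image name)
docker run -p 8080:80 -e MODEL_ID=google/gemma-2b-it \
    ghcr.io/huggingface/optimum-tpu:v0.2.3-tgi
```

Note the ordering: environment variables (`-e`) are Docker options and go before the image name, while TGI arguments go after it.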

### Optional Command Line Arguments

You can find more options in the TGI documentation. Note that not all parameters are compatible with TPUs; CUDA-specific parameters, for example, do not apply.

## Docker Requirements

When running TGI inside a container (recommended), the container should be started with:
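Based on the example command in this guide, that means passing at least the following flags (the shared-memory size is a reasonable default and may be adjusted for your workload):

```shell
# --shm-size: give the container enough shared memory for inter-process communication
# --privileged and --net host: let the container access the TPU hardware on the host
docker run --shm-size 16GB --privileged --net host \
    <image> <tgi-arguments>
```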

## Example Command

Here is a complete example showing all major configuration options:

```shell
docker run -p 8080:80 \
    --shm-size 16GB \
    --privileged \
    --net host \
    -e QUANTIZATION=1 \
    -e MAX_BATCH_SIZE=2 \
    -e LOG_LEVEL=text_generation_router=debug \
    -v ~/hf_data:/data \
    -e HF_TOKEN=<your_hf_token_here> \
    ghcr.io/huggingface/optimum-tpu:v0.2.3-tgi \
    --model-id google/gemma-2b-it \
    --max-input-length 512 \
    --max-total-tokens 1024 \
    --max-batch-prefill-tokens 512 \
    --max-batch-total-tokens 1024
```
You need to replace `<your_hf_token_here>` with a Hugging Face access token, which you can get [here](https://huggingface.co/settings/tokens).
If you have already logged in via `huggingface-cli login`, you can instead set `HF_TOKEN=$(cat ~/.cache/huggingface/token)` for convenience.
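Once the container is up and warmup has finished, you can send a quick test request to TGI's `/generate` endpoint. This is a sketch assuming the server is reachable on port 8080, matching the port mapping in the example above; adjust the host and port to your setup:

```shell
# Send a short generation request to the running TGI server
curl http://localhost:8080/generate \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{"inputs": "What is a TPU?", "parameters": {"max_new_tokens": 32}}'
```

The server responds with a JSON body containing the generated text.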

## Additional Resources