This guide explains how to run Optimum-TPU within a Docker container using the official PyTorch/XLA image.
Before starting, ensure you have:

- A Google Cloud TPU VM instance with Docker installed
- Permission to run privileged containers on that VM
First, set the environment variables for the image URL and version, then pull the image:

```bash
export TPUVM_IMAGE_URL=us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla
export TPUVM_IMAGE_VERSION=r2.5.0_3.10_tpuvm

# Pull the image
docker pull ${TPUVM_IMAGE_URL}:${TPUVM_IMAGE_VERSION}
```
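The two variables compose into the full image reference that the later `docker pull` and `docker run` commands use. A quick sketch of how they expand:

```bash
export TPUVM_IMAGE_URL=us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla
export TPUVM_IMAGE_VERSION=r2.5.0_3.10_tpuvm

# The full image reference is URL:VERSION
echo "${TPUVM_IMAGE_URL}:${TPUVM_IMAGE_VERSION}"

# After pulling, you can confirm the image is present locally with:
#   docker image ls "${TPUVM_IMAGE_URL}"
```

Keeping the URL and version in separate variables makes it easy to switch to a newer PyTorch/XLA release by changing only `TPUVM_IMAGE_VERSION`.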
Launch the container with the flags required for TPU access:

```bash
docker run -ti \
  --rm \
  --shm-size 16GB \
  --privileged \
  --net=host \
  ${TPUVM_IMAGE_URL}:${TPUVM_IMAGE_VERSION} \
  bash
```

The `--shm-size 16GB --privileged --net=host` flags are required for Docker to access the TPU.
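To confirm the privileged container can actually see the accelerator, you can check for the TPU device nodes from inside it. This is a minimal sketch: it assumes TPU VMs expose accelerators under `/dev/accel*`, which can vary with the TPU runtime version.

```bash
# Inside the container: sanity-check that TPU device nodes are exposed.
# /dev/accel* is an assumption; exact device naming can vary by runtime.
if ls /dev/accel* >/dev/null 2>&1; then
  tpu_status="TPU device nodes visible"
else
  tpu_status="no TPU device nodes found (check --privileged and that you are on a TPU VM)"
fi
echo "$tpu_status"
```

If no device nodes appear, the most common causes are a missing `--privileged` flag or running the container on a machine that is not a TPU VM.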
Once inside the container, install Optimum-TPU:

```bash
pip install optimum-tpu -f https://storage.googleapis.com/libtpu-releases/index.html
```
To verify your setup, run this simple test:

```bash
python3 -c "import torch_xla.core.xla_model as xm; print(xm.xla_device())"
```

You should see output indicating the XLA device is available (e.g., `xla:0`).
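A slightly more defensive variant of the same check reports a clear message instead of a traceback when `torch_xla` is missing, which helps distinguish an installation problem from a TPU-access problem:

```bash
# Smoke test: print the XLA device if torch_xla is importable, otherwise
# report that the package is missing rather than failing with a traceback.
python3 - <<'EOF'
try:
    import torch_xla.core.xla_model as xm
    print("XLA device:", xm.xla_device())
except ImportError:
    print("torch_xla is not installed in this environment")
EOF
```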
After setting up your container, you are ready to run Optimum-TPU training and inference workloads inside it.