This guide walks you through setting up and running model training on TPU using the optimum-tpu
environment.
The `huggingface-pytorch-training-tpu` Docker image provides a pre-configured environment for TPU training.
Before starting, ensure you have access to a TPU instance and a Hugging Face token (passed to the container below as `HF_TOKEN`).
Launch the container with the following command:
```bash
docker run --rm --shm-size 16GB --net host --privileged \
  -v $(pwd)/artifacts:/tmp/output \
  -e HF_TOKEN=<your_hf_token_here> \
  us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-pytorch-training-tpu.2.5.1.transformers.4.46.3.py310 \
  jupyter notebook --allow-root --NotebookApp.token='' /notebooks
```
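As a quick sanity check after the container starts, a minimal sketch like the following can confirm that the TPU device nodes are visible from inside the container. This is an assumption-laden helper, not part of the image: it assumes TPU VMs expose accelerators as `/dev/accel*` device nodes.

```shell
# Hedged sketch: check whether TPU device nodes are visible.
# Assumes the host exposes accelerators as /dev/accel* (run inside the container).
if ls /dev/accel* >/dev/null 2>&1; then
  STATUS="TPU devices visible"
else
  STATUS="no TPU devices found - check --privileged and the host TPU setup"
fi
echo "$STATUS"
```

If no devices show up, the usual culprits are a missing `--privileged` flag or a TPU runtime problem on the host.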
Required Docker arguments:

- `--shm-size 16GB`: increase the default shared memory allocation
- `--net host`: use host network mode for optimal performance
- `--privileged`: required for TPU access

These arguments are needed so that the container can properly access the TPU hardware.

Optional arguments:

- `--rm`: automatically remove the container when it exits
- `-v $(pwd)/artifacts:/tmp/output`: mount a local directory for saving outputs
- `-e HF_TOKEN=<your_hf_token_here>`: pass your Hugging Face token for model access

To connect from outside the TPU instance, open:
```
http://[YOUR-IP]:8888
```

For example: `http://34.174.11.242:8888`
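The address above follows a fixed pattern, so a small shell helper can assemble it. `EXTERNAL_IP` here is a placeholder you would replace with your own VM's external IP; the value below is just the example IP from this guide.

```shell
# Build the notebook URL from the VM's external IP.
# EXTERNAL_IP is a placeholder; the value below is the example IP from this guide.
EXTERNAL_IP="34.174.11.242"
NOTEBOOK_URL="http://${EXTERNAL_IP}:8888"
echo "$NOTEBOOK_URL"   # prints http://34.174.11.242:8888
```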
To enable remote access, you may need to configure a GCP firewall rule:

```bash
gcloud compute firewall-rules create [RULE_NAME] \
  --allow tcp:8888
```
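Opening port 8888 to any source is risky, especially with the notebook token disabled. As a hedged example (the `--source-ranges` flag is part of the gcloud CLI; `[RULE_NAME]` and `[YOUR-IP]` remain placeholders as above), the rule can be restricted to a single source address:

```shell
# Hedged example: limit the firewall rule to one source IP.
# [RULE_NAME] and [YOUR-IP] are placeholders, as in the command above.
gcloud compute firewall-rules create [RULE_NAME] \
  --allow tcp:8888 \
  --source-ranges="[YOUR-IP]/32"
```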
You now have access to the Jupyter Notebook environment running in the container.
From here, you can continue your TPU training journey with the rest of the optimum-tpu documentation.