First TPU Training on Google Cloud

This guide walks you through setting up and running model training on TPU using the optimum-tpu environment.

Overview

The huggingface-pytorch-training-tpu Docker image provides a pre-configured environment for TPU training, with PyTorch/XLA, Transformers, and Jupyter Notebook pre-installed.

Prerequisites

Before starting, ensure you have a running Cloud TPU VM with Docker installed, and a Hugging Face access token.

1. Start the Jupyter Container

Launch the container with the following command:

docker run --rm --shm-size 16GB --net host --privileged \
    -v "$(pwd)/artifacts:/tmp/output" \
    -e HF_TOKEN=<your_hf_token_here> \
    us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-pytorch-training-tpu.2.5.1.transformers.4.46.3.py310 \
    jupyter notebook --allow-root --NotebookApp.token='' /notebooks

Replace `<your_hf_token_here>` with a Hugging Face access token, which you can create [here](https://huggingface.co/settings/tokens).
If you have already logged in via `huggingface-cli login`, you can instead set `HF_TOKEN=$(cat ~/.cache/huggingface/token)`.
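The launch command and token setup above can be combined into a small script. This is a sketch: it assumes the token cache path that `huggingface-cli` uses by default.

```shell
#!/bin/sh
# Use HF_TOKEN from the environment, falling back to the
# huggingface-cli token cache if present (path is an assumption).
HF_TOKEN="${HF_TOKEN:-$(cat ~/.cache/huggingface/token 2>/dev/null)}"

docker run --rm --shm-size 16GB --net host --privileged \
    -v "$(pwd)/artifacts:/tmp/output" \
    -e HF_TOKEN="$HF_TOKEN" \
    us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-pytorch-training-tpu.2.5.1.transformers.4.46.3.py310 \
    jupyter notebook --allow-root --NotebookApp.token='' /notebooks
```

The fallback expansion means an explicitly exported `HF_TOKEN` always wins over the cached token.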

Understanding the Command Options:

Required Docker arguments:

  - `--net host` and `--privileged`: give the container the host networking and device access it needs to reach the TPU
  - `--shm-size 16GB`: increases shared memory, which TPU training workloads require

Optional arguments:

  - `-v "$(pwd)/artifacts:/tmp/output"`: mounts a local `artifacts` directory so training outputs persist after the container exits
  - `-e HF_TOKEN=...`: passes your Hugging Face token into the container for model and dataset downloads
  - `--rm`: removes the container automatically when it stops

2. Connect to the Jupyter Notebook

Accessing the Interface

To connect from outside the TPU instance:


  1. Locate your TPU’s external IP in Google Cloud Console
  2. Access the Jupyter interface at http://[YOUR-IP]:8888
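If you prefer the CLI to the Console, the external IP can also be retrieved with gcloud. This is a sketch: the TPU name `my-tpu` and the zone are placeholders, and the output field path assumes a TPU VM.

```shell
# Look up the TPU VM's external IP (name and zone are placeholders).
EXTERNAL_IP=$(gcloud compute tpus tpu-vm describe my-tpu \
    --zone=us-central2-b \
    --format='get(networkEndpoints[0].accessConfig.externalIp)')

# Build the Jupyter URL to open in a browser.
echo "http://${EXTERNAL_IP}:8888"
```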

Firewall Configuration (Optional)

To enable remote access, you may need to configure GCP firewall rules:

  1. Create a new firewall rule:
    gcloud compute firewall-rules create [RULE_NAME] \
        --allow tcp:8888
  2. Ensure port 8888 is accessible
  3. Consider restricting the rule's source IP ranges to your own address, and setting a Jupyter token instead of leaving it empty
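A more restrictive version of the firewall rule above might limit access to a single trusted address. This is a sketch: the rule name and source range are placeholders.

```shell
# Allow port 8888 only from one trusted address
# (rule name and source range are placeholders).
gcloud compute firewall-rules create allow-jupyter-restricted \
    --allow tcp:8888 \
    --source-ranges=203.0.113.7/32
```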

3. Start Training Your Model


You now have access to the Jupyter Notebook environment, where you can open the bundled example notebooks under `/notebooks` and start training.

Next Steps

Continue your TPU training journey with:

  1. Gemma Fine-tuning Guide
  2. Manual Installation Guide