- `HF_TOKEN`: HuggingFace authentication token
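A small sketch of one way to provide the token: keep it in a shell variable and forward it with `-e`, instead of writing it inline as in the complete example at the end of this section (the token value below is a placeholder):

```bash
export HF_TOKEN=hf_xxx   # placeholder; replace with your own HuggingFace token
# then add `-e HF_TOKEN=${HF_TOKEN}` to the docker run command shown below
```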
### Docker specific parameters

- `--shm-size 16GB`: Shared memory allocation
- `--privileged`: Enable privileged container mode
- `--net host`: Use host network mode

These flags are needed to run a TPU container, so that the Docker container can properly access the TPU hardware.
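To isolate just these hardware-related flags, a stripped-down launch might look like the sketch below; the image tag and model id are reused from the complete example at the end of this section, and a gated model such as this one also needs `HF_TOKEN` as described above:

```bash
# Sketch: only the flags required for the container to access the TPU, plus the
# image and model id reused from the complete example at the end of this section.
docker run --shm-size 16GB --privileged --net host \
    -e HF_TOKEN=${HF_TOKEN} \
    ghcr.io/huggingface/optimum-tpu:v0.2.3-tgi \
    --model-id google/gemma-2b-it
```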
### TGI specific parameters
- `--model-id`: Model identifier to load from the HuggingFace Hub

These are parameters used by TGI and optimum-TPU to configure the server behavior; some are passed as environment variables (with `-e`), others as command-line arguments to the launcher.
Environment variables:

- `JETSTREAM_PT_DISABLE`: Disable the Jetstream PyTorch backend
- `QUANTIZATION`: Enable int8 quantization
- `MAX_BATCH_SIZE`: Set the batch size used for processing, which is static on TPUs
- `LOG_LEVEL`: Set logging verbosity (useful for debugging). It can be set to `info`, `debug`, or a comma-separated list of attributes such as `text_generation_launcher,text_generation_router=debug`
- `SKIP_WARMUP`: Skip the model warmup phase
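As an illustration, these options are passed to `docker run` with `-e`. In the sketch below, `QUANTIZATION=1` and `MAX_BATCH_SIZE=2` are the values used in the complete example at the end of this section, `LOG_LEVEL=debug` follows the description above, and treating `SKIP_WARMUP` as a 0/1 switch is an assumption:

```bash
# Sketch: TGI/optimum-TPU options passed as environment variables.
# SKIP_WARMUP=1 is assumed to follow the same 0/1 convention as QUANTIZATION.
docker run --shm-size 16GB --privileged --net host \
    -e HF_TOKEN=${HF_TOKEN} \
    -e QUANTIZATION=1 \
    -e MAX_BATCH_SIZE=2 \
    -e LOG_LEVEL=debug \
    -e SKIP_WARMUP=1 \
    ghcr.io/huggingface/optimum-tpu:v0.2.3-tgi \
    --model-id google/gemma-2b-it
```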
Command-line arguments:

- `--max-input-length`: Maximum input sequence length
- `--max-total-tokens`: Maximum combined input and output tokens per request
- `--max-batch-prefill-tokens`: Maximum number of prompt tokens processed together during the prefill phase of a batch
- `--max-batch-total-tokens`: Maximum total tokens held in a batch

You can view more options in the TGI documentation. Not all parameters are compatible with TPUs (for example, the CUDA-specific ones).
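To see how these limits interact, take the values from the complete example below: with `--max-input-length 512` and `--max-total-tokens 1024`, a prompt may be at most 512 tokens and a request can then generate at most 1024 - 512 = 512 new tokens. An annotated reading of those flags:

```bash
# Annotated reading of the values used in the complete command below:
#   --max-input-length 512          -> a single prompt may be at most 512 tokens
#   --max-total-tokens 1024         -> prompt + generated tokens per request,
#                                      i.e. at most 1024 - 512 = 512 new tokens
#   --max-batch-prefill-tokens 512  -> prompt tokens processed together during prefill
#   --max-batch-total-tokens 1024   -> total tokens held across the whole batch
```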
When running TGI inside a container (recommended), the container should be started with the parameters described above. Here's a complete example showing all major configuration options:
```bash
docker run -p 8080:80 \
    --shm-size 16GB \
    --privileged \
    --net host \
    -e QUANTIZATION=1 \
    -e MAX_BATCH_SIZE=2 \
    -e LOG_LEVEL=text_generation_router=debug \
    -v ~/hf_data:/data \
    -e HF_TOKEN=<your_hf_token_here> \
    ghcr.io/huggingface/optimum-tpu:v0.2.3-tgi \
    --model-id google/gemma-2b-it \
    --max-input-length 512 \
    --max-total-tokens 1024 \
    --max-batch-prefill-tokens 512 \
    --max-batch-total-tokens 1024
```
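Once the server reports it is ready, you can send it a test request. The snippet below is a sketch using TGI's standard `/generate` endpoint; adjust the host and port to your networking setup (8080 with the `-p 8080:80` mapping above, possibly a different port when `--net host` makes that mapping a no-op):

```bash
# Sketch: test request against the running server (adjust host/port to your setup).
curl 127.0.0.1:8080/generate \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{"inputs": "What is a TPU?", "parameters": {"max_new_tokens": 32}}'
```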