# Using TEI locally with an AMD GPU

Text-Embeddings-Inference supports AMD GPUs that are officially supported by ROCm, including the AMD Instinct MI210, MI250, MI300 and some GPUs of the AMD Radeon series.

To leverage AMD GPUs, Text-Embeddings-Inference relies on its Python backend rather than the candle backend used for CPUs, Nvidia GPUs and Metal. Support in the Python backend is more limited (Bert embeddings only) but easily extensible. We welcome contributions to extend the set of supported models.

## Usage through docker

Using docker is the recommended approach.

```shell
docker run --rm -it --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --net host \
    --device=/dev/kfd --device=/dev/dri --group-add video --ipc=host --shm-size 32g \
    ghcr.io/huggingface/text-embeddings-inference:rocm-1.2.4 \
    --model-id sentence-transformers/all-MiniLM-L6-v2
```

and then query the server with:

```shell
curl 127.0.0.1:80/embed \
    -X POST -d '{"inputs":"What is Deep Learning?"}' \
    -H 'Content-Type: application/json'
```
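
If you prefer querying the server from Python, here is a minimal sketch using the `requests` library, assuming the container started above is listening on `127.0.0.1:80`:

```python
import requests

# Send the same request as the curl example to the /embed route.
response = requests.post(
    "http://127.0.0.1:80/embed",
    json={"inputs": "What is Deep Learning?"},
    headers={"Content-Type": "application/json"},
)
response.raise_for_status()

# /embed returns one embedding vector per input, as a list of lists of floats.
embeddings = response.json()
print(len(embeddings[0]))  # 384 for sentence-transformers/all-MiniLM-L6-v2
```

The `/embed` route also accepts a list of strings as `inputs`, so several sentences can be embedded in a single batched request.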