Text Embeddings Inference (TEI)
Text Embeddings Inference (TEI) is a robust, production-ready engine designed for fast and efficient generation of text
embeddings from a wide range of transformer models. Built for scalability and reliability, TEI streamlines the deployment
of embedding models for search, retrieval, clustering, and semantic understanding tasks.
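To make the workflow concrete, the sketch below queries a running TEI server's `/embed` endpoint, which accepts a JSON payload with an `inputs` field and returns one embedding vector per input. The host, port, and sample texts are illustrative assumptions; a TEI instance must already be serving an embedding model at that address.

```python
# Minimal sketch: querying a locally running TEI server.
# Assumes a TEI instance is serving on localhost:8080 (illustrative).
import requests


def embed(texts: list[str], url: str = "http://localhost:8080/embed") -> list[list[float]]:
    """Request embeddings for a batch of texts from a TEI server."""
    response = requests.post(
        url,
        json={"inputs": texts},  # /embed accepts a string or a list of strings
        headers={"Content-Type": "application/json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # one embedding vector per input text


if __name__ == "__main__":
    vectors = embed(["What is deep learning?", "TEI serves text embeddings."])
    print(f"Got {len(vectors)} embeddings of dimension {len(vectors[0])}")
```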
Key Features:
- Efficient Resource Utilization: Benefit from small Docker images and rapid boot times.
- Dynamic Batching: TEI incorporates token-based dynamic batching, optimizing resource utilization during inference (see the conceptual sketch after this list).
- Optimized Inference: TEI uses optimized transformers code for inference, leveraging Flash Attention, Candle, and cuBLASLt.
- Flexible Model Formats: Supports models in both the Safetensors and ONNX formats.
- Production-Ready: TEI supports distributed tracing through Open Telemetry and exports Prometheus metrics.
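TEI's real batching logic lives in its Rust server; the following is only a conceptual Python sketch of what token-based dynamic batching means: requests are grouped under a total-token budget rather than a fixed request count, so short and long inputs share the accelerator efficiently. The queue structure, budget value, and token counter are illustrative assumptions, not TEI's implementation.

```python
# Conceptual sketch of token-based dynamic batching (illustrative only;
# TEI's actual implementation is in Rust and differs in detail).
from collections import deque


def build_batches(requests: list[str], max_batch_tokens: int, token_count) -> list[list[str]]:
    """Group queued texts into batches bounded by a total-token budget
    rather than a fixed number of requests."""
    queue = deque(requests)
    batches: list[list[str]] = []
    while queue:
        batch: list[str] = []
        tokens_in_batch = 0
        # Greedily fill the batch until the next request would exceed the budget.
        while queue and tokens_in_batch + token_count(queue[0]) <= max_batch_tokens:
            text = queue.popleft()
            tokens_in_batch += token_count(text)
            batch.append(text)
        if not batch:  # a single oversized request gets its own batch
            batch.append(queue.popleft())
        batches.append(batch)
    return batches


# Example: crude whitespace token count; a real server would use the model tokenizer.
print(build_batches(["a b c", "d e", "f g h i j", "k"], max_batch_tokens=5,
                    token_count=lambda t: len(t.split())))
```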
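For the Prometheus metrics mentioned above, a common pattern is to scrape the server's `/metrics` endpoint, which serves the plain-text Prometheus exposition format. The address below is an assumption for the example, and the metric names you see will depend on the TEI version.

```python
# Minimal sketch: reading TEI's Prometheus metrics endpoint.
# Assumes a TEI server on localhost:8080 (illustrative).
import requests

resp = requests.get("http://localhost:8080/metrics", timeout=10)
resp.raise_for_status()
# Prometheus exposition format is plain text, one metric per line;
# lines starting with "#" are HELP/TYPE comments.
for line in resp.text.splitlines():
    if line and not line.startswith("#"):
        print(line)
```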