TGI is a production-grade inference engine built in Rust and Python, designed for high-performance serving of open-source LLMs (e.g. LLaMA, Falcon, StarCoder, BLOOM and many more). The core features that make TGI a good choice are:
/v1/chat or /v1/completions APIs, Prometheus metrics, OpenTelemetry tracing, watermarking, logit controls, and JSON schema guidance.

By default the TGI version will be the latest available one (with some delay), but you can also specify a different version by changing the container URL.
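As an example of the API support, a deployed TGI endpoint can be queried through its OpenAI-compatible chat route. The following is a minimal sketch using the openai Python client; the endpoint URL and access token are placeholders for your own values.

```python
# Minimal sketch: querying a TGI endpoint through its OpenAI-compatible
# /v1/chat/completions route. The base_url and api_key below are placeholders
# for your own endpoint URL and Hugging Face access token.
from openai import OpenAI

client = OpenAI(
    base_url="https://<your-endpoint>.endpoints.huggingface.cloud/v1/",  # placeholder endpoint URL
    api_key="hf_xxx",  # your Hugging Face access token
)

response = client.chat.completions.create(
    model="tgi",  # TGI serves a single model, so this name is not used for routing
    messages=[{"role": "user", "content": "What is Text Generation Inference?"}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```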
When selecting a model to deploy, the Inference Endpoints UI automatically checks whether a model is supported by TGI. If it is, you’ll see
the option presented under Container Configuration where you can change the following settings:

Max Number of Tokens: a value of 1512 means users can send either a prompt of 1000 tokens and generate 512 new tokens, or a prompt of 1 token and generate 1511 new tokens. The larger this value, the more memory each request can take up and the less effective batching can be.

Max Batch Total Tokens: this determines how many concurrent requests you can serve. If you set Max Number of Tokens to 100 and Max Batch Total Tokens to 100 as well, you can only serve one request at a time.

In general, zero-configuration (see below) is recommended for most cases. TGI supports several other configuration parameters, and you'll find the complete list in the TGI documentation. These can all be set by passing the values as environment variables to the container (link to guide).
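To illustrate both points (pinning a TGI version via the container URL and passing settings as environment variables), here is a hedged sketch using the create_inference_endpoint helper from huggingface_hub. The endpoint name, model, instance choices, image tag, and values are illustrative assumptions, not recommendations.

```python
# Hedged sketch: creating an Inference Endpoint that pins a specific TGI image
# and passes TGI settings as container environment variables. All names,
# instance choices, and values below are illustrative assumptions.
from huggingface_hub import create_inference_endpoint

endpoint = create_inference_endpoint(
    "llama-tgi-demo",                                # hypothetical endpoint name
    repository="meta-llama/Llama-3.1-8B-Instruct",   # example model
    framework="pytorch",
    task="text-generation",
    accelerator="gpu",
    vendor="aws",
    region="us-east-1",
    instance_size="x1",
    instance_type="nvidia-a10g",
    custom_image={
        "health_route": "/health",
        # Pin a TGI version by pointing at a specific container image tag.
        "url": "ghcr.io/huggingface/text-generation-inference:3.0.1",
        "env": {
            "MODEL_ID": "/repository",
            "MAX_INPUT_TOKENS": "1023",
            "MAX_TOTAL_TOKENS": "1512",
            "MAX_BATCH_PREFILL_TOKENS": "2048",
        },
    },
)
endpoint.wait()  # block until the endpoint is running
print(endpoint.url)
```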
Introduced in TGI v3, the zero-config mode helps you get the most out of your hardware without manual configuration and trial and error. If you leave the values undefined, TGI will automatically select, at server startup and based on the hardware it's running on, the maximum possible values for max input length, max number of tokens, max batch prefill tokens, and max batch total tokens. This means you'll use your hardware to its full capacity.
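If you want to see which limits were actually picked at startup, a running TGI server exposes them on its /info route. The sketch below assumes a deployed endpoint; the URL and token are placeholders, and the exact field names can differ slightly between TGI versions.

```python
# Hedged sketch: inspecting the limits TGI selected at startup via the /info
# route of a running endpoint. URL and token are placeholders; field names
# may vary slightly between TGI versions.
import requests

ENDPOINT_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"  # placeholder
headers = {"Authorization": "Bearer hf_xxx"}  # your Hugging Face access token

info = requests.get(f"{ENDPOINT_URL}/info", headers=headers).json()
for key in ("max_input_tokens", "max_total_tokens", "max_batch_total_tokens"):
    print(key, "=", info.get(key))
```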
You can find the models that are supported by TGI by either checking the supported models list in the TGI documentation, or by checking the Inference Endpoints UI: if a model is supported by TGI, the UI will indicate this by enabling or disabling the selection under the Container Type configuration.

We also recommend reading the TGI documentation for more in-depth information.