vLLM
vLLM is a high-performance, memory-efficient inference engine for open-source LLMs.
It delivers efficient scheduling, KV-cache handling, batching, and decoding—all wrapped in a production-ready server.
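As a quick illustration of the offline Python API, the sketch below loads a small placeholder model and generates completions for a couple of prompts; the model name and sampling settings are only examples.

```python
# Minimal sketch of offline inference with vLLM's Python API.
# The model name and sampling values are placeholders; any Hugging Face
# causal LM supported by vLLM can be substituted.
from vllm import LLM, SamplingParams

prompts = [
    "Explain PagedAttention in one sentence.",
    "What is continuous batching?",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(model="facebook/opt-125m")            # loads the model into the engine
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```

For the production-ready server mentioned above, vLLM also ships an OpenAI-compatible HTTP server (for example via `vllm serve <model>` in recent releases), so existing OpenAI client code can be pointed at a self-hosted model.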
Core features:
- PagedAttention for memory efficiency
- Continuous batching
- Optimized CUDA/HIP execution
- Speculative decoding and chunked prefill (see the sketch after this list)
- Multi-backend and hardware support: runs on NVIDIA GPUs, AMD GPUs, and AWS Neuron, among other platforms
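The sketch below shows how a few of these features surface as engine options in the offline API. The flag names follow recent vLLM releases and may differ between versions, and the model is a placeholder.

```python
# Hedged sketch: selected engine options related to the features above.
# Flag names follow recent vLLM releases and may vary between versions.
from vllm import LLM, SamplingParams

llm = LLM(
    model="facebook/opt-125m",       # placeholder model
    gpu_memory_utilization=0.90,     # fraction of GPU memory for weights and the paged KV cache
    max_num_seqs=256,                # cap on sequences scheduled together by continuous batching
    enable_chunked_prefill=True,     # split long prompt prefills across scheduler steps
)

# All prompts are submitted at once; the engine batches them continuously
# and pages their KV caches with PagedAttention under the hood.
outputs = llm.generate(
    ["vLLM schedules these prompts together.", "No manual batching is needed."],
    SamplingParams(max_tokens=32),
)
for out in outputs:
    print(out.outputs[0].text)
```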
Configuration
