---
viewer: false
tags:
  - uv-script
  - vllm
  - gpu
  - inference
---

# vLLM Inference Scripts

Ready-to-run UV scripts for GPU-accelerated inference using vLLM.

These scripts use UV's inline script metadata to manage dependencies automatically: just run them with `uv run` and everything installs on first use.

## 📋 Available Scripts

### classify-dataset.py

Batch text classification using BERT-style encoder models (e.g., BERT, RoBERTa, DeBERTa, ModernBERT) with vLLM's optimized inference engine.

Note: This script is specifically for encoder-only classification models, not generative LLMs.

**Features:**

- 🚀 High-throughput batch processing
- 🏷️ Automatic label mapping from model config (see the sketch below)
- 📊 Confidence scores for predictions
- 🤗 Direct integration with Hugging Face Hub
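Label mapping typically comes straight from the model's config. The snippet below is a minimal, hypothetical sketch of that idea (the model ID is a placeholder, and this is not the script's exact code):

```python
# Minimal sketch: recover human-readable labels from a classification
# model's config. The checkpoint name is a placeholder, not a real
# requirement of classify-dataset.py.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("username/my-classifier")
id2label = config.id2label  # e.g. {0: "NEGATIVE", 1: "POSITIVE"}

predicted_index = 1  # class index returned by the classifier for one example
print(id2label[predicted_index])
```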

**Usage:**

```bash
# Local execution (requires GPU)
uv run classify-dataset.py \
    davanstrien/ModernBERT-base-is-new-arxiv-dataset \
    username/input-dataset \
    username/output-dataset \
    --inference-column text \
    --batch-size 10000
```

**HF Jobs execution:**

```bash
hf jobs uv run \
    --flavor l4x1 \
    --image vllm/vllm-openai \
    https://huggingface.co/datasets/uv-scripts/vllm/resolve/main/classify-dataset.py \
    davanstrien/ModernBERT-base-is-new-arxiv-dataset \
    username/input-dataset \
    username/output-dataset \
    --inference-column text \
    --batch-size 100000
```

### generate-responses.py

Generate responses for chat-formatted prompts using generative LLMs (e.g., Llama, Qwen, Mistral) with vLLM's high-performance inference engine.

**Features:**

- 💬 Automatic chat template application (see the sketch below)
- 🔀 Multi-GPU tensor parallelism support
- 📏 Smart filtering for prompts exceeding context length
- 📊 Comprehensive dataset cards with generation metadata
- ⚡ HF Transfer enabled for fast model downloads
- 🎛️ Full control over sampling parameters
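Chat template application is handled by the tokenizer. The snippet below is a rough, hypothetical illustration of that step (the model ID is a placeholder):

```python
# Rough sketch: render chat messages into a model-specific prompt string.
# The model ID is a placeholder; any chat-tuned model with a chat template works.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarise vLLM in one sentence."},
]

# Produce the exact prompt text the model expects, ready for generation.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```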

**Usage:**

```bash
# Local execution with default Qwen model
uv run generate-responses.py \
    username/input-dataset \
    username/output-dataset \
    --messages-column messages \
    --max-tokens 1024

# With custom model and parameters
uv run generate-responses.py \
    username/input-dataset \
    username/output-dataset \
    --model-id meta-llama/Llama-3.1-8B-Instruct \
    --temperature 0.9 \
    --top-p 0.95 \
    --max-model-len 8192
```

**HF Jobs execution (multi-GPU):**

```bash
hf jobs uv run \
    --flavor l4x4 \
    --image vllm/vllm-openai \
    -e UV_PRERELEASE=if-necessary \
    -e HF_TOKEN=hf_*** \
    https://huggingface.co/datasets/uv-scripts/vllm/raw/main/generate-responses.py \
    davanstrien/cards_with_prompts \
    davanstrien/test-generated-responses \
    --model-id Qwen/Qwen3-30B-A3B-Instruct-2507 \
    --gpu-memory-utilization 0.9 \
    --max-tokens 600 \
    --max-model-len 8000
```

## 🎯 Requirements

All scripts in this collection require:

- NVIDIA GPU with CUDA support
- Python 3.10+
- UV package manager (install UV)

## 🚀 Performance Tips

### GPU Selection

- **L4 GPU** (`--flavor l4x1`): Best value for classification and smaller models
- **L4x4** (`--flavor l4x4`): Multi-GPU setup for large models (30B+ parameters)
- **A10G GPU** (`--flavor a10g-large`): Higher memory for larger models
- **A100** (`--flavor a100-large`): Maximum performance for demanding workloads
- Adjust batch size and tensor parallelism based on your GPU configuration

### Batch Sizes

- **Classification**: Start with 10,000 locally, up to 100,000 on HF Jobs
- **Generation**: vLLM handles batching automatically; no manual configuration needed

### Multi-GPU Tensor Parallelism

- Auto-detects available GPUs by default
- Use `--tensor-parallel-size` to specify the GPU count manually
- Required for models that don't fit in a single GPU's memory (e.g., 30B+ models); see the sketch below
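Under the hood this maps to vLLM's `tensor_parallel_size` option. The following is a minimal, hypothetical sketch (the model ID, GPU count, and prompt are placeholders, not the script's exact code):

```python
# Minimal sketch: shard a large model across multiple GPUs with vLLM.
# Model ID and tensor_parallel_size are placeholders for illustration.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen3-30B-A3B-Instruct-2507",
    tensor_parallel_size=4,        # split the weights across 4 GPUs
    gpu_memory_utilization=0.9,
)

params = SamplingParams(temperature=0.7, max_tokens=200)
outputs = llm.generate(["Explain tensor parallelism in one sentence."], params)
print(outputs[0].outputs[0].text)
```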

### Handling Long Contexts

The generate-responses.py script includes smart prompt filtering (a rough sketch of the idea follows this list):

- **Default behavior**: Skips prompts exceeding `max_model_len`
- Use `--max-model-len` to limit context length and reduce memory usage
- Use `--no-skip-long-prompts` to fail on long prompts instead of skipping them
- Skipped prompts receive empty responses and are logged
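A hypothetical sketch of that kind of length check (not the script's exact implementation; the tokenizer and limit are placeholders):

```python
# Rough sketch: drop prompts whose token count exceeds the context window.
# Placeholder tokenizer and limit; generate-responses.py's real logic may differ.
from transformers import AutoTokenizer

MAX_MODEL_LEN = 8000
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

prompts = ["a short prompt", "a very long prompt ..."]  # rendered prompt strings
lengths = [len(tokenizer(p).input_ids) for p in prompts]

kept = [p for p, n in zip(prompts, lengths) if n <= MAX_MODEL_LEN]
skipped = [p for p, n in zip(prompts, lengths) if n > MAX_MODEL_LEN]
print(f"kept {len(kept)} prompts, skipped {len(skipped)} over {MAX_MODEL_LEN} tokens")
```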

## 📚 About vLLM

vLLM is a high-throughput inference engine optimized for:

- Fast model serving with PagedAttention
- Efficient batch processing
- Support for various model architectures
- Seamless integration with Hugging Face models

## 🔧 Technical Details

### UV Script Benefits

- **Zero setup**: Dependencies install automatically on first run
- **Reproducible**: Locked dependencies ensure consistent behavior
- **Self-contained**: Everything needed is in the script file
- **Direct execution**: Run from local files or URLs

### Dependencies

Scripts use UV's inline metadata for automatic dependency management:

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "datasets",
#     "flashinfer-python",
#     "huggingface-hub[hf_transfer]",
#     "torch",
#     "transformers",
#     "vllm",
# ]
# ///
```

For bleeding-edge features, set the `UV_PRERELEASE=if-necessary` environment variable to allow pre-release versions when needed.

### Docker Image

For HF Jobs, we recommend the official vLLM Docker image: `vllm/vllm-openai`.

This image includes:

- Pre-installed CUDA libraries
- vLLM and all dependencies
- The UV package manager
- An environment optimized for GPU inference

### Environment Variables

- `HF_TOKEN`: Your Hugging Face authentication token (auto-detected if you are logged in)
- `UV_PRERELEASE=if-necessary`: Allow pre-release packages when required
- `HF_HUB_ENABLE_HF_TRANSFER=1`: Automatically enabled by the scripts for faster downloads (see the sketch below)
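As an illustration of how a script can turn HF Transfer on before touching the Hub (a hypothetical sketch, not necessarily the exact code these scripts use; the model ID is a placeholder):

```python
# Sketch: enable HF Transfer before any Hub download is triggered.
# Requires huggingface-hub[hf_transfer], which is already in the script dependencies.
import os

os.environ.setdefault("HF_HUB_ENABLE_HF_TRANSFER", "1")

from huggingface_hub import snapshot_download  # import after the env var is set

# HF_TOKEN (or a cached login) is picked up automatically if present.
local_path = snapshot_download("Qwen/Qwen2.5-0.5B-Instruct")
print(local_path)
```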

## 📝 Contributing

Have a vLLM script to share? We welcome contributions that:

- Solve real inference problems
- Include clear documentation
- Follow UV script best practices
- Include HF Jobs examples

## 🔗 Resources