---
viewer: false
tags:
  - uv-script
  - vllm
  - gpu
  - inference
---

# vLLM Inference Scripts

Ready-to-run scripts for GPU-accelerated inference using vLLM.

## 📋 Available Scripts

### classify-dataset.py

Batch text classification using BERT-style models with vLLM's optimized inference engine.

Features:

- 🚀 High-throughput batch processing
- 🏷️ Automatic label mapping from model config
- 📊 Confidence scores for predictions
- 🤗 Direct integration with Hugging Face Hub

Usage:

```bash
# Local execution (requires GPU)
uv run classify-dataset.py \
    davanstrien/ModernBERT-base-is-new-arxiv-dataset \
    username/input-dataset \
    username/output-dataset \
    --inference-column text \
    --batch-size 10000
```
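
The three positional arguments are the classification model, the input dataset to read from, and the output dataset to write predictions to; `--inference-column` names the dataset column containing the text to classify, and `--batch-size` controls how many rows are sent to vLLM per batch.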

HF Jobs execution:

```bash
hfjobs run \
    --flavor l4x1 \
    --secret HF_TOKEN=$(python -c "from huggingface_hub import HfFolder; print(HfFolder.get_token())") \
    vllm/vllm-openai:latest \
    /bin/bash -c '
        uv run https://huggingface.co/datasets/uv-scripts/vllm/resolve/main/classify-dataset.py \
            davanstrien/ModernBERT-base-is-new-arxiv-dataset \
            username/input-dataset \
            username/output-dataset \
            --inference-column text \
            --batch-size 100000
    ' \
    --project vllm-classify \
    --name my-classification-job
```
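
Under the hood, the script leans on vLLM's support for sequence-classification models. The snippet below is only a hedged, minimal sketch of that idea, not the actual implementation of `classify-dataset.py`: it assumes a recent vLLM release that exposes the `classify` task, and the use of `transformers.AutoConfig` for label mapping is an illustrative choice.

```python
# Minimal sketch (not classify-dataset.py itself): classify a few texts with vLLM
# and map class probabilities to labels via the model config's id2label.
from transformers import AutoConfig
from vllm import LLM

model_id = "davanstrien/ModernBERT-base-is-new-arxiv-dataset"
llm = LLM(model=model_id, task="classify")        # assumes vLLM's classification task
id2label = AutoConfig.from_pretrained(model_id).id2label

outputs = llm.classify(["Scaling laws for neural language models revisited."])
for output in outputs:
    probs = output.outputs.probs                   # one probability per class
    best = max(range(len(probs)), key=probs.__getitem__)
    print(id2label[best], round(probs[best], 3))   # predicted label + confidence
```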

🎯 Requirements

All scripts in this collection require:

- NVIDIA GPU with CUDA support
- Python 3.10+
- UV package manager (auto-installed via script)

## 🚀 Performance Tips

### GPU Selection

- **L4 GPU** (`--flavor l4x1`): Best value for classification tasks
- **A10 GPU** (`--flavor a10`): Higher memory for larger models
- Adjust batch size based on GPU memory

### Batch Sizes

- **Local GPUs**: Start with 10,000 and adjust based on memory
- **HF Jobs**: Can use larger batches (50,000-100,000) with cloud GPUs; see the sketch below
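
A rough sketch of that batching pattern, assuming a recent vLLM build with the `classify` task; the dataset names, column, and batch size are placeholders to adapt to your GPU memory, not values taken from `classify-dataset.py`:

```python
# Hypothetical chunked-classification loop; names and sizes are placeholders.
from datasets import load_dataset
from vllm import LLM

llm = LLM(model="davanstrien/ModernBERT-base-is-new-arxiv-dataset", task="classify")
ds = load_dataset("username/input-dataset", split="train")

batch_size = 10_000                      # lower this if you run out of GPU memory
all_probs = []
for start in range(0, len(ds), batch_size):
    texts = ds["text"][start : start + batch_size]
    outputs = llm.classify(texts)        # one vLLM call per chunk of rows
    all_probs.extend(out.outputs.probs for out in outputs)
```

Larger chunks generally improve throughput until GPU memory becomes the bottleneck.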

## 📚 About vLLM

vLLM is a high-throughput inference engine optimized for:

- Fast model serving with PagedAttention
- Efficient batch processing
- Support for various model architectures
- Seamless integration with Hugging Face models

## 🔧 Technical Details

### Dependencies

Scripts use vLLM's nightly builds and FlashInfer for optimal performance:

```python
# [[tool.uv.index]]
# url = "https://flashinfer.ai/whl/cu126/torch2.6"
#
# [[tool.uv.index]]
# url = "https://wheels.vllm.ai/nightly"
```

### Docker Image

For HF Jobs, we use the official vLLM Docker image: `vllm/vllm-openai:latest`

This image includes:

- Pre-installed CUDA libraries
- vLLM and all dependencies
- UV package manager
- Optimized for GPU inference

## 📝 Contributing

Have a vLLM script to share? We welcome contributions that:

- Solve real inference problems
- Include clear documentation
- Follow UV script best practices
- Include HF Jobs examples

## 🔗 Resources