---
viewer: false
tags: [uv-script, vllm, gpu, inference]
---
# vLLM Inference Scripts
Ready-to-run UV scripts for GPU-accelerated inference using [vLLM](https://github.com/vllm-project/vllm).
These scripts use [UV's inline script metadata](https://docs.astral.sh/uv/guides/scripts/) to manage dependencies automatically: just run them with `uv run` and everything installs on first use.
## Available Scripts
### classify-dataset.py
Batch text classification using BERT-style encoder models (e.g., BERT, RoBERTa, DeBERTa, ModernBERT) with vLLM's optimized inference engine.
**Note**: This script is specifically for encoder-only classification models, not generative LLMs.
**Features:**
- High-throughput batch processing
- Automatic label mapping from model config
- Confidence scores for predictions
- Direct integration with Hugging Face Hub
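For encoder classifiers, the confidence score is conventionally the softmax probability of the predicted label, computed from the model's raw logits. A minimal sketch of that computation (the function name and example logits are illustrative, not taken from the script):

```python
import math

def softmax_confidence(logits):
    """Convert raw classifier logits to a (predicted_index, confidence) pair.

    The predicted label is the argmax of the logits; its softmax
    probability serves as the confidence score.
    """
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return best, probs[best]

# Hypothetical logits for a two-label classifier
label_idx, confidence = softmax_confidence([-1.2, 2.3])
```

The script maps `label_idx` back to a human-readable label using the `id2label` mapping in the model's config.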
**Usage:**
```bash
# Local execution (requires GPU)
uv run classify-dataset.py \
davanstrien/ModernBERT-base-is-new-arxiv-dataset \
username/input-dataset \
username/output-dataset \
--inference-column text \
--batch-size 10000
```
**HF Jobs execution:**
```bash
hfjobs run \
--flavor l4x1 \
--secret HF_TOKEN=$(python -c "from huggingface_hub import get_token; print(get_token())") \
vllm/vllm-openai:latest \
/bin/bash -c '
uv run https://huggingface.co/datasets/uv-scripts/vllm/resolve/main/classify-dataset.py \
davanstrien/ModernBERT-base-is-new-arxiv-dataset \
username/input-dataset \
username/output-dataset \
--inference-column text \
--batch-size 100000
' \
--project vllm-classify \
--name my-classification-job
```
## Requirements
All scripts in this collection require:
- **NVIDIA GPU** with CUDA support
- **Python 3.10+**
- **UV package manager** ([install UV](https://docs.astral.sh/uv/getting-started/installation/))
## Performance Tips
### GPU Selection
- **L4 GPU** (`--flavor l4x1`): Best value for classification tasks
- **A10 GPU** (`--flavor a10`): Higher memory for larger models
- Adjust batch size based on GPU memory
### Batch Sizes
- **Local GPUs**: Start with 10,000 and adjust based on memory
- **HF Jobs**: Can use larger batches (50,000-100,000) with cloud GPUs
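The batch-size guidance above amounts to slicing the input column into fixed-size chunks and running inference one chunk at a time, so GPU memory bounds the chunk rather than the whole dataset. A rough sketch of that loop, with a stand-in `classify` function in place of the real vLLM call (all names here are illustrative):

```python
def iter_batches(rows, batch_size):
    """Yield consecutive slices of `rows`, each at most `batch_size` long."""
    for start in range(0, len(rows), batch_size):
        yield rows[start:start + batch_size]

def classify(batch):
    # Stand-in for the actual vLLM inference call; returns one label per row.
    return [len(text) % 2 for text in batch]

texts = [f"example document {i}" for i in range(25)]
predictions = []
for batch in iter_batches(texts, batch_size=10):
    predictions.extend(classify(batch))
```

With 25 rows and `batch_size=10`, this produces three batches (10, 10, 5) and one prediction per input row; raising `batch_size` trades memory for fewer, larger inference calls.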
## About vLLM
vLLM is a high-throughput inference engine optimized for:
- Fast model serving with PagedAttention
- Efficient batch processing
- Support for various model architectures
- Seamless integration with Hugging Face models
## Technical Details
### UV Script Benefits
- **Zero setup**: Dependencies install automatically on first run
- **Reproducible**: Locked dependencies ensure consistent behavior
- **Self-contained**: Everything needed is in the script file
- **Direct execution**: Run from local files or URLs
### Dependencies
Scripts use UV's inline metadata with custom package indexes for vLLM's optimized builds:
```python
# /// script
# requires-python = ">=3.10"
# dependencies = ["vllm", "datasets", "torch", ...]
#
# [[tool.uv.index]]
# url = "https://flashinfer.ai/whl/cu126/torch2.6"
#
# [[tool.uv.index]]
# url = "https://wheels.vllm.ai/nightly"
# ///
```
### Docker Image
For HF Jobs, we use the official vLLM Docker image: `vllm/vllm-openai:latest`
This image includes:
- Pre-installed CUDA libraries
- vLLM and all dependencies
- UV package manager
- Optimized for GPU inference
## Contributing
Have a vLLM script to share? We welcome contributions that:
- Solve real inference problems
- Include clear documentation
- Follow UV script best practices
- Include HF Jobs examples
## Resources
- [vLLM Documentation](https://docs.vllm.ai/)
- [UV Documentation](https://docs.astral.sh/uv/)
- [UV Scripts Organization](https://huggingface.co/uv-scripts)