# OpenAI GPT OSS Models - Open Source Language Models with Reasoning
Generate responses with transparent chain-of-thought reasoning using OpenAI's new open-source GPT models. Run them on cloud GPUs with zero setup!
## Quick Setup for HF Jobs (One-time)
```bash
# Install huggingface-hub CLI using uv
uv tool install huggingface-hub
# Login to Hugging Face
huggingface-cli login
# Now you're ready to run jobs!
```
Need more help? Check the [HF Jobs documentation](https://huggingface.co/docs/huggingface_hub/guides/job).
## Try It Now! Copy & Run This Command:
```bash
# Generate 50 haiku with reasoning (~5 minutes on A10G)
huggingface-cli job run --gpu-flavor a10g-small \
uv run https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_vllm.py \
--input-dataset davanstrien/haiku_dpo \
--output-dataset haiku-reasoning \
--prompt-column question \
--max-samples 50
```
That's it! Your dataset will be generated and pushed to `your-username/haiku-reasoning`.
## What You Get
The models output structured reasoning in separate channels:
```json
{
"prompt": "Write a haiku about mountain serenity",
"think": "I need to create a haiku with 5-7-5 syllable structure. Mountains suggest stillness, permanence. For serenity, I'll use calm imagery like 'silent peaks' (3 syllables)...",
"content": "Silent peaks stand tall\nClouds drift through morning stillness\nPeace in stone and sky",
"reasoning_level": "high",
"model": "openai/gpt-oss-20b"
}
```
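Once the job finishes, you can sanity-check the pushed dataset straight from the Hub. A minimal peek, assuming the quick-start output repo and the column names shown above:

```bash
# Inspect the first generated row (replace your-username with your Hub name)
uv run --with datasets python -c "
from datasets import load_dataset
ds = load_dataset('your-username/haiku-reasoning', split='train')
print(ds[0]['think'])    # the reasoning channel
print(ds[0]['content'])  # the final response
"
```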
## More Examples
### Use Your Own Dataset
```bash
# Process your entire dataset
huggingface-cli job run --gpu-flavor a10g-small \
uv run https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_vllm.py \
--input-dataset your-prompts \
--output-dataset my-responses
# Use the larger 120B model
huggingface-cli job run --gpu-flavor a100-large \
uv run https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_vllm.py \
--input-dataset your-prompts \
--output-dataset my-responses-120b \
--model-id openai/gpt-oss-120b
```
### Process Different Dataset Types
```bash
# Math problems with step-by-step reasoning
huggingface-cli job run --gpu-flavor a10g-small \
uv run https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_vllm.py \
--input-dataset math-problems \
--output-dataset math-solutions \
--reasoning-level high
# Code generation with explanation
huggingface-cli job run --gpu-flavor a10g-small \
uv run https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_vllm.py \
--input-dataset code-prompts \
--output-dataset code-explained \
--max-tokens 1024
# Test with just 10 samples
huggingface-cli job run --gpu-flavor a10g-small \
uv run https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_vllm.py \
--input-dataset your-dataset \
--output-dataset quick-test \
--max-samples 10
```
## Two Script Options
1. **`gpt_oss_vllm.py`** - High-performance batch generation using vLLM (recommended)
2. **`gpt_oss_transformers.py`** - Standard transformers implementation (fallback)
### Transformers Fallback (if vLLM has issues)
```bash
# Same command, different script!
huggingface-cli job run --gpu-flavor a10g-small \
uv run https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_transformers.py \
--input-dataset davanstrien/haiku_dpo \
--output-dataset haiku-reasoning \
--prompt-column question \
--max-samples 50
```
## GPU Flavors and Costs
| Model | GPU Flavor | Memory | Cost/Hour | Best For |
|-------|------------|--------|-----------|----------|
| `gpt-oss-20b` | `a10g-large` | 48GB | $2.50 | 20B model (needs ~40GB) |
| `gpt-oss-20b` | `a100-large` | 80GB | $4.34 | 20B with headroom |
| `gpt-oss-120b` | `4xa100` | 320GB | $17.36 | 120B model (needs ~240GB) |
| `gpt-oss-120b` | `8xl40s` | 384GB | $23.50 | 120B maximum speed |
**Note**: The MXFP4-quantized weights are dequantized to bf16 when the model loads, so memory requirements are set by the bf16 footprint (~40GB for 20B, ~240GB for 120B), not by the much smaller quantized checkpoint.
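The sizing above follows from a quick back-of-envelope estimate: bf16 needs roughly 2 bytes per parameter for the weights alone, before KV cache and activation overhead:

```bash
# Weights-only bf16 estimate: parameter count (billions) x 2 bytes
echo "gpt-oss-20b:  ~$((20 * 2)) GB"   # fits a10g-large (48GB) / a100-large (80GB)
echo "gpt-oss-120b: ~$((120 * 2)) GB"  # fits 4xa100 (320GB) / 8xl40s (384GB)
```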
## Local Execution
If you have a local GPU:
```bash
# Using vLLM (recommended)
uv run gpt_oss_vllm.py \
--input-dataset davanstrien/haiku_dpo \
--output-dataset haiku-reasoning \
--prompt-column question \
--max-samples 50
# Using Transformers
uv run gpt_oss_transformers.py \
--input-dataset davanstrien/haiku_dpo \
--output-dataset haiku-reasoning \
--prompt-column question \
--max-samples 50
```
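Both scripts push results to the Hub, so authenticate once before a local run. Either of the standard huggingface_hub mechanisms works (the token below is a placeholder):

```bash
# Interactive login (stores the token for future runs)
huggingface-cli login

# Or non-interactive, e.g. in CI: huggingface_hub reads HF_TOKEN
export HF_TOKEN=hf_xxxxxxxxxxxx  # placeholder - use your own token
```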
## Parameters
| Parameter | Description | Default |
|-----------|-------------|---------|
| `--input-dataset` | Source dataset on HF Hub | Required |
| `--output-dataset` | Output dataset name (auto-prefixed with your username) | Required |
| `--prompt-column` | Column containing prompts | `prompt` |
| `--model-id` | Model to use | `openai/gpt-oss-20b` |
| `--reasoning-level` | Reasoning depth (high/medium/low) | `high` |
| `--max-samples` | Limit number of examples | None (all) |
| `--temperature` | Generation temperature | `0.7` |
| `--max-tokens` | Max tokens to generate | `512` |
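As a worked example combining several of these parameters in one run (the dataset names are placeholders):

```bash
huggingface-cli job run --gpu-flavor a10g-small \
uv run https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_vllm.py \
--input-dataset your-prompts \
--output-dataset tuned-run \
--prompt-column question \
--reasoning-level medium \
--temperature 0.3 \
--max-tokens 1024 \
--max-samples 100
```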
## Key Features
- **Open Source Models**: `openai/gpt-oss-20b` and `openai/gpt-oss-120b`
- **Structured Output**: Separate channels for reasoning (`analysis`) and response (`final`)
- **Zero Setup**: Run with a single command on HF Jobs
- **Flexible Input**: Works with any prompt dataset
- **Automatic Upload**: Results pushed directly to your Hub account
## Use Cases
1. **Training Data**: Create datasets with built-in reasoning explanations (see the sketch after this list)
2. **Evaluation**: Generate test sets where each answer includes its rationale
3. **Research**: Study how large models approach different types of problems
4. **Applications**: Build systems that can explain their outputs
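A minimal sketch of the training-data use case, folding the generated columns into chat-style records (the `<think>` wrapper and the repo name are illustrative conventions, not something the script emits):

```bash
uv run --with datasets python - <<'EOF'
from datasets import load_dataset

ds = load_dataset("your-username/haiku-reasoning", split="train")

# Fold reasoning and answer into a single assistant turn
def to_chat(row):
    return {"messages": [
        {"role": "user", "content": row["prompt"]},
        {"role": "assistant",
         "content": f"<think>{row['think']}</think>\n{row['content']}"},
    ]}

print(to_chat(ds[0]))
EOF
```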
## Which Script to Use?
- **`gpt_oss_vllm.py`**: First choice for performance and scale
- **`gpt_oss_transformers.py`**: Fallback if vLLM has compatibility issues
## Requirements
For HF Jobs:
- Hugging Face account (free)
- `huggingface-hub` CLI tool
For local execution:
- Python 3.10+
- GPU with CUDA support
- Hugging Face token
## Contributing
This is part of the [uv-scripts](https://huggingface.co/uv-scripts) collection. Contributions and improvements welcome!
## License
Apache 2.0 - the same license as the OpenAI GPT OSS models
---
**Ready to generate data with reasoning?** Copy the command at the top and run it!