|
# OpenAI GPT OSS Models - Works on Regular GPUs!
|
|
|
Generate synthetic datasets with transparent reasoning using OpenAI's GPT OSS models. **No H100s required** - works on L4, A100, A10G, and even T4 GPUs! |
|
|
|
## Key Discovery
|
|
|
**The models work on regular datacenter GPUs!** Transformers automatically handles MXFP4 → bf16 conversion, making these models accessible on standard hardware.
|
|
|
## Quick Start
|
|
|
### Test Locally (Single Prompt) |
|
```bash
uv run gpt_oss_transformers.py --prompt "Write a haiku about mountains"
```
|
|
|
### Run on HuggingFace Jobs (No GPU Required!) |
|
```bash
# Generate haiku with reasoning (~$1.50/hr on A10G)
hf jobs uv run --flavor a10g-small \
    https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_transformers.py \
    --input-dataset davanstrien/haiku_dpo \
    --output-dataset username/haiku-reasoning \
    --prompt-column question \
    --max-samples 50
```
|
|
|
## 💡 What You Get
|
|
|
The models output structured reasoning in separate channels: |
|
|
|
**Raw Output**: |
|
```
analysisI need to write a haiku about mountains. Haiku: 5-7-5 syllable structure...
assistantfinalSilent peaks climb high,
Echoing winds trace stone's breath,
Dawn paints them gold bright.
```
|
|
|
**Parsed Dataset**: |
|
```json
{
  "prompt": "Write a haiku about mountains",
  "think": "[Analysis] I need to write a haiku about mountains. Haiku: 5-7-5 syllable structure...",
  "content": "Silent peaks climb high,\nEchoing winds trace stone's breath,\nDawn paints them gold bright.",
  "reasoning_level": "high",
  "model": "openai/gpt-oss-20b"
}
```
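
Once a job finishes, the pushed dataset loads like any other Hub dataset. A quick check (using the placeholder repo name from the Quick Start; the `train` split is assumed):

```python
from datasets import load_dataset

# "username/haiku-reasoning" is the placeholder output repo from the Quick Start
ds = load_dataset("username/haiku-reasoning", split="train")
print(ds[0]["think"])    # reasoning trace
print(ds[0]["content"])  # final user-facing answer
```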
|
|
|
## 🖥️ GPU Requirements
|
|
|
### ✅ Confirmed Working GPUs
|
| GPU | Memory | Status | Notes |
|-----|--------|--------|-------|
| **L4** | 24GB | ✅ Tested | Works perfectly! |
| **A100** | 40/80GB | ✅ Works | Great performance |
| **A10G** | 24GB | ✅ Recommended | Best value at $1.50/hr |
| **T4** | 16GB | ⚠️ Limited | May need 8-bit for 20B |
| **RTX 4090** | 24GB | ✅ Works | Consumer GPU support |
|
|
|
### Memory Requirements |
|
- **20B model**: ~40GB VRAM when dequantized (use A100-40GB or 2xL4) |
|
- **120B model**: ~240GB VRAM when dequantized (use 4xA100-80GB) |
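
These figures are simply parameter count × 2 bytes, since bf16 stores each weight in 2 bytes (weights only; the KV cache and activations add more on top):

```python
# Back-of-envelope VRAM for dequantized bf16 weights: 2 bytes per parameter.
# Weights only -- KV cache, activations, and framework overhead are extra.
def bf16_weights_gb(params_billions: float) -> float:
    return params_billions * 2

print(bf16_weights_gb(20))   # 40.0  -> matches the ~40GB figure for the 20B model
print(bf16_weights_gb(120))  # 240.0 -> matches the ~240GB figure for the 120B model
```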
|
|
|
## 🎯 Examples
|
|
|
### Creative Writing with Reasoning |
|
```bash
# Process haiku dataset with high reasoning
uv run gpt_oss_transformers.py \
    --input-dataset davanstrien/haiku_dpo \
    --output-dataset my-haiku-reasoning \
    --prompt-column question \
    --reasoning-level high \
    --max-samples 100
```
|
|
|
### Math Problems with Step-by-Step Solutions |
|
```bash
# Generate math solutions with reasoning traces
uv run gpt_oss_transformers.py \
    --input-dataset gsm8k \
    --output-dataset math-with-reasoning \
    --prompt-column question \
    --reasoning-level high
```
|
|
|
### Test Different Reasoning Levels |
|
```bash
# Compare reasoning levels
for level in low medium high; do
    echo "Testing: $level"
    uv run gpt_oss_transformers.py \
        --prompt "Explain gravity to a 5-year-old" \
        --reasoning-level $level \
        --debug
done
```
|
|
|
## Script Options
|
|
|
| Option | Description | Default |
|--------|-------------|---------|
| `--input-dataset` | HuggingFace dataset to process | - |
| `--output-dataset` | Output dataset name | - |
| `--prompt-column` | Column with prompts | `prompt` |
| `--model-id` | Model to use | `openai/gpt-oss-20b` |
| `--reasoning-level` | Reasoning depth: low/medium/high | `high` |
| `--max-samples` | Limit samples to process | None |
| `--temperature` | Sampling temperature | `0.7` |
| `--max-tokens` | Max tokens to generate | `512` |
| `--prompt` | Single prompt test (skip dataset) | - |
| `--debug` | Show raw model output | `False` |
|
|
|
## 🔧 Technical Details
|
|
|
### Why It Works Without H100s |
|
|
|
1. **Automatic MXFP4 Handling**: When your GPU doesn't support MXFP4, you'll see: |
|
   ```
   MXFP4 quantization requires triton >= 3.4.0 and triton_kernels installed,
   we will default to dequantizing the model to bf16
   ```
|
|
|
2. **No Flash Attention 3 Required**: FA3 requires the Hopper architecture, but the models run fine without it
|
|
|
3. **Simple Loading**: Just use standard transformers (a full generation sketch follows this list):
|
   ```python
   import torch
   from transformers import AutoModelForCausalLM

   model = AutoModelForCausalLM.from_pretrained(
       "openai/gpt-oss-20b",
       torch_dtype=torch.bfloat16,
       device_map="auto",
   )
   ```
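
Generation then follows the standard transformers chat flow. A minimal sketch (the prompt and sampling settings here are illustrative, not the script's exact defaults):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a haiku about mountains"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512, temperature=0.7, do_sample=True)
# Decode only the newly generated tokens; the raw text still contains channel markers
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```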
|
|
|
### Channel Output Format |
|
|
|
The models use a simplified channel format (a parsing sketch follows the list):
|
- `analysis`: Chain of thought reasoning |
|
- `commentary`: Meta operations (optional) |
|
- `final`: User-facing response |
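
Since the markers appear as literal text in the decoded output (see the raw example above), a simple split recovers the channels. A hedged sketch, assuming the `analysis`/`assistantfinal` markers shown earlier (the actual script may parse harmony tokens more robustly):

```python
import re

def split_channels(raw: str) -> dict:
    """Split decoded text into reasoning and final answer.

    Assumes the literal markers from the raw example above; falls back to
    returning the whole text as `content` if no markers are found.
    """
    match = re.search(r"analysis(.*?)assistantfinal(.*)", raw, re.DOTALL)
    if not match:
        return {"think": "", "content": raw.strip()}
    return {"think": match.group(1).strip(), "content": match.group(2).strip()}

raw = "analysisHaiku: 5-7-5 syllables...assistantfinalSilent peaks climb high,"
print(split_channels(raw))
```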
|
|
|
### Reasoning Control |
|
|
|
Control reasoning depth via system message: |
|
```python
messages = [
    {
        "role": "system",
        "content": f"...Reasoning: {level}...",  # level is "low", "medium", or "high"
    },
    {"role": "user", "content": prompt},
]
```
|
|
|
## Best Practices
|
|
|
1. **Token Limits**: Use 1000+ tokens for detailed reasoning |
|
2. **Security**: Never expose reasoning channels to end users |
|
3. **Batch Size**: Keep at 1 for memory efficiency |
|
4. **Reasoning Levels**: |
|
- `low`: Quick responses |
|
- `medium`: Balanced reasoning |
|
- `high`: Detailed chain-of-thought |
|
|
|
## Troubleshooting
|
|
|
### Out of Memory |
|
- Use a larger GPU flavor: `--flavor a100-large`

- Reduce batch size to 1

- Try 8-bit quantization for smaller GPUs (see the sketch below)
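
For the 8-bit route, a hedged sketch using bitsandbytes (assumes `bitsandbytes` is installed; 8-bit support for this architecture isn't guaranteed):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Roughly halves weight memory vs bf16 -- worth trying on 16-24GB cards
model = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
```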
|
|
|
### No GPU Available |
|
- Use HuggingFace Jobs (no local GPU needed!) |
|
- Or use cloud instances with GPU support |
|
|
|
### Empty Reasoning |
|
- Increase `--max-tokens` to 1500+ |
|
- Use prompts that actually require reasoning; trivial prompts may produce little or no analysis
|
|
|
## References
|
|
|
- [OpenAI Cookbook: GPT OSS](https://cookbook.openai.com/articles/gpt-oss/run-transformers) |
|
- [Model: openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) |
|
- [HF Jobs Documentation](https://huggingface.co/docs/hub/spaces-gpu-jobs) |
|
|
|
## The Bottom Line
|
|
|
**You don't need H100s!** These models work great on regular datacenter GPUs. Just run the script and start generating datasets with transparent reasoning. |
|
|
|
--- |
|
|
|
*Last tested: 2025-08-05 on NVIDIA L4 GPUs - Working perfectly!* |