
# 🚀 OpenAI GPT OSS Models - Open Source Language Models with Reasoning

Generate responses with transparent chain-of-thought reasoning using OpenAI's new open source GPT models. Run on cloud GPUs with zero setup!

## 🏁 Quick Setup for HF Jobs (One-time)

```bash
# Install the huggingface-hub CLI using uv
uv tool install huggingface-hub

# Log in to Hugging Face
huggingface-cli login

# Now you're ready to run jobs!
```

Need more help? Check the HF Jobs documentation.

## 🌟 Try It Now! Copy & Run This Command

```bash
# Generate 50 haiku with reasoning (~5 minutes on an A10G)
huggingface-cli job run --gpu-flavor a10g-large \
    uv run https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_vllm.py \
    --input-dataset davanstrien/haiku_dpo \
    --output-dataset haiku-reasoning \
    --prompt-column question \
    --max-samples 50
```

That's it! Your dataset will be generated and pushed to `your-username/haiku-reasoning`. 🎉

## 💡 What You Get

The models output structured reasoning in separate channels:

```json
{
  "prompt": "Write a haiku about mountain serenity",
  "think": "I need to create a haiku with 5-7-5 syllable structure. Mountains suggest stillness, permanence. For serenity, I'll use calm imagery like 'silent peaks' (3 syllables)...",
  "content": "Silent peaks stand tall\nClouds drift through morning stillness\nPeace in stone and sky",
  "reasoning_level": "high",
  "model": "openai/gpt-oss-20b"
}
```
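
Once the job finishes, the output is an ordinary Hub dataset. As a minimal sketch, you can inspect the generated fields with the `datasets` library (the repo id below is the hypothetical one from the example; substitute your own username and `--output-dataset` name):

```python
from datasets import load_dataset

# Hypothetical repo id from the example above; replace with
# your own username and --output-dataset name.
ds = load_dataset("your-username/haiku-reasoning", split="train")

row = ds[0]
print(row["think"])    # the model's chain-of-thought (analysis channel)
print(row["content"])  # the final response
```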

## 🎯 More Examples

### Use Your Own Dataset

```bash
# Process your entire dataset
huggingface-cli job run --gpu-flavor a10g-large \
    uv run https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_vllm.py \
    --input-dataset your-prompts \
    --output-dataset my-responses

# Use the larger 120B model
huggingface-cli job run --gpu-flavor 4xa100 \
    uv run https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_vllm.py \
    --input-dataset your-prompts \
    --output-dataset my-responses-120b \
    --model-id openai/gpt-oss-120b
```

### Process Different Dataset Types

```bash
# Math problems with step-by-step reasoning
huggingface-cli job run --gpu-flavor a10g-large \
    uv run https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_vllm.py \
    --input-dataset math-problems \
    --output-dataset math-solutions \
    --reasoning-level high

# Code generation with explanations
huggingface-cli job run --gpu-flavor a10g-large \
    uv run https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_vllm.py \
    --input-dataset code-prompts \
    --output-dataset code-explained \
    --max-tokens 1024

# Test with just 10 samples
huggingface-cli job run --gpu-flavor a10g-large \
    uv run https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_vllm.py \
    --input-dataset your-dataset \
    --output-dataset quick-test \
    --max-samples 10
```

## 📦 Two Script Options

1. `gpt_oss_vllm.py` - High-performance batch generation using vLLM (recommended)
2. `gpt_oss_transformers.py` - Standard Transformers implementation (fallback)
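
Both are self-contained uv scripts, which is what makes the zero-setup `uv run <URL>` invocation work: dependencies are declared inline in a PEP 723 header, and uv builds a throwaway environment before running the script. A sketch of what such a header looks like (the dependency list here is illustrative; the real scripts declare their own):

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "datasets",
#     "vllm",
# ]
# ///
#
# uv reads this header, creates an ephemeral environment with the
# declared packages, and executes the script inside it.
```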

### Transformers Fallback (if vLLM has issues)

```bash
# Same command, different script!
huggingface-cli job run --gpu-flavor a10g-large \
    uv run https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_transformers.py \
    --input-dataset davanstrien/haiku_dpo \
    --output-dataset haiku-reasoning \
    --prompt-column question \
    --max-samples 50
```

## 💰 GPU Flavors and Costs

| Model | GPU Flavor | Memory | Cost/Hour | Best For |
|-------|------------|--------|-----------|----------|
| gpt-oss-20b | `a10g-large` | 48GB | $2.50 | 20B model (needs ~40GB) |
| gpt-oss-20b | `a100-large` | 80GB | $4.34 | 20B with headroom |
| gpt-oss-120b | `4xa100` | 320GB | $17.36 | 120B model (needs ~240GB) |
| gpt-oss-120b | `8xl40s` | 384GB | $23.50 | 120B maximum speed |

Note: on these GPUs the MXFP4 checkpoints are dequantized to bf16 at load time, so plan for roughly 2 bytes per parameter: about 40GB of weights for gpt-oss-20b and about 240GB for gpt-oss-120b, before KV cache and activations.
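
As a back-of-envelope check of the table above (assuming weights dominate at 2 bytes per bf16 parameter, and ignoring KV cache and activations):

```python
# Rough weight footprint once MXFP4 weights are upcast to bf16.
# Assumption: 2 bytes per parameter; KV cache, activations, and
# framework overhead are ignored, so real usage runs higher.
def bf16_weight_gb(params_billions: float) -> float:
    return params_billions * 2  # billions of params x 2 bytes/param = GB

print(bf16_weight_gb(20))   # ~40 GB  -> fits a10g-large (48GB)
print(bf16_weight_gb(120))  # ~240 GB -> needs 4xa100 (320GB)
```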

πŸƒ Local Execution

If you have a local GPU:

```bash
# Using vLLM (recommended)
uv run gpt_oss_vllm.py \
    --input-dataset davanstrien/haiku_dpo \
    --output-dataset haiku-reasoning \
    --prompt-column question \
    --max-samples 50

# Using Transformers
uv run gpt_oss_transformers.py \
    --input-dataset davanstrien/haiku_dpo \
    --output-dataset haiku-reasoning \
    --prompt-column question \
    --max-samples 50
```

## 🛠️ Parameters

| Parameter | Description | Default |
|-----------|-------------|---------|
| `--input-dataset` | Source dataset on HF Hub | Required |
| `--output-dataset` | Output dataset name (auto-prefixed with your username) | Required |
| `--prompt-column` | Column containing prompts | `prompt` |
| `--model-id` | Model to use | `openai/gpt-oss-20b` |
| `--reasoning-level` | Reasoning depth (`high`/`medium`/`low`) | `high` |
| `--max-samples` | Limit number of examples | None (all) |
| `--temperature` | Generation temperature | `0.7` |
| `--max-tokens` | Max tokens to generate | `512` |
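
For orientation, here is a minimal sketch of how a vLLM-based script typically wires `--model-id`, `--temperature`, and `--max-tokens` together; the real `gpt_oss_vllm.py` additionally applies the model's chat template and the `--reasoning-level` setting, which this sketch omits:

```python
from datasets import load_dataset
from vllm import LLM, SamplingParams

# Hypothetical stand-ins for the CLI defaults listed above.
model_id = "openai/gpt-oss-20b"
sampling = SamplingParams(temperature=0.7, max_tokens=512)

llm = LLM(model=model_id)

# Read the prompt column ("prompt" by default) from the input dataset.
prompts = load_dataset("your-prompts", split="train")["prompt"]

outputs = llm.generate(prompts, sampling)
print(outputs[0].outputs[0].text)
```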

## 🎯 Key Features

- **Open Source Models**: `openai/gpt-oss-20b` and `openai/gpt-oss-120b`
- **Structured Output**: Separate channels for reasoning (analysis) and response (final)
- **Zero Setup**: Run with a single command on HF Jobs
- **Flexible Input**: Works with any prompt dataset
- **Automatic Upload**: Results pushed directly to your Hub account (sketched below)
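
The username auto-prefixing can be done with `huggingface_hub.whoami()`; a sketch of the likely logic (the helper name is illustrative, not necessarily how the scripts implement it):

```python
from huggingface_hub import whoami

def resolve_repo_id(output_dataset: str) -> str:
    """Prefix a bare dataset name with the logged-in user's namespace.

    Illustrative helper; not necessarily the scripts' exact logic.
    """
    if "/" in output_dataset:
        return output_dataset  # already fully qualified
    return f"{whoami()['name']}/{output_dataset}"

print(resolve_repo_id("haiku-reasoning"))  # -> "your-username/haiku-reasoning"
```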

## 🎯 Use Cases

1. **Training Data**: Create datasets with built-in reasoning explanations
2. **Evaluation**: Generate test sets where each answer includes its rationale
3. **Research**: Study how large models approach different types of problems
4. **Applications**: Build systems that can explain their outputs

## 🤔 Which Script to Use?

- `gpt_oss_vllm.py`: First choice for performance and scale
- `gpt_oss_transformers.py`: Fallback if vLLM has compatibility issues

## 🔧 Requirements

For HF Jobs:

- Hugging Face account (free)
- `huggingface-hub` CLI tool

For local execution:

- Python 3.10+
- GPU with CUDA support
- Hugging Face token (see the snippet below)
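
For local runs, one standard way to authenticate so the scripts can push results to your account (shown here as a hint, not as part of the scripts themselves):

```python
from huggingface_hub import login

# Prompts for a token interactively; alternatively pass
# login(token="hf_...") or set the HF_TOKEN environment variable.
login()
```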

## 🤝 Contributing

This is part of the uv-scripts collection. Contributions and improvements are welcome!

## 📜 License

Apache 2.0 - the same license as the OpenAI GPT OSS models.


Ready to generate data with reasoning? Copy the command at the top and run it! 🚀