πŸš€ OpenAI GPT OSS Models - Simple Generation Script

Generate synthetic datasets using OpenAI's GPT OSS models with transparent reasoning. Works on HuggingFace Jobs with L4 GPUs!

βœ… Tested & Working

Successfully tested on HF Jobs with l4x4 flavor (4x L4 GPUs = 96GB total memory).

πŸš€ Getting Started with HF Jobs

First-time Setup (2 minutes)

  1. Install HuggingFace CLI:
pip install huggingface-hub
  2. Login to HuggingFace:
huggingface-cli login

(Enter your HF token when prompted - get one at https://huggingface.co/settings/tokens)

  3. Run the script on HF Jobs:
hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
    https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_minimal.py \
    --input-dataset davanstrien/haiku_dpo \
    --output-dataset YOUR_USERNAME/gpt-oss-test \
    --prompt-column question \
    --max-samples 2

That's it! Your job will run on HuggingFace's GPUs and the output dataset will appear in your HF account.

🌟 Quick Start

# Run on HF Jobs (tested and working)
hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
    https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_minimal.py \
    --input-dataset davanstrien/haiku_dpo \
    --output-dataset username/gpt-oss-haiku \
    --prompt-column question \
    --max-samples 2 \
    --reasoning-effort high

πŸ“‹ Script Options

Option             | Description                       | Default
--input-dataset    | HuggingFace dataset to process    | Required
--output-dataset   | Output dataset name               | Required
--prompt-column    | Column containing prompts         | prompt
--model-id         | Model to use                      | openai/gpt-oss-20b
--max-samples      | Limit samples to process          | None (all)
--max-new-tokens   | Max tokens to generate            | Auto-scales: 512/1024/2048
--reasoning-effort | Reasoning depth: low/medium/high  | medium
--temperature      | Sampling temperature              | 1.0
--top-p            | Top-p sampling                    | 1.0

Note: if --max-new-tokens is not set, it auto-scales with --reasoning-effort:

  • low: 512 tokens
  • medium: 1024 tokens
  • high: 2048 tokens (prevents truncation of detailed reasoning)
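
In script terms, this auto-scaling amounts to a simple lookup before generation. A minimal sketch (the function and variable names are illustrative, not taken from gpt_oss_minimal.py; only the token budgets come from the list above):

# Illustrative only: maps --reasoning-effort to a default --max-new-tokens.
EFFORT_TO_MAX_NEW_TOKENS = {"low": 512, "medium": 1024, "high": 2048}

def resolve_max_new_tokens(reasoning_effort: str, max_new_tokens: int | None) -> int:
    """Prefer an explicit --max-new-tokens value; otherwise auto-scale by effort."""
    if max_new_tokens is not None:
        return max_new_tokens
    return EFFORT_TO_MAX_NEW_TOKENS[reasoning_effort]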

πŸ’‘ What You Get

The output dataset contains:

  • prompt: Original prompt from input dataset
  • raw_output: Full model response with channel markers
  • model: Model ID used
  • reasoning_effort: The reasoning level used
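
Once the job finishes, the result is an ordinary Hub dataset you can load with the datasets library (the repo id below is a placeholder for your --output-dataset value, and the train split is assumed):

from datasets import load_dataset

# Placeholder repo id - substitute whatever you passed as --output-dataset.
ds = load_dataset("username/gpt-oss-haiku", split="train")
print(ds.column_names)       # ['prompt', 'raw_output', 'model', 'reasoning_effort']
print(ds[0]["raw_output"])   # full response, including the channel markers described below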

Understanding the Output

The raw output contains special channel markers:

  • <|channel|>analysis<|message|> - Chain of thought reasoning
  • <|channel|>final<|message|> - The actual response

Example raw output structure:

<|channel|>analysis<|message|>
[Reasoning about the task...]
<|channel|>final<|message|>
[Actual haiku or response]
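
If you want the reasoning and the final answer as separate fields, a plain string split on these markers is enough. The helper below is a sketch based on the marker layout shown above, not code from the script:

def split_channels(raw_output: str) -> dict:
    """Split a raw gpt-oss response into 'analysis' and 'final' text."""
    channels = {"analysis": "", "final": ""}
    for name in channels:
        marker = f"<|channel|>{name}<|message|>"
        if marker in raw_output:
            # Keep text from this marker up to the next channel marker (or end of string).
            text = raw_output.split(marker, 1)[1]
            channels[name] = text.split("<|channel|>", 1)[0].strip()
    return channels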

🎯 Examples

Test with Different Reasoning Levels

High reasoning (most detailed):

hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
    https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_minimal.py \
    --input-dataset davanstrien/haiku_dpo \
    --output-dataset username/haiku-high \
    --prompt-column question \
    --reasoning-effort high \
    --max-samples 5

Low reasoning (fastest):

hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
    https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_minimal.py \
    --input-dataset davanstrien/haiku_dpo \
    --output-dataset username/haiku-low \
    --prompt-column question \
    --reasoning-effort low \
    --max-samples 10

πŸ–₯️ GPU Requirements

Model              | Memory Required | Recommended Flavor
openai/gpt-oss-20b | ~40GB           | l4x4 (4x24GB = 96GB)

Note: The 20B model automatically dequantizes from MXFP4 to bf16 on non-Hopper GPUs, requiring more memory than the quantized size.
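
Back-of-the-envelope (assuming roughly 21B total parameters for gpt-oss-20b): 21e9 params x 2 bytes per param in bf16 is about 42 GB for the weights alone, before KV cache and activations, which is why a single 24GB L4 is not enough and the recommended l4x4 flavor shards the model across four cards.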

Reasoning Effort

The reasoning_effort parameter controls how much chain-of-thought reasoning the model generates:

  • low: Quick responses with minimal reasoning
  • medium: Balanced reasoning (default)
  • high: Detailed step-by-step reasoning
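
For context, gpt-oss models read the reasoning level from the system prompt in their harmony chat format (a line such as "Reasoning: high"). The sketch below shows one way a script might wire the CLI flag into the chat messages; whether gpt_oss_minimal.py does exactly this is an assumption:

def build_messages(prompt: str, reasoning_effort: str = "medium") -> list[dict]:
    # Assumption: the reasoning level is conveyed via the system prompt
    # ("Reasoning: low|medium|high"); the actual script may differ.
    return [
        {"role": "system", "content": f"Reasoning: {reasoning_effort}"},
        {"role": "user", "content": prompt},
    ]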

Sampling Parameters

OpenAI recommends temperature=1.0 and top_p=1.0 as defaults for GPT OSS models:

  • These settings provide good diversity without compromising quality
  • The model was trained to work well with these parameters
  • Adjust only if you need specific behavior (e.g., lower temperature for more deterministic output)
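
In transformers terms, those defaults look like the snippet below (a sketch using the text-generation pipeline; the script's actual generation code may differ, and the dtype/device choices are assumptions):

from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    torch_dtype="auto",     # let transformers pick an appropriate dtype
    device_map="auto",      # shard the model across the available GPUs
)

messages = [
    {"role": "system", "content": "Reasoning: high"},  # see the note on reasoning effort above
    {"role": "user", "content": "Write a haiku about autumn rain"},
]

outputs = generator(
    messages,
    max_new_tokens=2048,    # matches the "high" auto-scale budget
    do_sample=True,
    temperature=1.0,        # OpenAI-recommended defaults for GPT OSS
    top_p=1.0,
)
print(outputs[0]["generated_text"][-1]["content"])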

Last tested: 2025-01-06 on HF Jobs with l4x4 flavor
