# OpenAI GPT OSS Models - Simple Generation Script
Generate synthetic datasets using OpenAI's GPT OSS models with transparent reasoning. Works on HuggingFace Jobs with L4 GPUs!
## Tested & Working

Successfully tested on HF Jobs with the `l4x4` flavor (4x L4 GPUs = 96GB total memory).
## Getting Started with HF Jobs

### First-time Setup (2 minutes)
- Install the HuggingFace CLI:

  ```bash
  pip install huggingface-hub
  ```

- Login to HuggingFace:

  ```bash
  huggingface-cli login
  ```

  Enter your HF token when prompted - get one at https://huggingface.co/settings/tokens.
- Run the script on HF Jobs:

  ```bash
  hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
    https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_minimal.py \
    --input-dataset davanstrien/haiku_dpo \
    --output-dataset YOUR_USERNAME/gpt-oss-test \
    --prompt-column question \
    --max-samples 2
  ```
That's it! Your job will run on HuggingFace's GPUs and the output dataset will appear in your HF account.
## Quick Start

```bash
# Run on HF Jobs (tested and working)
hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
  https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_minimal.py \
  --input-dataset davanstrien/haiku_dpo \
  --output-dataset username/gpt-oss-haiku \
  --prompt-column question \
  --max-samples 2 \
  --reasoning-effort high
```
## Script Options

| Option | Description | Default |
|---|---|---|
| `--input-dataset` | HuggingFace dataset to process | Required |
| `--output-dataset` | Output dataset name | Required |
| `--prompt-column` | Column containing prompts | `prompt` |
| `--model-id` | Model to use | `openai/gpt-oss-20b` |
| `--max-samples` | Limit samples to process | None (all) |
| `--max-new-tokens` | Max tokens to generate | Auto-scales: 512/1024/2048 |
| `--reasoning-effort` | Reasoning depth: low/medium/high | `medium` |
| `--temperature` | Sampling temperature | 1.0 |
| `--top-p` | Top-p sampling | 1.0 |
Note: `max-new-tokens` auto-scales based on `reasoning-effort` if not set (see the sketch below):

- `low`: 512 tokens
- `medium`: 1024 tokens
- `high`: 2048 tokens (prevents truncation of detailed reasoning)
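For reference, the auto-scaling rule is simple enough to state as code. This is an illustrative sketch of the behaviour described above, not the script's actual implementation:

```python
# Illustrative sketch of the auto-scaling rule described above
# (not copied from gpt_oss_minimal.py).
TOKEN_BUDGETS = {"low": 512, "medium": 1024, "high": 2048}

def resolve_max_new_tokens(max_new_tokens=None, reasoning_effort="medium"):
    """Use an explicit --max-new-tokens value if given, else scale with reasoning effort."""
    if max_new_tokens is not None:
        return max_new_tokens
    return TOKEN_BUDGETS[reasoning_effort]
```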
## What You Get

The output dataset contains:

- `prompt`: Original prompt from the input dataset
- `raw_output`: Full model response with channel markers
- `model`: Model ID used
- `reasoning_effort`: The reasoning level used
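Once a job finishes, you can pull the result back down and inspect these columns with the `datasets` library. The repo id below is a placeholder for whatever you passed to `--output-dataset`:

```python
from datasets import load_dataset

# Placeholder repo id - substitute the value you passed to --output-dataset.
ds = load_dataset("username/gpt-oss-haiku", split="train")

print(ds.column_names)            # ['prompt', 'raw_output', 'model', 'reasoning_effort']
print(ds[0]["raw_output"][:500])  # first 500 characters of the channel-marked output
```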
### Understanding the Output

The raw output contains special channel markers:

- `<|channel|>analysis<|message|>` - Chain-of-thought reasoning
- `<|channel|>final<|message|>` - The actual response

Example raw output structure:

```
<|channel|>analysis<|message|>
[Reasoning about the task...]
<|channel|>final<|message|>
[Actual haiku or response]
```
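If you only want the final answer, or want to store the reasoning separately, a plain string split on these markers usually suffices. A minimal sketch, assuming the markers appear exactly as shown above (real outputs may carry extra special tokens, such as end-of-message markers, that you may also want to strip):

```python
def split_channels(raw_output: str) -> dict:
    """Split a raw gpt-oss response into its analysis and final channels.

    Assumes the marker layout shown above; adjust if your outputs
    contain additional special tokens.
    """
    analysis, final = "", raw_output
    if "<|channel|>final<|message|>" in raw_output:
        before, final = raw_output.split("<|channel|>final<|message|>", 1)
        analysis = before.replace("<|channel|>analysis<|message|>", "").strip()
    return {"analysis": analysis, "final": final.strip()}

example = (
    "<|channel|>analysis<|message|>Counting syllables...\n"
    "<|channel|>final<|message|>Autumn wind whispers"
)
print(split_channels(example)["final"])  # "Autumn wind whispers"
```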
## Examples

### Test with Different Reasoning Levels
High reasoning (most detailed):

```bash
hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
  https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_minimal.py \
  --input-dataset davanstrien/haiku_dpo \
  --output-dataset username/haiku-high \
  --prompt-column question \
  --reasoning-effort high \
  --max-samples 5
```
Low reasoning (fastest):

```bash
hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
  https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_minimal.py \
  --input-dataset davanstrien/haiku_dpo \
  --output-dataset username/haiku-low \
  --prompt-column question \
  --reasoning-effort low \
  --max-samples 10
```
## GPU Requirements

| Model | Memory Required | Recommended Flavor |
|---|---|---|
| `openai/gpt-oss-20b` | ~40GB | `l4x4` (4x 24GB = 96GB) |
Note: On non-Hopper GPUs, the 20B model automatically dequantizes from MXFP4 to bf16, so it needs more memory than the quantized checkpoint size suggests.
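For context, this is roughly how the 20B checkpoint ends up spread across the four L4s when loaded with `transformers`; a minimal sketch, assuming a recent `transformers` release with gpt-oss support:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# device_map="auto" spreads the ~40GB of bf16 weights across all visible GPUs
# (e.g. 4x L4). On non-Hopper cards the MXFP4 weights are dequantized to bf16.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```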
## Reasoning Effort

The `reasoning_effort` parameter controls how much chain-of-thought reasoning the model generates:

- `low`: Quick responses with minimal reasoning
- `medium`: Balanced reasoning (default)
- `high`: Detailed step-by-step reasoning
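Under the hood, gpt-oss models read the reasoning level from their harmony-format prompt (a "Reasoning: high" style line in the system message). The helper below is a hypothetical sketch of that approach using the model's chat template; the script itself may wire the level through differently:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")

# Hypothetical helper: request a reasoning level via a "Reasoning: ..." line
# in the system message, which the gpt-oss harmony format understands.
def build_inputs(prompt: str, reasoning_effort: str = "medium"):
    messages = [
        {"role": "system", "content": f"Reasoning: {reasoning_effort}"},
        {"role": "user", "content": prompt},
    ]
    return tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
```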
## Sampling Parameters

OpenAI recommends `temperature=1.0` and `top_p=1.0` as defaults for GPT OSS models:

- These settings provide good diversity without compromising quality
- The model was trained to work well with these parameters
- Adjust them only if you need specific behavior (e.g., a lower temperature for more deterministic output)
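Combined with the loading and prompt-building sketches above, generation with these defaults looks roughly like this (note that `do_sample=True` is required for `temperature` and `top_p` to take effect):

```python
# Assumes `model`, `tokenizer`, and `build_inputs` from the sketches above.
input_ids = build_inputs("Write a haiku about autumn", reasoning_effort="high").to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=2048,  # the "high" budget from the options table
    do_sample=True,
    temperature=1.0,      # OpenAI-recommended defaults for gpt-oss
    top_p=1.0,
)
# Keep special tokens so the <|channel|> markers survive for downstream parsing.
raw_output = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=False)
print(raw_output)
```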
## Resources
- OpenAI GPT OSS Model Collection - Both 20B and 120B models
- Model: openai/gpt-oss-20b
- HF Jobs Documentation - Complete guide to running jobs on HuggingFace
- HF CLI Guide - HuggingFace CLI installation and usage
- Dataset: davanstrien/haiku_dpo
Last tested: 2025-01-06 on HF Jobs with l4x4 flavor