# 🚀 OpenAI GPT OSS Models - Works on Regular GPUs!

Generate synthetic datasets with transparent reasoning using OpenAI's GPT OSS models. **No H100s required** - works on L4, A100, A10G, and even T4 GPUs!

## 🎉 Key Discovery

**The models work on regular datacenter GPUs!** Transformers automatically handles MXFP4 → bf16 conversion, making these models accessible on standard hardware.

## 🌟 Quick Start

### Test Locally (Single Prompt)
```bash
uv run gpt_oss_transformers.py --prompt "Write a haiku about mountains"
```

### Run on HuggingFace Jobs (No GPU Required!)
```bash
# Generate haiku with reasoning (~$1.50/hr on A10G)
hf jobs uv run --flavor a10g-small \
    https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_transformers.py \
    --input-dataset davanstrien/haiku_dpo \
    --output-dataset username/haiku-reasoning \
    --prompt-column question \
    --max-samples 50
```

## 💡 What You Get

The models output structured reasoning in separate channels:

**Raw Output**:
```
analysisI need to write a haiku about mountains. Haiku: 5-7-5 syllable structure...
assistantfinalSilent peaks climb high,
Echoing winds trace stone's breath,
Dawn paints them gold bright.
```

**Parsed Dataset**:
```json
{
  "prompt": "Write a haiku about mountains",
  "think": "[Analysis] I need to write a haiku about mountains. Haiku: 5-7-5 syllable structure...",
  "content": "Silent peaks climb high,\nEchoing winds trace stone's breath,\nDawn paints them gold bright.",
  "reasoning_level": "high",
  "model": "openai/gpt-oss-20b"
}
```
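
Once the job finishes, the output repo loads like any other Hub dataset. A minimal sketch (the repo name below is the placeholder from the Quick Start):

```python
from datasets import load_dataset

# "username/haiku-reasoning" is the placeholder output repo from the Quick Start
ds = load_dataset("username/haiku-reasoning", split="train")
print(ds[0]["think"])    # reasoning trace
print(ds[0]["content"])  # final response
```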

## 🖥️ GPU Requirements

### ✅ Confirmed Working GPUs
| GPU | Memory | Status | Notes |
|-----|--------|--------|-------|
| **L4** | 24GB | ✅ Tested | Works perfectly! |
| **A100** | 40/80GB | ✅ Works | Great performance |
| **A10G** | 24GB | ✅ Recommended | Best value at $1.50/hr |
| **T4** | 16GB | ⚠️ Limited | May need 8-bit for 20B |
| **RTX 4090** | 24GB | ✅ Works | Consumer GPU support |

### Memory Requirements
- **20B model**: ~40GB VRAM when dequantized (~20B params × 2 bytes each in bf16; use A100-40GB or 2xL4)
- **120B model**: ~240GB VRAM when dequantized (use 4xA100-80GB)
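
To check whether a GPU has enough free memory before loading, a quick sketch using PyTorch's CUDA utilities:

```python
import torch

# Report free vs. total VRAM on the current CUDA device
if torch.cuda.is_available():
    free, total = torch.cuda.mem_get_info()
    print(f"Free: {free / 1e9:.1f} GB / Total: {total / 1e9:.1f} GB")
```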

## 🎯 Examples

### Creative Writing with Reasoning
```bash
# Process haiku dataset with high reasoning
uv run gpt_oss_transformers.py \
    --input-dataset davanstrien/haiku_dpo \
    --output-dataset my-haiku-reasoning \
    --prompt-column question \
    --reasoning-level high \
    --max-samples 100
```

### Math Problems with Step-by-Step Solutions
```bash
# Generate math solutions with reasoning traces
uv run gpt_oss_transformers.py \
    --input-dataset gsm8k \
    --output-dataset math-with-reasoning \
    --prompt-column question \
    --reasoning-level high
```

### Test Different Reasoning Levels
```bash
# Compare reasoning levels
for level in low medium high; do
    echo "Testing: $level"
    uv run gpt_oss_transformers.py \
        --prompt "Explain gravity to a 5-year-old" \
        --reasoning-level $level \
        --debug
done
```

## 📋 Script Options

| Option | Description | Default |
|--------|-------------|---------|
| `--input-dataset` | HuggingFace dataset to process | - |
| `--output-dataset` | Output dataset name | - |
| `--prompt-column` | Column with prompts | `prompt` |
| `--model-id` | Model to use | `openai/gpt-oss-20b` |
| `--reasoning-level` | Reasoning depth: low/medium/high | `high` |
| `--max-samples` | Limit samples to process | None |
| `--temperature` | Sampling temperature | `0.7` |
| `--max-tokens` | Max tokens to generate | `512` |
| `--prompt` | Single prompt test (skip dataset) | - |
| `--debug` | Show raw model output | `False` |

## 🔧 Technical Details

### Why It Works Without H100s

1. **Automatic MXFP4 Handling**: When your GPU doesn't support MXFP4, you'll see:
   ```
   MXFP4 quantization requires triton >= 3.4.0 and triton_kernels installed, 
   we will default to dequantizing the model to bf16
   ```

2. **No Flash Attention 3 Required**: FA3 needs the Hopper architecture, but the models work fine without it

3. **Simple Loading**: Just use standard transformers:
   ```python
   import torch
   from transformers import AutoModelForCausalLM

   model = AutoModelForCausalLM.from_pretrained(
       "openai/gpt-oss-20b",
       torch_dtype=torch.bfloat16,
       device_map="auto"
   )
   ```
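
   To generate with the model loaded above, a minimal sketch (the chat template handles the model's prompt format):

   ```python
   from transformers import AutoTokenizer

   tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
   messages = [{"role": "user", "content": "Write a haiku about mountains"}]
   inputs = tokenizer.apply_chat_template(
       messages, add_generation_prompt=True, return_tensors="pt"
   ).to(model.device)
   outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
   # Decode only the newly generated tokens
   print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
   ```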

### Channel Output Format

The models use a simplified channel format:
- `analysis`: Chain of thought reasoning
- `commentary`: Meta operations (optional)
- `final`: User-facing response
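
Splitting these channels out of raw text can be done with a simple pattern match. A best-effort sketch based on the raw output shown above (the exact marker strings may vary between model/tokenizer versions):

```python
import re

def parse_channels(raw: str) -> dict:
    """Split raw output into reasoning and final channels (best-effort)."""
    # In the raw sample above, reasoning follows "analysis" and the
    # user-facing answer follows the "assistantfinal" marker.
    match = re.search(r"analysis(.*?)assistantfinal(.*)", raw, re.DOTALL)
    if match:
        return {"think": match.group(1).strip(), "content": match.group(2).strip()}
    return {"think": "", "content": raw.strip()}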

### Reasoning Control

Control reasoning depth via system message:
```python
level = "high"  # one of: low, medium, high
prompt = "Explain gravity to a 5-year-old"
messages = [
    # The "Reasoning: <level>" directive in the system message sets the depth
    {
        "role": "system",
        "content": f"...Reasoning: {level}..."
    },
    {"role": "user", "content": prompt}
]
```

## 🚨 Best Practices

1. **Token Limits**: Use 1000+ tokens for detailed reasoning
2. **Security**: Never expose reasoning channels to end users
3. **Batch Size**: Keep at 1 for memory efficiency
4. **Reasoning Levels**:
   - `low`: Quick responses
   - `medium`: Balanced reasoning
   - `high`: Detailed chain-of-thought

## πŸ› Troubleshooting

### Out of Memory
- Use larger GPU flavor: `--flavor a100-large`
- Reduce batch size to 1
- Try 8-bit quantization for smaller GPUs
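
A hypothetical 8-bit loading sketch via bitsandbytes (whether the int8 kernels support this architecture may depend on your transformers/bitsandbytes versions):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Assumption: bitsandbytes int8 quantization works for this architecture
quant_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b",
    quantization_config=quant_config,
    device_map="auto",
)
```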

### No GPU Available
- Use HuggingFace Jobs (no local GPU needed!)
- Or use cloud instances with GPU support

### Empty Reasoning
- Increase `--max-tokens` to 1500+
- Ensure prompts are complex enough to elicit a reasoning trace

## 📚 References

- [OpenAI Cookbook: GPT OSS](https://cookbook.openai.com/articles/gpt-oss/run-transformers)
- [Model: openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b)
- [HF Jobs Documentation](https://huggingface.co/docs/hub/spaces-gpu-jobs)

## 🎉 The Bottom Line

**You don't need H100s!** These models work great on regular datacenter GPUs. Just run the script and start generating datasets with transparent reasoning.

---

*Last tested: 2025-08-05 on NVIDIA L4 GPUs - Working perfectly!*