---
pipeline_tag: text-generation
inference: true
license: apache-2.0
datasets:
  - simplescaling/s1K-1.1
library_name: transformers
base_model:
  - simplescaling/s1.1-32B
base_model_relation: quantized
---

# Model Summary

## Quantization Details

This quantized model was created using AutoAWQ version 0.2.8 with the following `quant_config`:

```python
{
    "zero_point": True,
    "q_group_size": 128,
    "w_bit": 4,
    "version": "GEMM",
}
```
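
For reference, a minimal sketch of what such a quantization run looks like with AutoAWQ. The paths and calibration defaults below are assumptions for illustration, not the exact script used to produce this checkpoint:

```python
# Minimal AutoAWQ quantization sketch (assumes autoawq==0.2.8 is installed).
# The source and output paths are placeholders, not the exact ones used here.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "simplescaling/s1.1-32B"  # base model
quant_path = "s1.1-32B-AWQ"            # output directory (hypothetical)

quant_config = {
    "zero_point": True,
    "q_group_size": 128,
    "w_bit": 4,
    "version": "GEMM",
}

# Load the full-precision model and its tokenizer.
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Quantize with AutoAWQ's default calibration data, then save the result.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```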

## Model Details

s1.1 is our successor to s1 with better reasoning performance, achieved by leveraging reasoning traces from r1 instead of Gemini.

This model is the successor to s1-32B, with slightly better performance. Thanks to Ryan Marten for helping generate the r1 traces for s1K.

## Use

Model usage is documented in the [simplescaling/s1](https://github.com/simplescaling/s1) repository.
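
Since this is an AWQ checkpoint, it can be loaded like any other AWQ-quantized model. Below is a minimal inference sketch using vLLM; the repository id, prompt, and sampling parameters are illustrative, not prescriptive:

```python
# Minimal inference sketch (assumes a recent vLLM build with AWQ support).
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_id = "Valdemardi/s1.1-32B-AWQ"  # this repository
tok = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id, quantization="awq")

# Build the prompt with the model's own chat template.
prompt = tok.apply_chat_template(
    [{"role": "user", "content": "How many r's are in 'raspberry'?"}],
    tokenize=False,
    add_generation_prompt=True,
)

out = llm.generate([prompt], SamplingParams(temperature=0.0, max_tokens=2048))
print(out[0].outputs[0].text)
```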

## Evaluation

| Metric       | s1-32B | s1.1-32B | o1-preview | o1   | DeepSeek-R1 | DeepSeek-R1-Distill-Qwen-32B |
|--------------|--------|----------|------------|------|-------------|------------------------------|
| # examples   | 1K     | 1K       | ?          | ?    | >800K       | 800K                         |
| AIME2024     | 56.7   | 56.7     | 40.0       | 74.4 | 79.8        | 72.6                         |
| AIME2025 I   | 26.7   | 60.0     | 37.5       | ?    | 65.0        | 46.1                         |
| MATH500      | 93.0   | 95.4     | 81.4       | 94.8 | 97.3        | 94.3                         |
| GPQA-Diamond | 59.6   | 63.6     | 75.2       | 77.3 | 71.5        | 62.1                         |

Note that s1-32B and s1.1-32B use budget forcing in this table; specifically, decoding ignores the end-of-thinking delimiter and appends "Wait" once or twice to extend the reasoning trace.
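
For intuition, here is a rough budget-forcing sketch, continuing from the `llm` and `prompt` defined in the usage sketch above. The end-of-thinking delimiter below is an assumption for illustration; see the simplescaling/s1 repository for the exact procedure and delimiter used by s1.1:

```python
# Rough budget-forcing sketch. END_OF_THINKING is an ASSUMED placeholder;
# the real delimiter comes from s1.1's chat template.
from vllm import SamplingParams

END_OF_THINKING = "<|im_start|>answer"  # assumption, not verified

# Stop at the end-of-thinking delimiter and append "Wait" once or twice,
# forcing the model to keep reasoning instead of answering immediately.
for _ in range(2):
    out = llm.generate(
        [prompt],
        SamplingParams(temperature=0.0, max_tokens=4096, stop=[END_OF_THINKING]),
    )
    prompt += out[0].outputs[0].text + "\nWait"

# Final pass: let the model close its reasoning and produce the answer.
final = llm.generate([prompt], SamplingParams(temperature=0.0, max_tokens=4096))
print(final[0].outputs[0].text)
```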