sriting committed
Commit c876128 · 1 Parent(s): b2a1137

add evaluation table

Files changed (1):
  1. README.md +32 -0
README.md CHANGED
@@ -94,6 +94,38 @@ foundation for next-generation language model agents to reason and tackle real-w
 
 ## 2. Evaluation
 
+ **Performance of MiniMax-M1 on core benchmarks.**
+
+ | **Tasks** | **OpenAI-o3** | **Gemini 2.5<br>Pro (06-05)** | **Claude<br>4 Opus** | **Seed-<br>Thinking-<br>v1.5** | **DeepSeek-<br>R1** | **DeepSeek-<br>R1-0528** | **Qwen3-<br>235B-A22B** | **MiniMax-<br>M1-40K** | **MiniMax-<br>M1-80K** |
+ |:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
+ | *Extended<br>Thinking* | *100k* | *64k* | *64k* | *32k* | *32k* | *64k* | *32k* | *40k* | *80k* |
+ | | | | | ***Mathematics*** | | | | | |
+ | AIME 2024 | 91.6 | 92.0 | 76.0 | 86.7 | 79.8 | 91.4 | 85.7 | 83.3 | 86.0 |
+ | AIME 2025 | 88.9 | 88.0 | 75.5 | 74.0 | 70.0 | 87.5 | 81.5 | 74.6 | 76.9 |
+ | MATH-500 | 98.1 | 98.8 | 98.2 | 96.7 | 97.3 | 98.0 | 96.2 | 96.0 | 96.8 |
+ | | | | | ***General Coding*** | | | | | |
+ | LiveCodeBench<br>*(24/8~25/5)* | 75.8 | 77.1 | 56.6 | 67.5 | 55.9 | 73.1 | 65.9 | 62.3 | 65.0 |
+ | FullStackBench | 69.3 | -- | 70.3 | 69.9 | 70.1 | 69.4 | 62.9 | 67.6 | 68.3 |
+ | | | | | ***Reasoning & Knowledge*** | | | | | |
+ | GPQA Diamond | 83.3 | 86.4 | 79.6 | 77.3 | 71.5 | 81.0 | 71.1 | 69.2 | 70.0 |
+ | HLE *(no tools)* | 20.3 | 21.6 | 10.7 | 8.2 | 8.6\* | 17.7\* | 7.6\* | 7.2\* | 8.4\* |
+ | ZebraLogic | 95.8 | 91.6 | 95.1 | 84.4 | 78.7 | 95.1 | 80.3 | 80.1 | 86.8 |
+ | MMLU-Pro | 85.0 | 86.0 | 85.0 | 87.0 | 84.0 | 85.0 | 83.0 | 80.6 | 81.1 |
+ | | | | | ***Software Engineering*** | | | | | |
+ | SWE-bench Verified | 69.1 | 67.2 | 72.5 | 47.0 | 49.2 | 57.6 | 34.4 | 55.6 | 56.0 |
+ | | | | | ***Long Context*** | | | | | |
+ | OpenAI-MRCR *(128k)* | 56.5 | 76.8 | 48.9 | 54.3 | 35.8 | 51.5 | 27.7 | 76.1 | 73.4 |
+ | OpenAI-MRCR *(1M)* | -- | 58.8 | -- | -- | -- | -- | -- | 58.6 | 56.2 |
+ | LongBench-v2 | 58.8 | 65.0 | 55.6 | 52.5 | 58.3 | 52.1 | 50.1 | 61.0 | 61.5 |
+ | | | | | ***Agentic Tool Use*** | | | | | |
+ | TAU-bench *(airline)* | 52.0 | 50.0 | 59.6 | 44.0 | -- | 53.5 | 34.7 | 60.0 | 62.0 |
+ | TAU-bench *(retail)* | 73.9 | 67.0 | 81.4 | 55.7 | -- | 63.9 | 58.6 | 67.8 | 63.5 |
+ | | | | | ***Factuality*** | | | | | |
+ | SimpleQA | 49.4 | 54.0 | -- | 12.9 | 30.1 | 27.8 | 11.0 | 17.9 | 18.5 |
+ | | | | | ***General Assistant*** | | | | | |
+ | MultiChallenge | 56.5 | 51.8 | 45.8 | 43.0 | 40.7 | 45.0 | 40.0 | 44.7 | 44.7 |
+
+ \* Conducted on the text-only HLE subset.
 
 
 ## 3. Deployment Guide