rank | model | accuracy (%) | parameters (B) | extra_training_data | paper | code | result | year | tags |
---|---|---|---|---|---|---|---|---|---|
1 | Gemini 2.0 Flash Experimental | 89.7 | null | No | | No | No | 2024 | [] |
2 | Qwen2.5-Math-72B-Instruct (TIR,Greedy) | 88.1 | 72 | Yes | Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement | No | Yes | 2024 | [] |
3 | GPT-4 Turbo (MACM, w/ code, voting) | 87.92 | null | No | MACM: Utilizing a Multi-Agent System for Condition Mining in Solving Complex Mathematical Problems | Yes | Yes | 2024 | ["code environment", "majority voting", "multi-agent"] |
4 | Qwen2.5-Math-72B-Instruct (COT,Greedy) | 85.9 | 72 | Yes | Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement | No | Yes | 2024 | [] |
5 | Qwen2.5-Math-7B-Instruct (TIR,Greedy) | 85.2 | 7 | Yes | Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement | No | Yes | 2024 | [] |
6 | GPT-4-code model (CSV, w/ code, SC, k=16) | 84.3 | null | No | Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification | Yes | Yes | 2023 | ["multi-agent", "majority voting", "code environment"] |
7 | Qwen2-Math-72B-Instruct (greedy) | 84 | 72 | Yes | Qwen2 Technical Report | Yes | Yes | 2024 | [] |
8 | Qwen2.5-Math-7B-Instruct (COT,Greedy) | 83.6 | 7 | Yes | Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement | No | Yes | 2024 | [] |
9 | Qwen2.5-Math-1.5B-Instruct (TIR,Greedy) | 79.9 | 1.5 | Yes | Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement | No | Yes | 2024 | [] |
10 | OpenMath2-Llama3.1-70B (majority@256) | 79.6 | null | Yes | OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data | Yes | Yes | 2024 | [] |
11 | OpenMath2-Llama3.1-8B (majority@256) | 76.1 | null | Yes | OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data | Yes | Yes | 2024 | [] |
12 | Qwen2.5-Math-1.5B-Instruct (COT,Greedy) | 75.8 | 1.5 | Yes | Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement | No | Yes | 2024 | [] |
13 | GPT-4-code model (CSV, w/ code) | 73.5 | null | No | Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification | Yes | Yes | 2023 | ["code environment"] |
14 | CR (GPT-4-turbo model, w/ code) | 72.2 | null | No | Cumulative Reasoning with Large Language Models | Yes | Yes | 2023 | ["code environment"] |
15 | OpenMath2-Llama3.1-70B | 71.9 | null | Yes | OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data | Yes | Yes | 2024 | [] |
16 | LogicNet (with code interpreter) | 71.2 | null | Yes | Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification | Yes | Yes | 2023 | [] |
17 | Qwen2-72B-Instruct-Step-DPO (0-shot CoT, w/o code) | 70.8 | null | Yes | Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs | Yes | Yes | 2024 | [] |
18 | GPT-4-code model (w/ code) | 69.7 | null | No | Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification | Yes | Yes | 2023 | ["code environment"] |
19 | OpenMath2-Llama3.1-8B | 67.8 | null | Yes | OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data | Yes | Yes | 2024 | [] |
20 | AlphaMath-7B-SBS@3 | 66.3 | null | No | AlphaMath Almost Zero: Process Supervision without Process | Yes | Yes | 2024 | ["code environment"] |
21 | Minerva 62B (maj5@256) | 64.9 | 62 | No | Solving Quantitative Reasoning Problems with Language Models | Yes | Yes | 2022 | [] |
22 | DAMOMath-7B | 64.5 | 7 | Yes | | | | 2024 | [] |
23 | MMOS-DeepSeekMath-7B (0-shot,k=50) | 63.7 | 7 | Yes | An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning | Yes | Yes | 2024 | ["code environment", "zero-shot", "majority voting"] |
24 | GPT-4-code model (w/o code) | 60.8 | null | No | Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification | Yes | Yes | 2023 | [] |
25 | OpenMath-CodeLlama-70B (w/ code, SC, k=50) | 60.4 | 70 | Yes | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | Yes | Yes | 2024 | ["code environment", "majority voting"] |
26 | OpenMath-CodeLlama-34B (w/ code, SC, k=50) | 60.2 | 34 | Yes | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | Yes | Yes | 2024 | ["code environment", "majority voting"] |
27 | ToRA-Code 34B model (w/ code, SC, k=50) | 60 | 34 | Yes | ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving | Yes | Yes | 2023 | ["majority voting", "code environment", "gpt-4 distillation"] |
28 | DeepSeekMATH-RL-7B (w/ code, greedy decoding) | 58.8 | 7 | Yes | DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models | Yes | Yes | 2024 | [] |
29 | OpenMath-Llama2-70B (w/ code, SC, k=50) | 58.3 | 70 | Yes | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | Yes | Yes | 2024 | ["code environment", "majority voting"] |
30 | CR (GPT-4 model, w/o code) | 58 | null | No | Cumulative Reasoning with Large Language Models | Yes | Yes | 2023 | [] |
31 | OpenMath-CodeLlama-13B (w/ code, SC, k=50) | 57.6 | 13 | Yes | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | Yes | Yes | 2024 | ["code environment", "majority voting"] |
32 | OpenMath-Mistral-7B (w/ code, SC, k=50) | 57.2 | 7 | Yes | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | Yes | Yes | 2024 | ["code environment", "majority voting"] |
33 | ToRA 70B (w/ code, SC, k=50) | 56.9 | 70 | Yes | ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving | Yes | Yes | 2023 | ["majority voting", "code environment", "gpt-4 distillation"] |
34 | SKiC (GPT-4 model) | 56.4 | null | No | Skills-in-Context Prompting: Unlocking Compositionality in Large Language Models | No | Yes | 2023 | ["code environment"] |
35 | DART-Math-Llama3-70B-Prop2Diff (0-shot CoT, w/o code) | 56.1 | 70 | Yes | DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving | Yes | Yes | 2024 | [] |
36 | OpenMath-CodeLlama-7B (w/ code, SC, k=50) | 55.6 | 7 | Yes | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | Yes | Yes | 2024 | ["code environment", "majority voting"] |
37 | MMOS-DeepSeekMath-7B (0-shot) | 55 | 7 | Yes | An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning | Yes | Yes | 2024 | [] |
38 | DART-Math-Llama3-70B-Uniform (0-shot CoT, w/o code) | 54.9 | 70 | Yes | DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving | Yes | Yes | 2024 | [] |
39 | PHP (GPT-4 model) | 53.9 | null | No | Progressive-Hint Prompting Improves Reasoning in Large Language Models | Yes | Yes | 2023 | [] |
40 | DART-Math-DSMath-7B-Prop2Diff (0-shot CoT, w/o code) | 53.6 | 7 | Yes | DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving | Yes | Yes | 2024 | [] |
41 | Gemini Ultra (4-shot) | 53.2 | null | No | Gemini: A Family of Highly Capable Multimodal Models | Yes | Yes | 2023 | [] |
42 | DART-Math-DSMath-7B-Uniform (0-shot CoT, w/o code) | 52.9 | 7 | Yes | DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving | Yes | Yes | 2024 | [] |
43 | GPT-4 model (w/ code, PAL) | 51.8 | null | No | PAL: Program-aided Language Models | Yes | Yes | 2022 | ["code environment"] |
44 | DeepSeekMATH-RL-7B (greedy decoding) | 51.7 | 7 | Yes | DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models | Yes | Yes | 2024 | [] |
45 | AlphaLLM (with MCTS) | 51 | null | No | Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing | Yes | Yes | 2024 | [] |
46 | ToRA-Code 34B (w/ code) | 50.8 | 34 | Yes | ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving | Yes | Yes | 2023 | ["code environment", "gpt-4 distillation"] |
47 | OpenMath-CodeLlama-70B (w/ code) | 50.7 | 70 | Yes | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | Yes | No | 2024 | ["code environment"] |
48 | Minerva 540B (maj1@k, k=64) | 50.3 | null | No | Solving Quantitative Reasoning Problems with Language Models | Yes | Yes | 2022 | ["majority voting"] |
49 | ToRA 70B (w/ code) | 49.7 | 70 | Yes | ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving | Yes | Yes | 2023 | ["code environment", "gpt-4 distillation"] |
50 | MMOS-CODE-34B (0-shot) | 49.5 | 34 | Yes | An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning | Yes | Yes | 2024 | [] |
51 | DeepSeekMath-7B-KPMath-Plus | 48.8 | 7 | No | Key-Point-Driven Data Synthesis with its Enhancement on Mathematical Reasoning | | | 2024 | [] |
52 | PaLM 2 (few-shot, k=4, SC) | 48.8 | null | No | PaLM 2 Technical Report | Yes | No | 2023 | ["majority voting"] |
53 | Llemma-34B-KPMath-Plus | 48.6 | 34 | No | Key-Point-Driven Data Synthesis with its Enhancement on Mathematical Reasoning | | | 2024 | [] |
54 | OpenMath-CodeLlama-34B (w/ code) | 48.3 | 34 | Yes | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | Yes | Yes | 2024 | ["code environment"] |
55 | Shepherd + DeepSeek-67B (SFT on MetaMATH + PRM rerank, k=256) | 48.1 | 67 | Yes | Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations | Yes | No | 2023 | ["rerank"] |
56 | ToRA-Code 13B (w/ code) | 48.1 | 13 | Yes | ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving | Yes | Yes | 2023 | ["code environment", "gpt-4 distillation"] |
57 | Minerva 8B (maj5@256) | 47.6 | 8 | No | Solving Quantitative Reasoning Problems with Language Models | Yes | Yes | 2022 | [] |
58 | Mistral-7B-KPMath-Plus | 46.8 | 7 | Yes | Key-Point-Driven Data Synthesis with its Enhancement on Mathematical Reasoning | | | 2024 | [] |
59 | DART-Math-Llama3-8B-Prop2Diff (0-shot CoT, w/o code) | 46.6 | 8 | Yes | DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving | Yes | Yes | 2024 | [] |
60 | OpenMath-Llama2-70B (w/ code) | 46.3 | 70 | Yes | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | Yes | No | 2024 | [] |
61 | OpenMath-CodeLlama-13B (w/ code) | 45.5 | 13 | Yes | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | Yes | No | 2024 | [] |
62 | DART-Math-Mistral-7B-Prop2Diff (0-shot CoT, w/o code) | 45.5 | 7 | Yes | DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving | No | Yes | 2024 | [] |
63 | DART-Math-Llama3-8B-Uniform (0-shot CoT, w/o code) | 45.3 | 8 | Yes | DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving | Yes | Yes | 2024 | [] |
64 | MathCoder-CL-34B | 45.2 | 34 | Yes | MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning | Yes | No | 2023 | [] |
65 | MathCoder-L-34B | 45.1 | 34 | Yes | MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning | Yes | No | 2023 | [] |
66 | MMIQC-72B | 45 | 72 | Yes | Augmenting Math Word Problems via Iterative Question Composing | Yes | Yes | 2024 | [] |
67 | ToRA-Code 7B (w/ code) | 44.6 | 7 | Yes | ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving | Yes | Yes | 2023 | ["code environment", "gpt-4 distillation"] |
68 | OpenMath-Mistral-7B (w/ code) | 44.5 | 7 | Yes | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | Yes | No | 2024 | [] |
69 | MMOS-CODE-7B (0-shot) | 44.3 | 7 | Yes | An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning | Yes | Yes | 2024 | [] |
70 | OpenMath-CodeLlama-7B (w/ code) | 43.6 | 7 | Yes | OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | Yes | No | 2024 | [] |
71 | Shepherd + Mistral-7B (SFT on MetaMATH + PRM RL + PRM rerank, k=256) | 43.5 | 7 | Yes | Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations | Yes | No | 2023 | ["rerank"] |
72 | DART-Math-Mistral-7B-Uniform (0-shot CoT, w/o code) | 43.5 | 7 | Yes | DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving | Yes | Yes | 2024 | [] |
73 | Minerva 62B (maj1@k, k=64) | 43.4 | 62 | No | Solving Quantitative Reasoning Problems with Language Models | Yes | Yes | 2022 | ["majority voting"] |
74 | ToRA 13B (w/ code) | 43 | 13 | Yes | ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving | Yes | Yes | 2023 | ["code environment", "gpt-4 distillation"] |
75 | GPT-4 | 42.5 | null | No | Sparks of Artificial General Intelligence: Early experiments with GPT-4 | Yes | Yes | 2023 | [] |
76 | SFT-Mistral-7B | 41.8 | 7 | Yes | | | | 2024 | [] |
77 | Llama2-13B-KPMath-Plus | 41 | 13 | No | Key-Point-Driven Data Synthesis with its Enhancement on Mathematical Reasoning | | | 2024 | [] |
78 | ToRA 7B (w/ code) | 40.1 | 7 | Yes | ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving | Yes | Yes | 2023 | ["code environment", "gpt-4 distillation"] |
79 | MathCoder-CL-13B | 35.9 | 13 | Yes | MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning | Yes | No | 2023 | [] |
80 | MuggleMATH-70B | 35.6 | 70 | Yes | MuggleMath: Assessing the Impact of Query and Response Augmentation on Math Reasoning | Yes | No | 2023 | [] |
81 | PaLM 2 (few-shot, k=4, CoT) | 34.3 | null | No | PaLM 2 Technical Report | Yes | No | 2023 | [] |
82 | Minerva 540B | 33.6 | 540 | No | Solving Quantitative Reasoning Problems with Language Models | Yes | No | 2022 | [] |
83 | Minerva 540B (5-shot) | 33.6 | 540 | No | Galactica: A Large Language Model for Science | Yes | No | 2022 | [] |
84 | Shepherd + Mistral-7B (SFT on MetaMATH + PRM RL) | 33 | 7 | Yes | Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations | Yes | No | 2023 | [] |
85 | WizardMath-7B-V1.1 | 33 | 7 | Yes | WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct | Yes | No | 2023 | [] |
86 | Gemini Pro (4-shot) | 32.6 | null | No | Gemini: A Family of Highly Capable Multimodal Models | Yes | Yes | 2023 | [] |
87 | MuggleMATH-13B | 30.7 | 13 | Yes | MuggleMath: Assessing the Impact of Query and Response Augmentation on Math Reasoning | Yes | No | 2023 | [] |
88 | MathCoder-CL-7B | 30.2 | 7 | Yes | MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning | Yes | No | 2023 | [] |
89 | MathCoder-L-13B | 29.9 | 13 | Yes | MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning | Yes | No | 2023 | [] |
90 | Qwen2idae-16x14B (4-shot) | 29.9 | null | Yes | Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks | Yes | No | 2024 | [] |
91 | OpenChat-3.5-1210 7B | 28.9 | 7 | No | OpenChat: Advancing Open-source Language Models with Mixed-Quality Data | Yes | No | 2023 | [] |
92 | OpenChat-3.5 7B | 28.6 | 7 | No | OpenChat: Advancing Open-source Language Models with Mixed-Quality Data | Yes | No | 2023 | [] |
93 | Mixtral 8x7B (maj@4) | 28.4 | null | No | Mixtral of Experts | Yes | Yes | 2024 | [] |
94 | Minerva 62B (4-shot) | 27.6 | 62 | No | Solving Quantitative Reasoning Problems with Language Models | Yes | Yes | 2022 | [] |
95 | MetaMath 70B | 26 | 70 | Yes | MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models | Yes | No | 2023 | ["fine-tuned"] |
96 | MuggleMATH 7B | 25.8 | 7 | Yes | MuggleMath: Assessing the Impact of Query and Response Augmentation on Math Reasoning | Yes | No | 2023 | [] |
97 | Minerva 8B (maj1@k, k=64) | 25.4 | 8 | No | Solving Quantitative Reasoning Problems with Language Models | Yes | Yes | 2022 | ["majority voting"] |
98 | MathCoder-L-7B | 23.3 | 7 | Yes | MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning | Yes | No | 2023 | [] |
99 | WizardMath-70B-V1.0 | 22.7 | 70 | Yes | WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct | Yes | No | 2023 | [] |
100 | Camelidae-8×34B (4-shot) | 22.6 | null | Yes | Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks | Yes | No | 2024 | [] |