nlile committed
Commit a16675c · verified · 1 Parent(s): 3b1b0d6

Update README.md

Files changed (1):
  1. README.md +72 -15
README.md CHANGED
@@ -39,35 +39,92 @@ language:
  - en
  tags:
  - math
  ---

- # LLM Leaderboard Data for Hendrycks MATH Dataset

- A Papers with Code attempt to aggregate yearly (2022-2024) LLM/Foundation model performance on Hendrycks' MATH evaluation.

- Data converted from source: [Math Word Problem Solving on MATH](https://paperswithcode.com/sota/math-word-problem-solving-on-math)

- ## Evaluation

- Introduced by Hendrycks et al. in *Measuring Mathematical Problem Solving With the MATH Dataset*. MATH is a dataset comprising 12,500 challenging competition mathematics problems. Each problem in MATH has a full step-by-step solution useful for teaching models to generate answer derivations and explanations.

- ## Visualizations

- ### Model Accuracy Trends

- ![Model Accuracy Trends](img/model_accuracy_trends.png)

- *Figure 1: Trends in model accuracy from 2022 to 2024, illustrating improvement rates over time.*

- ### Top 20% Model Accuracy

- ![Top 20% Model Accuracy](img/top_20_percent_accuracy.png)

- *Figure 2: Accuracy distribution among the top 20% performing models on the Hendrycks MATH dataset.*

- ### Standard Deviation vs Median (Top 20%)

- ![Top 20% Standard Deviation vs Median](img/top_20_std_vs_median.png)

- *Figure 3: Relationship between standard deviation and median accuracy scores for the top 20% models.*
  - en
  tags:
  - math
+ - llm
+ - benchmarks
+ - saturation
+ - evaluation
+ - leaderboard
+ - parameter-scaling
  ---

+ # LLM Leaderboard Data for Hendrycks MATH Dataset (2022–2024)

+ This dataset aggregates yearly performance (2022–2024) of large language models (LLMs) on the Hendrycks MATH benchmark. It is compiled to explore performance evolution, benchmark saturation, parameter-scaling trends, and evaluation metrics of foundation models solving complex math word problems.

+ Original source data: [Math Word Problem Solving on MATH (Papers with Code)](https://paperswithcode.com/sota/math-word-problem-solving-on-math)

+ ## About Hendrycks' MATH Benchmark

+ Introduced by Hendrycks et al., the [MATH dataset](https://arxiv.org/abs/2103.03874) includes 12,500 challenging competition math problems, each accompanied by a detailed step-by-step solution. These problems provide an ideal setting for evaluating and training AI models in advanced mathematical reasoning.

+ ## Dataset Highlights

+ - **Performance Evolution**: Significant increase in accuracy over three years (benchmark saturation analysis).
+ - **Parameter Scaling**: Insight into how model size (parameter count) correlates with accuracy improvements.
+ - **Benchmark Saturation**: Clear evidence of performance brackets becoming saturated, indicating the need for new and more challenging mathematical reasoning benchmarks.

+ ## Key Insights from the Dataset (2022–2024)

+ - **Rapid Accuracy Gains**: Top model accuracy rose sharply, from approximately 65% in 2022 to nearly 90% in 2024.
+ - **Performance Bracket Saturation**: The number of models achieving over 80% accuracy increased significantly, illustrating benchmark saturation and a potential ceiling in the current dataset's difficulty (a quick way to check this is sketched below).
+ - **Efficiency in Parameter Scaling**: Smaller models now reach accuracy levels that previously required much larger parameter counts, showing efficiency gains alongside the overall rise in accuracy.
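
+ A minimal sketch for checking these trends yourself is shown below. It is not part of the dataset tooling: the repo id comes from the citation section at the bottom of this card, while the `year` and `accuracy` column names (and the 0–100 accuracy scale) are assumptions that may need adjusting to the actual CSV headers.

+ ```python
+ from datasets import load_dataset

+ # Repo id taken from the citation section below.
+ df = load_dataset("nlile/math_benchmark_test_saturation")["train"].to_pandas()

+ # NOTE: "year" and "accuracy" are assumed column names (accuracy on a 0-100 scale);
+ # check df.columns for the actual headers before running.
+ best_per_year = df.groupby("year")["accuracy"].max()
+ over_80_per_year = df[df["accuracy"] > 80].groupby("year").size()

+ print(best_per_year)       # top reported accuracy each year
+ print(over_80_per_year)    # how many entries clear the 80% bracket
+ ```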
 
+ ## Dataset Structure

+ - **Number of Examples**: 112
+ - **Data Format**: CSV (converted from Papers with Code)
+ - **Features include**:
+   - Model ranking and year-specific accuracy
+   - Parameter counts and extra training data
+   - Direct links to relevant academic papers and model code

+ ## Practical Usage

+ Here's how to quickly load and interact with the dataset:

+ ```python
+ from datasets import load_dataset

+ # Load the leaderboard table and convert it to a pandas DataFrame
+ # (repo id taken from the citation section below).
+ data = load_dataset("nlile/math_benchmark_test_saturation")
+ df = data['train'].to_pandas()
+ df.head()
+ ```
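
+ Because the exact field names come from the Papers with Code export, it is worth listing them before filtering. The follow-up sketch below continues from the `df` above; `accuracy` is a placeholder column name to replace with whatever `df.columns` actually reports.

+ ```python
+ # Inspect the actual field names first; the name below is a placeholder.
+ print(df.columns.tolist())

+ # Example: the ten strongest entries by reported accuracy.
+ top_models = df.sort_values("accuracy", ascending=False).head(10)
+ print(top_models)
+ ```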
 
+ ## Visualizations

+ ### Model Accuracy Improvement (2022–2024)
+ ![Model Accuracy Trends](img/model_accuracy_trends.png)
+ *Rapid growth in top accuracy, indicating that the benchmark is approaching saturation.*

+ ### Accuracy Distribution Among Top 20%
+ ![Top 20% Model Accuracy](img/top_20_percent_accuracy.png)
+ *Sharp increase in the number of high-performing models over three years.*

+ ### Parameter Scaling and Model Accuracy
+ ![Standard Deviation vs Median Accuracy](img/top_20_std_vs_median.png)
+ *Visualizing consistency in accuracy improvements and the diminishing returns from scaling model parameters.*
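
+ The PNGs above are pre-rendered, but a similar accuracy-trend view can be regenerated from the raw data. The sketch below reuses the same hypothetical `year` and `accuracy` columns as earlier and uses matplotlib, which is not otherwise required by this dataset.

+ ```python
+ import matplotlib.pyplot as plt
+ from datasets import load_dataset

+ df = load_dataset("nlile/math_benchmark_test_saturation")["train"].to_pandas()

+ # Best reported accuracy per year (column names are assumptions).
+ best = df.groupby("year")["accuracy"].max()

+ plt.plot(best.index, best.values, marker="o")
+ plt.xlabel("Year")
+ plt.ylabel("Best reported accuracy (%)")
+ plt.title("Top accuracy on Hendrycks MATH, 2022-2024")
+ plt.savefig("model_accuracy_trends_reproduced.png")
+ ```
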
+ ## Citation

+ Please cite both the original Hendrycks MATH dataset paper and this dataset aggregation/analysis:

+ **MATH Dataset:**

+ ```bibtex
+ @article{hendrycks2021math,
+   title={Measuring Mathematical Problem Solving With the MATH Dataset},
+   author={Hendrycks, Dan and Burns, Collin and Basart, Steven and Zou, Andy and Mazeika, Mantas and Song, Dawn and Steinhardt, Jacob},
+   journal={arXiv preprint arXiv:2103.03874},
+   year={2021}
+ }
+ ```

+ **This dataset:**

+ ```bibtex
+ @misc{nlile2024mathbenchmark,
+   author = {nlile},
+   title = {LLM Leaderboard Data for Hendrycks MATH Dataset (2022-2024): Benchmark Saturation and Performance Trends},
+   year = {2024},
+   publisher = {Hugging Face},
+   url = {https://huggingface.co/datasets/nlile/math_benchmark_test_saturation/}
+ }
+ ```