---
dataset_info:
  features:
    - name: rank
      dtype: int64
    - name: model
      dtype: string
    - name: accuracy
      dtype: float64
    - name: parameters
      dtype: float64
    - name: extra_training_data
      dtype: string
    - name: paper
      dtype: string
    - name: code
      dtype: string
    - name: result
      dtype: string
    - name: year
      dtype: int64
    - name: tags
      sequence: string
  splits:
    - name: train
      num_bytes: 19092
      num_examples: 112
  download_size: 9472
  dataset_size: 19092
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - question-answering
  - text-generation
language:
  - en
tags:
  - math
  - llm
  - benchmarks
  - saturation
  - evaluation
  - leaderboard
  - parameter-scaling
---

# LLM Leaderboard Data for Hendrycks MATH Dataset (2022–2024)

This dataset aggregates yearly performance (2022–2024) of large language models (LLMs) on the Hendrycks MATH benchmark. It was compiled specifically to explore performance evolution, benchmark saturation, parameter-scaling trends, and evaluation metrics for foundation models solving complex math word problems.

Original source data: Math Word Problem Solving on MATH (Papers with Code)

## About Hendrycks' MATH Benchmark

Introduced by Hendrycks et al., the MATH dataset includes 12,500 challenging competition math problems, each accompanied by detailed solutions. These problems provide an ideal setting for evaluating and training AI models in advanced mathematical reasoning.

## Dataset Highlights

- **Performance Evolution:** a significant increase in accuracy over the three-year span, supporting benchmark-saturation analysis.
- **Parameter Scaling:** insight into how model size (parameter count) correlates with accuracy improvements.
- **Benchmark Saturation:** clear evidence of top performance brackets becoming saturated, indicating the need for new, more challenging mathematical reasoning benchmarks.

## Key Insights from the Dataset (2022–2024)

- **Rapid Accuracy Gains:** top model accuracy jumped dramatically, from approximately 65% in 2022 to nearly 90% in 2024 (see the sketch after this list).
- **Performance Bracket Saturation:** the number of models scoring above 80% accuracy grew sharply, illustrating benchmark saturation and a potential ceiling in what the current dataset can measure.
- **Efficiency in Parameter Scaling:** smaller models now match accuracy levels that previously required far larger parameter counts, underscoring efficiency gains alongside the raw accuracy improvements.
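
These headline numbers can be rechecked directly from the data. Below is a minimal sketch, assuming the column names from the metadata block above and that `accuracy` is stored as a percentage (e.g. `65.0` rather than `0.65`):

```python
from datasets import load_dataset

# Load the leaderboard and convert the train split to pandas.
df = load_dataset("nlile/math_benchmark_test_saturation")["train"].to_pandas()

# Best reported accuracy per year (expected to climb from ~65 toward ~90).
print(df.groupby("year")["accuracy"].max())

# Number of models clearing the 80% bracket each year -- the saturation signal.
print((df["accuracy"] > 80).groupby(df["year"]).sum())
```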

## Dataset Structure

- **Number of Examples:** 112
- **Data Format:** CSV (converted from Papers with Code)
- **Features include:**
  - model ranking and year-specific accuracy
  - parameter counts and extra training data
  - direct links to the relevant academic papers and model code
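
The schema can also be inspected programmatically rather than read off the card; a quick sketch:

```python
from datasets import load_dataset

ds = load_dataset("nlile/math_benchmark_test_saturation")["train"]

print(ds.features)   # column names and dtypes, matching the metadata block
print(ds.num_rows)   # 112
print(ds[0])         # first leaderboard entry as a plain dict
```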

## Practical Usage

Here's how to quickly load and interact with the dataset:

```python
from datasets import load_dataset

# Load the leaderboard and convert the train split to a pandas DataFrame.
data = load_dataset("nlile/math_benchmark_test_saturation")
df = data["train"].to_pandas()
df.head()
```
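
From there, ordinary pandas operations apply. For example, a small illustrative query for the ten highest-accuracy 2024 entries, continuing with `df` from above:

```python
# Ten best 2024 entries, with links to each model's paper and code.
top_2024 = (
    df[df["year"] == 2024]
    .sort_values("accuracy", ascending=False)
    .head(10)[["rank", "model", "accuracy", "parameters", "paper", "code"]]
)
print(top_2024.to_string(index=False))
```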

## Visualizations

### Model Accuracy Improvement (2022–2024)

*Figure: model accuracy trends; rapid growth in top accuracy indicates approaching benchmark saturation.*
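
The original figure is not reproduced here, but a similar trend plot can be regenerated from the dataset. A sketch using matplotlib (an assumption; the card's figures may have been produced with other tooling):

```python
import matplotlib.pyplot as plt
from datasets import load_dataset

df = load_dataset("nlile/math_benchmark_test_saturation")["train"].to_pandas()

# Top and median accuracy per year, 2022-2024.
trend = df.groupby("year")["accuracy"].agg(["max", "median"])
trend.plot(marker="o")
plt.xlabel("year")
plt.ylabel("MATH accuracy (%)")
plt.title("Model accuracy on MATH by year")
plt.show()
```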

### Accuracy Distribution Among Top 20%

*Figure: top-20% model accuracy; a sharp increase in the number of high-performing models over three years.*
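
As a rough stand-in for the original chart, the sketch below counts, per year, how many models clear the global top-20% accuracy cutoff (the exact definition used for the original figure is an assumption):

```python
import matplotlib.pyplot as plt
from datasets import load_dataset

df = load_dataset("nlile/math_benchmark_test_saturation")["train"].to_pandas()

# Global top-20% accuracy cutoff; count qualifying models per year.
cutoff = df["accuracy"].quantile(0.8)
df[df["accuracy"] >= cutoff].groupby("year").size().plot(kind="bar")
plt.ylabel("models in top-20% accuracy bracket")
plt.title("High-performing models per year")
plt.show()
```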

### Parameter Scaling and Model Accuracy

*Figure: standard deviation vs. median accuracy, visualizing the consistency of accuracy improvements and the diminishing returns from scaling model parameters.*
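
The sketch below recomputes per-year accuracy spread against the median, which is one plausible reading of the original figure (the pairing of axes is an assumption):

```python
import matplotlib.pyplot as plt
from datasets import load_dataset

df = load_dataset("nlile/math_benchmark_test_saturation")["train"].to_pandas()

# Per-year spread vs. central tendency of accuracy.
stats = df.groupby("year")["accuracy"].agg(["median", "std"])
plt.scatter(stats["median"], stats["std"])
for year, row in stats.iterrows():
    plt.annotate(str(year), (row["median"], row["std"]))
plt.xlabel("median accuracy (%)")
plt.ylabel("accuracy standard deviation")
plt.title("Consistency of accuracy gains by year")
plt.show()
```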

## Citation

Please cite the original Hendrycks MATH dataset paper and this dataset aggregation/analysis:

**MATH Dataset:**

```bibtex
@article{hendrycks2021math,
  title={Measuring Mathematical Problem Solving With the MATH Dataset},
  author={Hendrycks, Dan and Burns, Collin and Basart, Steven and Zou, Andy and Mazeika, Mantas and Song, Dawn and Steinhardt, Jacob},
  journal={arXiv preprint arXiv:2103.03874},
  year={2021}
}
```
**This dataset:**

```bibtex
@misc{nlile2024mathbenchmark,
  author = {nlile},
  title = {LLM Leaderboard Data for Hendrycks MATH Dataset (2022-2024): Benchmark Saturation and Performance Trends},
  year = {2024},
  publisher = {Hugging Face},
  url = {https://huggingface.co/datasets/nlile/math_benchmark_test_saturation/}
}
```