---
task_categories:
- question-answering
- summarization
language:
- en
tags:
- numeric
- arithmetic
- math
pretty_name: NumericBench
size_categories:
- 10K<n<100K
configs:
- config_name: arithmetic_operation
  data_files: arithmetic_operation/*.json
- config_name: mixed_number_sting
  data_files: mixed_number_sting/*.json
- config_name: num_list
  data_files: num_list/*.json
- config_name: sequence
  data_files: sequence/*.json
- config_name: stock-single-turn
  data_files: stock/single-turn/*.json
- config_name: stock-multi-turn
  data_files: stock/multi-turn/*.json
- config_name: weather-single-turn
  data_files: weather/single-turn/*.json
- config_name: weather-multi-turn
  data_files: weather/multi-turn/*.json
---
# Introduction
NumericBench is a comprehensive benchmark designed to evaluate the numerical reasoning capabilities of Large Language Models (LLMs), addressing their limitations in tasks such as arithmetic, number recognition, contextual retrieval, comparison, summarization, and logical reasoning. By incorporating diverse datasets, ranging from synthetic number lists to real-world domains such as stock trends and weather patterns, NumericBench systematically tests LLMs in both structured and noisy contexts. Experiments on models such as GPT-4o and DeepSeek-V3 reveal significant weaknesses, underscoring the need for numerically aware modeling to improve LLMs' real-world applicability.
GitHub repo: https://github.com/TreeAI-Lab/NumericBench
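
To make the configurations above concrete, here is a minimal loading sketch using the `datasets` library. The Hub repository ID (`TreeAI-Lab/NumericBench`) is inferred from the GitHub organization and is an assumption; the configuration names are the ones declared in the YAML metadata.

```python
# Minimal loading sketch. The Hub repo ID is an assumption; the config names
# ("arithmetic_operation", "stock-single-turn", ...) come from the card metadata.
from datasets import load_dataset

# Load one synthetic configuration (basic arithmetic problems).
arithmetic = load_dataset("TreeAI-Lab/NumericBench", "arithmetic_operation")

# Real-world configurations follow the same pattern, e.g. single-turn stock QA.
stock_single_turn = load_dataset("TreeAI-Lab/NumericBench", "stock-single-turn")

# Inspect the available splits and their sizes for each configuration.
for name, ds in [("arithmetic_operation", arithmetic),
                 ("stock-single-turn", stock_single_turn)]:
    print(name, {split: len(ds[split]) for split in ds})
```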