---
language:
- en
size_categories:
- 10K<n<100K
task_categories:
- question-answering
- summarization
- table-question-answering
pretty_name: NumericBench
tags:
- numeric
- arithmetic
- math
configs:
- config_name: arithmetic_operation
data_files:
- split: test
path: arithmetic_operation/*.json
- config_name: mixed_number_string
data_files:
- split: test
path: mixed_number_string/*.json
- config_name: num_list
data_files:
- split: test
path: num_list/*.json
- config_name: sequence
data_files:
- split: test
path: sequence/*.json
- config_name: stock-single-turn
data_files:
- split: test
path: stock/single-turn/*.json
- config_name: stock-multi-turn
data_files:
- split: test
path: stock/multi-turn/*.json
- config_name: weather-single-turn
data_files:
- split: test
path: weather/single-turn/*.json
- config_name: weather-multi-turn
data_files:
- split: test
path: weather/multi-turn/*.json
---
# Introduction
**NumericBench** is a comprehensive benchmark designed to evaluate the numerical reasoning capabilities of Large Language Models, addressing their limitations in tasks like arithmetic, number recognition, contextual retrieval, comparison, summarization, and logical reasoning. By incorporating diverse datasets ranging from synthetic number lists to real-world domains like stock trends and weather patterns, NumericBench systematically tests LLMs in both structured and noisy contexts. Experiments on models such as GPT-4o and DeepSeek-V3 reveal significant weaknesses, emphasizing the need for numerically-aware modeling to enhance LLMs' real-world applicability.
- GitHub repo: https://github.com/TreeAI-Lab/NumericBench
- arXiv paper: https://arxiv.org/abs/2502.11075
- Paper on Hugging Face: https://huggingface.co/papers/2502.11075
# How to use it?
## Loading Data
``` python
from huggingface_hub import hf_hub_download
import json

dataset_name_list = [
    'arithmetic_operation/context_arithmetic_operation.json',
    'arithmetic_operation/arithmetic_operation.json',
    'arithmetic_operation/different_digit_arithmetic_operation.json',
    'mixed_number_string/mixed_number_string_500_per_sample.json',
    'num_list/num_list_500_per_sample_1000_length.json',
    'num_list/num_list_500_per_sample_100_length.json',
    'sequence/sequence_500_sample_100_length.json',
    'stock/single-turn/stock_500_per_sample_150_length.json',
    'stock/single-turn/stock_500_per_sample_300_length.json',
    'stock/multi-turn/stock_multi_turn_100_per_sample_100_length.json',
    'weather/single-turn/weather_500_per_sample_200_length.json',
    'weather/single-turn/weather_500_per_sample_400_length.json',
    'weather/multi-turn/weather_multi_turn_100_per_sample_100_length.json',
]

REPO_ID = "TreeAILab/NumericBench"

# Download each JSON file from the Hub and keep the parsed datasets keyed by filename.
datasets = {}
for dataset_name in dataset_name_list:
    with open(hf_hub_download(repo_id=REPO_ID, filename=dataset_name, repo_type="dataset")) as f:
        datasets[dataset_name] = json.load(f)
```
Alternatively, you can download the dataset from [this link](https://huggingface.co/datasets/TreeAILab/NumericBench/resolve/main/dataset.zip?download=true).
## Data Format
Because the full contents are long, `...` marks omitted material in the examples below.
### single-turn
``` json
{
"system_prompt": "...",
"system_prompt_cot": "...",
"description": "...",
"data": [
{
"idx": 0,
"question_index": 0,
"question": "What is the result of A + B? Please round the answer to two decimal places. ",
"struct_data": "{'A': 6.755, 'B': -1.225}",
"answer": 5.53,
"ability": "1-digit integer with 3 decimal places"
}, ...
]
}
```
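Note that `struct_data` is a Python-literal string rather than strict JSON (it uses single quotes), so `ast.literal_eval` is a convenient way to parse it. A minimal sketch using the sample record shown above:

``` python
import ast

# Sample single-turn record, copied from the format example above.
record = {
    "idx": 0,
    "question_index": 0,
    "question": "What is the result of A + B? Please round the answer to two decimal places. ",
    "struct_data": "{'A': 6.755, 'B': -1.225}",
    "answer": 5.53,
    "ability": "1-digit integer with 3 decimal places",
}

# struct_data uses single quotes, so json.loads would fail;
# ast.literal_eval parses it safely as a Python literal.
operands = ast.literal_eval(record["struct_data"])
result = round(operands["A"] + operands["B"], 2)
print(result, result == record["answer"])  # 5.53 True
```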
### multi-turn
``` json
{
"system_prompt": "...",
"system_prompt_cot": "...",
"description": "...",
"data": [
{
"idx": 0,
"multi_turn_QA": [
{
"turn_index": 0,
"question_index": 6,
"struct_data": "...",
"question": "...",
"answer": "F",
"ability": "contextual retrieval"
}, ...
]
}, ...
]
}
```
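Each multi-turn record groups a conversation's worth of QA pairs under `multi_turn_QA`, each turn carrying its own question, structured context, and gold answer. A minimal sketch of iterating over the turns, using a toy record in the shape above (the second turn's values are invented for illustration):

``` python
# Toy multi-turn record in the shape shown above (second turn is hypothetical).
record = {
    "idx": 0,
    "multi_turn_QA": [
        {"turn_index": 0, "question_index": 6, "struct_data": "...",
         "question": "...", "answer": "F", "ability": "contextual retrieval"},
        {"turn_index": 1, "question_index": 9, "struct_data": "...",
         "question": "...", "answer": "T", "ability": "comparison"},
    ],
}

# Turns are ordered by turn_index; evaluate each turn's answer in sequence.
for turn in sorted(record["multi_turn_QA"], key=lambda t: t["turn_index"]):
    print(turn["turn_index"], turn["ability"], turn["answer"])
```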
# Dataset statistics
<p align="center">
<img src="./figure/data_statistics.png" width=900>
</p>
For more details, please refer to our paper.
# Citation
```
@misc{li2025exposingnumeracygapsbenchmark,
title={Exposing Numeracy Gaps: A Benchmark to Evaluate Fundamental Numerical Abilities in Large Language Models},
      author={Haoyang Li and Xuejia Chen and Zhanchao Xu and Darian Li and Nicole Hu and Fei Teng and Yiming Li and Luyu Qiu and Chen Jason Zhang and Qing Li and Lei Chen},
year={2025},
eprint={2502.11075},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.11075},
}
```