---
task_categories:
  - question-answering
  - summarization
language:
  - en
tags:
  - numeric
  - arithmetic
  - math
pretty_name: NumericBench
size_categories:
  - 10K<n<100K
configs:
  - config_name: arithmetic_operation
    data_files: arithmetic_operation/*.json
  - config_name: mixed_number_sting
    data_files: mixed_number_sting/*.json
  - config_name: num_list
    data_files: num_list/*.json
  - config_name: sequence
    data_files: sequence/*.json
  - config_name: stock-single-turn
    data_files: stock/single-turn/*.json
  - config_name: stock-multi-turn
    data_files: stock/multi-turn/*.json
  - config_name: weather-single-turn
    data_files: weather/single-turn/*.json
  - config_name: weather-multi-turn
    data_files: weather/multi-turn/*.json
---

# Introduction

NumericBench is a comprehensive benchmark for evaluating the numerical reasoning capabilities of Large Language Models (LLMs), targeting their known weaknesses in arithmetic, number recognition, contextual retrieval, comparison, summarization, and logical reasoning. By combining synthetic datasets, such as number lists, with real-world domains like stock trends and weather patterns, NumericBench systematically tests LLMs in both structured and noisy contexts. Experiments on models such as GPT-4o and DeepSeek-V3 reveal significant weaknesses, underscoring the need for numerically-aware modeling to improve LLMs' real-world applicability.

GitHub repo: https://github.com/TreeAI-Lab/NumericBench

# How to use it?

## Loading Data

```python
from huggingface_hub import hf_hub_download
import pandas as pd

dataset_name_list = [
    "arithmetic_operation/context_arithmetic_operation.json",
    "arithmetic_operation/arithmetic_operation.json",
    "mixed_number_sting/mixed_number_string_500_per_sample.json",
    "num_list/num_list_500_per_sample_1000_length.json",
    "num_list/num_list_500_per_sample_100_length.json",
    "sequence/sequence_500_sample_100_length.json",
    "stock/single-turn/stock_500_per_sample_150_length.json",
    "stock/single-turn/stock_500_per_sample_300_length.json",
    "stock/multi-turn/stock_multi_turn_100_per_sample_100_length.json",
    "weather/single-turn/weather_500_per_sample_200_length.json",
    "weather/single-turn/weather_500_per_sample_400_length.json",
    "weather/multi-turn/weather_multi_turn_100_per_sample_100_length.json",
]

REPO_ID = "TreeAILab/NumericBench"

# Download each file from the Hub and keep the resulting DataFrames keyed by path.
datasets = {}
for dataset_name in dataset_name_list:
    datasets[dataset_name] = pd.read_json(
        hf_hub_download(repo_id=REPO_ID, filename=dataset_name, repo_type="dataset")
    )
```

Alternatively, you can download the dataset from this link.

# Data Format

Because the full contents are long, `"..."` indicates omitted text.

## single-turn

```json
{
    "system_prompt": "...",
    "system_prompt_cot": "...",
    "description": "...",
    "data": [
        {
            "idx": 0,
            "question_index": 0,
            "question": "What is the result of A + B? Please round the answer to two decimal places. ",
            "struct_data": "{'A': 6.755, 'B': -1.225}",
            "answer": 5.53,
            "ability": "1-digit integer with 3 decimal places"
        },
        {
            "idx": 1,
            "question_index": 1,
            "question": "What is the result of A - B? Please round the answer to two decimal places. ",
            "struct_data": "{'A': 6.755, 'B': -1.225}",
            "answer": 7.98,
            "ability": "1-digit integer with 3 decimal places"
        }
    ]
}
```
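
Note that `struct_data` is stored as a Python-literal string (single quotes), not strict JSON, so `json.loads` will reject it; the standard library's `ast.literal_eval` parses it safely. A minimal sketch of checking a model prediction against one record, using the example values above:

```python
import ast

# A sample record in the shape shown above (values copied from the example).
record = {
    "question": "What is the result of A + B? Please round the answer to two decimal places. ",
    "struct_data": "{'A': 6.755, 'B': -1.225}",
    "answer": 5.53,
}

# struct_data uses single quotes, so parse it as a Python literal, not JSON.
operands = ast.literal_eval(record["struct_data"])
predicted = round(operands["A"] + operands["B"], 2)
print(predicted == record["answer"])  # True for this record
```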

## multi-turn

```json
{
    "system_prompt": "...",
    "system_prompt_cot": "...",
    "description": "...",
    "data": [
        {
            "idx": 0,
            "multi_turn_QA": [
                {
                    "turn_index": 0,
                    "question_index": 6,
                    "struct_data": "...",
                    "question": "...",
                    "answer": "F",
                    "ability": "contextual retrieval"
                }, ...
            ]
        }, ...
    ]
}
```
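
For scoring it is often convenient to flatten each conversation into one row per turn. A minimal sketch over a toy payload in the shape above (the second turn's values are invented for illustration):

```python
# Hypothetical multi-turn payload following the structure shown above.
payload = {
    "data": [
        {
            "idx": 0,
            "multi_turn_QA": [
                {"turn_index": 0, "answer": "F", "ability": "contextual retrieval"},
                {"turn_index": 1, "answer": "T", "ability": "comparison"},  # invented turn
            ],
        }
    ]
}

# Flatten conversations into (conversation_idx, turn_index, answer) rows.
rows = [
    (conv["idx"], turn["turn_index"], turn["answer"])
    for conv in payload["data"]
    for turn in conv["multi_turn_QA"]
]
print(rows)  # [(0, 0, 'F'), (0, 1, 'T')]
```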

# Evaluation

## Dataset statistics