---
dataset_info:
  features:
  - name: task_id
    dtype: string
  - name: query
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: test
    num_bytes: 5175349
    num_examples: 76
  download_size: 1660121
  dataset_size: 5175349
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
license: apache-2.0
language:
- en
- zh
- ja
- es
- el
tags:
- finance
- multilingual
pretty_name: PolyFiQA-Easy
size_categories:
- n<1K
task_categories:
- question-answering
---
# Dataset Card for PolyFiQA-Easy

## Table of Contents

- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
## Dataset Description
- Homepage: https://huggingface.co/collections/TheFinAI/multifinben-6826f6fc4bc13d8af4fab223
- Repository: https://huggingface.co/datasets/TheFinAI/polyfiqa-easy
- Paper: MultiFinBen: A Multilingual, Multimodal, and Difficulty-Aware Benchmark for Financial LLM Evaluation
- Leaderboard: https://huggingface.co/spaces/TheFinAI/Open-FinLLM-Leaderboard
### Dataset Summary
PolyFiQA-Easy is a multilingual financial question-answering dataset designed to evaluate financial reasoning in a simplified setting. Each instance consists of a task identifier, a query prompt, an associated financial question, and the correct answer. The Easy split focuses on queries that can be answered with minimal document retrieval, making it ideal for low-latency or resource-constrained systems.
### Supported Tasks and Leaderboards

- Tasks:
  - question-answering
- Evaluation Metrics:
  - ROUGE-1
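For intuition, ROUGE-1 can be sketched as a unigram-overlap F1 score. The snippet below is a minimal illustration assuming whitespace tokenization, which the official evaluation may not use; in particular, Chinese and Japanese text would need a dedicated tokenizer rather than `str.split()`:

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """Unigram-overlap F1 between a prediction and a single reference.

    Illustrative sketch only: lowercases and splits on whitespace,
    which is a rough approximation for the non-space-delimited
    languages (zh, ja) in this dataset.
    """
    pred = prediction.lower().split()
    ref = reference.lower().split()
    if not pred or not ref:
        return 0.0
    # Clipped unigram overlap: each reference token counts at most once
    # per occurrence, via Counter intersection.
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

An exact prediction scores 1.0; a prediction with no shared unigrams scores 0.0.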
### Languages

- English (en)
- Chinese (zh)
- Japanese (ja)
- Spanish (es)
- Greek (el)
## Dataset Structure

### Data Instances
Each instance in the dataset contains:

- `task_id`: A unique identifier for the query-task pair.
- `query`: A brief query statement from the financial domain.
- `question`: The full question posed based on the query context.
- `answer`: The correct answer string.
### Data Fields

| Field | Type | Description |
|---|---|---|
| task_id | string | Unique ID per task |
| query | string | Financial query (short form) |
| question | string | Full natural-language financial question |
| answer | string | Ground-truth answer to the question |
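The schema above can be checked programmatically. The record below is hypothetical (field values are invented for illustration, not taken from the dataset):

```python
# Hypothetical record shaped like the card's schema; values are invented.
sample = {
    "task_id": "example-0001",
    "query": "Q3 revenue",
    "question": "What was the company's total revenue in Q3?",
    "answer": "$1.2 billion",
}

# All four features are declared as string dtypes in dataset_info.
EXPECTED_FIELDS = {"task_id": str, "query": str, "question": str, "answer": str}

def matches_schema(record: dict) -> bool:
    """True if the record has exactly the expected fields, all strings."""
    return (set(record) == set(EXPECTED_FIELDS)
            and all(isinstance(record[k], t) for k, t in EXPECTED_FIELDS.items()))
```

The test split itself can be loaded with `datasets.load_dataset("TheFinAI/polyfiqa-easy", split="test")`, and each row should satisfy this check.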
### Data Splits

| Split | # Examples | Size (bytes) |
|---|---|---|
| test | 76 | 5,175,349 |
## Dataset Creation

### Curation Rationale
PolyFiQA-Easy was curated to provide a lightweight yet robust benchmark for financial question answering with minimal retrieval burden. It aims to evaluate models’ reasoning on self-contained or short-context questions in finance.
### Source Data

#### Initial Data Collection

The source data comes from a diverse collection of English financial reports. Questions were drawn from real-world financial scenarios and manually adapted to a concise QA format.
#### Source Producers
Data was created by researchers and annotators with backgrounds in finance, NLP, and data curation.
### Annotations

#### Annotation Process
Questions and answers were authored and verified through a multi-step validation pipeline involving domain experts.
#### Annotators
A team of finance researchers and data scientists.
### Personal and Sensitive Information
The dataset contains no personal or sensitive information. All content is synthetic or anonymized for safe usage.
## Considerations for Using the Data

### Social Impact of Dataset
PolyFiQA-Easy contributes to research in financial NLP by enabling multilingual evaluation under constrained settings.
### Discussion of Biases
- May over-represent English financial contexts.
- Questions emphasize clarity and answerability over real-world ambiguity.
### Other Known Limitations
- Limited size (76 examples).
- Focused on easy questions; may not generalize to complex reasoning tasks.
## Additional Information

### Dataset Curators
- The FinAI Team
### Licensing Information
- License: Apache License 2.0
### Citation Information
If you use this dataset, please cite:
```bibtex
@misc{peng2025multifinbenmultilingualmultimodaldifficultyaware,
  title={MultiFinBen: A Multilingual, Multimodal, and Difficulty-Aware Benchmark for Financial LLM Evaluation},
  author={Xueqing Peng and Lingfei Qian and Yan Wang and Ruoyu Xiang and Yueru He and Yang Ren and Mingyang Jiang and Jeff Zhao and Huan He and Yi Han and Yun Feng and Yuechen Jiang and Yupeng Cao and Haohang Li and Yangyang Yu and Xiaoyu Wang and Penglei Gao and Shengyuan Lin and Keyi Wang and Shanshan Yang and Yilun Zhao and Zhiwei Liu and Peng Lu and Jerry Huang and Suyuchen Wang and Triantafillos Papadopoulos and Polydoros Giannouris and Efstathia Soufleri and Nuo Chen and Guojun Xiong and Zhiyang Deng and Yijia Zhao and Mingquan Lin and Meikang Qiu and Kaleb E Smith and Arman Cohan and Xiao-Yang Liu and Jimin Huang and Alejandro Lopez-Lira and Xi Chen and Junichi Tsujii and Jian-Yun Nie and Sophia Ananiadou and Qianqian Xie},
  year={2025},
  eprint={2506.14028},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2506.14028},
}
```