---
dataset_info:
  - config_name: question_only
    features:
      - name: qid
        dtype: string
      - name: question
        dtype: string
      - name: question_no_placeholder
        dtype: string
    splits:
      - name: test
        num_bytes: 144919
        num_examples: 100
    download_size: 81631
    dataset_size: 144919
  - config_name: question_with_checklist
    features:
      - name: qid
        dtype: string
      - name: question
        dtype: string
      - name: category
        dtype: string
      - name: checklist_id
        dtype: string
      - name: checklist
        dtype: string
      - name: question_no_placeholder
        dtype: string
      - name: checklist_no_placeholder
        dtype: string
    splits:
      - name: test
        num_bytes: 1197365
        num_examples: 672
    download_size: 161176
    dataset_size: 1197365
configs:
  - config_name: question_only
    data_files:
      - split: test
        path: question_only/test-*
  - config_name: question_with_checklist
    data_files:
      - split: test
        path: question_with_checklist/test-*
---

Dataset Overview

LiveResearchBench provides expert-curated, real-world tasks spanning daily life, enterprise, and academia, each requiring extensive, real-time web search, multi-source reasoning, and cross-domain synthesis. DeepEval offers human-aligned protocols for reliable, systematic evaluation of agentic systems on open-ended deep research tasks.

📌 Quick Links

  • Project Page
  • Paper
  • Codebase

Dataset Fields

Subsets:

  • question_with_checklist: Full dataset with questions and per-question checklists
  • question_only: Questions without checklists

Each entry in the dataset has the following structure:

{
    'qid': 'market6VWmPyxptfK47civ',  # Unique query identifier
    'question': 'What is the size, growth rate...',  # Research question
    'checklists': [  # List of checklist items for coverage evaluation
        'Does the report provide data for the U.S. electric vehicle market...',
        'Does the report discuss the size, growth rate...',
        # ... more items
    ]
}
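
You can also load either subset directly with the Hugging Face datasets library. The sketch below is illustrative; the repo id is a placeholder, so substitute the actual Hub path of this dataset:

from datasets import load_dataset

# Placeholder repo id -- replace with this dataset's actual Hub path.
REPO_ID = "your-org/LiveResearchBench"

# Test split of the full subset, with per-question checklist items.
with_checklist = load_dataset(REPO_ID, "question_with_checklist", split="test")

# Test split of the question-only subset.
question_only = load_dataset(REPO_ID, "question_only", split="test")

print(with_checklist[0]["qid"], with_checklist[0]["checklist_id"])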

Loading the Dataset

Default: Static Mode (No Placeholders)

The default static mode loads questions and checklists with dates already filled in (e.g., 2025 instead of {{current_year}}):

from liveresearchbench.common.io_utils import load_liveresearchbench_dataset

# Load static version 
benchmark_data = load_liveresearchbench_dataset(use_realtime=False)

Example:

  • Question: "What is the size, growth rate, and segmentation of the U.S. electric vehicle market in 2025?"

Realtime Mode

For dynamic evaluation with current dates, use realtime mode:

# Load realtime version (replaces {{current_year}} etc.)
benchmark_data = load_liveresearchbench_dataset(use_realtime=True)

The following placeholders are replaced with values derived from the current date (a sketch of the substitution follows the list):

  • {{current_year}} → 2025 (current year)
  • {{last_year}} → 2024 (current year - 1)
  • {{current_date}} → October 29, 2025 (formatted date)
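
The substitution itself is plain string templating. The sketch below illustrates the documented behavior; it is not necessarily the library's exact implementation:

from datetime import date

def fill_placeholders(text, today=None):
    # Illustrative only -- mirrors the placeholder behavior described above.
    today = today or date.today()
    return (
        text.replace("{{current_year}}", str(today.year))
            .replace("{{last_year}}", str(today.year - 1))
            .replace("{{current_date}}", today.strftime("%B %d, %Y"))
    )

print(fill_placeholders("U.S. electric vehicle market in {{current_year}}"))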

Example:

  • Question: "What is the size, growth rate, and segmentation of the U.S. electric vehicle market in 2025?" (automatically updated each year)

Accessing Questions and Checklists

from liveresearchbench.common.io_utils import (
    load_liveresearchbench_dataset,
    get_question_for_qid,
    get_checklists_for_qid
)

# Load dataset
benchmark_data = load_liveresearchbench_dataset()

# Get question for a specific query ID
qid = "market6VWmPyxptfK47civ"
question = get_question_for_qid(benchmark_data, qid)

# Get checklist items for a specific query ID
checklists = get_checklists_for_qid(benchmark_data, qid)
print(f"Found {len(checklists)} checklist items")

Ethical Considerations

This release is for research purposes only, in support of an academic paper. Our datasets and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend that users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying systems built on this dataset. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people's lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.

Citation

If you find this dataset helpful, please consider citing:

@article{sfr2025liveresearchbench,
  title={LiveResearchBench: A Live Benchmark for User-Centric Deep Research in the Wild},
  author={Jiayu Wang and Yifei Ming and Riya Dulepet and Qinglin Chen and Austin Xu and Zixuan Ke and Frederic Sala and Aws Albarghouthi and Caiming Xiong and Shafiq Joty},
  year={2025},
  url={https://arxiv.org/abs/2510.14240}
}