---
license: cc-by-sa-4.0
task_categories:
- question-answering
language:
- en
tags:
- legal
size_categories:
- n<1K
dataset_info:
- config_name: default
  features:
  - name: query-id
    dtype: string
  - name: corpus-id
    dtype: string
  - name: score
    dtype: float64
  splits:
  - name: test
    num_examples: 117
- config_name: corpus
  features:
  - name: _id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: corpus
    num_examples: 116
- config_name: queries
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: queries
    num_examples: 117
configs:
- config_name: default
  data_files:
  - split: test
    path: data/default.jsonl
- config_name: corpus
  data_files:
  - split: corpus
    path: data/corpus.jsonl
- config_name: queries
  data_files:
  - split: queries
    path: data/queries.jsonl
---
# Bar Exam QA Benchmark
The dataset contains questions from multistate bar exams, with answer passages sourced from expert annotations.
|               |                                                |
|---------------|------------------------------------------------|
| Task category | t2t                                            |
| Domains       | Legal, Written                                 |
| Reference     | https://reglab.github.io/legal-rag-benchmarks/ |
This dataset was derived from the reglab Bar Exam QA dataset by combining each example's prompt and question text into a single query and using the expert-annotated passages as answers.
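For illustration only, the merge step described above might look like the following sketch; the `prompt` and `question` field names are assumptions about the upstream reglab dataset, not documented column names.

```python
def build_query(prompt: str, question: str) -> str:
    """Join a bar-exam prompt and its question into one retrieval query,
    mirroring the preprocessing described above (field names are assumed)."""
    parts = [part.strip() for part in (prompt, question) if part and part.strip()]
    return " ".join(parts)

# Hypothetical upstream record, for illustration:
record = {
    "prompt": "A buyer and a seller signed a written contract ...",
    "question": "Is the contract enforceable?",
}
query = build_query(record["prompt"], record["question"])
```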
As a benchmark, this dataset is best suited to legal information retrieval and question-answering tasks.
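The three configs declared in the YAML header can be loaded and joined with the 🤗 `datasets` library. The snippet below is a minimal sketch: the repo id is a placeholder to be replaced with this dataset's actual path on the Hub, while the config, split, and column names come from the header above.

```python
from datasets import load_dataset

# Placeholder repo id -- substitute this dataset's actual Hub path.
REPO_ID = "<org>/<dataset-name>"

# Relevance judgments linking queries to passages: (query-id, corpus-id, score).
qrels = load_dataset(REPO_ID, "default", split="test")
# Passage collection: (_id, title, text).
corpus = load_dataset(REPO_ID, "corpus", split="corpus")
# Query collection: (_id, text).
queries = load_dataset(REPO_ID, "queries", split="queries")

# Index passages and queries by id, then resolve each judgment into a
# (query text, passage text, relevance score) triple.
corpus_by_id = {row["_id"]: row for row in corpus}
queries_by_id = {row["_id"]: row for row in queries}

for judgment in qrels:
    query_text = queries_by_id[judgment["query-id"]]["text"]
    passage_text = corpus_by_id[judgment["corpus-id"]]["text"]
    relevance = judgment["score"]
```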
## Citation
```bibtex
@inproceedings{zheng2025reasoning,
  author = {Zheng, Lucia and Guha, Neel and Arifov, Javokhir and Zhang, Sarah and Skreta, Michal and Manning, Christopher D. and Henderson, Peter and Ho, Daniel E.},
  title = {A Reasoning-Focused Legal Retrieval Benchmark},
  year = {2025},
  series = {CSLAW '25 (forthcoming)}
}
```