---
license: cc-by-sa-4.0
task_categories:
  - text-retrieval
  - question-answering
language:
  - en
tags:
  - legal
  - law
size_categories:
  - n<1K
source_datasets:
  - reglab/barexam_qa
dataset_info:
  - config_name: default
    features:
      - name: query-id
        dtype: string
      - name: corpus-id
        dtype: string
      - name: score
        dtype: float64
    splits:
      - name: test
        num_examples: 117
  - config_name: corpus
    features:
      - name: _id
        dtype: string
      - name: title
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: corpus
        num_examples: 116
  - config_name: queries
    features:
      - name: _id
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: queries
        num_examples: 117
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/default.jsonl
  - config_name: corpus
    data_files:
      - split: corpus
        path: data/corpus.jsonl
  - config_name: queries
    data_files:
      - split: queries
        path: data/queries.jsonl
pretty_name: Bar Exam QA MTEB Benchmark
---

# Bar Exam QA MTEB Benchmark 🏋️

This is the test split of the Bar Exam QA dataset, reformatted into the Massive Text Embedding Benchmark (MTEB) information retrieval dataset format.

This dataset is intended to facilitate the consistent and reproducible evaluation of information retrieval models on Bar Exam QA with the `mteb` embedding model evaluation framework.

More specifically, this dataset tests the ability of information retrieval models to identify legal provisions relevant to US bar exam questions.

This dataset has been processed into the MTEB format by Isaacus, a legal AI research company.
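
As a rough illustration, an `mteb` evaluation over this dataset might look like the sketch below. The task name `BarExamQA` and the example model are assumptions, not something this dataset defines; check the `mteb` task registry for the exact task identifier.

```python
# Minimal sketch of an mteb retrieval evaluation, assuming this dataset is
# registered under the (hypothetical) task name "BarExamQA".
import mteb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # example model, swap in your own

tasks = mteb.get_tasks(tasks=["BarExamQA"])  # task name is an assumption
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results")
```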

## Methodology 🧪

To understand how Bar Exam QA was created, refer to its documentation.

This dataset was formatted by concatenating the `prompt` and `question` columns of the source data, delimited by a single space (or, where there was no `prompt`, using the `question` alone), into queries (or anchors), and by treating the `gold_passage` column as the relevant (or positive) passages.
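
A minimal sketch of that transformation, assuming the source columns load under the names given above (the split name, and whether a missing prompt arrives as `None` or an empty string, are assumptions):

```python
from datasets import load_dataset

# Load the source dataset (the "test" split name is an assumption).
source = load_dataset("reglab/barexam_qa", split="test")

def to_query(row: dict) -> str:
    """Concatenate prompt and question with a single space, falling back to
    the question alone when there is no prompt."""
    prompt = (row.get("prompt") or "").strip()
    return f"{prompt} {row['question']}" if prompt else row["question"]

queries = [to_query(row) for row in source]           # queries (anchors)
positives = [row["gold_passage"] for row in source]   # relevant (positive) passages
```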

## Structure 🗂️

As per the MTEB information retrieval dataset format, this dataset comprises three configs: `default`, `corpus` and `queries`.

The `default` config has a single `test` split that pairs queries (`query-id`) with relevant passages (`corpus-id`), each pair having a `score` of 1.

The `corpus` config contains the relevant passages from Bar Exam QA, with the text of a passage stored in the `text` key and its id stored in the `_id` key.

The `queries` config contains the queries, with the text of a query stored in the `text` key and its id stored in the `_id` key.
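
The three configs can be loaded and joined with the `datasets` library along the following lines (a sketch; the repo id below is assumed to be this dataset's Hub id):

```python
from datasets import load_dataset

repo = "isaacus/mteb-barexam-qa"  # assumed Hub repo id for this dataset

qrels = load_dataset(repo, "default", split="test")       # query-id, corpus-id, score
corpus = load_dataset(repo, "corpus", split="corpus")     # _id, title, text
queries = load_dataset(repo, "queries", split="queries")  # _id, text

# Map each query to its relevant passage ids (every pair has score 1.0).
relevant: dict[str, list[str]] = {}
for pair in qrels:
    relevant.setdefault(pair["query-id"], []).append(pair["corpus-id"])

passage_text = {row["_id"]: row["text"] for row in corpus}
query_text = {row["_id"]: row["text"] for row in queries}
```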

## License 📜

To the extent that any intellectual property rights reside in the contributions made by Isaacus in formatting and processing this dataset, Isaacus licenses those contributions under the same license terms as the source dataset. You are free to use this dataset without citing Isaacus.

The source dataset is licensed under CC BY-SA 4.0.

## Citation 🔖

@inproceedings{Zheng_2025,
  title     = {A Reasoning-Focused Legal Retrieval Benchmark},
  author    = {Zheng, Lucia and Guha, Neel and Arifov, Javokhir and Zhang, Sarah and Skreta, Michal and Manning, Christopher D. and Henderson, Peter and Ho, Daniel E.},
  booktitle = {Proceedings of the Symposium on Computer Science and Law},
  series    = {CSLAW '25},
  publisher = {ACM},
  year      = {2025},
  month     = mar,
  pages     = {169--193},
  doi       = {10.1145/3709025.3712219},
  url       = {http://dx.doi.org/10.1145/3709025.3712219},
  eprint    = {2505.03970}
}