---
license: cc-by-sa-4.0
task_categories:
- text-retrieval
- question-answering
language:
- en
tags:
- legal
- law
size_categories:
- n<1K
source_datasets:
- reglab/barexam_qa
dataset_info:
- config_name: default
  features:
  - name: query-id
    dtype: string
  - name: corpus-id
    dtype: string
  - name: score
    dtype: float64
  splits:
  - name: test
    num_examples: 117
- config_name: corpus
  features:
  - name: _id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: corpus
    num_examples: 116
- config_name: queries
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: queries
    num_examples: 117
configs:
- config_name: default
  data_files:
  - split: test
    path: data/default.jsonl
- config_name: corpus
  data_files:
  - split: corpus
    path: data/corpus.jsonl
- config_name: queries
  data_files:
  - split: queries
    path: data/queries.jsonl
pretty_name: Bar Exam QA MTEB Benchmark
---
# Bar Exam QA MTEB Benchmark πŸ‹
This is the test split of the [Bar Exam QA](https://huggingface.co/datasets/reglab/barexam_qa) dataset formatted in the [Massive Text Embedding Benchmark (MTEB)](https://github.com/embeddings-benchmark/mteb) information retrieval dataset format.

This dataset is intended to facilitate the consistent and reproducible evaluation of information retrieval models on Bar Exam QA with the [`mteb`](https://github.com/embeddings-benchmark/mteb) embedding model evaluation framework.

More specifically, this dataset tests the ability of information retrieval models to identify legal provisions relevant to US bar exam questions.

This dataset has been processed into the MTEB format by [Isaacus](https://isaacus.com/), a legal AI research company.

## Methodology πŸ§ͺ
To understand how Bar Exam QA was created, refer to its [documentation](https://huggingface.co/datasets/reglab/barexam_qa).

Queries (or anchors) were formed by concatenating the `prompt` and `question` columns of the source data, delimited by a single space (or, where there was no `prompt`, by using the `question` alone). The `gold_passage` column was treated as the relevant (or positive) passage for each query.

## Structure πŸ—‚οΈ
As per the MTEB information retrieval dataset format, this dataset comprises three splits: `default`, `corpus`, and `queries`.

The `default` split pairs queries (`query-id`) with relevant passages (`corpus-id`), each pair having a `score` of 1.

The `corpus` split contains relevant passages from Bar Exam QA, with the text of a passage being stored in the `text` key and its id being stored in the `_id` key.

The `queries` split contains queries, with the text of a query being stored in the `text` key and its id being stored in the `_id` key.
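For illustration, a record in each split looks like the following (the IDs and text below are invented examples, not actual rows from the dataset):

```python
import json

# A relevance judgment from the `default` split: every query-passage
# pair in this dataset carries a score of 1.
qrel = json.loads('{"query-id": "q1", "corpus-id": "p1", "score": 1.0}')

# A passage from the `corpus` split.
passage = json.loads(
    '{"_id": "p1", "title": "", '
    '"text": "A valid contract requires offer, acceptance, and consideration."}'
)

# A query from the `queries` split.
query = json.loads('{"_id": "q1", "text": "What is required to form a contract?"}')

# The qrel links a query to its relevant passage by id.
assert qrel["query-id"] == query["_id"]
assert qrel["corpus-id"] == passage["_id"]
```

This is the standard qrels/corpus/queries layout that the `mteb` framework's retrieval tasks consume.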

## License πŸ“œ
To the extent that any intellectual property rights reside in the contributions made by Isaacus in formatting and processing this dataset, Isaacus licenses those contributions under the same license terms as the source dataset. You are free to use this dataset without citing Isaacus.

The source dataset is licensed under [CC BY SA 4.0](https://choosealicense.com/licenses/cc-by-sa-4.0/).

## Citation πŸ”–
```bibtex
@inproceedings{Zheng_2025,
  series={CSLAW ’25},
  title={A Reasoning-Focused Legal Retrieval Benchmark},
  url={http://dx.doi.org/10.1145/3709025.3712219},
  DOI={10.1145/3709025.3712219},
  booktitle={Proceedings of the Symposium on Computer Science and Law},
  publisher={ACM},
  author={Zheng, Lucia and Guha, Neel and Arifov, Javokhir and Zhang, Sarah and Skreta, Michal and Manning, Christopher D. and Henderson, Peter and Ho, Daniel E.},
  year={2025},
  month=mar,
  pages={169–193},
  collection={CSLAW ’25},
  eprint={2505.03970}
}
```