Modalities: Text
Formats: json
Languages: English
Size: < 1K
Libraries: Datasets, pandas
License: cc-by-sa-4.0

umarbutler committed · Commit 9b15794 · verified · 1 Parent(s): 0815d13

Shape up README

Files changed (1)
  1. README.md +76 -55
README.md CHANGED
@@ -1,72 +1,93 @@
 ---
 license: cc-by-sa-4.0
 task_categories:
 - question-answering
 language:
 - en
 tags:
 - legal
 size_categories:
 - n<1K
 dataset_info:
-- config_name: default
-  features:
-  - name: query-id
-    dtype: string
-  - name: corpus-id
-    dtype: string
-  - name: score
-    dtype: float64
-  splits:
-  - name: test
-    num_examples: 117
-- config_name: corpus
-  features:
-  - name: _id
-    dtype: string
-  - name: title
-    dtype: string
-  - name: text
-    dtype: string
-  splits:
-  - name: corpus
-    num_examples: 116
-- config_name: queries
-  features:
-  - name: _id
-    dtype: string
-  - name: text
-    dtype: string
-  splits:
-  - name: queries
-    num_examples: 117
-
 configs:
-- config_name: default
-  data_files:
-  - split: test
-    path: data/default.jsonl
-- config_name: corpus
-  data_files:
-  - split: corpus
-    path: data/corpus.jsonl
-- config_name: queries
-  data_files:
-  - split: queries
-    path: data/queries.jsonl
-
 ---
-# Bar Exam QA Benchmark 📝
-The dataset includes questions from multistate bar exams and answers sourced from expert annotations.
-| | |
-| --- | --- |
-| Task category | t2t |
-| Domains | Legal, Written |
-| Reference | https://reglab.github.io/legal-rag-benchmarks/ |

-This dataset was produced by modifying the reglab ```Bar Exam QA``` dataset, by combining question and prompt text into a single query, and using expert annotated passages as answers.
-As a benchmark, this dataset is best designed for legal information retrieval and question answering related tasks.
-# Citation
 ```@inproceedings{zheng2025,
   author = {Zheng, Lucia and Guha, Neel and Arifov, Javokhir and Zhang, Sarah and Skreta, Michal and Manning, Christopher D. and Henderson, Peter and Ho, Daniel E.},
   title = {A Reasoning-Focused Legal Retrieval Benchmark},
 
 ---
 license: cc-by-sa-4.0
 task_categories:
+- text-retrieval
 - question-answering
 language:
 - en
 tags:
 - legal
+- law
 size_categories:
 - n<1K
+source_datasets:
+- reglab/barexam_qa
 dataset_info:
+- config_name: default
+  features:
+  - name: query-id
+    dtype: string
+  - name: corpus-id
+    dtype: string
+  - name: score
+    dtype: float64
+  splits:
+  - name: test
+    num_examples: 117
+- config_name: corpus
+  features:
+  - name: _id
+    dtype: string
+  - name: title
+    dtype: string
+  - name: text
+    dtype: string
+  splits:
+  - name: corpus
+    num_examples: 116
+- config_name: queries
+  features:
+  - name: _id
+    dtype: string
+  - name: text
+    dtype: string
+  splits:
+  - name: queries
+    num_examples: 117
 configs:
+- config_name: default
+  data_files:
+  - split: test
+    path: data/default.jsonl
+- config_name: corpus
+  data_files:
+  - split: corpus
+    path: data/corpus.jsonl
+- config_name: queries
+  data_files:
+  - split: queries
+    path: data/queries.jsonl
+pretty_name: Bar Exam QA MTEB Benchmark
 ---
+# Bar Exam QA MTEB Benchmark 🏋️
+This is the [Bar Exam QA](https://huggingface.co/datasets/reglab/barexam_qa) dataset formatted in the [Massive Text Embedding Benchmark (MTEB)](https://github.com/embeddings-benchmark/mteb) information retrieval dataset format.
+
+This dataset is intended to facilitate the consistent and reproducible evaluation of information retrieval models on Bar Exam QA with the [`mteb`](https://github.com/embeddings-benchmark/mteb) embedding model evaluation framework.
+
+More specifically, this dataset tests the ability of information retrieval models to identify legal provisions relevant to US bar exam questions.
+
+This dataset has been processed into the MTEB format by [Isaacus](https://isaacus.com/), a legal AI research company.
+
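Assuming this dataset is registered as an `mteb` retrieval task, an evaluation run might look like the following minimal sketch. The task name `BarExamQA` and the embedding model are placeholders, not fixed by this dataset card; substitute the actual registered task name and the model you want to benchmark.

```python
# Minimal sketch of evaluating an embedding model with the mteb framework.
# "BarExamQA" is a placeholder task name and "all-MiniLM-L6-v2" a placeholder model.
import mteb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
tasks = mteb.get_tasks(tasks=["BarExamQA"])
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results")
```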
+## Methodology 🧪
+To understand how Bar Exam QA was created, refer to its [documentation](https://huggingface.co/datasets/reglab/barexam_qa).
+
+This dataset was formatted by concatenating the `prompt` and `question` columns of the source data, delimited by a single space (or, where there was no `prompt`, falling back to the `question` alone), into queries (or anchors), and by treating the `gold_passage` column as relevant (or positive) passages.
+
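As an illustration only, the transformation roughly corresponds to the sketch below (not the exact processing script): the source split name `test` and the id scheme are assumptions, and the deduplication that leaves 116 corpus passages for 117 queries is omitted.

```python
# Illustrative sketch of the query/corpus construction described above.
# Assumes the source data is exposed as a "test" split with `prompt`,
# `question` and `gold_passage` columns; ids here are invented.
from datasets import load_dataset

source = load_dataset("reglab/barexam_qa", split="test")

queries, corpus, qrels = [], [], []
for i, row in enumerate(source):
    prompt = (row.get("prompt") or "").strip()
    question = (row.get("question") or "").strip()
    # Concatenate prompt and question with a single space, or fall back to
    # the question alone when there is no prompt.
    query_text = f"{prompt} {question}" if prompt else question

    query_id, corpus_id = f"q{i}", f"c{i}"
    queries.append({"_id": query_id, "text": query_text})
    corpus.append({"_id": corpus_id, "title": "", "text": row["gold_passage"]})
    qrels.append({"query-id": query_id, "corpus-id": corpus_id, "score": 1.0})
```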
+## Structure 🗂️
+As per the MTEB information retrieval dataset format, this dataset comprises three splits, `default`, `corpus` and `queries`.
+
+The `default` split pairs queries (`query-id`) with relevant passages (`corpus-id`), each pair having a `score` of 1.
+
+The `corpus` split contains relevant passages from Bar Exam QA, with the text of a passage being stored in the `text` key and its id being stored in the `_id` key.
+
+The `queries` split contains queries, with the text of a query being stored in the `text` key and its id being stored in the `_id` key.
+
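For example, the three splits can be loaded with the `datasets` library as in the sketch below; the repository id is a placeholder for this dataset's id on the Hugging Face Hub.

```python
# Minimal loading sketch; replace the placeholder repository id with this
# dataset's actual id on the Hugging Face Hub.
from datasets import load_dataset

repo_id = "<this-dataset-repo-id>"  # placeholder

qrels = load_dataset(repo_id, "default", split="test")      # query-id, corpus-id, score
corpus = load_dataset(repo_id, "corpus", split="corpus")     # _id, title, text
queries = load_dataset(repo_id, "queries", split="queries")  # _id, text
```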
+## License 📜
+To the extent that any intellectual property rights reside in the contributions made by Isaacus in formatting and processing this dataset, Isaacus licenses those contributions under the same license terms as the source dataset. You are free to use this dataset without citing Isaacus.
+
+The source dataset is licensed under [CC BY-SA 4.0](https://choosealicense.com/licenses/cc-by-sa-4.0/).

+## Citation 🔖
 ```@inproceedings{zheng2025,
   author = {Zheng, Lucia and Guha, Neel and Arifov, Javokhir and Zhang, Sarah and Skreta, Michal and Manning, Christopher D. and Henderson, Peter and Ho, Daniel E.},
   title = {A Reasoning-Focused Legal Retrieval Benchmark},