---
license: cc-by-sa-4.0
task_categories:
- question-answering
language:
- en
tags:
- legal
size_categories:
- n<1K
---
# Bar Exam QA Benchmark 📝

The dataset includes questions from multistate bar exams and answers sourced from expert annotations.

| | |
| --- | --- |
| Task category | t2t |
| Domains | Legal, Written |
| Reference | https://reglab.github.io/legal-rag-benchmarks/ |

This dataset was produced by modifying the reglab `Bar Exam QA` dataset: question and prompt text were combined into a single query, and the expert-annotated passages serve as the answers.

As a benchmark, this dataset is best suited to legal information retrieval and question-answering tasks.
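The preprocessing described above can be sketched in a few lines. This is an illustrative example only: the field names (`prompt`, `question`, `gold_passage`) are hypothetical and not necessarily the dataset's actual column names.

```python
# Hypothetical sketch of the preprocessing described above: merge each
# example's prompt and question text into a single query string, and keep
# the expert-annotated passage as the answer. Field names are illustrative.

def build_example(record: dict) -> dict:
    """Combine prompt and question into one query; use the annotated passage as the answer."""
    parts = (record.get("prompt"), record.get("question"))
    query = " ".join(p for p in parts if p)  # skip missing fields
    return {"query": query, "answer": record["gold_passage"]}

raw = {
    "prompt": "A landlord leased a building to a tenant for five years.",
    "question": "Which doctrine best supports the tenant's defense?",
    "gold_passage": "Under the doctrine of constructive eviction, ...",
}

example = build_example(raw)
```

The resulting `{"query", "answer"}` pairs match the single-query format this card describes, making the data directly usable for retrieval or QA evaluation.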

# Citation

```bibtex
@inproceedings{zheng2025,
  author = {Zheng, Lucia and Guha, Neel and Arifov, Javokhir and Zhang, Sarah and Skreta, Michal and Manning, Christopher D. and Henderson, Peter and Ho, Daniel E.},
  title = {A Reasoning-Focused Legal Retrieval Benchmark},
  year = {2025},
  series = {CSLAW '25 (forthcoming)}
}
```