---
language: fa
pretty_name: Iranian Legal Question Answering Dataset (Farsi)
tags:
- Farsi
- Persian
- legal
- juridical
- Iran
- QA
- question
- answer
task_categories:
- question-answering
license: cc0-1.0
---
# Iranian Legal Question Answering Dataset (Farsi)
This dataset contains over 570k questions and more than 1.9m answers, all in written form. The questions were posed by ordinary Persian speakers (Iranians), while the answers were provided by attorneys from various specialties.
## Dataset Description
Question records without corresponding answers have been excluded from the dataset.
This dataset will be updated periodically with new records.
The source of this dataset is the [dadrah.ir](https://dadrah.ir/) website.
## Usage
<details>
<summary>Code examples</summary>

Hugging Face `datasets` library:
```python
from datasets import load_dataset

dataset = load_dataset('PerSets/iran-legal-persian-qa')
```
Pandas library:
```python
import os

import pandas as pd

# Collect all train shards in the current directory.
data_files = [file for file in os.listdir() if file.startswith("train") and file.endswith(".jsonl")]
df = pd.DataFrame()
for file in data_files:
    df = pd.concat([df, pd.read_json(file, lines=True)], ignore_index=True)
```
Vanilla Python: <br>
(very slow; not recommended)
```python
import json
import os

# Collect all train shards in the current directory.
data_files = [file for file in os.listdir() if file.startswith("train") and file.endswith(".jsonl")]
train = []
for file in data_files:
    with open(file, encoding="utf-8") as f:
        for line in f:
            train.append(json.loads(line))
```
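With more than 1.9m answers, building the full `train` list at once can strain memory. A generator yields one record at a time instead. This is a minimal, self-contained sketch over synthetic data; the `question`/`answers` field names are illustrative assumptions, not the confirmed schema of this dataset:

```python
import json
import os
import tempfile

# Hypothetical records; the field names here are assumptions and may
# differ from the actual dataset schema.
sample = [
    {"question": "q1", "answers": ["a1"]},
    {"question": "q2", "answers": ["a2", "a3"]},
]

# Write a small synthetic shard so the sketch is self-contained.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "train-00000.jsonl"), "w", encoding="utf-8") as f:
    for record in sample:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

def iter_records(directory):
    """Yield one record at a time instead of building the whole list."""
    for name in sorted(os.listdir(directory)):
        if name.startswith("train") and name.endswith(".jsonl"):
            with open(os.path.join(directory, name), encoding="utf-8") as f:
                for line in f:
                    yield json.loads(line)

# Iterate lazily; only materialize the list if the data fits in memory.
records = list(iter_records(tmpdir))
```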
</details>
## License
CC0