---
language: fa
pretty_name: Iranian Legal Question Answering Dataset (Farsi)
tags:
- Farsi
- Persian
- legal
- juridical
- Iran
- QA
- question
- answer
task_categories:
- question-answering
license: cc
---

# Iranian Legal Question Answering Dataset (Farsi)

This dataset contains over 570K questions and more than 1.9M answers, all in written form. The questions were asked by ordinary Persian-speaking Iranians, while the answers were provided by attorneys from various legal specialties.

## Dataset Description

Question records without corresponding answers have been excluded from the dataset.

This dataset will be updated periodically with new records.

The data was collected from the [dadrah.ir](https://dadrah.ir/) website.

## Usage
<details>

Hugging Face `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset('PerSets/iran-legal-persian-qa')
```
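A quick way to inspect what was loaded (a minimal sketch, assuming the dataset exposes a `train` split, as the raw file names suggest; field names depend on the dataset schema):
```python
# Show the available splits and the first record
print(dataset)
print(dataset['train'][0])
```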

Pandas library:
```python
import os

import pandas as pd

# Collect all training shards (JSON Lines files) from the current directory
data_files = [file for file in os.listdir() if file.startswith("train") and file.endswith(".jsonl")]

df = pd.DataFrame()
for file in data_files:
    df = pd.concat([df, pd.read_json(file, lines=True)], ignore_index=True)
```
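A quick sanity check on the combined dataframe (a sketch; column names depend on the files' schema):
```python
print(len(df))    # total number of records loaded
print(df.head())  # first few rows
```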

Vanilla Python (slow; not recommended):
```python
import json
import os

# Collect all training shards (JSON Lines files) from the current directory
data_files = [file for file in os.listdir() if file.startswith("train") and file.endswith(".jsonl")]

train = []
for file in data_files:
    with open(file, encoding="utf-8") as f:
        for line in f:
            # Each line is one JSON record
            train.append(json.loads(line))
```
</details>

## License
CC0