Moryjj committed (verified) · Commit 72b7ec5 · Parent(s): 36b2054

Update README.md

Files changed (1):
  1. README.md +8 -4
README.md CHANGED
@@ -12,21 +12,25 @@ tags:
  - Simplification
  - text-to-text
  ---
- # Persian Simplification Model (ParsT5 Base)
+ # Persian Simplification Model (parsT5 Base)

  ---

  ## Overview

- This model is a fine-tuned version of ParsT5 (base) designed explicitly for the Persian Simplification Task. The training data consists of Persian legal texts. The model is trained using supervised fine-tuning and employs the **Unlimiformer Algorithm** to handle large inputs effectively.
+ This model is a fine-tuned ParsT5 (base) version designed explicitly for the Persian Simplification Task. The training data consists of Persian legal texts. The model is trained using supervised fine-tuning and employs the **Unlimiformer Algorithm** to handle large inputs effectively.

- - **Architecture**: ParsT5-base
+ - **Architecture**: Ahmad/parsT5-base
  - **Language**: Persian
  - **Task**: Text Simplification
  - **Training Setup**:
  - **Algorithm for reducing computation**: Unlimiformer
  - **Epochs**: 12
  - **Hardware**: NVIDIA GPU 4070
+ - **Trainable Blocks**: Last Encoder-Decoder
+ - **Optimizer** : AdamW + lr_scheduler
+ - **Input max Tokens**: 4096
+ - **Output max Tokens**: 512

  ---

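To make the new training-setup fields concrete, here is a minimal inference sketch. It assumes the standard `transformers` seq2seq API and a placeholder repo id (the base checkpoint named above is `Ahmad/parsT5-base`); the Unlimiformer wrapper used during training is a separate library and is omitted here, so long inputs are simply truncated to the stated 4096-token limit:

```python
# Hedged inference sketch; MODEL_ID is a hypothetical placeholder, and the
# Unlimiformer long-input mechanism is omitted (inputs are truncated instead).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "your-username/persian-simplifier"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)

legal_text = "..."  # a long Persian legal passage

# Mirror the limits listed in the diff: 4096 input tokens, 512 output tokens.
inputs = tokenizer(legal_text, return_tensors="pt", truncation=True, max_length=4096)
output_ids = model.generate(**inputs, max_new_tokens=512, num_beams=4)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
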
@@ -45,7 +49,7 @@ The following table summarizes the readability scores for the original texts and

  ## Evaluation Results

- The fine-tuned model was evaluated using **Rouge** and **BERTScore** metrics. For comparison, the performance of two other Persian LLMs based on LLaMA is also presented:
+ The fine-tuned model was evaluated using **Rouge** and **BERTScore (mBERT)** metrics. For comparison, the performance of two other Persian LLMs based on LLaMA is also presented:


  | Prediction Model | Rouge1 | Rouge2 | RougeL | Precision | Recall | F1 |
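
Since the hunk above names the exact metrics, a hedged evaluation sketch may help reproduce them. It assumes the Hugging Face `evaluate` library and `bert-base-multilingual-cased` as the mBERT backbone; note that ROUGE's default tokenizer targets Latin scripts, so Persian text may need a custom `tokenizer` callable:

```python
# Hedged evaluation sketch: ROUGE + BERTScore with an assumed mBERT backbone.
import evaluate

predictions = ["..."]  # model simplifications (Persian)
references = ["..."]   # gold reference simplifications (Persian)

rouge = evaluate.load("rouge")
bertscore = evaluate.load("bertscore")

# ROUGE-1/2/L; the default tokenizer is Latin-script oriented, so consider
# rouge.compute(..., tokenizer=my_persian_tokenizer) for Persian text.
rouge_scores = rouge.compute(predictions=predictions, references=references)

# BERTScore precision/recall/f1, scored per example with mBERT embeddings.
bert_scores = bertscore.compute(
    predictions=predictions,
    references=references,
    model_type="bert-base-multilingual-cased",  # assumed "mBERT"
)

print(rouge_scores)
print({
    metric: sum(values) / len(values)
    for metric, values in bert_scores.items()
    if metric in ("precision", "recall", "f1")
})
```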