Update README.md
README.md (changed)
@@ -62,8 +62,8 @@ The model may reflect biases present in the training data, such as jurisdictiona

 ### Recommendations

--
-- Avoid using for legal tasks where complete precision is mandatory.
+- A legal expert should always review outputs.
+- Avoid using it for legal tasks where complete precision is mandatory.


 ### Training Data
@@ -99,9 +99,8 @@ The model may reflect biases present in the training data, such as jurisdictiona
 - Validation was performed on the `validation` split of the Multi-LexSum dataset, consisting of 4,818 examples.

 #### Metrics
-- **
-- **
-- **ROUGE-L:** 0.49
+- **bert_score Short Summary Precision :** 0.84
+- **bert_score Long Summary Precision :** 0.81

 ### Results
 - The model produces reliable short and long summaries for legal documents, maintaining coherence and relevance.
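For context on the new metric lines, a BERTScore precision over the Multi-LexSum validation split could be reproduced roughly as sketched below. This is only an illustration: the Hugging Face dataset ID (`allenai/multi_lexsum`), its config name, the `summary/short` field, and the summarizer checkpoint (`your-org/legal-summarizer`) are assumptions not stated in this commit, and the card's actual evaluation script may differ.

```python
# Rough sketch of how a BERTScore precision like the ones above could be computed.
# NOTE: the dataset ID/config, field names, and model checkpoint below are
# assumptions for illustration, not values taken from this commit.
from datasets import load_dataset
from transformers import pipeline
from bert_score import score

# Multi-LexSum validation split (the card reports 4,818 examples).
ds = load_dataset("allenai/multi_lexsum", name="v20230518", split="validation")

# Hypothetical summarization checkpoint; substitute the model this card describes.
summarizer = pipeline("summarization", model="your-org/legal-summarizer")

# Keep only cases that actually have a short reference summary, and use a
# small sample so the sketch stays cheap to run.
sample = [ex for ex in ds if ex["summary/short"]][:8]
sources = [" ".join(ex["sources"]) for ex in sample]
references = [ex["summary/short"] for ex in sample]
candidates = [out["summary_text"] for out in summarizer(sources, truncation=True)]

# bert_score returns (precision, recall, F1) tensors; the card reports precision.
P, R, F1 = score(candidates, references, lang="en")
print(f"Short-summary BERTScore precision: {P.mean().item():.2f}")
```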