- **Task:** Extractive QA, Evidence Extraction
- **Dataset:** [ViWikiFC](https://arxiv.org/abs/2405.07615)

QATCForQuestionAnswering uses XLM-RoBERTa as its pre-trained language model. We further enhance it with a Token Classification mechanism, allowing the model not only to predict answer spans but also to classify tokens as part of rationale selection. During training, we introduce a Rationale Regularization Loss consisting of sparsity and continuity constraints, which encourages precise, interpretable token-level predictions: the model learns to identify relevant rationale tokens while keeping its token selection coherent.
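The regularizer described above can be sketched as follows. This is a minimal illustration, not the released implementation: the function name, the loss weights, and the exact penalty forms (mean rationale probability for sparsity, absolute difference between adjacent token probabilities for continuity) are assumptions for demonstration.

```python
import torch


def rationale_regularization_loss(token_probs, attention_mask,
                                  lambda_sparse=0.1, lambda_cont=0.1):
    """Illustrative sparsity + continuity regularizer.

    token_probs:    [batch, seq_len] rationale probabilities per token.
    attention_mask: [batch, seq_len] 1 for real tokens, 0 for padding.
    """
    mask = attention_mask.float()
    # Sparsity: penalize the average rationale probability so that
    # only a small number of tokens are selected as evidence.
    sparsity = (token_probs * mask).sum() / mask.sum().clamp(min=1.0)
    # Continuity: penalize jumps between adjacent token probabilities
    # so that selected rationale tokens form contiguous spans.
    diffs = (token_probs[:, 1:] - token_probs[:, :-1]).abs()
    pair_mask = mask[:, 1:] * mask[:, :-1]
    continuity = (diffs * pair_mask).sum() / pair_mask.sum().clamp(min=1.0)
    return lambda_sparse * sparsity + lambda_cont * continuity
```

In training, this term would be added to the span-prediction and token-classification losses with small weights, so it shapes the rationale distribution without dominating the main objectives.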

---

## Usage Example