Update README.md

QATCForQuestionAnswering uses XLM-RoBERTa as its pre-trained language model and extends it with a token-classification mechanism, so the model not only predicts answer spans but also classifies tokens as part of rationale selection. During training we introduce a Rationale Regularization Loss, composed of sparsity and continuity constraints, to encourage precise and interpretable token-level predictions: the model learns to identify the relevant rationale tokens while keeping the selection coherent.
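
The README does not spell out this loss; below is a minimal sketch of how sparsity and continuity penalties over token-selection probabilities are commonly combined. The function name, the `lambda_*` weights, and the L1-style formulation are illustrative assumptions, not the paper's exact definition:

```python
import torch

def rationale_regularization_loss(token_probs, attention_mask,
                                  lambda_sparsity=0.1, lambda_continuity=0.1):
    """Sketch of a sparsity + continuity penalty over rationale-token probabilities.

    token_probs:    (batch, seq_len) probability that each token is a rationale token
    attention_mask: (batch, seq_len) 1 for real tokens, 0 for padding
    """
    mask = attention_mask.float()
    # Sparsity: penalize the average selection probability so few tokens are chosen.
    sparsity = (token_probs * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)
    # Continuity: penalize jumps between adjacent tokens so selections form contiguous spans.
    jumps = (token_probs[:, 1:] - token_probs[:, :-1]).abs()
    pair_mask = mask[:, 1:] * mask[:, :-1]
    continuity = (jumps * pair_mask).sum(dim=1) / pair_mask.sum(dim=1).clamp(min=1.0)
    return (lambda_sparsity * sparsity + lambda_continuity * continuity).mean()
```

A term like this would simply be added to the usual span-extraction loss during training.
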
## Usage Example

**Direct Model Usage**

```python
evidence = extract_evidence_tfidf_qatc(...)
print(evidence)
# evidence: sau khi thống nhất việt nam tiếp tục gặp khó khăn do sự sụp đổ và tan rã của đồng minh liên xô cùng khối phía đông các lệnh cấm vận của hoa kỳ chiến tranh với campuchia biên giới giáp trung quốc và hậu quả của chính sách bao cấp sau nhiều năm áp dụng
# (English: after reunification, Vietnam continued to face difficulties due to the collapse and disintegration of its ally, the Soviet Union, together with the Eastern Bloc; the US embargoes; the wars with Cambodia and on the border with China; and the aftermath of the subsidy policy after many years of application)
```
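
The arguments to `extract_evidence_tfidf_qatc` are not shown in this excerpt. A self-contained setup might look like the sketch below; the import paths, checkpoint name, and keyword arguments are assumptions inferred from the SemViQA repository, so verify them against the linked source code:

```python
import torch
from transformers import AutoTokenizer

# Assumed import paths; check the SemViQA repository for the actual layout.
from semviqa.ser.qatc_model import QATCForQuestionAnswering
from semviqa.ser.ser_eval import extract_evidence_tfidf_qatc

model_id = "SemViQA/qatc-infoxlm-viwikifc"  # hypothetical checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = QATCForQuestionAnswering.from_pretrained(model_id)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device).eval()

claim = "..."    # the claim to verify
context = "..."  # the passage to extract evidence from

evidence = extract_evidence_tfidf_qatc(
    claim, context, model, tokenizer, device,
    confidence_threshold=0.5,    # assumed parameter
    length_ratio_threshold=0.6,  # assumed parameter
)
print(evidence)
```
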
## **Evaluation Results**
**SemViQA-QATC** plays a crucial role in the **SemViQA** system by enhancing accuracy in evidence extraction. When integrated into a pipeline, this model helps determine whether a claim is supported or refuted based on retrieved evidence.
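
As a rough illustration of that pipeline (reusing the objects from the usage example above; `classify_claim` and the verdict labels are hypothetical stand-ins for SemViQA's actual classification component):

```python
def verify(claim: str, context: str) -> str:
    # Stage 1: QATC extracts the evidence span most relevant to the claim.
    evidence = extract_evidence_tfidf_qatc(claim, context, model, tokenizer, device)
    # Stage 2: a verdict classifier (hypothetical here) labels the claim
    # against that evidence, e.g. "SUPPORTED", "REFUTED", or "NEI".
    return classify_claim(claim, evidence)
```
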
## **Citation**

If you use **SemViQA-QATC** in your research, please cite the SemViQA paper:
🔗 **Paper Link:** [SemViQA on arXiv](https://arxiv.org/abs/2503.00955)
🔗 **Source Code:** [GitHub - SemViQA](https://github.com/DAVID-NGUYEN-S16/SemViQA)
## About
*Built by Dien X. Tran*