Update README.md
README.md (changed)
@@ -182,7 +182,7 @@ Below are the detailed results of our model's performance across all test sets:
 #### Expanded Test Set Results
 Comparison data is sourced from Wang et al. (2023), used various models as encoding layer:
 - bert-mini (MinBert)
-- bert-
+- bert-tiny (TinBert)
 - roberta-base (RoBERTa)
 - deberta-v3-base (DeBERTa)
 - Chem_GraphCodeBert (GraphCodeBert)
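For reference, a minimal sketch of how the listed comparison encoders could be loaded with Hugging Face `transformers`. The hub IDs below (`prajjwal1/bert-mini`, `prajjwal1/bert-tiny`, `roberta-base`, `microsoft/deberta-v3-base`) are assumptions made for illustration; `Chem_GraphCodeBert` is a project-specific checkpoint whose location is not given here, and the actual setup in Wang et al. (2023) may differ.

```python
# Sketch only: load one of the baseline encoding layers listed above.
# Hub IDs are illustrative assumptions, not confirmed by this README.
from transformers import AutoModel, AutoTokenizer

ENCODER_CHECKPOINTS = {
    "MinBert": "prajjwal1/bert-mini",
    "TinBert": "prajjwal1/bert-tiny",
    "RoBERTa": "roberta-base",
    "DeBERTa": "microsoft/deberta-v3-base",
    # "GraphCodeBert": Chem_GraphCodeBert checkpoint (location not stated here)
}

def load_encoder(name: str):
    """Return (tokenizer, model) for one of the comparison encoding layers."""
    checkpoint = ENCODER_CHECKPOINTS[name]
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModel.from_pretrained(checkpoint)
    return tokenizer, model

# Example usage: encode a sample input with the DeBERTa baseline.
tokenizer, model = load_encoder("DeBERTa")
outputs = model(**tokenizer("example input", return_tensors="pt"))
print(outputs.last_hidden_state.shape)
```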