---

# biomedtra-small for QA

This model was trained as part of the "Extractive QA Biomedicine" project developed during the 2022 [Hackathon](https://somosnlp.org/hackathon) organized by SOMOS NLP.

## Motivation

Given the existence of masked language models trained on Spanish biomedical corpora, the objective of this project is to use them to generate extractive QA models for biomedicine and to compare their effectiveness with that of general-domain masked language models.

The models trained during the [Hackathon](https://somosnlp.org/hackathon) were:

- [hackathon-pln-es/roberta-base-bne-squad2-es](https://huggingface.co/hackathon-pln-es/roberta-base-bne-squad2-es)
- [hackathon-pln-es/roberta-base-biomedical-clinical-es-squad2-es](https://huggingface.co/hackathon-pln-es/roberta-base-biomedical-clinical-es-squad2-es)
- [hackathon-pln-es/roberta-base-biomedical-es-squad2-es](https://huggingface.co/hackathon-pln-es/roberta-base-biomedical-es-squad2-es)
- [hackathon-pln-es/biomedtra-small-es-squad2-es](https://huggingface.co/hackathon-pln-es/biomedtra-small-es-squad2-es)

## Description
This model is a fine-tuned version of [mrm8488/biomedtra-small-es](https://huggingface.co/mrm8488/biomedtra-small-es) on the [squad_es (v2)](https://huggingface.co/datasets/squad_es) training dataset.
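
As an illustration, the fine-tuned checkpoint can be queried with the `transformers` question-answering pipeline. This is a minimal sketch: the question and context below are invented examples, not taken from the evaluation set.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hugging Face Hub
# (downloaded on first use).
qa = pipeline(
    "question-answering",
    model="hackathon-pln-es/biomedtra-small-es-squad2-es",
)

# Invented example question/context, for illustration only.
result = qa(
    question="¿Qué enzima inhibe la aspirina?",
    context=(
        "La aspirina inhibe la enzima ciclooxigenasa, "
        "reduciendo la producción de prostaglandinas."
    ),
)
print(result["answer"], result["score"])
```

Because the model was tuned on squad_es v2, which includes unanswerable questions, the pipeline may return a low-confidence or empty span when the context does not contain an answer.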

## Results

Evaluated on the [hackathon-pln-es/biomed_squad_es_v2](https://huggingface.co/datasets/hackathon-pln-es/biomed_squad_es_v2) dataset.

The model was trained for 5 epochs, choosing the epoch with the best f1 score.

| Model                                                          | Base Model Domain | exact   | f1      | HasAns_exact | HasAns_f1 | NoAns_exact | NoAns_f1 |
|----------------------------------------------------------------|-------------------|---------|---------|--------------|-----------|-------------|----------|
| hackathon-pln-es/roberta-base-bne-squad2-es                    | General           | 67.6341 | 75.6988 | 53.7367      | 70.0526   | 81.2174     | 81.2174  |
| hackathon-pln-es/roberta-base-biomedical-clinical-es-squad2-es | Biomedical        | 66.8426 | 75.2346 | 53.0249      | 70.0031   | 80.3478     | 80.3478  |
| hackathon-pln-es/roberta-base-biomedical-es-squad2-es          | Biomedical        | 67.6341 | 74.5612 | 47.6868      | 61.7012   | 87.1304     | 87.1304  |
| hackathon-pln-es/biomedtra-small-es-squad2-es                  | Biomedical        | 29.6394 | 36.317  | 32.2064      | 45.716    | 27.1304     | 27.1304  |
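
Since squad_es v2 contains both answerable and unanswerable questions, each overall score is the average of the HasAns and NoAns subset scores weighted by subset size; in this evaluation set there are 562 answerable and 575 unanswerable examples. A quick sanity check of the biomedtra-small row:

```python
# Subset sizes of the biomed_squad_es_v2 evaluation set:
# 562 answerable (HasAns), 575 unanswerable (NoAns).
HAS_ANS_TOTAL, NO_ANS_TOTAL = 562, 575

# HasAns_exact and NoAns_exact from the biomedtra-small-es-squad2-es row.
has_ans_exact, no_ans_exact = 32.2064, 27.1304

# Overall exact score = size-weighted average of the two subsets.
overall_exact = (
    HAS_ANS_TOTAL * has_ans_exact + NO_ANS_TOTAL * no_ans_exact
) / (HAS_ANS_TOTAL + NO_ANS_TOTAL)

print(round(overall_exact, 4))  # 29.6394, matching the table
```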
## Team
Santiago Maximo: [smaximo](https://huggingface.co/smaximo)