ragerri committed · Commit 1e263a5 · verified · 1 Parent(s): 13c18d2

Update README.md

Files changed (1)
  1. README.md +10 -10
README.md CHANGED
@@ -42,33 +42,33 @@ widget:
 
 This model is a fine-tuned version of [mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) for a **novel extractive task**
 which consists of **identifying the explanation of the correct answer** written by medical doctors. The model
- has been fine-tuned using the multilingual [https://huggingface.co/datasets/HiTZ/casimedicos-squad](https://huggingface.co/datasets/HiTZ/casimedicos-squad) dataset.
+ has been fine-tuned using the multilingual [https://huggingface.co/datasets/HiTZ/casimedicos-squad](https://huggingface.co/datasets/HiTZ/casimedicos-squad) dataset,
+ which includes English, French, Italian and Spanish.
 
 
 ## Performance
 
- F1 partial match scores (as defined in [SQuAD extractive QA task](https://huggingface.co/datasets/rajpurkar/squad_v2) are reported in the following
- table:
+ The model scores **74.64 F1 partial match** (as defined in the [SQuAD extractive QA task](https://huggingface.co/datasets/rajpurkar/squad_v2)), averaged across the 4 languages.
 
- <img src="https://raw.githubusercontent.com/hitz-zentroa/multilingual-abstrct/main/resources/multilingual-abstrct-results.png" style="width: 75%;">
+ <!--<img src="https://raw.githubusercontent.com/hitz-zentroa/multilingual-abstrct/main/resources/multilingual-abstrct-results.png" style="width: 75%;"> -->
- ### Training hyperparameters
+ ### Fine-tuning hyperparameters
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
- - train_batch_size: 16
+ - train_batch_size: 48
 - eval_batch_size: 8
- - seed: 42
+ - seed: random
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
- - num_epochs: 3.0
+ - num_epochs: 20.0
 
 ### Framework versions
 
- - Transformers 4.40.0.dev0
+ - Transformers 4.30.0.dev0
 - Pytorch 2.1.2+cu121
 - Datasets 2.16.1
 - Tokenizers 0.15.2
 
- **Contact**: [Anar Yeginbergen](https://ixa.ehu.eus/node/13807?language=en) and [Rodrigo Agerri](https://ragerri.github.io/)
+ **Contact**: [Iakes Goenaga](http://www.hitz.eus/es/node/65) and [Rodrigo Agerri](https://ragerri.github.io/)
 HiTZ Center - Ixa, University of the Basque Country UPV/EHU
 
 
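The card documents an extractive QA model, so a minimal usage sketch may help. The hub ID below is a placeholder (the repository name is not visible on this page), and the question and context strings are purely illustrative:

```python
from transformers import pipeline

# Placeholder hub ID: substitute the actual HiTZ repository name.
qa = pipeline("question-answering", model="HiTZ/<model-name>")

# The task is extractive: the model selects the span of the doctor's
# commentary that explains why the correct option is correct.
result = qa(
    question="Why is option 3 the correct answer?",  # illustrative
    context="<clinical case text plus the medical doctor's commentary>",
)
print(result["answer"], result["score"])
```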
 
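"F1 partial match" in the updated card refers to the token-overlap F1 of the official SQuAD evaluation script. A minimal sketch of that computation, omitting SQuAD's answer normalization (lowercasing, stripping punctuation and articles):

```python
from collections import Counter

def squad_f1(prediction: str, gold: str) -> float:
    """Token-level F1 ("partial match") as in the SQuAD evaluation script."""
    pred_tokens = prediction.split()
    gold_tokens = gold.split()
    # Tokens shared by prediction and gold, counted as a multiset intersection.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

Partial credit for overlapping spans is what distinguishes this score from exact match.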
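The hyperparameters reported above map onto the Hugging Face `Trainer` API. A sketch of the equivalent configuration, with assumptions flagged: `output_dir` is hypothetical, treating the reported batch size as per-device is a guess, and `seed: random` is noted in a comment because `TrainingArguments` defaults to a fixed seed of 42:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mdeberta-v3-casimedicos",  # hypothetical path
    learning_rate=5e-5,
    per_device_train_batch_size=48,  # card reports train_batch_size: 48
    per_device_eval_batch_size=8,
    num_train_epochs=20.0,
    lr_scheduler_type="linear",
    adam_beta1=0.9,     # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,  # epsilon=1e-08
    # seed is left at the Trainer default here; the card reports "seed: random",
    # i.e. no fixed seed was used for the reported run.
)
```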