denizspynk committed · Commit 11a0baf · 1 Parent(s): 5855d78

update model card README.md

Files changed (1)
  1. README.md +27 -53
README.md CHANGED
@@ -10,14 +10,6 @@ metrics:
  model-index:
  - name: req_mod_ner_modelv2
    results: []
- language:
- - nl
- widget:
- - text: "De Oplossing ondersteunt het zoeken op de metadata van zaken, documenten en objecten en op gegevens uit de basisregistraties die gekoppeld zijn aan een zaak."
- - text: "De Oplossing ondersteunt parafering en het plaatsen van een gecertificeerde elektronische handtekening."
- - text: "De Aangeboden oplossing stelt de medewerker in staat een zaak te registreren."
- - text: "Het Financieel systeem heeft functionaliteit om een debiteurenadministratie te voeren."
- - text: "Als gebruiker wil ik dat de oplossing mij naar zaken laat zoeken op basis van zaaknummer, zaaktitel, omschrijving en datum."
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -25,15 +17,13 @@ should probably proofread and complete it, then remove this comment. -->

  # req_mod_ner_modelv2

- This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-ner](https://huggingface.co/pdelobelle/robbert-v2-dutch-ner) on the req_mod_ner dataset.
- This req_mod_ner dataset is private and currently contains 300 examples (240 train, 35 eval and 35 test).
-
+ This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-ner](https://huggingface.co/pdelobelle/robbert-v2-dutch-ner) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.6964
- - Precision: 0.544
- - Recall: 0.5862
- - F1: 0.5643
- - Accuracy: 0.9153
+ - Loss: 0.7808
+ - Precision: 0.6389
+ - Recall: 0.5948
+ - F1: 0.6161
+ - Accuracy: 0.9217

  ## Model description

@@ -52,50 +42,34 @@ More information needed
  ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 1e-05
- - train_batch_size: 2
- - eval_batch_size: 2
+ - learning_rate: 0.0001
+ - train_batch_size: 1
+ - eval_batch_size: 1
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - num_epochs: 32
+ - num_epochs: 16

  ### Training results

  | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
  |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
- | No log | 1.0 | 120 | 0.6075 | 0.8095 | 0.1466 | 0.2482 | 0.8822 |
- | No log | 2.0 | 240 | 0.4917 | 0.6667 | 0.1897 | 0.2953 | 0.8878 |
- | No log | 3.0 | 360 | 0.4429 | 0.5 | 0.3362 | 0.4021 | 0.8918 |
- | No log | 4.0 | 480 | 0.4255 | 0.5 | 0.4914 | 0.4957 | 0.9007 |
- | 0.507 | 5.0 | 600 | 0.4278 | 0.5085 | 0.5172 | 0.5128 | 0.9007 |
- | 0.507 | 6.0 | 720 | 0.4321 | 0.5294 | 0.5431 | 0.5362 | 0.9064 |
- | 0.507 | 7.0 | 840 | 0.4574 | 0.5410 | 0.5690 | 0.5546 | 0.9064 |
- | 0.507 | 8.0 | 960 | 0.4720 | 0.5804 | 0.5603 | 0.5702 | 0.9096 |
- | 0.1626 | 9.0 | 1080 | 0.4947 | 0.5197 | 0.5690 | 0.5432 | 0.9056 |
- | 0.1626 | 10.0 | 1200 | 0.5013 | 0.5159 | 0.5603 | 0.5372 | 0.9096 |
- | 0.1626 | 11.0 | 1320 | 0.5306 | 0.5271 | 0.5862 | 0.5551 | 0.9104 |
- | 0.1626 | 12.0 | 1440 | 0.5450 | 0.5070 | 0.6207 | 0.5581 | 0.9112 |
- | 0.0687 | 13.0 | 1560 | 0.5753 | 0.5152 | 0.5862 | 0.5484 | 0.9112 |
- | 0.0687 | 14.0 | 1680 | 0.5746 | 0.5547 | 0.6121 | 0.5820 | 0.9169 |
- | 0.0687 | 15.0 | 1800 | 0.5925 | 0.5328 | 0.6293 | 0.5771 | 0.9144 |
- | 0.0687 | 16.0 | 1920 | 0.6200 | 0.5656 | 0.5948 | 0.5798 | 0.9144 |
- | 0.0368 | 17.0 | 2040 | 0.6442 | 0.5583 | 0.5776 | 0.5678 | 0.9169 |
- | 0.0368 | 18.0 | 2160 | 0.6468 | 0.5317 | 0.5776 | 0.5537 | 0.9136 |
- | 0.0368 | 19.0 | 2280 | 0.6563 | 0.5403 | 0.5776 | 0.5583 | 0.9153 |
- | 0.0368 | 20.0 | 2400 | 0.6683 | 0.5323 | 0.5690 | 0.5500 | 0.9104 |
- | 0.0227 | 21.0 | 2520 | 0.6766 | 0.5074 | 0.5948 | 0.5476 | 0.9096 |
- | 0.0227 | 22.0 | 2640 | 0.6784 | 0.4965 | 0.6121 | 0.5483 | 0.9072 |
- | 0.0227 | 23.0 | 2760 | 0.6897 | 0.5583 | 0.5776 | 0.5678 | 0.9144 |
- | 0.0227 | 24.0 | 2880 | 0.6858 | 0.5182 | 0.6121 | 0.5613 | 0.9112 |
- | 0.0146 | 25.0 | 3000 | 0.6828 | 0.5224 | 0.6034 | 0.5600 | 0.9128 |
- | 0.0146 | 26.0 | 3120 | 0.6937 | 0.5528 | 0.5862 | 0.5690 | 0.9169 |
- | 0.0146 | 27.0 | 3240 | 0.6939 | 0.5397 | 0.5862 | 0.5620 | 0.9144 |
- | 0.0146 | 28.0 | 3360 | 0.6934 | 0.5476 | 0.5948 | 0.5702 | 0.9169 |
- | 0.0146 | 29.0 | 3480 | 0.6848 | 0.5147 | 0.6034 | 0.5556 | 0.9120 |
- | 0.0132 | 30.0 | 3600 | 0.6864 | 0.5231 | 0.5862 | 0.5528 | 0.9112 |
- | 0.0132 | 31.0 | 3720 | 0.6948 | 0.544 | 0.5862 | 0.5643 | 0.9161 |
- | 0.0132 | 32.0 | 3840 | 0.6964 | 0.544 | 0.5862 | 0.5643 | 0.9153 |
+ | No log | 1.0 | 240 | 0.4479 | 0.4045 | 0.3103 | 0.3512 | 0.8951 |
+ | No log | 2.0 | 480 | 0.4099 | 0.5224 | 0.6034 | 0.5600 | 0.9112 |
+ | 0.4167 | 3.0 | 720 | 0.4394 | 0.5735 | 0.6724 | 0.6190 | 0.9209 |
+ | 0.4167 | 4.0 | 960 | 0.5204 | 0.6195 | 0.6034 | 0.6114 | 0.9177 |
+ | 0.1551 | 5.0 | 1200 | 0.5692 | 0.5556 | 0.7328 | 0.6320 | 0.9136 |
+ | 0.1551 | 6.0 | 1440 | 0.5518 | 0.5414 | 0.6207 | 0.5783 | 0.9144 |
+ | 0.0727 | 7.0 | 1680 | 0.6763 | 0.616 | 0.6638 | 0.6390 | 0.9201 |
+ | 0.0727 | 8.0 | 1920 | 0.7255 | 0.6204 | 0.5776 | 0.5982 | 0.9153 |
+ | 0.0375 | 9.0 | 2160 | 0.7353 | 0.6667 | 0.5862 | 0.6239 | 0.9225 |
+ | 0.0375 | 10.0 | 2400 | 0.7023 | 0.5862 | 0.5862 | 0.5862 | 0.9144 |
+ | 0.0276 | 11.0 | 2640 | 0.7364 | 0.6053 | 0.5948 | 0.6 | 0.9169 |
+ | 0.0276 | 12.0 | 2880 | 0.7443 | 0.6034 | 0.6034 | 0.6034 | 0.9169 |
+ | 0.0183 | 13.0 | 3120 | 0.7658 | 0.6404 | 0.6293 | 0.6348 | 0.9217 |
+ | 0.0183 | 14.0 | 3360 | 0.7693 | 0.6518 | 0.6293 | 0.6404 | 0.9241 |
+ | 0.0118 | 15.0 | 3600 | 0.7794 | 0.6481 | 0.6034 | 0.625 | 0.9225 |
+ | 0.0118 | 16.0 | 3840 | 0.7808 | 0.6389 | 0.5948 | 0.6161 | 0.9217 |


  ### Framework versions
@@ -103,4 +77,4 @@ The following hyperparameters were used during training:
  - Transformers 4.24.0
  - Pytorch 2.0.0
  - Datasets 2.9.0
- - Tokenizers 0.11.0
+ - Tokenizers 0.11.0
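
The hyperparameters added in this commit map directly onto `transformers` `TrainingArguments`. The following is a minimal sketch only: the `output_dir` is an assumption, the evaluation schedule is not stated in the card, and the private req_mod_ner dataset plus the full `Trainer` setup are omitted.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters introduced by this commit.
# output_dir is assumed; the model card does not specify it.
args = TrainingArguments(
    output_dir="req_mod_ner_modelv2",
    learning_rate=1e-4,             # 0.0001
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    num_train_epochs=16,
    lr_scheduler_type="linear",
    seed=42,                        # Adam betas=(0.9, 0.999), epsilon=1e-8 are the defaults
)
print(args.learning_rate, args.num_train_epochs)
```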
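
The widget sentences removed from the YAML header remain useful as smoke tests for the fine-tuned checkpoint. Below is a hedged inference sketch using the `transformers` token-classification pipeline; the Hub repo id `denizspynk/req_mod_ner_modelv2` is an assumption inferred from the committer and model name, not something stated in the diff.

```python
from transformers import pipeline

# Assumed repo id (committer + model name); adjust if the actual Hub path differs.
ner = pipeline(
    "token-classification",
    model="denizspynk/req_mod_ner_modelv2",
    aggregation_strategy="simple",  # merge word pieces into whole-entity spans
)

# One of the removed widget examples
# ("The offered solution enables the employee to register a case."):
text = "De Aangeboden oplossing stelt de medewerker in staat een zaak te registreren."
for ent in ner(text):
    print(ent["entity_group"], ent["word"], round(ent["score"], 3))
```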