Commit 8710755
1 Parent(s): ee98ef9
Update README.md
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-license:
+license: cc-by-nc-sa-4.0
 tags:
 - generated_from_trainer
 metrics:
@@ -11,17 +11,30 @@ model-index:
 - name: req_mod_ner_modelv2
   results: []
 widget:
-- text:
-
-
-
-- text:
+- text: >-
+    De Oplossing ondersteunt het zoeken op de metadata van zaken, documenten en
+    objecten en op gegevens uit de basisregistraties die gekoppeld zijn aan een
+    zaak.
+- text: >-
+    De Oplossing ondersteunt parafering en het plaatsen van een gecertificeerde
+    elektronische handtekening.
+- text: >-
+    De Aangeboden oplossing stelt de medewerker in staat een zaak te
+    registreren.
+- text: >-
+    Het Financieel systeem heeft functionaliteit om een debiteurenadministratie
+    te voeren.
+- text: >-
+    Als gebruiker wil ik dat de oplossing mij naar zaken laat zoeken op basis
+    van zaaknummer, zaaktitel, omschrijving en datum.
+language:
+- nl
 ---
 
 
 # req_mod_ner_modelv2
 
-This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-ner](https://huggingface.co/pdelobelle/robbert-v2-dutch-ner) on a private dataset with 300 sentences/phrases with
+This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-ner](https://huggingface.co/pdelobelle/robbert-v2-dutch-ner) on a private dataset with 300 sentences/phrases with 1,954 token labels (IOB2 format) aimed at extracting software requirement related named entities. The following labels are used:
 - Actor (used for all types of software users and groups of users)
 - COTS (abbreviation for Commercial Off-The-Shelf Software)
 - Function (used for functions, functionality, features)
@@ -29,7 +42,11 @@ This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-ner](https://
 - Entity (used for all entities stored/processed by the software)
 - Attribute (used for attributes of entities)
 
-
+Please contact me via [LinkedIn](https://www.linkedin.com/in/denizayhan/) if you have any questions about this model or the dataset used.
+
+The dataset and this model were created as part of the final project assignment of the Natural Language Understanding course (XCS224U) from the Professional AI Program of the Stanford School of Engineering.
+
+The model achieves the following results on the evaluation set:
 - Loss: 0.6791
 - Precision: 0.7515
 - Recall: 0.7299
@@ -47,17 +64,14 @@ It achieves the following results on the evaluation set:
 | Entity | 0.78 | 0.83 | 0.81 | 35 |
 | Attribute | 0.92 | 0.71 | 0.80 | 31 |
 
-## Model description
-
-More information needed
 
 ## Intended uses & limitations
 
-
+The model performs automated extraction of functionality concepts from source documents for which software requirements are needed. Its intended use is as a preceding processing step for Question-Answering.
 
 ## Training and evaluation data
 
-
+The model was trained on the req_mod_ner dataset. This dataset is private and contains 300 sentences/phrases and 1,954 IOB2 labels.
 
 ## Training procedure
 
@@ -99,4 +113,4 @@ The following hyperparameters were used during training:
 - Transformers 4.24.0
 - Pytorch 2.0.0
 - Datasets 2.9.0
-- Tokenizers 0.11.0
+- Tokenizers 0.11.0
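A minimal inference sketch (not part of the commit) showing how the fine-tuned token-classification model described in this card could be loaded with the Hugging Face transformers pipeline and run on one of the Dutch widget examples. The Hub repository id below is a placeholder assumption; substitute the actual id of this repository.

```python
# Minimal usage sketch for req_mod_ner_modelv2, assuming the versions pinned
# in the card (Transformers 4.24.0, Tokenizers 0.11.0).
from transformers import pipeline

# Placeholder Hub id -- replace with the real "<owner>/req_mod_ner_modelv2" repository id.
MODEL_ID = "req_mod_ner_modelv2"

# Token-classification pipeline; "simple" aggregation merges IOB2 sub-word
# predictions into whole entity spans.
ner = pipeline("token-classification", model=MODEL_ID, aggregation_strategy="simple")

# One of the widget examples from the card (a Dutch requirement sentence).
text = "De Aangeboden oplossing stelt de medewerker in staat een zaak te registreren."

for ent in ner(text):
    # Expected entity groups per the card: Actor, COTS, Function, Entity, Attribute.
    print(ent["entity_group"], ent["word"], round(ent["score"], 3))
```

Per the intended use stated in the card, the extracted spans could then feed a downstream Question-Answering step.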