Rahka committed on
Commit 49bca5a · 1 Parent(s): 119a213

test README

Files changed (1)
  1. README.md +55 -21
README.md CHANGED
@@ -10,32 +10,40 @@ model-index:
   - task:
       type: text-classification
     dataset:
-      name: mdk_gov_data_titles_clf
       type: and-effect/mdk_gov_data_titles_clf
     metrics:
-    - type: Accuracy (Bezeichnung)
-      value: 0.7
-    - type: Macro Precision (Bezeichnung)
-      value: 0.5
-    - type: Macro Recall (Bezeichnung)
-      value: 0.61
-    - type: Macro F1 (Bezeichnung)
-      value: 0.58
-    - type: Accuracy (Thema)
-      value: 0.92
-    - type: Macro Precision (Thema)
-      value: 0.93
-    - type: Macro Recall (Thema)
-      value: 0.91
-    - type: Macro F1 (Thema)
-      value: 0.9
 ---
 
-# Model Card for Musterdatenkatalog Classifier test
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-
 
 # Model Details
 
@@ -62,7 +70,7 @@ This model is based on bert-base-german-cased and fine-tuned on and-effect/mdk_g
 
 # Direct Use
 
-[More Information Needed]
 
 ## Get Started with Sentence Transformers
 Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
@@ -146,7 +154,7 @@ Users (both direct and downstream) should be made aware of the risks, biases and
 
 <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
 
-[More Information Needed]
 
 ## Training Procedure [optional]
 
@@ -156,6 +164,32 @@ Users (both direct and downstream) should be made aware of the risks, biases and
 
 [More Information Needed]
 
 ### Speeds, Sizes, Times
 
 <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
 
   - task:
       type: text-classification
     dataset:
+      name: and-effect/mdk_gov_data_titles_clf
       type: and-effect/mdk_gov_data_titles_clf
     metrics:
+    - type: evaluate-metric/accuracy
+      value: '0.7'
+      name: Accuracy Bezeichnung
+    - type: evaluate-metric/precision
+      value: '0.5'
+      name: Precision Bezeichnung
+    - type: evaluate-metric/recall
+      value: '0.61'
+      name: Recall Bezeichnung
+    - type: evaluate-metric/f1
+      value: '0.58'
+      name: F1 Bezeichnung
+    - type: evaluate-metric/accuracy
+      value: '0.92'
+      name: Accuracy Thema
+    - type: evaluate-metric/precision
+      value: '0.93'
+      name: Precision Thema
+    - type: evaluate-metric/recall
+      value: '0.91'
+      name: Recall Thema
+    - type: evaluate-metric/f1
+      value: '0.9'
+      name: F1 Thema
 ---
 
+# Model Card for Musterdatenkatalog Classifier
 
 <!-- Provide a quick summary of what the model is/does. -->
 
+[More Information Needed]
 
 # Model Details
 
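The added metadata reports accuracy plus macro-averaged precision, recall, and F1 per label field (Bezeichnung, Thema). The card does not say which library computed them, so the scikit-learn call below is an illustrative assumption, run on dummy labels rather than the real evaluation data:

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Dummy gold and predicted labels for illustration only --
# NOT the real evaluation data behind the numbers above.
y_true = ["Verkehr", "Abfall", "Verkehr", "Bauen"]
y_pred = ["Verkehr", "Abfall", "Abfall", "Bauen"]

acc = accuracy_score(y_true, y_pred)
# Macro averaging weights every class equally, which matches the
# "macro" style of the precision/recall/F1 figures in the metadata.
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(acc, prec, rec, f1)
```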
 
 # Direct Use
 
+This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
 
 ## Get Started with Sentence Transformers
 Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
 
 
 <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
 
+You can find all information about the training data [here](https://huggingface.co/datasets/and-effect/mdk_gov_data_titles_clf).
 
 ## Training Procedure [optional]
 
 
 
 [More Information Needed]
 
+## Training Parameters
+The model was trained with the following parameters:
+
+**DataLoader**:
+`torch.utils.data.dataloader.DataLoader`
+
+**Loss**:
+`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
+
+Hyperparameters:
+```
+{
+    "epochs": [More Information Needed],
+    "evaluation_steps": 0,
+    "evaluator": "NoneType",
+    "max_grad_norm": 1,
+    "optimizer_class": <class 'torch.optim.adamw.AdamW'>,
+    "optimizer_params": {"lr": 2e-05},
+    "scheduler": "WarmupLinear",
+    "steps_per_epoch": null,
+    "warmup_steps": 100,
+    "weight_decay": 0.01
+}
+```
+
+
 ### Speeds, Sizes, Times
 
 <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->