Model save
- README.md +5 -17
- model.safetensors +1 -1
- training_args.bin +1 -1
README.md
CHANGED
@@ -3,11 +3,6 @@ license: mit
 base_model: surrey-nlp/roberta-base-finetuned-abbr
 tags:
 - generated_from_trainer
-metrics:
-- precision
-- recall
-- f1
-- accuracy
 model-index:
 - name: bert-base-NER-finetuned-ner
   results: []
@@ -19,12 +14,6 @@ should probably proofread and complete it, then remove this comment. -->
 # bert-base-NER-finetuned-ner
 
 This model is a fine-tuned version of [surrey-nlp/roberta-base-finetuned-abbr](https://huggingface.co/surrey-nlp/roberta-base-finetuned-abbr) on an unknown dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.4915
-- Precision: 0.8180
-- Recall: 0.8640
-- F1: 0.8404
-- Accuracy: 0.8172
 
 ## Model description
 
@@ -44,16 +33,15 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 2e-05
-- train_batch_size:
+- train_batch_size: 8
 - eval_batch_size: 4
 - seed: 42
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs:
-
-### Training results
-
-
+- num_epochs: 20
+- mixed_precision_training: Native AMP
 
 ### Framework versions
 
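For reference, the hyperparameters listed in the updated card map onto a `Trainer` setup roughly like the sketch below. The dataset, tokenization, labels, and output directory are assumptions for illustration; only the argument values are taken from the README diff above.

```python
# Sketch of a Trainer configuration matching the hyperparameters in the updated card.
# Dataset loading, tokenization, and labels are placeholders (not part of this commit).
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "surrey-nlp/roberta-base-finetuned-abbr"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForTokenClassification.from_pretrained(base_model)

args = TrainingArguments(
    output_dir="bert-base-NER-finetuned-ner",  # assumed; matches the model-index name
    learning_rate=2e-5,                        # learning_rate: 2e-05
    per_device_train_batch_size=8,             # train_batch_size: 8
    per_device_eval_batch_size=4,              # eval_batch_size: 4
    gradient_accumulation_steps=4,             # 8 x 4 = total_train_batch_size: 32 (one device)
    num_train_epochs=20,                       # num_epochs: 20
    lr_scheduler_type="linear",                # lr_scheduler_type: linear
    seed=42,                                   # seed: 42
    fp16=True,                                 # mixed_precision_training: Native AMP
)
# The default AdamW optimizer already uses betas=(0.9, 0.999) and epsilon=1e-08,
# matching the optimizer line in the card.

# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset,
#                   tokenizer=tokenizer)
# trainer.train()
```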
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:a527ab00033365c6f135308a22aaac4a0ba4c6b2ff0bf9d83074f11bfdb6935d
 size 430918012
training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:6da4aff15c405ff227cf5b6c522356efc6e52c3eea8f325f99b3b07dde60d3c2
 size 4728
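Once pushed, the weights updated in model.safetensors can be loaded as a token-classification checkpoint. A minimal usage sketch, assuming a hypothetical repository id (the actual repo id is not shown in this commit):

```python
# Minimal inference sketch; the repo id below is a placeholder, not from this commit.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="your-username/bert-base-NER-finetuned-ner",  # hypothetical repo id
    aggregation_strategy="simple",
)
print(ner("The patient received IV tPA three hours after symptom onset."))
```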