Commit fd8e297 (verified), committed by Vrepol
Parent: b8d9a55

End of training

Files changed (4):
  1. README.md +5 -5
  2. model.safetensors +1 -1
  3. tokenizer.json +2 -14
  4. training_args.bin +1 -1
README.md CHANGED
@@ -15,8 +15,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.2662
-- Model Preparation Time: 0.0047
+- Loss: 1.2260
+- Model Preparation Time: 0.0044
 
 ## Model description
 
@@ -48,9 +48,9 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Model Preparation Time |
 |:-------------:|:-----:|:----:|:---------------:|:----------------------:|
-| 1.3884        | 1.0   | 157  | 1.1736          | 0.0047                 |
-| 1.4028        | 2.0   | 314  | 1.2312          | 0.0047                 |
-| 1.3876        | 3.0   | 471  | 1.2051          | 0.0047                 |
+| 1.4597        | 1.0   | 157  | 1.2989          | 0.0044                 |
+| 1.3505        | 2.0   | 314  | 1.2006          | 0.0044                 |
+| 1.3229        | 3.0   | 471  | 1.2647          | 0.0044                 |
 
 
 ### Framework versions
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2354661c8f4f506ca7555f641dcdbbff6e614d4cd9f71ee63663d9a51f6648e7
+oid sha256:085e0883ac041cc285f22c400206bee3abdf605bbc3b8221e97c6fd65f6ae185
 size 409184912
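The model.safetensors diff above only swaps the sha256 object id of a Git LFS pointer; the payload size is unchanged, so the weights were overwritten by a same-shape checkpoint. A minimal sketch of reading such a pointer, assuming the spec-v1 "key value" line format shown above (`parse_lfs_pointer` is a hypothetical helper, not part of any library):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of a Git LFS pointer (spec v1) into a dict.

    parse_lfs_pointer is a hypothetical helper for illustration only.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new pointer from this commit, verbatim.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:085e0883ac041cc285f22c400206bee3abdf605bbc3b8221e97c6fd65f6ae185
size 409184912
"""

info = parse_lfs_pointer(pointer)
print(info["oid"], info["size"])
```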
tokenizer.json CHANGED
@@ -1,19 +1,7 @@
 {
   "version": "1.0",
-  "truncation": {
-    "direction": "Right",
-    "max_length": 512,
-    "strategy": "LongestFirst",
-    "stride": 0
-  },
-  "padding": {
-    "strategy": "BatchLongest",
-    "direction": "Right",
-    "pad_to_multiple_of": null,
-    "pad_id": 0,
-    "pad_type_id": 0,
-    "pad_token": "[PAD]"
-  },
+  "truncation": null,
+  "padding": null,
   "added_tokens": [
     {
       "id": 0,
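The tokenizer.json change above replaces the fixed truncation block (max_length 512) and padding block with null, so the serialized tokenizer no longer truncates or pads by default and callers must configure both at runtime. A minimal standard-library sketch of checking this in the new config, using a fragment that mirrors the diff:

```python
import json

# Fragment mirroring the new tokenizer.json in this commit: both strategies
# are null, so truncation/padding now have to be set by the caller, e.g.
# tokenizer.enable_truncation(max_length=512) with the `tokenizers` library
# (the 512 comes from the old config removed in this diff).
new_config = json.loads("""
{
  "version": "1.0",
  "truncation": null,
  "padding": null
}
""")

# Neither strategy is baked into the file any more.
print(new_config["truncation"] is None and new_config["padding"] is None)
```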
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ab45a7fed64f90f8244a2d33445024b1ae3c1f88e42c99748b654ceb108e85a4
+oid sha256:889dd2e9ae825ca9738d8ef715b86002f7bd8cb749896750b15bb1da5d627617
 size 5368