Pentyala committed on
Commit d1cc596 · verified · 1 Parent(s): 26bd350

End of training

Files changed (1)
  1. README.md +12 -13
README.md CHANGED
@@ -14,9 +14,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # flan-t5-base
 
-This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
+This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 4.0833
+- Loss: 4.0238
 
 ## Model description
 
@@ -47,16 +47,16 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| No log        | 1.0   | 7    | 27.0397         |
-| 33.0333       | 2.0   | 14   | 18.4465         |
-| 18.4933       | 3.0   | 21   | 9.4768          |
-| 18.4933       | 4.0   | 28   | 5.0764          |
-| 7.5911        | 5.0   | 35   | 4.6019          |
-| 4.8876        | 6.0   | 42   | 4.4286         |
-| 4.8876        | 7.0   | 49   | 4.2941         |
-| 4.4966        | 8.0   | 56   | 4.1877         |
-| 4.3397        | 9.0   | 63   | 4.1130         |
-| 4.2611        | 10.0  | 70   | 4.0833         |
+| No log        | 1.0   | 7    | 26.5616        |
+| 32.0769       | 2.0   | 14   | 18.4927        |
+| 18.4675       | 3.0   | 21   | 10.0570        |
+| 18.4675       | 4.0   | 28   | 4.8303         |
+| 7.7849        | 5.0   | 35   | 4.5457         |
+| 4.9112        | 6.0   | 42   | 4.3735         |
+| 4.9112        | 7.0   | 49   | 4.2257         |
+| 4.5517        | 8.0   | 56   | 4.1220         |
+| 4.3929        | 9.0   | 63   | 4.0520         |
+| 4.2833        | 10.0  | 70   | 4.0238         |
 
 
 ### Framework versions
@@ -64,5 +64,4 @@ The following hyperparameters were used during training:
 
 - PEFT 0.14.0
 - Transformers 4.48.3
 - Pytorch 2.5.1+cu124
-- Datasets 3.3.0
 - Tokenizers 0.21.0
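Since this commit only updates the headline eval loss and the per-epoch table, one quick sanity check is that the new numbers are internally consistent. A minimal sketch in Python — the values are copied verbatim from the `+` rows of the diff above, nothing else is assumed:

```python
# Validation-loss trajectory from the updated README table ("+" side of the diff).
# Each row: (training_loss, epoch, step, validation_loss); "No log" means no
# training loss was logged yet at epoch 1.
rows = [
    ("No log", 1.0, 7, 26.5616),
    (32.0769, 2.0, 14, 18.4927),
    (18.4675, 3.0, 21, 10.0570),
    (18.4675, 4.0, 28, 4.8303),
    (7.7849, 5.0, 35, 4.5457),
    (4.9112, 6.0, 42, 4.3735),
    (4.9112, 7.0, 49, 4.2257),
    (4.5517, 8.0, 56, 4.1220),
    (4.3929, 9.0, 63, 4.0520),
    (4.2833, 10.0, 70, 4.0238),
]

val_losses = [r[3] for r in rows]

# Validation loss decreases strictly across all 10 epochs...
assert all(a > b for a, b in zip(val_losses, val_losses[1:]))

# ...and the final row matches the headline eval loss reported in the card.
final_loss = val_losses[-1]
print(final_loss)  # 4.0238
```

This confirms the updated table agrees with the new `Loss: 4.0238` line at the top of the card.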