Gadwala committed on
Commit d9308a3 · verified · 1 Parent(s): 17e274e

End of training

Files changed (3)
  1. README.md +15 -15
  2. adapter_config.json +2 -2
  3. adapter_model.safetensors +1 -1
README.md CHANGED
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.8891
+ - Loss: 4.0254
 
 ## Model description
 
@@ -39,7 +39,7 @@ The following hyperparameters were used during training:
 - train_batch_size: 4
 - eval_batch_size: 4
 - seed: 42
- - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+ - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - num_epochs: 10
 
@@ -47,22 +47,22 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
- | No log | 1.0 | 7 | 3.6454 |
- | 4.0678 | 2.0 | 14 | 2.5448 |
- | 3.1325 | 3.0 | 21 | 1.9621 |
- | 3.1325 | 4.0 | 28 | 1.6217 |
- | 2.3717 | 5.0 | 35 | 1.2839 |
- | 1.887 | 6.0 | 42 | 1.1278 |
- | 1.887 | 7.0 | 49 | 0.9841 |
- | 1.5733 | 8.0 | 56 | 1.0033 |
- | 1.4319 | 9.0 | 63 | 0.9238 |
- | 1.3325 | 10.0 | 70 | 0.8891 |
+ | No log | 1.0 | 7 | 26.9414 |
+ | 33.214 | 2.0 | 14 | 18.4409 |
+ | 18.853 | 3.0 | 21 | 9.5696 |
+ | 18.853 | 4.0 | 28 | 5.1493 |
+ | 7.8655 | 5.0 | 35 | 4.6459 |
+ | 4.9801 | 6.0 | 42 | 4.4168 |
+ | 4.9801 | 7.0 | 49 | 4.2528 |
+ | 4.5873 | 8.0 | 56 | 4.1346 |
+ | 4.3958 | 9.0 | 63 | 4.0561 |
+ | 4.2927 | 10.0 | 70 | 4.0254 |
 
 
 ### Framework versions
 
 - PEFT 0.14.0
- - Transformers 4.47.1
- - Pytorch 2.5.1+cu121
- - Datasets 3.2.0
+ - Transformers 4.48.3
+ - Pytorch 2.5.1+cu124
+ - Datasets 3.3.0
 - Tokenizers 0.21.0
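For context, here is a minimal sketch of a Transformers training setup consistent with the hyperparameters listed in the card. Only the batch sizes, seed, optimizer, scheduler type, and epoch count appear in the diff; the output directory and the per-epoch evaluation and logging strategies are assumptions (inferred from the per-epoch loss table), and the learning rate is not shown at all.

```python
# A sketch of training arguments matching the model card above, assuming
# Transformers 4.48.x. Values not present in the diff are marked as assumptions.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-lora",  # assumed; the output path is not in the diff
    per_device_train_batch_size=4,   # train_batch_size: 4
    per_device_eval_batch_size=4,    # eval_batch_size: 4
    seed=42,                         # seed: 42
    optim="adamw_torch",             # logged as OptimizerNames.ADAMW_TORCH in 4.48
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    num_train_epochs=10,             # num_epochs: 10
    eval_strategy="epoch",           # assumed from the per-epoch validation losses
    logging_strategy="epoch",        # assumed from the per-epoch training losses
)
```

The betas=(0.9,0.999) and epsilon=1e-08 reported in the card are the AdamW defaults, so they need no explicit arguments here.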
adapter_config.json CHANGED
@@ -23,8 +23,8 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-     "q",
-     "v"
+     "v",
+     "q"
   ],
   "task_type": "SEQ_2_SEQ_LM",
   "use_dora": false,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:a16b6d5ea9a2213bbac7e06d7f21183039fcf61cb26b40efa6917253dbbc0379
+ oid sha256:04c9bf7d04a66088236e9b672699c8b153759c25fdd07bf8ee8eaa83ba2613cc
 size 6655648
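The LFS pointer's size field is unchanged (6655648 bytes) while the sha256 oid changes, which is what you would expect when retraining rewrites the adapter's weight values without altering tensor shapes or metadata. A minimal sketch of loading the resulting adapter follows; the repository id is a placeholder, since the commit page does not show it.

```python
# A sketch of loading the adapter from this repository. The repo id below is
# hypothetical; substitute the actual Hub path of this adapter.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
model = PeftModel.from_pretrained(base, "Gadwala/<adapter-repo>")  # hypothetical id

inputs = tokenizer("summarize: The quick brown fox jumps over the lazy dog.",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```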