moro01525 committed
Commit 3ed3c56 · verified · 1 Parent(s): 6072eb4

T5 trained on recipe generation after 3 epochs

README.md CHANGED
@@ -1,5 +1,4 @@
 ---
-base_model: moro01525/T5_FineTuning
 tags:
 - generated_from_trainer
 model-index:
@@ -12,9 +11,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # T5_FineTuning
 
-This model is a fine-tuned version of [moro01525/T5_FineTuning](https://huggingface.co/moro01525/T5_FineTuning) on the None dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.9083
+This model was trained from scratch on an unknown dataset.
 
 ## Model description
 
@@ -33,32 +30,14 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 2e-05
+- learning_rate: 5e-05
 - train_batch_size: 4
-- eval_batch_size: 8
+- eval_batch_size: 4
 - seed: 55
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - num_epochs: 1
 
-### Training results
-
-| Training Loss | Epoch  | Step  | Validation Loss |
-|:-------------:|:------:|:-----:|:---------------:|
-| 1.0593        | 0.0780 | 1500  | 0.9724          |
-| 1.0372        | 0.1560 | 3000  | 0.9577          |
-| 0.9998        | 0.2340 | 4500  | 0.9462          |
-| 1.0283        | 0.3120 | 6000  | 0.9380          |
-| 0.9937        | 0.3900 | 7500  | 0.9302          |
-| 0.9964        | 0.4680 | 9000  | 0.9246          |
-| 0.976         | 0.5461 | 10500 | 0.9196          |
-| 0.9768        | 0.6241 | 12000 | 0.9158          |
-| 0.9591        | 0.7021 | 13500 | 0.9130          |
-| 0.9719        | 0.7801 | 15000 | 0.9111          |
-| 0.9695        | 0.8581 | 16500 | 0.9096          |
-| 0.9774        | 0.9361 | 18000 | 0.9083          |
-
-
 ### Framework versions
 
 - Transformers 4.42.4
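The hyperparameters above specify `lr_scheduler_type: linear`, i.e. the learning rate decays linearly from its base value to zero over the run. A minimal plain-Python sketch of that decay (illustrative only — the actual run uses the scheduler built by `transformers`; the step counts in the example are hypothetical):

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 5e-5) -> float:
    """Linearly decay the learning rate from base_lr to 0 over total_steps,
    as a 'linear' schedule with no warmup would."""
    remaining = max(0, total_steps - step)
    return base_lr * remaining / total_steps

# Hypothetical 10000-step run: full lr at the start, half at the midpoint.
print(linear_lr(0, 10_000))      # 5e-05
print(linear_lr(5_000, 10_000))  # 2.5e-05
print(linear_lr(10_000, 10_000)) # 0.0
```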
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "moro01525/T5_FineTuning",
+  "_name_or_path": "/content/drive/MyDrive/T5_FineTuning",
   "architectures": [
     "T5ForConditionalGeneration"
   ],
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a4c48fb44600419a4141cd6ac53b794e37964da38f22a620b0b06cb5359ca1af
+oid sha256:d81a8249b502a6989da2cf8f4c2160bbebdb1e10d87c37fdaea1bffc9ed6f25d
 size 242041896
runs/Aug04_15-39-16_d06c3c072175/events.out.tfevents.1722785959.d06c3c072175.791.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:10425049f7a21d82414f4e9f9d71109a670286471202ad4183044b12c8dd6bc8
+size 8950
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4a3e845200e72f81228a3f827f7ea444a0050a61ad25b7de74994b8513c3e7a6
+oid sha256:f9ee624d132c5d1298233e3e4c973671007096015f75c9ceb4b2f79c8d90dd5a
 size 5112
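The binary files in this commit are stored through Git LFS, so the repository tracks only small pointer files recording the blob's SHA-256 digest and byte size — exactly the three lines shown in the diffs above. A minimal sketch of how such a pointer text can be produced (a hypothetical helper for illustration, not part of git-lfs itself):

```python
import hashlib

def lfs_pointer(data: bytes) -> str:
    """Build a Git LFS pointer file for a blob, mirroring the
    'version / oid sha256:... / size' layout seen in the diffs above."""
    oid = hashlib.sha256(data).hexdigest()
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{oid}\n"
        f"size {len(data)}\n"
    )

# Example: pointer for a tiny payload
print(lfs_pointer(b"hello"))
# oid is sha256 of the content: 2cf24dba5fb0a30e26e83b2ac5b9e29e..., size 5
```

When a file's content changes (as `model.safetensors` and `training_args.bin` did here), only the `oid` line changes unless the byte size also differs.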