Rakuto committed · verified
Commit 80b3e11 · 1 Parent(s): 5d604f7

End of training
README.md CHANGED
@@ -18,18 +18,18 @@ should probably proofread and complete it, then remove this comment. -->
  
  This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.8110
- - Rewards/chosen: -0.1016
- - Rewards/rejected: -0.1421
+ - Loss: 0.1857
+ - Rewards/chosen: -0.0409
+ - Rewards/rejected: -0.2434
  - Rewards/accuracies: 1.0
- - Rewards/margins: 0.0405
- - Logps/rejected: -1.4214
- - Logps/chosen: -1.0163
- - Logits/rejected: -0.3132
- - Logits/chosen: -0.2798
- - Nll Loss: 0.7661
- - Log Odds Ratio: -0.4487
- - Log Odds Chosen: 0.5905
+ - Rewards/margins: 0.2025
+ - Logps/rejected: -2.4335
+ - Logps/chosen: -0.4085
+ - Logits/rejected: 0.8148
+ - Logits/chosen: 0.8225
+ - Nll Loss: 0.1910
+ - Log Odds Ratio: -0.0807
+ - Log Odds Chosen: 3.0282
  
  ## Model description
  
@@ -52,36 +52,41 @@ The following hyperparameters were used during training:
  - train_batch_size: 2
  - eval_batch_size: 2
  - seed: 42
- - gradient_accumulation_steps: 32
- - total_train_batch_size: 64
+ - distributed_type: multi-GPU
+ - num_devices: 2
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 16
+ - total_eval_batch_size: 4
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - num_epochs: 10
  
  ### Training results
  
- | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Nll Loss | Log Odds Ratio | Log Odds Chosen |
- |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:--------------:|:---------------:|
- | 1.8976 | 0.4 | 1 | 1.6382 | -0.1720 | -0.2027 | 1.0 | 0.0307 | -2.0269 | -1.7196 | -0.1521 | -0.1194 | 1.5851 | -0.5313 | 0.3650 |
- | 1.5573 | 0.8 | 2 | 1.5334 | -0.1585 | -0.1906 | 1.0 | 0.0320 | -1.9059 | -1.5854 | -0.1364 | -0.1050 | 1.4813 | -0.5207 | 0.3907 |
- | 1.4427 | 1.2 | 3 | 1.4338 | -0.1440 | -0.1771 | 1.0 | 0.0332 | -1.7712 | -1.4397 | -0.1360 | -0.1062 | 1.3828 | -0.5097 | 0.4184 |
- | 1.3493 | 1.6 | 4 | 1.3429 | -0.1341 | -0.1681 | 1.0 | 0.0341 | -1.6811 | -1.3406 | -0.1349 | -0.1049 | 1.2929 | -0.5009 | 0.4413 |
- | 1.2683 | 2.0 | 5 | 1.2643 | -0.1271 | -0.1620 | 1.0 | 0.0349 | -1.6205 | -1.2712 | -0.1409 | -0.1101 | 1.2150 | -0.4929 | 0.4625 |
- | 1.1736 | 2.4 | 6 | 1.1920 | -0.1219 | -0.1574 | 1.0 | 0.0355 | -1.5743 | -1.2191 | -0.1502 | -0.1193 | 1.1433 | -0.4869 | 0.4787 |
- | 1.1212 | 2.8 | 7 | 1.1234 | -0.1175 | -0.1539 | 1.0 | 0.0364 | -1.5392 | -1.1754 | -0.1601 | -0.1286 | 1.0754 | -0.4801 | 0.4978 |
- | 1.0518 | 3.2 | 8 | 1.0610 | -0.1138 | -0.1509 | 1.0 | 0.0371 | -1.5093 | -1.1382 | -0.1737 | -0.1422 | 1.0136 | -0.4741 | 0.5149 |
- | 0.9805 | 3.6 | 9 | 1.0012 | -0.1105 | -0.1484 | 1.0 | 0.0379 | -1.4837 | -1.1049 | -0.1969 | -0.1645 | 0.9544 | -0.4682 | 0.5320 |
- | 0.9299 | 4.0 | 10 | 0.9496 | -0.1079 | -0.1465 | 1.0 | 0.0386 | -1.4653 | -1.0794 | -0.2201 | -0.1875 | 0.9033 | -0.4628 | 0.5477 |
- | 0.8761 | 4.4 | 11 | 0.9070 | -0.1059 | -0.1451 | 1.0 | 0.0392 | -1.4510 | -1.0591 | -0.2431 | -0.2105 | 0.8612 | -0.4584 | 0.5608 |
- | 0.8337 | 4.8 | 12 | 0.8864 | -0.1049 | -0.1444 | 1.0 | 0.0394 | -1.4436 | -1.0492 | -0.2562 | -0.2232 | 0.8407 | -0.4566 | 0.5669 |
- | 0.7975 | 5.2 | 13 | 0.8664 | -0.1041 | -0.1439 | 1.0 | 0.0398 | -1.4386 | -1.0406 | -0.2727 | -0.2397 | 0.8210 | -0.4541 | 0.5740 |
- | 0.788 | 5.6 | 14 | 0.8492 | -0.1033 | -0.1433 | 1.0 | 0.0400 | -1.4329 | -1.0326 | -0.2837 | -0.2507 | 0.8040 | -0.4524 | 0.5794 |
- | 0.78 | 6.0 | 15 | 0.8334 | -0.1026 | -0.1429 | 1.0 | 0.0402 | -1.4287 | -1.0264 | -0.2944 | -0.2614 | 0.7883 | -0.4508 | 0.5839 |
- | 0.7395 | 6.4 | 16 | 0.8211 | -0.1021 | -0.1424 | 1.0 | 0.0403 | -1.4244 | -1.0214 | -0.3054 | -0.2722 | 0.7761 | -0.4500 | 0.5865 |
- | 0.7446 | 6.8 | 17 | 0.8164 | -0.1019 | -0.1423 | 1.0 | 0.0404 | -1.4229 | -1.0187 | -0.3054 | -0.2722 | 0.7715 | -0.4492 | 0.5888 |
- | 0.7518 | 7.2 | 18 | 0.8125 | -0.1018 | -0.1423 | 1.0 | 0.0405 | -1.4226 | -1.0175 | -0.3106 | -0.2775 | 0.7677 | -0.4487 | 0.5903 |
- | 0.7431 | 7.6 | 19 | 0.8107 | -0.1016 | -0.1422 | 1.0 | 0.0405 | -1.4217 | -1.0162 | -0.3104 | -0.2768 | 0.7658 | -0.4484 | 0.5912 |
- | 0.726 | 8.0 | 20 | 0.8110 | -0.1016 | -0.1421 | 1.0 | 0.0405 | -1.4214 | -1.0163 | -0.3132 | -0.2798 | 0.7661 | -0.4487 | 0.5905 |
+ | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Nll Loss | Log Odds Ratio | Log Odds Chosen |
+ |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:--------------:|:---------------:|
+ | 1.73 | 0.4545 | 5 | 1.2118 | -0.1132 | -0.1521 | 1.0 | 0.0389 | -1.5213 | -1.1321 | -0.1983 | -0.1272 | 1.1998 | -0.4688 | 0.5319 |
+ | 1.0085 | 0.9091 | 10 | 0.7476 | -0.0935 | -0.1373 | 1.0 | 0.0438 | -1.3732 | -0.9350 | -0.3241 | -0.2540 | 0.7468 | -0.4291 | 0.6477 |
+ | 0.6122 | 1.3636 | 15 | 0.4753 | -0.0840 | -0.1319 | 1.0 | 0.0478 | -1.3188 | -0.8404 | -0.1818 | -0.1180 | 0.4833 | -0.4009 | 0.7369 |
+ | 0.4028 | 1.8182 | 20 | 0.3733 | -0.0752 | -0.1281 | 1.0 | 0.0529 | -1.2813 | -0.7520 | 0.2061 | 0.2680 | 0.3812 | -0.3680 | 0.8492 |
+ | 0.3401 | 2.2727 | 25 | 0.3224 | -0.0662 | -0.1275 | 1.0 | 0.0613 | -1.2753 | -0.6623 | 0.3445 | 0.4051 | 0.3264 | -0.3228 | 1.0203 |
+ | 0.2887 | 2.7273 | 30 | 0.2842 | -0.0594 | -0.1348 | 1.0 | 0.0754 | -1.3481 | -0.5942 | 0.4254 | 0.4851 | 0.2877 | -0.2670 | 1.2678 |
+ | 0.2511 | 3.1818 | 35 | 0.2600 | -0.0548 | -0.1480 | 1.0 | 0.0932 | -1.4803 | -0.5480 | 0.5433 | 0.6008 | 0.2635 | -0.2161 | 1.5488 |
+ | 0.2359 | 3.6364 | 40 | 0.2407 | -0.0512 | -0.1628 | 1.0 | 0.1117 | -1.6283 | -0.5118 | 0.5920 | 0.6431 | 0.2450 | -0.1765 | 1.8227 |
+ | 0.2203 | 4.0909 | 45 | 0.2269 | -0.0488 | -0.1775 | 1.0 | 0.1287 | -1.7752 | -0.4879 | 0.6563 | 0.7034 | 0.2316 | -0.1486 | 2.0606 |
+ | 0.2074 | 4.5455 | 50 | 0.2172 | -0.0470 | -0.1928 | 1.0 | 0.1457 | -1.9277 | -0.4705 | 0.6884 | 0.7261 | 0.2224 | -0.1270 | 2.2857 |
+ | 0.1906 | 5.0 | 55 | 0.2090 | -0.0452 | -0.2047 | 1.0 | 0.1595 | -2.0471 | -0.4522 | 0.6969 | 0.7304 | 0.2147 | -0.1122 | 2.4730 |
+ | 0.1874 | 5.4545 | 60 | 0.2026 | -0.0440 | -0.2144 | 1.0 | 0.1703 | -2.1438 | -0.4404 | 0.7157 | 0.7442 | 0.2074 | -0.1026 | 2.6147 |
+ | 0.1724 | 5.9091 | 65 | 0.1976 | -0.0431 | -0.2228 | 1.0 | 0.1797 | -2.2285 | -0.4311 | 0.7320 | 0.7527 | 0.2030 | -0.0952 | 2.7358 |
+ | 0.17 | 6.3636 | 70 | 0.1938 | -0.0425 | -0.2303 | 1.0 | 0.1878 | -2.3029 | -0.4247 | 0.7594 | 0.7743 | 0.1996 | -0.0898 | 2.8362 |
+ | 0.1751 | 6.8182 | 75 | 0.1908 | -0.0419 | -0.2341 | 1.0 | 0.1923 | -2.3415 | -0.4187 | 0.7825 | 0.7965 | 0.1964 | -0.0869 | 2.8953 |
+ | 0.1546 | 7.2727 | 80 | 0.1888 | -0.0415 | -0.2375 | 1.0 | 0.1960 | -2.3754 | -0.4151 | 0.7961 | 0.8077 | 0.1944 | -0.0843 | 2.9440 |
+ | 0.1667 | 7.7273 | 85 | 0.1876 | -0.0413 | -0.2395 | 1.0 | 0.1983 | -2.3954 | -0.4125 | 0.7989 | 0.8092 | 0.1931 | -0.0828 | 2.9736 |
+ | 0.1624 | 8.1818 | 90 | 0.1867 | -0.0411 | -0.2418 | 1.0 | 0.2008 | -2.4182 | -0.4105 | 0.8080 | 0.8159 | 0.1921 | -0.0815 | 3.0056 |
+ | 0.1668 | 8.6364 | 95 | 0.1861 | -0.0410 | -0.2429 | 1.0 | 0.2020 | -2.4292 | -0.4095 | 0.8145 | 0.8227 | 0.1912 | -0.0809 | 3.0200 |
+ | 0.1478 | 9.0909 | 100 | 0.1859 | -0.0409 | -0.2433 | 1.0 | 0.2024 | -2.4331 | -0.4093 | 0.8108 | 0.8183 | 0.1912 | -0.0808 | 3.0252 |
+ | 0.157 | 9.5455 | 105 | 0.1858 | -0.0409 | -0.2433 | 1.0 | 0.2024 | -2.4331 | -0.4092 | 0.8130 | 0.8202 | 0.1912 | -0.0808 | 3.0251 |
+ | 0.1628 | 10.0 | 110 | 0.1857 | -0.0409 | -0.2434 | 1.0 | 0.2025 | -2.4335 | -0.4085 | 0.8148 | 0.8225 | 0.1910 | -0.0807 | 3.0282 |
  
  
  ### Framework versions
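
The metric names in the updated card (Nll Loss, Log Odds Ratio, Log Odds Chosen, Rewards/*) are exactly what TRL's ORPOTrainer logs, so the run was presumably an ORPO fine-tune; the new hyperparameters are also internally consistent (2 GPUs × 2 per device × 4 accumulation steps = total_train_batch_size 16). Below is a minimal reconstruction sketch under that assumption — the dataset id, output directory, and eval split are placeholders, since the card itself calls the dataset "unknown":

```python
# Hedged sketch, not the author's actual script: ORPO is inferred from the
# logged metric names; dataset/output paths are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "meta-llama/Llama-3.2-3B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# ORPOTrainer expects preference data with "prompt", "chosen", "rejected" columns.
dataset = load_dataset("my-org/my-preference-dataset")  # placeholder id

args = ORPOConfig(
    output_dir="llama-3.2-3b-orpo",   # placeholder
    per_device_train_batch_size=2,    # train_batch_size: 2
    per_device_eval_batch_size=2,     # eval_batch_size: 2
    gradient_accumulation_steps=4,    # 2 GPUs x 2 per device x 4 = 16 total
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    eval_strategy="steps",            # the results table evaluates every 5 steps
    eval_steps=5,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],     # placeholder split
    processing_class=tokenizer,       # named `tokenizer=` on older TRL releases
)
trainer.train()
```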
model-00001-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f0b34b00ea7987a8b658c318060509bae4b84fdd203fc3aa9d533b26a3057ce6
+ oid sha256:3f88cb13ed4bcc849e9891cb27c73b5572a551605b4f6ac3dd01dea53ddbc764
  size 4965799096
model-00002-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:614ebfe53ab928785dd1447f45b3f4dbfada0b14383c69f97d6f6824534cd01b
+ oid sha256:62334339dd36aed77eb5a2f9760912d0140aeee30332899f87e6e9e5b34218d6
  size 1459729952
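
Both weight shards change hashes but keep their sizes; the combined ~6.4 GB is consistent with roughly 3.2 B parameters stored at 2 bytes each, i.e. 16-bit weights. A minimal loading sketch (the repo id is a placeholder, as the commit view does not show the full repository path): from_pretrained resolves the two shards automatically through the model's weight index file, so no shard-specific code is needed.

```python
# Usage sketch with a placeholder repo id; shards are resolved via
# model.safetensors.index.json by from_pretrained.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Rakuto/your-repo-name"  # placeholder
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,    # shard sizes imply 16-bit weights
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
```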
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:68a289fefa0539d7c070c4455fa974fae25943053b0f7d6f0a6b03a9832618f3
+ oid sha256:22ad71fbd43a63e734c73ed43bd445666c817027a22ab2af9f132edb0c44e042
  size 5496
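
training_args.bin is the pickled training-arguments object that transformers' Trainer saves alongside checkpoints, so the exact hyperparameters behind this commit can be inspected directly. A hedged sketch (placeholder repo id; it is a Python pickle rather than a tensor file, so recent PyTorch needs weights_only=False, and only trusted files should be loaded this way):

```python
# Inspect the saved training arguments; repo id is a placeholder.
import torch
from huggingface_hub import hf_hub_download

path = hf_hub_download("Rakuto/your-repo-name", "training_args.bin")
args = torch.load(path, weights_only=False)  # pickled object, not weights
print(args.num_train_epochs, args.lr_scheduler_type, args.gradient_accumulation_steps)
```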