Rakuto committed
Commit e1f7016 · verified · 1 Parent(s): 80b3e11

End of training

README.md CHANGED
@@ -18,18 +18,18 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.1857
- - Rewards/chosen: -0.0409
- - Rewards/rejected: -0.2434
- - Rewards/accuracies: 1.0
- - Rewards/margins: 0.2025
- - Logps/rejected: -2.4335
- - Logps/chosen: -0.4085
- - Logits/rejected: 0.8148
- - Logits/chosen: 0.8225
- - Nll Loss: 0.1910
- - Log Odds Ratio: -0.0807
- - Log Odds Chosen: 3.0282
+ - Loss: 0.1513
+ - Rewards/chosen: -0.0341
+ - Rewards/rejected: -0.2967
+ - Rewards/accuracies: 0.9688
+ - Rewards/margins: 0.2626
+ - Logps/rejected: -2.9667
+ - Logps/chosen: -0.3407
+ - Logits/rejected: 0.9754
+ - Logits/chosen: 0.9495
+ - Nll Loss: 0.1436
+ - Log Odds Ratio: -0.0679
+ - Log Odds Chosen: 3.8381
 
  ## Model description
 
@@ -48,45 +48,29 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - learning_rate: 8e-06
- - train_batch_size: 2
- - eval_batch_size: 2
+ - learning_rate: 6e-06
+ - train_batch_size: 4
+ - eval_batch_size: 4
  - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 2
- - gradient_accumulation_steps: 4
- - total_train_batch_size: 16
- - total_eval_batch_size: 4
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 10
  - num_epochs: 10
 
  ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Nll Loss | Log Odds Ratio | Log Odds Chosen |
- |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:--------------:|:---------------:|
- | 1.73 | 0.4545 | 5 | 1.2118 | -0.1132 | -0.1521 | 1.0 | 0.0389 | -1.5213 | -1.1321 | -0.1983 | -0.1272 | 1.1998 | -0.4688 | 0.5319 |
- | 1.0085 | 0.9091 | 10 | 0.7476 | -0.0935 | -0.1373 | 1.0 | 0.0438 | -1.3732 | -0.9350 | -0.3241 | -0.2540 | 0.7468 | -0.4291 | 0.6477 |
- | 0.6122 | 1.3636 | 15 | 0.4753 | -0.0840 | -0.1319 | 1.0 | 0.0478 | -1.3188 | -0.8404 | -0.1818 | -0.1180 | 0.4833 | -0.4009 | 0.7369 |
- | 0.4028 | 1.8182 | 20 | 0.3733 | -0.0752 | -0.1281 | 1.0 | 0.0529 | -1.2813 | -0.7520 | 0.2061 | 0.2680 | 0.3812 | -0.3680 | 0.8492 |
- | 0.3401 | 2.2727 | 25 | 0.3224 | -0.0662 | -0.1275 | 1.0 | 0.0613 | -1.2753 | -0.6623 | 0.3445 | 0.4051 | 0.3264 | -0.3228 | 1.0203 |
- | 0.2887 | 2.7273 | 30 | 0.2842 | -0.0594 | -0.1348 | 1.0 | 0.0754 | -1.3481 | -0.5942 | 0.4254 | 0.4851 | 0.2877 | -0.2670 | 1.2678 |
- | 0.2511 | 3.1818 | 35 | 0.2600 | -0.0548 | -0.1480 | 1.0 | 0.0932 | -1.4803 | -0.5480 | 0.5433 | 0.6008 | 0.2635 | -0.2161 | 1.5488 |
- | 0.2359 | 3.6364 | 40 | 0.2407 | -0.0512 | -0.1628 | 1.0 | 0.1117 | -1.6283 | -0.5118 | 0.5920 | 0.6431 | 0.2450 | -0.1765 | 1.8227 |
- | 0.2203 | 4.0909 | 45 | 0.2269 | -0.0488 | -0.1775 | 1.0 | 0.1287 | -1.7752 | -0.4879 | 0.6563 | 0.7034 | 0.2316 | -0.1486 | 2.0606 |
- | 0.2074 | 4.5455 | 50 | 0.2172 | -0.0470 | -0.1928 | 1.0 | 0.1457 | -1.9277 | -0.4705 | 0.6884 | 0.7261 | 0.2224 | -0.1270 | 2.2857 |
- | 0.1906 | 5.0 | 55 | 0.2090 | -0.0452 | -0.2047 | 1.0 | 0.1595 | -2.0471 | -0.4522 | 0.6969 | 0.7304 | 0.2147 | -0.1122 | 2.4730 |
- | 0.1874 | 5.4545 | 60 | 0.2026 | -0.0440 | -0.2144 | 1.0 | 0.1703 | -2.1438 | -0.4404 | 0.7157 | 0.7442 | 0.2074 | -0.1026 | 2.6147 |
- | 0.1724 | 5.9091 | 65 | 0.1976 | -0.0431 | -0.2228 | 1.0 | 0.1797 | -2.2285 | -0.4311 | 0.7320 | 0.7527 | 0.2030 | -0.0952 | 2.7358 |
- | 0.17 | 6.3636 | 70 | 0.1938 | -0.0425 | -0.2303 | 1.0 | 0.1878 | -2.3029 | -0.4247 | 0.7594 | 0.7743 | 0.1996 | -0.0898 | 2.8362 |
- | 0.1751 | 6.8182 | 75 | 0.1908 | -0.0419 | -0.2341 | 1.0 | 0.1923 | -2.3415 | -0.4187 | 0.7825 | 0.7965 | 0.1964 | -0.0869 | 2.8953 |
- | 0.1546 | 7.2727 | 80 | 0.1888 | -0.0415 | -0.2375 | 1.0 | 0.1960 | -2.3754 | -0.4151 | 0.7961 | 0.8077 | 0.1944 | -0.0843 | 2.9440 |
- | 0.1667 | 7.7273 | 85 | 0.1876 | -0.0413 | -0.2395 | 1.0 | 0.1983 | -2.3954 | -0.4125 | 0.7989 | 0.8092 | 0.1931 | -0.0828 | 2.9736 |
- | 0.1624 | 8.1818 | 90 | 0.1867 | -0.0411 | -0.2418 | 1.0 | 0.2008 | -2.4182 | -0.4105 | 0.8080 | 0.8159 | 0.1921 | -0.0815 | 3.0056 |
- | 0.1668 | 8.6364 | 95 | 0.1861 | -0.0410 | -0.2429 | 1.0 | 0.2020 | -2.4292 | -0.4095 | 0.8145 | 0.8227 | 0.1912 | -0.0809 | 3.0200 |
- | 0.1478 | 9.0909 | 100 | 0.1859 | -0.0409 | -0.2433 | 1.0 | 0.2024 | -2.4331 | -0.4093 | 0.8108 | 0.8183 | 0.1912 | -0.0808 | 3.0252 |
- | 0.157 | 9.5455 | 105 | 0.1858 | -0.0409 | -0.2433 | 1.0 | 0.2024 | -2.4331 | -0.4092 | 0.8130 | 0.8202 | 0.1912 | -0.0808 | 3.0251 |
- | 0.1628 | 10.0 | 110 | 0.1857 | -0.0409 | -0.2434 | 1.0 | 0.2025 | -2.4335 | -0.4085 | 0.8148 | 0.8225 | 0.1910 | -0.0807 | 3.0282 |
+ | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Nll Loss | Log Odds Ratio | Log Odds Chosen |
+ |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:--------------:|:---------------:|
+ | 0.4124 | 1.0 | 44 | 0.3215 | -0.0650 | -0.1203 | 1.0 | 0.0553 | -1.2029 | -0.6500 | 0.4436 | 0.4952 | 0.2843 | -0.3461 | 0.9531 |
+ | 0.2262 | 2.0 | 88 | 0.2173 | -0.0463 | -0.1815 | 1.0 | 0.1352 | -1.8149 | -0.4629 | 0.6564 | 0.6969 | 0.2000 | -0.1529 | 2.1688 |
+ | 0.1567 | 3.0 | 132 | 0.1848 | -0.0402 | -0.2383 | 0.9688 | 0.1981 | -2.3833 | -0.4018 | 0.8186 | 0.8245 | 0.1734 | -0.0986 | 2.9871 |
+ | 0.1483 | 4.0 | 176 | 0.1688 | -0.0372 | -0.2683 | 0.9688 | 0.2311 | -2.6830 | -0.3718 | 0.9081 | 0.8980 | 0.1594 | -0.0814 | 3.4098 |
+ | 0.1313 | 5.0 | 220 | 0.1597 | -0.0355 | -0.2806 | 0.9688 | 0.2451 | -2.8056 | -0.3550 | 0.9172 | 0.9010 | 0.1511 | -0.0746 | 3.6020 |
+ | 0.1173 | 6.0 | 264 | 0.1558 | -0.0348 | -0.2900 | 0.9688 | 0.2552 | -2.9003 | -0.3481 | 0.9633 | 0.9417 | 0.1476 | -0.0712 | 3.7352 |
+ | 0.131 | 7.0 | 308 | 0.1525 | -0.0342 | -0.2935 | 0.9688 | 0.2592 | -2.9346 | -0.3424 | 0.9745 | 0.9506 | 0.1446 | -0.0690 | 3.7927 |
+ | 0.1097 | 8.0 | 352 | 0.1516 | -0.0341 | -0.2956 | 0.9688 | 0.2614 | -2.9556 | -0.3411 | 0.9658 | 0.9406 | 0.1438 | -0.0684 | 3.8234 |
+ | 0.0973 | 9.0 | 396 | 0.1512 | -0.0340 | -0.2965 | 0.9688 | 0.2625 | -2.9653 | -0.3403 | 0.9691 | 0.9441 | 0.1434 | -0.0682 | 3.8372 |
+ | 0.1161 | 10.0 | 440 | 0.1513 | -0.0341 | -0.2967 | 0.9688 | 0.2626 | -2.9667 | -0.3407 | 0.9754 | 0.9495 | 0.1436 | -0.0679 | 3.8381 |
 
 
  ### Framework versions
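
The hyperparameter changes above describe the new run configuration: a lower learning rate, larger per-device batches, a short warmup, and the removal of the previous multi-GPU settings (2 devices, gradient accumulation of 4, total train batch size 16). The reward, NLL-loss, and log-odds-ratio metrics are characteristic of ORPO-style preference training, but the card does not name the trainer, so the sketch below is only an assumption: it maps the updated values onto TRL's `ORPOConfig` (which inherits the standard `TrainingArguments` fields). The `output_dir` is a placeholder, not taken from the card.

```python
# Hedged sketch: how the updated hyperparameters could be expressed if the run
# used TRL's ORPOTrainer (an assumption based on the metric names; the card
# does not state the training method).
from trl import ORPOConfig

config = ORPOConfig(
    output_dir="llama-3.2-3b-orpo",   # placeholder name, not from the card
    learning_rate=6e-6,               # was 8e-6 in the previous revision
    per_device_train_batch_size=4,    # "train_batch_size: 4"
    per_device_eval_batch_size=4,     # "eval_batch_size: 4"
    num_train_epochs=10,
    lr_scheduler_type="linear",
    warmup_steps=10,                  # new "lr_scheduler_warmup_steps: 10"
    seed=42,
)
```

Because the new card lists no distributed-training fields, the sketch assumes a single-device run; the optimizer line (Adam, betas 0.9/0.999, epsilon 1e-08) matches the default and is not set explicitly.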
model-00001-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3f88cb13ed4bcc849e9891cb27c73b5572a551605b4f6ac3dd01dea53ddbc764
+ oid sha256:ebc8e835d41d51bab7d3def601ef54277ca8267d3a791c9a883a18a0e65fbebc
  size 4965799096
model-00002-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:62334339dd36aed77eb5a2f9760912d0140aeee30332899f87e6e9e5b34218d6
+ oid sha256:2a10819e43048c01a4631306d7c2ffea05b327cbad6903842ec47e6a98b1e2ab
  size 1459729952
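
The two safetensors entries above only swap Git LFS object IDs; the shard sizes are unchanged, so the checkpoint layout stays the same (two shards of roughly 4.97 GB and 1.46 GB). A minimal loading sketch follows; the repository ID is a placeholder because the commit view does not show the repo name.

```python
# Minimal sketch of loading the sharded checkpoint with transformers.
# The repository ID is a placeholder; substitute the actual repo name.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Rakuto/<this-model-repo>"  # placeholder, not shown in the commit view
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)  # resolves both safetensors shards
```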
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:22ad71fbd43a63e734c73ed43bd445666c817027a22ab2af9f132edb0c44e042
+ oid sha256:d81b791e629c71c9cfb2c36bde9032762e7a3e65d18776e4c33da1f1d52c7a67
  size 5496
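
All three binary files in this commit change only their LFS pointers: each keeps its size and receives a new sha256 object ID. A generic way to confirm that locally pulled copies match the new pointers is to hash them, as in the sketch below (run from a clone with the LFS files checked out; the expected hashes are the new values from this commit).

```python
# Check that locally pulled LFS files match the sha256 recorded in this commit.
import hashlib

expected = {
    "model-00001-of-00002.safetensors": "ebc8e835d41d51bab7d3def601ef54277ca8267d3a791c9a883a18a0e65fbebc",
    "model-00002-of-00002.safetensors": "2a10819e43048c01a4631306d7c2ffea05b327cbad6903842ec47e6a98b1e2ab",
    "training_args.bin": "d81b791e629c71c9cfb2c36bde9032762e7a3e65d18776e4c33da1f1d52c7a67",
}

for name, want in expected.items():
    digest = hashlib.sha256()
    with open(name, "rb") as f:
        # Hash in 1 MiB chunks to avoid loading multi-GB shards into memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    print(name, "OK" if digest.hexdigest() == want else "MISMATCH")
```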