End of training
- README.md +28 -44
- model-00001-of-00002.safetensors +1 -1
- model-00002-of-00002.safetensors +1 -1
- training_args.bin +1 -1
README.md CHANGED
```diff
@@ -18,18 +18,18 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.1857
-- Rewards/chosen: -0.0409
-- Rewards/rejected: -0.2434
-- Rewards/accuracies: 1.0
-- Rewards/margins: 0.2025
-- Logps/rejected: -2.4335
-- Logps/chosen: -0.4085
-- Logits/rejected: 0.8148
-- Logits/chosen: 0.8225
-- Nll Loss: 0.1910
-- Log Odds Ratio: -0.0807
-- Log Odds Chosen: 3.0282
+- Loss: 0.1513
+- Rewards/chosen: -0.0341
+- Rewards/rejected: -0.2967
+- Rewards/accuracies: 0.9688
+- Rewards/margins: 0.2626
+- Logps/rejected: -2.9667
+- Logps/chosen: -0.3407
+- Logits/rejected: 0.9754
+- Logits/chosen: 0.9495
+- Nll Loss: 0.1436
+- Log Odds Ratio: -0.0679
+- Log Odds Chosen: 3.8381
 
 ## Model description
 
@@ -48,45 +48,29 @@
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate:
-- train_batch_size: 2
-- eval_batch_size: 2
+- learning_rate: 6e-06
+- train_batch_size: 4
+- eval_batch_size: 4
 - seed: 42
-- distributed_type: multi-GPU
-- num_devices: 2
-- gradient_accumulation_steps: 4
-- total_train_batch_size: 16
-- total_eval_batch_size: 4
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
+- lr_scheduler_warmup_steps: 10
 - num_epochs: 10
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Nll Loss | Log Odds Ratio | Log Odds Chosen |
-|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:--------------:|:---------------:|
-|
-|
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.1906 | 5.0 | 55 | 0.2090 | -0.0452 | -0.2047 | 1.0 | 0.1595 | -2.0471 | -0.4522 | 0.6969 | 0.7304 | 0.2147 | -0.1122 | 2.4730 |
-| 0.1874 | 5.4545 | 60 | 0.2026 | -0.0440 | -0.2144 | 1.0 | 0.1703 | -2.1438 | -0.4404 | 0.7157 | 0.7442 | 0.2074 | -0.1026 | 2.6147 |
-| 0.1724 | 5.9091 | 65 | 0.1976 | -0.0431 | -0.2228 | 1.0 | 0.1797 | -2.2285 | -0.4311 | 0.7320 | 0.7527 | 0.2030 | -0.0952 | 2.7358 |
-| 0.17 | 6.3636 | 70 | 0.1938 | -0.0425 | -0.2303 | 1.0 | 0.1878 | -2.3029 | -0.4247 | 0.7594 | 0.7743 | 0.1996 | -0.0898 | 2.8362 |
-| 0.1751 | 6.8182 | 75 | 0.1908 | -0.0419 | -0.2341 | 1.0 | 0.1923 | -2.3415 | -0.4187 | 0.7825 | 0.7965 | 0.1964 | -0.0869 | 2.8953 |
-| 0.1546 | 7.2727 | 80 | 0.1888 | -0.0415 | -0.2375 | 1.0 | 0.1960 | -2.3754 | -0.4151 | 0.7961 | 0.8077 | 0.1944 | -0.0843 | 2.9440 |
-| 0.1667 | 7.7273 | 85 | 0.1876 | -0.0413 | -0.2395 | 1.0 | 0.1983 | -2.3954 | -0.4125 | 0.7989 | 0.8092 | 0.1931 | -0.0828 | 2.9736 |
-| 0.1624 | 8.1818 | 90 | 0.1867 | -0.0411 | -0.2418 | 1.0 | 0.2008 | -2.4182 | -0.4105 | 0.8080 | 0.8159 | 0.1921 | -0.0815 | 3.0056 |
-| 0.1668 | 8.6364 | 95 | 0.1861 | -0.0410 | -0.2429 | 1.0 | 0.2020 | -2.4292 | -0.4095 | 0.8145 | 0.8227 | 0.1912 | -0.0809 | 3.0200 |
-| 0.1478 | 9.0909 | 100 | 0.1859 | -0.0409 | -0.2433 | 1.0 | 0.2024 | -2.4331 | -0.4093 | 0.8108 | 0.8183 | 0.1912 | -0.0808 | 3.0252 |
-| 0.157 | 9.5455 | 105 | 0.1858 | -0.0409 | -0.2433 | 1.0 | 0.2024 | -2.4331 | -0.4092 | 0.8130 | 0.8202 | 0.1912 | -0.0808 | 3.0251 |
-| 0.1628 | 10.0 | 110 | 0.1857 | -0.0409 | -0.2434 | 1.0 | 0.2025 | -2.4335 | -0.4085 | 0.8148 | 0.8225 | 0.1910 | -0.0807 | 3.0282 |
+| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Nll Loss | Log Odds Ratio | Log Odds Chosen |
+|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:--------------:|:---------------:|
+| 0.4124 | 1.0 | 44 | 0.3215 | -0.0650 | -0.1203 | 1.0 | 0.0553 | -1.2029 | -0.6500 | 0.4436 | 0.4952 | 0.2843 | -0.3461 | 0.9531 |
+| 0.2262 | 2.0 | 88 | 0.2173 | -0.0463 | -0.1815 | 1.0 | 0.1352 | -1.8149 | -0.4629 | 0.6564 | 0.6969 | 0.2000 | -0.1529 | 2.1688 |
+| 0.1567 | 3.0 | 132 | 0.1848 | -0.0402 | -0.2383 | 0.9688 | 0.1981 | -2.3833 | -0.4018 | 0.8186 | 0.8245 | 0.1734 | -0.0986 | 2.9871 |
+| 0.1483 | 4.0 | 176 | 0.1688 | -0.0372 | -0.2683 | 0.9688 | 0.2311 | -2.6830 | -0.3718 | 0.9081 | 0.8980 | 0.1594 | -0.0814 | 3.4098 |
+| 0.1313 | 5.0 | 220 | 0.1597 | -0.0355 | -0.2806 | 0.9688 | 0.2451 | -2.8056 | -0.3550 | 0.9172 | 0.9010 | 0.1511 | -0.0746 | 3.6020 |
+| 0.1173 | 6.0 | 264 | 0.1558 | -0.0348 | -0.2900 | 0.9688 | 0.2552 | -2.9003 | -0.3481 | 0.9633 | 0.9417 | 0.1476 | -0.0712 | 3.7352 |
+| 0.131 | 7.0 | 308 | 0.1525 | -0.0342 | -0.2935 | 0.9688 | 0.2592 | -2.9346 | -0.3424 | 0.9745 | 0.9506 | 0.1446 | -0.0690 | 3.7927 |
+| 0.1097 | 8.0 | 352 | 0.1516 | -0.0341 | -0.2956 | 0.9688 | 0.2614 | -2.9556 | -0.3411 | 0.9658 | 0.9406 | 0.1438 | -0.0684 | 3.8234 |
+| 0.0973 | 9.0 | 396 | 0.1512 | -0.0340 | -0.2965 | 0.9688 | 0.2625 | -2.9653 | -0.3403 | 0.9691 | 0.9441 | 0.1434 | -0.0682 | 3.8372 |
+| 0.1161 | 10.0 | 440 | 0.1513 | -0.0341 | -0.2967 | 0.9688 | 0.2626 | -2.9667 | -0.3407 | 0.9754 | 0.9495 | 0.1436 | -0.0679 | 3.8381 |
 
 
 ### Framework versions
```
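The metric set in the diff above (Rewards/*, Nll Loss, Log Odds Ratio, Log Odds Chosen) matches what TRL's ORPOTrainer logs, so this was most likely ORPO preference tuning, though the commit itself never names the trainer. The deleted hyperparameter lines also show the earlier run was distributed (batch 2 per device x 2 GPUs x 4 accumulation steps = effective batch 16), while the new run logs a plain batch of 4; both imply roughly 176 preference pairs per epoch (44 steps x 4, or 11 steps x 16). Below is a minimal sketch of how the new hyperparameters could map onto ORPOTrainer; the trainer choice, output_dir, beta, and dataset id are assumptions, not facts from the commit.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "meta-llama/Llama-3.2-3B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

args = ORPOConfig(
    output_dir="llama-3.2-3b-orpo",  # placeholder name
    learning_rate=6e-06,             # values from the README diff above
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    warmup_steps=10,
    seed=42,
    eval_strategy="epoch",           # the results table logs one eval per epoch
    beta=0.1,                        # ORPO's lambda; a default, not recorded here
)

# ORPO expects prompt/chosen/rejected columns; the actual dataset is unknown
# ("an unknown dataset" in the README), so this repo id is purely illustrative.
dataset = load_dataset("my-org/preference-pairs")

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    processing_class=tokenizer,      # `tokenizer=` in older TRL releases
)
trainer.train()
```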
model-00001-of-00002.safetensors CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:ebc8e835d41d51bab7d3def601ef54277ca8267d3a791c9a883a18a0e65fbebc
 size 4965799096
```
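The safetensors entries in this commit are Git LFS pointer files, not the weights themselves: `version` names the pointer spec, `oid sha256:` records the digest of the real file, and `size` its length in bytes. A stdlib-only sketch for checking a downloaded shard against the digest above (the path is the shard's file name from this commit):

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash in 1 MiB chunks so multi-GB shards never sit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# oid recorded in the pointer file above
expected = "ebc8e835d41d51bab7d3def601ef54277ca8267d3a791c9a883a18a0e65fbebc"
assert file_sha256("model-00001-of-00002.safetensors") == expected
```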
model-00002-of-00002.safetensors CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:2a10819e43048c01a4631306d7c2ffea05b327cbad6903842ec47e6a98b1e2ab
 size 1459729952
```
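The two shards add up to about 6.4 GB, consistent with Llama-3.2-3B's roughly 3.2 B parameters stored at 2 bytes each (16-bit weights). When loading, transformers stitches the shards back together via model.safetensors.index.json; a minimal loading sketch with a placeholder repo id, since the commit page does not show the repository name:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "my-org/llama-3.2-3b-orpo"  # placeholder repo id
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)
```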
training_args.bin CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:d81b791e629c71c9cfb2c36bde9032762e7a3e65d18776e4c33da1f1d52c7a67
 size 5496
```
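training_args.bin (5.5 kB) is the pickled TrainingArguments object that the Trainer saves alongside checkpoints; it holds the complete run configuration, including fields the README list omits. A minimal sketch for inspecting it locally, assuming transformers is installed so the object can be unpickled:

```python
import torch

# Recent PyTorch defaults torch.load to weights_only=True, which refuses
# pickled Python objects, so it is disabled here for this trusted local file.
args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.num_train_epochs, args.lr_scheduler_type)
```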