End of training

Files changed:
- README.md +40 -35
- model-00001-of-00002.safetensors +1 -1
- model-00002-of-00002.safetensors +1 -1
- training_args.bin +1 -1
README.md CHANGED
@@ -18,18 +18,18 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- Rewards/chosen: -0.
-- Rewards/rejected: -0.
+- Loss: 0.1857
+- Rewards/chosen: -0.0409
+- Rewards/rejected: -0.2434
 - Rewards/accuracies: 1.0
-- Rewards/margins: 0.
-- Logps/rejected: -
-- Logps/chosen: -
-- Logits/rejected:
-- Logits/chosen:
-- Nll Loss: 0.
-- Log Odds Ratio: -0.
-- Log Odds Chosen:
+- Rewards/margins: 0.2025
+- Logps/rejected: -2.4335
+- Logps/chosen: -0.4085
+- Logits/rejected: 0.8148
+- Logits/chosen: 0.8225
+- Nll Loss: 0.1910
+- Log Odds Ratio: -0.0807
+- Log Odds Chosen: 3.0282
 
 ## Model description
 
@@ -52,36 +52,41 @@ The following hyperparameters were used during training:
 - train_batch_size: 2
 - eval_batch_size: 2
 - seed: 42
--
--
+- distributed_type: multi-GPU
+- num_devices: 2
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 16
+- total_eval_batch_size: 4
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - num_epochs: 10
 
 ### Training results
 
-| Training Loss | Epoch
-|
-| 1.
-| 1.
-|
-|
-|
-|
-|
-|
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
+| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Nll Loss | Log Odds Ratio | Log Odds Chosen |
+|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:--------------:|:---------------:|
+| 1.73          | 0.4545 | 5    | 1.2118          | -0.1132        | -0.1521          | 1.0                | 0.0389          | -1.5213        | -1.1321      | -0.1983         | -0.1272       | 1.1998   | -0.4688        | 0.5319          |
+| 1.0085        | 0.9091 | 10   | 0.7476          | -0.0935        | -0.1373          | 1.0                | 0.0438          | -1.3732        | -0.9350      | -0.3241         | -0.2540       | 0.7468   | -0.4291        | 0.6477          |
+| 0.6122        | 1.3636 | 15   | 0.4753          | -0.0840        | -0.1319          | 1.0                | 0.0478          | -1.3188        | -0.8404      | -0.1818         | -0.1180       | 0.4833   | -0.4009        | 0.7369          |
+| 0.4028        | 1.8182 | 20   | 0.3733          | -0.0752        | -0.1281          | 1.0                | 0.0529          | -1.2813        | -0.7520      | 0.2061          | 0.2680        | 0.3812   | -0.3680        | 0.8492          |
+| 0.3401        | 2.2727 | 25   | 0.3224          | -0.0662        | -0.1275          | 1.0                | 0.0613          | -1.2753        | -0.6623      | 0.3445          | 0.4051        | 0.3264   | -0.3228        | 1.0203          |
+| 0.2887        | 2.7273 | 30   | 0.2842          | -0.0594        | -0.1348          | 1.0                | 0.0754          | -1.3481        | -0.5942      | 0.4254          | 0.4851        | 0.2877   | -0.2670        | 1.2678          |
+| 0.2511        | 3.1818 | 35   | 0.2600          | -0.0548        | -0.1480          | 1.0                | 0.0932          | -1.4803        | -0.5480      | 0.5433          | 0.6008        | 0.2635   | -0.2161        | 1.5488          |
+| 0.2359        | 3.6364 | 40   | 0.2407          | -0.0512        | -0.1628          | 1.0                | 0.1117          | -1.6283        | -0.5118      | 0.5920          | 0.6431        | 0.2450   | -0.1765        | 1.8227          |
+| 0.2203        | 4.0909 | 45   | 0.2269          | -0.0488        | -0.1775          | 1.0                | 0.1287          | -1.7752        | -0.4879      | 0.6563          | 0.7034        | 0.2316   | -0.1486        | 2.0606          |
+| 0.2074        | 4.5455 | 50   | 0.2172          | -0.0470        | -0.1928          | 1.0                | 0.1457          | -1.9277        | -0.4705      | 0.6884          | 0.7261        | 0.2224   | -0.1270        | 2.2857          |
+| 0.1906        | 5.0    | 55   | 0.2090          | -0.0452        | -0.2047          | 1.0                | 0.1595          | -2.0471        | -0.4522      | 0.6969          | 0.7304        | 0.2147   | -0.1122        | 2.4730          |
+| 0.1874        | 5.4545 | 60   | 0.2026          | -0.0440        | -0.2144          | 1.0                | 0.1703          | -2.1438        | -0.4404      | 0.7157          | 0.7442        | 0.2074   | -0.1026        | 2.6147          |
+| 0.1724        | 5.9091 | 65   | 0.1976          | -0.0431        | -0.2228          | 1.0                | 0.1797          | -2.2285        | -0.4311      | 0.7320          | 0.7527        | 0.2030   | -0.0952        | 2.7358          |
+| 0.17          | 6.3636 | 70   | 0.1938          | -0.0425        | -0.2303          | 1.0                | 0.1878          | -2.3029        | -0.4247      | 0.7594          | 0.7743        | 0.1996   | -0.0898        | 2.8362          |
+| 0.1751        | 6.8182 | 75   | 0.1908          | -0.0419        | -0.2341          | 1.0                | 0.1923          | -2.3415        | -0.4187      | 0.7825          | 0.7965        | 0.1964   | -0.0869        | 2.8953          |
+| 0.1546        | 7.2727 | 80   | 0.1888          | -0.0415        | -0.2375          | 1.0                | 0.1960          | -2.3754        | -0.4151      | 0.7961          | 0.8077        | 0.1944   | -0.0843        | 2.9440          |
+| 0.1667        | 7.7273 | 85   | 0.1876          | -0.0413        | -0.2395          | 1.0                | 0.1983          | -2.3954        | -0.4125      | 0.7989          | 0.8092        | 0.1931   | -0.0828        | 2.9736          |
+| 0.1624        | 8.1818 | 90   | 0.1867          | -0.0411        | -0.2418          | 1.0                | 0.2008          | -2.4182        | -0.4105      | 0.8080          | 0.8159        | 0.1921   | -0.0815        | 3.0056          |
+| 0.1668        | 8.6364 | 95   | 0.1861          | -0.0410        | -0.2429          | 1.0                | 0.2020          | -2.4292        | -0.4095      | 0.8145          | 0.8227        | 0.1912   | -0.0809        | 3.0200          |
+| 0.1478        | 9.0909 | 100  | 0.1859          | -0.0409        | -0.2433          | 1.0                | 0.2024          | -2.4331        | -0.4093      | 0.8108          | 0.8183        | 0.1912   | -0.0808        | 3.0252          |
+| 0.157         | 9.5455 | 105  | 0.1858          | -0.0409        | -0.2433          | 1.0                | 0.2024          | -2.4331        | -0.4092      | 0.8130          | 0.8202        | 0.1912   | -0.0808        | 3.0251          |
+| 0.1628        | 10.0   | 110  | 0.1857          | -0.0409        | -0.2434          | 1.0                | 0.2025          | -2.4335        | -0.4085      | 0.8148          | 0.8225        | 0.1910   | -0.0807        | 3.0282          |
 
 
 ### Framework versions
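The new hyperparameters and evaluation metrics above are internally consistent; a minimal sketch checking the arithmetic (the convention that total batch size = per-device batch × devices × gradient accumulation is an assumption about how the card's "total" fields were derived, matching the usual Hugging Face Trainer behavior):

```python
# Effective batch sizes, assuming the usual per-device x devices x accumulation rule.
train_batch_size = 2               # per device
num_devices = 2
gradient_accumulation_steps = 4

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
total_eval_batch_size = 2 * num_devices   # eval_batch_size=2, no accumulation at eval time

# Reward margin is chosen reward minus rejected reward.
rewards_chosen, rewards_rejected = -0.0409, -0.2434
rewards_margin = round(rewards_chosen - rewards_rejected, 4)

print(total_train_batch_size, total_eval_batch_size, rewards_margin)  # → 16 4 0.2025
```

Both derived values match the card: total_train_batch_size 16, total_eval_batch_size 4, and Rewards/margins 0.2025.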
model-00001-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:3f88cb13ed4bcc849e9891cb27c73b5572a551605b4f6ac3dd01dea53ddbc764
 size 4965799096
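The weight files in this commit are stored as git-lfs pointer files with exactly the three fields shown above (version, oid, size). A small sketch parsing one such pointer; `parse_lfs_pointer` is a hypothetical helper written here for illustration, not part of any library:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer file into a dict of its key/value fields.

    Each line has the form "<key> <value>", e.g. "size 4965799096".
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:3f88cb13ed4bcc849e9891cb27c73b5572a551605b4f6ac3dd01dea53ddbc764
size 4965799096
"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # → 4965799096
```

The pointer is what lives in the git history; the actual 4.9 GB shard is fetched from LFS storage by its sha256 oid.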
model-00002-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:62334339dd36aed77eb5a2f9760912d0140aeee30332899f87e6e9e5b34218d6
 size 1459729952
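The two shard sizes above are consistent with a ~3B-parameter model saved in 16-bit precision; a quick sanity check (the 2-bytes-per-parameter figure assumes bf16/fp16 weights, which is an assumption, and it ignores the small safetensors header overhead):

```python
# Shard sizes in bytes, taken from the two safetensors LFS pointers above.
shard_sizes = [4965799096, 1459729952]
total_bytes = sum(shard_sizes)

# Assuming bf16/fp16 storage: 2 bytes per parameter.
approx_params_billion = round(total_bytes / 2 / 1e9, 2)

print(total_bytes, approx_params_billion)  # → 6425529048 3.21
```

Roughly 3.21 billion parameters, in line with the Llama-3.2-3B base model named in the card.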
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:22ad71fbd43a63e734c73ed43bd445666c817027a22ab2af9f132edb0c44e042
 size 5496