End of training
README.md CHANGED
@@ -44,36 +44,36 @@ More information needed
 | step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | tinystoriesppl | zhwikippl |
 | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
 | **teacher eval** | | 43.25 | 61.25 | | | | | 11.6875 | 19.125 |
-| 0 | 0 |
-| 2500 | 0.0404 |
-| 5000 | 0.0808 |
-| 7500 | 0.1212 |
-| 10000 | 0.1616 |
-| 12500 | 0.2020 |
-| 15000 | 0.2424 |
-| 17500 | 0.2828 |
-| 20000 | 0.3232 |
-| 22500 | 0.3636 |
-| 25000 | 0.4040 |
-| 27500 | 0.4444 |
-| 30000 | 0.4848 |
-| 32500 | 0.5253 |
-| 35000 | 0.5657 |
-| 37500 | 0.6061 |
-| 40000 | 0.6465 |
-| 42500 | 0.6869 |
-| 45000 | 0.7273 |
-| 47500 | 0.7677 | 50.5 |
-| 50000 | 0.8081 |
-| 52500 | 0.8485 |
-| 55000 | 0.8889 |
-| 57500 | 0.9293 |
-| 60000 | 0.9697 |
-| 61875 | 1.0 |
+| 0 | 0 | 850403524608.0 | 85212151152640.0 | 21.0960 | 24.9736 | 100.106 | 12.533 | 2952790016.0 | 25013889531904.0 |
+| 2500 | 0.0404 | 748.0 | 6560.0 | 2.6187 | 25.0229 | 99.909 | 12.509 | 464.0 | 2656.0 |
+| 5000 | 0.0808 | 324.0 | 1416.0 | 1.9058 | 24.9884 | 100.046 | 12.526 | 249.0 | 300.0 |
+| 7500 | 0.1212 | 217.0 | 752.0 | 1.6210 | 24.9597 | 100.161 | 12.54 | 181.0 | 190.0 |
+| 10000 | 0.1616 | 172.0 | 716.0 | 1.4406 | 25.0292 | 99.883 | 12.505 | 151.0 | 170.0 |
+| 12500 | 0.2020 | 124.0 | 458.0 | 1.2049 | 25.0192 | 99.923 | 12.51 | 104.0 | 152.0 |
+| 15000 | 0.2424 | 104.5 | 412.0 | 1.0694 | 24.9961 | 100.016 | 12.522 | 88.5 | 145.0 |
+| 17500 | 0.2828 | 92.0 | 346.0 | 0.9815 | 25.0156 | 99.937 | 12.512 | 82.0 | 100.0 |
+| 20000 | 0.3232 | 83.0 | 314.0 | 0.8990 | 25.0162 | 99.935 | 12.512 | 68.0 | 105.0 |
+| 22500 | 0.3636 | 70.5 | 230.0 | 0.7774 | 24.9804 | 100.079 | 12.53 | 57.5 | 72.5 |
+| 25000 | 0.4040 | 65.0 | 222.0 | 0.7207 | 25.0102 | 99.959 | 12.515 | 51.75 | 96.5 |
+| 27500 | 0.4444 | 64.5 | 202.0 | 0.6891 | 25.0139 | 99.945 | 12.513 | 49.25 | 80.0 |
+| 30000 | 0.4848 | 62.0 | 200.0 | 0.6958 | 24.9984 | 100.006 | 12.521 | 48.25 | 66.0 |
+| 32500 | 0.5253 | 65.0 | 221.0 | 0.6815 | 25.0096 | 99.962 | 12.515 | 47.5 | 402.0 |
+| 35000 | 0.5657 | 60.25 | 189.0 | 0.6250 | 25.0328 | 99.869 | 12.504 | 43.25 | 95.5 |
+| 37500 | 0.6061 | 59.0 | 165.0 | 0.6019 | 25.0027 | 99.989 | 12.519 | 43.5 | 72.5 |
+| 40000 | 0.6465 | 57.5 | 155.0 | 0.5777 | 24.9823 | 100.071 | 12.529 | 39.0 | 90.5 |
+| 42500 | 0.6869 | 59.25 | 161.0 | 0.5669 | 25.0242 | 99.903 | 12.508 | 39.5 | 57.0 |
+| 45000 | 0.7273 | 53.0 | 149.0 | 0.4764 | 25.0157 | 99.937 | 12.512 | 34.0 | 42.25 |
+| 47500 | 0.7677 | 50.5 | 130.0 | 0.4510 | 24.979 | 100.084 | 12.531 | 33.0 | 40.0 |
+| 50000 | 0.8081 | 50.5 | 129.0 | 0.4420 | 24.9761 | 100.096 | 12.532 | 31.75 | 37.0 |
+| 52500 | 0.8485 | 50.25 | 134.0 | 0.4333 | 24.9616 | 100.154 | 12.539 | 31.625 | 37.0 |
+| 55000 | 0.8889 | 49.25 | 124.0 | 0.4170 | 24.9558 | 100.177 | 12.542 | 30.375 | 35.0 |
+| 57500 | 0.9293 | 49.0 | 124.5 | 0.4120 | 24.9149 | 100.342 | 12.563 | 30.25 | 34.75 |
+| 60000 | 0.9697 | 49.0 | 123.5 | 0.4089 | 24.9811 | 100.076 | 12.529 | 30.125 | 34.25 |
+| 61875 | 1.0 | 49.0 | 123.0 | 0.4086 | 24.9186 | 100.327 | 12.561 | 30.125 | 34.25 |
 
 # Resource Usage Comparison
 
-- VRAM Use: 7.
+- VRAM Use: 7.7831 GB
 
 # Distillation (Teacher -> Student) Architecture Difference:
 
@@ -93,7 +93,7 @@ More information needed
 <br/>
 
 # Train Dataset
-Trained on 145,744,973 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
+Trained on 145,731,638 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
 
 - Num Samples: `247,500`
 - Subset: `20231101.en`
@@ -103,7 +103,7 @@ Trained on 145,744,973 tokens from the [wikimedia/wikipedia](https://huggingface
 # Training Objective
 
 ```
-DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=10.0, loss_fn=
+DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=10.0, loss_fn=raw_mse, layer_mapper=layer-2))
 ```
 
 # Hyperparameters
@@ -120,9 +120,9 @@ The following hyperparameters were used during training:
 - lr_scheduler_type: `linear`
 - lr_scheduler_warmup_ratio: `0.5`
 - num_epochs: `1.0`
-- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=10.0, loss_fn=
+- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=10.0, loss_fn=raw_mse, layer_mapper=layer-2))`
 - train_embeddings: `True`
-- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at
+- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7f6f213d8430>`
 - student_model_name_or_path: `None`
 - student_config_name_or_path: `None`
 - student_model_config: `None`
@@ -155,5 +155,5 @@ The following hyperparameters were used during training:
 # Framework Versions
 - Distily 0.2.0
 - Transformers 4.44.1
-- Pytorch 2.
+- Pytorch 2.5.0.dev20240821+cu121
 - Datasets 2.21.0
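The `DistillationObjective` updated above combines a KL-divergence loss on the student's logits (weight 1) with a raw MSE loss on its attention maps (weight 10.0), paired to teacher layers through a `layer-2` layer mapper. As a rough sketch only — this is not Distily's implementation, the `layer-2` mapping semantics are not documented on this card, and a uniform student-to-teacher layer pairing is assumed:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_attns, teacher_attns, attn_weight=10.0):
    """Hypothetical sketch of the `kl` logits loss + `raw_mse` attention loss."""
    # KL divergence between teacher and student next-token distributions.
    logits_loss = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    # Match each student attention map to a teacher map; shapes are assumed
    # compatible here (the log filename above says the projector is `identity`).
    stride = max(1, len(teacher_attns) // len(student_attns))
    attn_loss = sum(
        F.mse_loss(s, teacher_attns[min(i * stride, len(teacher_attns) - 1)])
        for i, s in enumerate(student_attns)
    ) / len(student_attns)
    return logits_loss + attn_weight * attn_loss
```

The large 10.0 weight on the attention term presumably balances its small per-element MSE magnitudes against the logits KL term.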
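Likewise, the `lr_scheduler` entry above is the repr of a PyTorch `LambdaLR`, the object returned by Hugging Face's linear warmup-then-decay schedule; with `lr_scheduler_warmup_ratio: 0.5`, the learning rate ramps up over the first half of training and decays to zero over the second. A minimal sketch, assuming the 61,875 total steps from the eval table and a placeholder base learning rate:

```python
import torch

total_steps = 61_875                   # final step in the eval table above
warmup_steps = int(0.5 * total_steps)  # lr_scheduler_warmup_ratio: 0.5

def lr_lambda(step: int) -> float:
    # Linear ramp 0 -> 1 during warmup, then linear decay 1 -> 0.
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

model = torch.nn.Linear(8, 8)          # stand-in module for illustration
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # placeholder lr
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```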
logs/attn_loss_fn=raw_mse, attn_weight=10.0, projector=identity/events.out.tfevents.1724319775.e3f806ea38c9 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:876c84b91a2986e18efa4ac943507bf521e1b7dd0c5d9b651fa646cb8d3b1896
+size 588
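The added TensorBoard event file is tracked with Git LFS, so the commit stores only a three-line pointer — spec version, SHA-256 of the blob, and its size in bytes — rather than the 588-byte binary itself. A small sketch of parsing such a pointer (a hypothetical helper, not part of this repo):

```python
def parse_lfs_pointer(text: str) -> dict:
    # Each pointer line is "<key> <value>"; split on the first space.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "sha256": fields["oid"].removeprefix("sha256:"),
        "size_bytes": int(fields["size"]),
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:876c84b91a2986e18efa4ac943507bf521e1b7dd0c5d9b651fa646cb8d3b1896
size 588"""
print(parse_lfs_pointer(pointer))  # size_bytes == 588
```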