End of training
README.md
CHANGED
@@ -44,36 +44,36 @@ More information needed
 | step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | tinystoriesppl | zhwikippl |
 | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
 | **teacher eval** | | 43.25 | 61.25 | | | | | 11.6875 | 19.125 |
-| 0 | 0 |
-| 2500 | 0.0404 |
-| 5000 | 0.0808 |
-| 7500 | 0.1212 |
-| 10000 | 0.1616 |
-| 12500 | 0.2020 |
-| 15000 | 0.2424 |
-| 17500 | 0.2828 | 92.
-| 20000 | 0.3232 |
-| 22500 | 0.3636 |
-| 25000 | 0.4040 |
-| 27500 | 0.4444 |
-| 30000 | 0.4848 |
-| 32500 | 0.5253 |
-| 35000 | 0.5657 |
-| 37500 | 0.6061 |
-| 40000 | 0.6465 |
-| 42500 | 0.6869 |
-| 45000 | 0.7273 |
-| 47500 | 0.7677 |
-| 50000 | 0.8081 | 50.
-| 52500 | 0.8485 |
-| 55000 | 0.8889 |
-| 57500 | 0.9293 |
-| 60000 | 0.9697 |
-| 61875 | 1.0 |
+| 0 | 0 | 2473901162496.0 | 170424302305280.0 | 30.7740 | 26.7214 | 93.558 | 11.713 | 4060086272.0 | 71468255805440.0 |
+| 2500 | 0.0404 | 1200.0 | 12160.0 | 9.8335 | 26.6807 | 93.701 | 11.731 | 776.0 | 13952.0 |
+| 5000 | 0.0808 | 412.0 | 2240.0 | 8.3973 | 26.664 | 93.759 | 11.739 | 290.0 | 434.0 |
+| 7500 | 0.1212 | 245.0 | 908.0 | 7.6617 | 26.6719 | 93.732 | 11.735 | 218.0 | 198.0 |
+| 10000 | 0.1616 | 183.0 | 672.0 | 7.2411 | 26.7416 | 93.487 | 11.705 | 165.0 | 202.0 |
+| 12500 | 0.2020 | 132.0 | 504.0 | 6.6894 | 26.7138 | 93.585 | 11.717 | 115.0 | 154.0 |
+| 15000 | 0.2424 | 113.0 | 436.0 | 6.4125 | 26.7091 | 93.601 | 11.719 | 89.5 | 139.0 |
+| 17500 | 0.2828 | 92.5 | 340.0 | 6.1942 | 26.6603 | 93.772 | 11.74 | 71.0 | 131.0 |
+| 20000 | 0.3232 | 75.0 | 276.0 | 5.9314 | 26.7037 | 93.62 | 11.721 | 65.0 | 109.0 |
+| 22500 | 0.3636 | 65.5 | 215.0 | 5.6582 | 26.6905 | 93.666 | 11.727 | 50.75 | 82.0 |
+| 25000 | 0.4040 | 63.0 | 196.0 | 5.5585 | 26.6163 | 93.927 | 11.76 | 43.75 | 85.5 |
+| 27500 | 0.4444 | 58.5 | 203.0 | 5.4882 | 26.6291 | 93.882 | 11.754 | 41.0 | 68.5 |
+| 30000 | 0.4848 | 58.75 | 196.0 | 5.4760 | 26.6873 | 93.678 | 11.728 | 41.0 | 62.75 |
+| 32500 | 0.5253 | 58.5 | 174.0 | 5.4524 | 26.667 | 93.749 | 11.737 | 39.5 | 62.25 |
+| 35000 | 0.5657 | 57.0 | 168.0 | 5.3756 | 26.6906 | 93.666 | 11.727 | 36.5 | 49.5 |
+| 37500 | 0.6061 | 57.25 | 158.0 | 5.3384 | 26.6664 | 93.751 | 11.738 | 37.5 | 49.5 |
+| 40000 | 0.6465 | 55.25 | 155.0 | 5.3176 | 26.673 | 93.728 | 11.735 | 34.75 | 58.75 |
+| 42500 | 0.6869 | 54.75 | 150.0 | 5.2917 | 26.7045 | 93.617 | 11.721 | 35.25 | 53.0 |
+| 45000 | 0.7273 | 50.5 | 132.0 | 5.1569 | 26.7113 | 93.593 | 11.718 | 30.125 | 46.5 |
+| 47500 | 0.7677 | 50.5 | 124.5 | 5.1275 | 26.704 | 93.619 | 11.721 | 29.5 | 36.5 |
+| 50000 | 0.8081 | 50.0 | 122.5 | 5.1100 | 26.6758 | 93.718 | 11.733 | 29.125 | 38.0 |
+| 52500 | 0.8485 | 48.75 | 119.5 | 5.0953 | 26.6639 | 93.76 | 11.739 | 28.875 | 35.5 |
+| 55000 | 0.8889 | 48.5 | 117.5 | 5.0750 | 26.718 | 93.57 | 11.715 | 28.25 | 35.75 |
+| 57500 | 0.9293 | 48.0 | 117.0 | 5.0700 | 26.7088 | 93.602 | 11.719 | 28.0 | 33.25 |
+| 60000 | 0.9697 | 48.25 | 116.5 | 5.0649 | 26.711 | 93.594 | 11.718 | 27.875 | 33.25 |
+| 61875 | 1.0 | 48.25 | 116.5 | 5.0644 | 26.6823 | 93.695 | 11.731 | 27.875 | 33.25 |
 
 # Resource Usage Comparison
 
-- VRAM Use: 7.
+- VRAM Use: 7.7851 GB
 
 # Distillation (Teacher -> Student) Architecture Difference:
 
@@ -93,7 +93,7 @@ More information needed
 <br/>
 
 # Train Dataset
-Trained on 145,
+Trained on 145,744,973 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
 
 - Num Samples: `247,500`
 - Subset: `20231101.en`
@@ -103,7 +103,7 @@ Trained on 145,705,155 tokens from the [wikimedia/wikipedia](https://huggingface
 # Training Objective
 
 ```
-DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=10.0, loss_fn=
+DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=10.0, loss_fn=cos, layer_mapper=layer-2))
 ```
 
 # Hyperparameters
@@ -120,9 +120,9 @@ The following hyperparameters were used during training:
 - lr_scheduler_type: `linear`
 - lr_scheduler_warmup_ratio: `0.5`
 - num_epochs: `1.0`
-- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=10.0, loss_fn=
+- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=10.0, loss_fn=cos, layer_mapper=layer-2))`
 - train_embeddings: `True`
-- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at
+- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7fc5f0404a60>`
 - student_model_name_or_path: `None`
 - student_config_name_or_path: `None`
 - student_model_config: `None`
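Note on the Train Dataset section above: it names the `wikimedia/wikipedia` dataset, subset `20231101.en`. The snippet below is a minimal sketch of loading that subset with the Hugging Face `datasets` library; the streaming flag and the fields printed are illustrative assumptions, not taken from this model's training script.

```python
# Sketch (assumption: not the card's actual training script) of loading the
# dataset/subset named in the Train Dataset section.
from datasets import load_dataset

wiki = load_dataset("wikimedia/wikipedia", "20231101.en", split="train", streaming=True)
for i, article in enumerate(wiki):
    # Rows in this dataset carry "id", "url", "title", and "text" fields.
    print(article["title"], len(article["text"]))
    if i >= 2:
        break
```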
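Note on the Training Objective above: it combines a KL loss on the logits (weight 1) with a cosine loss on attention maps (weight 10.0, `layer_mapper=layer-2`). The sketch below shows one way such a combined loss could be computed in PyTorch; the function name, tensor shapes, and the assumption that the caller has already paired student and teacher layers are illustrative, not the distily implementation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, student_attns, teacher_attns,
                      logits_weight=1.0, attn_weight=10.0):
    """Hypothetical combined objective: KL on logits + cosine loss on attention maps.

    Assumed shapes: logits (batch, seq, vocab); attns are lists of
    (batch, heads, seq, seq) tensors already paired student-to-teacher
    (e.g. by a layer-2-style layer mapping).
    """
    # Logits component (weight 1): KL(student || teacher) over the vocabulary.
    kl = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    # Attention component (weight 10.0): 1 - cosine similarity of flattened
    # attention maps, averaged over the paired layers.
    cos_terms = []
    for s_attn, t_attn in zip(student_attns, teacher_attns):
        cos = F.cosine_similarity(s_attn.flatten(1), t_attn.flatten(1), dim=-1)
        cos_terms.append((1.0 - cos).mean())
    attn_loss = torch.stack(cos_terms).mean()
    return logits_weight * kl + attn_weight * attn_loss
```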
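Note on the hyperparameters above: they list a `linear` schedule with `lr_scheduler_warmup_ratio: 0.5`, surfaced as a `torch.optim.lr_scheduler.LambdaLR` object. The sketch below shows how such a scheduler is commonly built with `transformers.get_linear_schedule_with_warmup`; the stand-in model, optimizer, and learning rate are placeholders, and this is not necessarily how the training run constructed it.

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # placeholder for the student model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # lr is a placeholder value

total_steps = 61_875                   # final step in the eval table above
warmup_steps = int(0.5 * total_steps)  # lr_scheduler_warmup_ratio = 0.5

# Returns a torch.optim.lr_scheduler.LambdaLR, matching the object shown in the
# hyperparameters: LR ramps up linearly for the first half of training, then
# decays linearly to zero.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=warmup_steps, num_training_steps=total_steps
)
```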
logs/attn_loss_fn=cos, attn_weight=10.0, projector=orthogonal/events.out.tfevents.1724295072.e3f806ea38c9
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0690b170c3ffc3d65712256639e5c276331ec6fce8bc4df8ead2f6f7c311aaa8
+size 588