End of training
README.md
CHANGED
@@ -15,14 +15,14 @@ This student model is distilled from the teacher model [roneneldan/TinyStories-3
 The [Distily](https://github.com/lapp0/distily) library was used for this distillation.
 
 It achieves the following results on the evaluation set:
-- eval_enwikippl:
-- eval_frwikippl:
-- eval_zhwikippl:
-- eval_tinystoriesppl:
-- eval_loss:
-- eval_runtime: 6.
-- eval_samples_per_second: 76.
-- eval_steps_per_second: 9.
+- eval_enwikippl: 3607.3171
+- eval_frwikippl: 29425.125
+- eval_zhwikippl: 52510.3125
+- eval_tinystoriesppl: 1167.9218
+- eval_loss: 5.1093
+- eval_runtime: 6.505
+- eval_samples_per_second: 76.864
+- eval_steps_per_second: 9.685
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
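The `eval_*ppl` numbers above appear to be perplexities on English, French, and Chinese Wikipedia text and on TinyStories. Below is a minimal sketch of how causal-LM perplexity is commonly computed with `transformers`; it is an illustration under assumptions, not Distily's evaluation code, and `"lapp0/distily_student"` is a hypothetical placeholder for this repo's Hub id.

```python
# Illustrative perplexity computation; not Distily's evaluation pipeline.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "lapp0/distily_student"  # hypothetical placeholder id
model = AutoModelForCausalLM.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

text = "Once upon a time, there was a little robot who loved to paint."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    # Passing labels makes the model return the mean token-level cross-entropy.
    loss = model(**inputs, labels=inputs["input_ids"]).loss
print(f"perplexity: {torch.exp(loss).item():.2f}")
```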
@@ -47,7 +47,7 @@ More information needed
 The following hyperparameters were used during training:
 - distillation_objective: DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl, layer_mapper=None, projector=None), hs_loss_component=LossComponent(label=hs, weight=0, loss_fn=None, layer_mapper=None, projector=None), attn_loss_component=LossComponent(label=attn, weight=0, loss_fn=None, layer_mapper=None, projector=None))
 - train_embeddings: True
-- learning_rate: 0.
+- learning_rate: 0.0004
 - train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
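The `distillation_objective` above puts all weight on a KL loss over the logits (the hidden-state and attention components have weight 0). A minimal PyTorch sketch of such a forward-KL logits loss is below; it illustrates the configured objective, not Distily's internal implementation, which may add details such as temperature scaling or padding masks.

```python
import torch
import torch.nn.functional as F

def logits_kl_loss(student_logits: torch.Tensor,
                   teacher_logits: torch.Tensor) -> torch.Tensor:
    """KL(teacher || student) over the vocabulary, averaged over token positions.

    Illustrative stand-in for the `loss_fn=kl` logits component; not Distily's code.
    """
    vocab = student_logits.size(-1)
    student_logp = F.log_softmax(student_logits, dim=-1).reshape(-1, vocab)
    teacher_p = F.softmax(teacher_logits, dim=-1).reshape(-1, vocab)
    # batchmean sums the KL over the vocabulary and averages over tokens.
    return F.kl_div(student_logp, teacher_p, reduction="batchmean")

# Example with random logits: batch 2, sequence length 4, GPT-2-sized vocab.
s = torch.randn(2, 4, 50257)
t = torch.randn(2, 4, 50257)
print(logits_kl_loss(s, t))
```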
@@ -62,20 +62,20 @@ Peak GPU Memory: 8.0568 GB
 | step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | tinystoriesppl | zhwikippl |
 | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
 | **teacher eval** | | 169.9865 | 47377.9414 | | | | | 3.9789 | 4998.1294 |
-| 0 | 0 | 21321.3555 | 56774.5312 | 6.6010 | 6.
-| 500 | 0.0808 |
-| 1000 | 0.1616 |
-| 1500 | 0.2424 |
-| 2000 | 0.3232 |
-| 2500 | 0.4040 |
-| 3000 | 0.4848 |
-| 3500 | 0.5656 |
-| 4000 | 0.6464 |
-| 4500 | 0.7272 |
-| 5000 | 0.8080 |
-| 5500 | 0.8888 |
-| 6000 | 0.9696 |
-| 6188 | 1.0 |
+| 0 | 0 | 21321.3555 | 56774.5312 | 6.6010 | 6.5635 | 76.178 | 9.598 | 11289.9248 | 60744.7383 |
+| 500 | 0.0808 | 3754.7207 | 29512.3027 | 5.1110 | 6.5245 | 76.634 | 9.656 | 1235.4543 | 53915.7461 |
+| 1000 | 0.1616 | 3629.7410 | 29470.7617 | 5.1093 | 6.5374 | 76.483 | 9.637 | 1179.3701 | 52678.6953 |
+| 1500 | 0.2424 | 3604.8032 | 29425.125 | 5.1093 | 6.5363 | 76.496 | 9.638 | 1167.5359 | 52510.3125 |
+| 2000 | 0.3232 | 3604.8032 | 29425.125 | 5.1093 | 6.5198 | 76.689 | 9.663 | 1167.3427 | 52510.3125 |
+| 2500 | 0.4040 | 3607.3171 | 29425.125 | 5.1093 | 6.5054 | 76.86 | 9.684 | 1167.9218 | 52510.3125 |
+| 3000 | 0.4848 | 3607.3171 | 29425.125 | 5.1093 | 6.5103 | 76.801 | 9.677 | 1167.9218 | 52510.3125 |
+| 3500 | 0.5656 | 3607.3171 | 29425.125 | 5.1093 | 6.4942 | 76.992 | 9.701 | 1167.9218 | 52510.3125 |
+| 4000 | 0.6464 | 3607.3171 | 29425.125 | 5.1093 | 6.516 | 76.734 | 9.669 | 1167.9218 | 52510.3125 |
+| 4500 | 0.7272 | 3607.3171 | 29425.125 | 5.1093 | 6.5209 | 76.677 | 9.661 | 1167.9218 | 52510.3125 |
+| 5000 | 0.8080 | 3607.3171 | 29425.125 | 5.1093 | 6.502 | 76.899 | 9.689 | 1167.9218 | 52510.3125 |
+| 5500 | 0.8888 | 3607.3171 | 29425.125 | 5.1093 | 6.5361 | 76.498 | 9.639 | 1167.9218 | 52510.3125 |
+| 6000 | 0.9696 | 3607.3171 | 29425.125 | 5.1093 | 6.4994 | 76.93 | 9.693 | 1167.9218 | 52510.3125 |
+| 6188 | 1.0 | 3607.3171 | 29425.125 | 5.1093 | 6.505 | 76.864 | 9.685 | 1167.9218 | 52510.3125 |
 
 ### Framework versions
 - Distily 0.2.0
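The throughput columns in the new table are mutually consistent: for the final row, runtime times samples_per_second is about 500, and runtime times steps_per_second is about 63, matching ceil(500 / 8) steps at the configured eval_batch_size of 8. A quick arithmetic check:

```python
# Sanity check of the step-6188 throughput figures from the table above.
runtime_s = 6.505
samples_per_s = 76.864
steps_per_s = 9.685

print(runtime_s * samples_per_s)  # ~500.0 -> roughly 500 evaluation samples
print(runtime_s * steps_per_s)    # ~63.0  -> ceil(500 / eval_batch_size 8) steps
```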
logs/dropout=0, learning_rate=0.0004, weight_decay=0.1/events.out.tfevents.1723870375.5f530b1cf724
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5a14592e0d6c72a3ab368153e1a868275469b0d7ac0d61e8d181e1c8b3383ab9
+size 307
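The added file is a Git LFS pointer to a TensorBoard event log, not the log itself; `git lfs pull` fetches the real file. A sketch for inspecting it with TensorBoard's standard event accumulator (the directory path is taken from the commit above):

```python
# Read the distillation run's scalars after `git lfs pull`.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

log_dir = "logs/dropout=0, learning_rate=0.0004, weight_decay=0.1"
acc = EventAccumulator(log_dir)
acc.Reload()
print(acc.Tags())  # tag groups present in the event file
for tag in acc.Tags()["scalars"]:
    for event in acc.Scalars(tag):
        print(tag, event.step, event.value)
```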