End of training
- README.md +8 -7
- logs/attn_norm=layernorm, attn_projector=mlp, attn_weight=5, learning_rate=0.0001, per_device_train_batch_size=4, warmup_ratio=0.2/events.out.tfevents.1725085808.a7e428977e35 +3 -0
- logs/attn_norm=layernorm, attn_projector=mlp, attn_weight=5, learning_rate=0.0001, per_device_train_batch_size=4, warmup_ratio=0.2/events.out.tfevents.1725095182.a7e428977e35 +3 -0
- model.safetensors +1 -1
- training_args.bin +1 -1
README.md
CHANGED
@@ -44,7 +44,7 @@ More information needed
 
 # Resource Usage Comparison
 
-- VRAM Use: 7.
+- VRAM Use: 7.4145 GB
 
 # Distillation (Teacher -> Student) Architecture Difference:
 
@@ -75,7 +75,7 @@ More information needed
 <br/>
 
 # Train Dataset
-Trained on 226,
+Trained on 226,137,833 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
 
 - Num Samples: `396,000`
 - Subset: `20231101.en`
@@ -95,15 +95,16 @@ The following hyperparameters were used during training:
 <summary>Expand</summary>
 
 - learning_rate: `0.0001`
-- train_batch_size: `
+- train_batch_size: `4`
 - eval_batch_size: `8`
 - seed: `42`
 - optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
 - lr_scheduler_type: `polynomial`
+- lr_scheduler_warmup_ratio: `0.2`
 - num_epochs: `1.0`
 - distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=layernorm, projector=mlp))`
 - train_embeddings: `True`
-- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at
+- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7fd06c06f5e0>`
 - student_model_name_or_path: `None`
 - student_config_name_or_path: `distilbert/distilgpt2`
 - student_model_config: `None`
@@ -123,7 +124,7 @@ The following hyperparameters were used during training:
 - gradient_accumulation_steps: `1`
 - weight_decay: `0.0`
 - max_grad_norm: `1.0`
-- warmup_ratio: `0
+- warmup_ratio: `0.2`
 - warmup_steps: `0`
 - gradient_checkpointing: `True`
 
@@ -134,5 +135,5 @@ The following hyperparameters were used during training:
 # Framework Versions
 - Distily 0.4.1
 - Transformers 4.44.2
-- Pytorch 2.
-- Datasets 2.
+- Pytorch 2.4.0+cu121
+- Datasets 2.18.0
logs/attn_norm=layernorm, attn_projector=mlp, attn_weight=5, learning_rate=0.0001, per_device_train_batch_size=4, warmup_ratio=0.2/events.out.tfevents.1725085808.a7e428977e35
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4e19bfe10b065c8b9564cf2bb6322de8c38622797e74d27416432c6895bc89e7
+size 47486046
logs/attn_norm=layernorm, attn_projector=mlp, attn_weight=5, learning_rate=0.0001, per_device_train_batch_size=4, warmup_ratio=0.2/events.out.tfevents.1725095182.a7e428977e35
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eba9b1f4b2d70df7f05a50fa1fc367cae97a8454e8531303624fe380ccfad428
+size 529
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:edc2a89d9b93a9c39e0bb7c3211860d6573064462f7394851f5550aad564fd7d
 size 163832792
training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:5b7b9a733e8eddf676f77eff2f4387758e33482198ef07a3b029094fffc69e3b
 size 5560