lapp0 committed (verified)
Commit 4fb48cb · 1 Parent(s): 4048731

End of training

README.md ADDED
@@ -0,0 +1,176 @@
---
base_model: gpt2
datasets:
- wikimedia/wikipedia
library_name: Distily
license: creativeml-openrail-m
tags:
- generated_from_trainer
- Distily
base_model_relation: finetune
model-index:
- name: distily_verify_update7
  results: []
---

# Summary

Distilled with the [Distily](https://github.com/lapp0/distily) library,
using teacher model [gpt2](https://huggingface.co/gpt2)
on the dataset [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia).

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment.

# Model description

More information needed

# Intended uses & limitations

More information needed
-->

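Since the student is a plain `GPT2LMHeadModel`, it loads through the standard `transformers` API. A minimal generation sketch, assuming the hypothetical repo id `lapp0/distily_verify_update7` (adjust to this model's actual Hub path):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "lapp0/distily_verify_update7"  # assumption: adjust to the actual repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Knowledge distillation is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
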

# Model Architecture
- **Architecture**: `GPT2LMHeadModel`
- **Total Parameters**: 81,912,576
- **Data Type (dtype)**: torch.bfloat16
- **Model Size**: 0.16 GB

<details>
<summary>Student Model Details</summary>

```
GPT2LMHeadModel(
  (transformer): GPT2Model(
    (wte): Embedding(50257, 768)
    (wpe): Embedding(1024, 768)
    (drop): Dropout(p=0.1, inplace=False)
    (h): ModuleList(
      (0-5): 6 x GPT2Block(
        (ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        (attn): GPT2FlashAttention2(
          (c_attn): Conv1D()
          (c_proj): Conv1D()
          (attn_dropout): Dropout(p=0.1, inplace=False)
          (resid_dropout): Dropout(p=0.1, inplace=False)
        )
        (ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        (mlp): GPT2MLP(
          (c_fc): Conv1D()
          (c_proj): Conv1D()
          (act): NewGELUActivation()
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
    )
    (ln_f): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
  )
  (lm_head): Linear(in_features=768, out_features=50257, bias=False)
)
```

</details>
<br/>

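The parameter count and size above can be recomputed from the loaded model. A sketch (same hypothetical repo id as above; size uses decimal GB, matching the 0.16 GB figure):

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "lapp0/distily_verify_update7",  # assumption: adjust to the actual repo id
    torch_dtype=torch.bfloat16,
)
n_params = sum(p.numel() for p in model.parameters())  # tied weights counted once
size_gb = sum(p.numel() * p.element_size() for p in model.parameters()) / 1e9
print(f"{n_params:,} parameters, {size_gb:.2f} GB")  # 81,912,576 parameters, 0.16 GB
```
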
# Resource Usage

- VRAM Use: 15.71 GB

# Distillation (Teacher -> Student) Architecture Difference

- **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808 -> 81,912,576
- **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
- **Model Size**: 0.24 GB -> 0.16 GB

<details>
<summary>Module Diff Details</summary>

```diff
--- teacher model modules
+++ student model modules
@@ -4,7 +4,7 @@
     (wpe): Embedding(1024, 768)
     (drop): Dropout(p=0.1, inplace=False)
     (h): ModuleList(
-      (0-11): 12 x GPT2Block(
+      (0-5): 6 x GPT2Block(
         (ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
         (attn): GPT2FlashAttention2(
           (c_attn): Conv1D()
```

</details>
<br/>

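The only architectural change is halving the depth from 12 blocks to 6, and the six removed GPT-2 blocks account exactly for the parameter gap. A quick arithmetic check, with the block geometry taken from the module dumps above:

```python
# Per-block parameter count for GPT-2 small geometry (d_model=768, d_ff=3072).
d, d_ff = 768, 3072
block  = d * 3 * d + 3 * d   # c_attn: Conv1D 768 -> 2304, plus bias
block += d * d + d           # attn c_proj: 768 -> 768, plus bias
block += d * d_ff + d_ff     # mlp c_fc: 768 -> 3072, plus bias
block += d_ff * d + d        # mlp c_proj: 3072 -> 768, plus bias
block += 2 * (2 * d)         # ln_1 and ln_2, each with weight and bias
assert block == 7_087_872
assert 6 * block == 124_439_808 - 81_912_576  # six dropped blocks = the whole gap
```
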
# Train Dataset
Trained on 3,217,794 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.

- Num Samples: `3,960`
- Subset: `20231101.en`
- Split: `train`

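A sketch of drawing an equivalent slice with the `datasets` library (Distily's internal sampling may differ; the 4,000-sample size, 1% test split, and seed 42 come from the hyperparameters below):

```python
from datasets import load_dataset

# 4,000 samples with a 1% held-out split leaves 3,960 for training,
# matching the sample count above.
ds = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")
sample = ds.shuffle(seed=42).select(range(4000))
splits = sample.train_test_split(test_size=0.01, seed=42)
print(len(splits["train"]), len(splits["test"]))  # 3960 40
```
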
# Training Objective

```
DistillationObjective(
    logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl),
    attn_loss_component=LossComponent(label=attn, weight=5.0, loss_fn=raw_mse,
                                      layer_mapper=layer-2,
                                      norm=layernorm_teacher_only_affine,
                                      projector=orthogonal)
)
```

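Illustratively, the objective combines a KL term on the logits with a 5.0-weighted raw-MSE term on attention maps. A minimal sketch of the composite loss (not Distily's actual implementation; the layer mapping, teacher-only layernorm, and orthogonal projector are omitted):

```python
import torch
import torch.nn.functional as F

def logits_kl(student_logits, teacher_logits):
    # KL(teacher || student) over the vocabulary, averaged over the batch.
    return F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )

def attn_raw_mse(student_attn, teacher_attn):
    # Plain elementwise MSE between (already aligned) attention maps.
    return F.mse_loss(student_attn, teacher_attn)

# Dummy shapes: (batch, seq, vocab) and (batch, heads, seq, seq).
s_logits, t_logits = torch.randn(2, 8, 50257), torch.randn(2, 8, 50257)
s_attn, t_attn = torch.rand(2, 12, 8, 8), torch.rand(2, 12, 8, 8)

loss = 1.0 * logits_kl(s_logits, t_logits) + 5.0 * attn_raw_mse(s_attn, t_attn)
```
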
# Hyperparameters
The following hyperparameters were used during training:

<details>
<summary>Expand</summary>

- learning_rate: `0.0002`
- train_batch_size: `16`
- eval_batch_size: `8`
- seed: `42`
- optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
- lr_scheduler_type: `polynomial`
- num_epochs: `1.0`
- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5.0, loss_fn=raw_mse, layer_mapper=layer-2, norm=layernorm_teacher_only_affine, projector=orthogonal))`
- lr_scheduler: `torch.optim.lr_scheduler.LambdaLR`
- student_model_name_or_path: `None`
- student_config_name_or_path: `distilbert/distilgpt2`
- student_model_config: `None`
- reinitialize_weights: `None`
- copy_teacher_modules: `[('lm_head', False)]`
- student_model_as_bitnet: `False`
- teacher_model_name_or_path: `gpt2`
- teacher_load_in_8bit: `False`
- teacher_load_in_4bit: `False`
- dataset_uri: `wikimedia/wikipedia`
- dataset_subset: `20231101.en`
- dataset_split: `train`
- dataset_column_name: `text`
- dataset_sample_size: `4000`
- dataset_test_size: `0.01`
- gradient_accumulation_steps: `1`
- weight_decay: `0.0`
- max_grad_norm: `1.0`
- warmup_ratio: `0.0`
- warmup_steps: `0`
- gradient_checkpointing: `True`

</details>
<br/>

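The optimizer/scheduler pairing corresponds to standard `transformers` components; a sketch under that assumption (step count derived from 3,960 samples, batch size 16, one epoch):

```python
import torch
from transformers import AutoModelForCausalLM, get_scheduler

model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2")  # stand-in student
optimizer = torch.optim.Adam(
    model.parameters(), lr=2e-4, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.0
)
num_training_steps = -(-3960 // 16)  # 248 optimizer steps for one epoch
scheduler = get_scheduler(
    "polynomial", optimizer=optimizer,
    num_warmup_steps=0, num_training_steps=num_training_steps,
)
```
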
# Framework Versions
- Distily 0.5.0
- Transformers 4.44.2
- Pytorch 2.3.0
- Datasets 2.21.0
benchmarks.shelve.bak ADDED
File without changes
benchmarks.shelve.dat ADDED
File without changes
benchmarks.shelve.dir ADDED
File without changes
generation_config.json ADDED
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 50256,
  "eos_token_id": 50256,
  "transformers_version": "4.44.2"
}
logs/events.out.tfevents.1725464970.261a4d6fb516 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a35b549d4edd7adbf23f9d06263d8520cf5c4a5660104f53f4515aff293ca396
size 249