lapp0 committed
Commit 9de7300 · verified · 1 Parent(s): 62aa849

Training in progress, step 5000

README.md CHANGED
@@ -78,13 +78,13 @@ GPT2LMHeadModel(
 
 # Resource Usage
 
-- Max Train VRAM Use: 15.7128 GB
-- Available VRAM: 23.6429 GB
+- Max Train VRAM Use: 15.7121 GB
+- Available VRAM: 23.6497 GB
 - GPUs:
   - 1x NVIDIA GeForce RTX 4090
-- CPUs: 128
-- CPU Memory: 503.5412 GB
-- CPU Memory Bandwidth: 3200 GB/s
+- CPUs: 32
+- CPU Memory: 125.6976 GB
+- CPU Memory Bandwidth: 800 GB/s
 
 # Distillation (Teacher -> Student) Architecture Difference:
 
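A figure like "Max Train VRAM Use" is typically the CUDA allocator's peak allocation; a minimal sketch of capturing such a number in PyTorch (whether Distily measures it exactly this way is an assumption):

```python
import torch

# Reset the allocator's high-water mark, run training, then read the peak.
# Hypothetical measurement; Distily's actual bookkeeping may differ.
torch.cuda.reset_peak_memory_stats()
# ... training steps ...
peak_gb = torch.cuda.max_memory_allocated() / 1024**3
print(f"Max Train VRAM Use: {peak_gb:.4f} GB")
```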
@@ -115,7 +115,7 @@ GPT2LMHeadModel(
 <br/>
 
 # Train Dataset
-Trained on 521,413,804 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
+Trained on 521,283,083 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
 
 - Num Samples: `990,000`
 - Subset: `20231101.en`
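These dataset fields map directly onto the 🤗 `datasets` API. A sketch of the corresponding load, where the shuffle seed and use of `select` are assumptions (the ~521M-token counts come from Distily's own tokenization pass, not from this snippet):

```python
from datasets import load_dataset

# Load the English 2023-11-01 Wikipedia dump named in the README.
wiki = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")

# Take the 990,000 training samples the README reports; the seed and
# sampling strategy are assumptions, not recorded in this card.
train = wiki.shuffle(seed=0).select(range(990_000))
```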
@@ -125,7 +125,7 @@ Trained on 521,413,804 tokens from the [wikimedia/wikipedia](https://huggingface
 # Training Objective
 
 ```
-DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=layernorm_teacher_only_affine, projector=mlp_64_l4))
+DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=layernorm_teacher_only_affine, projector=mlp_256_l2))
 ```
 
 # Hyperparameters
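In words: the student is trained on a KL-divergence loss against the teacher's logits (weight 1) plus a raw MSE loss on attention maps (weight 5), with student attentions passed through a learned MLP projector. A minimal PyTorch sketch of that combination, assuming HF-style outputs with `output_attentions=True`; the `layer_mapper`, `norm`, and projector semantics are Distily-specific, so the layer pairing and `attn_projector` below are assumptions, not the library's implementation:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, attn_projector, attn_weight=5.0):
    """Hypothetical sketch of the objective string above, not Distily's code."""
    # logits component (weight 1, loss_fn=kl): KL divergence between the
    # teacher's and student's next-token distributions at every position.
    s_logp = F.log_softmax(student_out.logits, dim=-1)
    t_logp = F.log_softmax(teacher_out.logits, dim=-1)
    logits_loss = F.kl_div(s_logp, t_logp, reduction="batchmean", log_target=True)

    # attn component (weight 5, loss_fn=raw_mse): MSE between projected
    # student attention maps and teacher maps. Reading `layer_mapper=layer-2`
    # as "pair each of distilgpt2's 6 layers with every 2nd of GPT-2's 12
    # teacher layers" is an assumption.
    attn_loss = torch.tensor(0.0, device=s_logp.device)
    for s_attn, t_attn in zip(student_out.attentions, teacher_out.attentions[1::2]):
        attn_loss = attn_loss + F.mse_loss(attn_projector(s_attn), t_attn)

    return logits_loss + attn_weight * attn_loss
```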
@@ -141,8 +141,8 @@ The following hyperparameters were used during training:
 - optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
 - lr_scheduler_type: `polynomial`
 - num_epochs: `1.0`
-- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=layernorm_teacher_only_affine, projector=mlp_64_l4))`
-- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7f5b3ddf36d0>`
+- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=layernorm_teacher_only_affine, projector=mlp_256_l2))`
+- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7fb428e54250>`
 - student_model_name_or_path: `None`
 - student_config_name_or_path: `distilbert/distilgpt2`
 - student_model_config: `None`
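The `lr_scheduler` entry is just the `repr` of the object 🤗 Transformers builds for `lr_scheduler_type: polynomial` (a `LambdaLR` under the hood, hence the bare memory address that differs between the two runs). A sketch of reproducing the pair; the learning rate and zero warmup are taken from the log directory name further down, while the step count is a placeholder:

```python
import torch
from transformers import get_scheduler

model = torch.nn.Linear(8, 8)  # stand-in for the student model
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4,
                             betas=(0.9, 0.999), eps=1e-8)

# "polynomial" resolves to a polynomial-decay schedule implemented as a
# LambdaLR, which is why the README records only its repr.
# num_training_steps is a placeholder, not a value from this run.
lr_scheduler = get_scheduler("polynomial", optimizer=optimizer,
                             num_warmup_steps=0, num_training_steps=100_000)
print(lr_scheduler)  # <torch.optim.lr_scheduler.LambdaLR object at 0x...>
```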
@@ -171,6 +171,6 @@ The following hyperparameters were used during training:
 
 # Framework Versions
 - Distily 0.5.0
-- Transformers 4.44.2
+- Transformers 4.44.1
 - Pytorch 2.4.0+cu121
-- Datasets 2.18.0
+- Datasets 2.21.0
 
config.json CHANGED
@@ -40,7 +40,7 @@
     }
   },
   "torch_dtype": "bfloat16",
-  "transformers_version": "4.44.2",
+  "transformers_version": "4.44.1",
   "use_cache": true,
   "vocab_size": 50257
 }
generation_config.json CHANGED
@@ -2,5 +2,5 @@
   "_from_model_config": true,
   "bos_token_id": 50256,
   "eos_token_id": 50256,
-  "transformers_version": "4.44.2"
+  "transformers_version": "4.44.1"
 }
logs/attn_norm=layernorm_teacher_only_affine, attn_projector=mlp_256_l2, attn_weight=5, learning_rate=0.0002, per_device_train_batch_size=16, warmup_ratio=0/completed.flag ADDED
File without changes
logs/attn_norm=layernorm_teacher_only_affine, attn_projector=mlp_64_l2, attn_weight=5, learning_rate=0.0002, per_device_train_batch_size=16, warmup_ratio=0/events.out.tfevents.1725515566.e3f806ea38c9 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:76715d578e76fca994a0bf4f0c9d0a49b4fdad02ac19adfad3d2051755116e21
+size 160931
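This three-line file, like `model.safetensors` and `training_args.bin` below, is a Git LFS pointer: the repository stores only the spec version, a SHA-256 content hash, and the byte size, while the binary payload lives in LFS storage. A minimal sketch of parsing one (the helper name is hypothetical; the key-value format is the LFS spec's):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    # Each line is "key value", e.g. "oid sha256:<64 hex chars>".
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    assert fields["version"].startswith("https://git-lfs.github.com/spec/")
    return fields

ptr = parse_lfs_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:76715d578e76fca994a0bf4f0c9d0a49b4fdad02ac19adfad3d2051755116e21\n"
    "size 160931\n"
)
print(ptr["oid"], int(ptr["size"]))  # sha256:76715d... 160931
```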
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5000b8c43bd6398dc6f89374c9853d691929c91c6349ee55ea2ab251960af260
+oid sha256:0f3c7e020d07baa31cfbde8a54b4d8c94aa26477dd8296ae6ba801c969ec60e1
 size 163832792
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:130c7eae9b39d66432f000f730f48a0b6c500276aac3fe62cc57a86dadeb472e
+oid sha256:92d38304062a51837368dbe664b48ed459265fff77cb911d17641c4cbbd3a9b1
 size 5560