shivanandmn committed (verified)
Commit f5338ca · 1 Parent(s): cd19cbd

Model save

Files changed (3):
  1. README.md +79 -0
  2. generation_config.json +7 -0
  3. modeling_parallel_gpt2.py +496 -0
README.md ADDED
@@ -0,0 +1,79 @@
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
- bleu
model-index:
- name: parallel-mean-bottleneck-gpt2-medium-wikitext
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# parallel-mean-bottleneck-gpt2-medium-wikitext

This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set (a quick consistency check follows the list):
- Loss: 3.1859
- Accuracy: 0.4194
- Perplexity: 24.1889
- Bleu: 0.1461

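The reported perplexity is just the exponential of the evaluation loss, so the two numbers can be cross-checked with plain Python (this snippet is illustrative, not part of the training code):

```python
import math

eval_loss = 3.1859
print(math.exp(eval_loss))  # ~24.19, matching the reported perplexity of 24.1889
```
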
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5

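For readers who want to approximate this setup, the list above maps onto `transformers.TrainingArguments` roughly as follows. This is a sketch, not the exact configuration that was used; `output_dir` and any field not listed above are assumptions left at their defaults.

```python
from transformers import TrainingArguments

# Sketch only: reconstructs the listed hyperparameters; unlisted fields stay at defaults.
training_args = TrainingArguments(
    output_dir="parallel-mean-bottleneck-gpt2-medium-wikitext",  # assumed name
    learning_rate=1e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=5,
)
```
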
### Training results

| Training Loss | Epoch  | Step | Accuracy | Bleu   | Validation Loss | Perplexity |
|:-------------:|:------:|:----:|:--------:|:------:|:---------------:|:----------:|
| 6.0432        | 0.2806 | 500  | 0.1909   | 0.0378 | 5.9180          | 371.6605   |
| 5.0476        | 0.5612 | 1000 | 0.2633   | 0.0612 | 4.8985          | 134.0910   |
| 4.3528        | 0.8418 | 1500 | 0.3182   | 0.0834 | 4.2398          | 69.3933    |
| 3.9497        | 1.1223 | 2000 | 0.3520   | 0.1054 | 3.8879          | 48.8078    |
| 3.7614        | 1.4029 | 2500 | 0.3674   | 0.1207 | 3.7128          | 40.9670    |
| 3.6543        | 1.6835 | 3000 | 0.3780   | 0.1310 | 3.5902          | 36.2404    |
| 3.5527        | 1.9641 | 3500 | 0.3864   | 0.1337 | 3.5048          | 33.2757    |
| 3.4348        | 2.2447 | 4000 | 0.3923   | 0.1361 | 3.4401          | 31.1898    |
| 3.3739        | 2.5253 | 4500 | 0.3974   | 0.1419 | 3.3868          | 29.5718    |
| 3.3441        | 2.8058 | 5000 | 0.4020   | 0.1394 | 3.3419          | 28.2718    |
| 3.2252        | 3.0864 | 5500 | 0.4057   | 0.1432 | 3.3067          | 27.2940    |
| 3.2188        | 3.3670 | 6000 | 0.4088   | 0.1421 | 3.2775          | 26.5107    |
| 3.1971        | 3.6476 | 6500 | 0.4115   | 0.1426 | 3.2502          | 25.7958    |
| 3.1722        | 3.9282 | 7000 | 0.4143   | 0.1446 | 3.2266          | 25.1936    |
| 3.1052        | 4.2088 | 7500 | 0.4163   | 0.1433 | 3.2103          | 24.7864    |
| 3.0672        | 4.4893 | 8000 | 0.4180   | 0.1438 | 3.1967          | 24.4514    |
| 3.0774        | 4.7699 | 8500 | 0.4194   | 0.1461 | 3.1859          | 24.1889    |


### Framework versions

- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
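
## How to use (sketch)

The card has no usage section yet; the following is a minimal loading sketch, assuming the checkpoint lives at `shivanandmn/parallel-mean-bottleneck-gpt2-medium-wikitext`, that the custom code in `modeling_parallel_gpt2.py` ships with the repo (hence `trust_remote_code=True`), and that the standard GPT-2 tokenizer applies. Depending on how the repo's `auto_map` is configured, `AutoModel` or the `ParallelGPT2LMHeadModel` class may be needed instead of `AutoModelForCausalLM`.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "shivanandmn/parallel-mean-bottleneck-gpt2-medium-wikitext"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # assumption: standard GPT-2 BPE tokenizer
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

inputs = tokenizer("The history of natural language processing", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```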
generation_config.json ADDED
@@ -0,0 +1,7 @@
{
  "_from_model_config": true,
  "bos_token_id": 50256,
  "eos_token_id": 50256,
  "transformers_version": "4.49.0",
  "use_cache": false
}
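These defaults map directly onto a `transformers.GenerationConfig`; a sketch of constructing the equivalent object in code, with the field values taken from the JSON above:

```python
from transformers import GenerationConfig

gen_config = GenerationConfig(
    bos_token_id=50256,  # GPT-2's <|endoftext|> token serves as both BOS and EOS
    eos_token_id=50256,
    use_cache=False,
)
```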
modeling_parallel_gpt2.py ADDED
@@ -0,0 +1,496 @@
"""PyTorch OpenAI GPT-2 model modified to support parallel-gpt2, code copied from Huggingface."""

import warnings
from typing import Optional, Tuple, Union

import torch
import torch.utils.checkpoint
from torch import nn

from transformers.modeling_outputs import (
    BaseModelOutputWithPastAndCrossAttentions,
    CausalLMOutputWithCrossAttentions,
)
from transformers.generation import GenerationMixin
from transformers.utils import logging
from transformers.utils.model_parallel_utils import assert_device_map, get_device_map
from src.models.modeling_gpt2 import GPT2PreTrainedModel, GPT2Block
from transformers.models.gpt2.configuration_gpt2 import GPT2Config
from transformers.modeling_attn_mask_utils import (
    _prepare_4d_attention_mask_for_sdpa,
    _prepare_4d_causal_attention_mask_for_sdpa,
)

# Logger is needed by the gradient-checkpointing warning in ParallelGPT2Model.forward.
logger = logging.get_logger(__name__)


class ParallelGPT2Config(GPT2Config):
    model_type = "parallel-gpt2"
    architectures = ["ParallelGPT2LMHeadModel"]


class ParallelGPT2PretrainedModel(GPT2PreTrainedModel):
    config_class = ParallelGPT2Config

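# Note (added for clarity): ParallelGPT2Config only overrides `model_type` and `architectures`;
# every architectural field (n_layer, n_embd, n_head, ...) is inherited from GPT2Config. The extra
# `bottleneck_method` attribute ("mean" by default, or "add"/"concat") is read in
# ParallelGPT2Model.__init__ below, e.g.:
#
#     cfg = ParallelGPT2Config.from_pretrained("gpt2-medium")
#     cfg.bottleneck_method = "concat"  # merge parallel block outputs with a learned projection
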
class ParallelGPT2Model(ParallelGPT2PretrainedModel):
    _supports_param_buffer_assignment = False

    def __init__(self, config):
        super().__init__(config)

        self.embed_dim = config.hidden_size

        self.wte = nn.Embedding(config.vocab_size, self.embed_dim)
        self.wpe = nn.Embedding(config.max_position_embeddings, self.embed_dim)

        self.drop = nn.Dropout(config.embd_pdrop)
        # Blocks are consumed in left/right pairs in forward(), so the layer count must be even.
        if config.num_hidden_layers % 2 != 0:
            raise ValueError("Number of hidden layers must be even")
        self.h = nn.ModuleList([GPT2Block(config, layer_idx=i) for i in range(config.num_hidden_layers)])
        self.ln_f = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_epsilon)
        # How the two parallel block outputs are merged: "mean" (default), "add", or "concat".
        self.config.bottleneck_method = getattr(config, "bottleneck_method", "mean")
        if self.config.bottleneck_method == "concat":
            self.bottleneck = nn.Linear(2 * self.embed_dim, self.embed_dim)

        # Model parallel
        self.model_parallel = False
        self.device_map = None
        self.gradient_checkpointing = False
        self._attn_implementation = config._attn_implementation

        # Initialize weights and apply final processing
        self.post_init()

    def parallelize(self, device_map=None):
        # Check validity of device_map
        warnings.warn(
            "`GPT2Model.parallelize` is deprecated and will be removed in v5 of Transformers, you should load your"
            " model with `device_map='balanced'` in the call to `from_pretrained`. You can also provide your own"
            " `device_map` but it needs to be a dictionary module_name to device, so for instance {'h.0': 0, 'h.1': 1,"
            " ...}",
            FutureWarning,
        )
        self.device_map = (
            get_device_map(len(self.h), range(torch.cuda.device_count())) if device_map is None else device_map
        )
        assert_device_map(self.device_map, len(self.h))
        self.model_parallel = True
        self.first_device = "cpu" if "cpu" in self.device_map.keys() else "cuda:" + str(min(self.device_map.keys()))
        self.last_device = "cuda:" + str(max(self.device_map.keys()))
        self.wte = self.wte.to(self.first_device)
        self.wpe = self.wpe.to(self.first_device)
        # Load onto devices
        for k, v in self.device_map.items():
            for block in v:
                cuda_device = "cuda:" + str(k)
                self.h[block] = self.h[block].to(cuda_device)
        # ln_f to last
        self.ln_f = self.ln_f.to(self.last_device)

    def deparallelize(self):
        self.model_parallel = False
        self.device_map = None
        self.first_device = "cpu"
        self.last_device = "cpu"
        self.wte = self.wte.to("cpu")
        self.wpe = self.wpe.to("cpu")
        for index in range(len(self.h)):
            self.h[index] = self.h[index].to("cpu")
        self.ln_f = self.ln_f.to("cpu")
        torch.cuda.empty_cache()

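    # Illustrative note (not from the original code): for gpt2-medium's 24 blocks split across
    # two GPUs, a valid `device_map` for the deprecated `parallelize()` above would be
    #     {0: list(range(0, 12)), 1: list(range(12, 24))}
    # i.e. a dict mapping each CUDA device index to the block indices it should host.
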
    def get_input_embeddings(self):
        return self.wte

    def set_input_embeddings(self, new_embeddings):
        self.wte = new_embeddings

    def _prune_heads(self, heads_to_prune):
        """
        Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer}
        """
        for layer, heads in heads_to_prune.items():
            self.h[layer].attn.prune_heads(heads)

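    # Overview of the forward pass below (comment added for clarity): unlike stock GPT-2, blocks are
    # consumed two at a time. Both blocks of a pair receive the *same* hidden states, run independently
    # ("left" and "right" branches), and their outputs are merged back into one hidden state via the
    # configured bottleneck ("mean", "add", or a learned linear projection over the concatenation).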
    def forward(
        self,
        input_ids: Optional[torch.LongTensor] = None,
        past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
        attention_mask: Optional[torch.FloatTensor] = None,
        token_type_ids: Optional[torch.LongTensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        head_mask: Optional[torch.FloatTensor] = None,
        inputs_embeds: Optional[torch.FloatTensor] = None,
        encoder_hidden_states: Optional[torch.Tensor] = None,
        encoder_attention_mask: Optional[torch.FloatTensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, BaseModelOutputWithPastAndCrossAttentions]:
        output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
        output_hidden_states = (
            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
        )
        use_cache = use_cache if use_cache is not None else self.config.use_cache
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        if input_ids is not None and inputs_embeds is not None:
            raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
        elif input_ids is not None:
            self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask)
            input_shape = input_ids.size()
            input_ids = input_ids.view(-1, input_shape[-1])
            batch_size = input_ids.shape[0]
        elif inputs_embeds is not None:
            input_shape = inputs_embeds.size()[:-1]
            batch_size = inputs_embeds.shape[0]
        else:
            raise ValueError("You have to specify either input_ids or inputs_embeds")

        device = input_ids.device if input_ids is not None else inputs_embeds.device

        if token_type_ids is not None:
            token_type_ids = token_type_ids.view(-1, input_shape[-1])

        if past_key_values is None:
            past_length = 0
            past_key_values = tuple([None] * len(self.h))
        else:
            past_length = past_key_values[0][0].size(-2)
        if position_ids is None:
            position_ids = torch.arange(past_length, input_shape[-1] + past_length, dtype=torch.long, device=device)
            position_ids = position_ids.unsqueeze(0)

        if inputs_embeds is None:
            inputs_embeds = self.wte(input_ids)
        position_embeds = self.wpe(position_ids)
        hidden_states = inputs_embeds + position_embeds.to(inputs_embeds.device)

        # Attention mask.
        _use_sdpa = self._attn_implementation == "sdpa" and output_attentions is False and head_mask is None
        attention_mask = attention_mask.view(batch_size, -1) if attention_mask is not None else None
        if self._attn_implementation == "flash_attention_2":
            attention_mask = attention_mask if (attention_mask is not None and 0 in attention_mask) else None
        elif _use_sdpa:
            attention_mask = _prepare_4d_causal_attention_mask_for_sdpa(
                attention_mask=attention_mask,
                input_shape=(batch_size, input_shape[-1]),
                inputs_embeds=inputs_embeds,
                past_key_values_length=past_length,
            )
        else:
            if attention_mask is not None:
                # We create a 3D attention mask from a 2D tensor mask.
                # Sizes are [batch_size, 1, 1, to_seq_length]
                # So we can broadcast to [batch_size, num_heads, from_seq_length, to_seq_length]
                # this attention mask is more simple than the triangular masking of causal attention
                # used in OpenAI GPT, we just need to prepare the broadcast dimension here.
                attention_mask = attention_mask[:, None, None, :]

                # Since attention_mask is 1.0 for positions we want to attend and 0.0 for
                # masked positions, this operation will create a tensor which is 0.0 for
                # positions we want to attend and the dtype's smallest value for masked positions.
                # Since we are adding it to the raw scores before the softmax, this is
                # effectively the same as removing these entirely.
                attention_mask = attention_mask.to(dtype=self.dtype)  # fp16 compatibility
                attention_mask = (1.0 - attention_mask) * torch.finfo(self.dtype).min

        # If a 2D or 3D attention mask is provided for the cross-attention
        # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
        if self.config.add_cross_attention and encoder_hidden_states is not None:
            encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size()
            encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
            if encoder_attention_mask is None:
                encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device)
            if _use_sdpa:
                encoder_attention_mask = _prepare_4d_attention_mask_for_sdpa(
                    mask=encoder_attention_mask, dtype=inputs_embeds.dtype, tgt_len=input_shape[-1]
                )
            elif not self._attn_implementation == "flash_attention_2":
                encoder_attention_mask = self.invert_attention_mask(encoder_attention_mask)
        else:
            encoder_attention_mask = None

        # Prepare head mask if needed
        # 1.0 in head_mask indicate we keep the head
        # attention_probs has shape bsz x n_heads x N x N
        # head_mask has shape n_layer x batch x n_heads x N x N
        head_mask = self.get_head_mask(head_mask, self.config.n_layer)

        if token_type_ids is not None:
            token_type_embeds = self.wte(token_type_ids)
            hidden_states = hidden_states + token_type_embeds

        hidden_states = self.drop(hidden_states)

        output_shape = (-1,) + input_shape[1:] + (hidden_states.size(-1),)

        if self.gradient_checkpointing and self.training:
            if use_cache:
                logger.warning_once(
                    "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
                )
                use_cache = False

        presents = () if use_cache else None
        all_self_attentions_left = () if output_attentions else None
        all_self_attentions_right = () if output_attentions else None
        all_cross_attentions_left = () if output_attentions and self.config.add_cross_attention else None
        all_cross_attentions_right = () if output_attentions and self.config.add_cross_attention else None
        all_hidden_states = () if output_hidden_states else None
        # Blocks are consumed in pairs: both see the same input hidden states, and their outputs are
        # merged by the configured bottleneck before moving on to the next pair.
        for i in range(0, len(self.h), 2):
            block_left, layer_past_left = self.h[i], past_key_values[i]
            block_right, layer_past_right = self.h[i + 1], past_key_values[i + 1]
            # Model parallel
            if self.model_parallel:
                torch.cuda.set_device(hidden_states.device)
                # Ensure layer pasts are on the same device as hidden_states (might not be correct)
                if layer_past_left is not None:
                    layer_past_left = tuple(past_state.to(hidden_states.device) for past_state in layer_past_left)
                if layer_past_right is not None:
                    layer_past_right = tuple(past_state.to(hidden_states.device) for past_state in layer_past_right)
                # Ensure that attention_mask is always on the same device as hidden_states
                if attention_mask is not None:
                    attention_mask = attention_mask.to(hidden_states.device)
                if isinstance(head_mask, torch.Tensor):
                    head_mask = head_mask.to(hidden_states.device)
            if output_hidden_states:
                all_hidden_states = all_hidden_states + (hidden_states,)

            if self.gradient_checkpointing and self.training:
                outputs_left = self._gradient_checkpointing_func(
                    block_left.__call__,
                    hidden_states,
                    None,
                    attention_mask,
                    head_mask[i],
                    encoder_hidden_states,
                    encoder_attention_mask,
                    use_cache,
                    output_attentions,
                )
                outputs_right = self._gradient_checkpointing_func(
                    block_right.__call__,
                    hidden_states,
                    None,
                    attention_mask,
                    head_mask[i + 1],
                    encoder_hidden_states,
                    encoder_attention_mask,
                    use_cache,
                    output_attentions,
                )
            else:
                outputs_left = block_left(
                    hidden_states,
                    layer_past=layer_past_left,
                    attention_mask=attention_mask,
                    head_mask=head_mask[i],
                    encoder_hidden_states=encoder_hidden_states,
                    encoder_attention_mask=encoder_attention_mask,
                    use_cache=use_cache,
                    output_attentions=output_attentions,
                )
                outputs_right = block_right(
                    hidden_states,
                    layer_past=layer_past_right,
                    attention_mask=attention_mask,
                    head_mask=head_mask[i + 1],
                    encoder_hidden_states=encoder_hidden_states,
                    encoder_attention_mask=encoder_attention_mask,
                    use_cache=use_cache,
                    output_attentions=output_attentions,
                )
            # Merge the two parallel branches into a single hidden state (the "bottleneck").
            if self.config.bottleneck_method == "concat":
                hidden_states = torch.cat((outputs_left[0], outputs_right[0]), dim=-1)
                hidden_states = self.bottleneck(hidden_states)
            elif self.config.bottleneck_method == "add":
                hidden_states = outputs_left[0] + outputs_right[0]  # taking add
            elif self.config.bottleneck_method == "mean":
                hidden_states = (outputs_left[0] + outputs_right[0]) / 2  # taking mean
            if use_cache is True:
                presents = presents + (outputs_left[1], outputs_right[1])

            if output_attentions:
                all_self_attentions_left = all_self_attentions_left + (outputs_left[2 if use_cache else 1],)
                all_self_attentions_right = all_self_attentions_right + (outputs_right[2 if use_cache else 1],)
                if self.config.add_cross_attention:
                    all_cross_attentions_left = all_cross_attentions_left + (outputs_left[3 if use_cache else 2],)
                    all_cross_attentions_right = all_cross_attentions_right + (outputs_right[3 if use_cache else 2],)

            # Model Parallel: If it's the last layer for that device, put things on the next device
            if self.model_parallel:
                for k, v in self.device_map.items():
                    if i == v[-1] and "cuda:" + str(k) != self.last_device:
                        hidden_states = hidden_states.to("cuda:" + str(k + 1))

        hidden_states = self.ln_f(hidden_states)

        hidden_states = hidden_states.view(output_shape)
        # Add last hidden state
        if output_hidden_states:
            all_hidden_states = all_hidden_states + (hidden_states,)

        if not return_dict:
            return tuple(
                v
                for v in [
                    hidden_states,
                    presents,
                    all_hidden_states,
                    all_self_attentions_left,
                    all_cross_attentions_left,
                ]
                if v is not None
            )

        # The "left" branch attentions are returned; the "right" branch is tracked but not exposed here.
        return BaseModelOutputWithPastAndCrossAttentions(
            last_hidden_state=hidden_states,
            past_key_values=presents,
            hidden_states=all_hidden_states,
            attentions=all_self_attentions_left,
            cross_attentions=all_cross_attentions_left,
        )


class ParallelGPT2LMHeadModel(ParallelGPT2PretrainedModel, GenerationMixin):
    _tied_weights_keys = ["lm_head.weight"]

    def __init__(self, config):
        super().__init__(config)
        self.transformer = ParallelGPT2Model(config)
        self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False)

        # Model parallel
        self.model_parallel = False
        self.device_map = None

        # Initialize weights and apply final processing
        self.post_init()

    def parallelize(self, device_map=None):
        warnings.warn(
            "`GPT2LMHeadModel.parallelize` is deprecated and will be removed in v5 of Transformers, you should load"
            " your model with `device_map='balanced'` in the call to `from_pretrained`. You can also provide your own"
            " `device_map` but it needs to be a dictionary module_name to device, so for instance {'transformer.h.0':"
            " 0, 'transformer.h.1': 1, ...}",
            FutureWarning,
        )
        self.device_map = (
            get_device_map(len(self.transformer.h), range(torch.cuda.device_count()))
            if device_map is None
            else device_map
        )
        assert_device_map(self.device_map, len(self.transformer.h))
        self.transformer.parallelize(self.device_map)
        self.lm_head = self.lm_head.to(self.transformer.first_device)
        self.model_parallel = True

    def deparallelize(self):
        self.transformer.deparallelize()
        self.transformer = self.transformer.to("cpu")
        self.lm_head = self.lm_head.to("cpu")
        self.model_parallel = False
        torch.cuda.empty_cache()

    def get_output_embeddings(self):
        return self.lm_head

    def set_output_embeddings(self, new_embeddings):
        self.lm_head = new_embeddings

    def forward(
        self,
        input_ids: Optional[torch.LongTensor] = None,
        past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
        attention_mask: Optional[torch.FloatTensor] = None,
        token_type_ids: Optional[torch.LongTensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        head_mask: Optional[torch.FloatTensor] = None,
        inputs_embeds: Optional[torch.FloatTensor] = None,
        encoder_hidden_states: Optional[torch.Tensor] = None,
        encoder_attention_mask: Optional[torch.FloatTensor] = None,
        labels: Optional[torch.LongTensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
        **kwargs,
    ) -> Union[Tuple, CausalLMOutputWithCrossAttentions]:
        r"""
        labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
            Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set
            `labels = input_ids`. Indices are selected in `[-100, 0, ..., config.vocab_size]`. All labels set to `-100`
            are ignored (masked); the loss is only computed for labels in `[0, ..., config.vocab_size]`.
        """
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        transformer_outputs = self.transformer(
            input_ids,
            past_key_values=past_key_values,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            encoder_hidden_states=encoder_hidden_states,
            encoder_attention_mask=encoder_attention_mask,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
        hidden_states = transformer_outputs[0]

        # Set device for model parallelism
        if self.model_parallel:
            torch.cuda.set_device(self.transformer.first_device)
            hidden_states = hidden_states.to(self.lm_head.weight.device)

        lm_logits = self.lm_head(hidden_states)

        loss = None
        if labels is not None:
            # Flatten the tokens
            loss = self.loss_function(
                lm_logits,
                labels,
                vocab_size=self.config.vocab_size,
                **kwargs,
            )

        if not return_dict:
            output = (lm_logits,) + transformer_outputs[1:]
            return ((loss,) + output) if loss is not None else output

        return CausalLMOutputWithCrossAttentions(
            loss=loss,
            logits=lm_logits,
            past_key_values=transformer_outputs.past_key_values,
            hidden_states=transformer_outputs.hidden_states,
            attentions=transformer_outputs.attentions,
            cross_attentions=transformer_outputs.cross_attentions,
        )

    @staticmethod
    def _reorder_cache(
        past_key_values: Tuple[Tuple[torch.Tensor]], beam_idx: torch.Tensor
    ) -> Tuple[Tuple[torch.Tensor]]:
        """
        This function is used to re-order the `past_key_values` cache if [`~PreTrainedModel.beam_search`] or
        [`~PreTrainedModel.beam_sample`] is called. This is required to match `past_key_values` with the correct
        beam_idx at every generation step.
        """
        return tuple(
            tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past)
            for layer_past in past_key_values
        )


from transformers import AutoConfig, AutoModel

AutoConfig.register("parallel-gpt2", ParallelGPT2Config)
AutoModel.register(ParallelGPT2Config, ParallelGPT2LMHeadModel)

__all__ = [
    "ParallelGPT2LMHeadModel",
    "ParallelGPT2Model",
    "ParallelGPT2Config",
]


if __name__ == "__main__":
    cg = ParallelGPT2Config.from_pretrained("gpt2-medium", architectures=["ParallelGPT2LMHeadModel"])
    model = ParallelGPT2LMHeadModel(cg)
    from src.utils.model_utlis import print_trainable_parameters

    print_trainable_parameters(model)
    model(torch.randint(0, 10000, (1, 100)))
    print()
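
    # --- Added illustration (not part of the original script): compare the three bottleneck
    # variants on a deliberately tiny config. n_layer stays even because blocks run in
    # left/right pairs; logits should come out as (batch, seq_len, vocab_size) in every case.
    for method in ("mean", "add", "concat"):
        tiny_cfg = ParallelGPT2Config(
            n_layer=2, n_head=2, n_embd=64, n_positions=128, vocab_size=1000, bottleneck_method=method
        )
        tiny_model = ParallelGPT2LMHeadModel(tiny_cfg)
        tiny_out = tiny_model(torch.randint(0, 1000, (1, 16)))
        print(method, tuple(tiny_out.logits.shape))  # expected: (1, 16, 1000)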