mgelard committed on
Commit 1b644d1 · verified · 1 parent: 7c5d6ff

Upload BulkRNABert

Files changed (4)
  1. README.md +199 -0
  2. bulkrnabert.py +327 -0
  3. config.json +24 -0
  4. model.safetensors +3 -0
README.md ADDED
@@ -0,0 +1,199 @@
+ ---
+ library_name: transformers
+ tags: []
+ ---
+
+ # Model Card for BulkRNABert
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
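+ Until an official snippet is added here, the following is a minimal sketch of the loading path implied by the `auto_map` entries in this repository's `config.json` (they point `AutoConfig`/`AutoModel` at `bulkrnabert.py`, so `trust_remote_code=True` is required). The repository id is a placeholder, and the random `input_ids` only stand in for real bulk RNA-seq profiles, which are assumed to be pre-binned into `n_expressions_bins` token ids, one per gene.
+
+ ```python
+ import torch
+ from transformers import AutoModel
+
+ repo_id = "<org>/BulkRNABert"  # placeholder: replace with the actual Hub repository id
+ model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)
+ model.eval()
+
+ # One dummy sample: one binned expression token per gene (19,062 genes, 64 bins).
+ cfg = model.config
+ input_ids = torch.randint(0, cfg.n_expressions_bins, (1, cfg.n_genes))
+
+ # Full self-attention over ~19k genes is memory-hungry; run on a machine with
+ # enough RAM or GPU memory.
+ with torch.no_grad():
+     outs = model(input_ids)
+
+ print(outs["embeddings"].shape)  # (1, 19062, 256) input embeddings (pre-transformer)
+ print(outs["logits"].shape)      # (1, 19062, 64) logits over expression bins
+ ```
+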
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
bulkrnabert.py ADDED
@@ -0,0 +1,327 @@
+ import logging
+ from dataclasses import dataclass, field
+ from typing import Optional
+
+ import numpy as np
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F  # noqa: N812
+ from transformers import PretrainedConfig, PreTrainedModel
+
+
+ class MultiHeadAttention(nn.Module):
+     def __init__(
+         self,
+         num_heads: int,
+         key_size: int,
+         add_bias_kv: bool = False,
+         value_size: Optional[int] = None,
+         model_size: Optional[int] = None,
+         name: Optional[str] = None,
+     ):
+         super().__init__()
+         if not model_size:
+             model_size = key_size
+         if not value_size:
+             value_size = key_size
+         self.model_size = model_size
+         self.key_size = key_size
+         self.value_size = value_size
+         self.add_bias_kv = add_bias_kv
+         self.name = name
+         self.num_heads = num_heads
+
+         self.w_k = nn.Linear(self.model_size, self.num_heads * self.key_size)
+         self.w_q = nn.Linear(self.model_size, self.num_heads * self.key_size)
+         self.w_v = nn.Linear(self.model_size, self.num_heads * self.value_size)
+         self.output = nn.Linear(self.num_heads * self.value_size, self.model_size)
+
+     def forward(
+         self,
+         query: torch.Tensor,
+         key: torch.Tensor,
+         value: torch.Tensor,
+         attention_mask: Optional[torch.Tensor] = None,
+         attention_weight_bias: Optional[torch.Tensor] = None,
+     ) -> dict[str, torch.Tensor]:
+         """
+         Returns:
+             dictionary containing attention weights
+             and outputs.
+         """
+         key_heads = self.w_k(key).reshape(
+             (*key.shape[:-1], self.num_heads, self.key_size)
+         )
+         query_heads = self.w_q(query).reshape(
+             (*query.shape[:-1], self.num_heads, self.key_size)
+         )
+         value_heads = self.w_v(value).reshape(
+             (*value.shape[:-1], self.num_heads, self.value_size)
+         )
+         attention_weights = torch.einsum(
+             "...thd, ...Thd -> ...htT", query_heads, key_heads
+         )
+         sqrt_key_size = np.sqrt(self.key_size)
+         attention_weights = attention_weights / sqrt_key_size
+         if attention_mask is not None:
+             attention_weights = torch.where(attention_mask, attention_weights, -1e30)
+         if attention_weight_bias is not None:
+             attention_weights = F.softmax(
+                 attention_weights + attention_weight_bias, dim=-1
+             )
+         else:
+             attention_weights = F.softmax(attention_weights, dim=-1)
+         value_out = torch.einsum(
+             "...htT, ...Thd->...thd", attention_weights, value_heads
+         )
+         value_out = value_out.reshape((*value_out.shape[:-2], -1))
+         embeddings = self.output(value_out)
+
+         return {"attention_weights": attention_weights, "embeddings": embeddings}
+
+
+ class SelfAttentionBlock(nn.Module):
+     def __init__(
+         self,
+         num_heads: int,
+         embed_dim: int,
+         ffn_embed_dim: int,
+         key_size: Optional[int] = None,
+         add_bias_kv: bool = False,
+         add_bias_fnn: bool = True,
+         ffn_activation_name: str = "gelu-no-approx",
+         use_glu_in_ffn: bool = False,
+         layer_norm_eps: float = 1e-5,  # this is the default haiku value
+         pre_layer_norm: bool = True,
+         name: Optional[str] = None,
+     ):
+         super().__init__()
+         if key_size is None:
+             if embed_dim % num_heads != 0:
+                 raise ValueError(
+                     f"The embedding dimension should be divisible by the number of "
+                     f"heads, however provided embedding dimension is {embed_dim} and "
+                     f"the number of heads is {num_heads}."
+                 )
+             else:
+                 key_size = embed_dim // num_heads
+
+         # Get ffn activation function
+         self._pre_layer_norm = pre_layer_norm
+         self._use_glu_in_fnn = use_glu_in_ffn
+         # Define layers
+         if use_glu_in_ffn:
+             # user should multiply ffn_embed_dim by 2/3 when using GLU
+             # to keep total number of parameters equal
+             # see https://arxiv.org/pdf/2002.05202.pdf for more details
+             # we multiply by 2 here as the output will be split in 2 for GLU
+             self.fc1 = nn.Linear(embed_dim, int(2 * ffn_embed_dim), bias=add_bias_fnn)
+         else:
+             self.fc1 = nn.Linear(embed_dim, ffn_embed_dim, bias=add_bias_fnn)
+
+         self.fc2 = nn.Linear(ffn_embed_dim, embed_dim, bias=add_bias_fnn)
+
+         self.layer_norm_self_attention = nn.LayerNorm(
+             embed_dim,
+         )
+         self.layer_norm_mlp = nn.LayerNorm(embed_dim)
+         if ffn_activation_name == "swish":
+             self._ffn_activation_fn = nn.SiLU()
+         elif ffn_activation_name == "gelu-no-approx":
+             self._ffn_activation_fn = lambda x: F.gelu(x, approximate="none")
+         else:
+             self._ffn_activation_fn = getattr(torch.nn, ffn_activation_name)()  # e.g. "ReLU" -> nn.ReLU()
+
+         self.mha = MultiHeadAttention(
+             num_heads=num_heads,
+             key_size=key_size,
+             add_bias_kv=add_bias_kv,
+             model_size=embed_dim,
+             name="self_attention",
+         )
+
+     def mlp(self, embed: torch.Tensor) -> torch.Tensor:
+
+         if self._pre_layer_norm:
+             x = self.layer_norm_mlp(embed)
+         else:
+             x = embed
+
+         if self._use_glu_in_fnn:
+             x = self.fc1(x)
+             x1, x2 = torch.split(x, split_size_or_sections=x.shape[-1] // 2, dim=-1)
+             x = self._ffn_activation_fn(x1) * x2
+         else:
+             x = self._ffn_activation_fn(self.fc1(x))
+         x = self.fc2(x)
+
+         if not self._pre_layer_norm:
+             x = self.layer_norm_mlp(x + embed)
+         return x
+
+     def forward(
+         self,
+         x: torch.Tensor,
+         attention_mask: Optional[torch.Tensor] = None,
+         attention_weight_bias: Optional[torch.Tensor] = None,
+     ) -> dict[str, torch.Tensor]:
+
+         res = x
+         if self._pre_layer_norm:
+             x = self.layer_norm_self_attention(x)
+
+         output = self.mha(
+             x,
+             x,
+             x,
+             attention_mask=attention_mask,
+             attention_weight_bias=attention_weight_bias,
+         )
+
+         if not self._pre_layer_norm:
+             output["embeddings"] = self.layer_norm_self_attention(
+                 output["embeddings"] + res
+             )
+
+             x = output["embeddings"]
+         else:
+             x = output["embeddings"]
+             x = res + x
+
+         # MLP
+         if not self._pre_layer_norm:
+             x = self.mlp(x)
+         else:
+             x = x + self.mlp(x)
+
+         output["embeddings"] = x
+         return output
+
+
+ @dataclass
+ class BulkRNABertConfig(PretrainedConfig):
+     model_type = "BulkRNABert"
+     n_genes: int = 19_062
+     n_expressions_bins: int = 64
+     embed_dim: int = 256
+     init_gene_embed_dim: int = 200
+     use_gene_embedding: bool = True
+     project_gene_embedding: bool = True
+     num_attention_heads: int = 8
+     key_size: Optional[int] = None
+     ffn_embed_dim: int = 512
+     num_layers: int = 4
+
+     # Optional intermediate outputs returned by `forward`
+     embeddings_layers_to_save: tuple[int, ...] = field(default_factory=tuple)
+     attention_maps_to_save: list[tuple[int, int]] = field(default_factory=list)
+
+     def __post_init__(self):
+         # Validate attention key size
+         key_size = self.key_size
+         if key_size is None:
+             embed_dim = self.embed_dim
+             num_attention_heads = self.num_attention_heads
+             if not embed_dim % num_attention_heads == 0:
+                 raise ValueError(
+                     f"When no key size is provided, the embedding dimension should be "
+                     f"divisible by the number of heads, however provided embedding "
+                     f"dimension is {embed_dim} and the number of heads is "
+                     f"{num_attention_heads}."
+                 )
+             self.key_size = embed_dim // num_attention_heads
+
+         # Validate gene embedding projection
+         use_gene_embedding = self.use_gene_embedding
+         if use_gene_embedding:
+             init_gene_embed_dim = self.init_gene_embed_dim
+             embed_dim = self.embed_dim
+             if init_gene_embed_dim != embed_dim:
+                 project_gene_embedding = self.project_gene_embedding
+                 if not project_gene_embedding:
+                     logging.warning(
+                         f"Init gene embedding dimension ({init_gene_embed_dim}) "
+                         f"differs from embedding dimension ({embed_dim}); "
+                         f"setting `project_gene_embedding` to True."
+                     )
+                     self.project_gene_embedding = True
+
+
+ class BulkRNABert(PreTrainedModel):
+     config_class = BulkRNABertConfig
+
+     def __init__(self, config: BulkRNABertConfig):
+         super().__init__(config=config)
+
+         self.expression_embedding_layer = nn.Embedding(
+             config.n_expressions_bins, config.embed_dim
+         )
+         self.gene_embedding_layer = nn.Embedding(
+             config.n_genes,
+             config.init_gene_embed_dim,
+         )
+         self.fc_gene_embedding = nn.Linear(config.init_gene_embed_dim, config.embed_dim)
+
+         attention_maps_to_save = config.attention_maps_to_save
+         self._attention_layers_to_save = list({t[0] for t in attention_maps_to_save})
+
+         self._attention_maps_per_layer_to_save = {
+             layer: [t[1] for t in attention_maps_to_save if t[0] == layer]
+             for layer in self._attention_layers_to_save
+         }
+         max_layer = max(self._attention_layers_to_save + [0])
+         if max_layer > config.num_layers:
+             raise ValueError(
+                 f"You are requesting attention maps for layer {max_layer}, "
+                 f"while the model only has {config.num_layers} layers."
+             )
+         self.transformer_layers = nn.ModuleList(
+             [
+                 SelfAttentionBlock(
+                     num_heads=config.num_attention_heads,
+                     embed_dim=config.embed_dim,
+                     key_size=config.key_size,
+                     ffn_embed_dim=config.ffn_embed_dim,
+                     name=f"attention_layer_{layer_idx}",
+                 )
+                 for layer_idx in range(config.num_layers)
+             ]
+         )
+
+         self.lm_head = nn.Linear(config.embed_dim, config.n_expressions_bins)
+
+     def forward(
+         self, input_ids: torch.Tensor, attention_mask: Optional[torch.Tensor] = None
+     ) -> dict[str, torch.Tensor]:
+         outs = {}
+         x = self.expression_embedding_layer(input_ids)
+
+         if self.config.use_gene_embedding:
+             gene_indices = torch.arange(self.config.n_genes, device=x.device)
+             gene_embedding = self.gene_embedding_layer(gene_indices)
+             if self.config.project_gene_embedding:
+                 gene_embedding = self.fc_gene_embedding(gene_embedding)
+             x = x + gene_embedding
+
+         outs["embeddings"] = x
+
+         # Default to a full attention mask (no padding, no causal masking).
+         if attention_mask is None:
+             batch_size, seq_length = input_ids.shape
+             attention_mask = torch.ones(  # noqa
+                 (batch_size, 1, seq_length, seq_length),
+                 device=input_ids.device,
+                 dtype=bool,
+             )
+
+         for layer_idx, transformer in enumerate(self.transformer_layers):
+             output = transformer(x, attention_mask=attention_mask)
+             x = output["embeddings"]
+             if (layer_idx + 1) in self.config.embeddings_layers_to_save:
+                 outs[f"embeddings_{(layer_idx + 1)}"] = output["embeddings"]
+             if (layer_idx + 1) in self._attention_layers_to_save:
+                 for map_number in self._attention_maps_per_layer_to_save[layer_idx + 1]:
+                     dkey = f"attention_map_layer_{layer_idx + 1}_number_{map_number}"
+                     outs[dkey] = output["attention_weights"][:, map_number + 1]
+
+         # Project final hidden states to logits over the expression bins.
+         outs["logits"] = self.lm_head(x)
+         return outs
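For reference, the classes above can be exercised end to end with a toy configuration. This is only a sketch of the input/output contract: the hyperparameters below are deliberately tiny (not those of the released checkpoint), `bulkrnabert.py` is assumed to be importable from the working directory, and a transformers version close to the one recorded in `config.json` (4.37.x) is assumed.

```python
import torch

from bulkrnabert import BulkRNABert, BulkRNABertConfig

# Tiny illustrative configuration: 32 genes, 8 expression bins, 2 layers.
config = BulkRNABertConfig(
    n_genes=32,
    n_expressions_bins=8,
    embed_dim=16,
    init_gene_embed_dim=16,
    num_attention_heads=2,
    ffn_embed_dim=32,
    num_layers=2,
)
model = BulkRNABert(config)
model.eval()

# One sample: one binned expression token per gene.
input_ids = torch.randint(0, config.n_expressions_bins, (1, config.n_genes))
with torch.no_grad():
    outs = model(input_ids)

print(outs["embeddings"].shape)  # (1, 32, 16): input embeddings (expression + gene)
print(outs["logits"].shape)      # (1, 32, 8): one distribution over bins per gene
```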
config.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "architectures": [
+     "BulkRNABert"
+   ],
+   "attention_maps_to_save": [],
+   "auto_map": {
+     "AutoConfig": "bulkrnabert.BulkRNABertConfig",
+     "AutoModel": "bulkrnabert.BulkRNABert"
+   },
+   "embed_dim": 256,
+   "embeddings_layers_to_save": [],
+   "ffn_embed_dim": 512,
+   "init_gene_embed_dim": 200,
+   "key_size": 32,
+   "model_type": "BulkRNABert",
+   "n_expressions_bins": 64,
+   "n_genes": 19062,
+   "num_attention_heads": 8,
+   "num_layers": 4,
+   "project_gene_embedding": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.37.2",
+   "use_gene_embedding": true
+ }
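The two empty lists in this config, `embeddings_layers_to_save` and `attention_maps_to_save`, control which intermediate outputs `forward` returns in addition to `embeddings` and `logits`: per-layer hidden states come back under `embeddings_{layer}`, and attention maps are requested as `(layer, number)` pairs and returned under `attention_map_layer_{layer}_number_{number}` (the second element indexes into the attention heads). A sketch with the same toy hyperparameters as above, again assuming `bulkrnabert.py` is importable locally:

```python
import torch

from bulkrnabert import BulkRNABert, BulkRNABertConfig

config = BulkRNABertConfig(
    n_genes=32,
    n_expressions_bins=8,
    embed_dim=16,
    init_gene_embed_dim=16,
    num_attention_heads=2,
    ffn_embed_dim=32,
    num_layers=2,
    embeddings_layers_to_save=(1, 2),  # keep hidden states of layers 1 and 2
    attention_maps_to_save=[(2, 0)],   # keep one attention map from layer 2
)
model = BulkRNABert(config)
model.eval()

input_ids = torch.randint(0, config.n_expressions_bins, (1, config.n_genes))
with torch.no_grad():
    outs = model(input_ids)

print(sorted(outs))
# ['attention_map_layer_2_number_0', 'embeddings', 'embeddings_1', 'embeddings_2', 'logits']
print(outs["embeddings_2"].shape)                    # (1, 32, 16)
print(outs["attention_map_layer_2_number_0"].shape)  # (1, 32, 32)
```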
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b934e179e95a9f25f22ede71c7fe92132469d5bc4340c1031c9601b102a491f5
+ size 24027776