instruction | input | output | source
---|---|---|---|
ERROR WHEN IMPORTING PYTORCH (The filename or extension is too long) | I'm using Anaconda to run my Transformers project locally in Google Colab.
I've created a new environment (tf_gpu) and installed (supposedly) everything I need.
And everything works fine, but when I try to simply import pytorch, this error appears:
[WinError 206] The filename or extension is too long: 'C:\\Users\\34662\\anaconda3\\envs\\tf_gpu\\lib\\site-packages\\torch\\lib'
When clearly the path is not long enough to trigger this error.
My Python version is 3.8, and my GPU is an Nvidia GeForce GTX 1650, so it shouldn't be a GPU problem.
Does anybody know why this happens?
Any help is good at this point, I don't know how to solve this.
Here I leave a screenshot of the complete error message
Thank you in advance.
| Your problem is that the error is not actually a too-long-path error; it is a file-not-found error, which means that PyTorch is not correctly installed.
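If that is the case, a clean reinstall usually fixes it. A minimal sketch (the environment name and CUDA toolkit version are placeholders to adapt to your setup):
conda create -n torch_clean python=3.8
conda activate torch_clean
conda install pytorch cudatoolkit=11.3 -c pytorch
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
If the last command prints a version and True, the installation is healthy.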
| https://stackoverflow.com/questions/70532485/ |
Cross entropy yields different results for vectors with identical distributions | I am training a neural network to distinguish between three classes. Naturally, I went for PyTorch's CrossEntropyLoss. During experimentation, I realized that the loss was significantly higher when a Softmax layer was put at the end of the model. So I decided to experiment further:
import torch
from torch import nn
pred_1 = torch.Tensor([[0.1, 0.2, 0.7]])
pred_2 = torch.Tensor([[1, 2, 7]])
pred_3 = torch.Tensor([[2, 4, 14]])
true = torch.Tensor([2]).long()
loss = nn.CrossEntropyLoss()
print(loss(pred_1, true))
print(loss(pred_2, true))
print(loss(pred_3, true))
The result of this code is as follows:
0.7679
0.0092
5.1497e-05
I also tried to see what happens when multiplying the input by some constant.
Several sources (1, 2) stated that the loss has a softmax built in, but if that were the case, I would have expected all of the examples above to return the same loss, which clearly isn't the case.
This poses the following question: if bigger outputs lead to a lower loss, wouldn't the network optimize towards outputting bigger values, thereby skewing the loss curves? If so, it seems like a Softmax layer would fix that. But since this results in a higher loss overall, how useful would the resulting loss actually be?
| From the docs, the input to CrossEntropyLoss "is expected to contain raw, unnormalized scores for each class". Those are typically called logits.
There are two questions:
Scaling the logits should not yield the same cross-entropy. You might be thinking of a linear normalization, but the (implicit) softmax in the cross-entropy normalizes the exponential of the logits.
This causes the learning to optimize toward larger values of the logits. This is exactly what you want because it means that the network is more "confident" of the classification prediction. (The posterior p(c|x) is closer to the ground truth.)
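To see the first point concretely, a quick check using the question's own logits (a minimal sketch):
import torch
for logits in ([0.1, 0.2, 0.7], [1., 2., 7.], [2., 4., 14.]):
    print(torch.softmax(torch.tensor(logits), dim=0))  # the three distributions differ
Since the three probability distributions are not identical, neither are the three losses.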
| https://stackoverflow.com/questions/70533018/ |
How do you know if a Pytorch Save contains a model and/or just the weights? | I'm fairly new to pytorch and this might be a version issue, but I see torch.load and torch.load_state_dict used, but in both cases the file extension is commonly ".pth"
Models that I have created, I can save and load via torch.save and torch.load and then call model.eval()
I have another model file that I'm fairly sure is just the state dictionary, as model.eval() fails after a load.
How would I inspect the file and know that one has a full model in it?
Thanks much.
| As far as I know there isn't a foolproof way to figure this out. torch.save uses Python's pickle under the hood (ref: Pytorch docs), so users can save arbitrary Python objects. For example, the following code wraps the state dicts in a dictionary:
# example from https://github.com/lucidrains/lightweight-gan/blob/fce20938562a0cc289c915f7317722a8241abd37/lightweight_gan/lightweight_gan.py#L1437
save_data = {
'GAN': self.GAN.state_dict(),
'version': __version__,
'G_scaler': self.G_scaler.state_dict(),
'D_scaler': self.D_scaler.state_dict()
}
torch.save(save_data, self.model_name(num))
If it helps, state dicts themselves are OrderedDict objects. If isinstance(model, collections.OrderedDict) returns True, you can be fairly confident that model is a state dict. (Remember to import collections)
Models themselves are subclasses of torch.nn.Module, so you can check if something is a model by verifying that isinstance(model, torch.nn.Module) returns True.
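Putting those checks together (a sketch; "checkpoint.pth" is a placeholder path):
import collections
import torch
obj = torch.load("checkpoint.pth", map_location="cpu")
if isinstance(obj, torch.nn.Module):
    print("full model")
elif isinstance(obj, collections.OrderedDict):
    print("state dict")
elif isinstance(obj, dict):
    print("wrapper dict with keys:", list(obj.keys()))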
| https://stackoverflow.com/questions/70536259/ |
Unable to use Pytorch with CUDA in Celery task | I have a celery task that uses torch library, which internally uses CUDA. When I run the task, it fails saying
"Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method"
When I browsed on this a little, I got this - https://github.com/celery/celery/issues/6036
This issue says Celery supports only fork and not spawn.
Is there any workaround/alternative to this?
| You should load the PyTorch model once per worker process and keep it as a global variable; the model instance and the tasks will then run in the same process.
from celery.signals import worker_process_init
# for more information about worker_process_init, read here:
# https://docs.celeryproject.org/en/stable/userguide/signals.html#worker-process-init
import numpy as np  # np is used in the task signature below
# `app` is assumed to be your Celery application; load_model() is your own model loader
pytorch_model = None
@worker_process_init.connect()
def init_worker_process(**kwargs):
"""
load model before running tasks
:param kwargs:
:return:
"""
global pytorch_model
pytorch_model = load_model()
@app.task
def predict_task(image: np.ndarray):
return pytorch_model.predict(image)
| https://stackoverflow.com/questions/70541625/ |
how to use pretrained model .pth in pytorch? | I have one question: how do I use a saved pre-trained model? For example, I have a model, I train it with my dataset, and then I save it with the .pth extension. How do I use the .pth file to test a new image? Thanks.
| You can load the parameters from a .pt/.pth file into a model like this:
# initialize a model with the same architecture as the model which parameters you saved into the .pt/h file
model = Model()
# load the parameters into the model
model.load_state_dict(torch.load("parameters.pth"))
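To then test a new image (a sketch; the preprocessing must match whatever was used during training):
model.eval()
with torch.no_grad():
    output = model(image_tensor)  # image_tensor: your preprocessed input image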
| https://stackoverflow.com/questions/70542964/ |
Pytorch LSTM - generating sentence- word by word? | I'm trying to implement a neural network to generate sentences (image captions), and I'm using Pytorch's LSTM (nn.LSTM) for that.
The input I want to feed in the training is from size batch_size * seq_size * embedding_size, such that seq_size is the maximal size of a sentence. For example - 64*30*512.
After the LSTM there is one FC layer (nn.Linear).
As far as I understand, this type of network works with a hidden state (h, c in this case) and predicts the next word each time.
My question is: during training, do we have to manually feed the sentence word by word to the LSTM in the forward function, or does the LSTM know how to do it itself?
My forward function looks like this:
def forward(self, features, caption, h = None, c = None):
batch_size = caption.size(0)
caption_size = caption.size(1)
no_hc = False
if h == None and c == None:
no_hc = True
h,c = self.init_hidden(batch_size)
embeddings = self.embedding(caption)
output = torch.empty((batch_size, caption_size, self.vocab_size)).to(device)
for i in range(caption_size): #go over the words in the sentence
if i==0:
lstm_input = features.unsqueeze(1)
else:
lstm_input = embeddings[:,i-1,:].unsqueeze(1)
out, (h,c) = self.lstm(lstm_input, (h,c))
out = self.fc(out)
output[:,i,:] = out.squeeze()
if no_hc:
return output
return output, h,c
(took inspiration from here)
The output of the forward here is from size batch_size * seq_size * vocab_size, which is good because it can be compared with the original batch_size * seq_size sized caption in the loss function.
The question is whether this for loop inside the forward that feeds the words one after the other is really necessary, or whether I can somehow feed the entire sentence at once and get the same results.
(I saw some example that do that, for example this one, but I'm not sure if it's really equivalent)
| The answer is: the LSTM knows how to do it on its own. You do not have to manually feed each word one by one.
An intuitive way to understand this is that the shape of the batch you pass in contains the sequence length (batch.shape[1]), from which the LSTM determines how many time steps to unroll. The words are passed through the LSTM cell step by step internally, generating the hidden states h and c.
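Concretely, the loop in the question can be collapsed into one call (a sketch reusing the question's variable names, assuming teacher forcing exactly as in the loop):
lstm_input = torch.cat([features.unsqueeze(1), embeddings[:, :-1, :]], dim=1)
out, (h, c) = self.lstm(lstm_input, (h, c))  # out: (batch, seq, hidden)
output = self.fc(out)                        # (batch, seq, vocab_size)
This gives the same result as feeding the time steps one by one, because the LSTM unrolls over dim 1 internally.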
| https://stackoverflow.com/questions/70550047/ |
big data in pytorch, help for tuning steps | I've previously split my big data:
# X_train.shape : 4M samples x 2K features
# X_test.shape : 2M samples x 2K features
I've prepared the dataloaders
import numpy as np
import torch
import torch.utils.data as data_utils
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import GridSearchCV

target = torch.tensor(y_train.to_numpy())
features = torch.tensor(X_train.values)
train = data_utils.TensorDataset(features, target)
train_loader = data_utils.DataLoader(train, batch_size=10000, shuffle=True)
testtarget = torch.tensor(y_test.to_numpy())
testfeatures = torch.tensor(X_test.values)
test = data_utils.TensorDataset(testfeatures, testtarget)
validation_generator = data_utils.DataLoader(test, batch_size=20000, shuffle=True)
I copied this example for a model from an online course (no idea if other models are better)
base_elastic_model = ElasticNet()
param_grid = {'alpha':[0.1,1,5,10,50,100],
'l1_ratio':[.1, .5, .7, .9, .95, .99, 1]}
grid_model = GridSearchCV(estimator=base_elastic_model,
param_grid=param_grid,
scoring='neg_mean_squared_error',
cv=5,
verbose=0)
I've built this training loop
for epoch in range(1):
# Training
cont=0
total = 0
correct = 0
for local_batch, local_labels in train_loader:
cont+=1
with torch.set_grad_enabled(True):
grid_model.fit(local_batch,local_labels)
with torch.set_grad_enabled(False):
predicted = grid_model.predict(local_batch)
total += len(local_labels)
correct += ((1*(predicted>.5)) == np.array(local_labels)).sum()
#print stats
# Validation
total = 0
correct = 0
with torch.set_grad_enabled(False):
for local_batch, local_labels in validation_generator:
predicted = grid_model.predict(local_batch)
total += len(local_labels)
correct += ((1*(predicted>.5)) == np.array(local_labels)).sum()
#print stats
Maybe my grandchildren will have the results for 1 epoch!
I need some advice:
how/where (in the code) can I quickly use less data for a first tuning pass?
any advice on the steps needed to actually get a result in 2022?
because I've added "with torch.set_grad_enabled(False):" for printing stats, do I also have to add "with torch.set_grad_enabled(True):" (as done)?
I have a GPU (is it useful without images?). I have a "get_device()" function. Where do I have to put ".to(get_device())" to use CUDA?
I'm learning by putting pieces of information together; do you have any general advice for my exercise?
|
You can shorten the training process by simply stopping the training for loop after a certain number of batches, like so:
for local_batch, local_labels in train_loader:
    cont += 1
    if cont == number_you_want_to_stop_at:
        break  # breaks out of the for loop and continues with the rest
Always use your GPU for training and "inferencing" (i.e. using a model to make predictions), because it is more than 20 times faster than even the best CPU.
No, you don't have to set it back to True. That's the main point of the "with" syntax: once the code inside the with block finishes, the setting is automatically restored. So you can delete the explicit "with torch.set_grad_enabled(True):" line.
Like I said in the 2nd point, use your GPU for all your projects, but keep in mind you will need a graphics card with at least 4 GB of memory to train even small models.
Here is the install command for using the GPU on Windows:
pip3 install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio===0.10.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
and here is the one for Linux
pip3 install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio==0.10.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
and here is a link to the PyTorch docs that explain how to use the GPU in PyTorch.
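In practice the pattern looks like this (a sketch; get_device() is the question's own helper, assumed to return "cuda" or "cpu"):
device = get_device()
model.to(device)
for local_batch, local_labels in train_loader:
    local_batch = local_batch.to(device)
    local_labels = local_labels.to(device)
    # ... forward pass, loss, backward as usual
Note that this applies to PyTorch models; scikit-learn estimators such as ElasticNet/GridSearchCV run on the CPU regardless.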
A very nice starter project, which almost everyone does when starting with machine learning, especially those interested in computer vision, is image classification on the MNIST dataset. There are many great tutorials out there. At first it will be very overwhelming with all those new words, but I promise it will get better once you start to speak the same language as the people writing those tutorials. So first follow a tutorial, and if you don't understand a word, just google it by itself and work through the material in little pieces, because otherwise it will be very hard to comprehend. After you have gained some basic knowledge, you can start to build your own little projects. Start with something small. So keep grinding :)
| https://stackoverflow.com/questions/70551621/ |
Attention does not seem to be applied at TransformerEncoderLayer and MultiheadAttention PyTorch | Changing something at one position in my input does not affect the outputs at other positions of my transformer encoder. I made a test in isolation in PyTorch:
# My encoder layer
encoder_layer = nn.TransformerEncoderLayer(d_model=8, nhead=2)
# Turn off dropout
encoder_layer.eval()
# Random input
src = torch.rand(2, 10, 8)
# Predict the output
out_0 = encoder_layer(src)
# Change the values at one of the positions (position 3 in this case)
src[:,3,:] += 1
# Predict once again the output
out_1 = encoder_layer(src)
# Check at which positions the outcomes are different between the two cases
# I summed in the embedding space direction
print(np.sum(np.abs(out_0.detach().numpy()),axis=-1) - np.sum(np.abs(out_1.detach().numpy()),axis=-1))
Output:
[[ 0. 0. 0. -0.15470695 0. 0. 0. 0. 0. 0. ]
[ 0. 0. 0. -0.27988768 0. 0. 0. 0. 0. 0. ]]
However, this does work in TensorFlow:
# My encoder layer
encoder_layer = TransformerBlock(8, 2, 8)
# Random input
src = np.random.randn(2, 10, 8)
# Predict the output
out_0 = encoder_layer(src, training=False)
# Change the values at one of the positions (position 3 in this case)
src[:,3,:] += 1
# Predict once again the output
out_1 = encoder_layer(src, training=False)
# Check at which positions the outcomes are different between the two cases
# I summed in the embedding space direction
print(np.sum(np.abs(out_0),axis=-1) )
Output:
[[6.4196725 6.775745 6.946576 7.26213 6.473065 5.520765 6.201167
7.1266503 6.3147016 6.614853 ]
[5.565378 7.030789 6.768366 6.6065626 6.7277775 7.480627 6.6785836
6.4560523 6.4248576 6.6436586]]
My question is: why aren't the values at all the positions affected by changing the input at one position in PyTorch?
| From the documentation:
batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False.
In other words, with the default batch_first=False, your src of shape (2, 10, 8) is interpreted as sequences of length 2, a batch of 10, and 8 features per step, not as a batch of 2 sequences of length 10. What you are doing is adding 1 to all features at every time step of sample #4 in the batch, which, unsurprisingly, alters only the output values of that specific sample.
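If you want the (batch, seq, feature) interpretation used in the TensorFlow test, a minimal fix is to construct the layer with batch_first=True (available in recent PyTorch versions), or to transpose the input:
encoder_layer = nn.TransformerEncoderLayer(d_model=8, nhead=2, batch_first=True)
# or keep the default and pass src.transpose(0, 1) instead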
| https://stackoverflow.com/questions/70554730/ |
PyTorch how to set the minimum value along each row of a tensor to zero? | We don't know the shape of the input tensor and we shouldn't use any loops. Only reduction and indexing operations. How do we set the minimum value of each row to zero?
For example:
input:
x = torch.tensor([
    [10, 20, 30],
    [2, 5, 1]
])
output:
torch.tensor([
[0, 20, 30],
[2, 5, 0]
])
I couldn't figure it out and couldn't find any related questions. I'm stuck.
| One way to do so is to compute row-wise minimum x.min(dim=-1), get minimal values x.min(dim=-1).values (indices won't work in case of multiple minimal elements), get mask indicating locations of non-minimal elements using comparison and multiply by it:
axis = -1 # Minimum iterating over the last dimension
min_values = x.min(dim=axis).values # Get minimal values
min_values_shape_corrected = min_values.unsqueeze(axis) # Reshape the minimal values so we can compare it with `x`
mask = (x != min_values_shape_corrected) # Get the mask of non-minimal elements
result = x * mask # Multiplying by a boolean mask leaves only True elements and sets False ones to 0
Or in one line
x * (x != x.min(dim=axis).values.unsqueeze(axis))
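For the question's example, a quick check:
x = torch.tensor([[10, 20, 30], [2, 5, 1]])
print(x * (x != x.min(dim=-1).values.unsqueeze(-1)))
# tensor([[ 0, 20, 30],
#         [ 2,  5,  0]])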
| https://stackoverflow.com/questions/70555376/ |
SimpleTransformers "max_seq_length" argument results in CUDA out of memory error in Kaggle and Google Colab | When fine-tuning the sloBERTa Transformer model, based on CamemBERT, for a multiclass classification task with SimpleTransformers, I want to use the model argument "max_seq_length": 512, as previous work states that it gives better results than 128, but the inclusion of this argument triggers the error below. The error is the same in Kaggle and Google Colab environment, and terminating the execution and reruning it does not help. The error is triggered not matter how small the number of training epochs is, and the dataset contains only 600 instances (with text as strings, and labels as integers). I've tried lowering the max_seq_length to 509, 500 and 128, but the error persists.
The setup without this argument works normally and allows training with 90 epochs, so I otherwise have enough memory.
from simpletransformers.classification import ClassificationModel
# define hyperparameter
model_args ={"overwrite_output_dir": True,
"num_train_epochs": 90,
"labels_list": LABELS_NUM,
"learning_rate": 1e-5,
"train_batch_size": 32,
"no_cache": True,
"no_save": True,
#"max_seq_length": 512,
"save_steps": -1,
}
model = ClassificationModel(
"camembert", "EMBEDDIA/sloberta",
use_cuda = device,
num_labels = NUM_LABELS,
args = model_args)
model.train_model(train_df)
This is the error:
RuntimeError Traceback (most recent call last)
/tmp/ipykernel_34/2529369927.py in <module>
19 args = model_args)
20
---> 21 model.train_model(train_df)
/opt/conda/lib/python3.7/site-packages/simpletransformers/classification/classification_model.py in train_model(self, train_df, multi_label, output_dir, show_running_loss, args, eval_df, verbose, **kwargs)
610 eval_df=eval_df,
611 verbose=verbose,
--> 612 **kwargs,
613 )
614
/opt/conda/lib/python3.7/site-packages/simpletransformers/classification/classification_model.py in train(self, train_dataloader, output_dir, multi_label, show_running_loss, eval_df, test_df, verbose, **kwargs)
883 loss_fct=self.loss_fct,
884 num_labels=self.num_labels,
--> 885 args=self.args,
886 )
887 else:
/opt/conda/lib/python3.7/site-packages/simpletransformers/classification/classification_model.py in _calculate_loss(self, model, inputs, loss_fct, num_labels, args)
2256
2257 def _calculate_loss(self, model, inputs, loss_fct, num_labels, args):
-> 2258 outputs = model(**inputs)
2259 # model outputs are always tuple in pytorch-transformers (see doc)
2260 loss = outputs[0]
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
/opt/conda/lib/python3.7/site-packages/transformers/models/roberta/modeling_roberta.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict)
1210 output_attentions=output_attentions,
1211 output_hidden_states=output_hidden_states,
-> 1212 return_dict=return_dict,
1213 )
1214 sequence_output = outputs[0]
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
/opt/conda/lib/python3.7/site-packages/transformers/models/roberta/modeling_roberta.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
859 output_attentions=output_attentions,
860 output_hidden_states=output_hidden_states,
--> 861 return_dict=return_dict,
862 )
863 sequence_output = encoder_outputs[0]
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
/opt/conda/lib/python3.7/site-packages/transformers/models/roberta/modeling_roberta.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
531 encoder_attention_mask,
532 past_key_value,
--> 533 output_attentions,
534 )
535
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
/opt/conda/lib/python3.7/site-packages/transformers/models/roberta/modeling_roberta.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value, output_attentions)
415 head_mask,
416 output_attentions=output_attentions,
--> 417 past_key_value=self_attn_past_key_value,
418 )
419 attention_output = self_attention_outputs[0]
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
/opt/conda/lib/python3.7/site-packages/transformers/models/roberta/modeling_roberta.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value, output_attentions)
344 encoder_attention_mask,
345 past_key_value,
--> 346 output_attentions,
347 )
348 attention_output = self.output(self_outputs[0], hidden_states)
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
/opt/conda/lib/python3.7/site-packages/transformers/models/roberta/modeling_roberta.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value, output_attentions)
273 attention_probs = attention_probs * head_mask
274
--> 275 context_layer = torch.matmul(attention_probs, value_layer)
276
277 context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
RuntimeError: CUDA out of memory. Tried to allocate 192.00 MiB (GPU 0; 15.90 GiB total capacity; 15.04 GiB already allocated; 15.75 MiB free; 15.12 GiB reserved in total by PyTorch)
Additional code (if it helps - I've tried everything regarding PyTorch that I found on the web - the full code can be accessed at https://www.kaggle.com/tajakuz/0-sloberta-example-max-seq-length-error):
!conda install --yes pytorch>=1.6 cudatoolkit=11.0 -c pytorch
# install simpletransformers
!pip install -q transformers
!pip install --upgrade transformers
!pip install -q simpletransformers
# check installed version
!pip freeze | grep simpletransformers
!pip uninstall -q torch -y
!pip install -q torch==1.6.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
# pytorch libraries
import torch # the main pytorch library
import torch.nn as nn # the sub-library containing Softmax, Module and other useful functions
import torch.optim as optim # the sub-library containing the common optimizers (SGD, Adam, etc.)
from torch.utils.data import Dataset, DataLoader, RandomSampler, SequentialSampler
from torch import cuda
device = 'cuda' if cuda.is_available() else 'cpu'
#importing other necessary packages and ClassificationModel for bert
from tqdm import tqdm
import warnings
warnings.simplefilter('ignore')
from scipy.special import softmax
Thank you so much for your help, it is really appreciated!
| This happens because max_seq_length sets the length of the token sequences the model processes; longer sequences mean much larger activations, particularly in the attention layers, whose memory use grows quadratically with sequence length, so the memory required can exceed your limits on those platforms.
Most of the time, the appropriate max_seq_length is dictated by the dataset, and setting it higher than needed is wasteful in terms of training time and memory.
What you can do is to find the max number of words per sample in your training dataset and use that as your max_seq_length.
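A sketch of how to find that number (strictly this counts tokens, which is what max_seq_length actually limits; it assumes the matching HuggingFace tokenizer, and that the training texts are in a column of train_df named "text"):
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/sloberta")
max_len = max(len(tokenizer.encode(text)) for text in train_df["text"])
print(max_len)  # use this value, capped at 512, as max_seq_length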
| https://stackoverflow.com/questions/70556326/ |
Neural net loss exponentially rises after first propagation | I am training a neural network on video frames (converted to greyscale) to output a tensor with two values. The first iteration always evaluates to an acceptable loss (mean squared error generally between 15 and 40), followed by an exponential rise in the second pass, and then infinity.
The net is quite vanilla:
class NeuralNetwork(nn.Module):
def __init__(self):
super(NeuralNetwork, self).__init__()
self.flatten = nn.Flatten()
self.linear_relu_stack = nn.Sequential(
nn.Linear(100 * 291, 29100),
nn.ReLU(),
nn.Linear(29100, 29100),
nn.ReLU(),
nn.Linear(29100, 2),
)
def forward(self, x):
x = self.flatten(x)
logits = self.linear_relu_stack(x)
return logits
As is the training loop:
def train(dataloader, model, loss_fn, optimizer):
size = len(dataloader.dataset)
model.train()
for batch, (X, y) in enumerate(dataloader):
X, y = X.to("cpu"), y.to("cpu")
# Compute prediction error
pred = model(X)
loss = loss_fn(pred, y)
# Backpropagation
optimizer.zero_grad()
loss.backward()
optimizer.step()
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
Example of loss function growth:
ITERATION 1
prediction: tensor([[-1.2239, -8.2337]], grad_fn=<AddmmBackward>)
actual: tensor([[0.0321, 0.0325]])
loss: tensor(34.9545, grad_fn=<MseLossBackward>)
ITERATION 2
prediction: tensor([[ 314636.5625, 2063098.2500]], grad_fn=<AddmmBackward>)
actual: tensor([[0.0330, 0.0323]])
loss: tensor(2.1777e+12, grad_fn=<MseLossBackward>)
ITERATION 3
prediction: tensor([[-8.0924e+22, -5.3062e+23]], grad_fn=<AddmmBackward>)
actual: tensor([[0.0334, 0.0317]])
loss: tensor(inf, grad_fn=<MseLossBackward>)
Here is an example of the video data: it's a 291x100 greyscale image and there are 1100 of them in the training dataset:
dataset.video_frames.size()
> torch.Size([1100, 100, 291])
dataset.video_frames[0]
> tensor([[21., 29., 28., ..., 33., 27., 26.],
[22., 27., 25., ..., 25., 25., 30.],
[23., 26., 26., ..., 24., 24., 28.],
...,
[24., 33., 31., ..., 41., 40., 42.],
[26., 34., 31., ..., 26., 20., 22.],
[25., 32., 32., ..., 21., 20., 18.]])
And the labeled training data:
dataset.y.size()
> torch.Size([1100, 2])
dataset.y[0]
> tensor([0.0335, 0.0315], dtype=torch.float)
I've fiddled with the learning rate and the number of hidden layers, and nothing seems to keep the loss from going to infinity.
| Properly scaling the inputs is crucial for proper training.
Weights are initialized based on some assumptions on the way inputs are scaled.
See this part of a lecture on weight initialization and see how critical it is for proper convergence.
More details on the mathematical analysis of the influence of weight initialization can be found in Sec. 2 of this paper:
Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification (ICCV 2015).
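A minimal sketch of such scaling for these greyscale frames (the exact statistics are a choice; this just brings the raw pixel values to roughly zero mean and unit variance):
frames = dataset.video_frames
dataset.video_frames = (frames - frames.mean()) / frames.std()
# or simply frames / 255.0 to map pixel values into [0, 1]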
| https://stackoverflow.com/questions/70559302/ |
Should we actively use the weight argument in loss functions | Most of the current machine learning libraries have loss functions that come with a weight argument, which allows us to tackle unbalanced datasets. However, should this feature be actively made use of? If not, are there certain guidelines as to when we should use it (e.g. if the dataset is skewed to a certain extent)? Will the model eventually learn to predict the rare cases anyway if it is complex (for lack of a better word; I understand complexity doesn't equate to performance) enough?
I had this question because I was training a model with an unbalanced dataset (but not to the extreme); however, I am adjusting the weights in the loss function somewhat arbitrarily according to the proportion of each class present in the dataset.
| You can use the weighted version of loss functions if you are certain the real world data your model will need to generalize for is similarly imbalanced. If not, you are introducing man-made bias into the system.
The choice to use weight cannot be based solely on model performance during training, validation or testing, but has to be made based on close scrutiny of the data set and how it was built.
A clear example where it may help is tumor detection in CT scans, where background and foreground often have a ratio of 20:1.
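For reference, a common, less arbitrary choice than hand-tuning is inverse-frequency weighting (a sketch; the class counts below are made up for illustration):
import torch
class_counts = torch.tensor([900., 80., 20.])  # per-class sample counts from your dataset
weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = torch.nn.CrossEntropyLoss(weight=weights)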
| https://stackoverflow.com/questions/70561412/ |
Some weights of Actor Critic model not updating | I am working on an Actor-Critic model in Pytorch. The model first receives the input in an RNN and then the policy net comes into play. The code for Policy net is:
class Policy(nn.Module):
"""
implements both actor and critic in one model
"""
def __init__(self):
super(Policy, self).__init__()
self.fc1 = nn.Linear(state_size+1, 128)
self.fc2 = nn.Linear(128, 64)
# actor's layer
self.action_head = nn.Linear(64, action_size)
self.mu = nn.Sigmoid()
self.var = nn.Softplus()
# critic's layer
self.value_head = nn.Linear(64, 1)
def forward(self, x):
"""
forward of both actor and critic
"""
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
# actor: choses action to take from state s_t
# by returning probability of each action
action_prob = self.action_head(x)
mu = self.mu(action_prob)
var = self.var(action_prob)
# critic: evaluates being in the state s_t
state_values = self.value_head(x)
return mu, var, state_values
policy = Policy()
In the model class, we call this policy after the RNN, and in the agent class's act method we call the model to get the action like this:
def act(self, some_input, state):
mu, var, state_value = self.model(some_input, state)
mu = mu.data.cpu().numpy()
sigma = torch.sqrt(var).data.cpu().numpy()
action = np.random.normal(mu, sigma)
action = np.clip(action, 0, 1)
action = torch.from_numpy(action/1000)
return action, state_value
I must mention that in the optimizer we pass model.parameters(). When we print all the trainable parameters in each epoch, we see that everything is changing except for policy.action_head. Any idea why this is happening? I must also mention how the losses are calculated:
advantage = reward - Value
Lp = -math.log(pdf_prob_now)*advantage
policy_losses.append(Lp)
#similar for value_losses
#after all the runs in the epoch is done
loss = torch.stack(policy_losses).sum() + alpha*torch.stack(value_losses).sum()
loss.backward()
Here Value is the state_value (the 2nd output from agent.act) and the pdf_prob_now is the probability of the action from all possible actions which is calculated like this:
def find_pdf(policy, action, rnn_output):
mu, var, _ = policy(rnn_output)
mu = mu.data.cpu().numpy()
sigma = torch.sqrt(var).data.cpu().numpy()
pdf_probability = stats.norm.pdf(action.cpu(), loc=mu, scale=sigma)
return pdf_probability
Is there some logical error here?
| The bug is in the act function:
def act(self, some_input, state):
# mu contains info required for gradient
mu, var, state_value = self.model(some_input, state)
# mu is detached and now has forgot all the operations performed
# in self.action_head
mu = mu.data.cpu().numpy()
sigma = torch.sqrt(var).data.cpu().numpy()
action = np.random.normal(mu, sigma)
action = np.clip(action, 0, 1)
action = torch.from_numpy(action/1000)
return action, state_value
For the subsequent computation: if the loss is calculated using tensor operations performed on action, it cannot be traced back to update the weights of self.action_head, because you detached the tensor mu, which removes it from the computation graph; that is why you see no updates in self.action_head.
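One standard way to keep the graph intact is to stay in torch end to end, e.g. via torch.distributions (a sketch of the general pattern, not the question's exact code):
dist = torch.distributions.Normal(mu, torch.sqrt(var))
action = dist.sample()                    # sampling itself is not differentiated
log_prob = dist.log_prob(action).sum(-1)  # differentiable w.r.t. mu and var
loss = -(log_prob * advantage)            # advantage is usually treated as a constant here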
| https://stackoverflow.com/questions/70562317/ |
How to use nn.TransformerEncoder from pytorch | I am trying to use PyTorch's nn.TransformerEncoder module for a classification task.
I have data points of varying lengths, i.e. I have sequences of different lengths. All sequences have one corresponding output (a target which is either 0 or 1).
This image outlines my dataset
This image shows how the sequences vary in length
However the entries of each sequence are of the same length.
I want to use this dataset to train an encoder part of the Transformer to be able to predict the corresponding outputs.
How can I go about doing this? And are there any examples I can check online?
| It depends on what your data actually looks like and what kind of output you expect. In general I would suggest that you use the Transformers library from HuggingFace; they have a lot of documentation and detailed code examples that you can work from, plus an active forum. Here is a link to their description of Encoder-Decoder Models. I hope that helps you a bit.
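If you want to stay with plain PyTorch, a minimal sketch of an encoder-only classifier could look like this (dimensions assumed from the question: 11 features per timestep and 2 classes; nhead=1 because d_model=11 is only divisible by 1 and 11; mean pooling over time is one simple choice):
import torch
import torch.nn as nn

class EncoderClassifier(nn.Module):
    def __init__(self, d_model=11, nhead=1, num_layers=2, num_classes=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.fc = nn.Linear(d_model, num_classes)

    def forward(self, x):  # x: (batch, seq, d_model), sequences padded to equal length
        h = self.encoder(x)
        return self.fc(h.mean(dim=1))  # pool over time, then classify
Variable-length sequences would additionally need padding plus a src_key_padding_mask so attention ignores the padded positions.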
| https://stackoverflow.com/questions/70566705/ |
How can I do calculations on tensors that have "requires_grad = true"? | I have this program you see below.
import torch
def dht_calculate_transformation_of_single_joint(para_dht_parameters):
var_a = para_dht_parameters[0]
var_d = para_dht_parameters[1]
var_alpha = para_dht_parameters[2]
var_theta = para_dht_parameters[3]
var_transformation = torch.tensor(data=[
[torch.cos(var_theta), -1 * torch.sin(var_theta) * torch.cos(var_alpha), torch.sin(var_theta) * torch.sin(var_alpha), var_a * torch.cos(var_theta)],
[torch.sin(var_theta), torch.cos(var_theta) * torch.cos(var_alpha), -1 * torch.cos(var_theta) * torch.sin(var_alpha), var_a * torch.sin(var_theta)],
[0, torch.sin(var_alpha), torch.cos(var_alpha), var_d],
[0, 0, 0, 1]
], dtype=torch.float32, requires_grad=True)
return var_transformation
def dht_calculate_positions_of_all_joints(para_all_transformations_of_joints):
var_all_positions_of_joints = torch.zeros(size=[27], dtype=torch.float32, requires_grad=True)
var_index_all_positions_of_joints = 0
var_transformation_to_joint = torch.zeros(size=[4, 4], dtype=torch.float32, requires_grad=True)
for var_index_of_transformation_of_joint, var_transformation_of_joint in enumerate(para_all_transformations_of_joints):
if var_index_of_transformation_of_joint == 0:
var_transformation_to_joint = var_transformation_of_joint
else:
var_transformation_to_joint = torch.matmul(var_transformation_to_joint, var_transformation_of_joint)
var_all_positions_of_joints[var_index_all_positions_of_joints + 0] = var_transformation_to_joint[0][3]
var_all_positions_of_joints[var_index_all_positions_of_joints + 1] = var_transformation_to_joint[1][3]
var_all_positions_of_joints[var_index_all_positions_of_joints + 2] = var_transformation_to_joint[2][3]
var_index_all_positions_of_joints += 3
return var_all_positions_of_joints
def dht_complete_calculation(para_input):
var_input_reshaped = para_input.view(-1, 9, 4)
var_output = torch.zeros(size=[para_input.shape[0], 27], dtype=torch.float32, requires_grad=True)  # tensor is x rows (data series) * 27 columns (joint positions)
for var_index_of_current_row, var_current_row in enumerate(var_input_reshaped):
var_all_transformations_of_joints = torch.zeros(size=[9, 4, 4], dtype=torch.float32, requires_grad=True)
for var_index_of_current_column, var_current_column in enumerate(var_current_row):
var_all_transformations_of_joints[var_index_of_current_column] = dht_calculate_transformation_of_single_joint(var_current_column)
var_output[var_index_of_current_row] = dht_calculate_positions_of_all_joints(var_all_transformations_of_joints)
return var_output
if __name__ == "__main__":
inp = torch.tensor(data=
[
[5.1016, 5.2750, 5.0043, 5.2184,
4.8471, 5.3377, 5.0113, 5.0789,
4.8800, 5.0455, 5.0394, 4.9092,
4.6609, 5.5003, 5.1327, 4.7121,
4.9442, 5.0918, 4.8083, 4.3548,
5.0163, 4.8840, 4.7491, 4.8089,
4.8919, 5.0975, 4.9931, 5.0999,
4.6400, 5.0069, 4.7420, 5.3347,
4.6725, 5.0338, 5.0310, 5.0470],
[4.9628, 5.0113, 5.0834, 4.7143,
5.0336, 5.1864, 5.4348, 5.0918,
5.1570, 4.8881, 4.5411, 4.6745,
4.6072, 4.9938, 4.9655, 5.2279,
5.5559, 5.1952, 5.2229, 5.0727,
5.1382, 4.7613, 4.6449, 4.3832,
5.1866, 5.6650, 4.9886, 4.8088,
4.9390, 5.3506, 5.1028, 4.4640,
5.1076, 5.0772, 4.8219, 5.1303]
]
, requires_grad=True)
t1 = dht_complete_calculation(inp)
print("Endergebins \n", t1, t1.shape)
I get the following message when I execute the main:
Traceback (most recent call last):
File "dht.py", line 77, in <module>
t1 = dht_complete_calculation(inp)
File "dht.py", line 46, in dht_complete_calculation
var_all_transformations_of_joints[var_index_of_current_column] = dht_calculate_transformation_of_single_joint(var_current_column)
RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.
The thing is that the "dht_complete_calculation" function will be used with a neural network (which isn't in the code fragment and isn't relevant to the question). The output of the neural network will be fed into the "dht_complete_calculation" function. That is why the output tensor and every tensor used in the calculation need to have "requires_grad = true".
The "dht_complete_calculation" function gets a tensor with x rows and 36 columns as input and should output a tensor with x rows and 27 columns. The calculations you see there are correct, because if I remove "requires_grad = true" from every tensor it works.
This is the desired output:
tensor([[ 2.4727e+00, -4.4623e+00, 5.2750e+00, 6.6468e+00, -4.1351e+00,
1.1145e+01, 1.3516e+01, -4.3618e+00, 1.2571e+01, 1.7557e+01,
-1.0147e+01, 1.4048e+01, 1.8344e+01, -1.2500e+01, 2.0697e+01,
2.4276e+01, -1.4575e+01, 2.3784e+01, 2.6110e+01, -2.0825e+01,
2.6521e+01, 2.6707e+01, -2.4291e+01, 3.2371e+01, 3.1856e+01,
-2.4376e+01, 3.6915e+01],
[ 9.4848e-03, -4.9628e+00, 5.0113e+00, 3.1514e+00, -6.8211e+00,
1.1249e+01, 9.8675e+00, -6.9772e+00, 1.3564e+01, 1.1752e+01,
-9.6508e+00, 1.9519e+01, 1.1553e+01, -8.3219e+00, 2.7006e+01,
1.4205e+01, -2.2681e+00, 2.9327e+01, 1.6872e+01, -2.0226e+00,
3.6526e+01, 1.2353e+01, -5.7472e-01, 4.2049e+01, 1.0814e+01,
3.8157e+00, 4.7547e+01]]) torch.Size([2, 27])
Process finished with exit code 0
However with "requires_grad = true" removed the network wouldn't learn anything, which is not what I want.
Can you help me to understand which part of the code triggers this error and how to fix it?
| The problem here is not that you are doing computations on a requires_grad=True tensor. After all this is how one gets gradients! By doing computations on such tensors :)
The issue is that you are doing what are called in-place operations.
By in-place we mean that an operation writes its result into the memory of an existing variable instead of allocating a new one. As a result the computational graph is broken, and thus no gradient backpropagation can be achieved.
What does this look like? I found a few quick examples in this Pytorch-forum question
In particular:
>>> x = torch.rand(1)
>>> y = torch.rand(1)
>>> x
tensor([0.2738])
>>> id(x)
140736259305336
>>> x = x + y # Normal operation
>>> id(x)
140726604827672 # New location
>>> x += y
>>> id(x)
140726604827672 # Existing location used (in-place)
So, you might then ask, where do you do that?
One such place is
var_all_positions_of_joints[var_index_all_positions_of_joints + 0] = var_transformation_to_joint[0][3]
var_all_positions_of_joints[var_index_all_positions_of_joints + 1] = var_transformation_to_joint[1][3]
var_all_positions_of_joints[var_index_all_positions_of_joints + 2] = var_transformation_to_joint[2][3]
Instead of doing that, you should collect all var_transformation_to_joint variables in a list and then use torch.stack or torch.cat, depending on your situation. Alternatively, if in the future you seek to re-arrange the locations of elements in a tensor, I recommend something like einops for a highly efficient and framework-independent solution.
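Applied to the question's code, the body of dht_calculate_positions_of_all_joints then becomes roughly the following (a sketch; names follow the question's code):
var_positions = []
for var_index, var_transformation_of_joint in enumerate(para_all_transformations_of_joints):
    if var_index == 0:
        var_transformation_to_joint = var_transformation_of_joint
    else:
        var_transformation_to_joint = torch.matmul(var_transformation_to_joint, var_transformation_of_joint)
    var_positions.append(var_transformation_to_joint[:3, 3])  # x, y, z of this joint
return torch.cat(var_positions)  # shape [27], built without in-place writes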
| https://stackoverflow.com/questions/70571868/ |
Lists of PyTorch Lightning sub-models don't get transferred to GPU | When using PyTorch Lightning on CPU, everything works fine. However when using GPUs, I get a RuntimeError: Expected all tensors to be on the same device.
It seems that the trouble comes from the model using a list of sub-models which don't get passed to the GPU:
class LambdaLayer(LightningModule):
def __init__(self, fun):
super(LambdaLayer, self).__init__()
self.fun = fun
def forward(self, x):
return self.fun(x)
class TorchModel(LightningModule):
def __init__(self):
super(TorchModel, self).__init__()
self.cat_layers = [TorchCatEmbedding(cat) for cat in columns_to_embed]
self.num_layers = [LambdaLayer(lambda x: x[:, idx:idx+1]) for _, idx in numeric_columns]
self.ffo = TorchFFO(len(self.num_layers) + sum([embed_dim(l) for l in self.cat_layers]), y.shape[1])
self.softmax = torch.nn.Softmax(dim=1)
model = TorchModel()
trainer = Trainer(gpus=-1)
Before running trainer(model):
>>> model.device
device(type='cpu')
>>> model.ffo.device
device(type='cpu')
>>> model.cat_layers[0].device
device(type='cpu')
After running trainer(model):
>>> model.device
device(type='cuda', index=0) # <---- correct
>>> model.ffo.device
device(type='cuda', index=0) # <---- correct
>>> model.cat_layers[0].device
device(type='cpu') # <---- still showing 'cpu'
Apparently, PyTorch Lightning is not able to transfer the lists of sub-models to the GPU. How to proceed so that the entire model, including list of sub-models (cat_layers and num_layers) is transferred to the GPU?
| Submodules contained in plain Python lists are not registered and therefore are not transferred to the device as-is.
You need to use ModuleList instead, i.e.:
...
from torch.nn import ModuleList
...
class TorchModel(LightningModule):
def __init__(self):
super(TorchModel, self).__init__()
self.cat_layers = ModuleList([TorchCatEmbedding(cat) for cat in columns_to_embed])
self.num_layers = ModuleList([LambdaLayer(lambda x: x[:, idx:idx+1]) for _, idx in numeric_columns])
self.ffo = TorchFFO(len(self.num_layers) + sum([embed_dim(l) for l in self.cat_layers]), y.shape[1])
self.softmax = torch.nn.Softmax(dim=1)
edit: I'm not sure what the Lightning equivalent is, or if one such exists, see also PyTorch Lightning - LightningModule for ModuleList / ModuleDict?
| https://stackoverflow.com/questions/70577039/ |
"ValueError: You have to specify either input_ids or inputs_embeds" when training AutoModelWithLMHead Model (GPT-2) | I want to fine-tune the AutoModelWithLMHead model from this repository, which is a German GPT-2 model. I have followed the tutorials for pre-processing and fine-tuning. I have prepocessed a bunch of text passages for the fine-tuning, but when beginning training, I receive the following error:
File "GPT\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "GPT\lib\site-packages\transformers\models\gpt2\modeling_gpt2.py", line 774, in forward
raise ValueError("You have to specify either input_ids or inputs_embeds")
ValueError: You have to specify either input_ids or inputs_embeds
Here is my code for reference:
import numpy
from datasets import load_metric
from transformers import AutoModelWithLMHead, AutoTokenizer, Trainer, TrainingArguments

# Load data
with open("Fine-Tuning Dataset/train.txt", "r", encoding="utf-8") as train_file:
train_data = train_file.read().split("--")
with open("Fine-Tuning Dataset/test.txt", "r", encoding="utf-8") as test_file:
test_data = test_file.read().split("--")
# Load pre-trained tokenizer and prepare input
tokenizer = AutoTokenizer.from_pretrained('dbmdz/german-gpt2')
tokenizer.pad_token = tokenizer.eos_token
train_input = tokenizer(train_data, padding="longest")
test_input = tokenizer(test_data, padding="longest")
# Define model
model = AutoModelWithLMHead.from_pretrained("dbmdz/german-gpt2")
training_args = TrainingArguments("test_trainer")
# Evaluation
metric = load_metric("accuracy")
def compute_metrics(eval_pred):
logits, labels = eval_pred
predictions = numpy.argmax(logits, axis=-1)
return metric.compute(predictions=predictions, references=labels)
# Train
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_input,
eval_dataset=test_input,
compute_metrics=compute_metrics,
)
trainer.train()
trainer.evaluate()
Does anyone know the reason for this? Any help is welcome!
| I didn't find the concrete answer to this question, but I found a workaround. For anyone looking for examples on how to fine-tune the GPT models from HuggingFace, you may have a look at this repo. They list a couple of examples of how to fine-tune different Transformer models, complemented by documented code examples. I used the run_clm.py script and it achieved what I wanted.
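For reference, invoking that script looks roughly like this (a sketch; the flag names follow the transformers example scripts of that era, so check the script's --help, and the output directory name is made up):
python run_clm.py \
    --model_name_or_path dbmdz/german-gpt2 \
    --train_file "Fine-Tuning Dataset/train.txt" \
    --validation_file "Fine-Tuning Dataset/test.txt" \
    --do_train --do_eval \
    --output_dir finetuned-german-gpt2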
| https://stackoverflow.com/questions/70577285/ |
Emulate fmin in torch 1.7.1 | In the current version there exists torch.fmin (see the torch.fmin documentation).
Unfortunately, my project relies on torch 1.7.1 and I can't upgrade. Is there another way to use min with NaNs in my tensor to emulate fmin? The NaNs are intended and are not the result of a poor implementation, so I would like to keep them.
| It's ugly but this works:
>>> a = torch.tensor([2.2, float('nan'), 2.1, float('nan')])
>>> b = torch.tensor([-9.3, 0.1, float('nan'), float('nan')])
>>> c = torch.stack((a,b))
>>> c[c.isnan()] = float('inf')
>>> min_, idx = torch.min(c, dim=0)
>>> min_[min_.isinf()] = float('nan')
>>> min_
tensor([-9.3000, 0.1000, 2.1000, nan])
| https://stackoverflow.com/questions/70589494/ |
Custom loss function in pytorch 1.10.1 | I am struggeling with defining a custom loss function for pytorch 1.10.1. My model outputs a float ranging from -1 to +1. The target values are floats of arbitrary range. The loss should be a sum of pruducts if the sign between the model output and target is different.
I have searched the internet for quite some hours, but it seems there have been some changes to PyTorch throughout the last versions, so I don't really know which example would best fit my use case and PyTorch 1.10.1.
Here is my approach so far:
class Loss(torch.nn.Module):
@staticmethod
def forward(self, output, target) -> Tensor:
loss = 0.0
for i in range(len(target)):
o = output[i,0]
t = target[i]
l = o * t
if l<0: #if different sign
loss -= l
return loss
Question:
Should I subclass torch.nn.Module or torch.autograd.Function?
Do I need to define @staticmethod?
On some examples, I saw ctx instead of self being used and invocations of ctx.save_for_backward etc. Do I need this? What is its purpose?
When subclassing torch.nn.Module, my code complains: 'Tensor' object has no attribute 'children'. What am I missing?
When subclassing torch.autograd.Function, my code complains about not having a backward function defined. How should my backward function look like?
| Custom loss functions can be as simple as a python function: as long as it is composed of differentiable torch operations, autograd handles the backward pass for you, so there is no need to subclass torch.nn.Module or torch.autograd.Function (and hence no @staticmethod or ctx bookkeeping). You can also simplify yours a bit:
def custom_loss(output, target):
prod = output[:,0]*target
return -prod[prod<0].sum()
| https://stackoverflow.com/questions/70590495/ |
Training LSTM over multiple datasets of different timestep number | I'm new to working with LSTMs and I'm stuggling to understand them even intuitively.
I'm using them for a regression problem: I have about 6000 datasets of ~450 timesteps each, and every timestep has 11 features. The target values are 2d ~ [a,b] and they are the same for a single dataset. After training I want to provide the timesteps and predict the 2d y value.
Example:
dataset (1 out of 6000) has ~450 different timesteps of type x = [1,2,3,4,5,6,7,8,9,10,11] and a target value y = [1,2]
What I'm currently struggling with is understanding what exactly the LSTM learns in terms of correlations between inputs, what data I feed exactly, and in what order when dealing with multiple datasets. I'm confused by the term batch_size and by what happens to seq_length if I have varying sequences... Do I pass the whole 450 timesteps as sequences?
What I do now is merging all the data in a csv file and passing them to the model. I couldn't run it because of memory problems so I reduced it to 5000 timesteps,
below is the LSTM class I use
class LSTM(nn.Module):
def __init__(self, num_classes, input_size, hidden_size, num_layers):
super(LSTM, self).__init__()
self.num_classes = num_classes
self.num_layers = num_layers
self.input_size = input_size
self.hidden_size = hidden_size
self.seq_length = seq_length
self.lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_size,
num_layers=num_layers, batch_first=True)
self.fc = nn.Linear(hidden_size, num_classes)
def forward(self, x):
h_0 = Variable(torch.zeros(
self.num_layers, x.size(0), self.hidden_size))
c_0 = Variable(torch.zeros(
self.num_layers, x.size(0), self.hidden_size))
# Propagate input through LSTM
ula, (h_out, _) = self.lstm(x, (h_0, c_0))
h_out = h_out.view(-1, self.hidden_size)
out = self.fc(h_out)
return out, h_out
I don't really need technical answers... I would really appreciate it if somebody could clarify what is going on in this scenario and how I should approach it. I've searched dozens of posts online, but it seems I just don't get it, or it is not exactly my case.
| I will try to explain this in a way that also explains the vocabulary.
LSTMs are often used for sequential data, for example a time series, where you have data points x_t for multiple time steps t=t0...tN. Here, N would be the sequence length (=seq_length?). Now that means for D-dimensional data, one "dataset" or more precisely, one sequence has the shape N x D.
Let us for now assume that N is equal for all sequences. That means, if you have B sequences, you can stack them into a B x N x D tensor - this would correspond to the actual dataset, which is basically all data that you use. Here, B is your batch axis, basically just meaning the axis where you stack independent sequences. If you choose to train on all data at the same time, you could just pass your complete B x N x D dataset to the model. Then, your batch size would be B. (a Note below)
Now if the sequence length is not equal, there are multiple things you could do. First, you should ask yourself if you want to train on the full sequences. Is it necessary to read the full N steps to get an estimate of the result, or could it be enough to only look at n < N steps? If that is the case, you can sample b (your new batch size, which you can define how you like) sequences of length n, where n < N for all sequences.
If parts of the sequence are not sufficient to estimate the result, it gets more complicated. Then I would suggest to feed the full sequences individually and just train on single sequences. This basically means the batchsize b=1, since you can not stack the sequences as their length differs pairwise. Here, you will feed your model with a b x n x D tensor.
I'm not sure if any of this is "standard procedure", but this is how I would address this.
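A common PyTorch middle ground, not covered above, is to pad the sequences to a common length and wrap them in a PackedSequence so the LSTM skips the padding (a sketch; list_of_sequences is assumed to hold your per-dataset N_i x D tensors):
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence
lengths = [seq.shape[0] for seq in list_of_sequences]
padded = pad_sequence(list_of_sequences, batch_first=True)  # (B, N_max, D)
packed = pack_padded_sequence(padded, lengths, batch_first=True, enforce_sorted=False)
out, (h, c) = lstm(packed)  # h[-1] is the last hidden state of each sequence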
NOTE: Training on the full dataset is usually not a good practice. Typically, you want to sample b < B random batches from your dataset, which randomizes your training.
| https://stackoverflow.com/questions/70590541/ |
Is it possible to install different graphic cards and use multi-GPU in pytorch? | I have a question.
Is it possible to install different graphic cards and use multi-GPU in pytorch?
Is there any other problem?
Ex>
Is the data parallel function of pytorch available in a combination of 3070 (1ea) + 3080 (1ea)?
Thank you in advance for your response.
| I believe it is possible on the technical level, but it would be sub-optimal. The code that divides the load between the different GPUs assumes they are the same. Having different GPUs basically means your benefit is limited by the weakest of the GPUs you have.
For example, if you have a card with 1GB mem and another with 10GB you will only be able to work with batches suited for the 1GB card and have 9GB un-utilized on the second.
| https://stackoverflow.com/questions/70591460/ |
mat1 and mat2 shapes cannot be multiplied for GRU | I am creating a GRU to do some classification for a project, and I'm relatively new to PyTorch and implementing GRUs. I know similar questions like this one have been answered already, but I can't seem to bring the same solution over to my own problem. I understand that there is an issue with the shape/order of my fc arrays, but after trying to change things I can no longer see the wood for the trees. I would appreciate it if someone could point me in the right direction.
Below I have attached my code and the error. The datasets I'm using contain 24 features with a label in the 25th column.
# Imports
import pandas as pd
import numpy as np
import torch
import torchvision # torch package for vision related things
import torch.nn.functional as F # Parameterless functions, like (some) activation functions
import torchvision.datasets as datasets # Standard datasets
import torchvision.transforms as transforms # Transformations we can perform on our dataset for augmentation
from torch import optim # For optimizers like SGD, Adam, etc.
from torch import nn # All neural network modules
from torch.utils.data import Dataset, DataLoader # Gives easier dataset managment by creating mini batches etc.
from tqdm import tqdm # For a nice progress bar
from sklearn.preprocessing import StandardScaler
# Set device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Hyperparameters
input_size = 24
hidden_size = 128
num_layers = 1
num_classes = 2
sequence_length = 1
learning_rate = 0.005
batch_size = 8
num_epochs = 3
# Recurrent neural network with GRU (many-to-one)
class RNN_GRU(nn.Module):
def __init__(self, input_size, hidden_size, num_layers, num_classes):
super(RNN_GRU, self).__init__()
self.hidden_size = hidden_size
self.num_layers = num_layers
self.gru = nn.GRU(input_size, hidden_size, num_layers, batch_first=True)
self.fc = nn.Linear(hidden_size * sequence_length, num_classes)
def forward(self, x):
# Set initial hidden and cell states
x = x.unsqueeze(0)
h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device)
# Forward propagate LSTM
out, _ = self.gru(x, h0)
out = out.reshape(out.shape[0], -1)
# Decode the hidden state of the last time step
out = self.fc(out)
return out
class MyDataset(Dataset):
def __init__(self,file_name):
stats_df=pd.read_csv(file_name)
x=stats_df.iloc[:,0:24].values
y=stats_df.iloc[:,24].values
self.x_train=torch.tensor(x,dtype=torch.float32)
self.y_train=torch.tensor(y,dtype=torch.float32)
def __len__(self):
return len(self.y_train)
def __getitem__(self,idx):
return self.x_train[idx],self.y_train[idx]
nomDs=MyDataset("nomStats.csv")
atkDs=MyDataset("atkStats.csv")
train_loader=DataLoader(dataset=nomDs,batch_size=batch_size)
test_loader=DataLoader(dataset=atkDs,batch_size=batch_size)
# Initialize network (try out just using simple RNN, or GRU, and then compare with LSTM)
model = RNN_GRU(input_size, hidden_size, num_layers, num_classes).to(device)
# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
# Train Network
for epoch in range(num_epochs):
for batch_idx, (data, targets) in enumerate(tqdm(train_loader)):
# Get data to cuda if possible
data = data.to(device=device).squeeze(1)
targets = targets.to(device=device)
# forward
scores = model(data)
loss = criterion(scores, targets)
# backward
optimizer.zero_grad()
loss.backward()
# gradient descent update step/adam step
optimizer.step()
# Check accuracy on training & test to see how good our model
def check_accuracy(loader, model):
num_correct = 0
num_samples = 0
# Set model to eval
model.eval()
with torch.no_grad():
for x, y in loader:
x = x.to(device=device).squeeze(1)
y = y.to(device=device)
scores = model(x)
_, predictions = scores.max(1)
num_correct += (predictions == y).sum()
num_samples += predictions.size(0)
# Toggle model back to train
model.train()
return num_correct / num_samples
print(f"Accuracy on training set: {check_accuracy(train_loader, model)*100:2f}")
print(f"Accuracy on test set: {check_accuracy(test_loader, model)*100:.2f}")
Traceback (most recent call last):
File "TESTGRU.py", line 87, in <module>
scores = model(data)
File "C:\Users\steph\anaconda3\envs\FYP\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "TESTGRU.py", line 47, in forward
out = self.fc(out)
File "C:\Users\steph\anaconda3\envs\FYP\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\steph\anaconda3\envs\FYP\lib\site-packages\torch\nn\modules\linear.py", line 94, in forward
return F.linear(input, self.weight, self.bias)
File "C:\Users\steph\anaconda3\envs\FYP\lib\site-packages\torch\nn\functional.py", line 1753, in linear
return torch._C._nn.linear(input, weight, bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x1024 and 128x2)
| It seems like these lines
# Forward propagate LSTM
out, _ = self.gru(x, h0)
out = out.reshape(out.shape[0], -1)
are the problem.
It appears that you only want to feed the hidden state of the last time step.
This could be read from the output in two ways:
If you want the output of all layers at the last time step, you should use the second return value of out, _ = self.gru(x, h0) not the first.
If you want to use just the last layer's output at the last time step (which seems to be the case), you should use
out[:, -1, :]. With this change, you may not need the
reshape operation.
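A minimal sketch of the corrected forward, assuming batch_first=True as in the question:
def forward(self, x):
    x = x.unsqueeze(0)
    h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device)
    out, _ = self.gru(x, h0)
    out = out[:, -1, :]  # last layer's output at the last time step
    return self.fc(out)
With sequence_length = 1, the existing nn.Linear(hidden_size * sequence_length, num_classes) head already matches this shape.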
| https://stackoverflow.com/questions/70597040/ |
Pytorch GPU memory keeps increasing with every batch | I'm training a CNN model on images. Initially, I was training on image patches of size (256, 256) and everything was fine. Then I changed my dataloader to load full HD images (1080, 1920) and I was cropping the images after some processing. In this case, the GPU memory keeps increasing with every batch. Why is this happening?
PS: While tracking losses, I'm doing loss.detach().item() so that loss is not retained in the graph.
| As suggested here, deleting the input, output and loss data helped.
Additionally, I had the data as a dictionary. Just deleting the dictionary isn't sufficient. I had to iterate over the dict elements and delete all of them.
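A sketch of that cleanup at the end of each iteration, assuming the data lives in a dict named batch (the names here are illustrative, not from my original code):
for key in list(batch.keys()):
    del batch[key]
del batch, output, loss
torch.cuda.empty_cache()  # optional: releases unused cached memory back to the driver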
| https://stackoverflow.com/questions/70602796/ |
Does torch.nn.MultiheadAttention contain normalisation layer and feed forward layer? | Tried to find the source code of multihead attention but could not find any implementation details. I wonder if this module only contains the attention part rather than the whole transformer block (i.e. It does not contain the normalisation layer, residual connection and an additional feedforward neural network)?
| According to the source code, the answer is no. MultiheadAttention unsurprisingly implements only the attention function.
| https://stackoverflow.com/questions/70606412/ |
Solving "CUDA out of memory" when fine-tuning GPT-2 (HuggingFace) | I get the reoccuring CUDA out of memory error when using the HuggingFace Transformers library to fine-tune a GPT-2 model and can't seem to solve it, despite my 6 GB GPU capacity, which I thought should be enough for fine-tuning on texts. The error reads as follows:
File "GPT\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "GPT\lib\site-packages\transformers\modeling_utils.py", line 1763, in forward
x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight)
RuntimeError: CUDA out of memory. Tried to allocate 144.00 MiB (GPU 0; 6.00 GiB total capacity; 4.28 GiB already allocated; 24.50 MiB free; 4.33 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I already set batch size to as low as 2 and reduced training examples without success. I also tried to migrate the code to Colab, where the 12GB RAM were quickly consumed.
My examples are rather long, some counting 2.400 characters, but they should be truncated by the model automatically. My (German) examples look like this:
Er geht in fremde Wohnungen, balgt sich mit Freund und Feind, ist
zudringlich zu unsern Sämereien und Kirschen. Wenn die Gesellschaft nicht groß
ist, lasse ich sie gelten und streue ihnen sogar Getreide. Sollten sie hier
aber doch zu viel werden, so hilft die Windbüchse, und sie werden in den
Meierhof hinabgescheucht. Als einen bösen Feind zeigte sich der Rotschwanz. Er
flog zu dem Bienenhause und schnappte die Tierchen weg. Da half nichts, als ihn
ohne Gnade mit der Windbüchse zu töten.
Ich wollte
Ihnen mein Wort halten, liebe Mama, aber die Versuchung war zu groß. Da bin ich
eines Abends in den Keller gegangen und hab' aus allen Fässern den Spund
herausgeklopft. Bis auf den letzten Tropfen ist das Gift ausgeronnen aus den
Fässern. Der Schade war groß, aber der Teufel war aus dem Haus. «
Andor lachte. »Mama, das Geschrei hätten Sie hören sollen! Als ob der
Weltuntergang gekommen wäre. Er bedauerte beinahe seine
Schroffheit. Nun, nachlaufen wird er ihnen nicht, die werden schon selber
kommen. Aber bewachen wird er seine Kolonie bei Tag und bei Nacht lassen
müssen. Hol' der Teufel diesen Mercy. Muß der gerade in Högyész ein Kastell
haben. Wenn einer von den Schwarzwäldern dahin kommt und ihn verklagt.
Is there a problem with the data formatting maybe?
If anyone has a hint on how to solve this, it would be very welcome.
EDIT: Thank you Timbus Calin for the answer, I described in the comment how adding the block_size flag to the config.json solved the problem. Here is the whole configuration for reference:
{
"model_name_or_path": "dbmdz/german-gpt2",
"train_file": "Fine-Tuning Dataset/train.txt",
"validation_file": "Fine-Tuning Dataset/test.txt",
"output_dir": "Models",
"overwrite_output_dir": true,
"per_device_eval_batch_size": 8,
"per_device_train_batch_size": 8,
"block_size": 100,
"task_type": "text-generation",
"do_train": true,
"do_eval": true
}
|
If the memory problems still persist, you could opt for
DistilGPT2, as it has a 33% reduction in the parameters of the
network (the forward pass is also twice as fast). Particularly for a small GPU memory like 6GB VRAM, it could
be a solution/alternative to your problem.
At the same time, it depends on how you preprocess the data. Indeed,
the model is capable of "receiving" a maximum length of N tokens
(could be for example 512/768) depending on the models you choose. I
recently trained a named entity recognition model whose maximum
length was 768 tokens. However, when I manually set the padded
sequence length in my PyTorch DataLoader() to a big number, I also
ran out of memory (even on a 3090 with 24GB VRAM). When I reduced
the sequence length to a much smaller one (512 instead of
768, for example), the training started to work and I did not get
any issues with the lack of memory.
TLDR: Reducing the number of tokens in the preprocessing phase, regardless of the max capacity of the network, can also help to solve your memory problem.
Note that reducing the number of tokens to process in a sequence is different from the dimension of a token.
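For example, a hedged sketch of capping the token count during preprocessing, assuming a Hugging Face tokenizer (the 512 limit is an assumption; pick what fits your GPU):
inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
This is the same idea as the block_size setting mentioned in the edit above.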
| https://stackoverflow.com/questions/70606666/ |
What is the cleanest way of installing pytorch with CUDA enabled to the latest versions from CLI? | The way I have installed pytorch with CUDA (on Linux) is by:
Going to the pytorch website and manually filling in the GUI checklist, and copy pasting the resulting command conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
Going to the NVIDIA cudatoolkit install website, filling in the GUI, and copy pasting the following code:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/11.5.1/local_installers/cuda-repo-ubuntu2004-11-5-local_11.5.1-495.29.05-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu2004-11-5-local_11.5.1-495.29.05-1_amd64.deb
sudo apt-key add /var/cuda-repo-ubuntu2004-11-5-local/7fa2af80.pub
sudo apt-get update
sudo apt-get -y install cuda
By the way, if I don't install the toolkit from the NVIDIA website then pytorch tells me CUDA is unavailably, probably because the pytorch conda install command doesn't install the drivers.
Is there a way to do all of this in a cleaner way, without manually checking the latest version each time you reinstall, or filling in a GUI?
| TLDR:
You can always try to use
sudo apt install nvidia-cuda-toolkit (to check which version nvcc --version)
conda install pytorch torchvision torchaudio cudatoolkit -c pytorch -c nvidia
(you can add -c conda-forge for more robust channel resolution)
Warning: Without specifics like that, you might end up downloading a CPU-only build that doesn't support CUDA, so always check before downloading. This is usually not an issue when downloading from the pytorch and nvidia channels, though.
Step by step also explained here.
Still, consider the following
The approach you described usually avoids a lot of headaches on a single PC. Another approach is to use NVIDIA's dockers that are pretty much already set up (still have to set up CUDA drivers though), and just expose ports for jupyter notebook or run jobs directly there. This is nice if you don't have to do extra editing.
In general the actual NVIDIA cudatoolkit that you install can be of a higher version (to some extent) than the anaconda version of cudatoolkit, meaning that you don't have to be that precise when looking up versions (anything after 11.1, which supports 3090's). Even if you look at the documentation from NVIDIA, in the end the website selector builds up those same commands.
Consider Module
On another note, if you have access to clusters at a company or university, they usually provide module load XYZ so you can directly load CUDA support. If you have multiple computers or multiple CUDA versions to manage, you might check out this website for more info on modules. This is highly recommended if you don't want to keep reinstalling.
Running module avail then lists the CUDA modules available to load.
| https://stackoverflow.com/questions/70608188/ |
PyTorch can't pickle lambda | I have a model that uses a custom LambdaLayer as follows:
class LambdaLayer(LightningModule):
def __init__(self, fun):
super(LambdaLayer, self).__init__()
self.fun = fun
def forward(self, x):
return self.fun(x)
class TorchCatEmbedding(LightningModule):
def __init__(self, start, end):
super(TorchCatEmbedding, self).__init__()
self.lb = LambdaLayer(lambda x: x[:, start:end])
self.embedding = torch.nn.Embedding(50, 5)
def forward(self, inputs):
o = self.lb(inputs).to(torch.int32)
o = self.embedding(o)
return o.squeeze()
The model runs perfectly fine on CPU or 1 GPU. However, when running it with PyTorch Lightning over 2+ GPUs, this error happens:
AttributeError: Can't pickle local object 'TorchCatEmbedding.__init__.<locals>.<lambda>'
The purpose of using a lambda function here is that given an inputs tensor, I want to pass only inputs[:, start:end] to the embedding layer.
My questions:
is there an alternative to using a lambda in this case?
if not, what should be done to get the lambda function to work in this context?
| So the problem isn't the lambda function per se, it's that pickle doesn't work with functions that aren't just module-level functions (the way pickle treats functions is just as references to some module-level name). So, unfortunately, if you need to capture the start and end arguments, you won't be able to use a closure, you'd normally just want something like:
def function_maker(start, end):
def function(x):
return x[:, start:end]
return function
But this will get you right back to where you started, as far as the pickling problem is concerned.
So, try something like:
class Slicer:
def __init__(self, start, end):
self.start = start
self.end = end
def __call__(self, x):
return x[:, self.start:self.end]
Then you can use:
LambdaLayer(Slicer(start, end))
I'm not familiar with PyTorch, I'm surprised though that it doesn't offer the ability to use a different serialization backend. The pathos/dill project can pickle arbitrary functions, for example, and is often easier to just use that. But I believe the above should solve the problem.
| https://stackoverflow.com/questions/70608810/ |
RuntimeError: Found dtype Double but expected Float - PyTorch | I am new to PyTorch and I am working on a DQN for a timeseries using reinforcement learning. I needed a complex observation combining a timeseries with some sensor readings, so I merged two neural networks, and I am not sure whether that is what is breaking my loss.backward or something else.
I know there is multiple questions with the same title but none worked for me, maybe I am missing something.
First of all, this is my network:
class DQN(nn.Module):
def __init__(self, list_shape, score_shape, n_actions):
super(DQN, self).__init__()
self.FeatureList = nn.Sequential(
nn.Conv1d(list_shape[1], 32, kernel_size=8, stride=4),
nn.ReLU(),
nn.Conv1d(32, 64, kernel_size=4, stride=2),
nn.ReLU(),
nn.Conv1d(64, 64, kernel_size=3, stride=1),
nn.ReLU(),
nn.Flatten()
)
self.FeatureScore = nn.Sequential(
nn.Linear(score_shape[1], 512),
nn.ReLU(),
nn.Linear(512, 128)
)
t_list_test = torch.zeros(list_shape)
t_score_test = torch.zeros(score_shape)
merge_shape = self.FeatureList(t_list_test).shape[1] + self.FeatureScore(t_score_test).shape[1]
self.FinalNN = nn.Sequential(
nn.Linear(merge_shape, 512),
nn.ReLU(),
nn.Linear(512, 128),
nn.ReLU(),
nn.Linear(128, n_actions),
)
def forward(self, list, score):
listOut = self.FeatureList(list)
scoreOut = self.FeatureScore(score)
MergedTensor = torch.cat((listOut,scoreOut),1)
return self.FinalNN(MergedTensor)
I have a function called calc_loss, and at its end it return the MSE loss as below
print(state_action_values.dtype)
print(expected_state_action_values.dtype)
return nn.MSELoss()(state_action_values, expected_state_action_values)
and the print shows float32 and float64 respectively.
I get the error when I run the loss.backward() as below
LEARNING_RATE = 0.01
optimizer = optim.Adam(net.parameters(), lr=LEARNING_RATE)
for i in range(50):
optimizer.zero_grad()
loss_v = calc_loss(sample(obs, 500, 200, 64), net, tgt_net)
print(loss_v.dtype)
print(loss_v)
loss_v.backward()
optimizer.step()
and the print output is as below:
torch.float64
tensor(1887.4831, dtype=torch.float64, grad_fn=<MseLossBackward>)
Update 1:
I tried using a simpler model, yet the same issue, when I tried to cast the inputs to Float, I got an error:
RuntimeError: expected scalar type Double but found Float
What makes the model expects double ?
Update 2:
I tried to add the below line on top after the torch import but same issue of RuntimeError: Found dtype Double but expected Float
>>> torch.set_default_tensor_type(torch.FloatTensor)
But when I used the DoubleTensor I got:
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.DoubleTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
| The issue wasn't in the input to the network but in the inputs to the MSELoss criterion, so it worked fine after casting them to float as below
return nn.MSELoss()(state_action_values.float(), expected_state_action_values.float())
I decided to leave this answer for beginners like me who might be stuck and wouldn't expect to check the datatype of the inputs to the loss criterion
| https://stackoverflow.com/questions/70615514/ |
How can I get the sum of gradients immediately after loss.backward()? | I am new to Pytorch, and I am trying to do some importance sampling experiments:
During an evaluation epoch, I calculate the loss for each training sample, and obtain the sum of gradients for this training sample. Finally, I will sort the training samples based on gradients they introduced. For example, if sample A shows a very high gradient sum, it must be an important sample to training. Otherwise, it is not a very important sample.
Note that, the gradients calculated here will not be used to update parameters. In other words, they are only used for selecting importance samples.
I know gradients will be ready somewhere after loss.backward(). But what is the easiest way to grab the summed gradients over the entire model? In my current implementation, I am only allowed to modify one small module with only loss availble, so I don’t have “inputs” or “model”. Is it possible to get the gradients from only “loss”?
| Gradients after backward are stored as the grad attribute of tensors that require grad. You can find all tensors involved and sum up their grads. A cleaner way might be to write a backward hook to accumulate gradients to some global variable while backpropagating.
An example is
import torch
import torch.nn as nn
model = nn.Linear(5, 3)
print(model.weight.grad) # None, since the grads have not been computed yet
print(model.bias.grad)
x = torch.randn(5, 5)
y = model(x)
loss = y.sum()
loss.backward()
print(model.weight.grad)
print(model.bias.grad)
output:
None
None
tensor([[-0.6164, 1.1585, -3.4117, -4.3192, -3.7273],
[-0.6164, 1.1585, -3.4117, -4.3192, -3.7273],
[-0.6164, 1.1585, -3.4117, -4.3192, -3.7273]])
tensor([5., 5., 5.])
As you see, you can access the gradients as param.grad. If model is a torch.nn.Module object, you can iterate over its parameters with for param in model.parameters().
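For instance, a minimal sketch of the total (summed absolute) gradient over the whole model, assuming loss.backward() has already run:
total_grad = sum(p.grad.abs().sum() for p in model.parameters() if p.grad is not None)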
Maybe you can also work with backward hooks but I am not that familiar with them to give a code example.
| https://stackoverflow.com/questions/70617211/ |
Running test calculations in DDP mode with multiple GPUs with PyTorchLightning | I have a model which I try to use with trainer in DDP mode.
from typing import Dict, Union

import pytorch_lightning as pl
import torch
import torchvision
from torch import Tensor
from torch.nn import CrossEntropyLoss
from torchmetrics import Accuracy
class Model(pl.LightningModule):
def __init__(
self,
model_name: str,
num_classes: int,
model_hparams: Dict["str", Union[str, int]],
optimizer_name: str,
optimizer_hparams: Dict["str", Union[str, int]],
):
super().__init__()
self.save_hyperparameters()
self.model = torchvision.models.resnet18(num_classes=num_classes, **model_hparams)
self.loss_module = CrossEntropyLoss()
self.example_input_array = torch.zeros((1, 3, 512, 512), dtype=torch.float32)
# Trying to use in DDP mode
self.test_accuracy = Accuracy(num_classes=num_classes)
def forward(self, imgs) -> Tensor:
return self.model(imgs)
# <redacted training_*, val_*, etc. as they are not relevant>
def test_step(self, batch, batch_idx):
imgs, labels = batch
preds = self.model(imgs)
self.test_accuracy.update(preds, labels)
return labels, preds.argmax(dim=-1)
def test_epoch_end(self, outputs) -> None:
num_classes = self.hparams.num_classes
# Creates table of correct and incorrect predictions
results = torch.zeros((num_classes, num_classes))
for output in outputs:
for label, prediction in zip(*output):
results[int(label), int(prediction)] += 1
# Total accuracy. This and `compute` are identical in 1 GPU training
acc = results.diag().sum() / results.sum()
self.log("test_acc", self.test_accuracy.compute())
print(results) # This prints twice
and trainer
trainer = pl.Trainer(
gpus=torch.cuda.device_count(),
max_epochs=180,
callbacks=callbacks,
strategy="ddp",
auto_scale_batch_size="binsearch",
)
However, I get as prints from test
tensor([[0., 0., 0., 0., 0., 5.],
[0., 7., 0., 0., 0., 0.],
[0., 3., 0., 0., 0., 2.],
[0., 3., 0., 0., 0., 0.],
[0., 3., 0., 0., 0., 2.],
[0., 1., 0., 0., 0., 4.]])tensor([[0., 0., 0., 0., 0., 6.],
[0., 2., 0., 0., 0., 0.],
[0., 4., 0., 0., 0., 2.],
[0., 2., 0., 0., 0., 1.],
[0., 3., 0., 0., 0., 2.],
[0., 5., 0., 0., 0., 3.]])
Also
trainer.fit(model, datamodule=datamodule)
test_results = trainer.test(model, datamodule=datamodule)
print(test_results)
# [{'test_acc': 0.18333333730697632}]
# [{'test_acc': 0.18333333730697632}]
where I would only expect single tensor to be printed. How can I make my calculations over all test predictions rather than by GPU and return the table I create in test_epoch_end from those predictions? I interpreted the documentation as *_epoch_end being executed only on single GPU and am quite lost.
| I think you should use the following techniques:
test_epoch_end: In DDP mode, every GPU runs the same code in this method, so each GPU computes the metric only on its own share of the batches, not the whole dataset. You need to synchronize the metric and collect it on the rank==0 GPU to compute the
evaluation metric over the entire dataset.
torch.distributed.reduce: This method collects and reduces tensors across distributed GPU devices. (docs)
self.trainer.is_global_zero: This flag will be true for rank==0
For the best way to manually compute a metric over the test set, you should check the docs.
Using mentioned techniques, you can compute metric over entire dataset and use results tensor after .test. Here is snippet:
import os
import torch
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.datasets import CIFAR10
from pytorch_lightning import LightningModule, LightningDataModule, Trainer
os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
class CIFAR(LightningDataModule):
def __init__(self, img_size=32, batch_size=32):
super().__init__()
self.img_size = img_size if isinstance(img_size, tuple) else (img_size, img_size)
self.batch_size = batch_size
self.test_transforms = transforms.Compose([
transforms.Resize(self.img_size),
transforms.CenterCrop(self.img_size),
transforms.ToTensor(),
transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))
])
def prepare_data(self) -> None:
CIFAR10(root='data', train=True, download=True)
CIFAR10(root='data', train=False, download=True)
def setup(self, stage=None):
self.test_ds = CIFAR10(root='data', train=False, download=False, transform=self.test_transforms)
def test_dataloader(self):
return DataLoader(self.test_ds, num_workers=4, batch_size=self.batch_size, shuffle=False)
class BasicModule(LightningModule):
def __init__(self):
super().__init__()
self.model = models.resnet18(num_classes=10, pretrained=False)
def test_step(self, batch, batch_idx):
x, y = batch
y_hat = self.model(x)
return y, y_hat.argmax(dim=-1)
def test_epoch_end(self, outputs):
results = torch.zeros((10, 10)).to(self.device)
for output in outputs:
for label, prediction in zip(*output):
results[int(label), int(prediction)] += 1
torch.distributed.reduce(results, 0, torch.distributed.ReduceOp.SUM)
acc = results.diag().sum() / results.sum()
if self.trainer.is_global_zero:
self.log("test_metric", acc, rank_zero_only=True)
self.trainer.results = results
if __name__ == '__main__':
data = CIFAR(batch_size=512)
model = BasicModule()
trainer = Trainer(max_epochs=2, gpus='0,1', strategy="ddp", precision=16)
test_results = trainer.test(model, data)
if trainer.is_global_zero:
print(test_results)
print(trainer.results)
| https://stackoverflow.com/questions/70623377/ |
Bigger batch size improves training by too much | I am writing a classifier that takes a surname and predicts a language it belongs to. I found that small batch sizes (256 and less) perform poorly compared to big batch sizes (2048 and more). Could someone give me some insight on why this is happening and how to fix it? Thank you.
Training code:
def indices_to_packed(names, input_size):
names = [F.one_hot(item, input_size).float() for item in names]
names_packed = pack_sequence(names, enforce_sorted=False)
return names_packed
def infer(model, data, labels, lengths, device):
data_packed = indices_to_packed(data, model.rnn.input_size)
data_packed, labels, lengths = data_packed.to(device), labels.to(device), lengths.to(device)
preds = model(data_packed, lengths)
loss = loss_fn(preds, labels)
return loss, preds
results = {}
epochs = 100
for BATCH_SIZE in [4096, 2048, 256]:
train_loader = data.DataLoader(train_data, BATCH_SIZE, sampler=train_sampler, collate_fn=partial(my_collate, input_size=input_size, output_size=output_size))
val_loader = data.DataLoader(val_data, BATCH_SIZE, sampler=val_sampler, collate_fn=partial(my_collate, input_size=input_size, output_size=output_size))
model = LSTM(input_size, HIDDEN_SIZE, NUM_LAYERS, DROPOUT, output_size)
optimizer = torch.optim.Adam(model.parameters())
model.to(device)
train_losses = []
val_losses = []
cur_losses = {}
duration = 0
for epoch in range(epochs):
start = time.time()
train_loss = 0
model.train()
# Using PackedSequence
for names, langs, lengths in train_loader:
optimizer.zero_grad()
loss, _ = infer(model, names, langs, lengths, device)
loss.backward()
optimizer.step()
train_loss += loss
train_loss /= len(train_data)
train_losses.append(train_loss.cpu().detach().numpy())
model.eval()
val_loss = 0
with torch.no_grad():
for names, langs, lengths in val_loader:
loss, _ = infer(model, names, langs, lengths, device)
val_loss += loss
val_loss /= len(val_data)
val_losses.append(val_loss.cpu().detach().numpy())
cur_duration = time.time() - start
duration += cur_duration
log_line = (f"BATCH_SIZE: {BATCH_SIZE} epoch: {epoch} train loss: "
f"{train_loss:.5f} val loss: {val_loss:.5f}")
print(log_line)
cur_losses["train_losses"] = train_losses
cur_losses["val_losses"] = val_losses
results[BATCH_SIZE] = {"losses" : cur_losses, "duration" : duration, "model": model}
Model:
class LSTM(nn.Module):
def __init__(self, input_size, hidden_size, num_layers, dropout, output_size):
super().__init__()
self.rnn = nn.LSTM(input_size, hidden_size, num_layers, dropout=DROPOUT)
self.linear = nn.Linear(hidden_size, output_size)
self.softmax = nn.LogSoftmax(dim=1)
def forward(self, x, lengths):
lstm_out, _ = self.rnn(x)
# https://discuss.pytorch.org/t/get-each-sequences-last-item-from-packed-sequence/41118/7
sum_batch_sizes = torch.cat((
torch.zeros(2, dtype=torch.int64),
torch.cumsum(lstm_out.batch_sizes, 0)
))
sorted_lengths = lengths[lstm_out.sorted_indices]
last_seq_idxs = sum_batch_sizes[sorted_lengths] + torch.arange(lengths.size(0))
last_seq_items = lstm_out.data[last_seq_idxs]
lstm_last_out = last_seq_items[lstm_out.unsorted_indices]
linear_out = self.linear(lstm_last_out)
softmax_out = self.softmax(linear_out)
return softmax_out
Losses with different batch sizes:
| It looks like the issue is how the loss is calculated.
train_loss += loss line accumulates the loss. When batch size is higher, there will be fewer steps to do. The code normalizes this by dividing by the length of train data, train_loss /= len(train_data), but should probably take into account the batch size: train_loss /= (len(train_data) / BATCH_SIZE).
The same goes for the validation loss, but the effect is different, probably because of the smaller data size compared to the training data.
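Equivalently, you can divide by the number of batches directly, since the length of a DataLoader is its number of batches:
train_loss /= len(train_loader)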
| https://stackoverflow.com/questions/70625575/ |
Collecting features from network.foward() in TensorFlow | So basically I want to achieve the same goal as in this code but in TensorFlow
def get_function(network, loader):
''' Collect function (features) from the self.network.module.forward_features() routine '''
features = []
for batch_idx, (inputs, targets) in enumerate(loader):
inputs, targets = inputs.to('cpu'), targets.to('cpu')
features.append([f.cpu().data.numpy().astype(np.float16) for f in network.forward_features(inputs)])
return [np.concatenate(list(zip(*features))[i]) for i in range(len(features[0]))]
Are there any clean ways to do this with TensorFlow iterator? Here is the torch code that I want to replicate in TensorFlow. https://pastecode.io/s/b03cpoyv
| To answer your question, I just need to ensure that you understand your original torch code properly. So, here's your workflow
class LeNet(nn.Module):
    def forward(self, x):
        # a bunch of layers
        return output
    def forward_features(self, x):
        # same layers as in forward
        return [each_layer_output]
Next, you use the torch_get_function method to retrieve all layers' outputs from the forward_features function defined in your model. The torch_get_function gives a total of 4 outputs as a list, and you pick only the first feature and concatenate across the batches in the end.
def torch_get_function(network, loader):
features = []
for batch_idx, (inputs, targets) in enumerate(loader):
print('0', network.forward_features(inputs)[0].shape)
print('1', network.forward_features(inputs)[1].shape)
print('2', network.forward_features(inputs)[2].shape)
print('3', network.forward_features(inputs)[3].shape)
print()
features.append([f.cpu().data.numpy().astype(np.float16) for f in network.forward_features(inputs)])
return [np.concatenate(list(zip(*features))[i]) for i in range(len(features[0]))]
for epoch in epochs:
dataset = torchvision.datasets.MNIST...
dataset = torch.utils.data.Subset(dataset, list(range(0, 1000)))
functloader = torch.utils.data.DataLoader(...)
# for x , y in functloader:
# print('a ', x.shape, y.shape)
# a torch.Size([100, 1, 28, 28]) torch.Size([100])
activs = torch_get_function(net, functloader)
print(activs[0].shape)
break
That's why, when I ran your code, I got:
# These are the 4 output that returned by forward_features(inputs)
0 torch.Size([100, 10, 12, 12])
1 torch.Size([100, 320])
2 torch.Size([100, 50])
3 torch.Size([100, 10])
...
# In the return statement of forward_features -
# You take only the first index feature and concatenate across batches.
(1000, 10, 12, 12)
So, the input size of your model is (batch_size, 1, 28, 28) and the final output is like (1000, 10, 12, 12).
Let's do the same in tensorflow, step by step.
import numpy as np
from tqdm import tqdm
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import (Conv2D, Dropout, MaxPooling2D,
Dense, Flatten)
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_test = x_test.astype("float32") / 255.0
x_test = np.reshape(x_test, (-1, 28, 28, 1))
dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
dataset = dataset.shuffle(buffer_size=1024).batch(100)
# it's like torch.utils.data.Subset
dataset = dataset.take(1000)
dataset
<TakeDataset shapes: ((None, 28, 28, 1), (None,)), types: (tf.float32, tf.uint8)>
Let's now build the model. To make it familiar to you, I'm writing it with the sub-classing API.
class LeNet(keras.Model):
def __init__(self, num_classes, input_size=28):
super(LeNet, self).__init__()
self.conv1 = Conv2D(10, (5, 5))
self.conv2 = Conv2D(20, (5, 5))
self.conv2_drop = Dropout(rate=0.5)
self.fc1 = Dense(50)
self.fc2 = Dense(num_classes)
def call(self, inputs, training=None):
x1 = tf.nn.relu(MaxPooling2D(2)(self.conv1(inputs)))
x2 = tf.nn.relu(MaxPooling2D(2)(self.conv2_drop(self.conv2(x1))))
x2 = Flatten()(x2)
x3 = tf.nn.relu(self.fc1(x2))
x4 = tf.nn.softmax(self.fc2(x3), axis=1)
# in tf/keras, when we call model.fit / model.evaluate
# to train the model, only x4 will be returned
if training:
return x4
else: # but when model(input)/model.predict(), we can return many :)
return [x1, x2, x3, x4]
lenet = LeNet(10)
lenet.build(input_shape=(None, 28, 28, 1))
Get the desired features
features = []
for input, target in tqdm(dataset):
# lenet(...) will give 4 outputs as we modeled
# but as we're interested in the first index feature...
features.append(lenet(input, training=False)[0])
print(len(features))
features = np.concatenate(features, axis=0)
features.shape
(10000, 12, 12, 10)
In tensorflow, the channel axis defaults to last, as opposed to torch. In torch, you received (1000, 10, 12, 12) and in tensorflow it gives you (10000, 12, 12, 10), but you can of course change that. Here is the working colab.
| https://stackoverflow.com/questions/70633113/ |
Wrong outputs from torch.sub? | I’m currently using torch.sub alongside torch.div to obtain the MAPE between my predicted and true labels for my neural network although I’m not getting the answers I’m expecting. According to the example in the documentation, I should be getting a 4x1 tensor, not 4x4.
Could anyone clear this up for me?
print('y_true ', y_true)
y_true tensor([[ 46],
[262],
[ 33],
[ 35]], device='cuda:0', dtype=torch.int16)
print('y_pred ', y_pred)
y_pred tensor([[[308.5075]],
[[375.8983]],
[[389.4587]],
[[406.4957]]], device='cuda:0', grad_fn=)
print('torch.sub ', torch.sub(y_true, y_pred))
torch.sub tensor([[[-262.5075],
[ -46.5075],
[-275.5075],
[-273.5075]],
[[-329.8983],
[-113.8983],
[-342.8983],
[-340.8983]],
[[-343.4587],
[-127.4587],
[-356.4587],
[-354.4587]],
[[-360.4957],
[-144.4957],
[-373.4957],
[-371.4957]]], device='cuda:0', grad_fn=<SubBackward0>)
| That is because y_pred has an extra dimension which means the y_true tensor
probably gets broadcasted to the correct dimension.
If you remove the extra last dimension you get the desired result:
>>> torch.sub(y_true, y_pred[...,0]).shape
torch.Size([4, 1])
| https://stackoverflow.com/questions/70633857/ |
ValueError in trainer.fit() | I have encountered ValueError: No positive samples in targets, true positive value should be meaningless when I tried to run trainer.fit(model, dataset) for my model, but I've double checked dataset and all of the train/val/test sets had proper number of positive samples. Why would this occur and where should I start to fix this issue? Thanks!
| Before the training loop actually starts, the PL trainer runs a sanity check of the validation loop for two steps. In that case, those two batches may contain only one type of label (negative or positive) and crash your metrics.
Turn it off by setting num_sanity_val_steps=0 in your trainer.
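For example:
trainer = pl.Trainer(num_sanity_val_steps=0)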
https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html#num-sanity-val-steps
| https://stackoverflow.com/questions/70635743/ |
PyTorch equivalent for Keras sequential model | How to get the perfect copy of this Keras sequential network in PyTorch?
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
| This is a snippet that works for this case:
model_torch = nn.Sequential(
nn.Flatten(),
nn.Linear(28*28, 128),
nn.ReLU(),
nn.Linear(128, 10),
)
| https://stackoverflow.com/questions/70636297/ |
Looking for a torch.imshow() 'like' command | Say that I have a variable image (currently located on the GPU), sized [32, 1, 256, 256], where 32 is the batch size and 1 is the number of channels (grayscale).
Instead of plotting this:
plt.imshow(img[0,0,:,:].cpu().detach(),'gray');plt.show()
I wish I could do
torch.imshow(img, 8, 'gray') and it would subplot 8 images from my batch.
Is there anything like that?
| You are looking for torchvision.utils.make_grid: It will convert the [32, 1, 256,256] tensor into a grid of 32 images. You still need to use plt to actually plot the image grid to screen.
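A minimal sketch (taking 8 images and a 4-per-row layout are assumptions; adjust as needed):
from torchvision.utils import make_grid
import matplotlib.pyplot as plt

grid = make_grid(img[:8].cpu().detach(), nrow=4)  # 3xHxW grid of the first 8 images
plt.imshow(grid.permute(1, 2, 0), cmap='gray')  # channels-last for matplotlib
plt.show()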
| https://stackoverflow.com/questions/70642941/ |
wandb.plot.line does not work and it just shows a table | I used this example that was provided by WandB. However, the web interface just shows a table instead of a figure.
data = [[i, random.random() + math.sin(i / 10)] for i in range(100)]
table = wandb.Table(data=data, columns=["step", "height"])
wandb.log({'line-plot1': wandb.plot.line(table, "step", "height")})
This is a screenshot from WandB's web interface:
Also, I have the same problem with other kinds of figures and charts that use a table.
|
X-Post from the wandb forum
https://community.wandb.ai/t/wandb-plot-confusion-matrix-just-show-a-table/1744
If you click the section called “Custom Charts” above the Table, it’ll show the line plot that you’ve logged.
Logging the Table is also expected behaviour, because this allows users to interactively explore the logged data in a W&B Table after logging it.
| https://stackoverflow.com/questions/70644326/ |
How to use the Inception model for transfer learning in PyTorch? | I have created a PyTorch torchvision model for transfer learning, using the pre-built ResNet50 base model, like this:
# Create base model from torchvision.models
model = resnet50(pretrained=True)
num_features = model.fc.in_features
# Define the network head and attach it to the model
model_head = nn.Sequential(
nn.Linear(num_features, 512),
nn.ReLU(),
nn.Dropout(0.25),
nn.Linear(512, 256),
nn.ReLU(),
nn.Dropout(0.5),
nn.Linear(256, num_classes),
)
model.fc = model_head
Now I wanted to use the Inception v3 model instead as base, so I switched from resnet50() above to inception_v3(); the rest stayed as is. However, during training I get the following error:
TypeError: cross_entropy_loss(): argument 'input' (position 1) must be Tensor, not InceptionOutputs
So how can one use the Inception v3 model from torchvision.models as base model for transfer learning?
| From PyTorch documentation about Inceptionv3 architecture:
This network is unique because it has two output layers when training. The primary output is a linear layer at the end of the network. The second output is known as an auxiliary output and is contained in the AuxLogits part of the network.
Have a look at this tutorial: https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html#inception-v3 There you can find how to use transfer learning for several models, including ResNet and Inception.
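Concretely, that tutorial handles the two training-time outputs roughly like this (a sketch adapted from it; the 0.4 weight on the auxiliary loss comes from the tutorial, and model_head/criterion are the objects from the question):
model = inception_v3(pretrained=True, aux_logits=True)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, num_classes)
model.fc = model_head  # your custom head

# training step: in train mode the model returns both outputs
outputs, aux_outputs = model(imgs)  # note: inception_v3 expects 299x299 inputs
loss = criterion(outputs, labels) + 0.4 * criterion(aux_outputs, labels)
In eval mode only the primary output is returned, so validation/test code stays unchanged.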
| https://stackoverflow.com/questions/70645161/ |
Pytorch Index Error ('index out of range in self'): How to Solve? | I recently encountered a roadblock following a deep learning tutorial on youtube (entire code can be found here). I'm having a problem with part 4.4. The goal is to return a dictionary of article summaries for certain stocks (their tickers are in a list: monitered_tickers).
def summarize(articles):
summaries = []
for article in articles:
input_ids = tokenizer.encode(article, return_tensors='pt')
output = model.generate(input_ids, max_length=55, num_beams=5, early_stopping=True)
summary = tokenizer.decode(output[0], skip_special_tokens=True)
summaries.append(summary)
return summaries
summaries = {ticker:summarize(articles[ticker]) for ticker in monitered_tickers}
summaries
When I run the code above, I get the following error:
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_10688/3329134555.py in <module>
----> 1 summaries = {ticker:summarize(articles[ticker]) for ticker in monitered_tickers}
2 summaries
~\AppData\Local\Temp/ipykernel_10688/3329134555.py in <dictcomp>(.0)
----> 1 summaries = {ticker:summarize(articles[ticker]) for ticker in monitered_tickers}
2 summaries
~\AppData\Local\Temp/ipykernel_10688/3177436574.py in summarize(articles)
3 for article in articles:
4 input_ids = tokenizer.encode(article, return_tensors='pt')
----> 5 output = model.generate(input_ids, max_length=40, num_beams=5, early_stopping = True)
6 summary = tokenizer.decode(output[0], skip_special_tokens=True)
7 summaries.append(summary)
~\anaconda3\lib\site-packages\torch\autograd\grad_mode.py in decorate_context(*args, **kwargs)
26 def decorate_context(*args, **kwargs):
27 with self.__class__():
---> 28 return func(*args, **kwargs)
29 return cast(F, decorate_context)
30
~\anaconda3\lib\site-packages\transformers\generation_utils.py in generate(self, inputs, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, max_new_tokens, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, logits_processor, stopping_criteria, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, **model_kwargs)
1022 # if model is encoder decoder encoder_outputs are created
1023 # and added to `model_kwargs`
-> 1024 model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
1025 inputs_tensor, model_kwargs, model_input_name
1026 )
~\anaconda3\lib\site-packages\transformers\generation_utils.py in _prepare_encoder_decoder_kwargs_for_generation(self, inputs_tensor, model_kwargs, model_input_name)
484 encoder_args = ()
485
--> 486 model_kwargs["encoder_outputs"]: ModelOutput = encoder(*encoder_args, **encoder_kwargs)
487
488 return model_kwargs
~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
~\anaconda3\lib\site-packages\transformers\models\pegasus\modeling_pegasus.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict)
754 inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale
755
--> 756 embed_pos = self.embed_positions(input_shape)
757
758 hidden_states = inputs_embeds + embed_pos
~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
~\anaconda3\lib\site-packages\torch\autograd\grad_mode.py in decorate_context(*args, **kwargs)
26 def decorate_context(*args, **kwargs):
27 with self.__class__():
---> 28 return func(*args, **kwargs)
29 return cast(F, decorate_context)
30
~\anaconda3\lib\site-packages\transformers\models\pegasus\modeling_pegasus.py in forward(self, input_ids_shape, past_key_values_length)
138 past_key_values_length, past_key_values_length + seq_len, dtype=torch.long, device=self.weight.device
139 )
--> 140 return super().forward(positions)
141
142
~\anaconda3\lib\site-packages\torch\nn\modules\sparse.py in forward(self, input)
156
157 def forward(self, input: Tensor) -> Tensor:
--> 158 return F.embedding(
159 input, self.weight, self.padding_idx, self.max_norm,
160 self.norm_type, self.scale_grad_by_freq, self.sparse)
~\anaconda3\lib\site-packages\torch\nn\functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
2042 # remove once script supports set_grad_enabled
2043 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2044 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
2045
2046
IndexError: index out of range in self
Wondering how I can fix this. Bit of a newbie so any help on this would be greatly appreciated. Thank you.
| Your article length might exceed the model max input length. Use:
tokenizer.encode(article, return_tensors='pt', max_length=512, truncation=True)
| https://stackoverflow.com/questions/70647743/ |
Reproducing arithmetic with pytorch's quantized tensors with numpy operations | I would like to know what exact arithmetic operations I have to do to reproduce results of quantized operations in pytorch.
This is almost duplicate question to:
I want to use Numpy to simulate the inference process of a quantized MobileNet V2 network, but the outcome is different with pytorch realized one
But I would even simplify it with the example of adding two quantized tensors.
For example for addition of two quantized tensors in Resnet architecture I use nn.quantized.FloatFunctional().
self.skip_add = nn.quantized.FloatFunctional()
And during inference I can add two tensors via
out1 = self.skip_add.add(x1, x2)
where x1 and x2 are tensors of torch.Tensor type, quantized with fbgemm backend during post training quantization procedure.
I expected out2_int = x1.int_repr() + x2.int_repr() should be the same as out1.int_repr() (with probably need of clamping in the needed range).
However that is not the case.
Below I dump the example outputs.
So I wonder how can I get out1 with integer operations?
>print(x1)
...,
[-0.0596, -0.0496, -0.1390, ..., -0.0596, -0.0695, -0.0099],
[-0.0893, 0.0000, -0.0695, ..., 0.0596, -0.0893, -0.0298],
[-0.1092, 0.0099, 0.0000, ..., -0.0397, -0.0794, -0.0199]]]],
size=(1, 256, 14, 14), dtype=torch.quint8,
quantization_scheme=torch.per_tensor_affine, scale=0.009925744496285915,
zero_point=75)
print(x2)
...,
[ 0.1390, -0.1669, -0.0278, ..., -0.2225, -0.0556, -0.1112],
[ 0.0000, -0.1669, -0.0556, ..., 0.0556, 0.1112, -0.2781],
[ 0.1390, 0.1669, 0.0278, ..., 0.2225, 0.4171, 0.0834]]]],
size=(1, 256, 14, 14), dtype=torch.quint8,
quantization_scheme=torch.per_tensor_affine, scale=0.02780967578291893,
zero_point=61)
print(x1.int_repr())
...,
[69, 70, 61, ..., 69, 68, 74],
[66, 75, 68, ..., 81, 66, 72],
[64, 76, 75, ..., 71, 67, 73]]]], dtype=torch.uint8)
print(x2.int_repr())
...,
[66, 55, 60, ..., 53, 59, 57],
[61, 55, 59, ..., 63, 65, 51],
[66, 67, 62, ..., 69, 76, 64]]]], dtype=torch.uint8)
print(out1)
...,
[ 0.0904, -0.2109, -0.1808, ..., -0.2712, -0.1205, -0.1205],
[-0.0904, -0.1808, -0.1205, ..., 0.1205, 0.0301, -0.3013],
[ 0.0301, 0.1808, 0.0301, ..., 0.1808, 0.3314, 0.0603]]]],
size=(1, 256, 14, 14), dtype=torch.quint8,
quantization_scheme=torch.per_tensor_affine, scale=0.03012925386428833,
zero_point=56)
print(out1.int_repr())
...,
[59, 49, 50, ..., 47, 52, 52],
[53, 50, 52, ..., 60, 57, 46],
[57, 62, 57, ..., 62, 67, 58]]]], dtype=torch.uint8)
print(out2_int)
[135, 125, 121, ..., 122, 127, 131],
[127, 130, 127, ..., 144, 131, 123],
[130, 143, 137, ..., 140, 143, 137]]]], dtype=torch.uint8)
| The answer is twofold:
Integer operations are implemented taking into account that int8 numbers refer to a different domain. Convolution (or matrix-matrix multiplication in general) is implemented with respect to this fact, and my answer to the linked question (I want to use Numpy to simulate the inference process of a quantized MobileNet V2 network, but the outcome is different with pytorch realized one) worked for me.
Addition in pytorch is implemented in floats. You need to convert from int to float, perform the addition and then convert back to int.
def manual_addition(xq1_int, scale1, zp1, xq2_int, scale2, zp2,
scale_r, zp_r):
xdq = scale1 * (xq1_int.astype(np.float64) - zp1)
ydq = scale2 * (xq2_int.astype(np.float64) - zp2)
zdq = xdq + ydq
zq_manual_int = (((zdq / scale_r).round()) + zp_r).round()
return zq_manual_int #clipping might be needed
| https://stackoverflow.com/questions/70651229/ |
Tensor matrix multiplication returning vector einsum | I am confused about the following example of a matrix tensor multiplication that returns a vector. At first glance I thought that it would mean multiplying the first dimension of the tensor dydx by the matrix dLdy but I don't get the expected results as depicted below. So what is the meaning of this einsum ?
import torch
import numpy as np
dLdy = torch.randn(2,2)
dydx = torch.randn(2,2,2)
torch.einsum('jk,jki->i', dLdy, dydx)
tensor([0.3115, 3.7255])
dLdy
tensor([[-0.4845, 0.6838],
[-1.1723, 1.4914]])
dydx
tensor([[[ 1.5496, -1.2722],
[ 0.1221, 1.0495]],
[[-1.4882, 0.0307],
[-0.5134, 1.6276]]])
(dLdy * dydx[0]).sum()
-0.1985
| For A and B this is contraction (sum) over the first two dimensions jk, so
res(i) = sum_{j,k} A(j,k)B(j,k,i)
for example:
import torch
import numpy as np
dLdy = torch.randn(2,2)
dydx = torch.randn(2,2,2)
print(torch.einsum('jk,jki->i', dLdy, dydx))
print((dLdy * dydx[:,:,0]).sum())
print((dLdy * dydx[:,:,1]).sum())
produces
tensor([4.6025, 1.8987])
tensor(4.6025)
tensor(1.8987)
ie (dLdy * dydx[:,:,0]).sum() is the first element of the resulting vector, etc
| https://stackoverflow.com/questions/70657019/ |
Why are the parameters of this PyTorch AutoEncoder hardcoded this way? | Hi, I am trying to understand how the following PyTorch AutoEncoder code works. The code below uses the MNIST dataset, which is 28x28. My question is: how were the nn.Linear(128, 3) parameters chosen?
I have a dataset which is 512x512, and I would like to modify this AutoEncoder's code to support it.
class LitAutoEncoder(pl.LightningModule):
def __init__(self):
super().__init__()
self.encoder = nn.Sequential(nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 3))
self.decoder = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 28 * 28))
def forward(self, x):
# in lightning, forward defines the prediction/inference actions
embedding = self.encoder(x)
return embedding
def training_step(self, batch, batch_idx):
# training_step defined the train loop. It is independent of forward
x, y = batch
x = x.view(x.size(0), -1)
z = self.encoder(x)
x_hat = self.decoder(z)
loss = F.mse_loss(x_hat, x)
return loss
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
return optimizer
| I am assuming input image data are in this shape: x.shape == [bs, 1, h, w], where bs is batch size. Then, x is first viewed as [bs, h*w], i.e. [bs, 28*28]. This means all pixels in an image are flattened into a 1D vector.
Then in the encoder:
nn.Linear(28*28, 128) takes flattened input of size [bs, 28*28] and outputs intermediate result of size [bs, 128]
nn.Linear(128, 3): [bs, 128] -> [bs, 3]
Then in the decoder:
nn.Linear(3, 128): [bs, 3] -> [bs, 128]
nn.Linear(128, 28*28): [bs, 128] -> [bs, 28*28]
The final output is then matched against the input.
If you want to use the exact architecture for your 512x512 images, simply change every occurrence of 28*28 in the code to 512*512. However, this is a quite infeasible choice, for these reasons:
For MNIST images, nn.Linear(28*28, 128) contains 28x28x128+128=100480 parameters, while for your images nn.Linear(512*512, 128) contains 512x512x128+128=33554560 parameters. The size is too large, and it may lead to overfitting
The intermediate data [bs, 3] uses only 3 floats to encode a 512x512 image. I don't think you can recover anything with such compression
I'd suggest looking up convolutional architectures for your purpose; a rough sketch follows.
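For illustration only, a rough convolutional-encoder sketch for single-channel 512x512 inputs (all layer sizes are assumptions, not taken from the question):
encoder = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 512 -> 256
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 256 -> 128
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 128 -> 64
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(8),                                # 64 -> 8
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 128),                             # far fewer parameters than 512*512*128
)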
| https://stackoverflow.com/questions/70657510/ |
Saving Pytorch model error: codec can't decode bytes in position 2-3 | I have seen similar questions regarding the loading of a pytorch model, but not the saving of one and the solutions offered on those questions have not solved my problem.
Here is the code I have to save the model:
PATH = "c:\Users\my_name\Desktop\model"
torch.save(model, PATH)
But I am stuck getting the title error. I am saving the model without a checkpoint, before evaluating. Not sure what is going wrong here.
| If the path is just my c: drive, for whatever reason the error is fixed. Still unsure about why I can't select a deeper path.
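For reference, the likely cause: in a normal string literal, \U begins a unicode escape, so "c:\Users\..." fails to parse, and the \U sits exactly at position 2-3 of the string, matching the error message. A raw string or forward slashes avoids it (a minimal sketch):
PATH = r"c:\Users\my_name\Desktop\model"  # raw string leaves backslashes alone
# or: PATH = "c:/Users/my_name/Desktop/model"
torch.save(model, PATH)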
| https://stackoverflow.com/questions/70658583/ |
how to make small chunks of python dictionary | Hi, I have a Python dictionary like this:
import torch
import torch.nn as nn
dic = {
"A": [0.2822, -0.0958, -0.5165, -0.3812,
-0.3469, 0.4025, -0.0696, -0.1246,
-0.1132, 0.4170, -0.0383, -0.4071,
-0.5407, 0.1519, 0.5630, 0.1276],
"B": [1.0014, 0.9980, 1.0012, 0.9986,
1.0001, 0.9999, 1.0016, 1.0014,
1.0008, 0.9996, 1.0008, 1.0004,
1.0000, 0.9987, 0.9997, 0.9989]
}
For key "A" I have 16 values; I want to make 2 chunks of size 8. How can this be done?
That is, the first 8 values stored in a separate array and the last 8 values stored in a separate array or dictionary with their key values,
as shown in this image
| First of all, I'd refer to keys as what they are (i.e. Key 'A' has 16 values, not key 1). It's a good practice to just think of dictionaries as unordered and simply a group of key-value pairs.
Second, using numpy will allow us to split the key we want into two (or more) even groups. If you end up needing to split a list of 30 elements into three lists, this code will still work.
import numpy as np
dic = {
"A": [0.2822, -0.0958, -0.5165, -0.3812,
-0.3469, 0.4025, -0.0696, -0.1246,
-0.1132, 0.4170, -0.0383, -0.4071,
-0.5407, 0.1519, 0.5630, 0.1276],
"B": [1.0014, 0.9980, 1.0012, 0.9986,
1.0001, 0.9999, 1.0016, 1.0014,
1.0008, 0.9996, 1.0008, 1.0004,
1.0000, 0.9987, 0.9997, 0.9989]
}
# We give array_split() our list, and how many we want it split into.
a1, a2 = np.array_split(dic['A'], 2) # We get our two lists returned.
dic['A1'] = a1.tolist() # Numpy returns it as an np.array, so let's put it back into a list.
dic['A2'] = a2.tolist()
del(dic['A']) # Remove the now unused key-value.
{'B': [1.0014, 0.998, 1.0012, 0.9986, 1.0001, 0.9999, 1.0016, 1.0014, 1.0008, 0.9996, 1.0008, 1.0004, 1.0, 0.9987, 0.9997, 0.9989],
'A1': [0.2822, -0.0958, -0.5165, -0.3812, -0.3469, 0.4025, -0.0696, -0.1246],
'A2': [-0.1132, 0.417, -0.0383, -0.4071, -0.5407, 0.1519, 0.563, 0.1276]}
| https://stackoverflow.com/questions/70659353/ |
index a list of tensors | I have a list object named imgs of tensors (50 images).
I have an array of indices (indi) of length 29.
how do I index the list of tensors with the array of indices?
when I do the following I get:
imgs[indi]
TypeError: only integer scalar arrays can be converted to a scalar index
Thanks
| Assuming these are normal python lists then you can use a list comprehension
result = [imgs[i] for i in indi]
which will give a list of tensors.
If you further want to make this a single tensor containing the images you can use torch.stack
result = torch.stack([imgs[i] for i in indi], dim=0)
| https://stackoverflow.com/questions/70659461/ |
Unable to load custom pretrained weight in Pytorch Lightning | I want to retrain a custom model with my small dataset. I can load the pretrained weight (.pth) and run it in Pytorch. However, I need more functionalities and refactored the code to Pytorch lightning but I can't figure out how to load the pretrained weight into the Pytorch Lightning model.
Please see the details of my code below:
class BDRAR(nn.Module):
def __init__(self):
super(BDRAR, self).__init__()
resnext = ResNeXt101()
self.layer0 = resnext.layer0
self.layer1 = resnext.layer1
self.layer2 = resnext.layer2
self.layer3 = resnext.layer3
self.layer4 = resnext.layer4
Pytorch Lightning code:
class liteBDRAR(pl.LightningModule):
def __init__(self):
super(liteBDRAR, self).__init__()
self.model = BDRAR()
print('Model Created!')
def forward(self, x):
return self.model(x)
Pytorch Lightning run:
path = './ckpt/BDRAR/3000.pth'
bdrar = liteBDRAR.load_from_checkpoint(path, strict=False)
trainer = pl.Trainer(fast_dev_run=True, gpus=1)
trainer.fit(bdrar)
Error:
keys = model.load_state_dict(checkpoint["state_dict"], strict=strict)
**KeyError: 'state_dict'**
I will appreciate any help.
Thank you.
| It can be that your .pth file is already a state_dict. Try loading the pretrained weights in your Lightning class.
class liteBDRAR(pl.LightningModule):
def __init__(self):
super(liteBDRAR, self).__init__()
self.model = BDRAR()
print('Model Created!')
def load_model(self, path):
self.model.load_state_dict(torch.load(path, map_location='cuda:0'), strict=False)
path = './ckpt/BDRAR/3000.pth'
model = liteBDRAR()
model.load_model(path)
| https://stackoverflow.com/questions/70661251/ |
Understanding an example of PyTorch's Einsum function | I am studying some code and I came across a usage of PyTorch's einsum function that I am not understanding. The docs are here.
The snippet looks like (slightly modified from the original):
import torch
x = torch.rand(64, 64, 25, 25)
y = torch.rand(64, 64, 64, 25)
result = torch.einsum('ncuv,nctv->nctu', x, y)
print(result.shape)
>> torch.Size([64, 64, 64, 25])
So the notation is such that n=64, c=64, u=25, v=25, t=64.
I'm not too sure what's happening. I think that for each 25 dimensional vector in t (64 of them), each one is being multiplied with each of the u=25 vectors of size 25 elementwise and then the results summed, or rather 25 dot products of 25 dimensional vectors?
Any insights appreciated.
| Basically, you can think of it as taking dot products over certain dimensions, and reorganizing the rest.
For simplicity, let's ignore the batching dimensions n and c (since they are consistent before and after ncuv,nctv->nctu), and discuss:
import torch
x = torch.rand(25, 25)
y = torch.rand(64, 25)
result = torch.einsum('uv,tv->tu', x, y)
print(result.shape)
>> torch.Size([64, 25])
Note that v vanishes after einsum, meaning v is the dimension being summed up, while t and u are not. You can interpret it this way: x is a collection of 25 25-dimensional vectors; y is a collection of 64 25-dimensional vectors. The dot product of the t-th vector in y and the u-th vector in x is computed and put in the t-th row and u-th column of result.
You can also rewrite into a math equation:
result[n,c,t,u] = \sum_{v} x[n,c,u,v] * y[n,c,t,v], for each n, c, t, u
Note two things:
the summation is over the indices that vanish in the summation pattern ncuv,nctv->nctu
indices appearing on the right of the pattern are the indices of the resulting tensor
| https://stackoverflow.com/questions/70661298/ |
Can the increase in training loss lead to better accuracy? | I'm working on a competition on Kaggle. First, I trained a Longformer base with the competition dataset and achieved a quite good result on the leaderboard. Due to the CUDA memory limit and time limit, I could only train 2 epochs with a batch size of 1. The loss started at about 2.5 and gradually decreased to 0.6 at the end of my training.
I then continued training 2 more epochs using that saved weights. This time I used a little bit larger learning rate (the one on the Longformer paper) and added the validation data to the training data (meaning I no longer split the dataset 90/10). I did this to try to achieve a better result.
However, this time the loss started at about 0.4 and constantly increased to 1.6 at about half of the first epoch. I stopped because I didn't want to waste computational resources.
Should I have waited more? Could it eventually lead to a better test result? I think the model could have been slightly overfitting at first.
| Your model got fitted to the original training data the first time you trained it. When you added the validation data to the training set the second time around, the distribution of your training data must have changed significantly. Thus, the loss increased in your second training session since your model was unfamiliar with this new distribution.
Should you have waited more? Yes, the loss would have eventually decreased (although not necessarily to a value lower than the original training loss)
Could it have led to a better test result? Probably. It depends on if your validation data contains patterns that are:
Not present in your training data already
Similar to those that your model will encounter in deployment
| https://stackoverflow.com/questions/70663238/ |
ValueError: Unsupported ONNX opset version: 13 | Goal: successfully run Notebook as is on Jupyter Labs.
Section 2.1 throws a ValueError, I believe because of the version of PyTorch I'm using.
PyTorch 1.7.1
Kernel conda_pytorch_latest_p36
Very similar SO post; the solution was to use the latest PyTorch version... which I am using.
Code:
import onnxruntime
def export_onnx_model(args, model, tokenizer, onnx_model_path):
with torch.no_grad():
inputs = {'input_ids': torch.ones(1,128, dtype=torch.int64),
'attention_mask': torch.ones(1,128, dtype=torch.int64),
'token_type_ids': torch.ones(1,128, dtype=torch.int64)}
outputs = model(**inputs)
symbolic_names = {0: 'batch_size', 1: 'max_seq_len'}
torch.onnx.export(model, # model being run
(inputs['input_ids'], # model input (or a tuple for multiple inputs)
inputs['attention_mask'],
inputs['token_type_ids']), # model input (or a tuple for multiple inputs)
onnx_model_path, # where to save the model (can be a file or file-like object)
opset_version=13, # the ONNX version to export the model to
do_constant_folding=True,
input_names=['input_ids', # the model's input names
'input_mask',
'segment_ids'],
output_names=['output'], # the model's output names
dynamic_axes={'input_ids': symbolic_names, # variable length axes
'input_mask' : symbolic_names,
'segment_ids' : symbolic_names})
logger.info("ONNX Model exported to {0}".format(onnx_model_path))
export_onnx_model(configs, model, tokenizer, "bert.onnx")
Traceback:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-7-7aaa4c5455a0> in <module>
25 logger.info("ONNX Model exported to {0}".format(onnx_model_path))
26
---> 27 export_onnx_model(configs, model, tokenizer, "bert.onnx")
<ipython-input-7-7aaa4c5455a0> in export_onnx_model(args, model, tokenizer, onnx_model_path)
22 dynamic_axes={'input_ids': symbolic_names, # variable length axes
23 'input_mask' : symbolic_names,
---> 24 'segment_ids' : symbolic_names})
25 logger.info("ONNX Model exported to {0}".format(onnx_model_path))
26
~/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/onnx/__init__.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, custom_opsets, enable_onnx_checker, use_external_data_format)
228 do_constant_folding, example_outputs,
229 strip_doc_string, dynamic_axes, keep_initializers_as_inputs,
--> 230 custom_opsets, enable_onnx_checker, use_external_data_format)
231
232
~/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/onnx/utils.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, custom_opsets, enable_onnx_checker, use_external_data_format)
89 dynamic_axes=dynamic_axes, keep_initializers_as_inputs=keep_initializers_as_inputs,
90 custom_opsets=custom_opsets, enable_onnx_checker=enable_onnx_checker,
---> 91 use_external_data_format=use_external_data_format)
92
93
~/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/onnx/utils.py in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, example_outputs, opset_version, _retain_param_name, do_constant_folding, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, fixed_batch_size, custom_opsets, add_node_names, enable_onnx_checker, use_external_data_format, onnx_shape_inference, use_new_jit_passes)
614 # training=TrainingMode.TRAINING or training=TrainingMode.PRESERVE,
615 # (to preserve whatever the original training mode was.)
--> 616 _set_opset_version(opset_version)
617 _set_operator_export_type(operator_export_type)
618 with select_model_mode_for_export(model, training):
~/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/onnx/symbolic_helper.py in _set_opset_version(opset_version)
506 _export_onnx_opset_version = opset_version
507 return
--> 508 raise ValueError("Unsupported ONNX opset version: " + str(opset_version))
509
510 _operator_export_type = None
ValueError: Unsupported ONNX opset version: 13
Please let me know if there's anything else I can add to the post.
| ValueError: Unsupported ONNX opset version N -> install latest PyTorch.
Credit to Tianleiwu on this Git Issue.
As per 1st cell of Notebook:
# Install or upgrade PyTorch 1.8.0 and OnnxRuntime 1.7.0 for CPU-only.
I inserted a new cell right after:
pip install torch==1.10.0 # latest
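After upgrading, a quick sanity check (opset 13 support was added in PyTorch 1.8):
import torch
print(torch.__version__)  # should be >= 1.8.0 for ONNX opset 13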
| https://stackoverflow.com/questions/70664534/ |
Why are the gradients not equivalent when using loss.backward() v.s torch.auto.grad? | I ran into this weird behavior when trying to "manually" optimize a network's parameters via SGD. When attempting to update the model's parameters using the following way, it works just fine:
for _ in trange(epochs):
for x, y in train_loader:
x, y = x.to(device, non_blocking=True), y.to(device, non_blocking=True)
loss = F.cross_entropy(m(x), y)
grad = torch.autograd.grad(loss, m.parameters())
with torch.no_grad():
for p, g in zip(m.parameters(), grad):
p -= 0.1 * g
However, doing the following throws off the model completely:
for _ in trange(epochs):
for x, y in train_loader:
x, y = x.to(device, non_blocking=True), y.to(device, non_blocking=True)
loss = F.cross_entropy(m(x), y)
loss.backward()
with torch.no_grad():
for p in m.parameters():
p -= 0.1 * p.grad
But to me, both methods should be equivalent. And upon further inspection, when comparing the values of g from grad with the values of p.grad from m.paramters(), it turned out that the gradient values are not the same! I also tried removing with torch.no_grad(): and doing the following, but it didn't work either:
for p in m.parameters():
p.data -= 0.1 * p.grad
Can somebody please explain why is this happening? Shouldn't the gradients in both methods have the same values (keeping in mind that both models m are identical)?
REPRODUCIBLE EXAMPLE:
Ensure reproducibility:
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from copy import deepcopy
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from tqdm import trange
# note: `metric` used in the evaluation loops below is assumed to be an
# accuracy metric defined elsewhere (e.g. torchmetrics.Accuracy)
device = torch.device('cuda')
torch.manual_seed(0)
torch.cuda.manual_seed_all(0)
np.random.seed(0)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
torch.cuda.empty_cache()
Load the data:
T = transforms.ToTensor()
train_data = datasets.MNIST(root='data', transform=T, download=True)
test_data = datasets.MNIST(root='data', transform=T, train=False, download=True)
BS = 300
epochs = 5
LR = 0.1
train_loader = DataLoader(train_data, batch_size=BS, pin_memory=True)
test_loader = DataLoader(test_data, batch_size=1000, pin_memory=True)
Define the model to be optimized:
class Model(nn.Module):
def __init__(self, out_dims):
super().__init__()
self.conv1 = nn.Conv2d(1, out_dims, 3, stride=3, padding=1)
self.conv2 = nn.Sequential(nn.Conv2d(out_dims, out_dims * 2, 3), nn.BatchNorm2d(out_dims * 2), nn.ReLU())
self.conv3 = nn.Sequential(nn.Conv2d(out_dims * 2, out_dims * 4, 4, stride=2, padding=1), nn.BatchNorm2d(out_dims * 4), nn.ReLU(), nn.Flatten())
self.fc = nn.Linear(out_dims * 4 * 16, 10)
def forward(self, x):
return nn.Sequential(*tuple(self.children()))(x)
m1 = Model(5).to(device)
m2 = deepcopy(m1) # "m2.load_state_dict(m1.state_dict())" doesn't work either
Training and evaluation:
# M1's training:
for _ in trange(epochs):
for x, y in train_loader:
x, y = x.to(device, non_blocking=True), y.to(device, non_blocking=True)
loss = F.cross_entropy(m1(x), y)
grad = torch.autograd.grad(loss, m1.parameters())
with torch.no_grad():
for p, g in zip(m1.parameters(), grad):
p -= LR * g
# M1's evaluation:
m1.eval()
acc1 = []
with torch.no_grad():
for x, y in test_loader:
x, y = x.to(device, non_blocking=True), y.to(device, non_blocking=True)
_, pred = m1(x).max(1)
acc1.append(metric(pred, y).item())
print(f'Accuracy: {np.mean(acc1) * 100:.4}%')
# M2's training:
for _ in trange(epochs):
for x, y in train_loader:
x, y = x.to(device, non_blocking=True), y.to(device, non_blocking=True)
loss = F.cross_entropy(m2(x), y)
loss.backward()
with torch.no_grad():
for p in m2.parameters():
p -= LR * p.grad
# M2's evaluation:
m2.eval()
acc2 = []
with torch.no_grad():
for x, y in test_loader:
x, y = x.to(device, non_blocking=True), y.to(device, non_blocking=True)
_, pred = m2(x).max(1)
acc2.append(metric(pred, y).item())
print(f'Accuracy: {np.mean(acc2) * 100:.4}%')
| It took me a while to figure out, but the problem was in loss.backward(). Unlike autograd.grad(), which computes and returns the gradients, the in-place backward() computes and accumulates the gradients of the participating nodes in the computation graph. In other words, the two have the same effect when used to back-prop once, but every repetition of backward() adds the newly computed gradients to all previous ones (hence the divergence). Resetting the gradients with model.zero_grad() before each backward pass fixes the issue, as sketched below.
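A minimal sketch of the second training loop with the fix applied (same names as in the question):
for _ in trange(epochs):
    for x, y in train_loader:
        x, y = x.to(device, non_blocking=True), y.to(device, non_blocking=True)
        m2.zero_grad()  # clear the gradients accumulated by the previous backward()
        loss = F.cross_entropy(m2(x), y)
        loss.backward()
        with torch.no_grad():
            for p in m2.parameters():
                p -= LR * p.grad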
| https://stackoverflow.com/questions/70668522/ |
Torch Tensor with same shape but different storage size | I'm working on a GAN model; the generator creates a tensor of size (3,128,128), which I dump with the pseudo-code
import torch
image = Generator(noise).clone()
tensor = image[0].detach().cpu()
torch.save(tensor, save_path)
The problem is that the saved tensor takes up much more storage than a freshly created tensor, even though they have the same shape and dtype:
>>> import sys
>>> import torch
>>> tensor = torch.load(save_path)
>>> rand_t = torch.randn(tensor.shape)
>>> print(tensor.shape, rand_t.shape)
torch.Size([3, 128, 128]) torch.Size([3, 128, 128])
>>> print(tensor.dtype, rand_t.dtype)
torch.float32 torch.float32
>>> print(sys.getsizeof(tensor.storage()))
9830472
>>> print(sys.getsizeof(rand_t.storage()))
196680
I tried to dump those tensors: the output from the generator took 9.2 MB while the random tensor took 197.4 kB. I read PyTorch's documentation but found nothing. Please help me figure out the difference between them.
| It seems that extracting a sub-tensor directly from the original keeps a reference to the whole underlying storage, and torch.save serializes that entire storage rather than just the view. The function .clone() solves it by allocating fresh storage that holds only the slice. Example:
>>> import sys
>>> import torch
>>> tensor = torch.randn(10,3,128,128)
>>> sys.getsizeof(tensor.storage())
1966144
>>> sub1 = tensor[0]
>>> sub1.shape
torch.Size([3, 128, 128])
>>> sys.getsizeof(sub1.storage())
1966144
>>> sub2 = tensor[0].clone()
>>> sub2.shape
torch.Size([3, 128, 128])
>>> sys.getsizeof(sub2.storage())
196672
Therefore, in my case, cloning the image individually should solve the problem:
import torch
image = Generator(noise).clone()
tensor = image[0].detach().clone().cpu() # using clone()
torch.save(tensor, save_path)
| https://stackoverflow.com/questions/70669036/ |
Substitute values in a vector (PyTorch) | I have a tensor of N unique target labels, randomly selected from [0,R], where N<R (i.e., my target vector can have any length, but only contains N unique labels). I would like to map the labels to [0, N-1]. Is there a function available for this target transform? e.g. input vector: [12, 6, 4, 5, 3, 12, 4] → transformed vector: [4, 3, 1, 2, 0, 4, 1]
My attempt:
I have implemented the following snippet, which works as expected, but might not be the most glorious implementation:
import torch
def my_transform(vec):
t_ = torch.unique(vec)
return torch.cat(list(map(lambda x: (t_ == x).nonzero(as_tuple=True)[0], vec)))
t = torch.Tensor([12, 6, 4, 5, 3, 12, 4])
print(my_transform(t))
| You're looking for searchsorted. Since torch.unique returns the unique values in sorted order, searchsorted maps every element to its rank among those unique values:
import torch
t = torch.Tensor([12, 6, 4, 5, 3, 12, 4])
transformed = torch.searchsorted(t.unique(),t)
# tensor([4, 3, 1, 2, 0, 4, 1])
| https://stackoverflow.com/questions/70674579/ |
Create an instance of AdamParamState | I need to create an instance of AdamParamState. I looked through the adam.cpp code as an example, and accordingly copied the following code from there. But, with the provided headers, it still does not recognize AdamParamState.
I appreciate any help or comment on this matter.
#include <torch/optim/adam.h>
#include <torch/csrc/autograd/variable.h>
#include <torch/nn/module.h>
#include <torch/serialize/archive.h>
#include <torch/utils.h>
#include <ATen/ATen.h>
void get_state(torch::optim::Optimizer *optimizer){
for (auto& group : optimizer->param_groups()) {
for (auto &p : group.params()) {
if (!p.grad().defined()) {
continue;
}
auto grad = p.grad();
TORCH_CHECK(!grad.is_sparse(),
"Adam does not support sparse gradients"/*, please consider SparseAdam instead*/);
ska::flat_hash_map<std::string, std::unique_ptr<torch::optim::OptimizerParamState>>& state_ = optimizer->state();
auto param_state = state_.find(c10::guts::to_string(p.unsafeGetTensorImpl()));
auto tmp_ = p.dim();
int tmp_0;
int tmp_1;
if (tmp_ > 0)
tmp_0 = p.size(0);
if (tmp_ > 1)
tmp_1 = p.size(1);
std::cout << tmp_ << tmp_0 << tmp_1 << std::endl;
// auto& options = static_cast<AdamOptions&>(group.options());
auto& state = static_cast<AdamParamState&>(*state_[c10::guts::to_string(p.unsafeGetTensorImpl())]);
}
}
}
| I found that this works:
auto& state = static_cast<torch::optim::AdamParamState&>(*state_[c10::guts::to_string(p.unsafeGetTensorImpl())]);
very simple and juicy!
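For reference, once the cast succeeds you can read the optimizer moments through the generated accessors (a sketch; the accessor names follow libtorch's adam.h):
auto& state = static_cast<torch::optim::AdamParamState&>(
    *state_[c10::guts::to_string(p.unsafeGetTensorImpl())]);
int64_t step = state.step();                    // number of update steps taken
torch::Tensor exp_avg = state.exp_avg();        // first moment estimate
torch::Tensor exp_avg_sq = state.exp_avg_sq();  // second moment estimate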
| https://stackoverflow.com/questions/70674717/ |
Torch backward does not return a tensor | To set up the problem: I have an input matrix X and an output matrix Y, where Y is obtained by multiplying X with W and passing the result through an exponential. From what I have understood of torch.backward() with a gradient parameter, the formula should be the following
Yet dy_over_dx, being a Jacobian, should be a higher-order tensor (I mean, not the usual n-by-n matrix).
X = torch.tensor( [[2.,1.,-3], [-3,4,2]], requires_grad=True)
W = torch.tensor( [ [3.,2.,1.,-1] , [2,1,3,2 ] , [3,2,1,-2] ], requires_grad=True)
Y = torch.exp(torch.matmul(X, W))
Y.retain_grad()
print(Y)
dL_over_dy = torch.tensor( [[2,3,-3,9],[-8,1,4,6]])
print(dL_over_dy, dL_over_dy.shape)
Y.backward(dL_over_dy)
print(X.grad)
tensor([[3.6788e-01, 3.6788e-01, 7.3891e+00, 4.0343e+02],
[1.4841e+02, 7.3891e+00, 5.9874e+04, 1.0966e+03]],
grad_fn=<ExpBackward>)
tensor([[ 2, 3, -3, 9],
[-8, 1, 4, 6]]) torch.Size([2, 4])
tensor([[ -3648.6118, 7197.7920, -7279.4707],
[229369.6250, 729282.0625, 222789.8281]])
Next, if I look at the gradient of Y, which I suppose is dy_over_dx, I obtain the following. What am I not understanding here?
print(Y.grad)
tensor([[ 2., 3., -3., 9.],
[-8., 1., 4., 6.]])
| What you're looking at here is Y.grad, which is dL/dY i.e. none other than dL_over_dy.
To help clarify, let Z = X @ Y (@ is equivalent to matmul), and Y = exp(Z). Then we have with the chain-rule:
Y.grad = dL/dY
Z.grad = dL/dZ = dL/dY . dY/dZ, where dY/dZ = exp(Z) = Y
X.grad = dL/dX = dL/dZ . dZ/dX, where dZ/dX = d(X@W)/dX = W.T
Here is the implementation:
X = torch.tensor([[ 2., 1., -3],
[ -3, 4., 2.]], requires_grad=True)
W = torch.tensor([[ 3., 2., 1., -1],
[ 2., 1., 3., 2.],
[ 3., 2., 1., -2]], requires_grad=True)
Z = torch.matmul(X, W)
Z.retain_grad()
Y = torch.exp(Z)
dL_over_dy = torch.tensor([[ 2., 3., -3, 9.],
[ -8, 1., 4., 6.]])
Y.backward(dL_over_dy)
Then we have
>>> dL_over_Z = dL_over_dy*Y
tensor([[ 7.3576e-01, 1.1036e+00, -2.2167e+01, 3.6309e+03],
[-1.1873e+03, 7.3891e+00, 2.3950e+05, 6.5798e+03]],
grad_fn=<MulBackward0>)
>>> dL_over_X = dL_over_Z @ W.T
tensor([[ -3648.6118, 7197.7920, -7279.4707],
[229369.6250, 729282.0625, 222789.8281]], grad_fn=<MmBackward0>)
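You can verify the derivation against autograd (continuing the snippet above):
>>> torch.allclose(Z.grad, dL_over_Z)
True
>>> torch.allclose(X.grad, dL_over_X)
True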
| https://stackoverflow.com/questions/70679434/ |
IndexError: Target is out of bounds | I am currently trying to replicate the article
https://towardsdatascience.com/text-classification-with-bert-in-pytorch-887965e5820f
to get an introduction to PyTorch and BERT.
I used my own sample corpus and corresponding targets as practice, but the code throws the following:
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-4-8577755f37de> in <module>()
201 LR = 1e-6
202
--> 203 trainer(model, df_train, df_val, LR, EPOCHS)
3 frames
<ipython-input-4-8577755f37de> in trainer(model, train_data, val_data, learning_rate, epochs)
162 output = model(input_id, mask)
163
--> 164 batch_loss = criterion(output, torch.max(train_label,1)[1])
165 total_loss_train += batch_loss.item()
166
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
1150 return F.cross_entropy(input, target, weight=self.weight,
1151 ignore_index=self.ignore_index, reduction=self.reduction,
-> 1152 label_smoothing=self.label_smoothing)
1153
1154
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing)
2844 if size_average is not None or reduce is not None:
2845 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2846 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
2847
2848
IndexError: Target 32 is out of bounds.
The code is mostly identical to the one in the article, except of course for the more extensive label dict.
Original:
labels = {'business':0,
'entertainment':1,
'sport':2,
'tech':3,
'politics':4
}
Mine:
labels =
{'Macroeconomics': 0,
'Microeconomics': 1,
'Labor Economics': 2,
'Subnational Fiscal Issues': 3,
'Econometrics': 4,
'International Economics': 5,
'Financial Economics': 6,
'Health, Education, and Welfare': 7,
'Public Economics': 8,
'Development and Growth': 9,
'Industrial Organization': 10,
'Other': 11,
'Environmental and Resource Economics': 12,
'History': 13,
'Regional and Urban Economics': 14,
'Development Economics': 15,
'Corporate Finance': 16,
'Children': 17,
'Labor Studies': 18,
'Economic Fluctuations and Growth': 19,
'Economics of Aging': 20,
'Economics of Education': 21,
'International Trade and Investment': 22,
'Asset Pricing': 23,
'Health Economics': 24,
'Law and Economics': 25,
'International Finance and Macroeconomics': 26,
'Monetary Economics': 27,
'Technical Working Papers': 28,
'Political Economy': 29,
'Development of the American Economy': 30,
'Health Care': 31,
'Productivity, Innovation, and Entrepreneurship': 32}
Code:
class Dataset(torch.utils.data.Dataset):
def __init__(self, df):
self.labels = torch.LongTensor([labels[label] for label in df["category"]])
self.texts = [tokenizer(text,
padding='max_length', max_length = 512, truncation=True,
return_tensors="pt") for text in df['text']]
def classes(self):
return self.labels
def __len__(self):
return len(self.labels)
def get_batch_labels(self, idx):
# Fetch a batch of labels
return np.array(self.labels[idx])
def get_batch_texts(self, idx):
# Fetch a batch of inputs
return self.texts[idx]
def __getitem__(self, idx):
batch_texts = self.get_batch_texts(idx)
batch_y = np.array(range(0,len(labels)))
return batch_texts, batch_y
#Splitting the sample into trainingset, validationset and testset (80,10,10)
np.random.seed(112)
df_train, df_val, df_test = np.split(df.sample(frac=1, random_state=42),
[int(.8*len(df)), int(.9*len(df))])
print(len(df_train),len(df_val), len(df_test))
from torch import nn
class BertClassifier(nn.Module):
def __init__(self, dropout=0.5):
super(BertClassifier, self).__init__()
self.bert = BertModel.from_pretrained('bert-base-cased')
self.dropout = nn.Dropout(dropout)
self.linear = nn.Linear(768, 5)
self.relu = nn.ReLU()
def forward(self, input_id, mask):
_, pooled_output = self.bert(input_ids= input_id, attention_mask=mask,return_dict=False)
dropout_output = self.dropout(pooled_output)
linear_output = self.linear(dropout_output)
final_layer = self.relu(linear_output)
return final_layer
from torch.optim import Adam
from tqdm import tqdm
def trainer(model, train_data, val_data, learning_rate, epochs):
train, val = Dataset(train_data), Dataset(val_data)
train_dataloader = torch.utils.data.DataLoader(train, batch_size=2, shuffle=True)
val_dataloader = torch.utils.data.DataLoader(val, batch_size=2)
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
criterion = nn.CrossEntropyLoss()
optimizer = Adam(model.parameters(), lr= learning_rate)
if use_cuda:
model = model.cuda()
criterion = criterion.cuda()
for epoch_num in range(epochs):
total_acc_train = 0
total_loss_train = 0
for train_input, train_label in tqdm(train_dataloader):
train_label = train_label.to(device)
mask = train_input['attention_mask'].to(device)
input_id = train_input['input_ids'].squeeze(1).to(device)
output = model(input_id, mask)
batch_loss = criterion(output, torch.max(train_label,1)[1])
total_loss_train += batch_loss.item()
acc = (output.argmax(dim=1) == train_label).sum().item()
total_acc_train += acc
model.zero_grad()
batch_loss.backward()
optimizer.step()
total_acc_val = 0
total_loss_val = 0
with torch.no_grad():
for val_input, val_label in val_dataloader:
val_label = val_label.to(device)
mask = val_input['attention_mask'].to(device)
input_id = val_input['input_ids'].squeeze(1).to(device)
output = model(input_id, mask)
batch_loss = criterion(output, val_label)
total_loss_val += batch_loss.item()
acc = (output.argmax(dim=1) == val_label).sum().item()
total_acc_val += acc
print(
f'Epochs: {epoch_num + 1} | Train Loss: {total_loss_train / len(train_data): .3f} \
| Train Accuracy: {total_acc_train / len(train_data): .3f} \
| Val Loss: {total_loss_val / len(val_data): .3f} \
| Val Accuracy: {total_acc_val / len(val_data): .3f}')
EPOCHS = 5
model = BertClassifier()
LR = 1e-6
trainer(model, df_train, df_val, LR, EPOCHS)
| You're creating a label array containing all 33 class indices in your __getitem__ call, and its maximum value, 32, is out of bounds for your model, whose final layer (nn.Linear(768, 5)) only produces 5 class scores; for your 33 labels that layer should be nn.Linear(768, len(labels)). In fact, you create the same array each time this method is called. You're supposed to fetch the associated y with the X found at idx.
If you replace batch_y = np.array(range(...)) with batch_y = np.array(self.labels[idx]), you'll fix that part of the error. Indeed, this is already implemented in your get_batch_labels method, as shown below.
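Something like this (a sketch using the methods already defined in your Dataset class):
def __getitem__(self, idx):
    batch_texts = self.get_batch_texts(idx)
    batch_y = self.get_batch_labels(idx)  # the label of this sample, not the whole label range
    return batch_texts, batch_y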
| https://stackoverflow.com/questions/70680290/ |
How to solve no such node error in pytables and h5py | I built an HDF5 dataset using PyTables. It contains thousands of nodes, each node being an image stored without compression (of shape 512x512x3). When I run a deep learning training loop (with a PyTorch dataloader) on it, it randomly crashes, saying that the node does not exist. However, it is never the same node that is missing, and when I open the file myself to check, the node is ALWAYS there.
I am running everything sequentially, as I thought the fault might be multithreaded/multiprocess access to the file, but that did not fix the problem. I have tried a LOT of things and none of them work.
Does someone have an idea about what to do ? Should I add like a timer between calls to give the machine the time to reallocate the file ?
Initially I was working with pytables only, but in an attempt to solve my problem I tried loading the file with h5py instead. Unfortunately it did not work better.
Here is the error I get with h5py: "RuntimeError: Unable to get link info (bad symbol table node signature)"
The exact error may change but every time it says "bad symbol table node signature"
PS: I cannot share the code because it is huge and part of a bigger basecode that is my company's property. I can still share part of the code below to show how I load the images:
with h5py.File(dset_filepath, "r", libver='latest', swmr=True) as h5file:
node = h5file["/train_group_0/sample_5"] # <- this line breaks
target = node.attrs.get('TITLE').decode('utf-8')
img = Image.fromarray(np.uint8(node))
return img, int(target.strip())
| Before accessing the dataset (node), add a test to confirm it exists. While you're adding checks, do the same for the attribute 'TITLE'. If you are going to use hard-coded path names (like 'group_0') you should check that all nodes in the path exist (for example, does 'group_0' exist?). Or use one of the recursive visitor functions (.visit() or .visititems()) to be sure you only access existing nodes.
Modified h5py code with rudimentary checks looks like this:
sample = 'sample_5'
with h5py.File(dset_filepath, 'r', libver='latest', swmr=True) as h5file:
if sample not in h5file['/train_group_0'].keys():
print(f'Dataset Read Error: {sample} not found')
return None, None
else:
node = h5file[f'/train_group_0/{sample}'] # <- this line breaks
img = Image.fromarray(np.uint8(node))
if 'TITLE' not in node.attrs.keys():
print(f'Attribute Read Error: TITLE not found')
return img, None
else:
target = node.attrs.get('TITLE').decode('utf-8')
return img, int(target.strip())
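If you'd rather not hard-code paths at all, a visitor function can enumerate everything that actually exists (a quick sketch):
import h5py
with h5py.File(dset_filepath, 'r') as h5file:
    h5file.visititems(lambda name, obj: print(name, type(obj).__name__))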
You said you were working with PyTables. Here is code to do the same with PyTables package:
import tables as tb
sample = 'sample_5'
with tb.open_file(dset_filepath, 'r') as h5file:  # PyTables has no libver/swmr options
if sample not in h5file.get_node('/train_group_0'):
print(f'Dataset Read Error: {sample} not found')
return None, None
else:
node = h5file.get_node(f'/train_group_0/{sample}') # <- this line breaks
img = Image.fromarray(np.uint8(node))
if 'TITLE' not in node._v_attrs:
print(f'Attribute Read Error: TITLE not found')
return img, None
else:
target = node._v_attrs['TITLE'].decode('utf-8')
return img, int(target.strip())
| https://stackoverflow.com/questions/70682602/ |
Pytorch model gradients are printed correctly but copied wrongly | I want to copy the gradients of loss, with respect to weight, for different data samples using pytorch. In the code below, I am iterating one sample each time from the data loader (batch size = 1) and collecting gradients for 1st fully connected (fc1) layer. Gradients should be different for different samples. The print function shows correct gradients, which are different for different samples. But when I store them in a list, I get the same gradients repeatedly. Any suggestions would be much appreciated. Thanks in advance!
grad_list = [ ]
for data in test_loader:
inputs, labels = data[0], data[1]
inputs = torch.autograd.Variable(inputs)
labels = torch.autograd.Variable(labels)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward
output = target_model(inputs)
loss = criterion(output, labels)
loss.backward()
grad_list.append(target_model.fc1.weight.grad.data)
print(target_model.fc1.weight.grad.data)
| Try using clone and detach instead:
grad_list.append(target_model.fc1.weight.grad.clone().detach())
The data property you are appending to your list is a mutable reference to the storage of the parameter (i.e. the actual memory address and the values contained within). What you need to do is create a replica of the gradient tensor (with clone) and remove it from the computational graph (with detach) to avoid it interfering with gradient computation.
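A tiny demonstration of the aliasing problem:
a = torch.ones(2, requires_grad=True)
a.sum().backward()
g = a.grad.data     # an alias, not a copy: g shares storage with a.grad
a.grad.zero_()      # zeroing in place, as optimizer.zero_grad() may do
print(g)            # tensor([0., 0.]) -- the "saved" gradient changed too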
| https://stackoverflow.com/questions/70685546/ |
spectral_norm on GCNConv module | I want to call torch.nn.utils spectral_norm function on a GCNConv layer
gc1 = GCNConv(18, 16)
spectral_norm(gc1)
but I'm getting the following error:
KeyError: 'weight'
meaning gc1._parameters doesn't have weight (only bias):
gc1._parameters
OrderedDict([('bias', Parameter containing:
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
requires_grad=True))])
However, gc1.parameters() stores two objects and one of them is a 16 by 18 matrix (weight matrix).
for p in gc1.parameters():
print('P: ', p.shape)
P: torch.Size([16])
P: torch.Size([16, 18])
How can I make spectral_norm function work on a GCNConv module?
| According to the source code, the weight parameter is wrapped within a linear module contained in GCNConv objects as lin.
I imagine that this should then work:
gc1 = GCNConv(18, 16)
spectral_norm(gc1.lin)
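You can confirm the reparametrization took effect (a sketch, assuming a standard PyG GCNConv):
gc1 = GCNConv(18, 16)
spectral_norm(gc1.lin)
print([name for name, _ in gc1.lin.named_parameters()])
# 'weight' is replaced by 'weight_orig'; the u/v vectors become buffers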
| https://stackoverflow.com/questions/70686721/ |
What is the equivalent of PyTorch's BoolTensor in Tensorflow 2.x? | Is there an equivalent of BoolTensor from Pytorch in Tensorflow assuming I have the below usage in Pytorch that I want to migrate to Tensorflow
done_mask = torch.BoolTensor(dones.values).to(device)
next_state_values[done_mask] = 0.0
| What is dones?
Assuming it's a 0/1 tensor, you can convert it to a Bool tensor like this:
tf.cast(dones,tf.bool)
However, if you want to assign values to a tensor, you can't do it that way.
One way, which I recommend, is to multiply by a 0/1 mask:
next_state_values *= tf.cast(dones!=1,next_state_values.dtype)
Another way, which I don't recommend as it causes some issues with gradients, is to use tf.tensor_scatter_nd_update. For your case, that would be:
indices = tf.where(dones==1)
next_state_values = tf.tensor_scatter_nd_update(next_state_values ,indices,2*tf.zeros(len(indices)))
| https://stackoverflow.com/questions/70688556/ |
PyTorch cannot handle complex tensor on GPU, but works on CPU | I am using PyTorch to simulate NNs on a quantum computer, and therefore I have to use tensors with ComplexFloatTensor datatypes. When I run this line of code on GPU:
torch.matmul(A.transpose(1,2).flatten(0,1), H.flatten(1,2)).reshape(N,steps,2**n,2**n).transpose(0,1)
I get the following error when the tensors are LARGE:
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasCgemm( handle, opa, opb, m, n, k, reinterpret_cast<const cuComplex*>(&alpha), reinterpret_cast<const cuComplex*>(a), lda, reinterpret_cast<const cuComplex*>(b), ldb, reinterpret_cast<const cuComplex*>(&beta), reinterpret_cast<cuComplex*>(c), ldc)`
A and H are both ComplexFloatTensor tensors.
The above error starts occurring when A and H are of shape torch.Size([100, 54, 10]) and torch.Size([54, 512, 512]) or larger, but doesn't occur when they are of shape torch.Size([100, 44, 10]) and torch.Size([44, 256, 256])
Don't worry too much about the exact numbers, but the point is that it always works on CPU (just very slowly), but on GPU it breaks past a certain size.
Does anyone know what the problem could be? Given the edit below, it might just be caused by the fact that the GPU ran out of memory (but the error failed to tell me so)
EDIT: I ran the same thing on Google Colab and got the following error at the same place:
RuntimeError: CUDA out of memory. Tried to allocate 570.00 MiB (GPU 0; 14.76 GiB total capacity; 12.19 GiB already allocated; 79.75 MiB free; 13.38 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Google Colab uses Tesla T4 GPUs, while my server uses NVIDIA RTX A6000
| I figured out the answer to this question myself in the meantime. As it turns out, my GPU simply ran out of memory.
For some reason, Google Colab showed this error correctly (see above), while my own GPU showed this weird CUBLAS_STATUS_NOT_SUPPORTED error, instead of directly telling me that it is a memory issue.
| https://stackoverflow.com/questions/70689874/ |
pytorch cuda out of memory while inferencing | I think this is a very basic question, my apologies as I am very new to PyTorch. I am trying to find out if an image is manipulated or not using MantraNet. After running 2-3 inferences I get a CUDA out-of-memory error, and even after restarting the kernel I keep getting the same error. The error is given below:
RuntimeError: CUDA out of memory. Tried to allocate 616.00 MiB (GPU 0; 4.00 GiB total capacity; 1.91 GiB already allocated; 503.14 MiB free; 1.93 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
The 'tried to allocate' amount (here 616.00 MiB) keeps changing. I checked the GPU statistics and memory usage shoots up while I run inference. In TensorFlow I know we can control memory usage by defining an upper limit; is there anything similar in PyTorch that one can try?
| During inference I realized there was no way a full-resolution image would fit on the GPU I was using, so after resizing the image to a smaller resolution I was able to run inference without facing any memory issue, as sketched in the example below.
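For example, something along these lines, where model is the loaded MantraNet; the target resolution is an assumption (pick the largest size your GPU can handle):
import torch
from PIL import Image
from torchvision import transforms

img = Image.open('input.jpg')
preprocess = transforms.Compose([transforms.Resize((512, 512)),  # hypothetical size
                                 transforms.ToTensor()])
x = preprocess(img).unsqueeze(0).to('cuda')
with torch.no_grad():  # also avoids keeping activations around for backprop
    out = model(x)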
| https://stackoverflow.com/questions/70697046/ |
focal loss for imbalanced data using pytorch | I want to use focal loss with multiclass imbalanced data using PyTorch. I searched and tried to use this code, but I got an error:
class_weights=tf.constant([0.21, 0.45, 0.4, 0.46, 0.48, 0.49])
loss_fn=nn.CrossEntropyLoss(weight=class_weights,reduction='mean')
and use this in train function
preds = model(sent_id, mask, labels)
# compute the validation loss between actual and predicted values
alpha=0.25
gamma=2
ce_loss = loss_fn(preds, labels)
pt = torch.exp(-ce_loss)
focal_loss = (alpha * (1-pt)**gamma * ce_loss).mean()
the error is
TypeError: cannot assign 'tensorflow.python.framework.ops.EagerTensor' object to buffer 'weight' (torch Tensor or None required)
in this line
loss_fn=nn.CrossEntropyLoss(weight=class_weights,reduction='mean')
| You're mixing tensorflow and pytorch objects.
Try:
class_weights = torch.tensor([0.21, 0.45, 0.4, 0.46, 0.48, 0.49], requires_grad=False)
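Putting it together with the focal-loss computation from your training function (dummy logits/labels stand in for your model output):
import torch
import torch.nn as nn

class_weights = torch.tensor([0.21, 0.45, 0.4, 0.46, 0.48, 0.49])
loss_fn = nn.CrossEntropyLoss(weight=class_weights, reduction='mean')

preds = torch.randn(8, 6)           # dummy logits: batch of 8, 6 classes
labels = torch.randint(0, 6, (8,))  # dummy targets

alpha, gamma = 0.25, 2
ce_loss = loss_fn(preds, labels)
pt = torch.exp(-ce_loss)
focal_loss = (alpha * (1 - pt) ** gamma * ce_loss).mean()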
| https://stackoverflow.com/questions/70698416/ |
How to get a Pytorch data loader per class? | I want to train my model on 1 MNIST class at a time.
I can load the data with a general loader:
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from torch.autograd import Variable
trans = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (1.0,))])
# if not exist, download mnist dataset
root = './data'
train_set = datasets.MNIST(root=root, train=True, transform=trans, download=True)
batch_size = 100
train_loader = torch.utils.data.DataLoader(
dataset=train_set,
batch_size=batch_size,
shuffle=True)
But I'm not sure how to create 10 loaders (1 for each of the classes/digits) from this general loader (or just 10 loaders initially)
| A rather simple solution would involve grouping the dataset by truth value, and creating a unique dataloader per group:
...
from torch.utils.data import Subset, DataLoader
subsets = {target: Subset(train_set, [i for i, (x, y) in enumerate(train_set) if y == target]) for _, target in train_set.class_to_idx.items()}
loaders = {target: DataLoader(subset) for target, subset in subsets.items()}
you can then pick out a specific loader based on class index:
class_3_loader = loaders[3]
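Note that the comprehension above iterates the whole dataset once per class. If the dataset exposes its labels directly (MNIST stores them in train_set.targets), you can build all the index lists in one pass, reusing batch_size from the question (a sketch):
import torch

subsets = {target: Subset(train_set, torch.where(train_set.targets == target)[0].tolist())
           for target in range(10)}
loaders = {target: DataLoader(subset, batch_size=batch_size, shuffle=True)
           for target, subset in subsets.items()}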
| https://stackoverflow.com/questions/70698487/ |
Programing a Pytorch neural network with a branch in the flow of information | I am trying to program a custom layer in PyTorch. I would like this layer to be fully connected to the previous layer but at the same time I want to feed some information from the input layer, let's say I want it to be fully connected to the first layer as well. For example the 4th layer would be fed the 3rd and 1st layer.
This would make the information flow split at the first layer and one branch would be inserted later into the network.
I have to define the forward in this layer having two inputs
class MyLayer(nn.Module):
def __init__(self, size_in, size_out):
super().__init__()
self.size_in, self.size_out = size_in, size_out
weights = torch.Tensor(size_out, size_in)
(... ...)
def forward(self, first_layer, previous_layer):
(... ...)
return output
How can I make this work if I put this layer after, let's say, a normal feed-forward layer which takes only the previous layer's output as input?
Can I use nn.Sequential with this layer?
Thanks!
| Just concatenate the input info with the output of the previous layers and feed the result to the next layers, like this:
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(100, 120)  # suppose your input shape is 100
        self.fc2 = nn.Linear(120, 80)
        self.fc3 = nn.Linear(180, 10)   # 180 = 100 (input) + 80 (fc2 output)
    def forward(self, input_layer):
        x = F.relu(self.fc1(input_layer))  # relu stands in for any activation
        x = F.relu(self.fc2(x))
        x = torch.cat((input_layer, x), dim=1)  # concatenate along the feature dimension
        x = self.fc3(x)  # this layer is fed by the input info and the previous layer
        return x
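Usage is then a single call; the two information flows are joined inside forward:
net = Net()
x = torch.randn(8, 100)  # a batch of 8 input vectors
out = net(x)             # shape: (8, 10)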
| https://stackoverflow.com/questions/70703071/ |
Unable to create a custom torchtext BucketIterator | I'm trying to create a POS tagger with LSTM and I'm facing some difficulties with preparing the data.
I've successfully followed a guide that used the following code to prepare the data itertors:
TEXT = data.Field(lower = True)
UD_TAGS = data.Field(unk_token = None)
PTB_TAGS = data.Field(unk_token = None)
fields = (("text", TEXT), ("udtags", UD_TAGS), ("ptbtags", PTB_TAGS))
train_data, valid_data, test_data = datasets.UDPOS.splits(fields)
MIN_FREQ = 2
TEXT.build_vocab(train_data,
min_freq = MIN_FREQ,
vectors = "glove.6B.100d",
unk_init = torch.Tensor.normal_)
UD_TAGS.build_vocab(train_data)
PTB_TAGS.build_vocab(train_data)
BATCH_SIZE = 128
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size = BATCH_SIZE,
device = device)
And then when training the model the code is:
for batch in iterator:
text = batch.text
tags = batch.udtags
Now for my problem - I have a dataset of lists: a list of sentences (where every sentence is a list of words) and a list of lists of tags corresponding to the sentences' words.
I created a torch Dataset instance from x_train, y_train (each one is a list of lists).
But, it does not behave like the 'train_data' that comes from datasets.UDPOS.splits(fields). So, when trying to access the data with:
for batch in iterator:
text = batch.text
tags = batch.udtags
I'm getting an error since my iterator does not have the fields inside. I tried accessing the data in a different manner but couldn't find a way around it.
I also noticed that in the above example, the data in the batch is with the embeddings indexes, while the batch in my code is still the words themselves.
All of the examples I found on the internet use datasets from torchtext.legacy.datasets, so they do not really help with my problem.
If it helps, here is my code (it's part of a bigger project, so a bit messy):
class ConvertDataset(Dataset):
"""
Create an instances of pytorch Dataset from lists.
"""
def __init__(self, x, y):
# data loading
self.x = x
self.y = y
def __getitem__(self, index):
return {'text': self.x[index], 'tags': self.y[index]}
def __len__(self):
return len(self.x)
# ## model variables
DROPOUT = 0.25
HIDDEN_DIM = 128
# ## load and prepare train data
train_set = load_annotated_corpus(params_d['data_fn'])
x_train, y_train = _prepare_data(train_set)
TEXT = Field(lower=True)
UD_TAGS = Field(unk_token=None)
# ## build words and tags vocabularies
TEXT.build_vocab(x_train,
min_freq=params_d['min_frequency'],
vectors='glove.6B.100d',
unk_init=torch.Tensor.normal_,
max_size=None if params_d['max_vocab_size'] == -1 else
params_d['max_vocab_size'])
UD_TAGS.build_vocab(y_train)
# ## more model variables
INPUT_DIM = len(TEXT.vocab)
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]
# ## initiate a model
lstm_model = BiLSTM.LSTM(input_dim=INPUT_DIM,
embedding_dim=params_d['embedding_dimension'],
hidden_dim=HIDDEN_DIM,
output_dim=params_d['output_dimension'],
n_layers=params_d['num_of_layers'],
dropout=DROPOUT,
pad_idx=PAD_IDX)
lstm_model.apply(_init_weights)
pretrained_embeddings = TEXT.vocab.vectors
lstm_model.embedding.weight.data.copy_(pretrained_embeddings)
# set pad tag embedding to 0
lstm_model.embedding.weight.data[PAD_IDX] = torch.zeros(params_d['embedding_dimension'])
BATCH_SIZE = 128
My data (lists of lists):
x_train, y_train = _prepare_data(train_data)
Data preparation
train_torch_dataset = ConvertDataset(x_train, y_train)
# ## create data iterators
train_iterator = BucketIterator(
train_torch_dataset,
batch_size=BATCH_SIZE,
device=device,
# Function to use for sorting examples.
sort_key=lambda x: len(x['text']),
# Repeat the iterator for multiple epochs.
repeat=True,
# Sort all examples in data using `sort_key`.
sort=False,
# Shuffle data on each epoch run.
shuffle=True,
# Use `sort_key` to sort examples in each batch.
sort_within_batch=True
)
| Took me a while but I found a solution.
To create a torchtext dataset with input data as lists, use SequenceTaggingDataset (from torchtext.legacy.datasets.SequenceTaggingDataset) but you need to do a simple change to the original source code in the __init__ function, like this:
def __init__(self, columns, fields, encoding="utf-8", separator="\t", **kwargs):
examples = []
# for 2 fields data sets (text, tags)
for words, labels in zip(columns[0], columns[-1]):
examples.append(data.Example.fromlist([words, labels], fields))
super(SequenceTaggingDataset, self).__init__(examples, fields,
**kwargs)
Then, assuming you have a data with two field (in my example, text and pos-tags) you can define the dataset like that:
from torchtext.legacy import data
TEXT = data.Field()
UD_TAGS = data.LabelField()
# define torchtext fields
fields = (("text", TEXT), ("udtags", UD_TAGS))
# push the data into a torchtext type of dataset (** modified SequenceTaggingDataset **)
train_torchtext_dataset = SequenceTaggingDataset([x_train, y_train], fields=fields)
Note that x_train, y_train are nested lists.
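From there the usual legacy-torchtext workflow applies: build the vocabs on the new dataset and wrap it in a BucketIterator (a sketch reusing the names above):
TEXT.build_vocab(train_torchtext_dataset, min_freq=2)
UD_TAGS.build_vocab(train_torchtext_dataset)

train_iterator = data.BucketIterator(
    train_torchtext_dataset,
    batch_size=128,
    sort_key=lambda ex: len(ex.text),
    sort_within_batch=True)

for batch in train_iterator:
    text, tags = batch.text, batch.udtags  # numericalized tensors, as with UDPOS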
| https://stackoverflow.com/questions/70703655/ |
PyTorch TensorBoard add_graph() dictionary input error | What would be the proper way of passing a PyTorch dictionary dataset to TensorBoard's add_graph(model, data)?
It may seem similar to Question1, Question2 and Question3; however, I couldn't find the right way of handling a dictionary dataset.
Error Message
Dictionary inputs to traced functions must have consistent type. Found Tensor and List[str]
Error occurs, No graph saved
Below are my anonymized scripts of the project.
train.py
from torch.utils.tensorboard import SummaryWriter
from models import CustomModel
from datasets import CustomDataset
writer = SummaryWriter()
# Dataset
dataset = CustomDataset(params ...)
train_dataset = [dataset[i] for i in range(0, k)]
train_dataloader = DataLoader(train_dataset, batch_size=32, shuffle=True)
# Model & TensorBoard
model = CustomModel(params....)
writer.add_graph(model, next(iter(train_dataloader))) # ---- HERE ----
datasets.py
class CustomDataset(Dataset):
def __init__(self, ...):
...
self.x_sequences = pad_sequence(x_sequences, batch_first=True, padding_value=0)
self.y_label = torch.LongTensor(label_list)
...
def __len__(self):
return len(self.y_label)
def __getitem__(self, index):
...
return {
"x_categoricals": self.x_categoricals[index],
"x_sequences": self.x_sequences[index],
"y_label": self.y_label[index],
"info": self.info[index],
}
| The error message tells you that the entries of the dictionary must all be the same type, but in your case you seem to have a Tensor in one entry but a list of strings in another entry. You'd have to make sure that all entries have the same type.
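One way to do that here (a sketch, assuming your model's forward consumes the dict and does not need the 'info' strings) is to drop the string-valued entry before tracing:
batch = next(iter(train_dataloader))
graph_inputs = {k: v for k, v in batch.items() if k != 'info'}  # keep only tensor entries
writer.add_graph(model, graph_inputs)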
| https://stackoverflow.com/questions/70706389/ |
Switch function/class implementation between Numpy & PyTorch:? | I have a function (actually a class, but for simplicity, let's pretend it's a function) that uses several NumPy operations that all exist in PyTorch e.g. np.add and I also want a PyTorch version of the function. I'm trying to avoid duplicating my code, so I want to know:
Is there a way for me to dynamically switch a function's execution back and forth between NumPy and PyTorch without needing duplicate implementations?
For a toy example, suppose my function is:
def foo_numpy(x: np.ndarray, y: np.ndarray) -> np.ndarray:
return np.add(x, y)
I could define a PyTorch equivalent:
def foo_torch(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
return torch.add(x, y)
Could I somehow define a function like:
def foo(x, y, mode: str = 'numpy'):
if mode == 'numpy':
return np.add(x, y)
elif mode == 'torch':
return torch.add(x, y)
else:
raise ValueError
without needing the if-else statement?
Edit: what about something like the following?
def foo(x, y, mode: str = 'numpy'):
if mode == 'numpy':
lib = np
elif mode == 'torch':
lib = torch
else:
raise ValueError
return lib.add(x, y)
| Instead of using a string, you can use a boolean (bool) value to represent the mode you want to use i.e. False (0) representing NumPy and True (1) representing PyTorch. One can then use ternary operators to further shrink the if statements.
def foo(x, y, mode: bool = False):
lib = torch if mode else np
return lib.add(x, y)
If you want to switch back and forth between the two in a class you can do something similar
class Example:
def __init__(self):
self._mode = True
def switchMode(self):
        self._mode = not self._mode
def foo(self, x, y):
lib = torch if self._mode else np
return lib.add(x, y)
| https://stackoverflow.com/questions/70706732/ |
PyTorch equivalent of Numpy Round? | numpy.round() optionally accepts a specified number of digits to round to. However, torch.round does not, and while it seems like PyTorch will conform to NumPy eventually, what are people's current solutions?
I just want a function like torch.round(3.22, decimals=1) that returns 3.2.
| You can define your own rounding function by
def round(x, decimals=0):
b = 10**decimals
return torch.round(x*b)/b
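Usage then matches the NumPy call from the question (note this shadows the built-in round):
>>> round(torch.tensor(3.22), decimals=1)
tensor(3.2000)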
| https://stackoverflow.com/questions/70706832/ |
Understanding Pytorch Weight and Biases in Linear Layer | Below is the code for combining weight and bias into a single layer, I am not able to understand the line below, why we have to multiply weight transpose matrix with bais. I should just bias without weight because we are multiplying weight for getting final output3
combined_layer.bias.data = layer1.bias @ layer2.weight.t() + layer2.bias
# Create a single layer to replace the two linear layers
combined_layer = nn.Linear(input_size, output_size)
combined_layer.weight.data = layer2.weight @ layer1.weight
combined_layer.bias.data = layer1.bias @ layer2.weight.t() + layer2.bias //This should be just bias
outputs3 = inputs @ combined_layer.weight.t() + combined_layer.bias
Could anyone please help me in understanding this?
| You simply need to expand the original equation of two Linear layers i.e.
# out = layer2(layer1(x))
# given (x @ A + B) @ C + D
out = (x @ layer1.weight.t() + layer1.bias) @ layer2.weight.t() + layer2.bias
You can expand (x @ A + B) @ C + D = (x @ A @ C) + B @ C + D
out = x @ layer1.weight.t() @ layer2.weight.t() + layer1.bias @ layer2.weight.t() + layer2.bias
out = x @ (layer1.weight.t() @ layer2.weight.t()) + (layer1.bias @ layer2.weight.t() + layer2.bias)
# the above equation is x @ (A @ C) + B @ C + D
# now you can assume
combined_layer.weight = layer2.weight @ layer1.weight
combined_layer.bias = layer1.bias @ layer2.weight.t() + layer2.bias
# final output
out = x @ combined_layer.weight.t() + combined_layer.bias
Also, note that matrix multiplication transpose rule is also used here i.e.
transpose(A@B) = transpose(B) @ transpose(A)
That's why combined_layer.weight.t() is multiplied by x as we didn't take transpose in layer2.weight @ layer1.weight.
| https://stackoverflow.com/questions/70713265/ |
Merge one tensor into other tensor on specific indexes in PyTorch | Is there an efficient way to merge one tensor into another in PyTorch, but only at specific indexes?
Here is my full problem.
I have a list of indexes of a tensor in below code xy is the original tensor.
I need to preserve the rows of xy that are in the indexes list, and apply some function to the rows at all other indexes (for simplicity, let's say the function is 'multiply them by two').
xy = torch.rand(100,4)
indexes=[1,2,55,44,66,99,3,65,47,88,99,0]
Then merge them back into the original tensor.
This is what I have done so far:
I create a mask tensor
indexes=[1,2,55,44,66,99,3,65,47,88,99,0]
xy = torch.rand(100,4)
mask=[]
for i in range(0,xy.shape[0]):
if i in indexes:
mask.append(False)
else:
mask.append(True)
print(mask)
import numpy as np
target_mask = torch.from_numpy(np.array(mask, dtype=bool))
print(target_mask.sum()) # output is 89, the number of rows other than the preserved ones
Apply the function on masked rows
zy = xy[target_mask]
print(zy)
zy=zy*2
print(zy)
Code above is working fine and posted here to clarify the problem
Now I want to merge tensor zy into xy on specified index saved in the list indexes.
Here is the pseudocode I made; as one can see, it is too complex and needs 3 for-loops to complete the task, which would waste too many resources.
# pseudocode
for masked_row in indexes:
for xy_rows_index in xy:
if xy_rows_index= masked_row
pass
else:
take zy tensor row and replace here #another loop to read zy.
But I am not sure what is an efficient way to merge them, as I don't want to use NumPy or for loop etc. It will make the process slow, as the original tensor is too big and I am going to use GPU.
Any efficient way in Pytorch for this?
| Once you have your mask you can assign updated values in place.
zy = 2 * xy[target_mask]
xy[target_mask] = zy
As for acquiring the mask I don't see a problem necessarily with your approach, though using the built-in set operations would probably be more efficient. This also gives an index tensor instead of a mask, which, depending on the number of indices being updated, may be more efficient.
i = list(set(range(len(xy)))-set(indexes))
zy = 2 * xy[i]
xy[i] = zy
Edit:
To address the comment, specifically to find the complement of indices of i we can do
i_complement = list(set(range(len(xy)))-set(i))
However, assuming indexes contains only values between 0 and len(xy)-1, we could equivalently use i_complement = list(set(indexes)), which just removes the repeated values in indexes.
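Putting it together with the tensors from the question:
xy = torch.rand(100, 4)
indexes = [1, 2, 55, 44, 66, 99, 3, 65, 47, 88, 99, 0]
i = list(set(range(len(xy))) - set(indexes))  # rows NOT in `indexes`
xy[i] = 2 * xy[i]  # apply the function in place; the preserved rows stay untouched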
| https://stackoverflow.com/questions/70715968/ |
How can I make all rows of matrix sum 1 from tensor in pytorch | I have a matrix, which is from Cora.
The size of it is [2708,1433]
For every row, the elements are 1 or 0. I want the sum of the elements of every row to be 1, achieved by dividing each row by its sum.
How can I do that? At first I thought I could do it with a 'for' loop and 'append'.
Is there any easier way?
| xs = xs / xs.sum(dim=-1).unsqueeze(-1)
If xs is your Tensor, xs.sum(dim=-1) is the summation over the column index (i.e. a Tensor of shape (2708,)). By unsqueezing it, you turn it into a matrix of shape (2708, 1) which you can then broadcast against xs. The result of the division
is a matrix, all rows of which sum to 1:
xs.sum(dim=1)
assert torch.allclose(torch.ones(xs.shape[0], dtype=float), xs.sum(dim=1))
ps: if xs is ones and zeros, you might need to cast it to float first:
xs = xs.to(float)
| https://stackoverflow.com/questions/70720494/ |
Detectron2 models not generating any results | I am just trying out detectron2 with some basic code as follows
model = model_zoo.get('COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml', trained=True)
im = Image.open('input.jpg')
t = transforms.ToTensor()
model.eval()
with torch.no_grad():
im = t(im)
output = model([{'image':im}])
print(output)
However the model does not produce any meaningful predictions
[{'instances': Instances(num_instances=0, image_height=480, image_width=640, fields=[pred_boxes: Boxes(tensor([], device='cuda:0', size=(0, 4))), scores: tensor([], device='cuda:0'), pred_classes: tensor([], device='cuda:0', dtype=torch.int64)])}]
I don't quite get what went wrong, it was stated in the detectron2 documentation that:
You can also run inference directly like this:
model.eval()
with torch.no_grad():
outputs = model(inputs)
and
For inference of builtin models, only “image” key is required, and “width/height” are optional.
In which case, I can't seem to find the missing link here.
| I had the same issue; for me there were two things to fix. The first was resizing the shortest edge. I used Detectron2's built-in ResizeShortestEdge, imported from detectron2.data.transforms. The expected sizes can be found under cfg.INPUT, which lists the max/min sizes for test and train. The other issue was matching the input color-channel format the model expects (cfg.INPUT.FORMAT, which is BGR by default), as sketched below.
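A sketch of the two fixes (the exact sizes should come from cfg.INPUT on your model; 800/1333 are the usual COCO test values and are assumptions here):
import numpy as np
import torch
from detectron2.data.transforms import ResizeShortestEdge

img = np.asarray(im)[:, :, ::-1]  # RGB (PIL) -> BGR, detectron2's default input format
aug = ResizeShortestEdge(short_edge_length=800, max_size=1333)  # assumed sizes
img = aug.get_transform(img).apply_image(img)
tensor_img = torch.as_tensor(img.astype('float32').transpose(2, 0, 1))
with torch.no_grad():
    outputs = model([{'image': tensor_img}])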
| https://stackoverflow.com/questions/70722262/ |
How to add two new dimensions to the input feature before the last two layers in MLP | Initially we had an MLP of multiple layers with an input embedding of 200 dimensions. We now want to add two more dimensions to the original embedding to encode two important features. But as the original dimension is high, we fear that the MLP would neglect the two new dimensions, which are quite important.
Thus, we want to add (concatenate) the two new dimensions before the last two layers of the MLP. I am still a new learner of ML and PyTorch, and I have searched a lot online but failed to come up with a way to do this.
May I ask how we can achieve this using PyTorch? Thank you so much!
| You could simply create two input heads: one for the embedding, which goes through its own neural network, and one for the two features. The outputs of both networks are then simply concatenated and passed into a final layer.
Since one of the input heads receives only the two features (probably a vector of size two, right?), a single layer is enough for it.
You can combine two neural network modules simply like this:
# create a seperate network for your embedding input
class EmbeddingModel(nn.Module):
def __init__(self):
super(EmbeddingModel, self).__init__()
self.layer1 = nn.Linear(...)
. . .
self.layerN = nn.Linear(...)
def forward(self, x):
x = F.activation(self.layer1(x))
. . .
x = F.activation(self.layerN(x))
return x
# create a one layer network for your "two important features"
# use the same activation function as the last layer of the "EmbeddingModel"
class FeaturesModel(nn.Module):
def __init__(self):
super(FeaturesModel, self).__init__()
self.layer1 = nn.Linear(...)
def forward(self, x):
x = F.activation(self.layer1(x))
return x
# finally create your main-model which combines both
class MainModel(nn.Module):
def __init__(self):
super(MainModel, self).__init__()
self.embeddingModel = EmbeddingModel()
self.featuresModel = FeaturesModel()
# the input-dim to this layer has to be the output-dim of the embeddingModel + the output-dim of the featureModel
self.outputLayer = nn.Linear(...)
    def forward(self, x_embeddings, x_features):
        x_embeddings = self.embeddingModel(x_embeddings)
        x_features = self.featuresModel(x_features)
        x = torch.cat((x_embeddings, x_features), -1)
x = F.activation(self.outputLayer(x))
return x
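A sketch of how it would be called once the elided layer sizes are filled in (the 200-dim embedding size comes from the question):
model = MainModel()
emb = torch.randn(8, 200)  # embedding input
feats = torch.randn(8, 2)  # the two extra features
out = model(emb, feats)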
| https://stackoverflow.com/questions/70727212/ |
How to load an ONNX file and use it to make an ML prediction in PyTorch? | Below is the source code I use to load a .pth file and do a multi-class image classification prediction.
model = Classifier() # The Model Class.
model.load_state_dict(torch.load('<PTH-FILE-HERE>.pth'))
model = model.to(device)
model.eval()
# prediction function to test images
def predict(img_path):
image = Image.open(img_path)
resize = transforms.Compose(
[ transforms.Resize((256,256)), transforms.ToTensor()])
image = resize(image)
image = image.to(device)
y_result = model(image.unsqueeze(0))
result_idx = y_result.argmax(dim=1)
print(result_idx)
I converted the .pth file to an ONNX file using torch.onnx.export.
Now, How can I write a prediction script similar to above one by using the ONNX file alone and not using the .pth file.?
Is it possible to do so?
| You can use ONNX Runtime.
# !pip install onnx onnxruntime-gpu
import onnx, onnxruntime
model_name = 'model.onnx'
onnx_model = onnx.load(model_name)
onnx.checker.check_model(onnx_model)
image = Image.open(img_path)
resize = transforms.Compose(
[ transforms.Resize((256,256)), transforms.ToTensor()])
image = resize(image)
image = image.unsqueeze(0) # add fake batch dimension
image = image.to(device)
EP_list = ['CUDAExecutionProvider', 'CPUExecutionProvider']
ort_session = onnxruntime.InferenceSession(model_name, providers=EP_list)
def to_numpy(tensor):
return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()
# compute ONNX Runtime output prediction
ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(image)}
ort_outs = ort_session.run(None, ort_inputs)
# pick the predicted class: equivalent to the original max-search loop
import numpy as np
max_index = int(np.argmax(ort_outs[0][0]))
print(max_index)
You can follow the tutorial for a detailed explanation.
Usually, the purpose of using onnx is to load the model in a different framework and run inference there e.g. PyTorch -> ONNX -> TensorRT.
| https://stackoverflow.com/questions/70731064/ |
TypeError: nll_loss_nd(): argument 'input' (position 1) must be Tensor, not tuple | So I'm trying to train my BigBird model (BigBirdForSequenceClassification) and I got to the moment of the training, which ends with below error message:
Traceback (most recent call last):
File "C:\Users\######", line 189, in <module>
train_loss, _ = train()
File "C:\Users\######", line 152, in train
loss = cross_entropy(preds, labels)
File "C:\Users\#####\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\######\venv\lib\site-packages\torch\nn\modules\loss.py", line 211, in forward
return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction)
File "C:\Users\######\venv\lib\site-packages\torch\nn\functional.py", line 2532, in nll_loss
return torch._C._nn.nll_loss_nd(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
TypeError: nll_loss_nd(): argument 'input' (position 1) must be Tensor, not tuple
From what I understand, the problem happens because the train() function returns a tuple. Now my question is how I should approach such an issue. How do I change the output of the train() function to return a tensor instead of a tuple?
I have seen similar issues posted here but none of the solutions seems to be helpful in my case, not even
model = BigBirdForSequenceClassification(config).from_pretrained(checkpoint, return_dict=False)
(When I don't add return_dict=False I get a similar error message, but it says "TypeError: nll_loss_nd(): argument 'input' (position 1) must be Tensor, not SequenceClassifierOutput")
Please see my train code below:
def train():
model.train()
total_loss = 0
total_preds = []
for step, batch in enumerate(train_dataloader):
if step % 10 == 0 and not step == 0:
print('Batch {:>5,} of {:>5,}.'.format(step, len(train_dataloader)))
batch = [r.to(device) for r in batch]
sent_id, mask, labels = batch
preds = model(sent_id, mask)
loss = cross_entropy(preds, labels)
total_loss = total_loss + loss.item()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
optimizer.step()
optimizer.zero_grad()
preds = preds.detach().cpu().numpy()
total_preds.append(preds)
avg_loss = total_loss / len(train_dataloader)
total_preds = np.concatenate(total_preds, axis=0)
return avg_loss, total_preds
and then:
for epoch in range(epochs):
print('\n Epoch {:} / {:}'.format(epoch + 1, epochs))
train_loss, _ = train()
train_losses.append(train_loss)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
I will really appreciate any help on this case and thank you in advance!
| Ok, so it seems like I should have used BigBirdModel instead of BigBirdForSequenceClassification - issue solved
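For context: with return_dict=False the sequence-classification model returns a tuple whose first element is the logits tensor, so a hedged alternative fix — not the route taken above — would be to unpack before computing the loss:
preds = model(sent_id, mask)
logits = preds[0] if isinstance(preds, tuple) else preds.logits
loss = cross_entropy(logits, labels)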
| https://stackoverflow.com/questions/70746737/ |
Problem with Pytorch gradient of a non-sequential model | I am having trouble reproducing this Pytorch tutorial.
The model they introduce is :
class RNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(RNN, self).__init__()
self.hidden_size = hidden_size
self.i2h = nn.Linear(input_size + hidden_size, hidden_size)
self.i2o = nn.Linear(input_size + hidden_size, output_size)
self.softmax = nn.LogSoftmax(dim=1)
def forward(self, input, hidden):
combined = torch.cat((input, hidden), 1)
hidden = self.i2h(combined)
output = self.i2o(combined)
output = self.softmax(output)
return output, hidden
def initHidden(self, batch_size):
return torch.zeros(batch_size, self.hidden_size, dtype=torch.float32, requires_grad=True)
This model reproduces what is happening inside a RNN cell.
While coding, I ran into trouble with the gradient inside the model.
The code reproducing the issue is the following :
import torch
import torch.nn as nn
# Toy data to reproduce the issue
toy_data_batch = torch.tensor([[0, 1], [1, 0], [1, 0]], dtype=torch.float32)
toy_label_batch = torch.tensor([2, 0, 3], dtype=torch.long)
class RNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(RNN, self).__init__()
self.hidden_size = hidden_size
self.i2h = nn.Linear(input_size + hidden_size, hidden_size)
self.i2o = nn.Linear(input_size + hidden_size, output_size)
self.softmax = nn.LogSoftmax(dim=1)
def forward(self, input, hidden):
combined = torch.cat((input, hidden), 1)
hidden = self.i2h(combined)
output = self.i2o(combined)
output = self.softmax(output)
return output, hidden
def initHidden(self, batch_size):
return torch.zeros(batch_size, self.hidden_size, dtype=torch.float32, requires_grad=True)
# Model initialization
input_size = 2
hidden_size = 2
output_size = 4 # Targets in {0, 1, 2, 3}
batch_size = 3 # 3 data points in the batch
learning_rate = 5e-3
rnn = RNN(input_size, hidden_size, output_size)
hidden = rnn.initHidden(batch_size) # init hidden layer with zeros
# Negative log likelihood as it is classification
criterion = nn.NLLLoss()
# Forward pass
output, hidden = rnn(toy_data_batch, hidden)
#output, hidden = rnn(toy_data_batch, hidden) ### BUG: if I remove the comment here, It works
# Loss computation
loss = criterion(output, toy_label_batch)
# Backward pass
loss.backward()
print(rnn.i2o.weight.grad) # This one is fine
print(rnn.i2h.weight.grad) # This one isn't (has type None)
# This will fail, because of the None gradient
for weight in rnn.parameters():
weight.data.add_(weight.grad.data, alpha=-learning_rate)
The output is :
tensor([[-0.1892, 0.0462, 0.0000, 0.0000],
[ 0.1274, 0.1133, 0.0000, 0.0000],
[ 0.1455, -0.2525, 0.0000, 0.0000],
[-0.0837, 0.0930, 0.0000, 0.0000]])
None
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-3-5f02113fddf6> in <module>
54 # This will fail, because of the None gradient
55 for weight in rnn.parameters():
---> 56 weight.data.add_(weight.grad.data, alpha=-learning_rate)
AttributeError: 'NoneType' object has no attribute 'data'
I've noticed that if I uncomment the line
output, hidden = rnn(toy_data_batch, hidden)
#output, hidden = rnn(toy_data_batch, hidden) ### BUG: if I remove the comment here, It works
everything works without a problem. It seems to me that there is a problem with the initialization of the variable hidden. Since I've already turned on requires_grad for it, I don't know what else to try.
Thank you in advance, any help will be appreciated
| self.i2h has no gradient because it's not used in the first step of your model. When you backpropagate, your model only uses self.i2o in that first stage: the loss depends only on output, which never passes through self.i2h, so that layer has no effect on the loss. However, when you get to the second step, the network consumes a hidden that was calculated using self.i2h, and therefore there is then a traceable gradient through that layer.
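If you only need the manual update loop not to crash on parameters that received no gradient, a minimal sketch is to guard against None:
for weight in rnn.parameters():
    if weight.grad is not None:
        weight.data.add_(weight.grad.data, alpha=-learning_rate)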
| https://stackoverflow.com/questions/70748727/ |
Add(3)(5) nn.Sequential. How does it work? | class Add(nn.Module):
def __init__(self, value):
super().__init__()
self.value = value
def forward(self, x):
return x + self.value
calculator = nn.Sequential(
Add(3),
Add(2),
Add(5),
)
x = torch.tensor([1])
output = calculator(x)
print(output) # tensor([11])
I made the Add model, but I can't understand how nn.Sequential works.
At first, I understood it like this:
add = Add(torch.tensor([1]))
add(3) # tensor([4])
add = Add(add(3))
add(2) # tensor([6])
add = Add(add(2))
add(5) # tensor([11])
but Add(3)(1) also works.
I can't understand why Add(3)(1) works. Please help.
| You understand it right, the Sequential class in a nutshell just calls provided modules one by one. Here's the code of the forward method
def forward(self, input):
for module in self:
input = module(input)
return input
here, for module in self just iterates through the modules provided in the constructor (the Sequential.__iter__ method is in charge of it).
Sequential module calls this method when you call it using () syntax.
calculator = nn.Sequential(
Add(3),
Add(2),
Add(5),
)
output = calculator(torch.tensor([1]))
But how does it work? In python, you can make objects of a class callable by adding a __call__ method to the class definition. If the class does not contain this method explicitly, it was possibly inherited from a superclass. In the case of Add and Sequential, it's the Module class that implements __call__, and __call__ in turn calls the 'public' forward method defined by the user.
It could be confusing that python uses the same syntax for object instantiation and for function or method calls. To make the difference visible to the reader, python uses naming conventions: classes should be named in CamelCase with a capital first letter, and objects in snake_case (it's not obligatory, but it's a convention worth following).
Just like in your example, Add is a class and add is a callable object of this class:
add = Add(torch.tensor([1]))
So, you can call add just like you called the calculator in your example.
>>> add = Add(torch.tensor([1]))
>>> add(2)
Out: tensor([3])
But that won't work:
>>> add = Add(torch.tensor([1]))
>>> add(2)(1)
Out:
----> 3 add(2)(1)
TypeError: 'Tensor' object is not callable
That means that add(2) returns a Tensor object that does not implement __call__ method.
Compare this code with
>>> Add(torch.tensor([1]))(2)
Out:
tensor([3])
This code is the same as the first example, but rearranged a little bit.
--
To avoid confusion, I usually name objects differently: like add_obj = Add(1). It helps me to highlight a difference.
If you are not sure what you're working with, use functions type and isinstance. They would help to find out what's going on.
For example, if you check the add object, you could see that it's a callable object (i.e., it implements __call__)
>>> from typing import Callable
>>> isinstance(add, Callable)
True
And for a tensor:
>>> from typing import Callable
>>> isinstance(torch.tensor(1), Callable)
False
Hence, it will raise TypeError: 'Tensor' object is not callable in case you call it.
If you'd like to understand how python double-underscore methods like __init__ or __call__ work, you could read this page that describes the python data model.
(It could be a bit tedious, so you might prefer to read something like Fluent Python or another book.)
| https://stackoverflow.com/questions/70754176/ |
Pytorch is creating a non-empty Tensor with torch.empty((x,y)) | I just want to create an empty tensor containing only zero values, but here is what I am getting so far. Source: https://pytorch.org/docs/stable/generated/torch.empty.html
a=torch.empty((2,3), dtype=torch.int32, device = 'cuda')
a
tensor([[16843009, 1, 1],
[ 0, 1, 0]], device='cuda:0', dtype=torch.int32)
Screenshot as proof:
My question is: why? Is it a bug, or what?
| If you want a tensor of zeros, use torch.zeros.
torch.empty allocates a tensor but it does not initialise the contents, meaning that the tensor will contain whatever data happened to occupy that region of memory already.
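For example, torch.zeros takes the same arguments as torch.empty:
a = torch.zeros((2, 3), dtype=torch.int32, device='cuda')
# tensor([[0, 0, 0],
#         [0, 0, 0]], device='cuda:0', dtype=torch.int32)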
| https://stackoverflow.com/questions/70758474/ |
torch.no_grad() and detach() combined | I encountered many code fragments like the following for choosing an action, that include a mix of torch.no_grad and detach (where actor is some actor, SomeDistribution your preferred distribution), and I'm wondering whether they make sense:
def f():
with torch.no_grad():
x = actor(observation)
dist = SomeDistribution(x)
sample = dist.sample()
return sample.detach()
Is the use of detach in the return statement not unnecessary, as x has its requires_grad already set to False, so all computations using x should already be detached from the graph? Or do the computations after the torch.no_grad wrapper somehow end up on the graph again, so we need to detach them once again in the end (in which case it seems to me that no_grad would be unnecessary)?
Also, if I'm right, I suppose instead of omitting detach one could also omit torch.no_grad, and end up with the same functionality, but worse performance, so torch.no_grad is to be preferred?
| While it may be redundant, it depends on the internals of actor and SomeDistribution. In general, there are three cases I can think of where detach would be necessary in this code. Since you've already observed that x has requires_grad set to False then cases 2 and 3 don't apply to your specific case.
1. If SomeDistribution has internal parameters (leaf tensors with requires_grad=True) then dist.sample() may result in a computation graph connecting sample to those parameters. Without detaching, that computation graph, including those parameters, would be unnecessarily kept in memory after returning.
2. The default behavior within a torch.no_grad context is to return the result of tensor operations having requires_grad set to False. However, if actor(observation) for some reason explicitly sets requires_grad of its return value to True before returning, then a computation graph may be created that connects x to sample. Without detaching, that computation graph, including x, would be unnecessarily kept in memory after returning.
3. This one seems even more unlikely, but if actor(observation) actually just returns a reference to observation, and observation.requires_grad is True, then a computation graph all the way from observation to sample may be constructed during dist.sample().
As for the suggestion of removing the no_grad context in lieu of detach, this may result in the construction of a computation graph connecting observation (if it requires gradients) and/or the parameters of the distribution (if it has any) to x. The graph would be discarded after detach, but it does take time and memory to create the computation graph, so there may be a performance penalty.
In conclusion, it's safer to do both no_grad and detach, though the necessity of either depends on the details of the distribution and actor.
| https://stackoverflow.com/questions/70758855/ |
How can I retrieve elements in a multidimensional pytorch tensor by a list of indices? | I have two tensors: scores and lists
scores is of shape (x, 8) and lists of (x, 8, 4). I want to filter the max values for each row in scores and filter the respective elements from lists.
Take the following as an example (shape dimension 8 was reduced to 2 for simplicity):
scores = torch.tensor([[0.5, 0.4], [0.3, 0.8], ...])
lists = torch.tensor([[[0.2, 0.3, 0.1, 0.5],
[0.4, 0.7, 0.8, 0.2]],
[[0.1, 0.2, 0.1, 0.3],
[0.4, 0.3, 0.2, 0.5]], ...])
Then I would like to filter these tensors to:
scores = torch.tensor([0.5, 0.8, ...])
lists = torch.tensor([[0.2, 0.3, 0.1, 0.5], [0.4, 0.3, 0.2, 0.5], ...])
NOTE:
So far I tried to retrieve the indices from the original score vector and use them as an index vector to filter lists:
# PSEUDO-CODE
indices = scores.argmax(dim=1)
for list, idx in zip(lists, indices):
list = list[idx]
That is also where the question title comes from.
| I imagine you tried something like
indices = scores.argmax(dim=1)
selection = lists[:, indices]
This does not work because the indices are selected for every element in dimension 0, so the final shape is (x, x, 4).
To perform the correct selection you need to replace the slice with a range.
indices = scores.argmax(dim=1)
selection = lists[range(indices.size(0)), indices]
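If you want to keep torch.gather, an equivalent formulation is possible — a sketch checked against the shapes in the question (scores of shape (x, 8), lists of shape (x, 8, 4)), but treat it as a suggestion:
indices = scores.argmax(dim=1)
selection = torch.gather(lists, 1, indices[:, None, None].expand(-1, 1, lists.size(-1))).squeeze(1)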
| https://stackoverflow.com/questions/70760238/ |
Pytorch error when launching two distinct backward | I am building a simple autoencoder followed by an MLP neural net. Regarding the autoencoder, I am not running into any problems:
# ---- Prepare training set ----
x_data = train_set_categorical.drop(["churn"], axis=1).to_numpy()
labels = train_set_categorical.loc[:, "churn"].to_numpy()
dataset = TensorDataset(torch.Tensor(x_data), torch.Tensor(labels) )
loader = DataLoader(dataset, batch_size=127)
# ---- Model Initialization ----
model = AE()
# Validation using MSE Loss function
loss_function = nn.MSELoss()
# Using an Adam Optimizer with lr = 0.1
optimizer = torch.optim.Adam(model.parameters(),
lr = 1e-1,
weight_decay = 1e-8)
epochs = 50
outputs = []
losses = []
for epoch in range(epochs):
for (image, _) in loader:
# Output of Autoencoder
embbeding, reconstructed = model(image)
# Calculating the loss function
loss = loss_function(reconstructed, image)
# The gradients are set to zero,
# then the gradient is computed and stored.
# .step() performs parameter update
optimizer.zero_grad()
loss.backward()
optimizer.step()
# Storing the losses in a list for plotting
losses.append(loss)
if epoch == 49:
outputs.append(embbeding)
But then I am feeding an MLP with the outcome of the autoencoder and this is where things starts to fail
class Feedforward(torch.nn.Module):
def __init__(self):
super().__init__()
self.neural = torch.nn.Sequential(
torch.nn.Linear(33, 260),
torch.nn.ReLU(),
torch.nn.Linear(260, 450),
torch.nn.ReLU(),
torch.nn.Linear(450, 260),
torch.nn.ReLU(),
torch.nn.Linear(260, 1),
torch.nn.Sigmoid()
)
def forward(self, x):
outcome = self.neural(x.float())
return outcome
modelz = Feedforward()
criterion = torch.nn.BCELoss()
opt = torch.optim.Adam(modelz.parameters(), lr = 0.01)
modelz.train()
epoch = 20
for epoch in range(epoch):
opt.zero_grad()
# Forward pass
y_pred = modelz(x_train)
# Compute Loss
loss_2 = criterion(y_pred.squeeze(), torch.tensor(y_train).to(torch.float32))
#print('Epoch {}: train loss: {}'.format(epoch, loss.item()))
# Backward pass
loss_2.backward()
opt.step()
I get the following error:
RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling .backward() or autograd.grad() the first time.
Of course I have tried to add retain_graph=True to both backward calls, and to only the first one, but it does not solve the problem. If I run each part independently of the other it works, but run in sequence it fails, and I don't know why.
| You should be able to disconnect the output of the auto-encoder from the model by calling embbeding.detach(), before appending it to outputs.
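Concretely, in the autoencoder loop that would be (a sketch using the question's variable names):
if epoch == 49:
    outputs.append(embbeding.detach())
Detaching prevents the MLP's backward pass from trying to traverse the autoencoder's graph, which has already been freed.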
| https://stackoverflow.com/questions/70761665/ |
Why does PyTorch report CUDA out of memory, and why doesn't empty_cache help? | device = torch.device("cuda:0")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states = True)
model.to(device)
train_hidden_states = []
model.eval()
for batch in train_dataloader:
b_input_ids = batch[0].to(device)
b_input_mask = batch[1].to(device)
with torch.no_grad():
output = model(b_input_ids,
token_type_ids=None,
attention_mask=b_input_mask,
)
hidden_states = output[2][12]
train_hidden_states.append(hidden_states)
Here I am trying to get the last-layer embeddings of the BERT model for the data in train_dataloader.
The thing is that CUDA runs out of memory after 14 batches.
I tried to empty the cache, but it only decreases GPU usage a little bit.
with torch.cuda.device('cuda:0'):
torch.cuda.empty_cache()
What could be the problem?
| You are storing GPU tensors in the train_hidden_states list, so they accumulate GPU memory across batches. You can move them to CPU before pushing them to the list: train_hidden_states.append(hidden_states.cpu()).
| https://stackoverflow.com/questions/70762753/ |
How to replace the value of multiple cells in multiple rows in a Pytorch tensor? | I have a tensor
import torch
torch.zeros((5,10))
>>> tensor([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
How can I replace the values of X random cells in each row with random inputs (torch.rand())?
That is, if X = 2, in each row, 2 random cells should be replaced with torch.rand().
Since I need it to not break backpropagation I found here that replacing the .data attribute of the cells should work.
The only approach familiar to me is a for loop, but that's not efficient for a large tensor.
| You can try tensor.scatter_().
x = torch.zeros(3,4)
n_replace = 3 # number of cells to be replaced with random number
src = torch.randn(x.size())
index = torch.stack([torch.randperm(x.size()[1]) for _ in range(x.size()[0])])[:,:n_replace]
x.scatter_(1, index, src)
Out[22]:
tensor([[ 0.0000, 0.5769, 0.7432, -0.1776],
[-2.1673, -1.0802, 0.0000, 0.6241],
[-0.6421, 0.1315, 0.0000, -2.7224]])
To sample indices without repetition, you can also use torch.randperm directly:
perm = torch.randperm(tensor.size(0))
idx = perm[:k]
samples = tensor[idx]
| https://stackoverflow.com/questions/70763681/ |
PyTorch: What's the purpose of saving the optimizer state? | PyTorch is capable of saving and loading the state of an optimizer. An example is shown in the PyTorch tutorial. I'm currently just saving and loading the model state but not the optimizer. So what's the point of saving and loading the optimizer state besides not having to remember the optimizer's params such as the learning rate? And what's contained in the optimizer state?
| You should save the optimizer state if you want to resume model training later. Especially if Adam is your optimizer. Adam is an adaptive learning rate method, which means it computes individual learning rates for various parameters.
It is not required if you only want to use the saved model for inference.
However, it's best practice to save both the model state and the optimizer state.
You can also save loss history and other running metrics if you want to plot them later.
I'd do it like,
torch.save({
'epoch': epochs,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'train_loss_history': loss_history,
}, PATH)
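To resume training later, you would load it back like this (a sketch; model and optimizer must be constructed first):
checkpoint = torch.load(PATH)
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
loss_history = checkpoint['train_loss_history']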
| https://stackoverflow.com/questions/70768868/ |
Python TypeError - 'Class' object is not callable (Google Colab Example Inside) | [Redacted]
In this example, in my final cell of code, I try to call my model. This follows a tutorial from a YouTube video.
In this step, the video is able to perform the lines
model = UCC_Classifier(config)
then in the next cell
loss, output = model(input_ids.unsqueeze(dim=0), am.unsqueeze(dim=0), labels.unsqueeze(dim=0))
to successfully get a result. However, when I try to do the same thing, I am told my class is not callable. I cannot see any difference and am unsure why it might not be callable.
Thanks
| Your UCC_Classifier model should be a pl.LightningModule, not a pl.LightningDataModule.
| https://stackoverflow.com/questions/70773312/ |
Replace torch.gather by other operator? | I have a script where x1 and x2 are of size 1x68x8x8:
tmp_batch, tmp_channel, tmp_height, tmp_width = x1.size()
x1 = x1.view(tmp_batch*tmp_channel, -1)
max_ids = torch.argmax(x1, 1)
max_ids = max_ids.view(-1, 1)
x2 = x2.view(tmp_batch*tmp_channel, -1)
outputs_x_select = torch.gather(x2, 1, max_ids) # size of 68 x 1
As for the above code, I had trouble with torch.gather when I used an old ONNX version. Hence, I would like to find an alternative solution that replaces torch.gather with other operators but gives the same output as the above code. Could you please give me some suggestions?
| One workaround is to use the equivalent numpy method. If you include an import numpy as np statement somewhere, you could do the following.
outputs_x_select = torch.Tensor(np.take_along_axis(x2,max_ids,1))
If that gives you a grad related error, try
outputs_x_select = torch.Tensor(np.take_along_axis(x2.detach(),max_ids,1))
An approach without numpy: in this case, it seems that max_ids contains exactly one entry per row. Thus, I believe the following will work:
max_ids = torch.argmax(x1, 1) # do not reshape
x2 = x2.view(tmp_batch*tmp_channel, -1)
outputs_x_select = x2[torch.arange(tmp_batch*tmp_channel),max_ids]
| https://stackoverflow.com/questions/70775450/ |
Combining ParameterLists in PyTorch | I am trying to combine two ParameterLists in Pytorch. I've implemented the following snippet:
import torch
import torch.nn as nn  # needed for nn.ParameterList
plist = nn.ParameterList()  # renamed from `list`, which shadows the Python built-in
for i in sub_list_1:
plist.append(i)
for i in sub_list_2:
plist.append(i)
Is there a function that takes care of this without the need to loop over each list?
| You can use nn.ParameterList.extend, which works like python's built-in list.extend
plist = nn.ParameterList()
plist.extend(sub_list_1)
plist.extend(sub_list_2)
Alternatively, you can use += which is just an alias for extend
plist = nn.ParameterList()
plist += sub_list_1
plist += sub_list_2
| https://stackoverflow.com/questions/70779631/ |
ValueError: only one element tensors can be converted to Python scalars when converting list to float Torch tensor | I have the following:
type of X is: <class 'list'>
X: [tensor([[1.3373, 0.5666, 0.2337, ..., 0.4899, 0.1876, 0.5892],
[0.0320, 0.0797, 0.0052, ..., 0.3405, 0.0000, 0.0390],
[0.1305, 0.1281, 0.0021, ..., 0.6454, 0.1964, 0.0493],
...,
[0.2635, 0.0237, 0.0000, ..., 0.6635, 0.1376, 0.2988],
[0.0241, 0.5464, 0.1263, ..., 0.5766, 0.2352, 0.0140],
[0.1740, 0.1664, 0.0057, ..., 0.6056, 0.1020, 1.1573]],
device='cuda:0')]
However, following the instructions in this video, I get this error:
X_tensor = torch.FloatTensor(X)
ValueError: only one element tensors can be converted to Python scalars
I have the following conversion code:
X_tensor = torch.FloatTensor(X)
How can I fix this problem?
$ python
Python 3.8.10 (default, Nov 26 2021, 20:14:08)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.__version__
'1.10.1+cu113'
X here is a list of torch.tensors.
| Use torch.stack(). torch.FloatTensor(X) tries to interpret each list element as a Python scalar, which fails for a list of tensors; torch.stack instead concatenates them along a new leading dimension:
X = torch.stack(X)
| https://stackoverflow.com/questions/70780369/ |
How do I shift specific elements of a tensor with torch.roll? | I have a tensor x, that looks like this:
x = tensor([ 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10]
[ 11, 12, 13, 14, 15])
I'm trying to switch the first two and last two numbers of each tensor, like this:
x = tensor([ 4, 5, 3, 1, 2],
[ 9, 10, 8, 6, 7],
[ 14, 15, 13, 11, 12])
How could I do this with torch.roll()? How would I switch 3 instead of 1?
| Not sure if that can be done with torch.roll alone... However, you can achieve the desired result by using a temporary tensor and a paired assignment:
>>> x = torch.arange(1, 16).reshape(3,-1)
tensor([[ 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10],
[11, 12, 13, 14, 15]])
>>> tmp = x.clone()
# swap the two sets of columns
>>> x[:,:2], x[:,-2:] = tmp[:,-2:], tmp[:,:2]
Such that tensor x has been mutated as:
>>> x
tensor([[ 4, 5, 3, 1, 2],
[ 9, 10, 8, 6, 7],
[14, 15, 13, 11, 12]])
You can pull off this operation with torch.roll and some indexing:
>>> x = torch.arange(1, 21).reshape(4,-1)
tensor([[ 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10],
[11, 12, 13, 14, 15],
[16, 17, 18, 19, 20]])
>>> rolled = x.roll(-2,0)
tensor([[11, 12, 13, 14, 15],
[16, 17, 18, 19, 20],
[ 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10]])
# overwrite columns [1,-1[ from rolled with those from x
>>> rolled[:, 1:-1] = x[:, 1:-1]
Such that at this end you get:
>>> rolled
tensor([[11, 2, 3, 4, 15],
[16, 7, 8, 9, 20],
[ 1, 12, 13, 14, 5],
[ 6, 17, 18, 19, 10]])
| https://stackoverflow.com/questions/70781351/ |
How to apply regularization only to one layer in pytorch? | Let's imagine a network with 2 layers (X1, X2). I want to use L1 Norm on X1 and then do (loss + L1).backward() on X1. X2 is still trained but without the regularization. My goal is to make X1 become sparse.
I have already tried this, unfortunately the regularization is applied to all layers, even though it only uses parameters from one layer.
I have also tried to freeze X1, do loss.backward(), and then freeze X2 and do loss.backward() again, this time including the regularization, like this:
for parameter in model.X1.parameters():
parameter.requires_grad = False
loss.backward(retain_graph=True)
for parameter in model.X1.parameters():
parameter.requires_grad = True
for parameter in model.X2.parameters():
parameter.requires_grad = False
loss += l1_regularization
loss.backward()
optimizer.step()
The outcome is not as expected though. X2 does not get updated at all anymore and the values in X1 seem to be too low (all weights become very close to zero).
What am I doing wrong and is there any way to reach my goal?
Thanks for your help
| Your second implementation should work. However, it doesn't show the part where you set requires_grad = True for X2 afterwards (or at the start where you freeze X1). If that part is indeed missing in your code, then from the second loop onward, X2 will not get trained.
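A simpler alternative that avoids the freeze/unfreeze dance — a hedged sketch, assuming model.X1 holds the layer to be sparsified and l1_lambda is a hypothetical regularization strength — is to add the L1 penalty of X1's parameters to the loss and call backward once:
l1_lambda = 1e-4  # assumed strength
l1_penalty = sum(p.abs().sum() for p in model.X1.parameters())
(loss + l1_lambda * l1_penalty).backward()
optimizer.step()
Autograd then routes the penalty's gradient only into X1, while X2 is trained on the plain loss.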
| https://stackoverflow.com/questions/70784528/ |
Converting PyTorch to ONNX model increases file size for ALBert | Goal: Use this Notebook to perform quantisation on albert-base-v2 model.
Kernel: conda_pytorch_p36.
Outputs in Sections 1.2 & 2.2 show that:
converting vanilla BERT from PyTorch to ONNX stays the same size, 417.6 MB.
Quantization models are smaller than vanilla BERT, PyTorch 173.0 MB and ONNX 104.8 MB.
However, when running ALBert:
PyTorch and ONNX model sizes are different.
Quantized model sizes are bigger than vanilla.
I think this is the reason for poorer model performance of both Quantization methods of ALBert, compared to vanilla ALBert.
PyTorch:
Size (MB): 44.58906650543213
Size (MB): 22.373255729675293
ONNX:
ONNX full precision model size (MB): 341.64233207702637
ONNX quantized model size (MB): 85.53886985778809
Why might exporting ALBert from PyTorch to ONNX increase model size, but not for BERT?
Please let me know if there's anything else I can add to post.
| Explanation
ALBert model has shared weights among layers. torch.onnx.export outputs the weights to different tensors, which causes the model size to grow larger.
A number of GitHub issues regarding this phenomenon have been marked as solved.
The most common solution is to remove shared weights, that is to remove tensor arrays that contain the exact same values.
Solutions
Section "Removing shared weights" in onnx_remove_shared_weights.ipynb.
Pseudo-code:
from onnxruntime.transformers.onnx_model import OnnxModel
model=onnx.load(path)
onnx_model=OnnxModel(model)
count = len(model.graph.initializer)
same = [-1] * count
for i in range(count - 1):
if same[i] >= 0:
continue
for j in range(i+1, count):
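# has_same_value is assumed to be a helper that compares the raw data of two initializers element-wise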
if has_same_value(model.graph.initializer[i], model.graph.initializer[j]):
same[j] = i
for i in range(count):
if same[i] >= 0:
onnx_model.replace_input_of_all_nodes(model.graph.initializer[i].name, model.graph.initializer[same[i]].name)
onnx_model.update_graph()
onnx_model.save_model_to_file(output_path)
Source of both solutions
| https://stackoverflow.com/questions/70786010/ |
Incorporating dim of torch.topk in tf.nn.top_k | Pytorch provides the torch.topk(input, k, dim=None, largest=True, sorted=True) function to calculate the k largest elements of the given input tensor along a given dimension dim.
I have a tensor of shape (64, 128, 512) and I am using torch.topk in the following manner-
reduce = input.topk(k, dim=1).values
I found a similar tensorflow implementation: tf.nn.top_k(input, k=1, sorted=True, name=None).
My question is how to incorporate the dim=1 parameter in tf.nn.top_k so as to get a tensor of the same shape as the one calculated by pytorch?
| I agree with @jodag, you will have to transpose or reshape your tensor, since tf.math.top_k always works on the last dimension.
What you could also do is first get all the max values in the tensor along a certain dimension and then get the top k values from that tensor:
import tensorflow as tf
tf.random.set_seed(2)
k = 3
tensor = tf.random.uniform((2, 4, 6), maxval=10, dtype=tf.int32)
max_tensor = tf.reduce_max(tensor, axis=1)
k_max_tensor = tf.math.top_k(max_tensor, k=k, sorted=True).values
print('Original tensor --> ', tensor)
print('Max tensor --> ', max_tensor)
print('K-Max tensor --> ', k_max_tensor)
print('Unique K-Max tensor', tf.unique(tf.reshape(k_max_tensor, (tf.math.reduce_prod(tf.shape(k_max_tensor)), ))).y)
Original tensor --> tf.Tensor(
[[[1 6 2 7 3 6]
[7 5 1 1 0 6]
[9 1 3 9 1 4]
[6 0 6 2 4 0]]
[[4 6 8 2 4 7]
[5 0 8 2 8 9]
[0 2 0 0 9 8]
[9 3 8 9 0 6]]], shape=(2, 4, 6), dtype=int32)
Max tensor --> tf.Tensor(
[[9 6 6 9 4 6]
[9 6 8 9 9 9]], shape=(2, 6), dtype=int32)
K-Max tensor --> tf.Tensor(
[[9 9 6]
[9 9 9]], shape=(2, 3), dtype=int32)
Unique K-Max tensor tf.Tensor([9 6], shape=(2,), dtype=int32)
| https://stackoverflow.com/questions/70790336/ |
pytorch lightning epoch_end/validation_epoch_end | Could anybody break down the code and explain it to me? The parts that need help are indicated with the "#This part" comment. I would greatly appreciate any help, thanks.
def validation_epoch_end(self, outputs):
batch_losses = [x["val_loss"] for x in outputs] #This part
epoch_loss = torch.stack(batch_losses).mean()
batch_accs = [x["val_acc"] for x in outputs] #This part
epoch_acc = torch.stack(batch_accs).mean()
return {'val_loss': epoch_loss.item(), 'val_acc': epoch_acc.item()}
def epoch_end(self, epoch, result):
print("Epoch [{}], val_loss: {:.4f}, val_acc: {:.4f}".format(epoch, result['val_loss'], result['val_acc'])) #This part
| In your provided snippet, outputs is a list of dict elements which seem to contain at least the keys "val_loss" and "val_acc". It would be fair to assume they correspond to the validation loss and validation accuracy respectively.
Those two lines (annotated with the #This part comment) correspond to list comprehensions going over the elements inside the outputs list. The first one gathers the values of the key "val_loss" for each element in outputs. The second one does the same, this time gathering the values of the "val_acc" key.
## before
outputs = [{'val_loss': tensor(a), # element 0
'val_acc': tensor(b)},
{'val_loss': tensor(c), # element 1
'val_acc': tensor(d)}]
## after
batch_losses = [tensor(a), tensor(c)]
batch_acc = [tensor(b), tensor(d)]
| https://stackoverflow.com/questions/70790473/ |
Is this the right way to compute cosine similarity in PyTorch? | cos = torch.nn.CosineSimilarity(dim=-1, eps=1e-6)
c = torch.FloatTensor([1, 2, 4])
b = torch.FloatTensor([1, 2, 3])
simi = cos(b,c)
tensor(0.9915)
I am using dim=-1 in this function; does that mean it is a one-dimensional float list? Is this correct?
| Like with most indexing in python, -1 refers to last dimension (-2 would be second-to-last, etc...). Using dim=-1 when initializing cosine similarity means that cosine similarity will be computed along the last dimension of the inputs.
For example, if b and c were 3-dimensional tensors with size [X,Y,Z], then the result would be a 2-dimensional tensor of size [X,Y]. In your case, since the input tensors only have one dimension (size [3]), you end up getting a result tensor of size [], i.e. a scalar.
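A small sketch showing the effect of dim=-1 on batched inputs:
cos = torch.nn.CosineSimilarity(dim=-1, eps=1e-6)
b = torch.randn(4, 5, 3)
c = torch.randn(4, 5, 3)
print(cos(b, c).shape)  # torch.Size([4, 5]): the last dimension is reduced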
| https://stackoverflow.com/questions/70793278/ |
Torchscript trace "must be on the current device" error despite model and input both being on the same device | I am failing to run torch.jit.trace despite my best efforts, encountering RuntimeError: Input, output and indices must be on the current device
I have a (fairly complex) model which I have already put on GPU, along with a set of inputs, also on GPU. I can verify that all input tensors and model parameters & buffers are on the same device:
(Pdb) {p.device for p in self.parameters()}
{device(type='cuda', index=0)}
(Pdb) {p.device for p in self.buffers()}
{device(type='cuda', index=0)}
(Pdb) in_ = (<several tensors here>)
(Pdb) {p.device for p in in_}
{device(type='cuda', index=0)}
(Pdb) torch.cuda.current_device()
0
I can certify the model runs and the output is on the correct device:
(Pdb) self(*in_).device
device(type='cuda', index=0)
Despite all this, tracing fails:
(Pdb) generator_script = torch.jit.trace(self, example_inputs=in_)
*** RuntimeError: Input, output and indices must be on the current device
I understand about inputs and outputs, but what are these "indices" that must also be on the same device? What other elements that I am not accounting for could be causing trace to fail?
| If you're not yet mapping the device during the loading process, doing so could be the solution.[1] That is, mapping the device should happen during jit.load, not as a simple call of .to(device) after jit.load has already finished. See this page for more info.
As an example of what to do:
model = jit.load("your_traced_model.pt", map_location=torch.device("cuda"))
This is different from how it works for typical/non-JIT models, where you can simply do:
model = some_model_creation_function()
_ = model.to(torch.device("cuda"))
1 = this does not currently work for the MPS device.
| https://stackoverflow.com/questions/70798689/ |
Creating a new tensor according to a list of lengths | I have a tensor t with dim b x 3 and a list of lengths len = [l_0, l_1, ..., l_n]. All entries in len sum to b. I want to create a new tensor with dim n x 3, which stores the average of the entries in t. E.g. the first l_0 entries in t are averaged and build the first element in the new tensor. The following l_1 entries are averaged and build the second element, ...
Thanks for your help.
| You can do so using a combination a cumulative list of indices as helper and a list comprehension to construct the new array:
>>> b, lens = 10, [2, 3, 1, 3, 1]
>>> t = torch.rand(b, 3)
tensor([[0.3567, 0.3998, 0.9396],
[0.4061, 0.6465, 0.6955],
[0.3500, 0.4135, 0.5288],
[0.0726, 0.9575, 0.3785],
[0.6216, 0.2975, 0.3293],
[0.3878, 0.0735, 0.8181],
[0.1694, 0.5446, 0.1179],
[0.7793, 0.6613, 0.1748],
[0.0964, 0.9825, 0.1651],
[0.1421, 0.0994, 0.8086]])
Build the cumulative list of indices:
>>> c = torch.cumsum(torch.tensor([0] + lens), 0)
tensor([ 0, 2, 5, 6, 9, 10])
Loop over c by twos, with an overlapping window; for example zip(c[:-1], c[1:]) works well. Each selection from i to j is reduced over dim=0 (the snippet below sums; for the averages the question asks for, use .mean(0) instead, as shown after the stacked result).
>>> [t[i:j].sum(0) for i, j in zip(c[:-1], c[1:])]
[tensor([0.7628, 1.0463, 1.6351]),
tensor([1.0442, 1.6685, 1.2367]),
tensor([0.3878, 0.0735, 0.8181]),
tensor([1.0451, 2.1885, 0.4578]),
tensor([0.1421, 0.0994, 0.8086])]
Then you can stack the list:
>>> torch.stack([t[i:j].sum(0) for i, j in zip(c[:-1], c[1:])])
tensor([[0.7628, 1.0463, 1.6351],
[1.0442, 1.6685, 1.2367],
[0.3878, 0.0735, 0.8181],
[1.0451, 2.1885, 0.4578],
[0.1421, 0.0994, 0.8086]])
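Since the question asks for averages rather than sums, swap sum for mean in the same comprehension:
>>> torch.stack([t[i:j].mean(0) for i, j in zip(c[:-1], c[1:])])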
| https://stackoverflow.com/questions/70799782/ |
Pytorch Autograd with complex fourier transforms gives wrong results | I am trying to implement a real valued cost function that evaluates a complex input in frequency space with pytorch & autograd since I am interested in the gradients of the cost function w.r.t. the input. When I compare the autograd results with the derivative that I computed by hand (with Wirtinger calculus) I get a different result. I'm not sure where I made the mistake, whether it is in my implementation or in my own derivation of the gradient.
The cost function and its derivative by hand looks like this:
Formula of the cost function
My implementation is here
def f_derivative_by_hand(f):
f = torch.tensor(f, dtype=torch.complex128)
ftilde = torch.fft.fft(f)
absf = torch.abs(ftilde)
f2 = absf**2
C = torch.trapz(f2).numpy()
grads = 2 * torch.fft.ifft((ftilde)).numpy()
return C, grads
def f_derivative_autograd(f):
f = torch.tensor(f, dtype=torch.complex128, requires_grad=True)
ftilde = torch.fft.fft(f)
f2 = torch.abs(ftilde)**2
C = torch.trapz(f2)
C.backward()
grads = f.grad
return C.detach().numpy(), grads.detach().numpy()
When I use some data and evaluate it by both functions, the gradients of the implementation with automatic differentiation is tilted in comparison (note that I normalized the plotted arrays):
Autograd and derivative by hand comparison
I suspect there could also be something wrong with the automatic differentiation of fft, though, since if I remove the Fourier transform from the cost function and integrate the function in real space, both implementations match exactly except at the edges (again normalized):
No FFT autograd and derivative by hand
It would be fantastic if someone could help me figure out what is wrong!
| After some more investigation, I found the solution to the problem of the tilted derivatives. Apparently, the trapezoidal integration rule assumes boundary conditions that will show some artifacts at the boundaries as discussed in this pytorch forum post.
In my original problem, the observed tilt results from the integration of the fourier transformed signal which is asymmetric. The boundary artifacts introduce spatial frequencies which tilt the derivative in real space.
For me, the simplest solution is just to use a sum and weight by the frequency differential. Then, everything works out.
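A minimal sketch of that fix, assuming a uniform frequency grid (df is the assumed bin spacing):
ftilde = torch.fft.fft(f)
df = 1.0 / f.numel()  # assumed uniform spacing
C = torch.sum(torch.abs(ftilde)**2) * df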
| https://stackoverflow.com/questions/70804781/ |
RuntimeError: Expected all tensors to be on the same device | I am getting the following error:
RuntimeError: Expected all tensors to be on the same device
However, both my tensor are using .to(device=t.device).
self.indices_buf = torch.LongTensor().to(device=t.device)
self.beams_buf = torch.LongTensor().to(device=t.device)
self.beams_buf_float = torch.FloatTensor().to(device=t.device)
Here self.beams_buf_float.type(torch.LongTensor) gives Expected all tensors to be on the same device error.
torch.div(self.indices_buf, vocab_size, out=self.beams_buf_float)
self.beams_buf = self.beams_buf_float.type(torch.LongTensor)
I am confused here, as all of them are using device=t.device.
| When calling self.beams_buf_float.type(torch.LongTensor), the resulting tensor's device is set to the default one (i.e. CPU), because torch.LongTensor is a CPU tensor type.
The correct way to cast your tensor to a new type while maintaining the original device is to call self.beams_buf_float.to(torch.long) or self.beams_buf_float.long().
| https://stackoverflow.com/questions/70808388/ |
where can I find the source code for torch.unique()? | I can only find the following function calls in the pytorch source code (https://github.com/pytorch/pytorch/blob/2367face24afb159f73ebf40dc6f23e46132b770/torch/functional.py#L783):
_VF.unique_dim() and torch._unique2()
but they don't point anywhere else in the repository.
| Most of the pytorch backend code is implemented in C++ and/or CUDA. To see it you need to find the appropriate entrypoint in the source code. There are a couple ways to do this but the easiest I've found without downloading all the code yourself is to search for the keywords on github.
For example, if you go to github.com and the search for unique_dim repo:pytorch/pytorch, then click the "Code" tab on the left side you should quickly find the following.
From torch/jit/_builtins.py:103
17: _builtin_ops = [
...
103: (torch._VF.unique_dim, "aten::unique_dim"),
From this and further analysis of the code we can conclude that torch._VF.unique_dim is actually invoking the aten::unique_dim function from the ATen library.
Like most functions in ATen there are multiple implementations of this function. Most ATen functions are registered in aten/src/ATen/native/native_functions.yaml, generally the functions here will have a _cpu and _cuda version.
Going back to the search results we can find that the CUDA implementation is actually calling the function unique_dim_cuda at aten/src/ATen/native/cuda/Unique.cu:197
196: std::tuple<Tensor, Tensor, Tensor>
197: unique_dim_cuda(const Tensor& self, const int64_t dim, const bool sorted, const bool return_inverse, const bool return_counts) {
198: return AT_DISPATCH_ALL_TYPES_AND2(kBool, kHalf, self.scalar_type(), "unique_dim", [&] {
199: return unique_dim_cuda_template<scalar_t>(self, dim, false, return_inverse, return_counts);
200: });
201: }
and the CPU implementation is calling the function unique_dim_cpu at aten/src/ATen/native/Unique.cpp:271
270: std::tuple<Tensor, Tensor, Tensor>
271: unique_dim_cpu(const Tensor& self, const int64_t dim, const bool sorted, const bool return_inverse, const bool return_counts) {
272: return AT_DISPATCH_ALL_TYPES_AND2(at::ScalarType::BFloat16, at::ScalarType::Bool, self.scalar_type(), "unique_dim", [&] {
273: // The current implementation using `dim` always sorts due to unhashable tensors
274: return _unique_dim_cpu_template<scalar_t>(self, dim, false, return_inverse, return_counts);
275: });
276: }
From this point you should be able to trace the function calls further down to see exactly what they are doing.
Following a similar string of searches you should find that torch._unique2 is implemented at aten/src/ATen/native/cuda/Unique.cu:188 and aten/src/ATen/native/Unique.cpp:264 for CUDA and CPU respectively.
| https://stackoverflow.com/questions/70809160/ |
Argmax of 2d vector in C++ | I am working in python/pytorch and I have an example like this:
2d vector a
|
v
dim-0 ---> -----> dim-1 ------> -----> --------> dim-1
| [[-1.7739, 0.8073, 0.0472, -0.4084],
v [ 0.6378, 0.6575, -1.2970, -0.0625],
| [ 1.7970, -1.3463, 0.9011, -0.8704],
v [ 1.5639, 0.7123, 0.0385, 1.8410]]
|
v
Then, the argmax along dimension 1 will be:
# argmax (indices where max values are present) along dimension-1
In [215]: torch.argmax(a, dim=1)
Out[215]: tensor([1, 1, 0, 3])
My question is: given the 2d vector a as above, how could I implement the argmax function in C++ to give the same output as above? Thanks for reading.
This is what I did
vector<vector<float>> a_vect
{
{-1.7739, 0.8073, 0.0472, -0.4084},
{0.6378, 0.6575, -1.2970, -0.0625},
{1.7970, -1.3463, 0.9011, -0.8704},
{1.5639, 0.7123, 0.0385, 1.8410}
};
std::vector<int>::iterator max = max_element(a_vect.begin() , a_vect.end()-a_vect.begin());
| You can use std::max_element to find the index of the maximum in each sub-vector:
#include <algorithm>
#include <iostream>
#include <vector>
using std::vector;
int main()
{
vector<vector<float>> a_vect=
{
{-1.7739, 0.8073, 0.0472, -0.4084},
{0.6378, 0.6575, -1.2970, -0.0625},
{1.7970, -1.3463, 0.9011, -0.8704},
{1.5639, 0.7123, 0.0385, 1.8410}
};
vector<int> max_index;
for(auto& v:a_vect)
max_index.push_back(std::max_element(v.begin(),v.end())-v.begin());
for(auto i:max_index)
std::cout << i << ' '; // 1 1 0 3
}
| https://stackoverflow.com/questions/70813025/ |