st81468 | Hi,
When I use F.instance_norm with batch_size = 1, everything runs fine.
But with a higher batch size I get some size errors.
Each element of my batch is a person, so I want to give weights to normalize per person and per channel, e.g. a batch of 3 persons, 64 channels and whatever 2D size => weights of size (3, 64).
If I get it right, InstanceNorm is perfect for that, but I cannot pass anything other than 64 elements to F.instance_norm(weight=…).
I tried (3, 64) and 192 (3*64), but it only accepts 64 elements (which is wrong for my case, because I want parameters per channel per batch element).
Does anyone know how to do it?
Thanks in advance,
Pierre |
st81469 | Solved by SimonW in post #2
Instance norm does not use per-element weight. It doesn’t make sense as a network layer. You can just do the affine transform yourself after instance norm… |
st81470 | Instance norm does not use per-element weight. It doesn’t make sense as a network layer. You can just do the affine transform yourself after instance norm… |
st81471 | Thanks for the answer,
I misunderstood some concepts, but after reading again it’s clearer.
Your solution is working great; I’m just doing the “adaptive” layer on my own after a classic instance norm.
Thanks again |
st81472 | For those who may have the same question:
I’m just doing a classical instance norm and then I multiply by my own weights to get the adaptive behaviour:
norm = nn.InstanceNorm2d(num_channels, affine=False)
out = norm(x)
weighted_out = out * w_custom + b_custom |
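A slightly fuller sketch of this pattern (an illustration with made-up sizes, not the original poster's code): with per-person, per-channel weights of shape (N, C), reshape them to (N, C, 1, 1) so they broadcast over the spatial dimensions.

import torch
import torch.nn as nn

N, C, H, W = 3, 64, 32, 32
x = torch.randn(N, C, H, W)
w_custom = torch.randn(N, C)   # one scale per person and per channel
b_custom = torch.randn(N, C)   # one shift per person and per channel

norm = nn.InstanceNorm2d(C, affine=False)   # plain instance norm, no learned affine
out = norm(x)
# broadcast the (N, C) weights over H and W
weighted_out = out * w_custom.view(N, C, 1, 1) + b_custom.view(N, C, 1, 1)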
st81473 | Hello everyone, hope you are having a great time.
I wanted to create an autoencoder, a simple one. If my memory serves me correctly, back in the day one way to create an autoencoder was to share weights between encoder and decoder; that is, the decoder simply used the transpose of the encoder's weights. Aside from the practicality of this and whether or not it is for the best, can you please help me do this?
Based on this discussion, I tried doing:
self.decoder[0].weight = self.encoder[0].weight.t()
and this won't work. I get:
TypeError : cannot assign ‘torch.FloatTensor’ as parameter ‘weight’ (torch.nn.Parameter or None expected)
So I ended up doing:
class AutoEncoder(nn.Module):
    def __init__(self, embeddingsize=40):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(28*28, embeddingsize),
                                     nn.Tanh())
        self.decoder = nn.Sequential(nn.Linear(embeddingsize, 28*28),
                                     nn.Sigmoid())
        self.decoder[0].weight = nn.Parameter(self.encoder[0].weight.t())

    def forward(self, input):
        ....
        return output
The network trains and I get no errors, but I'm not sure if it uses the very same weights for both of them, or if the initial weights are simply used as initial values and nn.Parameter() simply creates brand new weights for the decoder!
Any help in this regard is greatly appreciated, and thanks a lot in advance |
st81474 | Solved by InnovArul in post #2
I guess, you have already read the answer.
I am linking the post just for completion! |
st81475 | I guess, you have already read the answer.
I am linking the post just for completion!
How to create and train a tied autoencoder?
To me, mixed approach looks better.
For the record, I have updated the gist to verify that it works too. |
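For readers of this thread, a minimal sketch of truly tied weights (an illustration, not the gist linked above): keep a single nn.Parameter and apply its transpose in the decoder through the functional API, so both directions share one tensor and gradients accumulate into it.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TiedAutoEncoder(nn.Module):
    def __init__(self, in_dim=28*28, embedding_size=40):
        super().__init__()
        # one shared weight matrix, shape (embedding_size, in_dim)
        self.weight = nn.Parameter(torch.randn(embedding_size, in_dim) * 0.01)
        self.enc_bias = nn.Parameter(torch.zeros(embedding_size))
        self.dec_bias = nn.Parameter(torch.zeros(in_dim))

    def forward(self, x):
        z = torch.tanh(F.linear(x, self.weight, self.enc_bias))
        # decoder reuses the same parameter, transposed
        out = torch.sigmoid(F.linear(z, self.weight.t(), self.dec_bias))
        return out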
st81476 | Thanks a lot. It would be much better if you copy/paste all of that code here as well; it's kind of hard to be redirected to another website.
Anyway, I have some questions as well.
Why can't we simply do:
self.decoder[0].weight = self.encoder[0].weight.t()
and instead we must do:
self.decoder[0].weight.data = self.encoder[0].weight.data.transpose(0,1)
Doesn't .data only copy the raw values from the source to the destination (i.e. encoder to decoder)?
So basically these would be two different weights that happen to have the same initial values, and in backprop they just get tuned independently. (Please see the images below.)
My second question is, why can't we use .t() instead of transpose(0,1)? Aren't they interchangeable?
Also I noticed, simply doing :
weights = nn.Parameter(torch.randn_like(self.encoder[0].weight))
self.encoder[0].weight.data = weights.clone()
self.decoder[0].weight.data = self.encoder[0].weight.data.transpose(0, 1)
results in different weight visualizations. When I tried to visualize both weights, they just look different!
(images: trained_encoder_w.jpg, trained_decoder_w.png)
Update:
After transposing the decoder's weight and visualizing it, it turns out they are identical (the decoder's weight visualization is a bit washed out, but they do look alike). |
st81477 | I have been working on a similar problem… After following @InnovArul's code in the thread already linked, I was able to get it to work. Though I am unsure how the different methods affect the end result… they all seem to work, so is it just a matter of speed? I also have another related question: what if I wish to tie weights between layers that are within different classes or modules? Would I return the weight data and then pass the weight data into the other classes? Would this still allow me to tie the weights?
Edit: Of course, after staring at @InnovArul's code for an hour… it is straight after I post this question that I figure out what he is doing regarding passing weight data through classes, and so I figured out my own question. I still wonder about the difference between the three methods though. |
st81478 | Is there any way to access the computation graph created by Pytorch during forward pass and shift that to CPU from GPU? I want to free the GPU so that I can load the next part of the model on it and continue the training. |
st81479 | Hi @Rishabh_Dahale,
I don’t think it is possible.
But if your problem is that the computation graph is too big to fit in your GPU memory, gradient checkpointing might be able to help you here.
Please take a look at the official doc and at this tutorial. Even though the tutorial uses the Variable API, the checkpointing is still relevant. |
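A minimal sketch of gradient checkpointing with torch.utils.checkpoint (an illustrative toy model, not the poster's): activations inside the checkpointed block are not stored during the forward pass and are recomputed during backward, trading compute for GPU memory.

import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

block1 = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 1024), nn.ReLU())
block2 = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10))

x = torch.randn(8, 1024, requires_grad=True)
h = checkpoint(block1, x)   # block1's intermediate activations are not kept in memory
out = block2(h)
out.sum().backward()        # block1 is re-run here to recompute what backward needs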
st81480 | Hi, I was experimenting with LSTMs and noted that the training for an unrolled LSTM seems to be a lot worse than a rolled one. The test errors I get are a lot higher.
So below are the two variants of my code that are relevant; the remainder of my code is untouched. This first one passes the whole sequence in a single LSTM call (the rolled version) before sending the last output to a fully connected layer.
class Net(nn.Module):
    def __init__(self, feature_dim, hidden_dim, batch_size):
        super(Net, self).__init__()
        num_layers = 1
        # single layer lstm
        self.lstm = nn.LSTM(feature_dim, hidden_size=hidden_dim, num_layers=num_layers, batch_first=True, dropout=0.7)
        self.h0 = Variable(torch.randn(num_layers, batch_size, hidden_dim))
        self.c0 = Variable(torch.randn(num_layers, batch_size, hidden_dim))
        # fc layers
        self.fc1 = nn.Linear(hidden_dim, 2)

    def forward(self, x, mode=False):
        output, hn = self.lstm(x, (self.h0, self.c0))
        output = self.fc1(output[:, -1, :])
        return output
And the test errors (rightmost result, out of 100):
epoch 0 tr loss 54.90 te loss 17.37 tr err 144/316 te err 51/100
epoch 20 tr loss 48.21 te loss 15.11 tr err 96/316 te err 31/100
epoch 40 tr loss 37.15 te loss 13.07 tr err 71/316 te err 27/100
epoch 60 tr loss 31.83 te loss 15.43 tr err 62/316 te err 28/100
epoch 80 tr loss 27.14 te loss 25.34 tr err 45/316 te err 29/100
epoch 100 tr loss 24.40 te loss 32.11 tr err 39/316 te err 28/100
epoch 120 tr loss 23.74 te loss 22.59 tr err 32/316 te err 24/100
epoch 140 tr loss 28.67 te loss 23.78 tr err 50/316 te err 26/100
epoch 160 tr loss 15.99 te loss 29.97 tr err 24/316 te err 30/100
epoch 180 tr loss 18.61 te loss 29.87 tr err 22/316 te err 26/100
epoch 200 tr loss 25.49 te loss 36.15 tr err 31/316 te err 28/100
epoch 220 tr loss 20.56 te loss 33.28 tr err 33/316 te err 24/100
epoch 240 tr loss 6.13 te loss 49.73 tr err 7/316 te err 25/100
epoch 260 tr loss 18.26 te loss 38.68 tr err 12/316 te err 27/100
epoch 280 tr loss 4.94 te loss 54.48 tr err 4/316 te err 23/100
epoch 300 tr loss 4.12 te loss 57.66 tr err 9/316 te err 25/100
epoch 320 tr loss 20.31 te loss 47.79 tr err 28/316 te err 28/100
epoch 340 tr loss 3.74 te loss 76.23 tr err 10/316 te err 28/100
epoch 360 tr loss 20.10 te loss 45.14 tr err 25/316 te err 23/100
epoch 380 tr loss 2.62 te loss 54.53 tr err 16/316 te err 28/100
epoch 400 tr loss 2.22 te loss 51.11 tr err 13/316 te err 24/100
epoch 420 tr loss 2.21 te loss 55.38 tr err 12/316 te err 29/100
epoch 440 tr loss 5.46 te loss 51.78 tr err 11/316 te err 22/100
epoch 460 tr loss 1.88 te loss 46.23 tr err 13/316 te err 25/100
epoch 480 tr loss 8.04 te loss 43.05 tr err 19/316 te err 25/100
Now I loop through the data and pass each timestep before sending the final output to a fully connected layer.
class Net(nn.Module):
    def __init__(self, feature_dim, hidden_dim, batch_size):
        super(Net, self).__init__()
        # lstm architecture
        self.hidden_size = hidden_dim
        self.input_size = feature_dim
        self.batch_size = batch_size
        self.num_layers = 1
        # lstm
        self.lstm = nn.LSTM(feature_dim, hidden_size=self.hidden_size, num_layers=self.num_layers, batch_first=True)
        # fc layers
        self.fc1 = nn.Linear(hidden_dim, 2)

    def forward(self, x, mode=False):
        # initialize hidden and cell
        hn = Variable(torch.randn(self.num_layers, self.batch_size, self.hidden_size))
        cn = Variable(torch.randn(self.num_layers, self.batch_size, self.hidden_size))
        # step through the sequence one timestep at a time
        for xt in torch.t(x):
            output, (hn, cn) = self.lstm(xt[:, None, :], (hn, cn))
        # output is [batch size, timestep = 1, hidden dim]
        output = self.fc1(output[:, 0, :])
        return output
And the test errors
epoch 0 tr loss 54.89 te loss 17.44 tr err 154/316 te err 53/100
epoch 20 tr loss 48.50 te loss 17.40 tr err 84/316 te err 43/100
epoch 40 tr loss 36.92 te loss 15.90 tr err 72/316 te err 34/100
epoch 60 tr loss 32.13 te loss 18.82 tr err 52/316 te err 32/100
epoch 80 tr loss 29.61 te loss 27.07 tr err 41/316 te err 27/100
epoch 100 tr loss 30.03 te loss 28.65 tr err 41/316 te err 31/100
epoch 120 tr loss 22.94 te loss 39.26 tr err 32/316 te err 31/100
epoch 140 tr loss 22.82 te loss 43.07 tr err 28/316 te err 33/100
epoch 160 tr loss 19.11 te loss 47.77 tr err 34/316 te err 32/100
epoch 180 tr loss 19.52 te loss 46.45 tr err 29/316 te err 33/100
epoch 200 tr loss 22.89 te loss 45.91 tr err 21/316 te err 29/100
epoch 220 tr loss 24.83 te loss 50.92 tr err 28/316 te err 35/100
epoch 240 tr loss 12.37 te loss 54.97 tr err 36/316 te err 34/100
epoch 260 tr loss 11.72 te loss 54.28 tr err 30/316 te err 33/100
epoch 280 tr loss 9.71 te loss 55.99 tr err 20/316 te err 35/100
epoch 300 tr loss 21.23 te loss 71.60 tr err 27/316 te err 34/100
epoch 320 tr loss 8.87 te loss 53.11 tr err 32/316 te err 31/100
epoch 340 tr loss 7.34 te loss 59.80 tr err 32/316 te err 37/100
epoch 360 tr loss 4.35 te loss 73.08 tr err 7/316 te err 35/100
epoch 380 tr loss 5.93 te loss 68.64 tr err 27/316 te err 33/100
epoch 400 tr loss 3.67 te loss 78.00 tr err 18/316 te err 35/100
epoch 420 tr loss 15.13 te loss 64.23 tr err 39/316 te err 38/100
epoch 440 tr loss 2.61 te loss 88.74 tr err 8/316 te err 38/100
epoch 460 tr loss 4.82 te loss 82.88 tr err 5/316 te err 38/100
epoch 480 tr loss 2.72 te loss 93.69 tr err 8/316 te err 42/100
I have run this experiment several times and always see that the manually unrolled version performs worse. Is there something wrong with the way I am manually stepping through the LSTM? |
st81481 | Your comparison is not fair: in the first version you are defining c0 and h0 in the init function, so they are constant throughout training, but in the second version you are setting h0 and c0 to a new random tensor on every forward pass.
Solving this mismatch will probably answer your question. |
st81482 | Hello, I made the following modification to my code. I initialized hn and cn once when creating the neural network. Then I passed their values to h0, c0 every time a forward pass is performed. I have run the code several times, but I am never able to get equally good results. On the other hand, the rolled LSTM always provides roughly similar results over the epochs.
class Net(nn.Module):
    def __init__(self, feature_dim, hidden_dim, batch_size):
        super(Net, self).__init__()
        # lstm architecture
        self.hidden_size = hidden_dim
        self.input_size = feature_dim
        self.batch_size = batch_size
        self.num_layers = 1
        # initialize hidden and cell
        self.hn = Variable(torch.randn(self.num_layers, self.batch_size, self.hidden_size))
        self.cn = Variable(torch.randn(self.num_layers, self.batch_size, self.hidden_size))
        # lstm
        self.lstm = nn.LSTM(feature_dim, hidden_size=self.hidden_size, num_layers=self.num_layers, batch_first=True)
        # fc layers
        self.fc1 = nn.Linear(hidden_dim, 2)

    def forward(self, x, mode=False):
        h0 = self.hn
        c0 = self.cn
        # step through the sequence one timestep at a time
        for (i, xt) in enumerate(torch.t(x)):
            output, (h0, c0) = self.lstm(xt[:, None, :], (h0, c0))
        output = self.fc1(output[:, -1, :])
        return output
epoch 0 tr loss 54.92 te loss 17.42 tr err 158/316 te err 56/100
epoch 20 tr loss 49.03 te loss 17.09 tr err 85/316 te err 36/100
epoch 40 tr loss 34.17 te loss 16.82 tr err 70/316 te err 26/100
epoch 60 tr loss 30.62 te loss 24.70 tr err 57/316 te err 31/100
epoch 80 tr loss 26.15 te loss 26.07 tr err 41/316 te err 32/100
epoch 100 tr loss 22.72 te loss 39.18 tr err 41/316 te err 33/100
epoch 120 tr loss 21.97 te loss 44.00 tr err 49/316 te err 34/100
epoch 140 tr loss 18.72 te loss 46.30 tr err 29/316 te err 32/100
epoch 160 tr loss 18.30 te loss 47.71 tr err 33/316 te err 35/100
epoch 180 tr loss 13.59 te loss 51.09 tr err 22/316 te err 36/100
epoch 200 tr loss 10.30 te loss 72.76 tr err 11/316 te err 40/100
epoch 220 tr loss 11.10 te loss 71.32 tr err 23/316 te err 37/100
epoch 240 tr loss 7.85 te loss 71.26 tr err 8/316 te err 36/100
epoch 260 tr loss 8.96 te loss 60.27 tr err 21/316 te err 32/100
epoch 280 tr loss 6.97 te loss 63.88 tr err 10/316 te err 36/100
epoch 300 tr loss 10.76 te loss 65.86 tr err 8/316 te err 36/100
epoch 320 tr loss 4.51 te loss 62.41 tr err 9/316 te err 35/100
epoch 340 tr loss 4.13 te loss 60.39 tr err 8/316 te err 33/100
epoch 360 tr loss 15.63 te loss 65.40 tr err 16/316 te err 36/100
epoch 380 tr loss 14.48 te loss 73.36 tr err 81/316 te err 36/100
epoch 400 tr loss 9.04 te loss 62.02 tr err 5/316 te err 37/100
epoch 420 tr loss 3.63 te loss 55.84 tr err 16/316 te err 29/100
epoch 440 tr loss 1.13 te loss 74.18 tr err 0/316 te err 39/100
epoch 460 tr loss 0.07 te loss 101.76 tr err 0/316 te err 45/100
epoch 480 tr loss 0.02 te loss 112.72 tr err 0/316 te err 44/100 |
st81483 | Can you check your input to the unrolled LSTM? It could just be that your input is always the same. Another important issue is that the output from your first LSTM model (auto-unrolled) contains the output from every time step, whereas the manually unrolled model only keeps the output from the last time step.
output of shape (seq_len, batch, hidden_size * num_directions): tensor containing the output features (h_t) from the last layer of the LSTM, for each t. If a torch.nn.utils.rnn.PackedSequence has been given as the input, the output will also be a packed sequence.
h_n of shape (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t = seq_len
c_n (num_layers * num_directions, batch, hidden_size): tensor containing the cell state for t = seq_len |
st81484 | Hi, I did some tests as you suggested to check the variables when using the unrolled and auto-unrolled LSTMs.
I computed the L2 norm of the difference of the hidden states and it is 0.
I computed the L2 norm of the difference of the inputs and it is 0.
I computed the L2 norm of the difference of the outputs and it is NOT 0.
I pasted the outputs of checks 1 and 2 below for a batch size of 4. I can't tell what is going on, or why the outputs at each timestep would not be the same. I have no dropout, and I use the same initializer for both the rolled and auto-rolled LSTMs.
class Net(nn.Module):
    def __init__(self, feature_dim, hidden_dim, batch_size):
        super(Net, self).__init__()
        # lstm architecture
        self.hidden_size = hidden_dim
        self.input_size = feature_dim
        self.batch_size = batch_size
        self.num_layers = 1
        # initialize hidden and cell
        self.hn = Variable(torch.randn(self.num_layers, self.batch_size, self.hidden_size))
        self.cn = Variable(torch.randn(self.num_layers, self.batch_size, self.hidden_size))
        # lstm
        self.lstm = nn.LSTM(feature_dim, hidden_size=self.hidden_size, num_layers=self.num_layers, batch_first=True)
        # fc layers
        self.fc1 = nn.Linear(hidden_dim, 2)

    def forward(self, x, mode=False):
        # xt is correct
        h0 = self.hn
        c0 = self.cn
        print("Original shape ", np.shape(x))
        print("Tranpose shape ", np.shape(torch.t(x)))
        output_all, (hn_all, cn_all) = self.lstm(x, (h0, c0))
        print("Original output shape ", np.shape(output_all))
        print("Original hn shape ", np.shape(hn_all))
        print("Original cn shape ", np.shape(cn_all))
        # step through the sequence one timestep at a time
        for (i, xt) in enumerate(torch.t(x)):
            output, (h0, c0) = self.lstm(xt[:, None, :], (h0, c0))
            # CHECK INPUTS
            print(np.linalg.norm(xt - x[:, i, :]))
        print("New hn shape ", np.shape(h0))
        print("New cn shape ", np.shape(c0))
        # CHECK STATES
        print(np.linalg.norm(hn_all - h0))
        print(np.linalg.norm(cn_all - c0))
        output = self.fc1(output[:, -1, :])
        return output
Original shape torch.Size([4, 50, 28])
Tranpose shape torch.Size([50, 4, 28])
Original output shape torch.Size([4, 50, 30])
Original hn shape torch.Size([1, 4, 30])
Original cn shape torch.Size([1, 4, 30])
[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[Variable containing:
0
[torch.FloatTensor of size 1]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
(the same zero-valued block is printed once per timestep; the remaining 49 repetitions are omitted)
New hn shape torch.Size([1, 4, 30])
New cn shape torch.Size([1, 4, 30])
[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[Variable containing:
0
[torch.FloatTensor of size 1]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[Variable containing:
0
[torch.FloatTensor of size 1]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]] |
st81485 | Can someone PLEASE send me a SIMPLE example of TRAINING a rolled and unrolled LSTM !?
Use batch first
Use for loop for unrolled |
st81486 | Hi Knog and others who are looking for rolling training.
This training script has an option to train the model using the rolling method you mentioned earlier and the results are good too.
script - link |
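For anyone wanting a minimal side-by-side example (a self-contained sketch, not the linked script): both variants below use batch_first=True, share the same zero initial states and take the last timestep's output, so they should agree up to floating-point noise.

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=28, hidden_size=30, num_layers=1, batch_first=True)
fc = nn.Linear(30, 2)

x = torch.randn(4, 50, 28)            # (batch, seq_len, features)
h0 = torch.zeros(1, 4, 30)
c0 = torch.zeros(1, 4, 30)

# rolled: one call over the whole sequence
out_all, _ = lstm(x, (h0, c0))
rolled = fc(out_all[:, -1, :])

# manually unrolled: one timestep per call, carrying the states forward
h, c = h0, c0
for t in range(x.size(1)):
    out_t, (h, c) = lstm(x[:, t:t+1, :], (h, c))
unrolled = fc(out_t[:, -1, :])

print(torch.allclose(rolled, unrolled, atol=1e-6))  # expected: True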
st81487 | Python 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34)
[GCC 7.3.0] on linux
Type “help”, “copyright”, “credits” or “license” for more information.
import torch
Traceback (most recent call last):
File “”, line 1, in
File “/var/storage/shared/titanv/sys/jobs/application_1545092406025_20974/anaconda3/envs/python36/lib/python3.6/site-packages/torch/init.py”, line 84, in
from torch._C import *
ImportError: /var/storage/shared/titanv/sys/jobs/application_1545092406025_20974/anaconda3/envs/python36/lib/python3.6/site-packages/torch/lib/libtorch.so.1: undefined symbol: _ZNK2at11TypeDefault19tensorWithAllocatorEN3c108ArrayRefIlEEPNS1_9AllocatorE
##########################################################################
I installed PyTorch in a new conda env using conda. I get this problem when I import torch in Python, as shown above. How can I solve it? Thanks |
st81488 | They recommend using pip to install it instead of conda, even if you're in a conda environment. It might be related to that. I was trying to understand why that's the recommendation when I hit your question.
I’m also wondering if it’s normal for libtorch.so to be about 1.3 GB? If so, why? |
st81489 | torchtext.data.iterator.BucketIterator
I am writing some sentiment analysis code using the torchtext BucketIterator, and I'm surprised by the behavior of how we make the dataset.
For example, if we have:
from torchtext.data import TabularDataset

TEXT = data.Field(tokenize = 'spacy', include_lengths = True, preprocessing= lambda x: preprocessor(x), lower=True)
LABEL = data.LabelField(dtype = torch.long)
INDEX = data.RawField()
INDEX.is_target = False

train_data = TabularDataset('./data/train.tsv',
                            format='tsv',
                            skip_header=True,
                            fields=[('PhraseId', None), ('SentenceId', None), ('Phrase', TEXT), ('Sentiment', LABEL)])
test_data = TabularDataset('./data/test.tsv',
                           format='tsv',
                           skip_header=True,
                           fields=[('PhraseId', INDEX), ('SentenceId', None), ('Phrase', TEXT)])

TEXT.build_vocab(train_data,
                 #max_size = MAX_VOCAB_SIZE,
                 vectors = 'glove.6B.100d',
                 unk_init = torch.Tensor.normal_)
LABEL.build_vocab(train_data)
The above is the kind of thing you wouldn't be interested in, but the magic happens below:
BATCH_SIZE = 64
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

test_iterator = data.BucketIterator.splits(
    test_data,
    sort = True,
    sort_within_batch=True,
    sort_key = lambda x: len(x.Phrase),
    batch_size = BATCH_SIZE,
    device = device)
If we have data like the above, the code below shows:
vars(test_iterator[107].dataset)
Out[47]:
{‘Phrase’: [‘movie’, ‘becomes’, ‘heady’, ‘experience’], ‘PhraseId’: ‘156168’}
But the following:
train_iterator, test_iterator = data.BucketIterator.splits(
    (train_data, test_data),
    sort = True,
    sort_within_batch=True,
    sort_key = lambda x: len(x.Phrase),
    batch_size = BATCH_SIZE,
    device = device)

vars(test_iterator[107].dataset)
throws an error (TypeError: ‘BucketIterator’ object does not support indexing).
I don't know why, but the only difference is whether you construct the iterators using train_data and test_data together or just using test_data. Just adding parentheses for the test data, like
test_iterator = data.BucketIterator.splits(
    (test_data),
    ...
does not show any difference compared to using only test_data (i.e., without parentheses). |
st81490 | You should use BucketIterator.splits() when you actually have split data. If you want to create a BucketIterator for only one split, e.g. test or train, use BucketIterator directly. That means your case above, where you only pass test_data, should be changed to:
test_iterator = data.BucketIterator(
    test_data,
    sort = True,
    sort_within_batch=True,
    sort_key = lambda x: len(x.Phrase),
    batch_size = BATCH_SIZE,
    device = device) |
st81491 | Hello!
I am training on a GPU in a Jupyter notebook.
I have a problem: whenever I interrupt training, GPU memory is not released. So I wrote a function to release memory every time before starting training:
def torch_clear_gpu_mem():
    gc.collect()
    torch.cuda.empty_cache()
It releases some but not all memory: for example, X out of 12 GB is still occupied by something. And it seems like this X is growing after every training interruption. I can release it fully only by restarting the kernel.
But I found a strange workaround: cause some error before calling torch_clear_gpu_mem() — for example dividing by 0 in some cell of the notebook.
Then after I call torch_clear_gpu_mem(), memory is fully released!
Can someone please explain how this happens? Is it some memory leak in Jupyter?
I would like to make a function that does this automatically. Right now I need to call a special cell that causes an error before clearing GPU memory. |
st81492 | If you are working interactively in your notebook, all objects including tensors will be stored.
Even if you delete all tensors and clear the cache (usually not necessary), the CUDA context will still be initialized and will take some memory.
I guess the error is killing the kernel and thus the memory is completely released. |
st81493 | Hello @ptrblck! Thanks for answering.
Yes, I think CUDA context takes around 900MB for me, but after interruption I get 3-4GB that I cannot release (by deleting all objects / exiting function scope).
I produce some simple exception (like ZeroDivisionError) and it doesn't kill the kernel (because I can still access all my variables from other cells after the error).
But when I then call torch_clear_gpu_mem(), CUDA memory reliably returns to 900MB. |
st81494 | If you create the division by zero and raise the exception, all GPU memory is cleared and you can still access all CUDATensors? |
st81495 | I can then start my training process on GPU again with all memory available (like after kernel restart) |
st81496 | OK, this would mean the exception is in fact restarting the kernel.
What did you mean by “I can still access all my variables from other cells after error”?
If you still can access and e.g. print old variables, the kernel should be alive and it’s strange that all GPU memory is cleared.
On the other hand, if you cannot print old variables and just restart the training, what would the difference be to a clean restart instead of a restart caused by an exception? |
st81497 | I have a train_cell where I call my training function (the cell defines a function and calls it right after the definition).
But this cell requires all prev cells (prep_cells) to be executed (imports, data preparation, etc.)
I also have an exception_cell where I do 1/0 (ZeroDivision exception).
And mem_clear_cell where I call torch_clear_gpu_mem()
I run prep_cells, train_cell
Interrupt training, GPU memory usage: 9GB
Run mem_clear_cell, GPU memory usage: 4GB (no matter how many times I run clear, gpu memory usage doesn’t go down)
Run exception_cell, then mem_clear_cell, GPU memory usage: 935MB
Then I can just call train_cell without calling prep_cells
So with the exception approach to clearing GPU memory, I don't need to restart the kernel and run prep_cells before running train_cell |
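One possible explanation (an assumption, not confirmed in this thread): after an interrupted cell, IPython keeps the exception traceback around (sys.last_traceback), and its frames can still reference the CUDA tensors that were live during training; raising a fresh exception replaces that traceback, so the old tensors become collectable. If that is the cause, dropping those references explicitly might avoid the dummy-error cell:

import sys, gc, torch

def torch_clear_gpu_mem():
    # drop references possibly held by the stored traceback of the interrupted cell (assumption)
    sys.last_traceback = None
    sys.last_value = None
    sys.last_type = None
    gc.collect()
    torch.cuda.empty_cache()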
st81498 | Hello everyone,
I’ve updated my PyTorch from 1.0.0 to the newest version, and now when I execute my code I’m facing this error. Any idea how I can solve it?
python3: /pytorch/third_party/ideep/mkl-dnn/src/cpu/jit_avx2_conv_kernel_f32.cpp:567: static mkldnn::impl::status_t mkldnn::impl::cpu::jit_avx2_conv_fwd_kernel_f32::init_conf(mkldnn::impl::cpu::jit_conv_conf_t&, const convolution_desc_t&, const mkldnn::impl::memory_desc_wrapper&, const mkldnn::impl::memory_desc_wrapper&, const mkldnn::impl::memory_desc_wrapper&, const primitive_attr_t&): Assertion `jcp.ur_w * (jcp.nb_oc_blocking + 1) <= num_avail_regs’ failed.
Aborted (core dumped) |
st81499 | Hi everyone,
I need help visualizing the frames in my dataset when applying each transformation operation.
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]
normalize = Normalize(mean=mean, std=std)

spatial_transform = transforms.Compose([transforms.RandomRotation(20),
                                        transforms.RandomResizedCrop(224),
                                        transforms.RandomHorizontalFlip(),
                                        transforms.ColorJitter(hue=.05, saturation=.05),
                                        transforms.ToTensor(),
                                        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

vidSeqTrain = makeDataset(trainDataset, trainLabels, spatial_transform=spatial_transform, seqLen=seqLen)
vidSeqTest = makeDataset(data2, label2, seqLen=seqLen, spatial_transform=spatial_transform)

testLoader = torch.utils.data.DataLoader(vidSeqTest, batch_size=trainBatchSize,
                                         shuffle=True, num_workers=int(numWorkers/2), pin_memory=True)
# torch iterator to give data in batches of specified size
trainLoader = torch.utils.data.DataLoader(vidSeqTrain, batch_size=trainBatchSize,
                                          shuffle=True, num_workers=numWorkers, pin_memory=True, drop_last=True) |
st81500 | Solved by ptrblck in post #2
If you want to visualize the PIL.Image after each transformation, you could create a custom transformation and call e.g. img.show() (or use any other library to plot the image):
class Visualize(object):
    def __call__(self, img):
        img.show()
        return img

spatial_transform = … |
st81501 | If you want to visualize the PIL.Image after each transformation, you could create a custom transformation and call e.g. img.show() (or use any other library to plot the image):
class Visualize(object):
    def __call__(self, img):
        img.show()
        return img

spatial_transform = transforms.Compose([transforms.RandomRotation(20),
                                        Visualize(),
                                        transforms.RandomResizedCrop(224),
                                        Visualize(),
                                        transforms.RandomHorizontalFlip(),
                                        Visualize(),
                                        transforms.ColorJitter(hue=.05, saturation=.05),
                                        Visualize(),
                                        transforms.ToTensor(),
                                        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

to_pil_image = transforms.ToPILImage()
x = torch.randn(3, 256, 256)
img = to_pil_image(x)
out = spatial_transform(img) |
st81502 | class A(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(in_channels=256, out_channels=32,
                              kernel_size=9, stride=2, padding=0)

    def forward(self, x):
        return self.conv(x)

class B(nn.Module):
    def __init__(self):
        super().__init__()
        self.capsules = nn.ModuleList([
            nn.Conv2d(in_channels=256, out_channels=32,
                      kernel_size=9, stride=2, padding=0)
            for _ in range(8)])

    def forward(self, x):
        ...

class C(nn.Module):
    def __init__(self):
        super().__init__()
        self.capsules = nn.ModuleList([A() for _ in range(8)])

    def forward(self, x):
        ... |
st81503 | Hi @vainaijr,
The only difference is when you try to access the convolutions.
Let’s say you want to access the first conv.
In the C case you have to do:
c.capsules[0].conv
and in the B case you can simply do:
b.capsules[0]
Apart from that, running them would give you exactly the same result; A is just a wrapper forwarding x to Conv2d. |
st81504 | The following is my method using unfold.
boundary_width = 9
mask = mask.unfold(2, boundary_width, 1).unfold(3, boundary_width, 1)
mask = mask.contiguous().view(*mask.size()[:-2], -1)
mask = (mask.max(4)[0] > 0).float() * 1
However, it runs out of memory very quickly if the boundary_width is large (say 49 pixels). Is there any other way to do this? Or is there any way to reduce unfold memory usage?
I am hoping to be able to use it with ~49 pixels. |
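A lower-memory alternative worth trying (a sketch, assuming mask has shape (N, C, H, W), an odd boundary_width, and that the goal is a sliding-window "any pixel > 0" test): max pooling with stride 1 computes the same per-window maximum without materializing all the unfolded patches. Note this version pads to keep the spatial size, whereas the unfold version shrinks it.

import torch
import torch.nn.functional as F

boundary_width = 49
mask = (torch.rand(1, 1, 256, 256) > 0.99).float()

# per-window "any value > 0"; padding keeps the output the same size as the input
pooled = F.max_pool2d(mask, kernel_size=boundary_width, stride=1, padding=boundary_width // 2)
dilated = (pooled > 0).float()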
st81505 | I want to change all batch norm layers to group norm. So I am modifying the code, but it doesn't work as well as I thought. Can you help?
(screenshots of the code and the error attached) |
st81506 | Hi,
nu is a string, so you will have to use test.__getattribute__(nu) to access the attribute named as the content of nu. |
st81507 | (screenshots of the error attached)
I still get the error, can you help me? |
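For the batch-norm-to-group-norm conversion itself, a common recipe (a sketch, independent of the screenshots above) is to walk the module tree and replace every nn.BatchNorm2d by assigning an nn.GroupNorm to the parent module's attribute; note that the new GroupNorm layers still need to be trained.

import torch.nn as nn

def bn_to_gn(module, num_groups=32):
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            # fall back to 1 group if the channel count is not divisible by num_groups
            groups = num_groups if child.num_features % num_groups == 0 else 1
            setattr(module, name, nn.GroupNorm(groups, child.num_features))
        else:
            bn_to_gn(child, num_groups)  # recurse into nested modules
    return module

# model = bn_to_gn(model)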
st81508 | I have a question about independent backward passes in each process in Pytorch’s distributed framework.
Suppose I spawn n worker processes with mp.spawn(), and then in each worker process, I call dist.init_process_group() with its rank and a URL (e.g. tcp://127.0.0.1:1111 to initialize it to a port on the local host).
I’m parallelizing the model either with
apex.parallel.DistributedDataParallel(model)
or
torch.nn.parallel.DistributedDataParallel(model.cuda(), device_ids=[rank])
When I call loss.backward() on my network’s loss function, I believe the gradients are being averaged from all gpus in the process group (all_reduce sort of operation, so each gpu gets the result).
Suppose I wish for each process to ignore all other gpus and only call backward() with respect to its data subset. I wish to use these process-dependent gradients then to update my main, shared model. Does Pytorch’s current framework support this, and if so, how?
Presumably two ways to do it would be (1) create a new process group right before the backward call, and then rejoin the joint process group immediately afterwards (2) do not parallelize the model under DistributedDataParallel, but keep a separate model at each process, and do the send/recv/reduce operations myself. But this wouldn’t support synchronized BatchNorm, most likely.
Thanks! |
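One tool that may cover the "local gradients only" part (a sketch; rank, inputs and targets are placeholders for your own setup): DistributedDataParallel's no_sync() context manager skips the gradient all-reduce for backward passes run inside it, so each process is left with gradients computed from its own data subset.

import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# assumes dist.init_process_group(...) has already been called in this worker
model = DDP(nn.Linear(10, 1).cuda(rank), device_ids=[rank])
criterion = nn.MSELoss()

with model.no_sync():                          # no all-reduce of gradients
    loss = criterion(model(inputs), targets)   # inputs/targets: this process's local batch
    loss.backward()                            # param.grad is now process-local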
st81509 | I am trying to implement a mixture-of-experts layer, similar to the one described in:
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer (arXiv.org)
Basically this layer has a number of sub-layers F_i(x_i) which process a projected version of the input. There is also a gating layer G_i(x_i), which is basically an attention mechanism over all sub-expert layers:
sum(G_i(x_i) * F_i(x_i))
My naive approach is to build a list for the sub-layers:
sublayer_list = nn.ModuleList()
for i in range(num_of_layer):
    sublayer_list.append(self.make_layer())
Then when applying this I use another for loop:
out_list = []
for i, l in enumerate(sublayer_list):
    out_list.append(l(input[i]))
However, the addition of this mixture-of-experts layer slows training by almost 7 times (against a model with the MoE layer swapped for a similar-sized MLP). I am wondering if there are more efficient ways to implement this in PyTorch? Many thanks! |
st81510 | Solved by davidmrau in post #9
I did not yet implement the distribution over multiple GPUs. On a single GPU though, the speed up will be significantly because of the architecture of the sparsely-gated MoE. Instead of passing each sample through all experts, the gating mechanism will make sure that each sample is only passed throu… |
st81511 | Hello, we’re developing a library for sparse training in Pytorch. Please provide us with your complete pytorch code, and we’ll optimize and include it in our library. |
st81512 | I re-implemented the Sparsely-Gated Mixture-of-Experts Layer based on the tensorflow code here. You can find my implementation here: https://github.com/davidmrau/mixture-of-experts |
st81513 | Here is a code snippet:
import torch
import torch.nn as nn
from utility import *
import torch.nn.functional as F
import numpy as np
import math

class MoE_model(nn.Module):
    def __init__(self, device='cuda', in_dim=128, out_dim=256, T=9, num_mod=16, mod_dim=32, mod_out_dim=8):
        super(MoE_model, self).__init__()
        self.num_mod = num_mod
        self.mod_dim = mod_dim
        self.mod_out_dim = mod_out_dim
        self.device = device
        self.in_dim = in_dim
        self.out_dim = out_dim
        self.T = T
        self.mod_layer_1 = nn.Linear(self.in_dim, self.mod_dim*self.num_mod)
        self.mod_layer_1_bn = nn.BatchNorm1d(self.mod_dim*self.num_mod)
        self.module_net = nn.ModuleList()
        for i in range(self.num_mod):
            mod = nn.Sequential(
                nn.Linear(self.mod_dim, 48),
                nn.BatchNorm1d(48),
                nn.ReLU(True),
                nn.Linear(48, self.mod_out_dim),
                nn.BatchNorm1d(self.mod_out_dim),
                nn.ReLU(True)
            )
            self.module_net.append(mod)
        self.rel_local_fc_1 = nn.Linear(self.num_mod*self.mod_out_dim*2, self.out_dim)
        self.rel_local_fc_1_bn = nn.BatchNorm1d(256)

    def forward(self, fl_02, fl_12):
        fm_02 = F.relu(self.mod_layer_1_bn(self.mod_layer_1(fl_02.view(-1, self.in_dim))))
        fm_12 = F.relu(self.mod_layer_1_bn(self.mod_layer_1(fl_12.view(-1, self.in_dim))))
        fm_02_split = torch.split(fm_02.view(-1, self.num_mod, self.mod_dim), 1, 1)
        fm_12_split = torch.split(fm_12.view(-1, self.num_mod, self.mod_dim), 1, 1)
        fm_02_list = []
        fm_12_list = []
        for i, l in enumerate(self.module_net):
            fm_02_list.append(l(fm_02_split[i].squeeze()))
            fm_12_list.append(l(fm_12_split[i].squeeze()))
        fm_02 = torch.cat(fm_02_list, -1)
        fm_12 = torch.cat(fm_12_list, -1)
        fm_02_sum = torch.sum(fm_02.view(-1, self.T, self.num_mod*self.mod_out_dim), 1)
        fm_12_sum = torch.sum(fm_12.view(-1, self.T, self.num_mod*self.mod_out_dim), 1)
        fm_cat = torch.cat([fm_02_sum, fm_12_sum], 1)
        fl = F.relu(self.rel_local_fc_1_bn(self.rel_local_fc_1(fm_cat)))
        return fl |
st81514 | It looks like for this model, all the weights are used in each forward pass? Or is my interpretation off? |
st81515 | Yes, for this simple model there is no gating mechanism yet. The output from each expert layer is concatenated at the end. My concern is that currently this model is very slow to train, and adding the gating mechanism will make it even slower. The training speed is far slower than an MLP model with a similar number of parameters, which is kind of weird since the number of FLOPS should be roughly the same. |
st81516 | I did not yet implement the distribution over multiple GPUs. On a single GPU, though, the speed-up will be significant because of the architecture of the sparsely-gated MoE: instead of passing each sample through all experts, the gating mechanism will make sure that each sample is only passed through k experts. |
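As a side note on the raw speed of the naive loop discussed above (a sketch with made-up dimensions, not code from the linked repository): when every expert has the same small architecture, the Python loop over a ModuleList can be replaced by a single batched matmul over stacked expert weights, which usually runs much faster on the GPU.

import torch
import torch.nn as nn

class BatchedExperts(nn.Module):
    def __init__(self, num_experts=16, in_dim=32, out_dim=8):
        super().__init__()
        # all expert weights stacked into one (E, in, out) tensor
        self.weight = nn.Parameter(torch.randn(num_experts, in_dim, out_dim) * in_dim ** -0.5)
        self.bias = nn.Parameter(torch.zeros(num_experts, out_dim))

    def forward(self, x):
        # x: (batch, num_experts, in_dim) -> (batch, num_experts, out_dim), one matmul per expert, batched
        out = torch.einsum('bei,eio->beo', x, self.weight) + self.bias
        return torch.relu(out)

x = torch.randn(64, 16, 32)   # (batch, experts, mod_dim)
experts = BatchedExperts()
out = experts(x)              # (64, 16, 8)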
st81517 | I guess I'm pretty noob. Here is my train loop body:
for i, (lr_image, hr_image) in enumerate(train_bar):
    start = time.time()
    batch_size = lr_image.shape[0]
    if torch.cuda.is_available():
        lr_image = lr_image.cuda()
        hr_image = hr_image.cuda()
    d_lr = adjust_learning_rate(
        l.optimizerD, epoch, i, l.train_loader.size, config.d_lr
    )
    g_lr = adjust_learning_rate(
        l.optimizerG, epoch, i, l.train_loader.size, config.g_lr
    )
    if config.prof and i > 10:
        break
    ############################
    # (1) Update D network
    ##########################
    l.netD.zero_grad()
    fake_img = l.netG(lr_image)
    real_out = l.netD(hr_image).mean()
    fake_out = l.netD(fake_img).mean()
    d_loss = 1 - real_out + fake_out
    d_loss.backward(retain_graph=True)
    l.optimizerD.step()
    ############################
    # (2) Update G network
    ###########################
    l.netG.zero_grad()
    g_loss = l.generator_loss(fake_out, fake_img, hr_image)
    g_loss.backward()
    l.optimizerG.step()
I got it from https://github.com/leftthomas/SRGAN/blob/master/train.py. I realize that this might be wrong because fake_img isn't detached when being passed to the discriminator. According to this post (I think), "When training GAN why do we not need to zero_grad discriminator?", I might be doing the wrong thing. Also, the loss function for the discriminator is a little wacky, but it makes sense to me in a way (if real_out == 1 then low penalty, if fake_out == 1 then high penalty). |
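A sketch of the commonly used detach pattern, reusing the names from the loop above (so it is a fragment, not a standalone script): the generator output is detached for the discriminator update, which removes the need for retain_graph=True, and the discriminator is re-evaluated on the non-detached fake image for the generator update.

# (1) Update D: block gradients from flowing into G by detaching fake_img
l.netD.zero_grad()
fake_img = l.netG(lr_image)
real_out = l.netD(hr_image).mean()
fake_out_detached = l.netD(fake_img.detach()).mean()
d_loss = 1 - real_out + fake_out_detached
d_loss.backward()          # no retain_graph needed: G's graph is not touched here
l.optimizerD.step()

# (2) Update G: evaluate D on the non-detached fake image
l.netG.zero_grad()
fake_out = l.netD(fake_img).mean()
g_loss = l.generator_loss(fake_out, fake_img, hr_image)
g_loss.backward()
l.optimizerG.step()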
st81518 | Hi,
I want to sum the outputs of several convolution layers and send the resulting output to the next layer. Is there a sum layer (something like a pooling layer, convolution layer, …)?
How could I do it?
Thanks |
st81519 | Solved by ptrblck in post #3
How would you like to sum the conv layer outputs?
If you would like to sum over a specific dimension, you could simply use:
output = conv(input)
output = output.sum(dim=1) # change the dim to your use case
On the other hand, if you would like to sum patches similar to a conv layer, you could def… |
st81520 | Maybe a linear layer with bias = 0 does the same, although the activation function changes it to a non-linear value. Maybe there isn't any way, is there? Unless I customize what I want… |
st81521 | How would you like to sum the conv layer outputs?
If you would like to sum over a specific dimension, you could simply use:
output = conv(input)
output = output.sum(dim=1) # change the dim to your use case
On the other hand, if you would like to sum patches similar to a conv layer, you could define a convolution kernel with all ones, and apply it using the functional API:
output = torch.randn(1, 2, 8, 8) # comes from a preceding conv layer
sum_kernel = torch.ones(1, 2, 3, 3)
output_sum = F.conv2d(output, sum_kernel, stride=1, padding=1) |
st81522 | Dear Ptrblck,
Thanks a lot. As always, your answer gave me an idea to solve my problem.
Best Regards |
st81523 | I believe the sparse matrix features in pytorch aren’t as developed as the dense features yet, but at least we can create sparse matrices with torch.sparse.Tensor. If I wanted to do a sparse linear solve on the GPU by giving it a sparse matrix and a dense vector, what would be the best way to do it? Should I look into external packages like cupy? Does anybody have experience calling packages like suitesparse from python? I’d still prefer to have my tensors in pytorch since I’m using many pytorch features, but at some point I need to do the sparse solve so even if I’m using a external library I’d like it to have decent interop with pytorch at the GPU level. |
st81524 | In the official VAE implementation given below, reduction='sum' is used in the BCE loss. If someone uses 'mean' instead, the backward pass fails with the error:
RuntimeError: Function ‘AddmmBackward’ returned nan values in its 2th output.
This happens while all the weight norms are positive and none of them contain inf values; the input to the layer that fails in the backward pass with the mentioned error also has no inf values.
I believe we should be able to use 'mean' as well, so what is stopping us from using that?
I remember the initial version of the VAE example used the default behavior (i.e. reduction='mean'):
BCE = F.binary_cross_entropy(recon_x, x.view(-1, 784))
and everything was fine, but after 0.4 it seems this was changed to sum.
import argparse
import torch
import torch.utils.data
from torch import nn, optim
from torch.nn import functional as F
from torchvision import datasets, transforms
from torchvision.utils import save_image

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
batch_size = 128
kwargs = {'num_workers': 1, 'pin_memory': True}
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,
                   transform=transforms.ToTensor()),
    batch_size=batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False, transform=transforms.ToTensor()),
    batch_size=batch_size, shuffle=True, **kwargs)

class VAE(nn.Module):
    def __init__(self):
        super(VAE, self).__init__()
        self.fc1 = nn.Linear(784, 400)
        self.fc21 = nn.Linear(400, 20)
        self.fc22 = nn.Linear(400, 20)
        self.fc3 = nn.Linear(20, 400)
        self.fc4 = nn.Linear(400, 784)

    def encode(self, x):
        h1 = F.relu(self.fc1(x))
        return self.fc21(h1), self.fc22(h1)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5*logvar)
        eps = torch.randn_like(std)
        return mu + eps*std

    def decode(self, z):
        h3 = F.relu(self.fc3(z))
        return torch.sigmoid(self.fc4(h3))

    def forward(self, x):
        mu, logvar = self.encode(x.view(-1, 784))
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

model = VAE().to(device)
optimizer = optim.Adam(model.parameters(), lr=1e-3)

# Reconstruction + KL divergence losses summed over all elements and batch
def loss_function(recon_x, x, mu, logvar):
    BCE = F.binary_cross_entropy(recon_x, x.view(-1, 784), reduction='sum')
    # see Appendix B from VAE paper:
    # Kingma and Welling. Auto-Encoding Variational Bayes. ICLR, 2014
    # https://arxiv.org/abs/1312.6114
    # 0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
    KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return BCE + KLD

def train(epoch):
    model.train()
    train_loss = 0
    for batch_idx, (data, _) in enumerate(train_loader):
        data = data.to(device)
        optimizer.zero_grad()
        recon_batch, mu, logvar = model(data)
        loss = loss_function(recon_batch, data, mu, logvar)
        loss.backward()
        train_loss += loss.item()
        optimizer.step()
        if batch_idx % 1000 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader),
                loss.item() / len(data)))
    print('====> Epoch: {} Average loss: {:.4f}'.format(
        epoch, train_loss / len(train_loader.dataset)))

def test(epoch):
    model.eval()
    test_loss = 0
    with torch.no_grad():
        for i, (data, _) in enumerate(test_loader):
            data = data.to(device)
            recon_batch, mu, logvar = model(data)
            test_loss += loss_function(recon_batch, data, mu, logvar).item()
            if i == 0:
                n = min(data.size(0), 8)
                comparison = torch.cat([data[:n],
                                        recon_batch.view(batch_size, 1, 28, 28)[:n]])
                save_image(comparison.cpu(),
                           'vae_results/reconstruction_' + str(epoch) + '.png', nrow=n)
    test_loss /= len(test_loader.dataset)
    print('====> Test set loss: {:.4f}'.format(test_loss))

epochs = 20
for epoch in range(1, epochs + 1):
    train(epoch)
    test(epoch)
    with torch.no_grad():
        sample = torch.randn(64, 20).to(device)
        sample = model.decode(sample).cpu()
        save_image(sample.view(64, 1, 28, 28),
                   'vae_results/sample_' + str(epoch) + '.png')
Any help is greatly appreciated |
st81525 | I want to know the values of the arguments shown in the picture (160, 16, 64, …).
Please help me.
(screenshot attached) |
st81526 | Solved by ptrblck in post #2
You can access the internal attributes using the argument name, e.g. print(bn.num_features). |
st81527 | You can access the internal attributes using the argument name, e.g. print(bn.num_features). |
st81528 | I’m wondering how to fine-tune a complex structure as shown in the picture. Using .modules() or .children() doesn’t work. I want to change all batch norm layers to group norm.
(screenshot attached) |
st81529 | Yes, you need to refit the model. Group Norm and Batch Norm don’t normalize the same distribution, so I don’t think there is even a way of doing the bridge with a mathematical formula. |
st81530 | The code snippet works fine in 1.1 but causes an error in 1.2:
p = F.softmax(x, dim=1)
m = y != self.ignore_index
t = F.one_hot((y * m.byte()).long(), num_classes=self.num_classes).byte().permute(0,3,1,2)
i = (p * (t * m.unsqueeze(1).byte()).float()).sum((0,2,3))
u = ((p + t.float()) * m.unsqueeze(1).float()).sum((0,2,3)) - i
v = u.nonzero()
return -((i[v] / u[v]).mean()).log()
I get the error message below:
RuntimeError: range.second - range.first == t.size() INTERNAL ASSERT FAILED at /pytorch/torch/csrc/autograd/generated/Functions.cpp:55, please report a bug to PyTorch. inconsistent range for TensorList output
I had to remove the usage of nonzero() like below to make the code work:
p = F.softmax(x, dim=1)
m = y != self.ignore_index
t = F.one_hot((y * m.byte()).long(), num_classes=self.num_classes).byte().permute(0,3,1,2)
i = (p * (t * m.unsqueeze(1).byte()).float()).sum((0,2,3))
u = ((p + t.float()) * m.unsqueeze(1).float()).sum((0,2,3)) - i
return -((i / u).mean()).log()
I have two questions:
(1) Why does the runtime error happen in 1.2 but not in 1.1?
(2) By not using nonzero to prevent division by zero, I have numeric instability in theory. However, in reality the number that comes out of a softmax operation should not reach zero, because the input can't really reach negative infinity, right? |
st81531 | Hi,
About the first question: could you please provide x and y so I can reproduce the issue?
It would also be great if you could show the specs of your environment by running
python collect_env.py
About the second question: in practice I think you are right, you should not get such a number in your network, so softmax would never output zero.
Good luck
Nik |
st81532 | The code snippet comes from a loss function for a semantic segmentation network. I modified it a little so it can reproduce the same runtime error alone:
import torch
import torch.nn as nn
import torch.nn.functional as F

def loss_func(x, y, ignore_index=255, num_classes=19):
    p = F.softmax(x, dim=1)
    m = y != ignore_index
    t = F.one_hot((y * m.byte()).long(), num_classes=num_classes).byte().permute(0,3,1,2)
    i = (p * (t * m.unsqueeze(1).byte()).float()).sum((0,2,3))
    u = ((p + t.float()) * m.unsqueeze(1).float()).sum((0,2,3)) - i
    v = u.nonzero()
    return -((i[v] / u[v]).mean()).log()

x = torch.rand(1, 19, 1024, 2048, requires_grad=True).to('cuda').float()
y = torch.randint(0, 18, (1, 1024, 2048)).to('cuda').byte()
y[0][0][0] = 255
loss = loss_func(x, y)
loss.backward()
Below is the output from collect_env.py:
Collecting environment information…
PyTorch version: 1.2.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 18.04.2 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.13.1
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration: GPU 0: GeForce GTX 1080 Ti
Nvidia driver version: 415.27
cuDNN version: /usr/local/cuda-10.0/targets/x86_64-linux/lib/libcudnn.so.7.5.0
Versions of relevant libraries:
[pip3] efficientnet-pytorch==0.4.0
[pip3] numpy==1.16.4
[pip3] torch==1.2.0
[pip3] torchfile==0.1.0
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.4.0
[conda] Could not collect |
st81533 | Actually, if you do not use backward and just run the forward pass, nonzero() won't throw any errors. The issue is related to backward semantics. Currently, I cannot really figure out what is happening. Could you create an issue on the PyTorch GitHub page?
By the way, you can use code formatting on the forum by putting ``` at the start and the end of your code section. |
st81534 | Hello everyone,
I’m currently trying to build a classifier for a multilabel classification where multiple annotators annotated one label each.
Let’s say the label vectors for annotators 1, 2, and 3 look like this:
A1: [1, 0, 0, 0]
A2: [1, 0, 0, 0]
A3: [0, 0, 0, 1]
I could only use the majority label and use [1, 0, 0, 0] as my label vector - or I could make it a multi-label classification problem with [1, 0, 0, 1] as my labels.
But is there a way to weight the loss based on the distribution of labels?
For instance, a loss function that could use [0.66, 0, 0, 0.33] as the label vector?
Thanks! |
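One common option for the last question (a sketch, not from an answer in this thread): treat the normalized annotator votes as a soft target distribution and minimize the cross entropy against the model's log-softmax output; it reduces to ordinary cross entropy when all annotators agree.

import torch
import torch.nn.functional as F

def soft_cross_entropy(logits, soft_targets):
    # soft_targets rows sum to 1, e.g. [0.66, 0, 0, 0.33] from the annotator votes
    log_probs = F.log_softmax(logits, dim=1)
    return -(soft_targets * log_probs).sum(dim=1).mean()

logits = torch.randn(2, 4, requires_grad=True)
targets = torch.tensor([[2/3, 0.0, 0.0, 1/3],
                        [0.0, 1.0, 0.0, 0.0]])
loss = soft_cross_entropy(logits, targets)
loss.backward()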
st81535 | Hello, everyone!
Is it possible to implement the following function in a vectorized fashion, without for loops, using advanced indexing?
def dumb_foo(x, permutation):
    assert x.ndimension() == permutation.ndimension()
    ret = torch.zeros_like(x)
    if x.ndimension() == 1:
        ret = x[permutation]
    elif x.ndimension() == 2:
        for i in range(x.size(0)):
            ret[i] = x[i, permutation[i]]
    elif x.ndimension() == 3:
        for i in range(x.size(0)):
            for j in range(x.size(1)):
                ret[i, j] = x[i, j, permutation[i, j]]
    else:
        raise ValueError("Only 3 dimensions maximum")
    return ret |
st81536 | Now I end up with something like this. Can it be implemented in a more efficient way?
def smart_foo(x, permutation):
    assert x.ndimension() == permutation.ndimension()
    if x.ndimension() == 1:
        ret = x[permutation]
    elif x.ndimension() == 2:
        d1, d2 = x.size()
        ret = x[
            torch.arange(d1).unsqueeze(1).repeat((1, d2)).flatten(),
            permutation.flatten()
        ].view(d1, d2)
    elif x.ndimension() == 3:
        d1, d2, d3 = x.size()
        ret = x[
            torch.arange(d1).unsqueeze(1).repeat((1, d2 * d3)).flatten(),
            torch.arange(d2).unsqueeze(1).repeat((1, d3)).flatten().unsqueeze(0).repeat((1, d1)).flatten(),
            permutation.flatten()
        ].view(d1, d2, d3)
    else:
        raise ValueError("Only 3 dimensions maximum")
    return ret |
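For this particular access pattern (permuting along the last dimension, ret[i, j, k] = x[i, j, permutation[i, j, k]]), torch.gather handles all of these cases directly; a short sketch:

import torch

x = torch.randn(2, 3, 5)
# one permutation of the last dimension per (i, j) position
permutation = torch.stack([torch.randperm(5) for _ in range(6)]).view(2, 3, 5)

ret = torch.gather(x, -1, permutation)
# equivalent to: ret[i, j, k] == x[i, j, permutation[i, j, k]]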
st81537 | I’m looking to move my dataset to GPU memory (It’s fairly small and should fit). I thought something like this would work, but I end up with CUDA Error: initialization error:
class MyDataSet(Dataset):
    def __init__(self, X, y, device='cpu'):
        '''
        So that we can move the entire dataset to the GPU.
        :param X: float32 data scaled numpy array
        :param y: float32 data scaled numpy vector
        :param device: 'cpu' or 'cuda:0'
        '''
        self.X = torch.from_numpy(X).to(device)
        # y vector needs to be in a column vector (or at least it
        # did in the normal dataset.)
        self.y = torch.from_numpy(y[:, None]).to(device)

    def __len__(self):
        return list(self.X.size())[0]

    def __getitem__(self, item):
        return self.X[item], self.y[item]
Then I use it to initialize a training set:
# device = 'cuda:0', X_train, and y_train are numpy arrays, float32
train = MyDataSet(X_train, y_train,device=device)
loader = DataLoader(train,batch_size=256,shuffle=True, num_workers=1)
I initialize my model and also send it to the GPU.
I start my training loop and it dies on the ‘enumerate(loader,0)’
for epoch in range(300):
    running_loss = 0.0
    for i, data in enumerate(loader, 0):  # <<--- This is where the error is thrown
        inputs, labels = data[0].to(device), data[1].to(device)
        #inputs, labels = data # <-- in theory if data is all on GPU I shouldn't need to 'move' it there again.
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = loss_fn(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
Traceback (most recent call last):
File “nn.py”, line 116, in
for i, data in enumerate(loader, 0):
File “/torch/utils/data/dataloader.py”, line 819, in __next__
return self._process_data(data)
File “/torch/utils/data/dataloader.py”, line 846, in _process_data
data.reraise()
File “/torch/_utils.py”, line 369, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File “torch/utils/data/_utils/worker.py”, line 178, in _worker_loop
data = fetcher.fetch(index)
File “torch/utils/data/_utils/fetch.py”, line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File “torch/utils/data/_utils/fetch.py”, line 44, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File “/MyDataSet.py”, line 21, in getitem
return self.X[item], self.y[item]
RuntimeError: CUDA error: initialization error
The second part of the error suggests there is a problem with the __getitem__ of my custom Dataset.
When I run my custom Dataset but force it to be CPU-only, everything works as expected (the same as if I used PyTorch's normal Dataset class). I can also use the CPU-based Dataset inside my training loop and push each mini-batch to the GPU, and that works too, but it is actually slower than just doing it all on the CPU due to the transfer overhead.
when I do:
self.X = torch.from_numpy(X).to(device)
print(self.X)
It shows me that the data appears to have moved to the GPU:
---SNIP--
-3.3333e-01, 1.5024e+04, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00]], device='cuda:0')
I feel like I need to put some sort of ‘cuda syncronize’ somewhere (probably data loader) so the the enumerate waits until the dataset is transfered. Not sure what that might be.
Thoughts? |
st81538 | Weirdly, setting ‘num_workers’ = 0 seems to allow it to work. ??
# this works and doesn't throw the errors from above
loader = DataLoader(train,batch_size=256,shuffle=True, num_workers=0)
While that seems to fix the error, I'm wondering if there is a way to structure my code for better performance on the GPU? When num_workers = 0, performance is still limited, as only one core is used at 100% to go over the epoch loop. Using a small batch_size is much slower than using a large batch_size. I thought that by moving the data to the GPU I wouldn't be experiencing memory traffic (I'm assuming it's memory traffic), or are there parts of the training loop that necessarily run on the CPU and need to pull results from the GPU?
When using ‘cpu’ only, all my cores are used. I’m assuming some of that is numpy backend + python interpreter
Is there something else I’m doing wrong that is limiting performance of ‘on-GPU’ datasets? (some setting or issue with my epoch loop or dataloader for example) |
st81539 | Each worker in your DataLoader will try to create a CUDA context, since you are using CUDATensors, which will raise this error.
You could use the 'spawn' method for multiprocessing as described here.
Here is a small example:
import torch
import torch.multiprocessing as mp
from torch.utils.data import Dataset, DataLoader
class MyDataset(Dataset):
def __init__(self, device='cpu'):
super(MyDataset, self).__init__()
self.data = torch.randn(100, 1, device=device)
def __getitem__(self, index):
x = self.data[index]
return x
def __len__(self):
return len(self.data)
def main():
dataset = MyDataset(device='cuda')
loader = DataLoader(
dataset,
num_workers=2
)
for data in loader:
print(data.device)
if __name__=='__main__':
mp.set_start_method('spawn')
main() |
st81540 | I am trying to upgrade my existing pytorch 0.4 model to 1.0 and am attempting to use the Caffe2 backend to run the models in production on the GPU.
So, what I did is as follows:
# Export my model to ONNX
torch.onnx._export(model, args, "test.onnx", export_params=True)
import caffe2.python.onnx.backend as onnx_caffe2_backend
# Load the ONNX model from file.
model = onnx.load("test.onnx")
# We will run our model on the GPU with ID 3.
rep = onnx_caffe2_backend.prepare(model, device="CUDA:3")
outputs = rep.run(np.random.randn(1, 3, 128, 64).astype(np.float32))
Now, I have a couple of questions about this:
1: What if my input data already resides on the GPU? How can I pass that data to the model rather than moving it to the CPU with numpy and then passing it to the executor? I tried the following:
args = torch.randn(1, 3, 128, 64, dtype=torch.float32).cuda(3)
print(args.dtype)
outputs = rep.run(args)
This prints torch.float32. However, I get the error:
if arr.dtype == np.dtype('float64'):
TypeError: data type not understood
I am not sure why the array is being interpreted as a double array.
2: I noticed that the call to the prepare is rather slow. So, it seems my old pytorch code is faster than running it on the backend. I will do more exhaustive timing comparisons but is this the right way to export the model and have it running on the GPU with pytorch/onnx/caffe?
So, regarding this point. If I call prepare without the GPU option, the call is fast but specifying GPU with onnx_caffe2_backend.prepare(model, device="CUDA:3") is very slow.
My system is using
python 3.6.8
pytorch 1.0.0
onnx 1.3.0
ubuntu 16.04
cuda 9.0 |
st81541 | Hi,
Have you fixed this? I have the same problem: it is very slow when using the Caffe2 backend on the GPU. I think it's because the input is on the CPU (via .numpy()), so the question becomes how to move the input to the GPU in order to use it with the Caffe2 backend.
With similar code I get the following inference speeds:
pytorch CPU (3s) > onnx/caffe2 backend GPU (600ms) > onnx/caffe2 backend CPU (200ms) > pytorch GPU (50ms).
I would expect the onnx/caffe2 backend on GPU to be faster, but it is not the case.
st81542 | Hi, is nn.Linear expected to have such a large relative difference when running on a 3d tensor, as opposed to looping over the first dimension? Interestingly, the latter matches numpy matrix multiplication, as shown below.
import torch
import numpy as np
fc = torch.nn.Linear(256, 128)
inp = torch.rand(3, 10, 256)
out1 = fc(inp).detach().numpy()
out2 = []
for i in range(3):
out2.append(fc(inp[i]))
out2 = torch.stack(out2).detach().numpy()
w = fc.weight.detach().numpy()
b = fc.bias.detach().numpy()
out3 = inp.numpy() @ w.T + b
# passes this line
np.testing.assert_allclose(out3, out2)
# fails here
np.testing.assert_allclose(out3, out1)
Traceback (most recent call last):
File "tmp.py", line 47, in <module>
np.testing.assert_allclose(out3, out1)
AssertionError:
Not equal to tolerance rtol=1e-07, atol=0
Mismatch: 78.8%
Max absolute difference: 5.9604645e-07
Max relative difference: 0.04850746
x: array([[[-3.184696e-01, 4.671749e-01, -3.306221e-01, ...,
-3.613108e-01, 3.210519e-01, -4.924317e-01],
[-5.997717e-06, 7.380165e-02, 6.725912e-02, ...,...
y: array([[[-3.184695e-01, 4.671748e-01, -3.306221e-01, ...,
-3.613108e-01, 3.210520e-01, -4.924318e-01],
[-5.986542e-06, 7.380170e-02, 6.725915e-02, ...,... |
st81543 | The difference is most likely created by the limited precision using FP32.
If you use DoubleTensors, your difference should be smaller:
fc = torch.nn.Linear(256, 128).double()
inp = torch.rand(3, 10, 256).double() |
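For example, a quick check along these lines (reusing the imports from the snippet in the question) should pass with a much tighter tolerance; the exact rtol is just my guess for double precision:
fc = torch.nn.Linear(256, 128).double()
inp = torch.rand(3, 10, 256).double()

out1 = fc(inp).detach().numpy()
out3 = inp.numpy() @ fc.weight.detach().numpy().T + fc.bias.detach().numpy()

# in float64 the batched and manual results should agree far below float32 noise
np.testing.assert_allclose(out3, out1, rtol=1e-10)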
st81544 | For a project that I have started to build in PyTorch, I would need to implement my own descent algorithm (a custom optimizer different from RMSProp, Adam, etc.). In TensorFlow, it seems to be possible to do so (https://towardsdatascience.com/custom-optimizer-in-tensorflow-d5b41f75644a 382) and I would like to know if it was also the case in PyTorch.
I have tried to do it by simply adding my descent vector to the leaf variable, but PyTorch didn't agree: “a leaf Variable that requires grad has been used in an in-place operation.”. When I don't do the operation in-place, the “new” variable loses its leaf status, so it doesn't work either…
Is there an easy way to create such a custom optimizer in PyTorch?
Thanks in advance |
st81545 | You may want to look at that post: Regarding implementation of optimization algorithm 3.8k |
st81546 | Hi Artix !
You can write your own update function for sure !
To update your weights, you might use the optimiser library. But you
can also do it yourself. For example, you can basically code the
gradient descent, the SGD or Adam using the following code.
net = NN()
learning_rate = 0.01
for param in net.parameters():
weight_update = smth_with_good_dimensions
param.data.sub_(weight_update * learning_rate)
As you can see, you have access to your parameters in net.parameters(), so you can update them like you want.
If you want more specific examples, you can go here, where I implemented both SVRG and SAGA (variance-reduced algorithms): https://github.com/kilianFatras/variance_reduced_neural_networks ! If you have any further question, do not hesitate to ask!
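If you prefer to keep the usual optimizer.step() interface, a minimal sketch of a custom optimizer subclassing torch.optim.Optimizer could look like this (plain SGD here, as a placeholder for your own update rule):
import torch
from torch.optim import Optimizer

class MyOptimizer(Optimizer):
    def __init__(self, params, lr=0.01):
        defaults = dict(lr=lr)
        super(MyOptimizer, self).__init__(params, defaults)

    def step(self, closure=None):
        loss = None
        if closure is not None:
            loss = closure()
        for group in self.param_groups:
            for p in group['params']:
                if p.grad is None:
                    continue
                # replace this update with your own descent direction
                p.data.add_(-group['lr'] * p.grad.data)
        return loss

# usage: optimizer = MyOptimizer(net.parameters(), lr=0.01)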
st81547 | The code snippet is shown as follows:
a = torch.randn(100, 20)
b = torch.pdist(a)
c = torch.pdist(a.cuda()).cpu()
print(torch.sum(torch.abs(b - c))) # tensor(0.0007)
The output difference is quite large between gpu and cpu computation. What’s the cause of it? |
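For comparison, here is the same check in double precision; my guess is that the gap above is just float32 accumulation order, so I would expect the difference to shrink by several orders of magnitude:
a = torch.randn(100, 20, dtype=torch.float64)
b = torch.pdist(a)
c = torch.pdist(a.cuda()).cpu()
print(torch.abs(b - c).max())  # expected to be tiny (around 1e-13 or smaller)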
st81548 | How do I resize a rectangular image, let us say 40x60, to a square image, let us say 30x30, without stretching it, and have black bars at the top and bottom instead of a stretch?
st81549 | Have a look here. It's even doing a bit more than that, but you should be able to find the relevant bits.
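If you just need the basic behaviour, a small PIL-based sketch could look like the following (the target size and fill colour are placeholders):
from PIL import Image

def letterbox(img, size=30, fill=0):
    # scale the longer side down to `size`, keeping the aspect ratio
    w, h = img.size
    scale = size / max(w, h)
    new_w, new_h = max(1, int(round(w * scale))), max(1, int(round(h * scale)))
    resized = img.resize((new_w, new_h), Image.BILINEAR)
    # paste the resized image onto a black square canvas, centered
    canvas = Image.new(img.mode, (size, size), fill)
    canvas.paste(resized, ((size - new_w) // 2, (size - new_h) // 2))
    return canvas

# usage: squared = letterbox(Image.open('example.jpg'), size=30)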
st81550 | Hello everyone, hope you are having a great time. I tried to implement a simple VAE, but I'm getting this error out of nowhere!
The code I’m using is this :
class VAE(nn.Module):
def __init__(self, embedding=100):
super().__init__()
self.fc1 = nn.Linear(28*28, 400)
self.fc1_mu = nn.Linear(400, embedding)
self.fc1_std = nn.Linear(400, embedding)
self.decoder = nn.Sequential( nn.Linear(embedding, 400),
nn.ReLU(),
nn.Linear(400, 28*28),
nn.Sigmoid())
def reparamtrization_trick(self, mu, logvar):
if self.training:
std = logvar.mul(0.5).exp_()
eps = torch.tensor(std.data.new(std.size()).normal_(0,1))
return eps.mul(std).add(mu)
else:
# During the inference, we simply return the mean of the
# learned distribution for the current input. We could
# use a random sample from the distribution, but mu of
# course has the highest probability.
return mu
def forward(self, input):
output = input.view(input.size(0), -1)
output = F.relu(self.fc1(output))
output_mu = self.fc1_mu(output)
output_std = self.fc1_std(output)
z = self.reparamtrization_trick(output_mu, output_std)
# decoder
output = self.decoder(z)
return output, output_mu, output_std
# now lets train :
epochs = 20
embeddingsize = 100
interval = 1000
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = VAE(embeddingsize).to(device)
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr =0.01)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 20)
for e in range(epochs):
for i, (imgs, labels) in enumerate(dataloader_train):
imgs = imgs.to(device)
preds,mu, logvar = model(imgs)
# check to see if the values are in [0,1] range:
print(f'min: {np.round(preds.min(1)[0].sum().item())} max: {np.round(preds.max(1)[0].sum().item())}')
# for loss we simply add the reconstruction loss +kl divergance
loss_recons = criterion(preds, imgs.view(imgs.size(0), -1))
# see Appendix B from VAE paper:
# Kingma and Welling. Auto-Encoding Variational Bayes. ICLR, 2014
# https://arxiv.org/abs/1312.6114
# I guess 0.5 is the beta (a multiplier that specifies how large the distribution
# should be)
# - D_{KL} = 0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
kl = 0.5 * torch.sum(1+ logvar - mu.pow(2) - logvar.exp())
# Normalise by same number of elements as in reconstruction
kl/=imgs.size(0) * (28*28)
loss = loss_recons + kl
optimizer.zero_grad()
loss.backward()
optimizer.step()
if i% interval ==0:
print(f'epoch ({e}/{epochs}) loss: {loss.item():.6f}'
f'KL: {kl.item():.6f} recons {loss_recons.item():.6f}'
f'lr: {scheduler.get_lr()}')
scheduler.step()
and this is the full stacktrace :
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
in
260 interval = 1000
261 device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
--> 262 model = VAE(embeddingsize).to(device)
263 criterion = nn.BCELoss()
264 optimizer = torch.optim.Adam(model.parameters(), lr =0.01)
~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in to(self, *args, **kwargs)
384 return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
385
--> 386 return self._apply(convert)
387
388 def register_backward_hook(self, hook):
~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _apply(self, fn)
191 def _apply(self, fn):
192 for module in self.children():
--> 193 module._apply(fn)
194
195 for param in self._parameters.values():
~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _apply(self, fn)
197 # Tensors stored in modules are graph leaves, and we don't
198 # want to create copy nodes, so we have to unpack the data.
--> 199 param.data = fn(param.data)
200 if param._grad is not None:
201 param._grad.data = fn(param._grad.data)
~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in convert(t)
382
383 def convert(t):
--> 384 return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
385
386 return self._apply(convert)
RuntimeError: CUDA error: device-side assert triggered
As you can see, this is giving me the error upon instantiating a new object from the VAE class!
I have no idea what is happening. Any help is greatly appreciated.
Update:
For no apparent reason, switching to cpu and then cuda again changed the error and now I’m getting :
RuntimeError : Function ‘AddmmBackward’ returned nan values in its 2th output.
This error occurs when using the official VAE example as well. The code and full stack trace are given below.
st81551 | Solved by Shisho_Sama in post #20
Thanks to dear God, I finally found the culprit! it was/is the BCE!
For some weird reason the former versions of Pytorch worked perfectly fine with reduction='mean', however, in the newer Pytorch versions that I tested myself , including (1.1.0 and ultimately 1.2.0+cu92) only reduction='sum' will w… |
st81552 | Ok. I tried running this on CPU and got this error :
RuntimeError : Assertion `x >= 0. && x <= 1.’ failed. input value should be between 0~1, but got -nan(ind) at …\aten\src\THNN/generic/BCECriterion.c:62
The weird thing is that, my dataset is MNIST and I’m not doing anything weird. it is defined like this :
dataset_train = datasets.MNIST(root='MNIST',
train=True,
transform = transforms.ToTensor(),
download=True)
dataset_test = datasets.MNIST(root='MNIST',
train=False,
transform = transforms.ToTensor(),
download=True)
batch_size = 32
num_workers = 2
dataloader_train = torch.utils.data.DataLoader(dataset_train,
batch_size = batch_size,
shuffle=True,
num_workers = num_workers)
dataloader_test = torch.utils.data.DataLoader(dataset_test,
batch_size = batch_size,
num_workers = num_workers)
and I also use a sigmoid in my decoder, so the output must be in range [0,1]. however, when I tried to get the min and max of my output from decoder, there were nans! both in min and max!
here is the output:
min: 10.0 max: 22.0
epoch (0/20) loss: 0.697066KL: -0.000713 recons 0.697778lr: [0.01]
min: 1.0 max: 27.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: nan max: nan
Why am I getting these nans? |
st81553 | You could set torch.autograd.set_detect_anomaly(True) and rerun your code to get a stack trace pointing to the operation, which created these invalid values. |
st81554 | ptrblck:
torch.autograd.set_detect_anomaly(True)
Thanks a lot .
This is what I get
sys:1: RuntimeWarning: Traceback of forward call that caused the error:
File "C:\Users\sama\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\sama\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "C:\Users\sama\Anaconda3\lib\site-packages\traitlets\config\application.py", line 658, in launch_instance
app.start()
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\kernelapp.py", line 505, in start
self.io_loop.start()
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\platform\asyncio.py", line 148, in start
self.asyncio_loop.run_forever()
File "C:\Users\sama\Anaconda3\lib\asyncio\base_events.py", line 427, in run_forever
self._run_once()
File "C:\Users\sama\Anaconda3\lib\asyncio\base_events.py", line 1440, in _run_once
handle._run()
File "C:\Users\sama\Anaconda3\lib\asyncio\events.py", line 145, in _run
self._callback(*self._args)
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\ioloop.py", line 690, in <lambda>
lambda f: self._run_callback(functools.partial(callback, future))
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\ioloop.py", line 743, in _run_callback
ret = callback()
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 787, in inner
self.run()
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 748, in run
yielded = self.gen.send(value)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 365, in process_one
yield gen.maybe_future(dispatch(*args))
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 272, in dispatch_shell
yield gen.maybe_future(handler(stream, idents, msg))
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 542, in execute_request
user_expressions, allow_stdin,
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\ipkernel.py", line 294, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\zmqshell.py", line 536, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2855, in run_cell
raw_cell, store_history, silent, shell_futures)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2881, in _run_cell
return runner(coro)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\async_helpers.py", line 68, in _pseudo_sync_runner
coro.send(None)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3058, in run_cell_async
interactivity=interactivity, compiler=compiler, result=result)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3249, in run_ast_nodes
if (await self.run_code(code, result, async_=asy)):
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3326, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-24-961c103c637c>", line 269, in <module>
preds,mu, logvar = model(imgs)
File "C:\Users\sama\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "<ipython-input-24-961c103c637c>", line 222, in forward
output_std = self.fc1_std(output)
File "C:\Users\sama\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\sama\Anaconda3\lib\site-packages\torch\nn\modules\linear.py", line 92, in forward
return F.linear(input, self.weight, self.bias) |
st81555 | OK, I changed the output of decoder from output to reconstructed_img and now I’m getting a new error !
It seems that, since I was using a Jupyter notebook, the output of the decoder was somehow used as the input for fc1_std? (I don't know how that's even possible! Anyway, changing the name resulted in this new error:)
min: 10.0 max: 22.0
epoch (0/20) loss: 0.702296KL: -0.000698 recons 0.702994lr: [0.01]
min: 2.0 max: 28.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
sys:1: RuntimeWarning: Traceback of forward call that caused the error:
File "C:\Users\sama\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\sama\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "C:\Users\sama\Anaconda3\lib\site-packages\traitlets\config\application.py", line 658, in launch_instance
app.start()
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\kernelapp.py", line 505, in start
self.io_loop.start()
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\platform\asyncio.py", line 148, in start
self.asyncio_loop.run_forever()
File "C:\Users\sama\Anaconda3\lib\asyncio\base_events.py", line 427, in run_forever
self._run_once()
File "C:\Users\sama\Anaconda3\lib\asyncio\base_events.py", line 1440, in _run_once
handle._run()
File "C:\Users\sama\Anaconda3\lib\asyncio\events.py", line 145, in _run
self._callback(*self._args)
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\ioloop.py", line 690, in <lambda>
lambda f: self._run_callback(functools.partial(callback, future))
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\ioloop.py", line 743, in _run_callback
ret = callback()
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 787, in inner
self.run()
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 748, in run
yielded = self.gen.send(value)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 365, in process_one
yield gen.maybe_future(dispatch(*args))
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 272, in dispatch_shell
yield gen.maybe_future(handler(stream, idents, msg))
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 542, in execute_request
user_expressions, allow_stdin,
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\ipkernel.py", line 294, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\zmqshell.py", line 536, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2855, in run_cell
raw_cell, store_history, silent, shell_futures)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2881, in _run_cell
return runner(coro)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\async_helpers.py", line 68, in _pseudo_sync_runner
coro.send(None)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3058, in run_cell_async
interactivity=interactivity, compiler=compiler, result=result)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3249, in run_ast_nodes
if (await self.run_code(code, result, async_=asy)):
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3326, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-25-401572503a4f>", line 269, in <module>
preds,mu, logvar = model(imgs)
File "C:\Users\sama\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "<ipython-input-25-401572503a4f>", line 222, in forward
output_std = self.fc1_std(output)
File "C:\Users\sama\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\sama\Anaconda3\lib\site-packages\torch\nn\modules\linear.py", line 92, in forward
return F.linear(input, self.weight, self.bias)
File "C:\Users\sama\Anaconda3\lib\site-packages\torch\nn\functional.py", line 1406, in linear
ret = torch.addmm(bias, input, weight.t())
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
in
284 loss = loss_recons + kl
285 optimizer.zero_grad()
--> 286 loss.backward()
287 optimizer.step()
288 if i% interval ==0:
~\Anaconda3\lib\site-packages\torch\tensor.py in backward(self, gradient, retain_graph, create_graph)
105 products. Defaults to ``False``.
106 """
--> 107 torch.autograd.backward(self, gradient, retain_graph, create_graph)
108
109 def register_hook(self, hook):
~\Anaconda3\lib\site-packages\torch\autograd\__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
91 Variable._execution_engine.run_backward(
92 tensors, grad_tensors, retain_graph, create_graph,
---> 93 allow_unreachable=True) # allow_unreachable flag
94
95
RuntimeError: Function 'AddmmBackward' returned nan values in its 2th output.
[26] |
st81556 | In both stack traces self.fc1_std creates the NaN values.
Could you also check the value of logvar?
If it’s very high, logvar.mul(0.5).exp_() will create an Inf.
Try to add some sanity checks in your code, e.g.
def reparamtrization_trick(self, mu, logvar):
if self.training:
std = logvar.mul(0.5).exp_()
if not torch.isfinite(std):
print('ERROR! std is ', std)
eps = torch.tensor(std.data.new(std.size()).normal_(0,1))
return eps.mul(std).add(mu) |
st81557 | Thanks, but I get
RuntimeError : bool value of Tensor with more than one value is ambiguous
I tried any() but got this error :
RuntimeError : any only supports torch.uint8 dtype |
st81558 | .any() should work on torch.uint8 and torch.bool tensors.
Which PyTorch version are you using? |
st81559 | honestly I am scared to update to the latest version, I’ve seen the github repo and there were couple of bad issues there! I was waiting for the 1.2.0 to get updated with the fix so I can update |
st81560 | OK. The error is still weird, as torch.isfinite is returning a uint8 tensor in 1.1.0.
Could you check the type of the return value of torch.isfinite?
If you are afraid of updating, I would recommend to create a new conda environment and just install the latest version there. |
st81561 | OK. here is the return value for torch.isfinite:
tensor([[1, 1, 1,  ..., 1, 1, 1],
        [1, 1, 1,  ..., 1, 1, 1],
        ...,
        [1, 1, 1,  ..., 1, 1, 1]], dtype=torch.uint8)
min: 0.0 max: 32.0
ERROR! std is tensor([[1, 1, 1,  ..., 1, 1, 1],
        [1, 1, 1,  ..., 1, 1, 1],
        ...,
        [1, 1, 1,  ..., 1, 1, 1]], dtype=torch.uint8)
min: 0.0 max: 32.0
sys:1: RuntimeWarning: Traceback of forward call that caused the error:
File "C:\Users\sama\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\sama\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "C:\Users\sama\Anaconda3\lib\site-packages\traitlets\config\application.py", line 658, in launch_instance
app.start()
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\kernelapp.py", line 505, in start
self.io_loop.start()
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\platform\asyncio.py", line 148, in start
self.asyncio_loop.run_forever()
File "C:\Users\sama\Anaconda3\lib\asyncio\base_events.py", line 427, in run_forever
self._run_once()
File "C:\Users\sama\Anaconda3\lib\asyncio\base_events.py", line 1440, in _run_once
handle._run()
File "C:\Users\sama\Anaconda3\lib\asyncio\events.py", line 145, in _run
self._callback(*self._args)
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\ioloop.py", line 690, in <lambda>
lambda f: self._run_callback(functools.partial(callback, future))
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\ioloop.py", line 743, in _run_callback
ret = callback()
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 787, in inner
self.run()
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 748, in run
yielded = self.gen.send(value)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 365, in process_one
yield gen.maybe_future(dispatch(*args))
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 272, in dispatch_shell
yield gen.maybe_future(handler(stream, idents, msg))
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 542, in execute_request
user_expressions, allow_stdin,
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\ipkernel.py", line 294, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\zmqshell.py", line 536, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2855, in run_cell
raw_cell, store_history, silent, shell_futures)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2881, in _run_cell
return runner(coro)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\async_helpers.py", line 68, in _pseudo_sync_runner
coro.send(None)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3058, in run_cell_async
interactivity=interactivity, compiler=compiler, result=result)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3249, in run_ast_nodes
if (await self.run_code(code, result, async_=asy)):
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3326, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-34-a5658b289138>", line 271, in <module>
preds,mu, logvar = model(imgs)
File "C:\Users\sama\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "<ipython-input-34-a5658b289138>", line 224, in forward
output_std = self.fc1_std(output)
File "C:\Users\sama\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\sama\Anaconda3\lib\site-packages\torch\nn\modules\linear.py", line 92, in forward
return F.linear(input, self.weight, self.bias)
File "C:\Users\sama\Anaconda3\lib\site-packages\torch\nn\functional.py", line 1406, in linear
ret = torch.addmm(bias, input, weight.t()) |
st81562 | OK, I guess I made a gaffe when using .any().
I have now set it to:
if not torch.isfinite(std).any():
print(f'ERROR! std is {std}')
and there is no issue : here is the output I get :
C:\Users\sama\Anaconda3\lib\site-packages\ipykernel_launcher.py:130: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
min: 10.0 max: 22.0
epoch (0/20) loss: 0.692846KL: -0.000649 recons 0.693495lr: [0.01]
min: 1.0 max: 26.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
min: 0.0 max: 32.0
sys:1: RuntimeWarning: Traceback of forward call that caused the error:
File "C:\Users\sama\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\sama\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "C:\Users\sama\Anaconda3\lib\site-packages\traitlets\config\application.py", line 658, in launch_instance
app.start()
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\kernelapp.py", line 505, in start
self.io_loop.start()
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\platform\asyncio.py", line 148, in start
self.asyncio_loop.run_forever()
File "C:\Users\sama\Anaconda3\lib\asyncio\base_events.py", line 427, in run_forever
self._run_once()
File "C:\Users\sama\Anaconda3\lib\asyncio\base_events.py", line 1440, in _run_once
handle._run()
File "C:\Users\sama\Anaconda3\lib\asyncio\events.py", line 145, in _run
self._callback(*self._args)
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\ioloop.py", line 690, in <lambda>
lambda f: self._run_callback(functools.partial(callback, future))
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\ioloop.py", line 743, in _run_callback
ret = callback()
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 787, in inner
self.run()
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 748, in run
yielded = self.gen.send(value)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 365, in process_one
yield gen.maybe_future(dispatch(*args))
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 272, in dispatch_shell
yield gen.maybe_future(handler(stream, idents, msg))
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 542, in execute_request
user_expressions, allow_stdin,
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\ipkernel.py", line 294, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\zmqshell.py", line 536, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2855, in run_cell
raw_cell, store_history, silent, shell_futures)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2881, in _run_cell
return runner(coro)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\async_helpers.py", line 68, in _pseudo_sync_runner
coro.send(None)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3058, in run_cell_async
interactivity=interactivity, compiler=compiler, result=result)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3249, in run_ast_nodes
if (await self.run_code(code, result, async_=asy)):
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3326, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-36-5016a401b848>", line 271, in <module>
preds,mu, logvar = model(imgs)
File "C:\Users\sama\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "<ipython-input-36-5016a401b848>", line 224, in forward
output_std = self.fc1_std(output)
File "C:\Users\sama\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\sama\Anaconda3\lib\site-packages\torch\nn\modules\linear.py", line 92, in forward
return F.linear(input, self.weight, self.bias)
File "C:\Users\sama\Anaconda3\lib\site-packages\torch\nn\functional.py", line 1406, in linear
ret = torch.addmm(bias, input, weight.t())
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
in
286 loss = loss_recons + kl
287 optimizer.zero_grad()
--> 288 loss.backward()
289 optimizer.step()
290 if i% interval ==0:
~\Anaconda3\lib\site-packages\torch\tensor.py in backward(self, gradient, retain_graph, create_graph)
105 products. Defaults to ``False``.
106 """
--> 107 torch.autograd.backward(self, gradient, retain_graph, create_graph)
108
109 def register_hook(self, hook):
~\Anaconda3\lib\site-packages\torch\autograd\__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
91 Variable._execution_engine.run_backward(
92 tensors, grad_tensors, retain_graph, create_graph,
---> 93 allow_unreachable=True) # allow_unreachable flag
94
95
RuntimeError: Function 'AddmmBackward' returned nan values in its 1th output.
So it means the std is fine right? and doesnt have inf in it? |
st81563 | self.fc1_std is unbounded, i.e. it doesn't have any activation function, as you can see, and it creates the logvar which we just checked in the reparametrization trick! What should I be checking about this layer?
st81564 | As this layer creates the NaN output, I would try to narrow down, when and if its parameters are getting NaN values.
Since the input to this layer is apparently finite, the weight or bias has to become NaN or Inf eventually to create a NaN output. |
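Something along these lines might help to narrow it down (just a sketch; call it after every optimizer.step() on the layer you suspect):
def check_layer(layer, name='fc1_std'):
    # print a message as soon as a parameter or its gradient contains NaN/Inf
    for pname, p in layer.named_parameters():
        if not torch.isfinite(p).all():
            print(f'non-finite values in {name}.{pname}')
        if p.grad is not None and not torch.isfinite(p.grad).all():
            print(f'non-finite values in {name}.{pname}.grad')

# e.g. check_layer(model.fc1_std)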
st81565 | So, should I just check the weight and bias norms and print them? Would that work?
Their norms seem normal, so the weights shouldn't have any Infs, as those would show up in the norm, am I right?
w: 5.785732269287109 b: 5.785732269287109
C:\Users\sama\Anaconda3\lib\site-packages\ipykernel_launcher.py:130: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
min: 10.0 max: 22.0
epoch (0/20) loss: 0.700486KL: -0.000755 recons 0.701241lr: [0.01]
w: 6.068157196044922 b: 6.068157196044922
min: 2.0 max: 27.0
w: 6.553120136260986 b: 6.553120136260986
min: 0.0 max: 32.0
w: 7.199903964996338 b: 7.199903964996338
min: 0.0 max: 32.0
w: 8.02138614654541 b: 8.02138614654541
min: 0.0 max: 32.0
w: 8.972151756286621 b: 8.972151756286621
min: 0.0 max: 32.0
w: 10.009076118469238 b: 10.009076118469238
min: 0.0 max: 32.0
w: 11.115690231323242 b: 11.115690231323242
min: 0.0 max: 32.0
sys:1: RuntimeWarning: Traceback of forward call that caused the error:
File "C:\Users\sama\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\sama\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "C:\Users\sama\Anaconda3\lib\site-packages\traitlets\config\application.py", line 658, in launch_instance
app.start()
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\kernelapp.py", line 505, in start
self.io_loop.start()
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\platform\asyncio.py", line 148, in start
self.asyncio_loop.run_forever()
File "C:\Users\sama\Anaconda3\lib\asyncio\base_events.py", line 427, in run_forever
self._run_once()
File "C:\Users\sama\Anaconda3\lib\asyncio\base_events.py", line 1440, in _run_once
handle._run()
File "C:\Users\sama\Anaconda3\lib\asyncio\events.py", line 145, in _run
self._callback(*self._args)
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\ioloop.py", line 690, in <lambda>
lambda f: self._run_callback(functools.partial(callback, future))
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\ioloop.py", line 743, in _run_callback
ret = callback()
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 787, in inner
self.run()
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 748, in run
yielded = self.gen.send(value)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 365, in process_one
yield gen.maybe_future(dispatch(*args))
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 272, in dispatch_shell
yield gen.maybe_future(handler(stream, idents, msg))
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 542, in execute_request
user_expressions, allow_stdin,
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\ipkernel.py", line 294, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\zmqshell.py", line 536, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2855, in run_cell
raw_cell, store_history, silent, shell_futures)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2881, in _run_cell
return runner(coro)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\async_helpers.py", line 68, in _pseudo_sync_runner
coro.send(None)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3058, in run_cell_async
interactivity=interactivity, compiler=compiler, result=result)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3249, in run_ast_nodes
if (await self.run_code(code, result, async_=asy)):
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3326, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-37-82d81355d333>", line 272, in <module>
preds,mu, logvar = model(imgs)
File "C:\Users\sama\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "<ipython-input-37-82d81355d333>", line 224, in forward
output_std = self.fc1_std(output)
File "C:\Users\sama\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\sama\Anaconda3\lib\site-packages\torch\nn\modules\linear.py", line 92, in forward
return F.linear(input, self.weight, self.bias)
File "C:\Users\sama\Anaconda3\lib\site-packages\torch\nn\functional.py", line 1406, in linear
ret = torch.addmm(bias, input, weight.t())
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
in
287 loss = loss_recons + kl
288 optimizer.zero_grad()
--> 289 loss.backward()
290 optimizer.step()
291 if i% interval ==0:
~\Anaconda3\lib\site-packages\torch\tensor.py in backward(self, gradient, retain_graph, create_graph)
105 products. Defaults to ``False``.
106 """
--> 107 torch.autograd.backward(self, gradient, retain_graph, create_graph)
108
109 def register_hook(self, hook):
~\Anaconda3\lib\site-packages\torch\autograd\__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
91 Variable._execution_engine.run_backward(
92 tensors, grad_tensors, retain_graph, create_graph,
---> 93 allow_unreachable=True) # allow_unreachable flag
94
95
RuntimeError: Function 'AddmmBackward' returned nan values in its 2th output. |
st81566 | I just noticed that PyTorch’s official VAE implementation also gives the exact same error.
I simply replaced my class with the official VAE example, which is this:
import torch
from torch import nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, ZDIMS=20):
        super(VAE, self).__init__()
        self.fc1 = nn.Linear(784, 400)
        self.fc21 = nn.Linear(400, ZDIMS)   # mean head
        self.fc22 = nn.Linear(400, ZDIMS)   # log-variance head
        self.fc3 = nn.Linear(ZDIMS, 400)
        self.fc4 = nn.Linear(400, 784)

    def encode(self, x):
        h1 = F.relu(self.fc1(x))
        return self.fc21(h1), self.fc22(h1)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std

    def decode(self, z):
        h3 = F.relu(self.fc3(z))
        return torch.sigmoid(self.fc4(h3))

    def forward(self, x):
        mu, logvar = self.encode(x.view(-1, 784))
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar
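For context, the surrounding training step looks roughly like the sketch below. I am reconstructing it from the traceback, so the exact loss terms (BCE with reduction='mean' plus a mean-reduced KL term) and the random stand-in batch are illustrative assumptions, not a copy of my script:

import torch
import torch.nn.functional as F

model = VAE(ZDIMS=20)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

imgs = torch.rand(128, 1, 28, 28)       # stand-in for an MNIST batch with values in [0, 1]
preds, mu, logvar = model(imgs)         # the call shown in the traceback
loss_recons = F.binary_cross_entropy(preds, imgs.view(-1, 784), reduction='mean')
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
loss = loss_recons + kl                 # as in the traceback
optimizer.zero_grad()
loss.backward()
optimizer.step()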
It failed at the encoding part, just like in my own class.
I’m getting the error:
RuntimeError: Function ‘AddmmBackward’ returned nan values in its 2th output.
This is the full output and stack trace:
epoch (0/2) loss: 0.699148KL: -0.000198 recons 0.699345lr: [0.001]
sys:1: RuntimeWarning: Traceback of forward call that caused the error:
File "C:\Users\sama\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\sama\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "C:\Users\sama\Anaconda3\lib\site-packages\traitlets\config\application.py", line 658, in launch_instance
app.start()
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\kernelapp.py", line 505, in start
self.io_loop.start()
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\platform\asyncio.py", line 148, in start
self.asyncio_loop.run_forever()
File "C:\Users\sama\Anaconda3\lib\asyncio\base_events.py", line 427, in run_forever
self._run_once()
File "C:\Users\sama\Anaconda3\lib\asyncio\base_events.py", line 1440, in _run_once
handle._run()
File "C:\Users\sama\Anaconda3\lib\asyncio\events.py", line 145, in _run
self._callback(*self._args)
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\ioloop.py", line 690, in <lambda>
lambda f: self._run_callback(functools.partial(callback, future))
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\ioloop.py", line 743, in _run_callback
ret = callback()
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 787, in inner
self.run()
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 748, in run
yielded = self.gen.send(value)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 365, in process_one
yield gen.maybe_future(dispatch(*args))
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 272, in dispatch_shell
yield gen.maybe_future(handler(stream, idents, msg))
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 542, in execute_request
user_expressions, allow_stdin,
File "C:\Users\sama\Anaconda3\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\ipkernel.py", line 294, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "C:\Users\sama\Anaconda3\lib\site-packages\ipykernel\zmqshell.py", line 536, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2855, in run_cell
raw_cell, store_history, silent, shell_futures)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2881, in _run_cell
return runner(coro)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\async_helpers.py", line 68, in _pseudo_sync_runner
coro.send(None)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3058, in run_cell_async
interactivity=interactivity, compiler=compiler, result=result)
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3249, in run_ast_nodes
if (await self.run_code(code, result, async_=asy)):
File "C:\Users\sama\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3326, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-18-7260c06e8316>", line 16, in <module>
preds,mu, logvar = model(imgs)
File "C:\Users\sama\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "<ipython-input-17-3a36ef1e8b33>", line 25, in forward
mu, logvar = self.encode(x.view(-1, 784))
File "<ipython-input-17-3a36ef1e8b33>", line 13, in encode
return self.fc21(h1), self.fc22(h1)
File "C:\Users\sama\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\sama\Anaconda3\lib\site-packages\torch\nn\modules\linear.py", line 92, in forward
return F.linear(input, self.weight, self.bias)
File "C:\Users\sama\Anaconda3\lib\site-packages\torch\nn\functional.py", line 1406, in linear
ret = torch.addmm(bias, input, weight.t())
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
in
31 loss = loss_recons + kl
32 optimizer.zero_grad()
---> 33 loss.backward()
34 optimizer.step()
35 if i% interval ==0:
~\Anaconda3\lib\site-packages\torch\tensor.py in backward(self, gradient, retain_graph, create_graph)
105 products. Defaults to ``False``.
106 """
--> 107 torch.autograd.backward(self, gradient, retain_graph, create_graph)
108
109 def register_hook(self, hook):
~\Anaconda3\lib\site-packages\torch\autograd\__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
91 Variable._execution_engine.run_backward(
92 tensors, grad_tensors, retain_graph, create_graph,
---> 93 allow_unreachable=True) # allow_unreachable flag
94
95
RuntimeError: Function 'AddmmBackward' returned nan values in its 2th output.
I’d really appreciate any kind of help on this, as I’m completely clueless about what is happening! It seems torch.addmm is the culprit, but why? The weight norms are normal; I can’t figure this out!
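For reference, the “Traceback of forward call that caused the error” warning above comes, as far as I can tell, from running with anomaly detection switched on, and this is roughly the quick NaN/Inf check I ran over the parameters and gradients (just a sketch):

torch.autograd.set_detect_anomaly(True)   # source of the forward-call traceback above

for name, p in model.named_parameters():
    if torch.isnan(p).any() or torch.isinf(p).any():
        print('NaN/Inf in parameter:', name)
    if p.grad is not None and (torch.isnan(p.grad).any() or torch.isinf(p.grad).any()):
        print('NaN/Inf in gradient of:', name)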
Updating to the latest PyTorch (i.e. 1.2.0) didn’t solve the problem; the same error happens regardless of the VAE implementation! |
st81567 | Thank God, I finally found the culprit! It was (and is) the BCE!
For some weird reason, older versions of PyTorch worked perfectly fine with reduction='mean'; however, the newer versions I tested myself, including 1.1.0 and ultimately 1.2.0+cu92, only work without a hitch with reduction='sum'!
I have no idea what changed between PyTorch 0.4 and 1.1.0 that results in such weird behavior.
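In code, the only change was the reduction in the reconstruction term; what I ended up with is essentially the loss from the official example (a sketch, assuming the usual flattened 28x28 input):

import torch
import torch.nn.functional as F

def loss_function(recon_x, x, mu, logvar):
    # summed BCE instead of the mean-reduced version that produced NaNs for me
    BCE = F.binary_cross_entropy(recon_x, x.view(-1, 784), reduction='sum')
    # KL divergence of N(mu, sigma^2) from the unit Gaussian prior, summed to match
    KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return BCE + KLD

Note that the summed loss has a much larger magnitude than the mean-reduced one, so take this as a sketch of what trains for me rather than an explanation of why 'mean' blew up. |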