st101300 | I changed my implementation into a simpler one:
class LPU(nn.Module):
    def __init__(self, Encoder_size, Hidden_size, Decoder_size):
        # simple autoencoder structure
        super(LPU, self).__init__()
        # encoder input: main sensory state (Encoder_size), the last hidden state,
        # and the last hidden state from a superior LPU
        self.encoder = nn.Linear((Encoder_size + 2 * Hidden_size), Hidden_size)
        self.act_encoder = nn.Sigmoid()
        self.decoder = nn.Linear(Hidden_size, Decoder_size)
        self.act_decoder = nn.Sigmoid()

    def forward(self, Xt, last_Hidden, last_Hidden_sup):
        # Xt: (batch, Encoder_size); last_Hidden, last_Hidden_sup: (batch, Hidden_size)
        input_encoder = torch.cat((Xt, last_Hidden, last_Hidden_sup), 1)
        encoder_process = self.encoder(input_encoder)
        representation = self.act_encoder(encoder_process)  # was act_decoder; both are Sigmoid, but act_encoder is the intended module here
        decoder_process = self.decoder(representation)
        out_decoder = self.act_decoder(decoder_process)
        return out_decoder, representation |
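For reference, a minimal usage sketch (not from the original post; the sizes below are made-up assumptions, just to show the expected shapes):
lpu = LPU(Encoder_size=16, Hidden_size=8, Decoder_size=16)
Xt = torch.randn(4, 16)        # batch of 4 sensory states
h_prev = torch.zeros(4, 8)     # last hidden state of this LPU
h_sup = torch.zeros(4, 8)      # last hidden state of the superior LPU
out, rep = lpu(Xt, h_prev, h_sup)
print(out.shape, rep.shape)    # torch.Size([4, 16]) torch.Size([4, 8])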
st101301 | Hi, does autograd support setting values in-place?
from torch.autograd import Variable
invar = Variable(torch.rand(5, 5), requires_grad=True)
midvar = invar + 1
midvar.data[:, 0] = 0
loss = midvar.sum()
loss.backward()
print(invar.grad.data)
the output is
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
I expect the grad of invar to be
0 1 1 1 1
0 1 1 1 1
0 1 1 1 1
0 1 1 1 1
0 1 1 1 1
If it is not supported, is there any way to implement this function? Encapsulate it as a torch.autograd.Function ? |
st101302 | Autograd works only if you perform all operations on Variables, so it knows what has changed.
In your code, midvar.data[:, 0] modifies the tensor wrapped inside the Variable directly. Instead of doing that,
modifying the Variable itself with the same operation lets autograd compute the gradients correctly:
from torch.autograd import Variable
invar = Variable(torch.rand(5, 5), requires_grad=True)
midvar = invar + 1
midvar[:, 0] = 0
loss = midvar.sum()
loss.backward()
print(invar.grad.data) |
st101303 | Thanks~
I used to implement it with midvar[:, 0] = 0 and encountered a bug as below; however, I cannot reproduce it… Maybe I just did something wrong.
RuntimeError: a leaf Variable that requires grad has been used in an in-place operation
Anyway now your solution works well. |
st101304 | Hi, @richard
Autograd works correctly with the code you supplied. However, runtime error still occurs with the code below, which is part of the forward function of my custom module.
message_weight = torch.sigmoid(fc_self_out_reshape + fc_neig_out_reshape + fc_pointcloud_out_relative)
message_weight[:, 0, :, :] = 0.5
message_weight = message_weight + 10
The error is:
Traceback (most recent call last):
File "/home/zhangyi/pytorch-ws/test_PGNNet.py", line 31, in <module>
main()
File "/home/zhangyi/pytorch-ws/test_PGNNet.py", line 27, in main
loss.backward()
File "/home/zhangyi/miniconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 148, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
File "/home/zhangyi/miniconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
variables, grad_variables, retain_graph)
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
If I comment out the line
message_weight[:, 0, :, :] = 0.5
autograd works correctly. Very strange…
Is there any other reason for this runtime error? |
st101305 | I find that adding a line before
message_weight[:, 0, :, :] = 0.5
such that
message_weight = message_weight.clone()
message_weight[:, 0, :, :] = 0.5
autograd works.
I am wondering why clone is not necessary in the code you supplied. |
st101306 | The reason for the error should be clear from the error message. You are modifying, in-place, a Variable whose values are needed for gradient computation. If you modify it in-place, PyTorch can no longer check the input and output, and thus cannot compute the gradient. |
st101307 | But what about this code snippet?
from torch.autograd import Variable
invar = Variable(torch.rand(5, 5), requires_grad=True)
midvar = invar + 1
midvar[:, 0] = 0
loss = midvar.sum()
loss.backward()
print(invar.grad.data)
The variable midvar is modified inplace.
midvar[:, 0] = 0
But autograd works. |
st101308 | It is different from the above case. In that case, a value is overwritten on a variable whose values are not needed anywhere else to compute the gradient. But in your code above, message_weight must have been used somewhere in some backward function to compute a gradient. Hence the error. |
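To make the distinction concrete, here is a minimal sketch (not from the thread, written for a recent PyTorch where tensors carry autograd state directly). add does not save its output for backward, so overwriting part of the result is fine; sigmoid does save its output, so the same in-place write breaks backward:
a = torch.randn(5, requires_grad=True)
b = a + 1                  # add needs nothing from b to compute its gradient
b[0] = 0                   # fine
b.sum().backward()

c = torch.randn(5, requires_grad=True)
d = torch.sigmoid(c)       # sigmoid saves its output for the backward pass
d[0] = 0                   # clobbers the saved tensor
d.sum().backward()         # RuntimeError: ... modified by an inplace operation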
st101309 | fyi http://pytorch.org/docs/master/notes/autograd.html?highlight=saved_tensors#in-place-correctness-checks 252 |
st101310 | I am very new to Torch so please accept my novice questions
I am getting the same error on the following:
loss_l, loss_c = criterion(out, targets)
loss = loss_l + loss_c
loss.backward()
Any advice about what I can do? I will really appreciate. |
st101311 | I’ve trained a simple CNN until the training accuracy is >99% (so it overfits, but for now I’m testing my ability to push test images through a pretrained network).
However when I reuse an image from the training data to look at the output it’s the same classification for every image I try with the same ‘probabilities’ when using softmax.
The code looks as below (I’ve tried to simplify it to its key points to see if I’m missing something really obvious).
tester = torch.load(IMG_PATH+'CategoricalNet.pt')
print(tester)
CategoricalNet(
(feature_extractor): Sequential(
(0): Conv2d(1, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(1): ReLU()
(2): Conv2d(64, 128, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(3): ReLU()
(4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(6): Conv2d(128, 256, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(7): ReLU()
(8): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(classifier): Sequential(
(0): Dropout(p=0.25)
(1): Linear(in_features=65536, out_features=256, bias=True)
(2): ReLU()
(3): Dropout(p=0.25)
(4): Linear(in_features=256, out_features=10, bias=True)
)
)
test = fits.open("E:/Documents/Python_Scripts/CNN/TRAINING/EXAMPLE_DATA.fits")
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize([0], [1])])
data = transform(d.reshape(*d.shape, 1)).unsqueeze(0).float().cuda()
output = torch.nn.functional.softmax(tester(data),dim=1).cpu().detach().numpy()
print('TRUE LABEL=',test2[0].header['LABEL'])
print(output)
TRUE LABEL= 5
[[0.10622309 0.1435124 0.05875074 0.0495275 0.06827779 0.03227602
0.17474921 0.17845923 0.15276037 0.03546367]]
TEST LABEL= 7
And similarly for another test case:
TRUE LABEL= 0
[[0.10622309 0.1435124 0.05875074 0.0495275 0.06827779 0.03227602
0.17474921 0.17845923 0.15276037 0.03546367]]
TEST LABEL= 7
I’ve checked that the image transformation matches that in the training:
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize([0], [1])])
So I’m not sure why the predictions would be the same for every test case; any help on this matter would be greatly appreciated! |
st101312 | Try tester = tester.eval()
Otherwise the batch norm running statistics aren’t used (and dropout stays active) |
st101313 | Unfortunately I gave that a try and it doesn’t seem to change the problem. I use .eval() before saving the network (which is done using:
torch.save(model, IMG_PATH+'CategoricalNet.pt')
and then repeat .eval() when loading in the network again just to be sure.
Thanks for the reply by the way! |
st101314 | So if it’s the same data and the same network, there must be some kind of discrepancy in your training vs test code.
I suggest the following check: first remove shuffling so you can load the train and “test” sets (which is just the train folder) in the same order. Print out torch.norm(data) for both cases to actually verify they are the same input to the network. If they are, and the output is different, try to step through the layer weights to see which are different. Good luck! |
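As a rough sketch of that check (the names data_train, data_test and the reloaded model tester are assumptions, not from the post):
# verify the inputs really are identical
print(torch.norm(data_train), torch.norm(data_test))

# then compare the trained and reloaded weights layer by layer
for (name_a, p_a), (name_b, p_b) in zip(model.named_parameters(), tester.named_parameters()):
    if not torch.equal(p_a.data.cpu(), p_b.data.cpu()):
        print('mismatch in', name_a)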
st101315 | It looks like I was mishandling the transformations, as outputting the normalised test data doesn’t seem to have scaled the values:
data = np.random.uniform(0,10,[64,64])
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize([0],[1])])
d = transform(data.reshape(*data.shape, 1)).unsqueeze(0).float().cuda()
output = torch.nn.functional.softmax(tester(d),dim=1).cpu().detach().numpy()
plt.figure()
plt.imshow(d[0, 0, :, :].cpu().numpy(), cmap='jet')  # move off the GPU before plotting
plt.colorbar()
Upon doing more research, I’ve figured out that this is because the scaling does the operation: (x-0)/1 which of course makes no change to the data |
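For anyone hitting the same thing, a minimal sketch of a transform that actually standardizes the data (the statistics here are computed from the array itself purely for illustration; in practice you would use the training-set mean and std):
import numpy as np
import torchvision.transforms as transforms

data = np.random.uniform(0, 10, [64, 64]).astype(np.float32)
mean, std = float(data.mean()), float(data.std())
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([mean], [std]),   # (x - mean) / std instead of (x - 0) / 1
])
d = transform(data.reshape(*data.shape, 1)).unsqueeze(0)
print(d.mean().item(), d.std().item())     # roughly 0 and 1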
st101316 | Pytorch 0.3.0+ CUDA 8.0. When run the code after some epoches, it fails with the error information:
RuntimeError: cuda runtime error (4). I have tried some solutions including reboot the system, update pytorch, but it still has this error. Can someone help me solve this problem? Thanks a lot! |
st101317 | When I use this code (xception model) and run it like below
if __name__ == '__main__':
    model = xception()
    print('Done')
the error comes out:
RuntimeError: Error(s) in loading state_dict for Xception:
While copying the parameter named “block1.rep.0.pointwise.weight”, whose dimensions in the model are torch.Size([128, 64, 1, 1]) and whose dimensions in the checkpoint are torch.Size([128, 64]).
While copying the parameter named “block1.rep.3.pointwise.weight”, whose dimensions in the model are torch.Size([128, 128, 1, 1]) and whose dimensions in the checkpoint are torch.Size([128, 128]).
While copying the parameter named “block2.rep.1.pointwise.weight”, whose dimensions in the model are torch.Size([256, 128, 1, 1]) and whose dimensions in the checkpoint are torch.Size([256, 128]).
While copying the parameter named “block2.rep.4.pointwise.weight”, whose dimensions in the model are torch.Size([256, 256, 1, 1]) and whose dimensions in the checkpoint are torch.Size([256, 256]).
While copying the parameter named “block3.rep.1.pointwise.weight”, whose dimensions in the model are torch.Size([728, 256, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 256]).
While copying the parameter named “block3.rep.4.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block4.rep.1.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block4.rep.4.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block4.rep.7.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block5.rep.1.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block5.rep.4.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block5.rep.7.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block6.rep.1.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block6.rep.4.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block6.rep.7.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block7.rep.1.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block7.rep.4.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block7.rep.7.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block8.rep.1.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block8.rep.4.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block8.rep.7.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block9.rep.1.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block9.rep.4.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block9.rep.7.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block10.rep.1.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block10.rep.4.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block10.rep.7.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block11.rep.1.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block11.rep.4.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block11.rep.7.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block12.rep.1.pointwise.weight”, whose dimensions in the model are torch.Size([728, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([728, 728]).
While copying the parameter named “block12.rep.4.pointwise.weight”, whose dimensions in the model are torch.Size([1024, 728, 1, 1]) and whose dimensions in the checkpoint are torch.Size([1024, 728]).
While copying the parameter named “conv3.pointwise.weight”, whose dimensions in the model are torch.Size([1536, 1024, 1, 1]) and whose dimensions in the checkpoint are torch.Size([1536, 1024]).
While copying the parameter named “conv4.pointwise.weight”, whose dimensions in the model are torch.Size([2048, 1536, 1, 1]) and whose dimensions in the checkpoint are torch.Size([2048, 1536]).
I don’t know how to deal with it , looking forward to your reply, thank you!!! |
st101318 | Solved by kelam_goutam in post #2 |
st101319 | I guess this basically means that the tensors defined by your model are 4D, while the checkpoint you are trying to load saved the parameters as 2D tensors.
If you unsqueeze the pretrained tensors to make them 4D, I think it will work. |
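A minimal, untested sketch of that idea (checkpoint_path and model are placeholders for however you load the checkpoint and build the Xception instance):
state_dict = torch.load(checkpoint_path)
for name, param in list(state_dict.items()):
    if name.endswith('pointwise.weight') and param.dim() == 2:
        state_dict[name] = param.unsqueeze(-1).unsqueeze(-1)   # [out, in] -> [out, in, 1, 1]
model.load_state_dict(state_dict)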
st101320 | I need to train a net using 3D images with dimensions Batch x Channel x Depth x Height x Width, and the dimensions of the output and label are B x D x H x W, but I can’t find a proper loss function among the torch.nn loss functions.
Can you give me some suggestions? Thank you! |
st101321 | Hi, depending on the problem we’ve had good luck with CrossEntropyLoss as well as Dice.
Assuming you have your model producing a two-channel probability map, you should create a 1D view of the 3D images and the respective mask this way:
criterion = nn.CrossEntropyLoss()
output = model(input)
output = output.permute(0,2,3,4,1).contiguous()
output = output.view(output.numel() // 2, 2)
mask = mask.view(-1)
loss = criterion(output, mask)
For the Dice index, here’s a 2D implementation. I can’t send you the 3D version right now as I’m on the go: https://github.com/rogertrullo/pytorch/blob/rogertrullo-dice_loss/torch/nn/functional.py#L708 190 |
st101322 | I have run into some problems with BCELoss.
The error is:
/opt/conda/conda-bld/pytorch_1524577177097/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [4,0,0] Assertion t >= 0 && t < n_classes failed.
/opt/conda/conda-bld/pytorch_1524577177097/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [5,0,0] Assertion t >= 0 && t < n_classes failed.
Traceback (most recent call last):
File “main.py”, line 64, in
message = initrun(dataloader, netD, netG, args)
File “/home/student1/zps/drgan/run/run.py”, line 17, in initrun
mes = train_single_DRGAN(dataloader, netD, netG, args)
File “/home/student1/zps/drgan/run/train_single_DRGAN.py”, line 201, in train_single_DRGAN
L_d_gan = BCE_Loss(real_output[:, Nd].sigmoid(), batch_real_label) + BCE_Loss(
File “/home/student1/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py”, line 491, in __call__
result = self.forward(*input, **kwargs)
File “/home/student1/anaconda2/lib/python2.7/site-packages/torch/nn/modules/loss.py”, line 433, in forward
reduce=self.reduce)
File “/home/student1/anaconda2/lib/python2.7/site-packages/torch/nn/functional.py”, line 1483, in binary_cross_entropy
return torch._C._nn.binary_cross_entropy(input, target, weight, size_average, reduce)
RuntimeError: cudaEventCreateWithFlags in future ctor: device-side assert triggered
Does anyone meet this problem ?
Thanks a lot for any help!! |
st101323 | Hi,
Suppose we have a vocabulary V = V_global + V_instance_specific. V_global is fixed and V_instance_specific changes for each instance. Now, I want to get a softmax over V for each instance and use it in the loss function.
How can I do that? |
st101324 | You can always dynamically concatenate the arrays of sizes V_global and V_instance_specific to get a (V_global + V_instance_specific)-sized array and use F.log_softmax() + NLLLoss() (https://pytorch.org/docs/stable/nn.html#torch.nn.functional.softmax) dynamically. |
st101325 | Thanks for your response.
But the size of V_instance_specific is not fixed. NLLLoss() typically operates on a fixed-length vector, doesn’t it?
st101326 | I think NLLLoss() is just a function that computes the dot product between the given log-likelihood values and a one-hot encoding of the desired class. It can work with input of any length dynamically. |
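To illustrate, a minimal sketch (the tensors and sizes are made up; in practice the logits would come from your model):
import torch
import torch.nn.functional as F

logits_global = torch.randn(1, 50, requires_grad=True)   # |V_global| = 50, fixed
logits_inst = torch.randn(1, 7, requires_grad=True)       # |V_instance_specific| = 7, varies per instance
logits = torch.cat([logits_global, logits_inst], dim=1)    # size differs from instance to instance
log_probs = F.log_softmax(logits, dim=1)
target = torch.tensor([53])                                # index into the concatenated vocabulary
loss = F.nll_loss(log_probs, target)
loss.backward()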
st101327 | Traceback (most recent call last):
File “torch2onnx.py”, line 8, in <module>
import torch.onnx
File “/media/hls/f2906de6-c260-4b56-89d8-7718e594b683/anaconda3/lib/python3.6/site-packages/torch/onnx/__init__.py”, line 6, in <module>
TensorProtoDataType = _C._onnx.TensorProtoDataType
AttributeError: module ‘torch._C’ has no attribute ‘_onnx’ |
st101328 | concatenated_input = Variable(torch.cat([input.data.view(-1,3*32*32), condition.data], 1))
TypeError: cat received an invalid combination of arguments - got (list, int), but expected one of:
(sequence[torch.cuda.FloatTensor] seq)
(sequence[torch.cuda.FloatTensor] seq, int dim)
didn’t match because some of the arguments have invalid types: (list, int)
This works if they are simply FloatTensors and not cuda.FloatTensors. Is this expected? Should I use a different function to concatenate CUDA tensors?
Thanks a lot! |
st101329 | maybe input.data and condition.data are of different types. Is one of them a torch.FloatTensor and the other of type torch.cuda.FloatTensor? |
st101330 | You were right: condition.data was a FloatTensor. This was a bit of a surprise because I explicitly called .cuda() on the condition variable. Is there something that I am obviously missing?
Thanks!
Pratheeksha |
st101331 | Calling .cuda() on a Variable is not in-place, so you have to do var = var.cuda(). |
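In other words (a minimal sketch, not from the thread):
condition = torch.randn(4, 10)
condition.cuda()                 # no effect: the returned CUDA copy is discarded
print(condition.is_cuda)         # False
condition = condition.cuda()     # rebind the name to the CUDA copy
print(condition.is_cuda)         # True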
st101332 | Hi,
I’m new to PyTorch and I have exactly the same problem using the concatenate function.
Both my variables, padding and source, are torch.LongTensor.
Probably I’m doing something wrong. So this is my code:
padding = torch.LongTensor(np.zeros(args.distance_context, dtype=np.int))
sequence = torch.cat([padding, source[i:seq_len+1]],dim=0)
TypeError: cat received an invalid combination of arguments - got (list, dim=int), but expected one of:
(sequence[torch.LongTensor] seq)
(sequence[torch.LongTensor] seq, int dim)
didn’t match because some of the arguments have invalid types: (list, dim=int)
Did you solve this problem?
Thank you very much! |
st101333 | If I use .cuda() in each Tensor I get this message:
sequence = torch.cat([padding,sub_sequence],dim=0)
RuntimeError: inconsistent tensor sizes at /b/wheel/pytorch-src/torch/lib/THC/generic/THCTensorMath.cu:141 |
st101334 | What shapes are padding and sub_sequence? They have to be of the same shape except in dim=0 (the dimension along which they are concatenated can differ in size, but the other dimensions should have the same shape). |
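For example (a minimal sketch on CPU tensors; the same rule applies to CUDA tensors):
a = torch.zeros(3, 5)
b = torch.ones(2, 5)
print(torch.cat([a, b], dim=0).shape)   # torch.Size([5, 5]): the non-cat dims match

c = torch.ones(2, 4)
torch.cat([a, c], dim=0)                # error: sizes must match except in the cat dimension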
st101335 | Hi,
I am having the same problem:
[...]
x = torch.cat([x, fill], 1)
TypeError: cat received an invalid combination of arguments - got (list, int), but expected one of:
* (sequence[torch.FloatTensor] seq)
* (sequence[torch.FloatTensor] seq, int dim)
didn't match because some of the arguments have invalid types: (list, int)
After reading this discussion and similar others, I checked type(_) and _.data.type() of x and fill:
x:
torch.FloatTensor
<class 'torch.autograd.variable.Variable'>
torch.Size([10, 1, 40])
fill:
torch.FloatTensor
<class 'torch.autograd.variable.Variable'>
torch.Size([10, 1, 40])
I am not using .cuda(), and both tensors seem to have the same size and type.
This is the code, which causes the problem:
if x.size()[1] < batch_size:
    fill = (x[:, 0, :].contiguous().view(x.size()[0], 1, x.size()[2]))
    for x in range(batch_size - x.size()[1]):
        x = torch.cat([x, fill], 1)
(I want to fill the last batch of an epoch in case the number of elements in the data set % batch_size is not 0)
Any idea what could be the problem here?
Thanks! |
st101336 | I’m facing a similar problem using PyTorch 0.3.1.
When invoking torch.cat(hidden,encoder_outputs)
hidden type : [torch.cuda.FloatTensor of size 32x90x512 (GPU 0)]
encoder_outputs type: [torch.cuda.FloatTensor of size 32x90x512 (GPU 0)]
Traceback (most recent call last):
File “/home/yb/project/crnn.pytorch-master/crnn_att_main.py”, line 365, in
cost = trainBatch(crnn, optimizer)
File “/home/yb/project/crnn.pytorch-master/crnn_att_main.py”, line 340, in trainBatch
preds = crnn(image,batch_label_tensor) #44X3X7
File “/ENTER/lib/python2.7/site-packages/torch/nn/modules/module.py”, line 357, in __call__
result = self.forward(*input, **kwargs)
File “/ENTER/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py”, line 71, in forward
return self.module(*inputs[0], **kwargs[0])
File “/ENTER/lib/python2.7/site-packages/torch/nn/modules/module.py”, line 357, in __call__
result = self.forward(*input, **kwargs)
File “/home/yb/project/crnn.pytorch-master/models/crnn_att.py”, line 201, in forward
output, hidden, encoder_outputs)
File “/ENTER/lib/python2.7/site-packages/torch/nn/modules/module.py”, line 357, in __call__
result = self.forward(*input, **kwargs)
File “/home/yb/project/crnn.pytorch-master/models/crnn_att.py”, line 39, in forward
rnn_input = torch.cat([embedded, context.data], 2)
TypeError: cat received an invalid combination of arguments - got (list, int), but expected one of:
(sequence[torch.FloatTensor] seq)
(sequence[torch.FloatTensor] seq, int dim)
didn’t match because some of the arguments have invalid types: (list, int)
So the problem is that the two input tensors are both cuda.FloatTensor, but it’s not the expected type.
Could you please give some advice on how to use cat func for cuda.FloatTensor |
st101337 | It’s so weird. I want to use grid_sample in the forward function. If I use one GPU there is no problem, but if I use two GPUs, training gets stuck right before the grid_sample call. I really have no idea about it. Does anyone know why? Thanks! (PyTorch 0.4) |
st101338 | I find there are some native MPI ops under the caffe2/mpi path. I just wonder if these ops have been verified in a multi-node environment, or do we have an example showing how to use them?
BTW, I have checked the example ‘resnet50_trainer.py’, but it seems it only supports the Gloo engine. |
st101339 | Hello!
I’m having issues that started occurring overnight (8/21/2018) with fastai’s modules built on top of PyTorch. Long story short, their learner started throwing this deprecation warning about nn.functional.sigmoid in every iteration of an epoch (which is very annoying):
(screenshot of the nn.functional.sigmoid deprecation warning)
I tried looking through functional.py as well as the fastai libraries to find out where I can substitute torch.sigmoid instead, but I could not find it. How can I easily replace nn.functional.sigmoid with torch.sigmoid? |
st101340 | If you go to the Fast.AI GitHub and search, you will find it in lots of places. Try this: https://github.com/fastai/fastai/search?q=F.sigmoid&unscoped_q=F.sigmoid
It’s because the Fast.AI library is not (yet) updated to PyTorch 0.4. They plan to do that with the next version of the course starting in mid-October, so that the existing notebooks work and stay aligned with their MOOC videos. If you downgrade your PyTorch to 0.3.1, it should not give you these warnings.
Also, I see that you are not using a virtual env or conda env. I would recommend that you create a FastAI environment using the steps in the README at https://github.com/fastai/fastai and keep a separate env for PyTorch 0.4 experiments. |
st101341 | Sam:
Thanks for the quick response. I’m still pretty new at python, much less github, so my ability to self-help is still in a growing phase -_-. I’ll look into the link you provided and if that doesn’t work I’ll downgrade pytorch.
RE: the environment, I think I did do that – but I never did use it as the kernel because I never noticed Jeremy doing so in his lectures. Is the below what you mean by that?
(screenshot of the Jupyter notebook kernel menu) |
st101342 | Hi Ben - You are doing great. Jeremy doesn’t do that in his videos because he probably first activates the environment (I believe) and then runs the Jupyter notebook, or he doesn’t have any other environment configured. When this course was built, PyTorch 0.3 was the latest version.
Yeah, change the kernel to “FastAI custom” as you see it and it should hopefully resolve the issue. |
st101343 | Also, take a look at http://forums.fast.ai/ 5, it has tons of resources and folks ready to help with any questions on Fast.AI course notebooks. |
st101344 | Oh I’ve been all over the forums, I came here because I couldn’t find this particular issue (I think because I’m running through the course way after it happened). Also this seemed more Torch-specific.
I’ve asked a few questions over there, but no replies yet! it’s all good, I think the lesson wiki discussion pages are probably most active when the course is in session anyway. |
st101345 | Closing up this thread by saying that while the FastAI custom environment did not solve the issue, downgrading to PyTorch 0.3.1 did do the trick.
I had to do so via this version from peterjc123 because the typical conda install command didn’t work (anaconda didn’t have the 0.3.x packages anymore)
The command I used was:
conda install -c peterjc123 pytorch cuda80 |
st101346 | I am trying to test a trained model on some new test data.
I load the model as below:
model = ModelClass(input_dim=input_dim, vocab_size=vocab_size, model_config=model_config)
Then I load the model_state_dict into this newly created model.
model_state_dict = torch.load(model_path)['model_state_dict']
Everything works fine till here.
However, when I run a minibatch of the test data through this model, I get a
RuntimeError: parameter types mismatch. This error is originating from the forward pass of the Encoder RNN. Below is the Traceback:
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/rnn.py", line 192, in forward
output, hidden = func(input, self.all_weights, hx, batch_sizes)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/_functions/rnn.py", line 323, in forward
return func(input, *fargs, **fkwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/_functions/rnn.py", line 287, in forward
dropout_ts)
The data generating process is the same. I do not see where a types mismatch could come from. Can some one help me in debugging this?
Thanks! |
st101347 | Solved by InnovArul in post #2 |
st101348 | Can you make sure to post the complete error text with full error message?
Also, if you are using GPU, make sure both data and model are in GPU while calling forward method. |
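For reference, a minimal sketch of that device check in PyTorch 0.4-style code (test_loader is a placeholder for however the test batches are produced):
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)
for inputs, targets in test_loader:
    inputs = inputs.to(device)        # data and model now live on the same device
    outputs = model(inputs)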
st101349 | @InnovArul, you were right; this mismatch was because the model was not on the GPU while the tensors were. I am closing this post. |
st101350 | The default Linear layer weight initialization mechanism isn’t clear to me.
If I use the default initialization, without calling torch.nn.init.XX or reset_parameters(), I get different weight values than when I initialize explicitly.
Consider this code:
# init_explore.py
# PyTorch 0.4 Anaconda3 4.1.1 (Python 3.5.2)
# explore layer initializations
import torch as T
class Net1(T.nn.Module):
    # default weight initialization
    def __init__(self):
        super(Net1, self).__init__()
        self.fc1 = T.nn.Linear(4, 5)

class Net2(T.nn.Module):
    # explicit nn.init
    def __init__(self):
        super(Net2, self).__init__()
        self.fc1 = T.nn.Linear(4, 5)
        x = 0.5  # 1. / sqrt(4)
        T.nn.init.uniform_(self.fc1.weight, -x, x)
        T.nn.init.uniform_(self.fc1.bias, -x, x)

# -----------------------------------------------------------

def main():
    print("\nBegin Init explore with PyTorch \n")

    T.manual_seed(1)
    net1 = Net1()
    # net1.fc1.reset_parameters()
    print("Default init weights: ")
    print(net1.fc1.weight)

    T.manual_seed(1)
    net2 = Net2()
    print("\n\nExplicit nn.init.uniform_ weights: ")
    print(net2.fc1.weight)

    print("\n\nEnd Init explore")

if __name__ == "__main__":
    main()
The weight values of the two networks are different. If the reset_parameters() statement is un-commented, the weight values are the same.
Is this correct behavior?
(apologies in advance for any etiquette blunders – this is my first post) |
st101351 | (from the poster – sorry about the formatting – I have no idea what went wrong . . . ) |
st101352 | You can add code using three backticks (```).
I’ve formatted your code for you.
The different values are due to an additional “random operation” for Net2.
While you are setting the random seed for Net1 directly before sampling the parameters, you create the linear layer for Net2 first, and then sample the parameters again.
Add T.manual_seed(1) directly before T.nn.init.uniform_ in Net2 and you will get the same values. |
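In code, the suggested change would look roughly like this (a sketch of Net2 with the extra reseed; under PyTorch 0.4 the default Linear init is also a uniform draw in (-1/sqrt(fan_in), 1/sqrt(fan_in)), so the random streams line up):
class Net2(T.nn.Module):
    def __init__(self):
        super(Net2, self).__init__()
        self.fc1 = T.nn.Linear(4, 5)   # this already consumes random numbers for the default init
        x = 0.5  # 1. / sqrt(4)
        T.manual_seed(1)               # reseed so uniform_ draws the same stream Net1 used
        T.nn.init.uniform_(self.fc1.weight, -x, x)
        T.nn.init.uniform_(self.fc1.bias, -x, x)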
st101353 | I am new to PyTorch and I am implementing the paper below. The main algorithm works fine, but I am struggling to implement the gradient bias correction in section 3.2. Writing a custom torch.autograd.Function and adding the running exponential moving average to the context seems to be the way to do it, but I am getting the following error on the backward call: “RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn”.
class LogMeanExp_Unbiased(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, running_ema, alpha=0.1):
        yt = torch.exp(input).mean()
        yt.requires_grad = True
        if running_ema == 0:
            running_ema = yt
        else:
            running_ema = alpha * yt + (1 - alpha) * running_ema.item()
        ctx.input = input
        ctx.yt = yt
        ctx.running_ema = running_ema
        return yt.log()

    @staticmethod
    def backward(ctx, grad_output):
        return (ctx.input * ctx.yt).backward() / ctx.running_ema, None, None
Paper: https://arxiv.org/abs/1801.04062 |
st101354 | Solved by JuJu in post #3 |
st101355 | JuJu:
@staticmethod
def backward(ctx, grad_output):
    return (ctx.input * ctx.yt).backward() / ctx.running_ema, None, None
I think you do not need the .backward() call inside the backward method:
@staticmethod
def backward(ctx, grad_output):
    return (ctx.input * ctx.yt) / ctx.running_ema, None, None
I did not read the paper, but this will at least resolve the error. |
st101356 | Thank you for your reply InnovArul.
Edit: In case someone else is looking to do something similar the solution seems to be:
class ExpMeanLog(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, running_ema):
        ctx.save_for_backward(input, running_ema)
        return input.exp().mean().log()

    @staticmethod
    def backward(ctx, grad_output):
        input, running_ema = ctx.saved_tensors
        return grad_output * input.exp() / running_ema / input.shape[0], None |
st101357 | I have a sequence of length 10, and I wish to predict the second half given the first half. I followed the Seq2Seq tutorial for English-French translation, removed all softmax layers, and added ReLU activations instead. Then I changed the negative log-likelihood loss to MSE loss. This is where I do something wrong: the decoder outputs a set of 5 values and the target is 1 value (using teacher forcing), so obviously I get the error:
RuntimeError: input and target shapes do not match: input [1 x 5], target [1]
Could someone please guide me, on how to fix this? |
st101358 | Hello,
Let’s say I build a standard CNN.
I would have a couple of FC linear layers and maybe some convolutional layers and maybe some max pooling.
If I want this network to have memory, can I just add an LSTM onto it?
Because from the documentation, it looks like the LSTM simply replaces the CNN.
Thanks Matt |
st101359 | I guess this link will provide you a relevant answer:
Measuring GPU tensor operation speed
Hi,
I would like to illustrate the speed of tensor operations on GPU for a course.
The following piece of code:
x = torch.cuda.FloatTensor(10000, 500).normal_()
w = torch.cuda.FloatTensor(200, 500).normal_()
a = time.time()
y = x.mm(w.t())
b = time.time()
print('batch GPU {:.02e}s'.format(b - a))
a = time.time()
y = x.mm(w.t())
b = time.time()
print('batch GPU {:.02e}s'.format(b - a))
prints
batch GPU 1.06e-01s
batch GPU 3.43e-04s
so I presume that there is some “lazy operations” delayed… |
st101360 | Basically the problem I noticed is when using the Conv1d as the first layer I feed the data in the form:
(batch_size, in_channels, size)
And this works fine for training but when I was validating I needed to feed the data one sample at a time (because the validation process is a little complicated) and started getting the error
RuntimeError: input has less dimensions than expected
After a while I figured out that if I pad the input with zeros (make the batch_size = 2 or something) the error goes away. This is kind of annoying and I think should be fixed in the Pytorch source, so I am posting just so that people know about this problem as well as the hack to work around it. |
st101361 | Using a batch size of 1 works for me. What are the settings you used for conv1d when you saw the error? |
st101362 | I want to convert the vgg-m-2048 model, which was pretrained in MatConvNet, to a PyTorch model, and get the network code. Can anyone help me? Thank you! |
st101363 | I am working on a stereo vision task, and I need to load a pair of pictures at a time. But the torchvision transforms behave differently on the two pictures. For example, RandomCrop picks a different crop region for each. Is there an easy way to apply the same transform to a pair of pictures? |
st101364 | You could use the functional API from torchvision.transforms:
import torchvision.transforms.functional as TF
class YourDataset(Dataset):
    def __init__(self):
        self.image_left_paths = ...
        self.image_right_paths = ...

    def __getitem__(self, index):
        image_left = # load image with index from self.left_image_paths
        image_right = # load image with index from self.right_image_paths

        # Resize
        resize = transforms.Resize(size=(520, 520))
        image_left = resize(image_left)
        image_right = resize(image_right)

        # Random crop
        i, j, h, w = transforms.RandomCrop.get_params(
            image_left, output_size=(512, 512))
        image_left = TF.crop(image_left, i, j, h, w)
        image_right = TF.crop(image_right, i, j, h, w)

        # Random horizontal flipping
        if random.random() > 0.5:
            image_left = TF.hflip(image_left)
            image_right = TF.hflip(image_right)

        # Random vertical flipping
        if random.random() > 0.5:
            image_left = TF.vflip(image_left)
            image_right = TF.vflip(image_right)

        image_left = TF.to_tensor(image_left)
        image_right = TF.to_tensor(image_right)

        return image_left, image_right

    def __len__(self):
        return len(self.image_left_paths) |
st101365 | The document at https://caffe2.ai/docs/operators-catalogue.html is too old. A lot of attributes are not on this page.
How can I request an update to this page? |
st101366 | An issue can be opened at https://github.com/pytorch/pytorch/issues 21 for this. |
st101367 | I know how to use tensorboardX to visualize the loss curve. In tensorboardX you can still visualize the loss curve even after training has ended; can visdom support the same function? Could anyone give me some advice? Thank you very much |
st101368 | I’ve done some work on understanding graph and state, and how these are freed on backward calls. Notebook here 336.
There are two questions remaining - the second question is more important.
1) No guarantee that second backward will fail?
x = Variable(torch.ones(2,3), requires_grad=True)
y = x.mean(dim=1).squeeze() + 3 # size (2,)
z = y.pow(2).mean() # size 1
y.backward(torch.ones(2))
z.backward() # should fail! But only fails on second execution
y.backward(torch.ones(2)) # still fine, though we're calling it for the second time
z.backward() # this fails (finally!)
My guess: it’s not guaranteed that an error is raised on the second backward pass through part of the graph. But of course if we need to keep buffers on part of the graph, we have to supply retain_variables=True. Cause buffers could have been freed.
Probably the specific simple operations for y (mean, add) don’t need buffers for backward, while the z=y.pow(2).mean() does need a buffer to store the result of y.pow(2). correct?
2) Using a net twice on the same input Variable makes a new graph with new state?
out = net(inp)
out2 = net(inp) # same input
out.backward(torch.ones(1,1,2,2))
out2.backward(torch.ones(1,1,2,2)) # doesnt fail -> has a different state than the first fw pass?!
Am I right to think that fw-passing the same variable twice constructs a second graph, keeping the state of the first graph around?
The problem I see with this design is that often (during testing, or when you detach() to cut off gradients, or anytime you add an extra operation just for monitoring) there’s just a fw-pass on part of the graph - so is that state then kept around forever and just starts consuming more memory on every new fw-pass of the same variable?
I understand that the volatile flag is probably introduced for this problem and I see it’s used during testing in most example code.
But I think these are some examples where there’s just fw-pass without volatile flag:
fake = netG(noise).detach() to avoid bpropping through netG https://github.com/pytorch/examples/blob/master/dcgan/main.py#L216 42
test on non-volatile variables: https://github.com/pytorch/examples/blob/master/super_resolution/main.py#L74 19
If you finetune only top layers of a feedforward net, bottom layers see only fw-passes
But in general, if I understand this design correctly, this means anytime you have a part of a network which isn’t backpropped through, you need to supply volatile flag? Then when you use that intermediate volatile variable in another part of the network which is backpropped through, you need to re-wrap and turn volatile off?
PS
If there’s interest, I could update & adapt the notebook to your answers, or merge the content into the existing “for torchies” notebook, and submit a PR to the tutorials repo. |
st101369 | Yes. We don’t guarantee that the error will be raised, but if you want to be sure that you can backprop multiple times you need to specify retain_variables=True. It won’t raise an error only for very simple ops like the ones you have here (e.g. grad_input of add is just grad_output, so there’s no need for any buffers, and that’s why it also doesn’t check if they were freed). Not sure if we should add these checks or not. It probably doesn’t matter, as it will raise a clear error, and otherwise will still compute correct gradients.
Yes, when you use the same net with the same input twice, it will construct a new graph, that will share all the leaves, but all other nodes will be exact copies of the first one, with separate state and buffers. Every operation you do on Variables adds one more node, so if you compute a function 4 times, you’ll always have 4x more nodes around (assuming all outputs stay in scope).
Now, here’s some description on when do we keep the state around:
When finetuning a net, all the nodes before the first operation with trained weights won’t even require the gradient, and because of that they won’t keep the buffers around. So no memory wasted in this case.
Test on non-volatile and detaching the outputs will keep the bottom part of the graph around, and it will require grad because the params do, so it will keep the buffers. In both cases it would help if all the generator parameters would have requires_grad set to False for a moment, or a volatile input would be used, and then the flag would be switched off on the generator output. Still, I wouldn’t say that it consumes more memory on every fw-pass - it will just increase the memory usage, but it will be a constant factor, not like a leak. The graph state will get freed as soon as the outputs will go out of scope (unlike Lua, Python uses refcounting).
There’s however one change that we’ll be rolling out soon - variables that don’t require_grad won’t keep a reference to the creator. This won’t help with inference without volatile, and it will still make the generator graph allocate the buffers, but the will be freed as soon as the output is detached. This won’t have any impact on the mem usage, since that memory would be already allocated after the first pass, and it can be reused by the discriminator afterwards.
Anyway, the examples will need to be fixed. Hope this helps, if something’s unclear just let me know. Also, you can read more about the flags in this note in the docs 137. |
st101370 | Thanks for the elaborate answer, the graph going out of scope with the output variable is the essential part I was missing here.
If these fixes 82 are what you had in mind, then I’ll send a PR.
Let me know if you think it’s useful to make the notebook with your answer into a full tutorial, I think these autograd graph/state mechanics are a bit underdocumented atm. Or maybe some explanation could be added to the autograd note in the docs. |
st101371 | Yeah, they look good, only nit is to not put spaces around the equals in volatile=True.
I agree the materials we have right now aren’t very detailed, but we didn’t have a lot of time to expand them. If you have a moment to write that down and submit a notebook or a PR to the notes, I’ll merge them. Thanks! |
st101372 | Thanks, this post is a savior. I had been getting the RuntimeError: Trying to backward through the graph second time, but the buffers have already been freed. in my net, then recreated the exact graph structure in a simplified toy class to diagnose the problem and think of a solution (I know setting retain_variables=True would have done it, but I wanted to overthink a bit), and could not reproduce the problem at all (which gave me quite a headache). I finally understand that for very simple operations the backward pass is not required to fail . |
st101373 | Hi,
I am still a little bit confused regarding the detach() method.
We do fake = netG(noise).detach() to prevent backpropping through netG. Now, for the netG training, we do output = netD(fake). If we had, let’s say,
fake = netG(inputs)
Loss1 = criterion(netD1(fake), real_label)
Loss2 = criterion(netD2(fake), real_label)
Loss3 = criterion(netD3(fake), real_label)
Loss_other = criterion_other(fake, target)
Loss = Loss1 + Loss2 + Loss3 + Loss_other
Loss.backward()
does this create the graphs for each of the netDs? Would it be wrong if I did
Loss1 = criterion(netD1(fake).detach(), real_label)
Loss2 = criterion(netD2(fake).detach(), real_label)
Loss3 = criterion(netD3(fake).detach(), real_label)
Loss_other = criterion_other(fake, target)
Loss = Loss1 + Loss2 + Loss3 + Loss_other
Loss.backward()
to save some memory, since I don’t need to backprop through netD? Will there be any difference in backpropping?
Regards
Nabarun |
st101374 | Hi,
if I understand you correctly, you want to train netD1 with loss1, netD2 with loss2, … and netG with loss_other?
Right now what you do is: you calculate the output of netD1 and then detach this output, and then with the detached output calculate loss1 (so basically the loss between a detached variable and a target), so it will not propagate back to netD1 or netG (since you detached the variable (output) after you passed it through netD1).
What you probably want to do is:
Loss1 = criterion(netD1(fake.detach()), real_label)
Loss2 = criterion(netD2(fake.detach()), real_label)
Loss3 = criterion(netD3(fake.detach()), real_label)
Loss_other = criterion_other(fake, target)
Loss = Loss1 + Loss2 + Loss3 + Loss_other
Loss.backward()
, where you detach the fake, thus loss1, etc can be propagated back through netD1 etc (but still not though netG, if you want to propagate through netG and not netD1 you can try to set .requires_grad=False for all paramters in netD1, but not sure if it will work, since it only works on leaves).
Hope what I just told you is mostly correct and does not confuse you more.
Cheers |
st101375 | Hi,
dzimm:
if I understand you correctly, you want to train netD1 with loss1, netD2 with loss2, … and netG with loss_other?
Actually no, I don’t want to train the netD* at all; all the losses are for training netG, and all netD* are fixed.
The last point of your reply kind of hit the point, I want to propagate through netG without propagating through netD*.
Why I thought it might work is because of this post Freezing parameters
Freezing parameters
Should just be something like
critic_loss = ((reward+(gamma*critic_new.detach())) - critic_old)**2
.requires_grad = False looks promising, but I am not sure either; it would be great if someone could clarify that.
@apaszke any thoughts on this?
Regards
Nabarun |
st101376 | Nabarun,
what you want is fundamentally impossible, you need to backprop through D wrt its inputs because of the chain rule. You don’t need to compute gradients of D wrt its parameters, which is what you avoid by setting requires_grad=False as in the GAN pytorch example code.
Tom |
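A minimal sketch of what Tom describes (freeze D's parameters but still backprop through D with respect to its inputs, so the generator gets gradients; noise and real_label are whatever the training loop already uses):
for p in netD1.parameters():
    p.requires_grad = False           # no gradients accumulated for D's weights

fake = netG(noise)                    # do NOT detach here
loss1 = criterion(netD1(fake), real_label)
loss1.backward()                      # gradients flow back into netG only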
st101377 | Adam, you said that:
apaszke:
Yes, when you use the same net with the same input twice, it will construct a new graph, that will share all the leaves, but all other nodes will be exact copies of the first one, with separate state and buffers.
Is it only valid for the same net with the same input? I am interested in this specific detail because I am trying to implement a RNN unrolling the input sequence in a loop and accumulating the gradient. In my case the input is a new Tensor (taken from the sequence) at every step of the iteration. Would I be able to achieve BPTT in this case? |
st101378 | I’m trying to train a network using the PReLU module, but I get a segfault during the backward pass. Here’s a piece of code that reproduces the bug:
gt = torch.rand(2,3,256,256)
gt = torch.autograd.Variable(gt.cuda(async=True))
input = torch.rand(2,134,256,256)
input = torch.autograd.Variable(input.cuda())
lossL1 = torch.nn.L1Loss()
lossL1 = lossL1.cuda()
net = nn.Sequential(nn.PReLU(), nn.Conv2d(134, 3, kernel_size=1, stride=1, bias=False)).cuda()
output = net(input)
loss = lossL1(output, gt)
loss.backward()
In this example, my network just consists of a PReLU followed by a simple convolution. Note that if I switch the order of the two modules, the segfault doesn’t occur, so it only happens when the PReLU is the first layer.
Also note that if I don’t use the GPU, the segfault doesn’t occur either.
Rem: I tried with PyTorch versions 0.4.0 and 0.4.1. |
st101379 | I could reproduce it also on 0.5.0a0+2c7c12f.
Here is the backtrace:
#0 0x00007fffd42ce067 in THCTensor_nElement () from /home/pbialecki/libs/ptrblck/pytorch/torch/lib/libcaffe2_gpu.so
#1 0x00007fffd3cbaefa in bool THC_pointwiseApply3<float, float, float, THTensor, THTensor, THTensor, PReLUAccGradParametersShared >(THCState*, THTensor*, THTensor*, THTensor*, PReLUAccGradParametersShared const&, TensorArgType, TensorArgType, TensorArgType) ()
from /home/pbialecki/libs/ptrblck/pytorch/torch/lib/libcaffe2_gpu.so
#2 0x00007fffd3c99692 in THNN_CudaPReLU_accGradParameters () from /home/pbialecki/libs/ptrblck/pytorch/torch/lib/libcaffe2_gpu.so
#3 0x00007fffd41f2018 in at::CUDAFloatType::prelu_backward(at::Tensor const&, at::Tensor const&, at::Tensor const&, std::array<bool, 2ul>) const ()
from /home/pbialecki/libs/ptrblck/pytorch/torch/lib/libcaffe2_gpu.so
#4 0x00007fffd1f47b7d in torch::autograd::VariableType::prelu_backward(at::Tensor const&, at::Tensor const&, at::Tensor const&, std::array<bool, 2ul>) const ()
from /home/pbialecki/libs/ptrblck/pytorch/torch/lib/libtorch.so.1
#5 0x00007fffd1e4fa93 in torch::autograd::generated::PreluBackward::apply(std::vector<torch::autograd::Variable, std::allocatortorch::autograd::Variable >&&) ()
from /home/pbialecki/libs/ptrblck/pytorch/torch/lib/libtorch.so.1
#6 0x00007fffd1e0bfcb in torch::autograd::Function::operator()(std::vector<torch::autograd::Variable, std::allocatortorch::autograd::Variable >&&) ()
from /home/pbialecki/libs/ptrblck/pytorch/torch/lib/libtorch.so.1
#7 0x00007fffd1e07291 in torch::autograd::Engine::evaluate_function(torch::autograd::FunctionTask&) ()
from /home/pbialecki/libs/ptrblck/pytorch/torch/lib/libtorch.so.1
#8 0x00007fffd1e07d8b in torch::autograd::Engine::thread_main(torch::autograd::GraphTask*) () from /home/pbialecki/libs/ptrblck/pytorch/torch/lib/libtorch.so.1
#9 0x00007fffd1e045b4 in torch::autograd::Engine::thread_init(int) () from /home/pbialecki/libs/ptrblck/pytorch/torch/lib/libtorch.so.1
#10 0x00007fffe41c5a2a in torch::autograd::python::PythonEngine::thread_init (this=0x7fffe4a88200 , device=0) at torch/csrc/autograd/python_engine.cpp:39
#11 0x00007fffd1651c5c in std::execute_native_thread_routine_compat (__p=)
at /opt/conda/conda-bld/compilers_linux-64_1520532893746/work/.build/src/gcc-7.2.0/libstdc++-v3/src/c++11/thread.cc:110
#12 0x00007ffff7bc16ba in start_thread (arg=0x7fff8df82700) at pthread_create.c:333
#13 0x00007ffff78f741d in clone () at …/sysdeps/unix/sysv/linux/x86_64/clone.S:109
I tried to debug it a bit and it seems the error is thrown if the input does not require gradients.
Code to reproduce the bug:
act = nn.PReLU().to('cuda')
x = torch.randn(1, requires_grad=False, device='cuda')
output = act(x)
output.mean().backward()
print(x.grad)
Setting requires_grad=True for x works.
@tommm994 Could you open a GitHub issue and link to this thread?
If you are busy, let me know and I can do it. |
st101380 | Ok thanks @ptrblck ! I had already opened an issue on github. I’ve now linked it to this thread.
Btw, with pytorch 0.3.1, the bug does not occur. |
st101381 | I’d like to try to make a binary version of the Conv2d operation for an XNOR conv net (and upstream it if it succeeds), and I do not want to write it from scratch. I found that in the functional.py file there is a reference to _ConvNd = torch._C._functions.ConvNd and I do not know where to go next. Dear PyTorch developers, could you please share some CUDA kernels from the internals of your engine? |
st101382 | Our logic for convolution is a little convoluted. It could go through cudnn, or we can run a matrix multiply to do that (there are also probably other cases). The entry point is here: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/Convolution.cpp 635 |
st101383 | @richard I just now realized that I cannot use any of the Winograd/GEMM/FFT algorithms to do XNOR conv2d or matrix multiplication. These algorithms introduce additional additions, so every time I apply, for example, a nested step of Strassen fast matrix multiplication, I leave the {-1, 1} range for a bigger one, {-2, 0, 2}, and so on.
(figure: the Strassen matrix multiplication formulas)
The same goes for Winograd and FFT. So one can only use a naive, textbook implementation of XNOR Conv2d without any Karatsuba-like speedup |
st101384 | check this paper for how they (and others) learn binary kernel weights
https://arxiv.org/abs/1710.07739 |
st101385 | Rana, thank you from the bottom of my heart; I am really interested in binary networks. But my question is about efficient binary convolution kernels (a Conv2d analog for XNOR nets) for fast inference. Do you or your colleagues know which conv2d kernels to use for inference? |
st101386 | In the paper I sent they refer and compare to this work
XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
https://arxiv.org/abs/1603.05279
where during inference they perform forward propagation with the binarized weights |
st101387 | The equivalent function for THCudaBlas_Sgemm in ATen:
THCudaBlas_Sgemm(state, 'n', 'n', n, m, k, 1.0f,
                 THCudaTensor_data(state, columns), n,
                 THCudaTensor_data(state, weight), k, 1.0f,
                 THCudaTensor_data(state, output_n), n); |
st101388 | Hi there,
I have transferred a Keras-based model (with TensorFlow backend) to PyTorch! Actually, it is the pretrained YOLOv2 model. When I forward a test image to check that the network works well, I do not get any warnings (note that the forward path contains BatchNorm layers), but when I move the network to evaluation mode using model.eval(), I get this warning:
RuntimeWarning: overflow encountered in exp
  return 1./(1 + np.exp(-1*inp))
/pytorch_model/yoloUtil.py:143: RuntimeWarning: overflow encountered in exp
  box_wh = np.exp(feature[..., 2:4])
The overflow warning causes the network to fail to predict any bounding boxes. Could you please tell me why these problems happen?
Thanks in advance for any response! |
st101389 | @chenyuntc Thanks for your response!
feats is the final layer output, and inp is the input to the sigmoid function.
def yolo_head(feats, anchors, num_classes):
"""Convert final layer features to bounding box parameters.
Parameters
----------
feats : tensor
Final convolutional layer features.
anchors : array-like
Anchor box widths and heights.
num_classes : int
Number of target classes.
Returns
-------
box_xy : tensor
x, y box predictions adjusted by spatial location in conv layer.
box_wh : tensor
w, h box predictions adjusted by anchors and conv spatial resolution.
box_conf : tensor
Probability estimate for whether each box contains any object.
box_class_pred : tensor
Probability distribution estimate for each box over class labels.
"""
feature = feats.numpy()
feature = feature.transpose(0,2,3,1)
num_anchors = len(anchors)
# Reshape to batch, height, width, num_anchors, box_params.
anchors_tensor = anchors.reshape([1, 1, 1, num_anchors, 2])
conv_dims = feature.shape[1:3]
conv_height_index = np.arange(0, stop=conv_dims[0])
conv_width_index = np.arange(0, stop=conv_dims[1])
conv_height_index = np.tile(conv_height_index, [conv_dims[1]])
conv_width_index = np.tile(
np.expand_dims(conv_width_index, 0), [conv_dims[0], 1])
conv_width_index = (np.transpose(conv_width_index)).flatten()
conv_index = np.transpose(np.stack([conv_height_index, conv_width_index]))
conv_index = np.reshape(conv_index, [1, conv_dims[0], conv_dims[1], 1, 2])
conv_index = conv_index.astype(float)
feature = np.reshape(
feature, [-1, conv_dims[0], conv_dims[1], num_anchors, num_classes + 5])
conv_dims = np.reshape(conv_dims, [1, 1, 1, 1, 2]).astype(float)
box_xy = sigmoid(feature[..., :2])
box_wh = np.exp(feature[..., 2:4])  # <-- here is the location of the warning
box_confidence = sigmoid(feature[..., 4:5])
box_class_probs = np.apply_along_axis(softmax,4,feature[..., 5:])
box_xy = (box_xy + conv_index) / conv_dims
box_wh = box_wh * anchors_tensor / conv_dims
return box_xy, box_wh, box_confidence, box_class_probs
Also, for calculating the softmax and sigmoid along a dimension, I wrote 2 functions as below:
def sigmoid(inp):
return 1./(1 + np.exp(-1*inp))  # <-- here is the other location of the warning
def softmax(x):
"""Compute softmax values for each sets of scores in x."""
e_x = np.exp(x - np.max(x))
return e_x / e_x.sum() |
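As a side note, the "overflow encountered in exp" in the sigmoid comes from evaluating np.exp(-x) for very negative x; a numerically stable variant splits on the sign so that exp is only ever called on non-positive arguments. This only removes the warning inside the sigmoid; the overflow in box_wh = np.exp(...) rather suggests that the raw network outputs themselves are huge, which matches the eval-mode magnitudes shown in the next posts.

import numpy as np

def stable_sigmoid(x):
    out = np.empty_like(x, dtype=np.float64)
    pos = x >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-x[pos]))
    ex = np.exp(x[~pos])               # x < 0 here, so exp(x) cannot overflow
    out[~pos] = ex / (1.0 + ex)
    return out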
st101390 | Without model.eval():
[[[[[ -1.07075989e-01 -4.39112991e-01 -9.40587878e-01 …,
3.13314486e+00 -1.54667079e+00 1.74327826e+00]
[ 1.02364635e+00 -8.85347366e-01 -1.57168433e-01 …,
2.75069904e+00 -1.75123787e+00 1.31727755e+00]
[ 1.76883888e+00 1.31513536e-01 -1.28749418e+00 …,
7.74755001e-01 -1.84538651e+00 2.71459341e+00]
[ 2.91872621e-02 1.59589723e-01 1.53984472e-01 …,
9.13461328e-01 -1.59088635e+00 1.63678157e+00]
[ 3.64127815e-01 8.80579948e-01 2.06380770e-01 …,
-1.29420459e-01 -2.28418446e+00 2.53179502e+00]]
[[ -1.59637094e+00 -2.83793718e-01 -3.61469179e-01 …,
2.42104936e+00 -1.50146878e+00 1.04749036e+00]
[ -2.38844544e-01 4.07841444e-01 -1.25134170e-01 …,
6.22990906e-01 -1.77230883e+00 2.31721014e-01]
[ 1.36048365e+00 -3.81075621e-01 -5.39776325e-01 …,
4.47489262e-01 -1.50398338e+00 -4.20431912e-01]
[ 2.17169657e-01 -1.81535196e+00 1.78491324e-02 …,
-2.13062286e-01 -1.95609546e+00 -6.64336801e-01]
[ 6.03908300e-03 3.47443670e-02 1.22472361e-01 …,
-4.82561469e-01 -2.42117286e+00 5.68968773e-01]]
[[ -1.06827986e+00 7.05153108e-01 -5.03763676e-01 …,
2.00297642e+00 -3.44963861e+00 -1.22004223e+00]
[ -1.68623328e-02 -6.48242235e-03 3.68758440e-01 …,
8.54735672e-01 -4.24609566e+00 -1.54878330e+00]
[ 5.85269928e-01 -7.14257777e-01 -2.85654664e-01 …,
5.52064121e-01 -3.39210773e+00 -1.36829436e+00]
[ 6.22348189e-01 -1.53005385e+00 -2.05970570e-01 …,
-4.38622177e-01 -2.86752367e+00 -9.15135264e-01]
[ 7.27358520e-01 2.65871972e-01 3.32212970e-02 …,
-1.58870578e+00 -3.09723973e+00 1.89628214e-01]]
…,
[[ 5.97730041e-01 2.70410627e-03 -6.01684928e-01 …,
-2.04850030e+00 -2.17200804e+00 9.50788185e-02]
[ -1.24553299e+00 3.14570904e+00 -2.15448156e-01 …,
-3.16923404e+00 -2.56539011e+00 2.00808263e+00]
[ -5.94927192e-01 1.77001309e+00 -4.54834878e-01 …,
-4.24447441e+00 -1.33400464e+00 2.14467502e+00]
[ 3.49495411e-01 1.07096577e+00 -9.58548486e-03 …,
-3.61487818e+00 -1.60971963e+00 4.42153335e-01]
[ 7.03387678e-01 -8.63260806e-01 -1.28527582e-02 …,
-3.03880906e+00 -1.64708936e+00 1.28562307e+00]]
…,
[[[ -9.89431083e-01 4.22683418e-01 -7.39042521e-01 …,
3.44297767e+00 -7.30107903e-01 -4.18882519e-01]
[ -1.73127711e-01 3.01834464e-01 -7.54745424e-01 …,
3.02644777e+00 -1.29641032e+00 -1.14082289e+00]
[ 1.04358292e+00 -9.05704558e-01 -9.51315284e-01 …,
2.23359179e+00 -1.43012857e+00 -9.47165728e-01]
[ -1.36387244e-01 -3.03453594e-01 -2.06832945e-01 …,
1.55085158e+00 -5.74167728e-01 -2.87570894e-01]
[ 8.87480527e-02 -8.30304503e-01 -1.67493373e-01 …,
1.74410915e+00 -6.33536935e-01 1.21541008e-01]]
[[ -6.48639053e-02 -8.53286743e-01 1.64060012e-01 …,
2.03146029e+00 -2.90256917e-01 -3.23312074e-01]
[ -4.62928921e-01 5.12159020e-02 3.26973081e-01 …,
1.73465192e+00 -6.56264246e-01 -5.83739102e-01]
[ -5.54190457e-01 -6.25617266e-01 -1.09632708e-01 …,
7.50372171e-01 -1.14373255e+00 -3.34591568e-01]
[ 5.69369197e-01 2.06014216e-02 -2.42602661e-01 …,
3.47952664e-01 -6.50111437e-01 2.22202986e-02]
[ 1.36280119e-01 -7.56513476e-01 -3.31164837e-01 …,
1.29209614e+00 -9.01850760e-01 3.78811538e-01]]
[[ -1.43173218e-01 7.72010535e-02 -3.40250641e-01 …,
5.43265581e-01 -1.49092317e+00 8.30179453e-03]
[ -2.54103988e-01 -4.35017437e-01 3.43804181e-01 …,
2.34699309e-01 -1.73454559e+00 -5.44458747e-01]
[ -4.06496316e-01 -8.23709130e-01 2.78647333e-01 …,
2.17914969e-01 -1.44878078e+00 -8.31533670e-02]
[ 8.87326360e-01 1.07999064e-01 -2.16917351e-01 …,
-8.14723074e-02 -1.00205600e+00 -3.16833258e-02]
[ 6.76401258e-01 -1.25771821e+00 -4.82933074e-01 …,
1.10465693e+00 -1.15929174e+00 6.48908377e-01]]
[[ 1.57525873e+00 -5.40540695e-01 -1.20892072e+00 …,
1.04007089e+00 -1.59711325e+00 -2.29269075e+00]
[ 7.97020912e-01 5.50970435e-03 -8.81440580e-01 …,
9.70211506e-01 -1.97356272e+00 -1.23058438e+00]
[ -7.06732720e-02 -9.18719172e-01 -9.84150767e-01 …,
3.32275331e-01 -1.64473248e+00 -5.06047189e-01]
[ 7.16224074e-01 -1.74189866e-01 1.74300093e-02 …,
-7.42350161e-01 -8.18396151e-01 -6.83267772e-01]
[ 4.48581547e-01 -2.33436674e-01 -5.44481985e-02 …,
-1.70234203e-01 -8.69583786e-01 -2.86695510e-01]]]
[[[ -1.15510321e+00 -4.45892811e-02 -5.27243614e-01 …,
2.25410175e+00 -6.72602355e-01 -2.73259282e-02]
[ 4.97808754e-01 -1.65608972e-01 -4.31176931e-01 …,
2.60243559e+00 -8.74305367e-01 -6.63845539e-01]
[ 9.31014121e-01 -9.65950966e-01 -9.37161326e-01 …,
2.03862977e+00 -7.79089451e-01 -4.67905223e-01]
[ 1.24568745e-01 -3.15860659e-01 -1.57195643e-01 …,
1.64764893e+00 -4.80436504e-01 -1.00497074e-01]
[ 1.79682627e-01 -5.20490885e-01 -1.43755063e-01 …,
1.53291631e+00 -4.07362163e-01 1.21973202e-01]]
[[ 1.39580086e-01 -2.45647937e-01 2.17374504e-01 …,
2.42638159e+00 4.73639071e-02 3.53267223e-01]
[ -5.22018254e-01 2.38307714e-01 3.52554500e-01 …,
2.10760283e+00 -5.09911597e-01 1.73100770e-01]
[ 2.76583910e-01 -6.26857162e-01 -6.54999986e-02 …,
1.73377800e+00 -6.14430785e-01 2.27044418e-01]
[ 7.48551965e-01 -2.00060338e-01 -3.26280743e-01 …,
1.32619882e+00 -3.96631807e-01 6.05146468e-01]
[ 2.96731710e-01 -2.42293268e-01 -3.06056291e-01 …,
1.86239004e+00 -2.64620543e-01 9.75538135e-01]]
[[ 2.50553995e-01 -7.46472597e-01 -1.48889571e-01 …,
1.16063213e+00 -4.20520544e-01 8.28358293e-01]
[ -8.39579701e-02 4.85758901e-01 5.89942932e-01 …,
9.11077380e-01 -9.60058689e-01 6.15663826e-01]
[ -6.37243390e-01 -3.94274235e-01 2.03115135e-01 …,
1.32119536e+00 -9.88202155e-01 3.89122695e-01]
[ 6.18680477e-01 2.52195656e-01 -1.67415440e-01 …,
8.67859423e-01 -8.90301168e-01 4.64680135e-01]
[ 4.15470600e-01 -1.50768459e-01 -2.57678390e-01 …,
1.78741479e+00 -8.29994082e-01 8.09115171e-01]]
…,
[[ -7.91447163e-02 9.78098512e-02 -2.80131727e-01 …,
-4.89220470e-02 -9.06519771e-01 -5.76316357e-01]
[ 3.48259658e-02 3.20821106e-01 1.95926860e-01 …,
-5.79015970e-01 -1.48590934e+00 -4.38588947e-01]
[ 9.25648510e-01 -3.35408926e-01 1.75966889e-01 …,
-8.52046132e-01 -1.19623518e+00 -4.92681563e-02]
[ 1.72467202e-01 4.47181463e-01 -1.97901145e-01 …,
-1.66926265e+00 -9.42209065e-01 -2.59209812e-01]
[ -1.42895579e-02 -3.33616614e-01 -3.29961449e-01 …,
-1.55633569e+00 -8.91426504e-01 1.31785035e-01]]
[[ -8.25410545e-01 -6.03218675e-02 2.63885319e-01 …,
-4.63727564e-02 -8.88435483e-01 -1.73753405e+00]
[ 4.06429082e-01 1.89955413e-01 1.92851394e-01 …,
6.18646085e-01 -1.83080256e+00 -1.91211462e+00]
[ -3.71373296e-01 -3.79637241e-01 -5.77083044e-02 …,
-6.13297105e-01 -1.49187517e+00 -4.53189075e-01]
[ -2.25246459e-01 1.82925940e-01 -3.60963553e-01 …,
-1.57162523e+00 -1.01040244e+00 -3.90981078e-01]
[ -3.81843835e-01 1.10523179e-02 -2.62450457e-01 …,
-2.04315782e+00 -7.68837094e-01 1.41253054e-01]]
[[ -1.18505210e-02 1.49443650e+00 2.22243965e-01 …,
1.21491241e+00 -6.66923523e-01 -2.62692714e+00]
[ 2.16134638e-01 -1.18031228e+00 -3.02660078e-01 …,
1.80655015e+00 -8.08886588e-01 -1.31399608e+00]
[ 1.94263801e-01 -5.11718750e-01 -8.90758932e-01 …,
-2.15077370e-01 -9.29562092e-01 -1.53487176e-01]
[ -3.84806991e-02 -5.45224905e-01 1.67636156e-01 …,
-1.22038198e+00 -6.96404874e-01 6.83081150e-03]
[ -1.63758278e-01 -3.24006081e-02 1.28940463e-01 …,
-1.59356236e+00 -8.51782918e-01 7.55040944e-01]]]]]
the maximum of the output is: 16.2592
the minimum of the output is: -42.9159
with model.eval():
[[[[[ -3.57405243e+01 -3.40947647e+01 -1.80168018e+01 …,
-1.11552505e+02 -1.47317673e+02 1.15841808e+01]
[ -3.45298996e+01 -8.66607361e+01 -1.56450157e+01 …,
3.38557281e+01 -1.14936005e+02 7.51782608e+00]
[ -5.57367744e+01 3.30688400e+01 -2.05872498e+01 …,
4.67400856e+01 -8.21306000e+01 -3.18207779e+01]
[ -4.24697380e+01 3.34780769e+01 -9.04815388e+00 …,
-9.59447765e+00 -3.47139511e+01 -5.54042931e+01]
[ -4.58841629e+01 1.28552811e+02 -6.47773361e+00 …,
1.00611515e+01 -5.96798134e+01 -8.32996063e+01]]
[[ 2.68651199e+01 -1.63246651e+01 2.43751316e+01 …,
-1.63626968e+02 -2.72747986e+02 -3.68727036e+01]
[ -1.04041290e+01 -9.96350708e+01 -3.00881462e+01 …,
7.78560104e+01 -2.17356903e+02 -4.69987488e+01]
[ -3.31158714e+01 6.19809570e+01 -2.54234257e+01 …,
1.20234375e+02 -1.46192612e+02 -1.04720001e+02]
[ 3.76580048e+00 4.86019516e+01 -5.56028976e+01 …,
1.10028868e+01 -6.48364487e+01 -1.13399048e+02]
[ -1.31446609e+01 2.17165466e+02 -3.61693077e+01 …,
1.67402267e+01 -7.37028275e+01 -1.41293900e+02]]
[[ 3.28933296e+01 -1.87349072e+01 3.23689117e+01 …,
-2.01339417e+02 -3.07486755e+02 -4.66394157e+01]
[ -4.69960098e+01 -1.38499100e+02 -2.74664307e+01 …,
9.80660553e+01 -2.42747482e+02 -6.20588684e+01]
[ -8.69029541e+01 3.85177727e+01 -2.88304672e+01 …,
1.60042236e+02 -1.57490082e+02 -1.33510590e+02]
[ 1.17969360e+01 4.36259995e+01 -5.15230789e+01 …,
3.01130390e+01 -6.94463501e+01 -1.46284805e+02]
[ 5.86700630e+01 2.50478577e+02 -2.46847076e+01 …,
1.78839912e+01 -9.06006470e+01 -1.84669907e+02]]
…,
[[ 1.28096069e+02 -1.08034973e+02 1.00587769e+02 …,
-3.02942017e+02 -5.87585693e+02 -1.85980316e+02]
[ -1.07169044e+02 -1.92016525e+02 -1.07185005e+02 …,
3.07053619e+02 -4.95432129e+02 -1.48049164e+02]
[ -1.84443207e+02 1.46187561e+02 -3.77634125e+01 …,
4.00913208e+02 -3.21964172e+02 -2.51287964e+02]
[ 1.05675522e+02 7.63189240e+01 -5.44517975e+01 …,
2.01354736e+02 -1.78205566e+02 -3.71699127e+02]
[ 3.00891693e+02 2.60920654e+02 -7.02868652e+00 …,
1.77396393e+02 -2.07020905e+02 -4.55916260e+02]]
[[ 1.18607811e+02 -7.71134644e+01 1.03000252e+02 …,
-2.72433044e+02 -4.68416016e+02 -1.63925430e+02]
[ -1.24680687e+02 -1.35840088e+02 -6.77824707e+01 …,
2.23476318e+02 -3.85767761e+02 -8.15217590e+01]
[ -1.51540619e+02 1.24813019e+02 -4.20030098e+01 …,
3.52139038e+02 -2.55510162e+02 -1.79175507e+02]
[ 1.61691925e+02 5.56291580e+01 -4.61462326e+01 …,
1.81345627e+02 -1.42671478e+02 -2.94653198e+02]
[ 3.68817932e+02 1.83100464e+02 -3.00182953e+01 …,
1.79531754e+02 -1.65707153e+02 -3.74365234e+02]]
[[ 1.84909195e+02 -6.60677185e+01 -3.94966850e+01 …,
-2.23886200e+02 -3.13589722e+02 -1.47936615e+02]
[ -6.69442215e+01 -9.53987885e+01 -5.60369072e+01 …,
1.28458511e+02 -2.44234451e+02 -6.48127975e+01]
[ -1.80063736e+02 7.14935608e+01 -7.09763565e+01 …,
2.24825470e+02 -1.83780502e+02 -9.14833069e+01]
[ 5.47513046e+01 5.16017265e+01 -3.16933441e+00 …,
1.03527550e+02 -9.15430298e+01 -1.92175537e+02]
[ 1.57931335e+02 1.00216995e+02 -7.86803436e+00 …,
9.73970795e+01 -1.29235748e+02 -2.48961700e+02]]]
[[ 1.03647446e+02 -4.84287224e+01 4.29009857e+01 …,
-1.61768234e+02 -3.45710693e+02 -4.20045700e+01]
[ -8.61012115e+01 -7.23499603e+01 -4.41467285e-01 …,
1.81409683e+02 -3.13771912e+02 -1.03866043e+01]
[ -1.49256012e+02 1.40624847e+01 -2.46753464e+01 …,
2.51162933e+02 -2.11735046e+02 -1.00225426e+02]
[ 3.79445801e+01 6.66898727e+01 -1.85524902e+01 …,
1.29778748e+02 -1.48592590e+02 -1.96812775e+02]
[ 1.02250710e+02 -2.70945740e+01 7.59958649e+00 …,
8.03603287e+01 -1.37915009e+02 -2.35489746e+02]]
[[ 1.43202881e+02 -5.66237297e+01 -1.49130478e+01 …,
-1.37114746e+02 -2.42718002e+02 -6.97095337e+01]
[ -4.88247223e+01 -6.57657471e+01 -2.17019615e+01 …,
8.71873169e+01 -2.09040985e+02 -8.88125801e+00]
[ -9.23230133e+01 1.68656158e+00 -4.53515472e+01 …,
1.34258362e+02 -1.47102661e+02 -5.58313560e+01]
[ -2.39641571e+00 4.52565727e+01 -4.81935883e+00 …,
5.46552963e+01 -9.88356247e+01 -1.32153473e+02]
[ 3.77416306e+01 -2.55861816e+01 -8.08153534e+00 …,
3.26289711e+01 -9.94700623e+01 -1.55331451e+02]]]
[[[ -2.50377026e+01 -2.83629608e+01 7.04711914e+00 …,
-1.43002823e+02 -2.67520386e+02 1.07822800e+00]
[ -4.81338997e+01 -5.76092148e+00 -5.29888725e+00 …,
8.41980896e+01 -2.71234924e+02 -5.21333351e+01]
[ 4.67211151e+01 -8.86807404e+01 -1.72004662e+01 …,
6.92080460e+01 -1.48268478e+02 -8.12087784e+01]
[ 2.12886467e+01 -2.17681236e+01 2.25733852e+01 …,
8.13394451e+00 -1.31059464e+02 -1.64692322e+02]
[ 1.73150730e+01 -3.44890137e+01 -1.77896500e+00 …,
-3.54678726e+01 -1.00912758e+02 -1.50829758e+02]]
[[ 3.24036713e+01 -4.51581650e+01 6.58598785e+01 …,
-1.38405884e+02 -4.20851318e+02 -2.06448860e+01]
[ -4.55138474e+01 -6.77271500e+01 3.81010962e+00 …,
1.97721069e+02 -4.13602966e+02 -1.30372498e+02]
[ 6.08714828e+01 -8.89726562e+01 -1.72021904e+01 …,
2.11114685e+02 -2.62320099e+02 -2.01788208e+02]
[ 2.00961514e+01 5.09321175e+01 -5.40153770e+01 …,
8.94629059e+01 -1.81328033e+02 -2.61308228e+02]
[ 7.69575043e+01 -2.47528992e+01 -3.47502213e+01 …,
-9.40509605e+00 -1.30564438e+02 -2.40160339e+02]]
[[ 2.93432312e+01 -3.16786194e+00 7.86419296e+01 …,
-1.03176529e+02 -4.66820251e+02 -1.07993469e+01]
[ -5.57636032e+01 -7.17544556e+01 4.51091385e+00 …,
2.86591187e+02 -4.61342773e+02 -1.28324356e+02]
[ -2.46082764e+01 -8.43938751e+01 -1.04389076e+01 …,
3.07387573e+02 -2.92467529e+02 -2.25555603e+02]
[ 1.18098686e+02 7.97620087e+01 -5.05269623e+01 …,
1.61050934e+02 -1.99611679e+02 -2.88568939e+02]
[ 2.16817093e+02 -4.23579407e+01 -1.50130844e+01 …,
6.86450500e+01 -1.53752533e+02 -2.83708527e+02]]
…,
[[ 5.07347946e+01 6.57857513e+00 7.35759277e+01 …,
-1.00078987e+02 -3.23601624e+02 -2.64569664e+00]
[ -6.81458893e+01 -6.78410034e+01 1.08724518e+01 …,
1.83104355e+02 -3.08435303e+02 -3.35540657e+01]
[ -5.84255142e+01 -6.26393051e+01 -6.13491249e+00 …,
2.13446503e+02 -1.86155029e+02 -1.05150665e+02]
[ 5.09105148e+01 2.93376713e+01 -1.09460449e+01 …,
1.29562485e+02 -1.54337769e+02 -1.90460526e+02]
[ 1.05476334e+02 -6.08709869e+01 8.44407654e+00 …,
5.92027092e+01 -1.38292267e+02 -1.92579346e+02]]
[[ 5.85530128e+01 2.92981339e+00 7.23309860e+01 …,
-8.85582123e+01 -2.56360382e+02 -1.72532635e+01]
[ -8.04446182e+01 -5.68215256e+01 1.85313091e+01 …,
1.37098480e+02 -2.36876419e+02 -1.10650196e+01]
[ -1.00497459e+02 -4.35578537e+01 -9.97192860e+00 …,
1.73812210e+02 -1.40929184e+02 -6.65958405e+01]
[ 3.90408859e+01 2.00592747e+01 -3.38714600e+00 …,
1.05563202e+02 -1.23946381e+02 -1.41684906e+02]
[ 7.93414688e+01 -4.68648834e+01 9.23478699e+00 …,
4.81521721e+01 -1.10725357e+02 -1.53245071e+02]]
[[ 7.42268143e+01 -6.93738556e+00 1.75627327e+01 …,
-8.52064590e+01 -1.77411224e+02 -3.63268852e+01]
[ -5.00122452e+01 -2.74321156e+01 5.55675316e+00 …,
5.72885284e+01 -1.54153854e+02 -8.24862385e+00]
[ -7.10742340e+01 -2.71675797e+01 -2.48989258e+01 …,
9.00824585e+01 -9.69757538e+01 -3.72193451e+01]
[ -1.51661301e+00 1.67727280e+01 3.49994659e-01 …,
4.81879349e+01 -8.20349045e+01 -9.47352371e+01]
[ 3.75077133e+01 -3.13303528e+01 -4.80247116e+00 …,
1.94965630e+01 -7.80006332e+01 -9.93223724e+01]]]]]
the maximum of the output is: 4817.28
the minimum of the output is: -13303.7
As you can see above, when the network switches to evaluation mode, the outputs grow by several orders of magnitude. |
st101391 | Can you show how you use Batchnorm in your model?
Are the inputs for the training model and the eval model the same? |
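One way to see why this pattern points at the BatchNorm running statistics: in train mode BatchNorm normalizes with batch statistics and ignores the buffers, so wrong running_mean/running_var only show up after model.eval(). A minimal sketch with a stand-in layer (not the model from this thread):

import torch
import torch.nn as nn

bn = nn.BatchNorm2d(32)            # stand-in for one converted layer
bn.running_var.fill_(1e-6)         # simulate a wrong / tiny moving variance
bn.eval()
x = torch.randn(1, 32, 16, 16)
print(bn(x).abs().max())           # huge values, like the eval dump above
bn.running_var.fill_(1.0)
print(bn(x).abs().max())           # back to a sane scale

If the transferred running_var is tiny, or mean and variance were loaded in the wrong order, or the epsilon does not match, the eval output explodes exactly like the dump above while the train-mode output looks fine.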
st101392 | As I have mentioned before, I have a pretrained model in Keras and I have transferred it to PyTorch. I have never trained the model myself so far, but I want to finetune it in the future. The pretrained model was trained on the MS COCO dataset, and my test image is actually from MS COCO.
Here is the order of the layers and the usage of the BatchNorm layers:
path1 = ["conv2d_1",
"batch_normalization_1",
"leaky_re_lu_1",
"max_pooling2d_1",
"conv2d_2",
"batch_normalization_2",
"leaky_re_lu_2",
"max_pooling2d_2",
"conv2d_3",
"batch_normalization_3",
"leaky_re_lu_3",
"conv2d_4",
"batch_normalization_4",
"leaky_re_lu_4",
"conv2d_5",
"batch_normalization_5",
"leaky_re_lu_5",
"max_pooling2d_3",
"conv2d_6",
"batch_normalization_6",
"leaky_re_lu_6",
"conv2d_7",
"batch_normalization_7",
"leaky_re_lu_7",
"conv2d_8",
"batch_normalization_8",
"leaky_re_lu_8",
"max_pooling2d_4",
"conv2d_9",
"batch_normalization_9",
"leaky_re_lu_9",
"conv2d_10",
"batch_normalization_10",
"leaky_re_lu_10",
"conv2d_11",
"batch_normalization_11",
"leaky_re_lu_11",
"conv2d_12",
"batch_normalization_12",
"leaky_re_lu_12",
"conv2d_13",
"batch_normalization_13",
"leaky_re_lu_13"
]
paralle1 = ["max_pooling2d_5",
"conv2d_14",
"batch_normalization_14",
"leaky_re_lu_14",
"conv2d_15",
"batch_normalization_15",
"leaky_re_lu_15",
"conv2d_16",
"batch_normalization_16",
"leaky_re_lu_16",
"conv2d_17",
"batch_normalization_17",
"leaky_re_lu_17",
"conv2d_18",
"batch_normalization_18",
"leaky_re_lu_18",
"conv2d_19",
"batch_normalization_19",
"leaky_re_lu_19",
"conv2d_20",
"batch_normalization_20",
"leaky_re_lu_20"
]
paralle2 = ["conv2d_21",
"batch_normalization_21",
"leaky_re_lu_21",
"space_to_depth_x2"
]
path2 = [
"conv2d_22",
"batch_normalization_22",
"leaky_re_lu_22",
"conv2d_23"
]
The other thing worth mentioning is that I have transferred the weights from Keras using this code:
def loadWeights(self):
model = load_model(self.modelUrl)
j = json.loads(model.to_json())
for i, layer in enumerate(j['config']['layers']):
ln = layer['name']
l = model.get_layer(name=layer['name'])
if layer['class_name'] != 'Concatenate':
self.lid[ln] = l.input_shape[3]
else:
self.lid[ln] = l.input_shape[0][3]
self.lod[ln] = l.output_shape[3]
w = l.get_weights()
if layer['class_name'] == 'Conv2D':
filter_size = layer['config']['kernel_size'][0]
if filter_size == 3:
self.layers[ln] = nn.Conv2d(self.lid[ln],self.lod[ln],
filter_size,padding=1,stride=1,bias=False)
elif filter_size==1:
self.layers[ln] = nn.Conv2d(self.lid[ln],self.lod[ln],
filter_size,padding=0,stride=1,bias=False)
self.layers[ln].weight.data = torch.from_numpy(w[0].transpose((3,2,0,1)))
elif layer['class_name'] == 'BatchNormalization':
self.layers[ln] = nn.BatchNorm2d(self.lid[ln])
self.layers[ln].weight.data = torch.from_numpy(w[0])
self.layers[ln].bias.data = torch.from_numpy(w[1])
self.layers[ln].running_mean.data = torch.from_numpy(w[2])
self.layers[ln].running_var.data = torch.from_numpy(w[3])
elif layer['class_name'] == 'LeakyReLU':
self.layers[ln] = nn.LeakyReLU(.1)
elif layer['class_name'] == 'MaxPooling2D':
self.layers[ln] = nn.MaxPool2d(2, 2)
elif layer['class_name'] == 'Lambda':
self.layers[ln] = scale_to_depth(2) |
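One detail worth double-checking in this kind of Keras-to-PyTorch BatchNorm transfer is epsilon: Keras BatchNormalization defaults to epsilon=1e-3, while nn.BatchNorm2d defaults to eps=1e-5, and the value is available in the layer config. Below is a hedged variant of the BatchNormalization branch above; it reuses the same variable names as the snippet, so it is a drop-in sketch rather than tested code.

elif layer['class_name'] == 'BatchNormalization':
    eps = layer['config'].get('epsilon', 1e-3)        # Keras default is 1e-3, PyTorch's is 1e-5
    bn = nn.BatchNorm2d(self.lid[ln], eps=eps)
    bn.weight.data = torch.from_numpy(w[0])           # gamma
    bn.bias.data = torch.from_numpy(w[1])             # beta
    bn.running_mean.copy_(torch.from_numpy(w[2]))     # moving_mean
    bn.running_var.copy_(torch.from_numpy(w[3]))      # moving_variance
    self.layers[ln] = bn

Reading epsilon from layer['config'] keeps the eval-mode normalization consistent with what Keras used; the Keras momentum default of 0.99 also corresponds to momentum=0.01 in PyTorch, but that only matters if the model is trained further.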
st101393 | from .model import Darknet19
import torch
import torch.nn as nn
class YoloV2(nn.Module):
"""Yolo version 2; It is an extented version of
yolo v1 capable of detecting 9000 object"""
def __init__(self, modelUrl):
super(YoloV2, self).__init__()
self.modelUrl = modelUrl
self.darknet19 = Darknet19(modelUrl)
self.darknet19.loadWeights()
self.weights = self.darknet19.layers
arch = self.darknet19.arch
self.path1 = self.makeSequence(arch[0])
self.parallel1 = self.makeSequence(arch[1])
self.parallel2 = self.makeSequence(arch[2])
self.path2 = self.makeSequence(arch[3])
def makeSequence(self, arch):
layers = []
for id, name in enumerate(arch):
layers.append(self.weights[name])
return nn.ModuleList(layers)
def forward(self, input):
out = input
for layer in self.path1:
out = layer(out)
out1 = out.clone()
for layer in self.parallel1:
out1 = layer(out1)
out2 = out.clone()
for layer in self.parallel2:
out2 = layer(out2)
final = torch.cat([out2, out1], dim=1)
for layer in self.path2:
final = layer(final)
return final
from keras.models import load_model
import torch
import tensorflow as tf
import json
import torch.nn as nn
import torch.autograd as autograd
import numpy as np
class scale_to_depth(nn.Module):
def __init__(self, block_size=1):
super(scale_to_depth, self).__init__()
self.block_size = block_size
def forward(self, input):
batch_size, in_channels, in_height, in_width = input.size()
channels = in_channels * (self.block_size ** 2)
out_height = int(in_height / self.block_size)
out_width = int(in_width / self.block_size)
input_view = input.contiguous().view(
batch_size, in_channels, self.block_size, self.block_size,
out_height, out_width)
shuffle_out = input_view.permute(0, 1, 4, 2, 5, 3).contiguous()
return shuffle_out.view(batch_size, channels, out_height, out_width)
class Darknet19:
"""This is the model to create the pretrained darknet19"""
def __init__(self, modelUrl):
super(Darknet19, self).__init__()
self.modelUrl = modelUrl
self.layers = {}
self.lid = {}
self.lod = {}
self.lin = {}
self.arch = self.makeArch()
def makeArch(self):
path1 = ["conv2d_1",
"batch_normalization_1",
"leaky_re_lu_1",
"max_pooling2d_1",
"conv2d_2",
"batch_normalization_2",
"leaky_re_lu_2",
"max_pooling2d_2",
"conv2d_3",
"batch_normalization_3",
"leaky_re_lu_3",
"conv2d_4",
"batch_normalization_4",
"leaky_re_lu_4",
"conv2d_5",
"batch_normalization_5",
"leaky_re_lu_5",
"max_pooling2d_3",
"conv2d_6",
"batch_normalization_6",
"leaky_re_lu_6",
"conv2d_7",
"batch_normalization_7",
"leaky_re_lu_7",
"conv2d_8",
"batch_normalization_8",
"leaky_re_lu_8",
"max_pooling2d_4",
"conv2d_9",
"batch_normalization_9",
"leaky_re_lu_9",
"conv2d_10",
"batch_normalization_10",
"leaky_re_lu_10",
"conv2d_11",
"batch_normalization_11",
"leaky_re_lu_11",
"conv2d_12",
"batch_normalization_12",
"leaky_re_lu_12",
"conv2d_13",
"batch_normalization_13",
"leaky_re_lu_13"
]
paralle1 = ["max_pooling2d_5",
"conv2d_14",
"batch_normalization_14",
"leaky_re_lu_14",
"conv2d_15",
"batch_normalization_15",
"leaky_re_lu_15",
"conv2d_16",
"batch_normalization_16",
"leaky_re_lu_16",
"conv2d_17",
"batch_normalization_17",
"leaky_re_lu_17",
"conv2d_18",
"batch_normalization_18",
"leaky_re_lu_18",
"conv2d_19",
"batch_normalization_19",
"leaky_re_lu_19",
"conv2d_20",
"batch_normalization_20",
"leaky_re_lu_20"
]
paralle2 = ["conv2d_21",
"batch_normalization_21",
"leaky_re_lu_21",
"space_to_depth_x2"
]
path2 = [
"conv2d_22",
"batch_normalization_22",
"leaky_re_lu_22",
"conv2d_23"
]
return path1, paralle1, paralle2, path2
def loadWeights(self):
model = load_model(self.modelUrl)
j = json.loads(model.to_json())
for i, layer in enumerate(j['config']['layers']):
ln = layer['name']
l = model.get_layer(name=layer['name'])
if layer['class_name'] != 'Concatenate':
self.lid[ln] = l.input_shape[3]
else:
self.lid[ln] = l.input_shape[0][3]
self.lod[ln] = l.output_shape[3]
w = l.get_weights()
if layer['class_name'] == 'Conv2D':
filter_size = layer['config']['kernel_size'][0]
if filter_size == 3:
self.layers[ln] = nn.Conv2d(self.lid[ln],self.lod[ln],
filter_size,padding=1,stride=1,bias=False)
elif filter_size==1:
self.layers[ln] = nn.Conv2d(self.lid[ln],self.lod[ln],
filter_size,padding=0,stride=1,bias=False)
self.layers[ln].weight.data = torch.from_numpy(w[0].transpose((3,2,0,1)))
elif layer['class_name'] == 'BatchNormalization':
self.layers[ln] = nn.BatchNorm2d(self.lid[ln])
self.layers[ln].weight.data = torch.from_numpy(w[0])
self.layers[ln].bias.data = torch.from_numpy(w[1])
self.layers[ln].running_mean.data = torch.from_numpy(w[2])
self.layers[ln].running_var.data = torch.from_numpy(w[3])
elif layer['class_name'] == 'LeakyReLU':
self.layers[ln] = nn.LeakyReLU(.1)
elif layer['class_name'] == 'MaxPooling2D':
self.layers[ln] = nn.MaxPool2d(2, 2)
elif layer['class_name'] == 'Lambda':
self.layers[ln] = scale_to_depth(2) |
st101394 | Have you solved it? I also have a similar question: all my outputs are the same when I use model.eval(). |
st101395 | I have trained a model for predicting stocks and the feature size is 175. I wonder, is there a way to visualize the feature importance the way sklearn does? |
st101396 | Why not use the same sklearn analysis to check the principal components? See here:
Principal Component Analysis (PCA) implemented with PyTorch, Agnès Mustar, 1 Nov 17
What is PCA? PCA is an algorithm capable of finding patterns in data; it is used to reduce the dimension of the data. If X is a matrix of size (m, n), we want to find an encoding function f such that f(X) = C, where C is a matrix of size (m, l) with … |
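For reference, the core of such a PCA can be written in a few lines of plain PyTorch via the SVD of the centered feature matrix; the sketch below uses random data as a stand-in for the (num_samples, 175) stock features.

import torch

X = torch.randn(1000, 175)                     # stand-in for your feature matrix
X_centered = X - X.mean(dim=0, keepdim=True)
U, S, V = torch.svd(X_centered)                # columns of V are the principal directions
explained = S ** 2 / (X.size(0) - 1)
print((explained / explained.sum())[:10])      # fraction of variance per component
projected = X_centered @ V[:, :10]             # data projected onto the top 10 components

Note that PCA only describes variance in the inputs; for importance with respect to the trained model, inspecting input gradients (saliency) or permuting one feature at a time and measuring the change in loss are common alternatives.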
st101397 | Hi,
I have been trying to implement a Variational Autoencoder in PyTorch. I was wondering if there is some way to initialize the parameters mean and logvar? Thanks in advance. |
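In most VAE implementations mu and logvar are not free parameters but the outputs of two linear heads of the encoder, so "initializing" them means initializing those layers; if they really are standalone nn.Parameter tensors, one can simply pass the desired initial tensor to nn.Parameter. A minimal sketch of the first case, assuming a PyTorch version with the underscore-suffixed nn.init functions (0.4 or later):

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, in_dim=784, hidden=400, latent=20):
        super(Encoder, self).__init__()
        self.fc = nn.Linear(in_dim, hidden)
        self.fc_mu = nn.Linear(hidden, latent)
        self.fc_logvar = nn.Linear(hidden, latent)
        # explicit initialization of the heads that produce mu and logvar
        nn.init.xavier_uniform_(self.fc_mu.weight)
        nn.init.constant_(self.fc_mu.bias, 0.0)
        nn.init.xavier_uniform_(self.fc_logvar.weight)
        nn.init.constant_(self.fc_logvar.bias, 0.0)   # logvar = 0 gives unit variance at the start

    def forward(self, x):
        h = F.relu(self.fc(x))
        return self.fc_mu(h), self.fc_logvar(h)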
st101398 | After training, when I test my model I get the same tensor over and over. Why?
Is it normal that the values are so big?
tensor([[ 1.2056e+16, 5.9767e+15, 2.1085e+15, 5.2158e+16, 7.4808e+15,
1.6452e+15, 1.6628e+16, 1.8721e+16, 9.9798e+14, 2.4065e+15,
1.0812e+16, 1.3105e+16, 7.6940e+14, 9.8767e+14, 7.3642e+15,
4.2856e+15, 1.0966e+16, 1.7450e+16, 2.2951e+15, 1.4780e+16,
-7.3730e+14, 3.9957e+15, 7.9150e+15, 8.4011e+15, 2.0821e+15,
1.1922e+15, 1.4118e+16, 5.7647e+15, 7.5982e+15, 5.0951e+15,
7.6691e+15, 8.4151e+15, 1.2151e+16, 1.0469e+16, 1.8380e+15,
7.6098e+15, -3.2812e+15, 1.9539e+15, 6.2806e+15, -5.4205e+15,
5.5007e+15, -3.2318e+15, 7.6758e+15, -7.1428e+14, 4.4580e+15,
2.5704e+15, 3.0881e+15, -2.6194e+15, 8.4862e+15, -1.2936e+15,
3.7698e+15, 1.9810e+15, 3.3016e+15, 1.6953e+15, 1.7504e+15,
2.4143e+15, 2.5783e+15, 8.6065e+15, 8.2097e+14, 2.9864e+15,
1.7083e+15, -1.1889e+15, -4.0307e+15, -3.3194e+15, -4.6815e+15,
-3.3368e+15, -3.9685e+15, -1.7121e+15, 7.3881e+14, 2.4879e+15,
3.2985e+15, 1.0613e+15, -1.7420e+15, 6.6461e+15, -9.2257e+14,
-4.3156e+15, 7.8808e+15, 5.7687e+15, 4.0117e+15, 5.5298e+13,
9.6740e+15, 5.1829e+15, 4.5807e+15, 2.5647e+15, -2.6201e+15,
4.1714e+15, -9.2970e+13, 2.3725e+15, 4.1076e+15, 5.0096e+13,
-3.7963e+14, -1.0503e+15, 8.4002e+14, 2.0620e+15, 9.3061e+14,
-1.8499e+15, -2.4000e+15, 2.9994e+14, -5.4544e+15, -1.5808e+15,
1.2891e+15, 2.4871e+15, 7.0153e+14, 1.9218e+15, -5.3665e+15,
-4.9205e+15, -4.5869e+15, -6.6558e+15, -6.4367e+15, -6.5210e+15,
-5.2257e+15, 2.0981e+15, -3.0363e+15, -6.1868e+14, -5.8302e+15,
-2.0618e+15, -2.4618e+15, 2.6335e+15, 1.5778e+15, -6.7361e+15,
2.0971e+15, 6.1178e+14, -1.8537e+15, 4.6063e+14, 3.4269e+15,
3.4286e+15, 1.2131e+15, -2.4635e+15, -1.9550e+15, -3.6220e+15,
-3.0985e+15, -8.2022e+14, -2.6846e+15, -2.6584e+15, 1.6548e+15,
3.9198e+15, 2.0738e+15, 1.4412e+15, 3.0269e+15, -2.3843e+15,
-9.4392e+14, -3.8304e+15, -1.0557e+15, -2.4536e+15, -4.0803e+15,
-2.7027e+15, -1.0560e+15, -2.0054e+15, -3.5940e+15, -3.6986e+14,
5.0988e+14, -3.1614e+15, -1.4806e+15, -1.6534e+15, -1.6811e+15,
-4.0643e+15, -2.4029e+15, -3.7630e+15, -2.1271e+15, -2.5981e+15,
-2.5673e+15, -2.1057e+15, -1.3371e+15, 7.5890e+14, 4.1515e+15,
-2.7834e+15, -2.4481e+15, 9.5509e+14, -2.6114e+15, 5.6024e+14,
-1.8457e+15, -2.0567e+15, -2.4415e+15, -7.2374e+14, -7.0101e+14,
-3.8846e+15, -3.2767e+14, 8.6820e+14, 4.5376e+14, -2.8405e+15,
-5.7237e+15, -2.9263e+15, -6.2191e+15, -5.3570e+15, -7.5934e+15,
-5.7062e+15, -6.6455e+15, -5.6229e+15, -1.4619e+15, -3.0077e+15,
-1.0732e+15, -3.8536e+15, -2.5924e+15, -3.5577e+15, -4.2650e+15,
-2.9791e+15, -5.9215e+15, -4.2179e+15]], device='cuda:0') |
st101399 | Since I want to buy a new computer, I wonder how fast the Titan V is when using tensor cores, compared to a 1080 Ti.
A current comparison shows that the Titan V is only about 50% faster than a 1080 Ti:
https://github.com/u39kun/deep-learning-benchmark (Deep Learning Benchmark for comparing the performance of DL frameworks, GPUs, and single vs half precision)
But the Titan V has 110 TFLOPS; I wonder how soon it will reach its full potential (10x faster than a 1080 Ti). |
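Tensor cores only engage for half-precision matrix math (and only with CUDA/cuDNN versions that support them), so the advertised 110 TFLOPS applies to FP16 workloads, not ordinary float32 training. A rough way to see what a given model gains is to time the same forward pass in float32 and in half precision; the sketch below uses a torchvision ResNet-50 purely as an example workload.

import time
import torch
import torchvision.models as models

model = models.resnet50().cuda().eval()
x = torch.randn(32, 3, 224, 224).cuda()

def bench(m, inp, iters=50):
    with torch.no_grad():
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            m(inp)
        torch.cuda.synchronize()
    return (time.time() - start) / iters * 1e3   # ms per iteration

print('fp32: %.1f ms/iter' % bench(model, x))
print('fp16: %.1f ms/iter' % bench(model.half(), x.half()))

In practice the end-to-end speedup is usually far below the raw TFLOPS ratio, because many parts of a network (BatchNorm, element-wise ops, data loading) are not GEMM-bound.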