st116968
|
So cuda tensors have lock that hogwild! cannot be used with them? And cpu tensors don’t have locks? Didn’t know about it.
|
st116969
|
Yes, Hogwild! training is a special lock-free approach to training that exploits some of the benefits of a general-purpose CPU when the time taken for locks has become a bottleneck for certain model training.
Reference: hogwildTR.pdf (people.eecs.berkeley.edu)
|
st116970
|
Yes, I just didn’t know that cuda tensors have locks. Thank you very much!
By the way, which doc says cuda tensors have locks?
|
st116971
|
oh ok. So my understanding is that only simple atomic operations can be done without locks on CUDA, but many parameter-updating operations require some locking structure. So cuda tensors don’t require locks in general, but for your needs they would. Sorry, I should not have written in absolute terms. I would see if you can ask someone from NVIDIA for more information on this, as they may have more details or may have a way to do it, and may also have some docs for you.
|
st116972
|
Ah, I thought all pytorch’s cuda tensors have locks, but just couldn’t find any docs related to it. If not, I’m okay with it. Thank you very much !!
|
st116973
|
Hi,
I’m trying to implement SRGAN in PyTorch. The paper proposes a pixel shuffle layer in order to upscale images. However, when using such a layer, the time per epoch increases drastically, a behavior not observed when running the same code in TensorFlow. It seems that the PyTorch code is not utilizing the GPU correctly, as the volatile GPU utilization oscillates between 10% and 90%; when removing the layer it stays around 98-99%, as it does with regular GANs. Therefore, I’m wondering if the slow behavior is because of my code, or due to PyTorch.
Here is my PyTorch implementation:
def pixel_shuffle_layer(x, r, n_split):
    def PS(x, r):
        # assumes tf ordering
        bs, a, b, c = x.size()
        x = x.contiguous().view(bs, a, b, r, r)
        x = x.permute(0, 1, 2, 4, 3)
        x = torch.chunk(x, a, dim=1)
        x = torch.cat([x_.squeeze() for x_ in x], dim=2)
        x = torch.chunk(x, b, dim=1)
        x = torch.cat([x_.squeeze() for x_ in x], dim=2)
        return x.view(bs, a*r, b*r, 1)
    # put in tf ordering
    x = x.permute(0, 2, 3, 1)
    xc = torch.chunk(x, n_split, dim=3)
    x = torch.cat([PS(x_, r) for x_ in xc], dim=3)
    # put back in th ordering
    x = x.permute(0, 3, 1, 2)
    return x
Here is the tensorflow implementation:
def pixel_shuffle_layer(x, r, n_split):
    def PS(x, r):
        bs, a, b, c = x.get_shape().as_list()
        x = tf.reshape(x, (bs, a, b, r, r))
        x = tf.transpose(x, (0, 1, 2, 4, 3))
        x = tf.split(x, a, 1)
        x = tf.concat([tf.squeeze(x_) for x_ in x], 2)
        x = tf.split(x, b, 1)
        x = tf.concat([tf.squeeze(x_) for x_ in x], 2)
        return tf.reshape(x, (bs, a*r, b*r, 1))
    xc = tf.split(x, n_split, 3)
    return tf.concat([PS(x_, r) for x_ in xc], 3)
Finally, here is how I’m calling the layer in my forward pass :
def forward(self, input):
    val = self.deconv_one(input)
    val = nn.ReLU()(val)
    shortcut = val
    for i in range(self.B):
        mid = val
        val = self.blocks[i](val)
        val = val + mid
    val = self.deconv_a(val)
    if self.bn: val = self.bn_a(val)
    val = val + shortcut
    val = self.deconv_b(val)
    bs, C, H, W = val.size()
    val = pixel_shuffle_layer(val, 2, C / 2**2)
    val = nn.ReLU()(val)
    val = self.deconv_c(val)
    bs, C, H, W = val.size()
    val = pixel_shuffle_layer(val, 2, C / 2**2)
    val = nn.ReLU()(val)
    val = self.deconv_d(val)
    if self.last_nl is not None:
        val = self.last_nl(val)
    return val
Thanks in advance,
Lucas
|
st116974
|
Just in case, there is a pixelshuffle implementation in Pytorch already, did you try that?
http://pytorch.org/docs/master/nn.html#torch.nn.PixelShuffle
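For reference, a minimal sketch of what the built-in layer does (shapes here are made up, not taken from the SRGAN code above): nn.PixelShuffle(r) turns an (N, C*r*r, H, W) tensor into (N, C, H*r, W*r), which is the same rearrangement the custom pixel_shuffle_layer implements by hand.
import torch
import torch.nn as nn

ps = nn.PixelShuffle(2)               # upscale factor r = 2
x = torch.randn(1, 64 * 4, 24, 24)    # hypothetical feature map with C*r*r channels
y = ps(x)
print(y.size())                       # torch.Size([1, 64, 48, 48])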
|
st116975
|
Hi,
I’m trying to implement the gradient penalty in WGANs. From what I understood, once https://github.com/pytorch/pytorch/pull/1643 was merged into master it should be working. I installed from source today, and I’m getting a segmentation fault when I call torch.autograd.grad. That being said, I’ve never used said function before, so maybe I’m doing it wrong. In any case, any help is welcome.
Thanks,
Lucas
(code) https://github.com/pclucas14/WassersteinGAN/blob/master/main.py
|
st116976
|
if you can create a branch with dummy data, so that i can clone and reproduce the segmentation fault, i will take a look.
Thank you for reporting this.
|
st116977
|
Hi Soumith,
Thanks for replying. I got away with it by using someone else’s wgan-gp implementation. However, if you still want to investigate the segmentation fault, let me know and I’ll make a branch.
|
st116978
|
I want to find the definition of the torch._C.CudaFloatTensorBase class, but I didn’t find it in torch/csrc/Module.cpp. Does anyone know where it is? Thanks
|
st116979
|
Hi,
This is slightly convoluted in the sense that it uses macros to generate all types easily. Here is how it works:
The definition of the class is here and is used with macros to generate all tensor types.
For CUDA types, this is generated from here.
This line is interpreted specially by this build script, which replaces it for the CUDA Float type:
#define THC_GENERIC_FILE torch/csrc/generic/Tensor.cpp
#include THC/THCGenerateFloatType.h
This include file is present here and is responsible for setting up all the proper macros so that the original generic file is properly interpreted for this lib and type.
If you want a better understanding of how these macros work in the TH/THC libraries and how they are used to generate all types, you can look at this blogpost that describes it.
|
st116980
|
I recently got the error message ‘*** RuntimeError: cuDNN requires contiguous weight tensor’ when performing a convolution with a filterbank which was previously transposed and made contiguous. This only happens when the size of one of the axes is 1.
To pin the error down I wrote the following example:
import torch

def debug_contiguous(x):
    xt = x.t()
    print(x.__getstate__()[2:])
    print(xt.is_contiguous())
    print(xt.__getstate__()[2:])
    print(xt.contiguous().__getstate__()[2:])
    print(xt.clone().__getstate__()[2:])

debug_contiguous(x=torch.rand(2,3))
debug_contiguous(x=torch.rand(2,1))
which prints
((2, 3), (3, 1)) # prints size, strides
False
((3, 2), (1, 3))
((3, 2), (2, 1))
((3, 2), (2, 1))
((2, 1), (1, 1))
True
((1, 2), (1, 1))
((1, 2), (1, 1)) <-- stride should be (2,1)
((1, 2), (2, 1))
For the tensor of size (2,3) everything works as expected, xt.is_contiguous() is False and xt.contiguous() makes xt contiguous (which is reflected in the new strides (2,1)).
If the tensor has size (2,1) the transposed size is (1,2). Both have the same linearized layout in the memory which may be the reason why xt.is_contiguous() returns True. However, the stride after transposition is (1,1). As far as I understand, calling xt.contiguous() should update the stride to (2,1) which is not the case. A workaround is to call xt.clone() which indeed results in the correct stride.
I did not find the code of .contiguous in github but I guess the missing update of the stride may come from xt.is_contiguous() being True?
|
st116981
|
Hi,
Since the first dimension contains a single element, the value of the stride for this dimension will never be used. It can thus be an arbitrary value. As you can see here in the code for isContiguous, dimensions of size 1 are ignored.
For the cudnn error, this is a false positive error that is now fixed in master.
|
st116982
|
Hi,
I have been trying to classify types of signals based on their values. My input is an N x 32 matrix, where N is the number of samples and 32 is the number of features of the signal.
I am able to implement the CNN code using PyTorch, but the output is always constant (0 for all inputs). Here is the code that I have been using. I am unable to figure out what is going wrong.
Any help is appreciated
from __future__ import division
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv1d(1, 6, 5)
        self.conv2 = nn.Conv1d(6, 16, 5)
        self.fc1 = nn.Linear(384, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 2)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features
def training(train_data, train_labels, test_data, test_labels):
    net = Net()
    x = Variable(torch.Tensor(train_data), requires_grad=False)
    y = Variable(torch.Tensor(train_labels), requires_grad=False)
    x_2 = Variable(torch.Tensor(test_data), requires_grad=False)
    y_2 = Variable(torch.Tensor(test_labels), requires_grad=False)
    # criterion = nn.MSELoss()
    # optimizer = optim.SGD(net.parameters(), lr=0.00001, momentum=0.01)
    optimizer = optim.Adam(net.parameters())
    criterion = nn.MultiLabelSoftMarginLoss()
    new = x.view(-1, 1, 32)
    new_x_2 = x_2.view(-1, 1, 32)
    num_epochs = 10
    for epoch in range(10):
        losses = []
        optimizer.zero_grad()
        outputs = net(new)
        loss = criterion(outputs, y)
        loss.backward()
        optimizer.step()
        losses.append(loss.data.mean())
        print('[%d/%d] Loss: %.3f' % (epoch+1, num_epochs, loss.data.mean()))
    net.eval()
    outputs = net(new_x_2)
    _, predicted = torch.max(outputs.data, 1)
    print(predicted)
    ar = 0
    for i in predicted:
        if i == 1:
            ar = ar + 1
    print(ar)
|
st116983
|
I’m new to PyTorch. I want to know how I can update the weights of some layers while retaining the weights of other layers at different iterations. Can anyone help me out?
|
st116984
|
Suppose I have a matrix x of size m x p, and another matrix y of size m x p. How do I compute the pdist2 (as in MATLAB) of the two matrices, which gives me an m x m matrix where each element d_ij = dist(x_i, y_j), with x_i and y_j the corresponding row vectors? Can anyone give an elegant solution?
|
st116985
|
mp1 = torch.rand(m,p)
mp2 = torch.rand(m,p)
mmp1 = torch.stack([mp1]*m)
mmp2 = torch.stack([mp2]*m).transpose(0,1)
mm = torch.sum((mmp1-mmp2)**2,2).squeeze()
Then mm is the m*m matrix you want, with mm_ij = d(mp1_i, mp2_j)
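If your PyTorch version supports broadcasting, a sketch of a variant that avoids stacking m copies of each matrix (shapes are made up; take .sqrt() at the end if you want actual Euclidean distances rather than squared ones):
import torch

m, p = 4, 3
x = torch.rand(m, p)
y = torch.rand(m, p)

# (m, 1, p) - (1, m, p) broadcasts to (m, m, p); summing over the last
# dimension gives dist2[i, j] = ||x_i - y_j||^2.
dist2 = ((x.unsqueeze(1) - y.unsqueeze(0)) ** 2).sum(2)
print(dist2.size())  # torch.Size([4, 4])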
|
st116986
|
I’ve changed the sine wave generator to use float32 to try to help compatibility:
import math
import numpy as np
import torch
T = 20
L = 1000
N = 100
np.random.seed(2)
x = np.empty((N, L), 'int64')
x[:] = np.array(range(L)) + np.random.randint(-4*T, 4*T, N).reshape(N, 1)
data = np.sin(x / 1.0 / T).astype('float32')
torch.save(data, open('traindata.pt', 'wb'))
I’ve tried .cuda()-ing everything I can think of but always get a type error. Here’s one of the tries.
import torch
import torch.nn as nn
from torch import autograd
import torch.optim as optim
import numpy as np
import matplotlib
matplotlib.use('Agg')

class Variable(autograd.Variable):
    def __init__(self, data, *args, **kwargs):
        data = data.cuda()
        super(Variable, self).__init__(data, *args, **kwargs)

class Sequence(nn.Module):
    def __init__(self):
        super(Sequence, self).__init__()
        self.lstm1 = nn.LSTMCell(1, 51).cuda()
        self.lstm2 = nn.LSTMCell(51, 1).cuda()

    def forward(self, input, future=0):
        input = input.cuda()
        outputs = []
        h_t = Variable(torch.zeros(input.size(0), 51), requires_grad=False)
        c_t = Variable(torch.zeros(input.size(0), 51), requires_grad=False)
        h_t2 = Variable(torch.zeros(input.size(0), 1), requires_grad=False)
        c_t2 = Variable(torch.zeros(input.size(0), 1), requires_grad=False)
        for i, input_t in enumerate(input.chunk(input.size(1), dim=1)):
            h_t, c_t = self.lstm1(input_t, (h_t, c_t))
            h_t2, c_t2 = self.lstm2(c_t, (h_t2, c_t2))
            outputs += [c_t2]
        for i in range(future):  # if we should predict the future
            h_t, c_t = self.lstm1(c_t2, (h_t, c_t))
            h_t2, c_t2 = self.lstm2(c_t, (h_t2, c_t2))
            outputs += [c_t2]
        outputs = torch.stack(outputs, 1).squeeze(2)
        return outputs

if __name__ == '__main__':
    # set random seed to 0
    np.random.seed(0)
    torch.manual_seed(0)
    # load data and make training set
    data = torch.load('traindata.pt')
    input = Variable(torch.from_numpy(data[3:, :-1]), requires_grad=False)
    input.cuda()
    print(type(input))
    target = Variable(torch.from_numpy(data[3:, 1:]), requires_grad=False).cuda()
    # build the model
    seq = Sequence()
    #seq.double()
    seq.cuda()
    criterion = nn.MSELoss()
    # use LBFGS as optimizer since we can load the whole data to train
    optimizer = optim.LBFGS(seq.parameters())
    # begin to train
    for i in range(10):
        print('STEP: ', i)
        def closure():
            optimizer.zero_grad()
            out = seq(input).cuda()
            loss = criterion(out, target)
            print('loss:', loss.data.numpy()[0])
            loss.backward()
            return loss
        optimizer.step(closure)
    # begin to predict
    future = 1000
    pred = seq(input[:3], future=future)
    y = pred.data.numpy()
The error is at optimizer.step(closure)
What’s missing?
|
st116987
|
@smth Thanks for following up.
Here’s the trace:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-34-a3ea2c1c18d8> in <module>()
68 loss.backward()
69 return loss
---> 70 optimizer.step(closure)
71 # begin to predict
72 future = 1000
/usr/local/lib/python3.5/dist-packages/torch/optim/lbfgs.py in step(self, closure)
98
99 # evaluate initial f(x) and df/dx
--> 100 orig_loss = closure()
101 loss = orig_loss.data[0]
102 current_evals = 1
<ipython-input-34-a3ea2c1c18d8> in closure()
63 def closure():
64 optimizer.zero_grad()
---> 65 out = seq(input).cuda()
66 loss = criterion(out, target)
67 print('loss:', loss.data.numpy()[0])
/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
204
205 def __call__(self, *input, **kwargs):
--> 206 result = self.forward(*input, **kwargs)
207 for hook in self._forward_hooks.values():
208 hook_result = hook(self, input, result)
<ipython-input-34-a3ea2c1c18d8> in forward(self, input, future)
27
28 for i, input_t in enumerate(input.chunk(input.size(1), dim=1)):
---> 29 h_t, c_t = self.lstm1(input_t, (h_t, c_t))
30 h_t2, c_t2 = self.lstm2(c_t, (h_t2, c_t2))
31 outputs += [c_t2]
/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
204
205 def __call__(self, *input, **kwargs):
--> 206 result = self.forward(*input, **kwargs)
207 for hook in self._forward_hooks.values():
208 hook_result = hook(self, input, result)
/usr/local/lib/python3.5/dist-packages/torch/nn/modules/rnn.py in forward(self, input, hx)
497 input, hx,
498 self.weight_ih, self.weight_hh,
--> 499 self.bias_ih, self.bias_hh,
500 )
501
/usr/local/lib/python3.5/dist-packages/torch/nn/_functions/rnn.py in LSTMCell(input, hidden, w_ih, w_hh, b_ih, b_hh)
23 if input.is_cuda:
24 igates = F.linear(input, w_ih)
---> 25 hgates = F.linear(hidden[0], w_hh)
26 state = fusedBackend.LSTMFused()
27 return state(igates, hgates, hidden[1]) if b_ih is None else state(igates, hgates, hidden[1], b_ih, b_hh)
/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
447 def linear(input, weight, bias=None):
448 state = _functions.linear.Linear()
--> 449 return state(input, weight) if bias is None else state(input, weight, bias)
450
451
/usr/local/lib/python3.5/dist-packages/torch/nn/_functions/linear.py in forward(self, input, weight, bias)
8 self.save_for_backward(input, weight, bias)
9 output = input.new(input.size(0), weight.size(0))
---> 10 output.addmm_(0, 1, input, weight.t())
11 if bias is not None:
12 # cuBLAS doesn't support 0 strides in sger, so we can't use expand
TypeError: addmm_ received an invalid combination of arguments - got (int, int, torch.FloatTensor, torch.cuda.FloatTensor), but expected one of:
* (torch.FloatTensor mat1, torch.FloatTensor mat2)
* (torch.SparseFloatTensor mat1, torch.FloatTensor mat2)
* (float beta, torch.FloatTensor mat1, torch.FloatTensor mat2)
* (float alpha, torch.FloatTensor mat1, torch.FloatTensor mat2)
* (float beta, torch.SparseFloatTensor mat1, torch.FloatTensor mat2)
* (float alpha, torch.SparseFloatTensor mat1, torch.FloatTensor mat2)
* (float beta, float alpha, torch.FloatTensor mat1, torch.FloatTensor mat2)
didn't match because some of the arguments have invalid types: (int, int, torch.FloatTensor, torch.cuda.FloatTensor)
* (float beta, float alpha, torch.SparseFloatTensor mat1, torch.FloatTensor mat2)
didn't match because some of the arguments have invalid types: (int, int, torch.FloatTensor, torch.cuda.FloatTensor)
|
st116988
|
Brian_Mullen:
didn’t match because some of the arguments have invalid types: (int, int, torch.FloatTensor, torch.cuda.FloatTensor)
Your model is not transferred to CUDA maybe. Try adding the line: model.cuda()
|
st116989
|
I see the exact issue now. Your subclassing of autograd.Variable is wrong. Replace it instead with this if you want:
def Variable(data, *args, **kwargs):
    return autograd.Variable(data.cuda(), *args, **kwargs)
Here’s a working example:
import torch
import torch.nn as nn
from torch import autograd
import torch.optim as optim
import numpy as np
import matplotlib
matplotlib.use('Agg')

def Variable(data, *args, **kwargs):
    return autograd.Variable(data.cuda(), *args, **kwargs)

class Sequence(nn.Module):
    def __init__(self):
        super(Sequence, self).__init__()
        self.lstm1 = nn.LSTMCell(1, 51).cuda()
        self.lstm2 = nn.LSTMCell(51, 1).cuda()

    def forward(self, input, future=0):
        input = input.cuda()
        outputs = []
        h_t = Variable(torch.zeros(input.size(0), 51), requires_grad=False)
        c_t = Variable(torch.zeros(input.size(0), 51), requires_grad=False)
        h_t2 = Variable(torch.zeros(input.size(0), 1), requires_grad=False)
        c_t2 = Variable(torch.zeros(input.size(0), 1), requires_grad=False)
        for i, input_t in enumerate(input.chunk(input.size(1), dim=1)):
            h_t, c_t = self.lstm1(input_t, (h_t, c_t))
            h_t2, c_t2 = self.lstm2(c_t, (h_t2, c_t2))
            outputs += [c_t2]
        for i in range(future):  # if we should predict the future
            h_t, c_t = self.lstm1(c_t2, (h_t, c_t))
            h_t2, c_t2 = self.lstm2(c_t, (h_t2, c_t2))
            outputs += [c_t2]
        outputs = torch.stack(outputs, 1).squeeze(2)
        return outputs

if __name__ == '__main__':
    # set random seed to 0
    np.random.seed(0)
    torch.manual_seed(0)
    # load data and make training set
    data = torch.load('traindata.pt')
    input = Variable(torch.from_numpy(data[3:, :-1]), requires_grad=False)
    target = Variable(torch.from_numpy(data[3:, 1:]), requires_grad=False).cuda()
    # build the model
    seq = Sequence()
    #seq.double()
    seq.cuda()
    criterion = nn.MSELoss()
    # use LBFGS as optimizer since we can load the whole data to train
    optimizer = optim.LBFGS(seq.parameters())
    # begin to train
    for i in range(10):
        print('STEP: ', i)
        def closure():
            optimizer.zero_grad()
            out = seq(input).cuda()
            loss = criterion(out, target)
            print('loss:', loss.data.cpu().numpy()[0])
            loss.backward()
            return loss
        optimizer.step(closure)
    # begin to predict
    future = 1000
    pred = seq(input[:3], future=future)
    y = pred.data.numpy()
|
st116990
|
Thank you so much for your help. Small note, it needed one more .cpu() in the last line
smth:
y = pred.data.numpy()
Needs to be:
y = pred.data.cpu().numpy()
|
st116991
|
I wrote a custom CUDA module, following the provided example: https://github.com/pytorch/extension-ffi
So far, my modules have worked flawlessly. Note that unlike the example, I am using custom CUDA kernels. However, I just wanted to start training my model on multiple GPUs and ran into ‘illegal memory access’ exceptions in the backward call. Since the custom CUDA module works perfectly fine on a single GPU, and since the exact same CUDA code works perfectly fine with Torch on multiple GPUs, I suspect that something in my PyTorch-related Python code is wrong:
class CustomFunction(torch.autograd.Function):
    def __init__(self):
        super(CustomFunction, self).__init__()
    # end

    def forward(self, input1, input2, input3):
        self.save_for_backward(input1, input2, input3)
        assert(input1.is_contiguous() == True)
        assert(input2.is_contiguous() == True)
        assert(input3.is_contiguous() == True)
        output = input1.new(...).zero_()
        # call the C portion, which calls the CUDA portion, which calls the forward kernel
        return output
    # end

    def backward(self, gradOutput):
        input1, input2, input3 = self.saved_tensors
        assert(gradOutput.is_contiguous() == True)
        gradInput1 = input1.new(...).zero_()
        gradInput2 = input1.new(...).zero_()
        gradInput3 = input1.new(...).zero_()
        # call the C portion, which calls the CUDA portion, which calls the backward kernel
        return gradInput1, gradInput2, gradInput3
    # end
# end
When the call to the CUDA kernel in the backward function is removed, the ‘illegal memory access’ error does not appear but the model does of course not train correctly due to the missing gradients. I thus suspect some issue with the memory allocation and have already tried using gradOutput.new instead of input1.new in the backward function. I likewise used .get_device() to make sure that all the tensors in a call of the backward function are on the same GPU and can confirm that they are.
I normally am very reluctant to ask for help but am afraid that I might be missing something fundamental here. Is anyone able to provide any insight here? Thank you very much for making PyTorch happen by the way, it so far has been a joy to work with!
|
st116992
|
After tinkering with my code for a while, I noticed that changing from CUDA_VISIBLE_DEVICES="1,2" to CUDA_VISIBLE_DEVICES="0,1" makes everything work without any issues. Why does GPU 0 have to be included in the visible devices when using a custom CUDA module? What am I missing? Thanks!
|
st116993
|
this is kinda weird, and shouldn’t happen. Is there anyway I can debug this further? Can you open an issue on github.com/pytorch/pytorch 4 with a reproduction code of the problem?
|
st116994
|
Thank you for your message Soumith! I just created a repository that can potentially also serve as a reference for other people who are trying to create a CUDA extension: https://github.com/sniklaus/pytorch-extension 9
This extension simply computes the Hadamard product using a custom CUDA kernel. Everything works well if it is just being executed on a single graphics card but once more than one GPU is being used, the configuration seems to affect the execution. Please see the test.py file for more details.
|
st116995
|
Hey Simon,
I’ve run your code, and it’s not failing for me. Also it always computes 0.0. Is this unexpected:
CUDA_VISIBLE_DEVICES="1,3" python test.py
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
switching to DataParallel mode
for me, this works with
export CUDA_VISIBLE_DEVICES="0"
export CUDA_VISIBLE_DEVICES="1"
export CUDA_VISIBLE_DEVICES="2"
export CUDA_VISIBLE_DEVICES="3"
export CUDA_VISIBLE_DEVICES="0,1"
export CUDA_VISIBLE_DEVICES="2,3"
and fails with many others like
export CUDA_VISIBLE_DEVICES="0,2"
export CUDA_VISIBLE_DEVICES="0,3"
export CUDA_VISIBLE_DEVICES="1,2"
export CUDA_VISIBLE_DEVICES="1,3"
export CUDA_VISIBLE_DEVICES="0,1,2,3"
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
|
st116996
|
Thank you for spending your Sunday with trying this out! I have also noticed that you looked into many other topics, thank you Soumith! I am honestly amazed by how often I come across your work: papers, posts, repositories, talks. I would love to talk to you in person at some point. I will be in Hawaii for CVPR at the end of the month, let me know should you be around and want to grab shaved ice 5.
I just tried running it on a machine different from my main server and noticed that it works there without any issues as well. This is quite surprising since they only differ in the used graphics cards (the remaining hardware is identical, they even use the same motherboard). My main server has 4x Titan X (Maxwell) while the other machine I just tried has 4x Titan X (Pascal). They also both run the same version of PyTorch / CUDA / NVIDIA drivers. I will investigate this further and report back once I have a better idea of what’s going on. Thanks again!
|
st116997
|
I just had the chance to use a machine with the same hardware as my main server and it likewise does not cause the error. I doubt that the issue is related to PyTorch and I will post an update should I ever find out what is causing this behavior. If somebody else is able to provide more input, I would be happy to hear it though.
|
st116998
|
I tried to wrap autograd.Variable to send the data to the GPU every time I construct a Variable:
class Variable(autograd.Variable):
    def __init___(self, data, *args, **kwargs):
        data = data.cuda()
        super(Variable, self).__init___(data, *args, **kwargs)

a = torch.randn(1,1)
print(a)
print(Variable(a))
print(Variable(a.cuda()))
However, I got the output as follows:
-0.2344
[torch.FloatTensor of size 1x1]
Variable containing:
-0.2344
[torch.FloatTensor of size 1x1]
Variable containing:
-0.2344
[torch.cuda.FloatTensor of size 1x1 (GPU 0)]
I expect Variable(a) would return torch.cuda.FloatTensor, but I got torch.FloatTensor.
Does anyone get the same problem?
Thank you!
|
st116999
|
Here’s a properly working wrapper:
def Variable(data, *args, **kwargs):
    return autograd.Variable(data.cuda(), *args, **kwargs)
|
st117000
|
Hi, I am mystified by the effect of batch size on the speed per iteration. I am using torchtext.data.Iterator to generate batches. If I change the batch size from 359 to 10770, the whole forward-backward pass for a batch will change from 60 seconds to 2 seconds. Is there a reason why a smaller batch size needs so much longer for one round of parameter update? I am only talking about 1 batch here, not the whole dataset.
|
st117001
|
The weird slowdown went away after I made the following update. There was something weird with cuda tensors. If I access a cuda tensor in a variable, the program gets really slow. A simple print(x.data) takes half a minute when it was very small. I couldn’t pin down what exactly was the problem, but the new version solved it.
pytorch: 0.1.12-py36_2cu80 soumith [cuda80] --> 0.1.12-py361_2cu80 soumith [cuda80]
|
st117002
|
I thought it was fixed, but I installed the latest dev version and it came back. The system would randomly hang for a few seconds. It seemed to happen after a optimizer step and when some CUDA operation is done. I will try to make a minimal case to repro this behavior.
|
st117003
|
The following script is a minimal example that would give different performance with different PyTorch releases. I attached the profile flowcharts with this thread. The two versions are the current master and the one from anaconda -c soumith. Both versions need a few seconds for the first forward call between ‘1’ and ‘2’ in the code, but the current dev version spends a good half a minute between ‘4’ and ‘5’ for every epoch. In two systems that I observed this, they are both Linux with CUDA 8.0.44 and Python 3.6.1.
The profile charts:
Anaconda version: https://osu.box.com/s/i3154wrjhplw7z9uzf4h9ojiwsftrbgm 3
Master version: https://osu.box.com/s/poa8b66o6w7c7bdq7aoi0yrvy8z95uvt
The weird thing is that it seems the slowdown is caused by print but it isn’t. If the print statement is changed to something else, say a = y.data == 3, then the slowdown will move to the next operation related to CUDA. In this script, it will move to x.cuda(). I also observe that the GPU is super busy after the backward call, which may have caused the slowdown, but I can’t say what is causing the GPU to be completely occupied for half a minute as no other program is running on it.
from torch import nn
from torch.autograd import Variable
import torch
import torch.nn.functional as F
import numpy as np

class CNN_Text(nn.Module):
    def __init__(self):
        super(CNN_Text, self).__init__()
        V = 2000
        D = 300
        Co = 300
        Ks = [3, 4, 5]
        C = 359
        Ci = 1
        self.embed = nn.Embedding(V, D, padding_idx=1)
        self.convs1 = nn.ModuleList([nn.Conv2d(Ci, Co, (K, D)) for K in Ks])
        self.dropout = nn.Dropout(0.5)
        self.fc1 = nn.Linear(len(Ks) * Co, C)

    def forward(self, x):
        x = self.confidence(x)
        logit = F.log_softmax(x)  # (N,C)
        return logit

    def confidence(self, x):
        x = self.embed(x)  # (N,W,D)
        x = x.unsqueeze(1)  # (N,Ci,W,D)
        x = [F.relu(conv(x)).squeeze(3) for conv in self.convs1]  # [(N,Co,W), ...]*len(Ks)
        x = [F.max_pool1d(i, i.size(2)).squeeze(2) for i in x]  # [(N,Co), ...]*len(Ks)
        x = torch.cat(x, 1)
        x = self.dropout(x)  # (N,len(Ks)*Co)
        linear_out = self.fc1(x)
        return linear_out

cnn = CNN_Text()
cnn.cuda()

for i in range(10):
    x = np.random.randint(0, 1500, (359, 15))
    x[:, 0:5] = 1
    x[:, -3:] = 1
    y = np.random.randint(0, 359, (359,))
    x = torch.from_numpy(x)
    y = torch.from_numpy(y)
    print('1')
    x = Variable(x.cuda())
    y = Variable(y.cuda())
    x = F.log_softmax(cnn(x))
    print('2')
    loss = F.nll_loss(x, y)
    print('3')
    loss.backward()
    print('4')
    print(y.data)
    print('5')
|
st117004
|
Also the anaconda version has quite a few places different from the documentation (norm does not support keepdim, there is no matmul etc.). Hopefully it will be updated soon.
|
st117005
|
whenever you add benchmarks with CUDA, you need to make sure you call torch.cuda.synchronize() right before collecting the time.
For example:
x = x.cuda()
print('1')
y = x ** 2
torch.cuda.synchronize()
print('2')
In your case, add torch.cuda.synchronize() right before print(‘2’) and print(‘3’) and print(‘4’)
|
st117006
|
Yes, you are right. I have added torch.cuda.synchronize() to the example and now the majority of time spent in this case is at the backward call. The torch_C._cuda_synchronize is the most time consuming operation, spending 469 seconds. So the weirdness of print taking a long time is gone, but why is backward so slow with the dev version?
The other version works just fine. The flowchart looks like the one posted. Nothing interesting.
|
st117007
|
I implemented GloVe with PyTorch: https://gist.github.com/MatthieuBizien/de26a7a2663f00ca16d8d2558815e9a6
I have a focus on speed on the GPU:
No big data transfer between the CPU and the GPU during training
All embeddings are stored in a matrix
Large batch size
I train on the newsgroup dataset in 15s with a GTX 1080. The original implementation, https://github.com/2014mchidamb/TorchGlove, needs multiple hours.
Feedbacks welcome!
|
st117008
|
Hi,
I am trying to validate my install. I have a GTX 1080, CUDA 8.0 + CuDNN 5.1.
With my setup, your script took 18 secs/epoch.
Is the 15 secs a per-epoch time or the time for the 10 epochs?
Thanks!
|
st117009
|
I need to create a new model by loading selected VGG filters.
I am using the following function inside the VGG(nn.Module) class (defined in the torchvision example).
Here weights and bias are two dictionaries holding selected filters from VGG.
def _initialize_weights(self):
    v = list(self.features)
    for i in range(len(v)):
        if i in [0,2,5,7,14,17,10,12,24,26,19,21,28]:  # left last layer 28. Load original weights
            if isinstance(i, nn.Conv2d):
                w = self.weights['layer_'+str(i)]
                w = torch.nn.parameter.Parameter(torch.from_numpy(np.array(w)))
                b = self.bias['layer_'+str(i)]
                b = torch.nn.parameter.Parameter(torch.from_numpy(np.array(b)))
                self.features[i].weight = w
                self.features[i].bias = b
But I am getting weird results. The output class obtained for the same image changes every time the code is run.
|
st117010
|
did you put the model in .eval() mode? otherwise dropout in VGG will give you different output class for same image.
|
st117011
|
Hello. I want to transfer the weights of the first ‘k’ layers (where k can be any number less than the number of layers of the model) into another model. One way is to make a new model with its layers being the same as the first k layers of my original model and then transfer the weights. But let us assume I do not know the structure of the pre-trained model. Then how can I do the same? Please guide me.
|
st117012
|
you have to know the structure of the pre-trained model. In PyTorch model = code
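As a rough sketch of the "model = code" point: if you rebuild (or already have) a model whose first k submodules match the pretrained one, you can copy just those layers over via their state_dicts. The two models below are hypothetical stand-ins, not the poster's networks.
import torch.nn as nn

pretrained = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 5))
new_model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 7))

k = 2  # number of leading submodules whose weights we want to transfer
for src, dst in zip(list(pretrained.children())[:k], list(new_model.children())[:k]):
    dst.load_state_dict(src.state_dict())  # copies that layer's weight and bias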
|
st117013
|
Using the advanced indexing currently merged in master, is it possible to do something like this to crop tensors in a batch:
x = torch.rand(2,3,4,4) #BSxCxHxW
y = x[[0,1],[0,1,2],[0:2, 2:4],[0:2, 2:4]]
# OR
y = x[[0,1],[0,1,2],[0, 2]:[2, 4],[0, 2]:[2, 4]]
which should give the top left patch of first image and bottom right patch of second image. The statements above give errors, but is there some other way to achieve this?
Thanks
|
st117014
|
At the moment, Trevor has finished implementing cases like:
y = x[[0,1],[0,1,2],:, :]
Slicing with anything other than a full : on the non-advanced-index dimensions is not yet done.
|
st117015
|
I used net.cuda() and Variable.cuda() in my code, and no other .cuda() appears. I fine-tuned ResNet-50 on CIFAR-10, and I only changed the strides of the net to fit the 32x32 images. The batch size I used is 128. The GPU is an NVIDIA TITAN X. I found that it only did 4 iterations in 10 minutes. I wonder whether that is a little slow? And I want to know whether the .cuda() calls I used are enough?
Thanks!!!
|
st117016
|
the .cuda() you used is enough, but it is better to do:
Variable(x.cuda()), where x is the input data, rather than Variable(x).cuda()
To check whether GPU is used or not, you can run the command nvidia-smi and look at output.
|
st117017
|
As far as I can tell from the RNN examples, I need to provide initial hidden values explicitly. Is that just a convention, or is it needed for step-wise processing? Otherwise, is it okay to provide only the batch sequence data, not touch the hidden values, and use the default settings?
|
st117018
|
you can use the default settings of all-zeros hidden states. the examples just try to make it explicit.
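A minimal sketch of relying on the default (all-zero) hidden state, assuming an LSTM with made-up sizes:
import torch
import torch.nn as nn
from torch.autograd import Variable

rnn = nn.LSTM(input_size=10, hidden_size=20, num_layers=1)
x = Variable(torch.randn(5, 3, 10))   # (seq_len, batch, input_size)

# No hx argument: the hidden and cell states are initialized to zeros internally.
out, (h_n, c_n) = rnn(x)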
|
st117019
|
I know I could use the named_parameters() to do that.
However, when I write a simple test, I face a bug.
import torch
import torch.nn as nn
import torch.optim as optim

if __name__ == '__main__':
    module = nn.Sequential(nn.Linear(2,3), nn.Linear(3,2))
    params_dict = dict(module.named_parameters())
    params = []
    for key, value in params_dict.items():
        if key[-4:] == 'bias':
            params += [{'params': value, 'lr': 0.0}]
        else:
            params += [{'params': value, 'lr': 0.1}]
    op = optim.SGD(params, momentum=0.9)
The error information:
Traceback (most recent call last):
File "test_lr.py", line 15, in <module>
op = optim.SGD(params, momentum=0.9)
File "/home/v-yawan1/anaconda2/lib/python2.7/site-packages/torch/optim/sgd.py", line 56, in __init__
super(SGD, self).__init__(params, defaults)
File "/home/v-yawan1/anaconda2/lib/python2.7/site-packages/torch/optim/optimizer.py", line 61, in __init__
raise ValueError("can't optimize a non-leaf Variable")
ValueError: can't optimize a non-leaf Variable
|
st117020
|
this actually seems like a bug. I’m investigating / isolating it. Let me get back to you on this and sorry about that.
|
st117021
|
This was fixed on master via https://github.com/pytorch/pytorch/commit/a76098ac1532d5e9ee24b4776258ae731627f8e3
and will be fixed in the next release.
For now your workaround is:
import torch.nn as nn
import torch.optim as optim

if __name__ == '__main__':
    module = nn.Sequential(nn.Linear(2,3), nn.Linear(3,2))
    params_dict = dict(module.named_parameters())
    params = []
    for key, value in params_dict.items():
        if key[-4:] == 'bias':
            params += [{'params': [value], 'lr': 0.0}]
        else:
            params += [{'params': [value], 'lr': 0.1}]
    op = optim.SGD(params, momentum=0.9)
|
st117022
|
I ran into a problem that I think is related.
I have a CNN which I created using nn.ModuleList:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.modules = list()
        self.modules.append(nn.Conv2d(1, 32, (3, 3), (1, 1), (1, 1)))
        self.modules.append(nn.PReLU(32))
        # etc.: keep adding some modules, the last one being No. 14
        self.model = nn.ModuleList(self.modules)
Then, in my main file, when I create the optimizer, I write the following:
model = Net()
model = model.cuda()
optimizer = optim.Adam([{'params': model.modules[14].parameters(), 'lr': 0.1*opt.lr}], lr=opt.lr, weight_decay=1e-5)
So by this line I thought that I would have the learning rate equal to opt.lr for all the modules, EXCEPT module #14, for which I want a 10 times lower learning rate. However, I get totally weird convergence. To check whether it had something to do with the initialization, I removed the “0.1” in the optimizer initialization, making it basically have the same lr as every other module in my Net. BUT! The same weird convergence holds, much slower vs. if I just do a regular optimizer initialization (optimizer = optim.Adam(model.parameters(), lr=opt.lr, weight_decay=1e-5)). What might be the issue? Thanks!
|
st117023
|
Looks like you’re not passing the optimizer the rest of the model’s parameters, only the parameters of model.module[14]. Instead try:
optimizer = optim.Adam([{'params': model.modules[14].parameters(), 'lr': 0.1*opt.lr}, {'params': [module.parameters() for index, module in enumerate(model.modules) if index != 14]}], lr=opt.lr, weight_decay=1e-5)
|
st117024
|
Thanks! That sounds like a good idea, however now after I corrected according to this suggestion, I have an error:
TypeError: optimizer can only optimize Variables, but one of the params is parameters
|
st117025
|
Alexey,
This part of your latest code is wrong:
{'params': [module.parameters() for index, module in enumerate(model.modules) if index != 14]}
It will put the python iterable module.parameters() into a list, and hence the error message. You can convert an iterable into a list for example via: list(module.parameters())
|
st117026
|
Soumith,
If I do
optimizer = optim.Adam([{'params': model.modules[14].parameters(), 'lr': 0.1*opt.lr}, {'params': list(module.parameters()) for index, module in enumerate(model.modules) if index != 14}], lr=opt.lr, weight_decay=1e-5)
then it seems again that the convergence is very slow and I’m not passing all my params to the optimizer. Or did I put the list() in the wrong place?
So far I tried two ways:
optimizer = optim.Adam([{'params': model.modules[14].parameters(), 'lr': 0.1*opt.lr}, {'params': model.modules[0].parameters(), 'lr':opt.lr}, {'params': model.modules[1].parameters(), 'lr':opt.lr}, {'params': model.modules[2].parameters(), 'lr':opt.lr}, {'params': model.modules[3].parameters(), 'lr':opt.lr}, {'params': model.modules[4].parameters(), 'lr':opt.lr}, {'params': model.modules[5].parameters(), 'lr':opt.lr}, {'params': model.modules[6].parameters(), 'lr':opt.lr}, {'params': model.modules[7].parameters(), 'lr':opt.lr}, {'params': model.modules[8].parameters(), 'lr':opt.lr}, {'params': model.modules[9].parameters(), 'lr':opt.lr}, {'params': model.modules[10].parameters(), 'lr':opt.lr}, {'params': model.modules[11].parameters(), 'lr':opt.lr}, {'params': model.modules[12].parameters(), 'lr':opt.lr}, {'params': model.modules[13].parameters(), 'lr':opt.lr}], lr=opt.lr, eps=1e-8, weight_decay=1e-5)
and this works very well. But, obviously, it’s not satisfactory w.r.t. readability.
And I tried the second way:
optimizer = optim.Adam([{'params': model.modules[14].parameters(), 'lr': 0.1*opt.lr}, {'params': module.parameters() for index, module in enumerate(model.modules) if index != 14}], lr=opt.lr, eps=1e-8, weight_decay=1e-5)
This is much clearer and Pythonic, but it just doesn’t work.
UPD:
I changed to a more Pythonic way, and this time it works:
ml = list()
ml.append({'params': model.modules[14].parameters(), 'lr': 0.1*opt.lr})
for index, module in enumerate(model.modules):
    if index != 14:
        ml.append({'params': module.parameters()})
optimizer = optim.Adam(ml, lr=opt.lr, eps=1e-8, weight_decay=1e-5)
Thanks Soumith and Andy!
|
st117027
|
I have got a difficult network to construct and I am not sure how to do it right.
I have got an input part and another part consisting of sequentially added blocks of the same type.
I drew a picture of it so that you can understand it easily: [diagram.png]
How do I declare the output of one layer as the input of another one in the constructor of my network?
Where should I add my parameters as class members to be updated correctly?
Here is a minimum working example:
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as data_utils
import torch.optim as optim
import torch.nn.init as init
import torch.autograd as AG

# Define Custom Layer
class CLayer(AG.Function):
    def __init__(self, tmp_):
        super(CLayer, self).__init__()
        self.tmp = tmp_

    def forward(self):
        output = self.tmp + torch.ones(self.tmp.size())
        return output

# Define Block
class Block(nn.Module):
    def __init__(self, A_, x_, y_):
        super(Block, self).__init__()
        self.A = A_
        self.x = x_
        self.y = y_
        self.customLayer = CLayer(self.x)

    def forward(self):
        tmp = self.x - torch.transpose(self.A)*self.y
        output = CLayer(tmp)
        return output

# Define Network
class Net(nn.Module):
    def __init__(self, A_, x0_):
        super(Net, self).__init__()
        self.x0 = x0_
        self.A = A_
        self.fc1 = nn.Linear(50, 50)
        self.fc1.weight.data = self.A
        self.block0 = Block(self.A, x0, ??)
        self.blockN = nn.ModuleList([Block(self.A, ??, ??) for i in range(2)])

    def forward(self):
        y = self.fc1(x)
        x = self.block0(self.A, self.x0, y)
        for i, l in enumerate(self.ihts):
            x = self.blockN(self.A, x, y)
        return x

A_ = torch.randn(m, n)
x0_ = torch.zeros(1, n)
model = Net(A_, x0_)
|
st117028
|
Since higher-order differentiation is not yet in a stable release, maybe the answer is simply that something is not implemented yet.
Otherwise, why is the last line raising the error shown in the comment?
from torch import nn, Tensor
from torch.autograd import Variable, grad
criterion = nn.MSELoss()
x = Variable(Tensor(5).normal_(), requires_grad = True)
y = Variable(Tensor(5).normal_())
l1 = (x - y).pow(2).sum().div(y.numel())
a = grad(l1, x, create_graph = True)
b = grad(a[0][3], x)
l2 = criterion(x, y)
a = grad(l2, x, create_graph = True)
b = grad(a[0][3], x) # "RuntimeError: differentiated input is unreachable"
|
st117029
|
Yes, we’ve converted all of torch.* functions to support double differentiation, and we’re working through nn.* now.
|
st117030
|
Suppose I have a 1 x 1 torch Variable and I want to convert it into a float. Is there a better way than doing this?
x.data.numpy()[0, 0]
(Couldn’t find it in docs).
|
st117031
|
Regarding Variables, note that you cannot convert one directly to a numpy array, nor one of its coefficients to a scalar type. Since Variables only “wrap” around Tensors, to keep the autograd semantic working, if x is a Variable, x[i] is a 1d Variable of size 1.
To access the values of [the Tensor of] a Variable, you have to do it through its .data attribute, which is a Tensor.
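So for the 1 x 1 case above, a small sketch of avoiding the round trip through numpy (the commented-out form assumes PyTorch 0.4 or later):
import torch
from torch.autograd import Variable

x = Variable(torch.randn(1, 1))

value = x.data[0, 0]   # on the PyTorch versions of this thread, indexing every dimension yields a Python float
# value = x.item()     # on newer PyTorch versions, .item() does the same directly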
|
st117032
|
If I have a BxCxHxW Variable which has a graph associated with it (It is the output of a CNN) and another similar Variable which is the output of another CNN. Now I want to replace a part of the first by second and pass the new Variable through a third network and then call backward on loss.
Right now, I run into an error because some Variables have been modified in-place. To fix that I am calling clone and then replacing in the third variable one by one, like this:
def refine(old_rep, fine_rep, ys, xs):
    new_rep = old_rep.clone()
    for i in range(fine_rep.size(0)):
        for j in range(10):
            img_id = i // 10
            y = int(ys[i])
            x = int(xs[i])
            new_rep[img_id, :, y:y+1, x:x+1] = fine_rep[i]
    return new_rep
Is there a faster way to do the same?
Thanks
|
st117033
|
This is a snippet that works for me from what you described as trying to do:
import torch
from torch.autograd import Variable
x = Variable(torch.randn(10), requires_grad=True)
y = x ** 2
a = Variable(torch.randn(10), requires_grad=True)
b = a ** 2
y[5:] = b[5:]
z = y ** 3
z.sum().backward()
Can you give me a small snippet of the failure case?
I’m wondering if the last layer of your first and second CNN has an in-place ReLU or something, which restricts you from doing another in-place op on their output. This should be easy to work around (make the ReLU non in-place).
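If the offending ReLUs come from a prebuilt network (e.g. a torchvision ResNet, used here purely as a hypothetical stand-in), one way to flip them all to the out-of-place version is:
import torch.nn as nn
from torchvision import models

model = models.resnet18()

# Make every ReLU out-of-place so its output can still be assigned into later.
for m in model.modules():
    if isinstance(m, nn.ReLU):
        m.inplace = False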
|
st117034
|
last layer of your first and second CNN have in-place ReLU
Yes! I understood the reason now. I had an inplace ReLU from torchvision’s resnet.
Thanks!
|
st117035
|
Hello,
I have 2 models, with two separate loss functions (loss1 and loss2) and optimizers. The output of one model (a linear layer) is used by the other as an input. Based on some previous questions asked here, I was able to get this model training by repackaging the output of the first model as a Variable :
op1 = model1(data1)
op2 = Variable(op1.data1)
op3 = model2(data2,op2)
loss1 = criterion1(op1,target1)
loss1.backward()
optimizer1.step()
loss2 = criterion2(op2,target2)
loss2.backward()
optimizer2.step()
Q. This seems to work. But how do I know if the each optimizer is taking the correct set of gradients? I assume that it is, as each optimizer is associated with a specific set of parameters (none shared), so the number of gradients needs to be consistent.
Q. I was now trying to use loss2 to guide the optimization of model1. Does this make sense?
loss1 = criterion1(op1,target1)
loss1.backward()
loss2 = criterion2(op2,target2)
loss2 = loss2 + lambda*loss1
loss2.backward()
This gives me an error, and tells me to use retain_variables=True in my first backward() call. When I do this the model does start training.
Q. What is retain_variables doing? Is this the correct way to do what I want (i.e. guide the parameter updates of one model using the loss of the other). Or does this just not make sense?
Thanks,
Gautam
|
st117036
|
in case 2, you don’t need to do loss1.backward(). Because loss2 = loss2 + lambda * loss1, when you call loss2.backward(), loss1’s backward will also be called.
You can use retain_variables=True, which will keep all the intermediate buffers around so that loss1.backward can be called twice, but it’ll lead to double gradients in your case.
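A self-contained sketch of that single-backward setup with two optimizers (the tiny Linear models, loss functions, and lam value are stand-ins for the ones in the post above; loss2 only influences model1 if op1 stays in the graph, i.e. is not re-wrapped in a fresh Variable):
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable

model1 = nn.Linear(4, 3)
model2 = nn.Linear(3, 2)
criterion1, criterion2 = nn.MSELoss(), nn.MSELoss()
optimizer1 = optim.SGD(model1.parameters(), lr=0.01)
optimizer2 = optim.SGD(model2.parameters(), lr=0.01)
lam = 0.1

data1 = Variable(torch.randn(8, 4))
target1 = Variable(torch.randn(8, 3))
target2 = Variable(torch.randn(8, 2))

op1 = model1(data1)          # keep op1 in the graph (do not re-wrap it in a new Variable)
op3 = model2(op1)            # output of model1 feeds model2
loss1 = criterion1(op1, target1)
loss2 = criterion2(op3, target2)
total = loss2 + lam * loss1  # one backward populates gradients of both models

optimizer1.zero_grad()
optimizer2.zero_grad()
total.backward()
optimizer1.step()
optimizer2.step()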
|
st117037
|
Hi @smth, thanks for the reply.
I have been seeing some strange behavior (mostly good).
I am calling optimizer1.step() after I do loss1.backward() and optimizer2.step() after loss2.bakward(), so the double gradient shouldn’t affect anything since I already did the associated optimizer step. Is that correct?
Since loss1 and loss2 are associated with different models (no shared params), does the lambda*loss1 make any difference to loss2? I was starting to think that it doesn’t, but the two models (case 1 and case 2) give quite different results.
Thanks,
Gautam
|
st117038
|
I’m trying to familiarize myself with a basic LSTM in PyTorch, and I’m getting a strange error. The bottom of the traceback refers to a _copyParams function in the torch/backends/cudnn/rnn.py file. Basically, the assertion that the from/to tensors of the parameters being copied have the same type fails, suggesting there’s a tensor type mismatch. I added a quick print statement in the file for debugging, and it says that the function is trying to copy a FloatTensor to a DoubleTensor. I’m sure this is an error in my code and not a bug in the codebase, which is why I’m posting it here. You can find my code in this notebook.
> torch.cuda.FloatTensor torch.cuda.DoubleTensor
> ---------------------------------------------------------------------------
> AssertionError Traceback (most recent call last)
> <ipython-input-7-bccbf62ef6f2> in <module>()
> 14
> 15 # forward + backward + optimize
> ---> 16 outputs = tester(inputs)
> 17 loss = loss_function(outputs, labels)
> 18 loss.backward()
> ~/anaconda3/envs/torch/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
> 204
> 205 def __call__(self, *input, **kwargs):
> --> 206 result = self.forward(*input, **kwargs)
> 207 for hook in self._forward_hooks.values():
> 208 hook_result = hook(self, input, result)
> <ipython-input-4-fbaa41914871> in forward(self, x)
> 15
> 16 def forward(self, x):
> ---> 17 output, self.hidden = self.lstm(x,self.hidden)
> 18 output = self.fc(output.view(x.size()[1],-1))
> 19 return torch.clamp(output,0,49688)
> ~/anaconda3/envs/torch/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
> 204
> 205 def __call__(self, *input, **kwargs):
> --> 206 result = self.forward(*input, **kwargs)
> 207 for hook in self._forward_hooks.values():
> 208 hook_result = hook(self, input, result)
> ~/anaconda3/envs/torch/lib/python3.6/site-packages/torch/nn/modules/rnn.py in forward(self, input, hx)
> 89 dropout_state=self.dropout_state
> 90 )
> ---> 91 output, hidden = func(input, self.all_weights, hx)
> 92 if is_packed:
> 93 output = PackedSequence(output, batch_sizes)
> ~/anaconda3/envs/torch/lib/python3.6/site-packages/torch/nn/_functions/rnn.py in forward(input, *fargs, **fkwargs)
> 341 else:
> 342 func = AutogradRNN(*args, **kwargs)
> --> 343 return func(input, *fargs, **fkwargs)
> 344
> 345 return forward
> ~/anaconda3/envs/torch/lib/python3.6/site-packages/torch/autograd/function.py in _do_forward(self, *input)
> 200 self._nested_input = input
> 201 flat_input = tuple(_iter_variables(input))
> --> 202 flat_output = super(NestedIOFunction, self)._do_forward(*flat_input)
> 203 nested_output = self._nested_output
> 204 nested_variables = _unflatten(flat_output, self._nested_output)
> ~/anaconda3/envs/torch/lib/python3.6/site-packages/torch/autograd/function.py in forward(self, *args)
> 222 def forward(self, *args):
> 223 nested_tensors = _map_variable_tensor(self._nested_input)
> --> 224 result = self.forward_extended(*nested_tensors)
> 225 del self._nested_input
> 226 self._nested_output = result
> ~/anaconda3/envs/torch/lib/python3.6/site-packages/torch/nn/_functions/rnn.py in forward_extended(self, input, weight, hx)
> 283 hy = tuple(h.new() for h in hx)
> 284
> --> 285 cudnn.rnn.forward(self, input, hx, weight, output, hy)
> 286
> 287 self.save_for_backward(input, hx, weight, output)
> ~/anaconda3/envs/torch/lib/python3.6/site-packages/torch/backends/cudnn/rnn.py in forward(fn, input, hx, weight, output, hy)
> 254 w.zero_()
> 255 params = get_parameters(fn, handle, w)
> --> 256 _copyParams(weight, params)
> 257
> 258 if tuple(hx.size()) != hidden_size:
> ~/anaconda3/envs/torch/lib/python3.6/site-packages/torch/backends/cudnn/rnn.py in _copyParams(params_from, params_to)
> 182 for param_from, param_to in zip(layer_params_from, layer_params_to):
> 183 print(param_from.type(), param_to.type())
> --> 184 assert param_from.type() == param_to.type()
> 185 param_to.copy_(param_from)
> 186
> AssertionError:
Thanks for your help!
|
st117039
|
i suspect that the data returned from the DataLoader is returned as DoubleTensor instead of what the model wants by default: FloatTensor.
Right after # get the inputs, i.e. after the inputs, labels = ..., add the line:
inputs = inputs.float()
|
st117040
|
Thanks a ton, Soumith! I’d used a transform in the DataLoader, but only transformed the targets to FloatTensor.
|
st117041
|
I have a trained nn.Embedding model. I would like to extract its parameters and then save them as a text file like GloVe, so others can reuse the trained word representation for another problem.
I have no idea how to do it yet. Please help me; a code example would be helpful.
|
st117042
|
m = nn.Embedding(...)
# m.weight is a Parameter; its .data attribute contains the embedding weights.
parameters = m.weight.data.numpy()
# save it as you wish
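A small sketch of then dumping the weights in a GloVe-style text file, one word per line followed by its vector (the vocabulary list here is hypothetical; use the word list the Embedding was trained with):
import torch.nn as nn

itos = ['the', 'cat', 'sat']          # hypothetical index-to-word list
m = nn.Embedding(len(itos), 4)

weights = m.weight.data               # (vocab_size, dim) tensor
with open('embeddings.txt', 'w') as f:
    for i, word in enumerate(itos):
        vec = ' '.join('%.6f' % v for v in weights[i].tolist())
        f.write('%s %s\n' % (word, vec))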
|
st117043
|
The following error occurred when I tried to test the sequence-to-sequence demo:
fzuir@fzuir-31:~/delaiQ$ python seq2seq_translation_tutorial.py
Reading lines…
Read 135842 sentence pairs
Trimmed to 10853 sentence pairs
Counting words…
Counted words:
fra 4489
eng 2925
[u'je ne suis pas pret a me battre .', u'i m not ready to fight .']
2m 41s (- 37m 36s) (5000 6%) 2.8834
5m 15s (- 34m 12s) (10000 13%) 2.3491
THCudaCheck FAIL file=/py/conda-bld/pytorch_1493676237139/work/torch/lib/THC/generic/THCStorage.c line=32 error=30 : unknown error
Traceback (most recent call last):
  File "seq2seq_translation_tutorial.py", line 808, in <module>
    trainIters(encoder1, attn_decoder1, 75000, print_every=5000)
  File "seq2seq_translation_tutorial.py", line 674, in trainIters
    decoder, encoder_optimizer, decoder_optimizer, criterion)
  File "seq2seq_translation_tutorial.py", line 618, in train
    return loss.data[0] / target_length
RuntimeError: cuda runtime error (30) : unknown error at /py/conda-bld/pytorch_1493676237139/work/torch/lib/THC/generic/THCStorage.c:32
Can anyone solve it?
Thanks!
|
st117044
|
error 30 is usually a GPU initialization error. See if any other CUDA programs work on your system, and if not, you can use “sudo” once to start any cuda program of your choice. After that, it should be fixed.
|
st117045
|
So I’ve tried to compile PyTorch locally for development purposes under the (dev) anaconda environment, and another one under (lisa) which I installed following the instructions from pytorch.org. I get the following error when running the VAE code from the PyTorch tutorials on the newest version.
(dev) [SLURM] suhubdyd@kepler2:~/research/models/pytorch-models/vae$ python main.py
THCudaCheck FAIL file=/u/suhubdyd/research/dl-frameworks/pytorch/torch/lib/THC/generated/../generic/THCTensorMathPointwise.cu line=247 error=8 : invalid device function
Traceback (most recent call last):
File "main.py", line 138, in <module>
train(epoch)
File "main.py", line 108, in train
recon_batch, mu, logvar = model(data)
File "/u/suhubdyd/.conda/envs/dev/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "main.py", line 71, in forward
mu, logvar = self.encode(x.view(-1, 784))
File "main.py", line 54, in encode
h1 = self.relu(self.fc1(x))
File "/u/suhubdyd/.conda/envs/dev/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/u/suhubdyd/.conda/envs/dev/lib/python2.7/site-packages/torch/nn/modules/linear.py", line 54, in forward
return self._backend.Linear.apply(input, self.weight, self.bias)
File "/u/suhubdyd/.conda/envs/dev/lib/python2.7/site-packages/torch/nn/_functions/linear.py", line 14, in forward
output.add_(bias.expand_as(output))
RuntimeError: cuda runtime error (8) : invalid device function at /u/suhubdyd/research/dl-frameworks/pytorch/torch/lib/THC/generated/../generic/THCTensorMathPointwise.cu:247
while the stable distribution runs fine:
(lisa) [SLURM] suhubdyd@kepler2:~/research/models/pytorch-models/vae$ python main.py
Train Epoch: 1 [0/60000 (0%)] Loss: 549.307800
Train Epoch: 1 [1280/60000 (2%)] Loss: 311.565613
Train Epoch: 1 [2560/60000 (4%)] Loss: 236.338989
Train Epoch: 1 [3840/60000 (6%)] Loss: 225.371078
Train Epoch: 1 [5120/60000 (9%)] Loss: 208.993668
Train Epoch: 1 [6400/60000 (11%)] Loss: 206.427368
Train Epoch: 1 [7680/60000 (13%)] Loss: 208.580597
Train Epoch: 1 [8960/60000 (15%)] Loss: 201.324646
Train Epoch: 1 [10240/60000 (17%)] Loss: 191.824326
Train Epoch: 1 [11520/60000 (19%)] Loss: 195.214188
Train Epoch: 1 [12800/60000 (21%)] Loss: 189.770447
Train Epoch: 1 [14080/60000 (23%)] Loss: 173.119644
Train Epoch: 1 [15360/60000 (26%)] Loss: 179.030197
Train Epoch: 1 [16640/60000 (28%)] Loss: 170.247345
Train Epoch: 1 [17920/60000 (30%)] Loss: 169.193451
Train Epoch: 1 [19200/60000 (32%)] Loss: 162.828690
Train Epoch: 1 [20480/60000 (34%)] Loss: 158.171326
Train Epoch: 1 [21760/60000 (36%)] Loss: 158.530518
Train Epoch: 1 [23040/60000 (38%)] Loss: 155.896255
Train Epoch: 1 [24320/60000 (41%)] Loss: 158.835968
Train Epoch: 1 [25600/60000 (43%)] Loss: 152.416977
Train Epoch: 1 [26880/60000 (45%)] Loss: 153.593964
Train Epoch: 1 [28160/60000 (47%)] Loss: 147.944260
Train Epoch: 1 [29440/60000 (49%)] Loss: 148.223892
Train Epoch: 1 [30720/60000 (51%)] Loss: 145.770905
Train Epoch: 1 [32000/60000 (53%)] Loss: 144.410706
Train Epoch: 1 [33280/60000 (55%)] Loss: 147.592163
Train Epoch: 1 [34560/60000 (58%)] Loss: 149.320328
|
st117046
|
for anyone looking for a follow-up thread: https://github.com/pytorch/pytorch/issues/1955
|
st117047
|
Hi all,
I need to implement double backward for 2D max pooling.
I’ve been trying to understand how some of the other ops were converted, without much success.
Can anyone provide any insights?
e.g. A rough explanation of how the Linear layer was converted might be really helpful (what is ctx??)
Thanks.
|
st117048
|
I found this quite helpful for understanding the autograd refactor and how the new function definitions work: https://github.com/pytorch/pytorch/pull/1016
However, for max pooling I am still quite confused, as it seems like the logic is implemented in the THNN backend… I cannot actually find the source code for this.
Should the pooling function be moved out of THNN in order to implement the higher order derivatives? Or is there a simple way to use the existing functions?
|
st117049
|
Found this comment in the Threshold function that was added to allow for ReLU double backward :
# TODO: This class should be removed once THNN function support Variable backward
So my understanding is that I need to write something similar for pooling, at least until THNN support is added.
|
st117050
|
I have two systems, where the first has a GeForce GTX 780 Ti with CUDA 8.0 (driver version: 375.26) and the other has a Tesla M2070 with CUDA 7.5.18 (driver version: 352.99).
I installed the bleeding-edge version on both on top of Python 3.6 (conda install -c soumith magma-cuda80 for the first machine and conda install -c soumith magma-cuda75 for the second machine).
I tested the following simple code:
import torch
from torch.autograd import Variable
a = Variable(torch.randn(3,4,5), requires_grad=True).cuda()
b = torch.randn(3,4,5).cuda()
a.backward(b)
The code works on the first machine but failed on the other machine as follows:
THCudaCheck FAIL file=/users/PAS0396/osu7806/pytorch/torch/lib/THC/generic/THCTensorCopy.c line=65 error=46 : all CUDA-capable devices are busy or unavailable
Traceback (most recent call last):
File "test.py", line 5, in <module>
a.backward(b)
File "/users/PAS0396/osu7806/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 146, in backward
self._execution_engine.run_backward((self,), (gradient,), retain_variables)
File "/users/PAS0396/osu7806/anaconda3/lib/python3.6/site-packages/torch/autograd/_functions/tensor.py", line 163, in backward
return grad_output.cpu()
File "/users/PAS0396/osu7806/anaconda3/lib/python3.6/site-packages/torch/tensor.py", line 31, in cpu
return self.type(getattr(torch, self.__class__.__name__))
File "/users/PAS0396/osu7806/anaconda3/lib/python3.6/site-packages/torch/cuda/__init__.py", line 276, in type
return super(_CudaBase, self).type(*args, **kwargs)
File "/users/PAS0396/osu7806/anaconda3/lib/python3.6/site-packages/torch/_utils.py", line 33, in _type
return new_type(self.size()).copy_(self, async)
RuntimeError: cuda runtime error (46) : all CUDA-capable devices are busy or unavailable at /users/PAS0396/osu7806/pytorch/torch/lib/THC/generic/THCTensorCopy.c:65
Since CUDA itself is working (no problems in cuda() methods before calling backward()), I wonder why this would happen on the second system.
|
st117051
|
Hi,
pytorch only supports compute capability >= 3.0
Unfortunately, the Tesla M2070 is a 2.0 compute capability card.
|
st117052
|
Oh. Sorry to hear that. Torch7 worked well on that machine without any problems.
Thanks!
|
st117053
|
You might try building from source, but it would require some additional patches (look for closed issues in the main repo). But we don’t support them officially.
|
st117054
|
@apaszke I built from the most recent source as follows.
export CMAKE_PREFIX_PATH=/home/kimjook/anaconda3
conda install numpy mkl setuptools cmake gcc cffi
conda install -c soumith magma-cuda75
git clone https://github.com/pytorch/pytorch
cd pytorch
pip install -r requirements.txt
python setup.py install
Could you let me know which closed issues you are referring to? https://github.com/pytorch/pytorch/issues/665 seems somewhat related, but there is no build error in my case.
Thanks! (I understand that supporting old devices is annoying, but I am somewhat frustrated since almost the same model that worked well on Torch7 doesn’t work on PyTorch.)
|
st117055
|
That’s the issue I was thinking about, but maybe you don’t need it for some reason.
|
st117056
|
I am a little confused here. The official documentation says it needs an NVIDIA GPU with compute capability >= 2.0:
http://pytorch.org/docs/master/torch.html
|
st117057
|
We should update that part. I’m doing it now.
We started with the commitment of cc >= 2.0, but it has been infeasible, as 2.0 is simply too old and several newer APIs don’t work on it.
|
st117058
|
Hi,
I have trained an encoder-decoder on some data. Now I want to use the same weights in the hidden layers of the encoder and decoder for different data. The only changes I need are in the embedding layers of the encoder and decoder and the output layer of the decoder.
The network is the same as this one.
Any pointers on how to do this?
Thanks.
|
st117059
|
you can copy the .weight.data over from the Embedding layers of the trained encoder-decoder into the new one.
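A minimal sketch of that kind of copy with made-up sizes (the same .weight.data.copy_ pattern works for any layer whose shape matches between the old and new model):
import torch.nn as nn

old_layer = nn.Embedding(5000, 256)   # layer from the trained encoder-decoder
new_layer = nn.Embedding(5000, 256)   # corresponding layer in the new model

new_layer.weight.data.copy_(old_layer.weight.data)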
|
st117060
|
When I tried to test “Translation with a Sequence to Sequence Network and Attention”, my GPU locked up.
I found that the temperature of the GPU was too high…
Can anyone solve it?
|
st117061
|
your GPU is overheating because a lot of computation is being done.
Try to keep the GPU in a cool place, or turn on the fan / Air Conditioning.
|
st117062
|
Hi, I am constructing an optimizer for a ResNet. As http://pytorch.org/docs/master/optim.html mentions, we can either use the default optimizer parameter settings for all layers or specify every layer one by one.
The problem is, since I’m training ResNet, I just want to modify the learning rate of only ONE particular layer. What tricks can I use to set one layer specifically and leave the others at the default?
Thanks!
|
st117063
|
you can use the Per-parameter options: http://pytorch.org/docs/master/optim.html#per-parameter-options
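A rough sketch of that for a ResNet, using torchvision's resnet18 as a stand-in and picking the final fc layer as the one that gets its own learning rate (the learning rates themselves are made up):
import torch.optim as optim
from torchvision import models

model = models.resnet18()

# Parameters of the one special layer, identified by object identity.
special_ids = set(id(p) for p in model.fc.parameters())
base_params = [p for p in model.parameters() if id(p) not in special_ids]

optimizer = optim.SGD([
    {'params': base_params},                      # falls back to the default lr below
    {'params': model.fc.parameters(), 'lr': 0.01},
], lr=0.1, momentum=0.9)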
|
st117064
|
lst = [[0, 1, -4, 8],
[2, -3, 2, 1],
[5, -8, 7, 1]]
torch.Tensor(lst)
This is a 3 x 4 matrix. How do I swap the first and second rows to get the following matrix?
lst = [[2, -3, 2, 1],
[0, 1, -4, 8],
[5, -8, 7, 1]]
torch.Tensor(lst)
|
st117065
|
you can do this with an index_copy. You can generate linear indices such as [0, 1, 2, 3] using torch.arange, and then swap 0 and 1. Then use these as indices to index_copy.
Alternatively, you can do something simpler too:
x = tlist[0].clone()
tlist[0] = tlist[1]
tlist[1] = x
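A variant of the index-based approach, building the permuted row order explicitly and gathering the rows with index_select:
import torch

t = torch.Tensor([[0, 1, -4, 8],
                  [2, -3, 2, 1],
                  [5, -8, 7, 1]])

idx = torch.LongTensor([1, 0, 2])     # new row order: swap rows 0 and 1
swapped = t.index_select(0, idx)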
|
st117066
|
When I tried to use CUDA 8.0 on a Jetson TX1, it reported ‘cudaError: too many resources requested for launch’. It works with the CPU.
Any help will be appreciated.
|
st117067
|
Because I’m fine-tuning a neural network, I always need to change something in the network’s class (e.g., activation, number of neurons, …). How can I load a trained model without the class that defines its network structure?
|