st117968
|
If that does what you want, it probably is correct.
An easier way to express it would be something like
adiff = torch.abs(output-target)
batch_max = 0.2*torch.max(adiff).data[0]
sqdiff = (adiff*adiff+batch_max*batch_max)/(2*batch_max)
return adiff.clamp(max=batch_max)+(sqdiff-1).clamp(min=batch_max)
Note though that clamp is not differentiable w.r.t. the bounds, but your definition of batch_max isn’t, either. It probably does not matter much.
Best regards
Thomas
|
st117969
|
Thanks. It’s a per-batch statistic. I’m open to other suggestions if you’re worried it could be a problem, but at the very least autograd is able to differentiate it. BTW, I ended up running torch.mean() on that expression prior to returning it.
|
st117970
|
Can anyone explain what is going on here?
import torch
from torch.autograd import Variable
import numpy as np
x = Variable(torch.Tensor([3.0]),requires_grad=True)
b = np.float32(3)
b*x
Returns
Out[11]:
array([[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[Variable containing:
9
[torch.FloatTensor of size 1]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]], dtype=object)
Casting b to a normal float on the other hand gives the expected result.
|
st117971
|
It is a problem with how numpy multiplication is done, unfortunately: because you do b*x, it is the numpy implementation that is used. You can try doing x*b instead and it should fail with a nice error message from pytorch.
|
st117972
|
Unfortunately there is nothing we can do here: due to the way Python handles the * operator, if the left element is a numpy object, the numpy functions are going to be used, and they are not aware of pytorch tensors.
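For instance, a minimal sketch of the workaround (casting the numpy scalar to a plain Python float first):
import numpy as np
import torch
from torch.autograd import Variable

x = Variable(torch.Tensor([3.0]), requires_grad=True)
b = float(np.float32(3))  # plain Python float, so pytorch's multiplication is used
print(b * x)              # Variable containing 9, as expected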
|
st117973
|
Fair enough, it may be worth a mention somewhere in the docs perhaps. And many thanks for clarifying!
The problem also only occurs when you multiply a numpy float with a Variable; plain tensors are actually OK.
|
st117974
|
Assume we have y as a function of x, y = f(x), and z as a function of y, z = g(y).
How can I compute the gradient w.r.t. y first (dz/dy, use this to do something else), and then compute the gradients w.r.t. x (dy/dx)?
|
st117975
|
Hello,
is this approximately what you need: let’s say you have
import torch
from torch.autograd import Variable
x = Variable(torch.randn(4), requires_grad=True)
y = f(x)
y2 = Variable(y.data, requires_grad=True) # use y.data to construct new variable to separate the graphs
z = g(y2)
(there is also Variable.detach, but not now)
Then you can do (assuming z is a scalar)
z.backward() # this computes dz/dy2 in y2.grad
y.backward(y2.grad) # this computes dy/dx * y2.grad
print (x.grad)
Note that the .backward evaluates the derivative at the last forward computation.
(I hope this is correct, I don’t have access to my pytorch right now.)
Best regards
Thomas
|
st117976
|
Hello everyone!
I’m trying to build a neural network that can generate label scores for classification and use the NLLLoss2d function.
I have 1000 samples, and each sample is a vector of 100 entries. Thus the input is a 1000x100 matrix. For each sample in the input, I am trying to generate two 3x5 matrices, containing scores for the two labels. Thus the output should be a 1000x2x3x5 tensor as described in the doc of the NLLLoss2d function.
In the network, I used an array of nn.Linear() functions:
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.linear = [[torch.nn.Linear(100, 2) for i in range(5)] for j in range(3)]

    def forward(self, x):
        y = [[[0, 0] for i in range(5)] for j in range(3)]
        for i in range(3):
            for j in range(5):
                y[i][j] = torch.nn.functional.log_softmax(nnFunc.relu(self.linear[i][j](x)))
        return y
However, the output of the network is a list of dimension 3x5x1000x2. The first two dimensions are of list type, and the last two dimensions are of Variable type.
I am trying to permute the tensor I get from the network, but I’m not sure it can still be used by backpropagation afterwards.
I would appreciate any help!
|
st117977
|
yes you can call torch.permute on the output to make it to your required shape, and backpropagation happens correctly.
|
st117978
|
Thanks for your reply. I tried using permute in the model, but it showed the error
there are no graph nodes that require computing gradients.
The network I was using is
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.linear = [[torch.nn.Linear(100, 2) for i in range(5)] for j in range(3)]

    def forward(self, x):
        N = len(x)
        y = torch.Tensor([[np.zeros((N, 2)) for i in range(5)] for j in range(3)])
        for i in range(3):
            for j in range(5):
                y[i][j] = torch.nn.functional.log_softmax(nnFunc.relu(self.linear[i][j](x))).data
        return Variable(y.permute(2, 3, 0, 1))
I have to put the Variable() wrapper there, otherwise it shows the error
‘float’ object has no attribute ‘__getitem__’
|
st117979
|
duguyiqiu:
there are no graph nodes that require computing gradients.
replace:
self.linear = [[torch.nn.Linear(100, 2) for i in range(5)] for j in range(3)]
with
self.linear = nn.ModuleList([nn.ModuleList([nn.Linear(100, 2) for i in range(5)]) for j in range(3)])
This makes your model parameters visible, so to say.
|
st117980
|
Thanks for your reply! One more thing I did to make the code running is to add
requires_grad=True
in the Variable().
Now I get no errors and the code runs.
However, I found that the parameters do not change after I do the
optimizer.zero_grad()
loss.backward()
optimizer.step()
process. This means the gradient the code calculated is zero. Do you think this has to do with how I built the return value y, which somehow makes the optimizer treat it as a constant?
|
st117981
|
Enlightened by this thread, I found the solution, which is to use the torch.stack() function. So to get the output y, I did
y = torch.stack([torch.stack([nnFunc.log_softmax(nnFunc.relu(self.linear[i][j](x))) for j,m in enumerate(l)],0) for i,l in enumerate(self.linear)],1)
and then use the torch.permute() function:
return y.permute(2,3,1,0)
|
st117982
|
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
for i, (images, labels) in enumerate(train_loader):
    images = Variable(images.view(-1, sequence_length, input_size)).cuda()
    # Forward + Backward + Optimize
    optimizer.zero_grad()
    outputs = rnn(images)
    _, predicted = torch.max(outputs.data, 1)
    loss = criterion(outputs, predicted)
    loss.backward()
    optimizer.step()
The error happens at the line loss = criterion(outputs, predicted):
AttributeError: ‘torch.cuda.LongTensor’ object has no attribute 'requires_grad’
How can I use criterion with predicted? Thank you.
|
st117983
|
Thanks for your answer. Before, I had used labels = Variable(labels).cuda(),
and the same error existed: AttributeError: ‘torch.cuda.LongTensor’ object has no attribute 'requires_grad’
at this location:
loss = criterion(outputs, predicted)
I also tried labels.data = predicted, which caused other errors!
|
st117984
|
You can’t use predicted to get the loss; you should use labels, because the loss compares the ground truth with the prediction. The prediction is outputs and the ground truth is labels:
loss = criterion(outputs, labels)
|
st117985
|
Thanks! Well, you are right, but I attempted to do it a new way. Now I know how to modify it:
…
_, predicted = torch.max(outputs.data, 1)
labels.data.copy_(predicted)
loss = criterion(outputs, labels)
…
Thanks, SherlockLiao
|
st117986
|
I find that the example in http://pytorch.org/docs/nn.html#torch-nn-init uses nn.init.xavier_uniform and so on.
The problem is that torch.nn.init must be imported manually: if someone only does from torch import nn, they still cannot use nn.init immediately.
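For example (a minimal sketch; the explicit import of the submodule is what makes nn.init usable):
import torch
from torch import nn
import torch.nn.init  # without this explicit import, nn.init may not be available

layer = nn.Linear(10, 10)
nn.init.xavier_uniform(layer.weight)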
|
st117987
|
Sorry for the noob question, but I can’t find an answer anywhere online.
I have a batch of zero padded inputs, representing word vector IDs. I can create an embedding layer and look up those word vector values.
How do I do something simple like add the word vectors for each row, up to word vector X, where X is different for every row?
Even something like torch.nonzero(t) returns a “not implemented for type Variable” error.
The Numpy function would be np.ma.mean()
https://docs.scipy.org/doc/numpy/reference/generated/numpy.ma.mean.html
Is there a way to do this in PyTorch, without batch size == 1, which would backprop loss back to my embedding layer? Thanks!
|
st117988
|
I would use masked_fill_ with zeros for the entries past the word vectors (if they aren’t already filled with zeros) and then sum along the whole axis. If you want an average, you can then divide by the lengths tensor, suitably expand_as’ed.
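A rough sketch of that idea in current PyTorch style (the example IDs and lengths below are made up; padding_idx=0 keeps the padding rows at zero):
import torch

ids = torch.LongTensor([[1, 2, 3, 0, 0], [4, 5, 0, 0, 0]])   # zero-padded word IDs
lengths = torch.FloatTensor([3, 2])
embedding = torch.nn.Embedding(10, 4, padding_idx=0)

embedded = embedding(ids)                      # 2 x 5 x 4
mask = (ids != 0).unsqueeze(2).float()         # 1 for real tokens, 0 for padding
summed = (embedded * mask).sum(1)              # 2 x 4, padded positions contribute nothing
mean = summed / lengths.unsqueeze(1)           # masked mean per row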
|
st117989
|
Thanks! Will check out those functions. Of course the brute force way would be to subtract out the non-masked values, etc…
I did it through several repeat operations – probably not the most efficient but it’s correct and easy to follow. Will rewrite it better as needed. Useful function.
|
st117990
|
During Training, How to freeze intermediate layers of network architecture in transfer learning?
|
st117991
|
Just need to set requires_grad of the parameters of those intermediate layers to False. There’s an example here (in the context of Resnet 18):
http://pytorch.org/docs/notes/autograd.html#excluding-subgraphs
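A minimal sketch of that approach (num_classes is a placeholder for your own label count):
import torch.nn as nn
import torch.optim as optim
import torchvision.models as models

num_classes = 10                                               # placeholder
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False                                # freeze all pretrained layers

model.fc = nn.Linear(model.fc.in_features, num_classes)        # new head, trainable by default
optimizer = optim.SGD(model.fc.parameters(), lr=0.001, momentum=0.9)  # only optimize the new head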
|
st117992
|
Is it possible to modify module weights with a backward hook or would it mess up the gradients? If not, is there a way to modify a specific module’s weights without having to search through the entire network using apply? I’m thinking of something like clipping weights with a backward hook.
|
st117993
|
Hi Nick,
From the little I have learnt about autograd while converting the simplest functions, I would expect that it does not mess up the gradient calculation (which stores its needs in a context). It might not make much sense to do this during the backward pass and then apply the optimizer as usual, but that would depend on what you have in mind.
You can access the parameters as attributes of the module. The documentation for the torch nn modules lists them under Variables, e.g. for torch.nn.Linear.
These are Parameters, so you can manipulate their .data field. The original Wasserstein GAN implementation does this to clip the weights, although they loop over the parameters to catch all of them.
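A minimal sketch of that kind of clipping, typically done after the optimizer step rather than inside a backward hook (model is a placeholder for your network):
clip_value = 0.01
for p in model.parameters():
    p.data.clamp_(-clip_value, clip_value)  # modify the weights in place via .data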
I hope this helps.
Best regards
Thomas
|
st117994
|
ah yes duh… Would be stupid to change the weights before the optimizer step. Thanks for the response !
|
st117995
|
I have indicated that the problem happens in the second batch, when using ys = ys.cuda(). Someone said it is because I am not releasing the first batch, but I don’t understand.
Could anyone explain why this happens?
|
st117996
|
I am implementing an extension using FFI with generic files in this repo.
I manually linked the library as in this file. Is there a better way to do it?
|
st117997
|
A better way is to get the include_path and lib_path as given here: https://github.com/pytorch/extension-ffi/issues/8#issuecomment-300812066 and then directly link against the libraries present in lib_path. You can first get them via python, then set them as env variables in the shell script that calls cmake. Inside cmake, you can read these env variables.
This way it is guaranteed to not link against a [Lua]Torch installation.
|
st117998
|
Thanks a lot for the detailed solution!
It is working nicely at build.py#L21 10.
|
st117999
|
Are you sure you didn’t find what you wanted here? Coz I’m amazed at the loss functions covered by PyTorch.
Please search the forum once before asking, as this question was discussed multiple times in the past. This thread will be really helpful.
If interested, do check out how they’ve coded the existing ones. It’s actually quite simple. You can check it out here.
|
st118000
|
In the init method, can I do something like
self.input = nn.Linear(28 * 28, 28 * 28, requires_gradients=False)
or is it not the way it should be done?
|
st118001
|
maybe u can do this way
self.input = nn.Linear(28*28, 28*28)
self.input.weight.data.requires_grad=False
self.input.bias.data.requires_grad=False
|
st118002
|
That looks like it should work. Is there a reason why you wrote the second line twice?
|
st118003
|
I made a mistake; the second line should be bias if you use the default parameters in nn.Linear
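As an aside, a variant you often see is setting requires_grad on the Parameter itself rather than on its .data (a minimal sketch):
import torch.nn as nn

layer = nn.Linear(28 * 28, 28 * 28)
layer.weight.requires_grad = False  # the Parameter carries the autograd flag
layer.bias.requires_grad = False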
|
st118004
|
Tried this example code but got an error:
>>> a = torch.randn(4, 4)
>>> torch.mean(a, 1)
-0.5172
-0.2325
0.4547
-0.6532
[torch.FloatTensor of size 4x1]
>>> torch.mean(a, 1, True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: torch.mean received an invalid combination of arguments - got (torch.FloatTensor, int, bool), but expected one of:
* (torch.FloatTensor source)
* (torch.FloatTensor source, int dim)
How can I solve this error?
Also, how can I compute the mean of a matrix? For instance, I have a b x c x h x w tensor, and I would like to compute the mean of each feature map, so that I get a b x c x 1 x 1 mean tensor. Just like np.mean(a, axis=(2, 3)) does.
|
st118005
|
This is a relatively recent change, you may want to upgrade your pytorch version. Also I am not even sure that it is in the binary releases yet and may only be available with the source install.
For your problem, doesn’t the regular torch.mean do what you want?
|
st118006
|
Thanks for this quick response. You’re right, it’s not in the binary release. I will try compiling from source. Also, it doesn’t do what I want. For instance, I have a 1x3x4x4 tensor, and what I would like to do is to compute the mean over the spatial dimensions of each of the 3 channels, so that I get a tensor of size 1x3x1x1, while torch.mean only calculates the mean over a single dimension, not across two dimensions.
>>> import torch
>>> a = torch.randn(1, 3, 4, 4)
>>> torch.mean(a)
-0.1468020509540414
>>> torch.mean(a,dim=1)
(0 ,0 ,.,.) =
-0.3284 0.1365 -0.4091 -0.6045
-0.3535 -0.8380 0.0724 0.4533
-0.2629 0.7440 0.2587 -0.0283
-0.5283 -0.3177 -0.4128 0.0697
[torch.FloatTensor of size 1x1x4x4]
>>>
|
st118007
|
I use torch.mean(torch.mean(input, dim=2),dim=2) to get what I want now; I just wonder if there’s a more elegant way.
|
st118008
|
Found the solution: first view the tensor as b x c x (h*w), then use torch.mean(). Thanks for your help.
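Roughly, a sketch of that (assuming x is the b x c x h x w tensor; older versions may keep a trailing singleton dimension after mean):
b, c, h, w = x.size()
channel_mean = x.view(b, c, h * w).mean(dim=2)   # b x c
channel_mean = channel_mean.view(b, c, 1, 1)     # back to b x c x 1 x 1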
|
st118009
|
Solved by acgtyrant in post #2
This question is not important now.
|
st118010
|
This question is not important now.
Reference: github.com/pytorch/pytorch — “Flag to check if a Module is on CUDA similar to is_cuda for Tensors” (issue by napsternxg, opened and closed Jan 25, 2017).
|
st118011
|
I was going through the transfer learning tutorial, and I noticed this
_ , prediction = torch.max(outputs.data , 1)
So I wondered what the first return value was.
I ran the following code; can someone help me understand the output difference?
a = Variable(tt.Tensor([[1, 2, 0.01],
                        [5, 0.1, 2],
                        [4, 5, 6]]))
In [16]: a
Out[16]:
Variable containing:
1.0000 2.0000 0.0100
5.0000 0.1000 2.0000
4.0000 5.0000 6.0000
[torch.FloatTensor of size 3x3]
p = torch.max(a.data , 1)
print(p)
(
2
5
6
[torch.FloatTensor of size 3x1]
,
1
0
2
[torch.LongTensor of size 3x1]
)
I understood the first output, but cannot understand the second result (LongTensor).
|
st118012
|
I have non-sudo access to a machine with NVIDIA GPUs and CUDA 7.5 installed. I installed PyTorch with CUDA 7.5 support, which seems to have worked:
>>> import torch
>>> torch.cuda.is_available()
True
To get some practice, I followed the tutorial for machine translation using RNNs. When I set USE_CUDA = False and the CPUs are used, everything works quite alright. However, when I want to utilize the GPUs with USE_CUDA = True, I get the following error:
Traceback (most recent call last):
...
File "seq2seq.py", line 229, in train
encoder_output, encoder_hidden = encoder(input_variable[ei], encoder_hidden)
File "/.../python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "seq2seq.py", line 144, in forward
output, hidden = self.gru(embedded, hidden)
File "/.../python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/.../python2.7/site-packages/torch/nn/modules/rnn.py", line 91, in forward
output, hidden = func(input, self.all_weights, hx)
...
File "/.../python2.7/site-packages/torch/backends/cudnn/rnn.py", line 42, in init_rnn_descriptor
cudnn.DropoutDescriptor(handle, dropout_p, fn.dropout_seed)
File "/usr/lib/python2.7/ctypes/__init__.py", line 383, in __getitem__
func = self._FuncPtr((name_or_ordinal, self))
AttributeError: python: undefined symbol: cudnnCreateDropoutDescriptor
Exception AttributeError: 'python: undefined symbol: cudnnDestroyDropoutDescriptor' in <bound method DropoutDescriptor.__del__ of <torch.backends.cudnn.DropoutDescriptor object at 0x7fe540efec10>> ignored
I’ve tried to use Google to search for that error but got no meaningful results. Since I’m rather a newbie with PyTorch and CUDA, I have no idea how to go on from here. The full setup is Ubuntu 14.04, Python 2.7, CUDA 7.5.
|
st118013
|
if possible, try to start your python this way:
unset LD_LIBRARY_PATH
unset LD_PRELOAD
python your_program.py
I suspect that the cudnn installed on your machine is not a correct version (maybe 6 RC?) and it is wrongly being loaded into the process instead of the one shipped with PyTorch.
|
st118014
|
I want to implement a RNN-based translation model, and the size of vocabulary is more than 100k. The model training takes too much time probably due to the computation of softmax at the output layer.
I found noise contrastive estimation (NCE) should be a good solution (Mnih, A., & Teh, Y. W. (2012). A fast and simple algorithm for training neural probabilistic language models. arXiv preprint arXiv:1206.6426.), but PyTorch hasn’t provided the NCE loss function yet. Is there a way to address it?
|
st118015
|
Thank you for sharing that thread. There doesn’t seem to be a good solution at this moment…
|
st118016
|
I wonder what the preferred way of dealing with variable-size inputs is. Say, I’d like to train on batches of different sizes with different spatial dimensions (height and width).
Thanks!
|
st118017
|
How can I control the init method of a layer like a conv layer or an fc layer?
How can I init a layer’s weights in a specific way?
|
st118018
|
u can initialize layer this way
import scipy.stats as stats
stddev = 0.1
X = stats.truncnorm(-2, 2, scale=stddev)
values = torch.Tensor(X.rvs(layer.weight.data.numel()))
layer.weight.data.copy_(values)
or you can just simply fill it with 0
layer.weight.data.fill_(0)
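If you want to apply a specific init to every conv/linear layer of a model, a common sketch is a function passed to model.apply (model and the chosen distribution are placeholders here):
import torch.nn as nn

def weights_init(m):
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        m.weight.data.normal_(0.0, 0.02)   # example: small gaussian init
        if m.bias is not None:
            m.bias.data.fill_(0)

model.apply(weights_init)  # applies weights_init to every submodule of your network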
|
st118019
|
Assuming I have self.dropout1 = nn.Dropout() layer defined in the init,
is it OK to anneal the dropout rate in the training loop by modifying it like self.model.dropout1.p = 0.9 , 0.8 … 0.0?
Or is there a better way?
|
st118020
|
It should work. However, a cleaner way, I would say, is to use torch.nn.functional.dropout http://pytorch.org/docs/nn.html?highlight=dropout#torch.nn.functional.dropout
like
x = F.dropout(x, p, self.training)
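For example, a rough sketch of annealing the rate through the functional interface (the layer sizes and the schedule are made up; drop_p is updated in the training loop):
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 10)

    def forward(self, x, drop_p):
        x = F.relu(self.fc1(x))
        x = F.dropout(x, p=drop_p, training=self.training)  # rate passed in, no stored Dropout layer needed
        return self.fc2(x)

# in the training loop, e.g.: drop_p = max(0.0, 0.9 - 0.1 * epoch)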
|
st118021
|
If I use the functional version, that layer is not included when I print out my model with print(self.model). Is there a way to circumvent that? I find it quite useful to print out a model as a final check…
|
st118022
|
I see. I guess if you want to print it out, your proposed method is the best solution.
|
st118023
|
Hi,
I work on mac os Sierra.
I’ve managed to install pytorch with cuda from source.
It works when I use python through terminal.
However, When I launch spyder with the terminal, it crashes.
I have the following message in the terminal:
/Users/Clement/anaconda/bin/pythonw: line 3: 92336 Illegal instruction: 4 /Users/Clement/anaconda/python.app/Contents/MacOS/python “$@”
I can’t manage to solve this issue, could you please help me?
Thanks
|
st118024
|
Hi guys, I implemented Decoupled Neural Interfaces using Synthetic Gradients in pytorch. The paper uses synthetic gradients to decouple the layers of the network, which is pretty interesting since we won’t suffer from the update lock anymore. I tested my model on mnist and almost achieved the results the paper claimed.
GitHub
andrewliao11/dni.pytorch 74
Implement Decoupled Neural Interfaces using Synthetic Gradients in Pytorch - andrewliao11/dni.pytorch
reference:
Decoupled Neural Interfaces using Synthetic Gradients 3
Understanding Synthetic Gradients and Decoupled Neural Interfaces 8
|
st118025
|
My system is osx. When I use torch.unsqueeze or torch.linspace and the like, it returns the error: can’t find reference ‘unsqueeze’ in ‘__init__.py’. What’s the problem? Is my pytorch not installed correctly? Or do I need to install any other module?
|
st118026
|
maybe you have a folder called torch in your current directory, and its __init__.py is being used
|
st118027
|
Thanks. I have fixed it. I think my pytorch wasn’t installed correctly by pip. I used conda to install it again, and that fixed it.
|
st118028
|
Having the same issue with pip, Pycharm also can’t resolve the functions when installing with pip on OSX.
Don’t really want to change everything to Anaconda though…
|
st118029
|
You can load Anaconda into Pycharm. Just google how to do it; it’s a little too complex to explain in a few words.
|
st118030
|
When I use the torch.ge() function in my network, I get the following error:
‘Variable’ object is not callable
Code:
def forward(self, x):
    y = x.sum(dim = 1)
    yy = y.view(y.size(0), -1)
    avg = yy.mean(dim = 1)
    mask = torch.ge(y, avg(0))
where x is an N*C*H*W tensor.
|
st118031
|
This is because you are trying to call avg by passing in a value of 0. I think you wanted to do an access (i.e. avg[0]) on it instead? Even so, I don’t think the tensors y and avg[0] would match here…?
|
st118032
|
It only accepts a float for that argument, so I corrected it to
mask = torch.ge(y, float(avg.data.numpy()[0]))
Then it works well.
|
st118033
|
I have a project in which I am taking the output of one network that I am training, and feeding it into a pretrained network in evaluate mode, and optimizing the output of the second network by training the first network. It’s amazing that PyTorch’s autograd allows me to do this; it really is a flexible platform. One of the problems I am having, however, is that the output of the first network is a 1x10 tensor like [1,2,3,4,5,6,7,8,9,10]. The input I would like to feed into the second network is a 1x10x13x26 tensor where there are 10 13x26 “images”, each filled with one of the values above. Of course I could use a for loop to iterate through the 1x10 tensor and create a new tensor input based upon this, but this wouldn’t allow autograd to function. How can I expand the output from my first network to match the required input of the second network while maintaining autograd and without changing the architecture of either network?
|
st118034
|
This code is modified from the DQN tutorial (which runs on my machine). However, this code does not. It core dumps in the backward method. Am I doing something dumb?
Thank you, in advance, for any assistance.
class DQN(nn.Module):
    def __init__(self):
        super(DQN, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=5, stride=2)
        self.bn1 = nn.BatchNorm2d(16)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2)
        self.bn2 = nn.BatchNorm2d(32)
        self.conv3 = nn.Conv2d(32, 32, kernel_size=5, stride=2)
        self.bn3 = nn.BatchNorm2d(32)
        self.head = nn.Linear(192, 3)

    def forward(self, x):
        x = F.relu(self.bn1(self.conv1(x)))
        x = F.relu(self.bn2(self.conv2(x)))
        x = F.relu(self.bn3(self.conv3(x)))
        return self.head(x.view(x.size(0), -1))
model = DQN()
optimizer = optim.RMSprop(model.parameters())
action_batch = Variable(LongTensor([[0],[1],[2],[0],[1],[2],[0],[1],[2],[0]]))
next_states = Variable(torch.rand(10,3,26,70),volatile=True)
state_batch = Variable(torch.rand(10,3,26,70))
state_action_values = model(state_batch)
next_state_values = model(next_states)
next_state_values.volatile = False
expected_state_action_values = (next_state_values * 0.999) + 1.0
loss = F.smooth_l1_loss(state_action_values, expected_state_action_values)
optimizer.zero_grad(); model.zero_grad();
loss.backward()
|
st118035
|
I removed one of the conv layers and it works. I assume my core dump must have something to do with some of the reported bugs.
class DQN(nn.Module):
    def __init__(self):
        super(DQN, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=5, stride=2)
        self.bn1 = nn.BatchNorm2d(16)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2)
        self.bn2 = nn.BatchNorm2d(32)
        self.head = nn.Linear(1920, 3)

    def forward(self, x):
        x = F.relu(self.bn1(self.conv1(x)))
        x = F.relu(self.bn2(self.conv2(x)))
        return self.head(x.view(x.size(0), -1))
model = DQN()
optimizer = optim.RMSprop(model.parameters())
action_batch = Variable(LongTensor([[0],[1],[2],[0],[1],[2],[0],[1],[2],[0]]))
next_states = Variable(torch.rand(10,3,26,70),volatile=True)
state_batch = Variable(torch.rand(10,3,26,70))
state_action_values = model(state_batch)
next_state_values = model(next_states)
next_state_values.volatile = False
expected_state_action_values = (next_state_values * 0.999) + 1.0
loss = F.smooth_l1_loss(state_action_values, expected_state_action_values)
optimizer.zero_grad(); model.zero_grad();
loss.backward()
|
st118036
|
Where does the coredump stop? Can you look at it in gdb?
Also, are you on pytorch 0.1.12? Can you see if master fixes it? We fixed some rare segfaults recently.
|
st118037
|
When I run my lstm code on the GPU, the GPU utilization is always a few percent, like 2%, but the dedicated memory used did increase by about 1G.
The question is: was I really using my GPU to accelerate my training process?
I used caffe before, and caffe has a high GPU utilization rate. What’s the difference between torch and caffe when using the GPU?
|
st118038
|
you are using the gpu.
maybe you are either bottlenecked by data loading, or your lstm is so small that launching the gpu compute is slower than actually computing on the gpu.
|
st118039
|
I’m using a modified version of the CIFAR10 tutorial to classify some of my own images. I added more convolutional layers because I’m trying to classify higher resolution images (512x384 pixels) and I’m trying to use my GPU to accelerate the training, but it doesn’t seem to be using it. I run nvidia-smi while the network is training and it shows the GPU barely being used at all. The results seem fine, but it just seems to be using just the CPU. I think I’m calling the .cuda() method on all the right objects and I think there should be plenty of work to keep the GPU busy. Here are the relevant portions of the code. Any help would be much appreciated. If it matters, I’m using Ubuntu 16.04 with a GTX 1070 and NVIDIA driver 375.39. Thanks!
#Define a Convolution Neural Network
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.verbose = False
        self.pool = nn.MaxPool2d(2, 2)
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.conv2 = nn.Conv2d(6, 10, 5)
        self.conv3 = nn.Conv2d(10, 16, 5)
        self.conv4 = nn.Conv2d(16, 16, 5)
        self.conv5 = nn.Conv2d(16, 18, 5)
        self.conv6 = nn.Conv2d(18, 21, 5)
        self.fc1 = nn.Linear(168, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 7)
        self.cuda() #Convert this module to CUDA

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = self.pool(F.relu(self.conv3(x)))
        x = self.pool(F.relu(self.conv4(x)))
        x = self.pool(F.relu(self.conv5(x)))
        x = self.pool(F.relu(self.conv6(x)))
        x = x.view(-1, 168)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
net = Net()
…
#Load data
trainset = ImageDataset.ImageDataset()
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batchSize, shuffle=True, num_workers=2)
testset = ImageDataset.ImageDataset(train=False)
testloader = torch.utils.data.DataLoader(testset, batch_size=3, shuffle=True, num_workers=2)
########################################################################
# 3. Define a Loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
########################################################################
# 4. Train the network
for i, data in enumerate(trainloader, 0):
    # get the inputs
    inputs, labels = data
    # wrap them in Variable
    inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda())
    # zero the parameter gradients
    optimizer.zero_grad()
    # forward + backward + optimize
    outputs = net(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
|
st118040
|
because you have clearly called inputs.cuda() and labels.cuda() it is definitely using the GPU.
I suspect you are bottlenecked on data loading. Try increasing the num_workers in your DataLoader and see if that helps.
|
st118041
|
Increasing the num_workers from 2 to 15 definitely sped up the epoch time; it’s about half what it was. You’re right that it’s definitely using the GPU. I meant that the GPU load was lower than I expected. When I run nvidia-smi, I see python listed as one of the processes using the GPU. With num_workers=15, I run htop and see that the CPU is maxed out. I have a Core i7-7600K and all 8 cores are maxed out, so maybe the GPU just has a much higher capacity than I thought. I ran this experiment and saw that the CPU took 17.2 sec and the GPU took 2.9 sec, but it took some pretty huge matrices to make the GPU beat the CPU.
import torch
import time
dim = [9513, 22120]
startTime = time.time()
a = torch.Tensor(dim[0], dim[1])
b = torch.Tensor(dim[1], dim[0])
c = a.mm(b)
endTime = time.time()
print("CPU Runtime: %.5f [sec]"%(endTime-startTime))
startTime = time.time()
ac = a.cuda()
bc = b.cuda()
cc = ac.mm(bc)
endTime = time.time()
print("GPU Runtime: %.5f [sec]"%(endTime-startTime))
|
st118042
|
Hi, I met a problem when adding a new NN Module. There are two LSTMs in the new module and each LSTM needs an initial hidden state h_0.
init_hidden = (Variable(torch.zeros(1, inputs.size(1), self.hiddenchannel)),
Variable(torch.zeros(1, inputs.size(1), self.hiddenchannel)))
It worked when I ran outputs = model(inputs) in CPU mode. However, if I run model = model.cuda(); outputs = model(inputs), then an error occurs.
I modified h_0 as:
init_hidden = (Variable(torch.zeros(1, inputs.size(1), self.hiddenchannel).cuda()),
Variable(torch.zeros(1, inputs.size(1), self.hiddenchannel).cuda()))
and run model = model.cuda(); outputs = model(inputs), and it works correctly.
So how should I init h_0 so that it works in both CPU and GPU mode, i.e. when running outputs = model(inputs) and model = model.cuda(); outputs = model(inputs)?
Thanks
|
st118043
|
@smth I would appreciate it if you could give some help on this problem. Thank you.
|
st118044
|
Hi,
Depending on where you create your hidden state, you have a few possibilities.
If you have access to the current input, you can do input_tensor.new(1, inputs.size(1), self.hiddenchannel).zero_().
If you don’t, you can use .type_as(input_tensor) at the beginning of your forward function.
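A rough sketch of the first option inside a forward method (the lstm attribute name is a placeholder; hiddenchannel comes from the question):
def forward(self, inputs):
    h0 = inputs.data.new(1, inputs.size(1), self.hiddenchannel).zero_()
    init_hidden = (Variable(h0), Variable(h0.clone()))  # same tensor type/device as inputs, CPU or CUDA
    outputs, _ = self.lstm(inputs, init_hidden)
    return outputs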
|
st118045
|
I am using the nn.Batchnorm layer on an autoencoder
self.enc1 = nn.Linear(28 * 28, 1000)
self.enc1bn = nn.BatchNorm1d(1000)
self.enc2 = nn.Linear(1000, 1000)
self.enc2bn = nn.BatchNorm1d(1000)
self.enc3 = nn.Linear(1000, 1000)
self.enc3bn = nn.BatchNorm1d(1000)
self.bottleneck = nn.Linear(1000, 1000)
self.dec1 = nn.Linear(1000, 1000)
self.dec1bn = nn.BatchNorm1d(1000)
self.dec2 = nn.Linear(1000, 1000)
self.dec2bn = nn.BatchNorm1d(1000)
self.dec3 = nn.Linear(1000, 1000)
self.dec3bn = nn.BatchNorm1d(1000)
self.dae_out = nn.Linear(1000, 28 * 28)
This seems to affect the training and gives a poorer reconstruction of the input, compared to when I am not using batchnorm. My layer structure is: Linear -> BatchNorm -> ReLU.
Any ideas why this is ?
|
st118046
|
It’s hard to tell why, but for generative models BatchNorm can have weird side effects.
|
st118047
|
I need to calculate a loss which is composed of contributions from different parts of a net, but the different parts need different weights in the total loss. How should I do this?
I wonder if the form
loss+=math.exp(-1*float(epoch+1)/args.epochs)*loss_fn(model_output[i],label)
is correct.
Can the gradients flow correctly through the net in this way?
Thanks for your advice!
|
st118048
|
I am new to pytorch and I want to show the structure of a CNN built in pytorch; is there any tool? Thanks!
|
st118049
|
if you want to visualize the graph, there isn’t currently a way in 0.1.12 to do this. We are exposing APIs for this in 0.2.
|
st118050
|
As the topic says, I don’t understand how to decide the num_features from the doc; is it a number which can be picked arbitrarily?
Thank you.
|
st118051
|
it is the number of channels of your input to InstanceNorm2d.
If your input will be Tensor of size: 2 x 3 x 4 x 5 then num_features should be 3
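For example (a tiny sketch):
import torch
import torch.nn as nn
from torch.autograd import Variable

m = nn.InstanceNorm2d(3)                   # num_features == number of input channels
x = Variable(torch.randn(2, 3, 4, 4))
out = m(x)                                 # output keeps the 2 x 3 x 4 x 4 shape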
|
st118052
|
Since pytorch 0.1.11, I have had a strange error using DataParallel.
I have some code which has no error using a single GPU.
However, using multiple GPUs, I get an ‘out of memory’ error even with the same batch size.
(Note that this code works without any error on pytorch 0.1.10.)
Does anyone experience a similar problem since pytorch 0.1.11?
Now, I am trying to write a small snippet reproducing this problem.
|
st118053
|
We’ve run many programs fine with v0.1.11.
I’m happy to help investigate further if you get me a snippet.
|
st118054
|
In the transfer learning example, I noticed that the input data has to be of the shape [224, 224, 3]. What if my input data is a smaller grayscale image? Do I have to rescale and convert it to RGB?
|
st118055
|
yes, you have to upscale it and convert it to RGB to pass it through the default pre-trained resnet.
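One rough way to do both in the data pipeline (a sketch; Scale was later renamed Resize in torchvision, and the normalization values are the usual ImageNet statistics):
import torchvision.transforms as transforms

transform = transforms.Compose([
    transforms.Lambda(lambda img: img.convert('RGB')),   # grayscale PIL image -> 3 channels
    transforms.Scale(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])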
|
st118056
|
Hi Pytorch
I have a module that returns a dict of outputs (I like to keep them labeled so I can disregard some of them during testing). This apparently breaks everything, including __call__ (I call forward directly instead; I understand this is bad). It also breaks data parallelism. I was wondering if there were best practices for such a thing (where I want to return several tensors, each with their own loss, and keep them organized, etc). Would hooks be applicable here, for example?
Thanks
Matt
|
st118057
|
It’s easiest for now to return a tuple; if you need labels maybe use a namedtuple?
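A minimal sketch of the namedtuple idea (the head names and sizes are made up):
from collections import namedtuple
import torch.nn as nn

NetOutput = namedtuple('NetOutput', ['main', 'aux'])

class MyNet(nn.Module):
    def __init__(self):
        super(MyNet, self).__init__()
        self.main_head = nn.Linear(16, 4)   # hypothetical heads
        self.aux_head = nn.Linear(16, 2)

    def forward(self, x):
        return NetOutput(main=self.main_head(x), aux=self.aux_head(x))  # a tuple subclass, so positional unpacking still works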
|
st118058
|
I have a model which requires storing variables in an array and updating it with deeply-nested loops where Python performance isn’t great. I wanted to rewrite this in Cython (which docs say should be possible), but I haven’t been able to find a tutorial or example code for doing this. Does anyone have any pointers on where I can find one?
|
st118059
|
I noticed that some Module classes wrap functions from torch.nn.functional again. For example, there is a Dropout in torch.nn.functional which is imported from torch/nn/_functions/dropout.py, and there is a Dropout in torch.nn which is imported from torch/nn/modules/dropout.py too.
I read the Neural Networks tutorial; it uses only some functions from torch.nn.functional when defining a Module, so I do not know what the need for these modules is.
|
st118060
|
Solved by trypag in post #2
This question was already answered a number of times.
|
st118061
|
This question was already answered a number of times.
Difference of methods between torch.nn and functional
Both torch.nn and functional have methods such as Conv2d, Max Pooling, ReLU etc. However, many public codes writes Conv and Linear layer in a class __init__ and call it with ReLU and Pooling in forward(). Is there a good reason for that ?
I am guessing that because Conv and Linear consist of learnable parameters which wrapped within functional module. And then define them in __init__ as members for the class. For ReLU, Pooling which do not require learnable parameters just to be called in forwa…
What's the difference between `torch.nn.functional` and `torch.nn`?
It seems that there are quite a few similar function in these two modules.
Take activation function (or loss function) as an example, for me the only difference is we need to instantiate the one in torch.nn but not for torch.nn.functional.
What I want to know is if there were any other further difference, say, the efficiency?
How to choose between torch.nn.Functional and torch.nn module?
In PyTorch you define your Models as subclasses of torch.nn.Module.
In the __init__ function, you are supposed to initialize the layers you want to use. Unlike keras, Pytorch goes more low level and you have to specify the sizes of your network so that everything matches.
In the forward method, you specify the connections of your layers. This means that you will use the layers you already initialized, in order to re-use the same layer for each forward pass of data you make.
torch.nn.Functiona…
|
st118062
|
I am trying to implement a paper that uses a 1/2-stride CONV layer for in-network upsampling. However, when I enter 0.5 as the value for stride in nn.Conv2d, it (obviously) throws an error. Any suggestions on this? This is a really popular paper, so I’m sure it has been done in PyTorch before!
Link to paper’s network architecture: http://cs.stanford.edu/people/jcjohns/papers/eccv16/JohnsonECCV16Supplementary.pdf
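The “stride 1/2” layers in such architectures are usually realized as transposed convolutions, e.g. (a sketch; the channel sizes are made up):
import torch.nn as nn

# doubles the spatial resolution, the usual stand-in for a stride-1/2 convolution
up = nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1)
# an alternative: nn.UpsamplingNearest2d(scale_factor=2) followed by a regular nn.Conv2d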
|
st118063
|
I’m in Justin Johnson’s Stanford course, and he happened to talk about that today. Thanks!
|
st118064
|
I want to concatenate two tensors, but I get the following error:
‘list’ object has no attribute ‘cat’
My code:
def forward(self, x):
    n, c, h, w = x.size()
    feat = x[0, :, 0, 0]
    this_img_conv = torch.zeros(c).cuda()
    this_img_conv = [Variable(this_img_conv)]
    this_img_conv = this_img_conv.cat(feat, 0)
I want to save the specific vector of x to the new tensor of this_img_conv. How can I achieve this function?
|
st118065
|
As for the third line of your forward method,
which is this_img_conv = [Variable(this_img_conv)],
you should try this_img_conv = Variable(this_img_conv) without square brackets,
to make this_img_conv a Variable , not a list.
BTW, for concatenation, you can see torch.cat
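Putting that together, a rough sketch of the snippet (dimension names taken from the question):
def forward(self, x):
    n, c, h, w = x.size()
    feat = x[0, :, 0, 0]                           # a length-c Variable
    zeros = Variable(torch.zeros(c).cuda())
    this_img_conv = torch.cat([zeros, feat], 0)    # concatenate along dim 0 -> length 2c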
|
st118066
|
Hi all. Working on my first PyTorch project!
I am embedding an input vector and feeding it and an initial hidden state into the GRU cell. Here is the code:
class Encoder(nn.Module):
    def __init__(self, input_size, embedding_size=500, hidden_size=1000):
        super(Encoder, self).__init__()
        self.hidden_size = hidden_size
        self.embedding = nn.Embedding(input_size, embedding_size)
        self.gru = nn.GRU(embedding_size, hidden_size, 1)

    def forward(self, input, hidden):
        embedded = self.embedding(input).view(1, 1, -1)
        output, hidden_state = self.gru(embedded, hidden)
        return output, hidden_state
Unfortunately, the code breaks at the second line of forward. I am able to embed the input just fine! The size after the embedding is (1, 1, 500). However, I get this error when GRU tries to run:
TypeError: addmm_ received an invalid combination of arguments - got (int, int, torch.LongTensor, torch.FloatTensor), but expected one of:
This is followed by a long list of acceptable inputs. However, I never touch the input that is in question (the input which is torch.FloatTensor, which should be torch.LongTensor); I only pass GRU the first of the two Tensors, which has correct type! Any idea on what may be going wrong?
|
st118067
|
As the question states, I have loaded the pretrained Resnet101 (model = models.resnet50(pretrained=True)) model in pytorch and would like to know how to selectively modify the weights of layers and test the model.
Let’s say for simplicity that there are only 5 bottlenecks b1, b2, b3, b4, b5 in the model, followed by one FC layer fc1. I would like to keep the weights for the layers in b1 (first bottleneck) while setting the weights of every layer in the following bottlenecks to 0, so I can see how it performs just using the b1 weights.
Here is a good visualization of the ResNet architecture: Resnet50
And here is what b1 would look like starting at the pool1 layer all the way up to res2a:
[screenshot: the b1 block, from the pool1 layer up to res2a]
Thank you!
|