st119068
|
So I have a few questions about implementing custom functions. We need to do something different from gradient descent, so we try to implement something within the forward/backward mode of PyTorch, but calculating more than gradients. However, currently on a very simple example, PyTorch just crashes with:
Traceback (most recent call last):
File "/home/alex/work/python/nn-second-order/bin/test.py", line 94, in <module>
loss.backward()
File "/usr/local/lib/python3.4/dist-packages/torch/autograd/variable.py", line 145, in backward
self._execution_engine.run_backward((self,), (gradient,), retain_variables)
RuntimeError: could not compute gradients for some functions
The code we are using looks like this:
import torch
from torch.autograd import Variable
torch.nn.Linear

class Gn_SquareLoss(torch.autograd.Function):
    def forward(self, x, y):
        self.save_for_backward(x, y)
        diff = (x - y)
        return diff.pow(2)

    def backward(self, grad_output):
        # N x D
        x, y = self.saved_tensors
        diff = (x - y) * 2
        # N x D
        grad_input = grad_output.clone()
        # N x D
        g = [grad_input * diff, grad_input * diff]
        d = g[0].size()[1]
        # D x D
        m = [torch.eye(d, d) * 2, torch.eye(d, d) * 3]
        # (N+D) x D
        s1 = torch.cat((m[0], g[0]), 0)
        # (N+D) x D
        s2 = torch.cat((m[0], g[0]), 0)
        return s1, s2

class Gn_Dot(torch.autograd.Function):
    def forward(self, x, w):
        self.save_for_backward(x, w)
        return torch.mm(x, w)

    def backward(self, grad_output):
        # N x H, H x D
        x, w = self.saved_tensors
        # (N + D) x D
        grad_input, = grad_output.clone()
        d = grad_input.size()[1]
        # D x D
        m = grad_input[:d]
        # N x D
        g = grad_input[d:]
        # N x H
        dx = torch.mm(g, torch.transpose(w, 0, 1))
        # H x D
        dw = torch.mm(torch.transpose(x, 0, 1), g)
        # D x H
        m = torch.mm(m, torch.transpose(w, 0, 1))
        # (D + N) x H
        dx = torch.cat((m, dx), 0)
        return dx, dw

class Gn_Tanh(torch.autograd.Function):
    def forward(self, x):
        self.save_for_backward(x)
        return torch.tanh(x)

    def backward(self, grad_output):
        x, = self.saved_tensors
        grad_input = grad_output.clone()
        d = grad_input.size()[1]
        m = grad_input[:d]
        g = grad_input[d:]
        dx = torch.cat((m, g * (1 - torch.tanh(x).pow(2))), dimension=0)
        return dx

dtype = torch.FloatTensor
# dtype = torch.cuda.FloatTensor # Uncomment this to run on GPU

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold input and outputs, and wrap them in Variables.
x = Variable(torch.randn(N, D_in).type(dtype), requires_grad=False)
y = Variable(torch.randn(N, D_out).type(dtype), requires_grad=False)

# Create random Tensors for weights, and wrap them in Variables.
w1 = Variable(torch.randn(D_in, H).type(dtype), requires_grad=True)
w2 = Variable(torch.randn(H, D_out).type(dtype), requires_grad=True)

affine = Gn_Dot()
tanh = Gn_Tanh()
square_loss = Gn_SquareLoss()

learning_rate = 1e-6
for t in range(500):
    # Forward pass: compute predicted y using operations on Variables; we compute
    # ReLU using our custom autograd operation.
    y1 = affine(x, w1)
    h1 = tanh(y1)
    y2 = affine(h1, w2)
    loss1 = (y - y2).pow(2).sum()
    loss = square_loss(y2, y).sum()
    print(t, loss.data[0], loss1.data[0])
    # Manually zero the gradients before running the backward pass
    w1.grad.data.zero_()
    w2.grad.data.zero_()
    # Use autograd to compute the backward pass.
    loss.backward()
    # Update weights using gradient descent
    w1.data -= learning_rate * w1.grad.data
    w2.data -= learning_rate * w2.grad.data
Note that in order to satisfy PyTorch’s requirement that the number of gradients output equals the number of inputs, we concatenate the extra thing we calculate (for testing purposes this is an identity matrix) onto the gradient, and then in each layer we peel the pieces apart again.
Now the main issue is that this should work, yet the error PyTorch gives us is a mystery. My guess is that something is happening in the C API, but I do not know what.
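(For reference, a minimal sketch of a custom Function whose backward returns exactly one gradient per forward input, each shaped like that input, looks like the following; it uses the same old-style Function API as above, and the class name is made up. That shape contract is what autograd checks.)
import torch
from torch.autograd import Variable

class PlainSquareLoss(torch.autograd.Function):
    def forward(self, x, y):
        self.save_for_backward(x, y)
        return (x - y).pow(2)

    def backward(self, grad_output):
        x, y = self.saved_tensors
        # two inputs to forward -> return two gradients,
        # each shaped exactly like the corresponding input
        grad_x = grad_output * 2 * (x - y)
        grad_y = grad_output * (-2) * (x - y)
        return grad_x, grad_y

x = Variable(torch.randn(4, 3), requires_grad=True)
y = Variable(torch.randn(4, 3), requires_grad=True)
loss = PlainSquareLoss()(x, y).sum()
loss.backward()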
|
st119069
|
There are plenty of small bugs in your code. If you replace the functions by
affine = torch.mm
tanh = torch.tanh
square_loss = torch.nn.MSELoss()
and then swap in your functions one at a time, you will get better error messages.
A few points by the way in Gn_Tanh:
in your case, most of the time you want to do def backward(self, grad_output) instead of def backward(self, *grad_output)
you should unpack the saved tensors (it is a tuple, not a tensor), so x, = self.saved_tensors
you take d = grad_input.size()[1], but then use it to index the 0th dimension. Is that right?
It’s interesting though that the error messages get lost when everything is put together.
|
st119070
|
Thanks for the fast reply, and I’m always happy to learn!
I cannot swap the functions in one by one, because what is returned as a “gradient” is the concatenation of the gradient and the identity. Thus if I plug in only one of my Functions, the graph will break since the shapes will not match.
Thanks, I’ve now changed everything to backward(self, grad_output) and unpack the saved tensors from tuples (updated the code in the post as well).
I should have mentioned that the error occurs immediately after the return statement of Gn_SquareLoss.backward, so none of the other functions’ backward methods get called.
To your question: the standard gradient of the loss is N x D; I append an identity along the 0th axis and thus return (N+D) x D. Then in the backward of each of the other functions I unpack that into a D x D and an N x D tensor via indexing, as you said.
Hope this makes it clear.
|
st119071
|
Hi all,
I have got a batch of features (N features). Now I want to pass messages among the different features.
I found that if I manipulate the features along the batch dimension, an error is raised on backward(). The code looks like this:
import torch
from torch.autograd import Variable
import numpy as np
import torch.nn as nn

a = Variable(torch.Tensor(3, 5).normal_(), requires_grad=True)
b = Variable(torch.Tensor(3, 5).normal_(), requires_grad=True)
for i in range(b.size()[0]):
    for j in range(a.size()[0]):
        b[i] = b[i] + a[j]
b.backward(torch.Tensor(5, 3).normal_())
When running, I got “inconsistent tensor size at /data/users/soumith/miniconda2/conda-bld/pytorch-0.1.7_1485444530918/work/torch/lib/TH/generic/THTensorCopy.c:40”.
Is it possible to divide the original Variable into several parts and then process them?
Best,
Yikang
|
st119072
|
Isn’t it the same as if you just summed up a along that dimension and then added it to each element of b?
import torch
from torch.autograd import Variable
import numpy as np
import torch.nn as nn
a = Variable(torch.Tensor(3, 5).normal_(), requires_grad=True)
b = Variable(torch.Tensor(3, 5).normal_(), requires_grad=True)
a_summed = a.sum(0) # sum along first dimension
result = b + a_summed.expand_as(b)
result.backward(torch.randn(5, 3))
|
st119073
|
Yes, that’s the same. However, the backward() leads to an error. Could you tell me how to do that?
|
st119074
|
No, my solution will work. Yours will raise an error, because you’re assigning things to b. Assignments are in-place operations, and they will fail if performed on leaf Variables.
|
st119075
|
Thank you for your timely reply.
I apologize for misleading you with the example.
The code is just an example; what I want to do is pass messages along the batch dimension. Actually, the links are dynamic during processing.
So what should I do if I want to manipulate the features along the batch dimension?
In addition, what is the meaning of leaf Variables?
Best,
Yikang
|
st119076
|
I’m sorry but I don’t know what passing a message along a dimension is. I’d recommend you look for out-of-place functions that can do this, but I can’t recommend any, as I don’t know what you are trying to achieve.
Leaf Variables are Variables that are not results of any computation, so:
x = Variable(torch.randn(5, 5)) # this is a leaf
y = x + 2 # y is not a leaf
|
st119077
|
I have got it! The problem is fixed. Just adding an operation is enough.
import torch
from torch.autograd import Variable
import numpy as np
import torch.nn as nn

x = Variable(torch.Tensor(3, 5).normal_(), requires_grad=True)
y = Variable(torch.Tensor(3, 5).normal_(), requires_grad=True)
a = x + 0
b = y + 0
for i in range(b.size()[0]):
    for j in range(a.size()[0]):
        b[i] = b[i] + a[j]
b.backward(torch.Tensor(3, 5).normal_())
What I want to know is whether PyTorch’s autograd supports doing operations on part of a Variable; that’s what I need in my future project. Actually, it does.
Thank you very much for your help and patience.
Best,
Yikang
|
st119078
|
In most cases (like in your example), you can avoid these assignments, and I’d recommend doing that. It will likely result in better performance. Nevertheless, we do support them, so if there’s no other way, you can still do that.
|
st119079
|
Got it. Thank you very much. I really appreciate your works for PyTorch and the CV community!
Best,
Yikang
|
st119080
|
I have been using PyTorch for a couple of weeks now, working mostly with complex RNN models. In such situations, the natural way of accumulating state for me till now has been to create a Tensor and keep filling it with new state at each time step. But I haven’t been able to write code like that in PyTorch, since in-place ops on leaf nodes are not supported. I have had to use lists, and the manual concatenation and indexing are messy.
I can use the ugly hack shown above, wherein a dummy operation is performed to make a node non-leaf. But it would be great to have some implicit adjustment that allows for in-place ops on leaves.
If you think this would be a useful feature to have (or conversely, if it’s not something you are intentionally against), and if you can give some pointers as to how to make this happen, I can work on it and submit a PR.
I hope I am not missing something trivial here.
|
st119081
|
the only reason for us to not support inplace operations on leaves is for correctness.
We shall re-review how complex it would get to support some inplace operations on leaves.
Thanks a lot for the feedback.
|
st119082
|
You can fill in a leaf Variable. Just don’t declare it as requiring grad - you don’t need grad w.r.t. the original content that will get overwritten.
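A minimal sketch of that pattern (the RNN cell and sizes below are hypothetical placeholders): the accumulation buffer is a plain leaf Variable that does not require grad, and the hidden states written into it still carry gradients back to the cell’s parameters.
import torch
import torch.nn as nn
from torch.autograd import Variable

T, N, I, H = 5, 4, 3, 6                  # hypothetical sequence, batch, input, hidden sizes
cell = nn.RNNCell(I, H)
x = Variable(torch.randn(T, N, I))
h = Variable(torch.zeros(N, H))

states = Variable(torch.zeros(T, N, H))  # leaf, requires_grad=False by default
for t in range(T):
    h = cell(x[t], h)
    states[t] = h                        # in-place fill of a leaf that does not require grad

loss = states.sum()
loss.backward()                          # gradients still reach cell.parameters()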
|
st119083
|
I have explored torch.__all__, torch.__dir__ or dir(torch) and torch.__builtins__. I got some feel for the former two, but no idea about the last one.
What torch.__all__, torch.__dir__ or dir(torch) do:
Please correct me if I am wrong:
torch.__all__, torch.__dir__ or dir(torch) are lists of names of attributes, functions and modules defined under torch;
we can use torch.<a-name-from-the-list> to access and use them
the list torch.__all__ has 156 items, while the list torch.__dir__ or dir(torch) has 199 items
the latter list contains the former
However, I have no idea what the dictionary in torch.__builtins__ is for:
there are 151 key:value pairs in the dict
only 12 keys are shared with torch.__dir__ or dir(torch)
we can’t use torch.<name> to access the keys or values of the dict
So, what are the elements of torch.__builtins__ for? How do we use them if needed?
Thanks a lot! @Veril @apaszke
|
st119084
|
Thanks @smth for your quick response!
Given they are called builtin functions or classes or types, please correct me if I am wrong:
once we import torch, these builtin functions are ready to use, correct?
if the above is correct, and given that any is a builtin function, then why can’t I find torch.any or torch.Tensor.any? Where is any to be found and used?
all the keys of torch.__builtins__ are here:
any, __name__, StopAsyncIteration, RuntimeWarning, BufferError, NotImplementedError, UnboundLocalError, repr, UnicodeTranslateError, UnicodeWarning, max, IndentationError, BytesWarning, None, IsADirectoryError, __IPYTHON__, AttributeError, reversed, ConnectionError, BaseException, issubclass, ConnectionResetError, range, staticmethod, PendingDeprecationWarning, __doc__, AssertionError, GeneratorExit, chr, abs, RecursionError, MemoryError, __package__, frozenset, str, enumerate, ImportError, dreload, bytes, float, map, TabError, True, ZeroDivisionError, False, memoryview, super, type, filter, SystemError, getattr, callable, __build_class__, setattr, sum, round, ConnectionAbortedError, OSError, ValueError, dict, DeprecationWarning, RuntimeError, TypeError, pow, hasattr, exec, zip, ArithmeticError, help, LookupError, UserWarning, PermissionError, print, ReferenceError, SystemExit, IndexError, complex, all, license, UnicodeError, input, vars, open, KeyboardInterrupt, bytearray, StopIteration, ConnectionRefusedError, __import__, SyntaxError, KeyError, UnicodeEncodeError, dir, ProcessLookupError, sorted, FutureWarning, ascii, ChildProcessError, next, IOError, EnvironmentError, format, id, __spec__, ImportWarning, divmod, TimeoutError, EOFError, Ellipsis, OverflowError, object, eval, hash, locals, InterruptedError, delattr, int, FloatingPointError, credits, Exception, min, BlockingIOError, __loader__, globals, NotADirectoryError, NameError, slice, get_ipython, classmethod, UnicodeDecodeError, copyright, compile, bool, isinstance, ord, __debug__, len, hex, ResourceWarning, BrokenPipeError, Warning, tuple, oct, iter, FileNotFoundError, NotImplemented, set, property, SyntaxWarning, FileExistsError, bin, list
|
st119085
|
__builtins__ has nothing to do with torch; it’s the special Python name for the quasi-module containing the language’s builtin functions, and all modules have it as an attribute.
The any function is a Python built-in, not a torch function, but there is probably a way to achieve the same thing on a torch tensor.
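For example, a rough tensor-side equivalent of Python’s any() (just a sketch; there is no dedicated torch.any in this version) is to test whether any element is nonzero:
import torch

t = torch.Tensor([0, 0, 3])
has_nonzero = t.ne(0).sum() > 0   # ne() gives a ByteTensor; a positive count means at least one nonzero element
print(has_nonzero)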
|
st119086
|
I built a net and used cross-entropy loss; the forward pass succeeds, but the backward fails! I don’t know why it doesn’t work.
RuntimeErrorTraceback (most recent call last)
<ipython-input-3-c3211f22ae0b> in <module>()
132 print "loss: {}, train_acc: {}".format(loss.data[0], train_acc)
133
--> 134 loss.backward()
135 opt.step()
136
/root//lib/python2.7/site-packages/torch/autograd/variable.pyc in backward(self, gradient, retain_variables)
144 'or with gradient w.r.t. the variable')
145 gradient = self.data.new().resize_as_(self.data).fill_(1)
--> 146 self._execution_engine.run_backward((self,), (gradient,), retain_variables)
147
148 def register_hook(self, hook):
/root//lib/python2.7/site-packages/torch/autograd/_functions/tensor.pyc in backward(self, grad_output)
307 def backward(self, grad_output):
308 return tuple(grad_output.narrow(self.dim, end - size, size) for size, end
--> 309 in zip(self.input_sizes, _accumulate(self.input_sizes)))
310
311
/root//lib/python2.7/site-packages/torch/autograd/_functions/tensor.pyc in <genexpr>((size, end))
306
307 def backward(self, grad_output):
--> 308 return tuple(grad_output.narrow(self.dim, end - size, size) for size, end
309 in zip(self.input_sizes, _accumulate(self.input_sizes)))
310
RuntimeError: out of range at /data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.10_1488756735684/work/torch/lib/TH/generic/THTensor.c:367
|
st119087
|
opt = O.Adagrad(filter(lambda p: p.requires_grad,model.parameters()),lr=config["lr"],)
I removed one parameter in the code because I want to use pretrained vectors (GloVe). Is this the reason?
|
st119088
|
Hi,
Unfortunately without more information about what you run, it’s hard to help you.
The error is that one of the gradients passed back to your Concat operation does not contain the right number of dimensions.
You should make sure that you never change the content of a Variable by accessing its .data.
|
st119089
|
Thank you for your time. I do use the concat operation, but I never change the content of a Variable.
|
st119090
|
Can you share a small example that would allow us to reproduce this problem?
Or share the code corresponding to your concat operation and what you do with the concat output.
|
st119091
|
this is my example! I finally reproduced it!
import torch
import torch.nn as nn
from torch.autograd import Variable
import numpy as np
class Fnn2D3(nn.Module):
    def __init__(self, input_dim, hidden_dim, dp_ratio):
        super(Fnn2D3, self).__init__()
        self.out = nn.Sequential(
            nn.Dropout(dp_ratio),
            nn.Linear(input_dim, hidden_dim, bias=False),
            nn.ReLU(),
            nn.Dropout(dp_ratio),
            nn.Linear(hidden_dim, hidden_dim, bias=False),
            nn.ReLU())

    def forward(self, inputs):
        a, b, c = inputs.size()
        inputs = inputs.view(-1, c)
        outputs = self.out(inputs)
        outputs = outputs.view(a, b, -1)
        return outputs

class Mlp2(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim, dp_ratio):
        super(Mlp2, self).__init__()
        self.out = nn.Sequential(
            nn.Dropout(dp_ratio),
            nn.Linear(input_dim, hidden_dim, bias=False),
            nn.ReLU(),
            nn.Dropout(dp_ratio),
            nn.Linear(hidden_dim, output_dim, bias=False)
        )

    def forward(self, inputs):
        return self.out(inputs)

class Example(nn.Module):
    def __init__(self):
        super(Example, self).__init__()
        self.cmp1 = Fnn2D3(600, 200, 0.2)
        self.cmp2 = Fnn2D3(600, 200, 0.2)
        self.mlp = Mlp2(400, 200, 3, 0.2)

    def forward(self, a, b, c, d):
        a = self.cmp1(torch.cat((a, c), 2))
        b = self.cmp2(torch.cat((b, d), 2))
        a = torch.mean(a, 1)
        b = torch.mean(b, 1)
        print a.size()
        print b.size()
        # hypo_mpool:(batch,1,cmp_dim),prem_mpool..
        out = self.mlp(torch.cat((a, b), -1).view(5, -1))
        return out

a = Variable(torch.randn(5, 30, 300))
b = Variable(torch.randn(5, 23, 300))
c = Variable(torch.randn(5, 30, 300))
d = Variable(torch.randn(5, 23, 300))
e = Variable(torch.from_numpy(np.array([1, 0, 1, 2, 1])))

opt = O.Adagrad(model.parameters(), lr=config["lr"],)
model = Example()
output = model(a, b, c, d)
print output.size()
criterion = nn.CrossEntropyLoss()
loss = criterion(output, e)
print loss
loss.backward()
opt.step()
|
st119092
|
It looks like there is a problem in the backward of the cat operation when the given dimension is negative.
You can replace torch.cat((a,b), -1) by torch.cat((a,b), a.dim()-1) as a temporary fix.
I will look into what is causing this bug when I have more time (cc @apaszke )
|
st119093
|
When using PyTorch for a conditional GAN, I found that the training speed changed very little whether I used the GPU or the CPU.
Are there some mistakes in my code?
|
st119094
|
there might not be mistakes in your code.
If you run a very small model then it will have the same speed on CPU and GPU.
If you run a large model, you might see a speedup.
You can look at our examples for GAN code which is fast: https://github.com/pytorch/examples/
|
st119095
|
Are you putting the net/model on CUDA? Unlike TensorFlow, here you have to specify that you want to use the GPU.
For small models there isn’t much difference, but for big models the difference is quite big.
|
st119096
|
@Ismail_Elezi, thanks for your suggestion. I use the DCGAN framework (5 conv layers for the D net and 5 deconv layers for the G net). For every Variable I use a = a.cuda(); is this setting right?
|
st119097
|
Yes, you should send all the Variables to CUDA, but you also need to send both neural networks to CUDA, as well as the criterion (which contains the cost function).
Something like:
if opt.cuda:
    discriminator.cuda()
    generator.cuda()
    criterion.cuda()
    input, label = input.cuda(), label.cuda()
might do the trick.
|
st119098
|
Hi everyone,
I hope to train the REINFORCE algorithm in batch mode with a batch size larger than 1. From this example: https://github.com/pytorch/examples/blob/master/reinforcement_learning/reinforce.py
I have a feeling that this code needs to be modified:
for action, r in zip(model_Net.saved_actions, rewards):
    action.reinforce(r)
optimizer.zero_grad()
autograd.backward(model_Net.saved_actions, [None for _ in model_Net.saved_actions])
My question is how to pass ‘r’ and ‘action’ in batch mode during back-propagation. It might be related to reshaping the ‘action’ values in a way that allows back-propagation.
Right now the idea I came up with is to compute ‘r’ and ‘action’ in batch mode in the forward passes, but update the gradients sequentially (1 sample at a time) in the back-propagations (e.g., run ‘finish_episode’ several times). That is obviously not optimal.
Thanks in advance,
Rein
|
st119099
|
If the shape of an action variable has batch dimension b, then you can call action.reinforce using a reward that’s either a scalar or a vector of length b.
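A rough sketch of what that can look like with the old stochastic-function API (the policy network, states and returns below are made-up placeholders, and the reward shape follows the reply above):
import torch
import torch.nn as nn
import torch.autograd as autograd
from torch.autograd import Variable

b, state_dim, num_actions = 32, 8, 4            # hypothetical sizes
policy = nn.Sequential(nn.Linear(state_dim, num_actions), nn.Softmax())

states = Variable(torch.randn(b, state_dim))
probs = policy(states)                          # (b, num_actions)
actions = probs.multinomial()                   # stochastic output with batch dim b

returns = torch.randn(b)                        # one reward per batch element (or a single scalar)
actions.reinforce(returns)
autograd.backward([actions], [None])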
|
st119100
|
Hi, are there any very simple tutorials or code samples of how to use .reinforce in batch mode?
Nothing complicated (or long to train, like gym environments or Atari); just, say, a synthetic linear regression dataset with some noise added?
This would be really helpful to newbs!
Where’s the test code for .reinforce? Probably a good place to start from, if you wanted to write a batch-wise test problem?
OK, update: the .reinforce tests are in here:
https://github.com/pytorch/pytorch/blob/v0.1.10/test/test_autograd.py
import contextlib
import gc
import sys
import math
import torch
import unittest
from copy import deepcopy
from collections import OrderedDict
from torch.autograd import gradcheck
from common import TestCase, run_tests
from torch.autograd._functions import *
from torch.autograd import Variable, Function
if sys.version_info[0] == 2:
    import cPickle as pickle
else:
    import pickle
PRECISION = 1e-4
|
st119101
|
Hi!
This code works fine and BiRNN converges:
def forward(self, x):
    # Set initial states
    h0 = Variable(torch.zeros(self.num_layers * 2, x.size(0), self.hidden_size)).cuda()  # 2 for bidirection
    c0 = Variable(torch.zeros(self.num_layers * 2, x.size(0), self.hidden_size)).cuda()
    # Forward propagate RNN
    #pack = torch.nn.utils.rnn.pack_padded_sequence(x, batch_size*[28], batch_first=True)
    out, _ = self.lstm(x, (h0, c0))
    # Decode hidden state of last time step
    #out = out[0].view(-1, sequence_length, hidden_size*2)
    out = self.fc(out[:, -1, :])
    return out
Result:
Epoch [1/2], Step [100/600], Loss: 0.6213
Epoch [1/2], Step [200/600], Loss: 0.2935
Epoch [1/2], Step [300/600], Loss: 0.2289
Epoch [1/2], Step [400/600], Loss: 0.1926
Epoch [1/2], Step [500/600], Loss: 0.0635
Epoch [1/2], Step [600/600], Loss: 0.0311
Epoch [2/2], Step [100/600], Loss: 0.1164
Epoch [2/2], Step [200/600], Loss: 0.0957
Epoch [2/2], Step [300/600], Loss: 0.1021
Epoch [2/2], Step [400/600], Loss: 0.0675
Epoch [2/2], Step [500/600], Loss: 0.1220
Epoch [2/2], Step [600/600], Loss: 0.0311
Test Accuracy of the model on the 10000 test images: 97 %
But then when I use pack_padded_sequence loss is stuck:
def forward(self, x):
    # Set initial states
    h0 = Variable(torch.zeros(self.num_layers * 2, x.size(0), self.hidden_size)).cuda()  # 2 for bidirection
    c0 = Variable(torch.zeros(self.num_layers * 2, x.size(0), self.hidden_size)).cuda()
    # Forward propagate RNN
    pack = torch.nn.utils.rnn.pack_padded_sequence(x, batch_size*[28], batch_first=True)
    out, _ = self.lstm(pack, (h0, c0))
    # Decode hidden state of last time step
    out = out[0].view(-1, sequence_length, hidden_size*2)
    out = self.fc(out[:, -1, :])
    return out
Result:
Epoch [1/2], Step [100/600], Loss: 2.3037
Epoch [1/2], Step [200/600], Loss: 2.3045
Epoch [1/2], Step [300/600], Loss: 2.3089
Epoch [1/2], Step [400/600], Loss: 2.2948
Epoch [1/2], Step [500/600], Loss: 2.2972
Epoch [1/2], Step [600/600], Loss: 2.3111
Epoch [2/2], Step [100/600], Loss: 2.3000
Epoch [2/2], Step [200/600], Loss: 2.2944
Epoch [2/2], Step [300/600], Loss: 2.2878
Epoch [2/2], Step [400/600], Loss: 2.2956
Epoch [2/2], Step [500/600], Loss: 2.2993
Epoch [2/2], Step [600/600], Loss: 2.3057
Test Accuracy of the model on the 10000 test images: 11 %
Why does this happen? As I understand it, it should work.
Using pack_padded_sequence has no real purpose here because I use images with a fixed sequence size, but I am learning PyTorch and trying different things.
Thanks!
|
st119102
|
I think you unpacked the sequence incorrectly. Use pad_packed_sequence with the RNN output.
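Concretely, a sketch of the fix inside the second forward() above (same variable names as that snippet):
# after the LSTM call on the packed input
out, _ = self.lstm(pack, (h0, c0))
# unpack before indexing into the time dimension
out, lengths = torch.nn.utils.rnn.pad_packed_sequence(out, batch_first=True)
out = self.fc(out[:, -1, :])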
|
st119103
|
VladislavPrh:
#out = out[0].view(-1, sequence_length, hidden_size*2)
Why is this commented here & not in the next snippet?
|
st119104
|
As apaszke correctly mentioned, the error was in out = out[0].view(-1, sequence_length, hidden_size*2) so I should have used pad_packed_sequence
|
st119105
|
nn.DataParallel allows you to replicate and parallelize the execution of a model by sharding over the batch.
This assumes the model can fit inside GPU memory. Is there a natural way in PyTorch to run a single model across multiple GPUs?
On a similar topic, given a GAN setting with a generator and a discriminator and two GPUs, what is the recommendation for speeding up the computation, given the dependency between the discriminator and the generator?
|
st119106
|
Yes, you can split your single model across multiple-GPUs in PyTorch with minimum fuss. Here is an example from @apaszke :
class MyModel(nn.Module):
    def __init__(self, split_gpus):
        self.large_submodule1 = ...
        self.large_submodule2 = ...
        self.split_gpus = split_gpus
        if split_gpus:
            self.large_submodule1.cuda(0)
            self.large_submodule2.cuda(1)

    def forward(self, x):
        x = self.large_submodule1(x)
        if self.split_gpus:
            x = x.cuda(1)  # P2P GPU transfer
        return self.large_submodule2(x)
One caveat (to “minimum fuss”) is that you probably want to try several split points for optimal GPU memory consumption across the devices! Here (https://gist.github.com/ajdroid/420624cdd6643c397b3a62c68904791c) is a more fleshed-out example with VGG-16.
|
st119107
|
This is interesting, but it does not really run it in parallel. While you’re running module 1, module 2 is idling (actually, the corresponding GPU) and viceversa. I was looking for a way to keep both GPUs busy at all times.
|
st119108
|
@claudiomartella
claudiomartella:
This assumes the model can fit inside of GPU memory. Is there a natural way in pytorch to run across multi-GPU a single model.
This inherently will involve some idling because of the sequential nature of the forward + backward. This is model parallelism (as opposed to data parallelism). For example, see here.
|
st119109
|
What are the expected behaviours of these multiple outputs?
That is, what if I want to do BP from out1 sometimes and do BP from out2 sometimes?
I’ve tried to directly do BP like this.
class mm(nn.Module):
    def __init__(self):
        super(mm, self).__init__()
        self.m = nn.Linear(3, 2)
        self.m2 = nn.Linear(2, 4)

    def forward(self, input):
        o1 = self.m(input)
        o2 = self.m2(o1)
        return o1, o2

mmodel = mm()
optimizer = optim.SGD(mmodel.parameters(), lr=1, momentum=0.8)
#optimizer = optim.Adam(mmodel.parameters(), lr=1, betas=(0.5, 0.999))
input = Variable(torch.randn(1, 3))
print(mmodel.m.weight.data)
print(mmodel.m2.weight.data)
mmodel.zero_grad()
output1, output = mmodel(input)
output1.backward(torch.ones(1, 2))
#output.backward(torch.ones(1, 4))
optimizer.step()
print(mmodel.m.weight.data)
print(mmodel.m2.weight.data)
When I use momentum-based SGD, it also updates m2's weights, especially if I have previously done some updates through output. That is because the optimizer uses the stored momentum to update m2's weights. Is there any recipe for this? One hacky way I can think of is to check whether a gradient is 0 and only update parameters with nonzero gradients.
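One less hacky alternative is to give each submodule its own optimizer and only step the one whose output you backpropagated; a sketch reusing the mm module above (the split into two parameter groups is my assumption):
import torch
import torch.optim as optim
from torch.autograd import Variable

mmodel = mm()
opt_m = optim.SGD(mmodel.m.parameters(), lr=1, momentum=0.8)
opt_m2 = optim.SGD(mmodel.m2.parameters(), lr=1, momentum=0.8)

input = Variable(torch.randn(1, 3))
mmodel.zero_grad()
o1, o2 = mmodel(input)
o1.backward(torch.ones(1, 2))
opt_m.step()    # only m is updated; m2 accumulates no momentum from this step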
|
st119110
|
Hi,
I am trying to implement a CNN where my output must be only one element. So I am doing something like this (which I took from the GAN example):
class Discriminator(nn.Module):
    def __init__(self, nc, ndf):
        super(Discriminator, self).__init__()
        self.features = nn.Sequential(
            # input is (nc) x 64 x 64
            nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf) x 32 x 32
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 2),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*2) x 16 x 16
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 4),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*4) x 8 x 8
            nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 8),
            nn.LeakyReLU(0.2, inplace=True)
            # state size. (ndf*8) x 4 x 4
            #nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
            #nn.Linear(ndf * 8, 1),
            #nn.Sigmoid()
        )
        self.classifier = nn.Sequential(
            nn.Linear(self.inpelts, 1),
            nn.Sigmoid()
        )

    def forward(self, input):
        x = self.features(input)
        x = x.view(x.size(0), -1)  # B, chans*H*W
        self.inpelts = x.size(1)
        print 'x shape ', x.size()
        output = self.classifier(x)
        return output
However, in order to make it work, I need to define in advance the number of elements going to the Linear (fully connected) layer.
Is there a way to build it automatically, like in Caffe or Tensorflow where it detects the number of elements by itself?
Thanks!
|
st119111
|
There is no way to detect this automatically with nn.Linear at the moment.
If you use the functional interface (http://pytorch.org/docs/nn.html#torch.nn.functional.linear) then you can initialize the weights in the constructor to some size:
linear_weight = nn.Parameter(torch.randn(1, 1), requires_grad=True) # resize this in the forward function
And in the forward function:
# some stuff before, where x is the input to linear
if self.linear_weight.data.nelement() == 1:
    self.linear_weight.data.resize_(1, x.size(1))  # output_features x input_features
    # initialize weights randomly
    self.linear_weight.data.normal_(-0.1, 0.1)
x = nn.functional.linear(x, self.linear_weight)
|
st119112
|
Hello, I’m trying to use torch.save and torch.load in my script.
Something strange happened: when I saved an 800*3*480*640 FloatTensor A and re-loaded it, all values after A[582][2] became 0. I believe 582 and 2 are two special numbers, because I tested this on another 1200*3*480*640 FloatTensor B and, again, all values of the re-loaded B after B[582][2] became 0.
I’m confused by this; is there any restriction on using torch.save and torch.load? Here’s how I used them:
A = torch.from_numpy(A_array)
checkEmpty(A) # passed
torch.save(A, 'A_tensor')
A = torch.load('A_tensor')
checkEmpty(A) # failed
Then to find the first ZERO map:
for i in range(A.size()[0]):
    for j in range(A.size()[1]):
        if torch.max(A[i][j]) == 0.0:
            print(i, j)
The first (i j) is (582 2)
|
st119113
|
are you on version 0.1.6 or earlier? we fixed a bug for very large tensors being serialized in 0.1.7.
|
st119114
|
Thank you. But how do I check the version of the currently installed PyTorch?
And I installed it through:
pip install https://download.pytorch.org/whl/cu75/torch-0.1.10.post2-cp27-none-linux_x86_64.whl
It looks like version 0.1.10.
|
st119115
|
What OS are you on?
I just tried this small snippet on Linux (CentOS7) and on OSX:
import torch
a = torch.ones(800*3*480*640)
print(a.eq(0).sum())
torch.save(a, 'a.pth')
b = torch.load('a.pth')
print(b.eq(0).sum())
On Linux it works fine, on OSX i get an error, which i am investigating:
0
Traceback (most recent call last):
File "a.py", line 5, in <module>
torch.save(a, 'a.pth')
File "/Users/soumith/code/pytorch/torch/serialization.py", line 120, in save
return _save(obj, f, pickle_module, pickle_protocol)
File "/Users/soumith/code/pytorch/torch/serialization.py", line 192, in _save
serialized_storages[key]._write_file(f)
RuntimeError: Unknown error: -1
|
st119116
|
smth:
torch.save(a, 'a.pth')
b = torch.load('a.pth')
And by running your script, I got 200410112 as the result of
print(b.eq(0).sum())
|
st119117
|
I tried it on an ubuntu 14.04 as well, but couldn’t reproduce the issue.
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.2 LTS
Release: 14.04
Codename: trusty
However, the OSX failure is good, I am tracking it here and trying to find out the issue there: https://github.com/pytorch/pytorch/issues/1031
Can you tell me about your OS, do you have any locale set, or do you just use the EN locale?
You can find your current locale with the command locale
$ locale
LANG=en_US.UTF-8
LANGUAGE=
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=
I’m not sure locale matters, i am trying to eliminate variables.
Also, can you give me your kernel version with uname -a:
$ uname -a
Linux fatbox 3.16.0-37-generic #51~14.04.1-Ubuntu SMP Wed May 6 15:23:14 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
And lastly, can you check if you have enough free-space on your machine? df -h will give the answer:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 355G 302G 35G 90% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 5.9G 4.0K 5.9G 1% /dev
tmpfs 1.2G 1.7M 1.2G 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 5.9G 124M 5.8G 3% /run/shm
none 100M 152K 100M 1% /run/user
/dev/sda4 96M 29M 68M 30% /boot/efi
/dev/sdb2 2.7T 2.0T 609G 77% /media/hdd2
|
st119118
|
Xiaoyu_Liu:
for i in range(A.size()[0]):
    for j in range(A.size()[1]):
        if torch.max(A[i][j]) == 0.0:
            print(i, j)
Sorry for the late reply; I used np.save and np.load to solve the problem in the end. And I just tried torch.save and torch.load again: still the same problem. I ran the commands you mentioned, and the results are:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.4 LTS
Release: 14.04
Codename: trusty
$ locale
LANG=en_CA.UTF-8
LANGUAGE=en_CA:en
LC_CTYPE="en_CA.UTF-8"
LC_NUMERIC="en_CA.UTF-8"
LC_TIME="en_CA.UTF-8"
LC_COLLATE="en_CA.UTF-8"
LC_MONETARY="en_CA.UTF-8"
LC_MESSAGES="en_CA.UTF-8"
LC_PAPER="en_CA.UTF-8"
LC_NAME="en_CA.UTF-8"
LC_ADDRESS="en_CA.UTF-8"
LC_TELEPHONE="en_CA.UTF-8"
LC_MEASUREMENT="en_CA.UTF-8"
LC_IDENTIFICATION="en_CA.UTF-8"
LC_ALL=
$ uname -a
Linux sengled-gpu-1 4.2.0-35-generic #40~14.04.1-Ubuntu SMP Fri Mar 18 16:37:35 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 32G 4.0K 32G 1% /dev
tmpfs 6.3G 1.9M 6.3G 1% /run
/dev/sda2 854G 578G 233G 72% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 32G 76K 32G 1% /run/shm
none 100M 56K 100M 1% /run/user
/dev/sda1 511M 3.4M 508M 1% /boot/efi
While I don’t understand them at all.
|
st119119
|
Hi, I suddenly realized that perhaps my version is not the latest one, because I installed it one day before I checked it again. Also I can’t use torch.__version__, which seems to prove that thought. Really sorry about the interruption!
But when I try to install the latest version, this error happens:
SSLError: [Errno 1] _ssl.c:510: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure
If you are experienced with it, can you tell me how to solve that? If not, I’ll figure it out somewhere else :smiley:
|
st119120
|
Hello,
do
pip install pyopenssl
pip install ndg-httpsclient
pip install pyasn1
and try installing the package again.
Best,
Alex
|
st119121
|
Thanks! But after installing these, the problem still wasn’t solved… Then I refreshed the PyTorch website to install again, and it works. Magical…
|
st119122
|
I noticed that if at some point I do, for example, sd = model.state_dict(), then the sd variable is updated on every backward pass. How can I keep such a variable from updating?
|
st119123
|
I think it is the same question as below:
if you update all parameters except the few you don’t want to update, it is the same as not updating those few parameters.
Updating the parameters of a few nodes in a pre-trained network during training vision
Hi,
I am really new to PyTorch and was wondering if there is a way to specify only a subset of neurons (of a particular layer) to update during training and freeze the rest. Say, update only 2500 nodes of the 4096 in AlexNet, FC7. param.requires_grad seems to apply to all the neurons.
Appreciate your inputs.
|
st119124
|
I think that is a different issue. I want to update all parameters. I just want to be able to keep in memory a copy of a previous state_dict.
I solved it by saving the state_dict to disk and then loading it, but I still wonder how to do it in memory.
|
st119125
|
you can also do sd_clone = copy.deepcopy(sd) instead of writing it to disk and reading it again…
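For example (sd being the dict returned by model.state_dict() above):
import copy

sd = model.state_dict()
sd_snapshot = copy.deepcopy(sd)   # detached copy; later training steps do not touch it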
|
st119126
|
When defining a new net I find that I use an object of an nn.Module class multiple times, so I just defined this object once and use it in every nn.Module, as follows:
feature = FeatureExtracter(SUBMODEL)

class Net1(nn.Module):
    def forward(self, x):
        s_feats = feature(x, LAYER)
        ......  # do something
        return ......

class Net2(nn.Module):
    def forward(self, x):
        s_feats = feature(x, LAYER)
        ......  # do something
        return ......
Is that a bad programming habit? Or is there any reason in PyTorch why I shouldn’t do that, and should instead define a new object in every Net, like this:
class Net1(nn.Module):
    def __init__(self):
        super().__init__()
        self.feature = FeatureExtracter(SUBMODEL)

    def forward(self, x):
        s_feats = self.feature(x, LAYER)
        ......
        return ......

class Net2(nn.Module):
    def __init__(self):
        super().__init__()
        self.feature = FeatureExtracter(SUBMODEL)

    def forward(self, x):
        s_feats = self.feature(x, LAYER)
        ......
        return ......
|
st119127
|
Hello, I am trying to do a simple test: I want to show the network a number at t=0 and then have it output that number k steps in the future. Meanwhile the network is shown zeros. But I am getting an error when doing backward, and I am not sure how to read the error message.
Here is the code I wrote:
import argparse
import gym
import numpy as np
from itertools import count
from collections import namedtuple
import os
import torch
import random
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.autograd as autograd
from torch.autograd import Variable
import torchvision.transforms as T
import cv2
import pickle
import glob
import time
import subprocess
from collections import namedtuple
import resource
import math
class Policy(nn.Module):
    def __init__(self):
        super(Policy, self).__init__()
        self.fc1 = nn.Linear(5, 5)
        self.lstm = nn.LSTMCell(5, 2)
        self.fc2 = nn.Linear(2, 1)

    def forward(self, x, hidden):
        y = self.fc1(x)
        hx, cx = self.lstm(y, hidden)
        y = self.fc2(hx)
        return y, hx, cx

model = Policy()
optimizer = optim.Adam(model.parameters(), lr=1)
step = 10
for i in range(100):
    yhat = Variable(torch.zeros(step, 1))
    target = Variable(torch.zeros(step, 1))
    target[-1, 0] = 1
    cx = Variable(torch.zeros(1, 2))
    hx = Variable(torch.zeros(1, 2))
    hidden = [hx, cx]
    for j in range(step):
        x = Variable(torch.zeros(1, 5))
        if j is 0:
            x += 1
        y, hx, cx = model(x, hidden)
        print (hx.data.numpy())
        hidden = (hx, cx)
        yhat[j] = y.clone()
    print ('done - Hoping the last value should be zero')
    # learning
    optimizer.zero_grad()
    error = ((yhat-target)*(yhat-target)).mean()
    error.backward()
    optimizer.step()
Here is the error I get,
RuntimeError: matrices expected, got 1D, 2D tensors at /data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.10_1488756735684/work/torch/lib/TH/generic/THTensorMath.c:1224
I am sure I am just doing something silly.
|
st119128
|
Can you show us a full stack trace? Something has an invalid size, but I don’t know where.
|
st119129
|
done - the last output should be one
Traceback (most recent call last):
File "/home/jtremblay/code/Personal-git/dqn/simpleLstm.py", line 84, in <module>
error.backward()
File "/home/jtremblay/anaconda2/lib/python2.7/site-packages/torch/autograd/variable.py", line 146, in backward
self._execution_engine.run_backward((self,), (gradient,), retain_variables)
File "/home/jtremblay/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/linear.py", line 22, in backward
grad_input = torch.mm(grad_output, weight)
RuntimeError: matrices expected, got 1D, 2D tensors at /data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.10_1488756735684/work/torch/lib/TH/generic/THTensorMath.c:1224
[Finished in 0.7s with exit code 1]
[cmd: ['/home/jtremblay/anaconda2/bin/python', '-u', '/home/jtremblay/code/Personal-git/dqn/simpleLstm.py']]
[dir: /home/jtremblay/code/Personal-git/dqn]
[path: /home/jtremblay/anaconda/bin]
Here is the full stack, sorry I should have added the whole thing.
|
st119130
|
I ran the code on my machine and it worked. I only removed the cv2 import; the rest of it was exactly the same. Output:
Version:
>>> torch.__version__
'0.1.10+16a133e'
OUTPUT:
[[ 0.22190067 0.113309 ]]
[[ 0.22374019 0.17195135]]
[[ 0.24251971 0.20971343]]
[[ 0.25022674 0.23256381]]
[[ 0.25297049 0.24633361]]
[[ 0.25373745 0.2547116 ]]
[[ 0.25378832 0.25985831]]
[[ 0.25362667 0.26304504]]
[[ 0.25343812 0.26503012]]
[[ 0.2532804 0.26627228]]
done - Hoping the last value should be zero
[[ 7.61594176e-01 -6.59305393e-19]]
[[ 9.63655114e-01 -1.15941839e-06]]
[[ 9.94877398e-01 -7.77833122e-07]]
[[ 9.99219239e-01 -7.20439346e-07]]
[[ 9.99815702e-01 -7.12471092e-07]]
[[ 9.99897778e-01 -7.11375492e-07]]
[[ 9.99909043e-01 -7.11224914e-07]]
[[ 9.99910653e-01 -7.11203825e-07]]
[[ 9.99910831e-01 -7.11200983e-07]]
[[ 9.99910891e-01 -7.11200926e-07]]
done - Hoping the last value should be zero
[[ 0.76159418 -0. ]]
[[ 9.64020252e-01 -4.48575378e-12]]
[[ 9.95050907e-01 -2.20972351e-12]]
[[ 9.99326706e-01 -1.97817370e-12]]
[[ 9.99906898e-01 -1.94826468e-12]]
[[ 9.99985516e-01 -1.94424338e-12]]
[[ 9.99996126e-01 -1.94369932e-12]]
[[ 9.99997556e-01 -1.94362603e-12]]
[[ 9.99997735e-01 -1.94361671e-12]]
[[ 9.99997795e-01 -1.94361302e-12]]
done - Hoping the last value should be zero
[[ 0.76159418 -0. ]]
[[ 9.64026868e-01 -3.94646712e-17]]
[[ 9.95054364e-01 -1.57264918e-17]]
[[ 9.99329090e-01 -1.36518715e-17]]
[[ 9.99909043e-01 -1.33884757e-17]]
[[ 9.99987543e-01 -1.33531386e-17]]
[[ 9.99998152e-01 -1.33483426e-17]]
[[ 9.99999583e-01 -1.33477065e-17]]
[[ 9.99999762e-01 -1.33476039e-17]]
[[ 9.99999821e-01 -1.33476039e-17]]
done - Hoping the last value should be zero
[[ 0.76159418 0. ]]
[[ 9.64027464e-01 -7.70289075e-22]]
[[ 9.95054662e-01 -2.58641162e-22]]
[[ 9.99329209e-01 -2.18775464e-22]]
[[ 9.99909163e-01 -2.13788550e-22]]
[[ 9.99987662e-01 -2.13120985e-22]]
[[ 9.99998271e-01 -2.13030714e-22]]
[[ 9.99999702e-01 -2.13018117e-22]]
[[ 9.99999881e-01 -2.13016502e-22]]
[[ 9.99999940e-01 -2.13016072e-22]]
done - Hoping the last value should be zero
[[ 0.76159418 0. ]]
[[ 9.64027584e-01 -3.21531385e-26]]
[[ 9.95054722e-01 -9.34462056e-27]]
[[ 9.99329269e-01 -7.73195391e-27]]
[[ 9.99909222e-01 -7.53276268e-27]]
[[ 9.99987721e-01 -7.50614017e-27]]
[[ 9.99998331e-01 -7.50253251e-27]]
[[ 9.99999762e-01 -7.50207491e-27]]
[[ 9.99999940e-01 -7.50200250e-27]]
[[ 1.00000000e+00 -7.50200250e-27]]
done - Hoping the last value should be zero
[[ 0.76159418 0. ]]
[[ 9.64027584e-01 -2.75070681e-30]]
[[ 9.95054722e-01 -7.05951502e-31]]
[[ 9.99329329e-01 -5.73105970e-31]]
[[ 9.99909222e-01 -5.56879258e-31]]
[[ 9.99987721e-01 -5.54712352e-31]]
[[ 9.99998331e-01 -5.54420359e-31]]
[[ 9.99999762e-01 -5.54382320e-31]]
[[ 9.99999940e-01 -5.54375925e-31]]
[[ 1.00000000e+00 -5.54375925e-31]]
done - Hoping the last value should be zero
|
st119131
|
This is interesting, it is still not working on my end even without cv2. Are you running pytorch with cuda 8.0?
|
st119132
|
pranav:
torch.__version__
Why can’t I run this script for checking the version? Here’s what I got:
>>> torch.__version__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute '__version__'
|
st119133
|
@Xiaoyu_Liu because maybe you are not on the latest version of pytorch. we introduced this after 0.1.9
|
st119134
|
smth:
because maybe you are not on the latest version of pytorch. we introduced this after 0.1.9
I see, perhaps I should re-install Pytorch to see whether it can solve my torch.save and torch.load problem as well!
|
st119135
|
I tried on my laptop with a clean PyTorch install (using conda) and I still get the error with the grads. It is weird; trying to make sense of the problem, it seems that the last layer (the fully connected one) wants to do a backward with size two, but the output is of size one.
In the backward function of the linear class, if I print the following variables:
def backward(self, grad_output):
    print (grad_output)
    input, weight, bias = self.saved_tensors
    grad_input = grad_weight = grad_bias = None
    print(self.needs_input_grad)
    if self.needs_input_grad[0]:
        print ('back')
        # print (self)
        print (grad_output, weight)
I get
back
(
1.00000e-02 *
-6.6986
[torch.FloatTensor of size 1]
,
0.6127 0.6033
[torch.FloatTensor of size 1x2]
)
The first variable is equal to the loss calculated by error = (yhat-target).pow(2).mean(); I am confused as to why the backward pass is expecting something of size 1x2.
Here is the code
import numpy as np
import torch
import random
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.autograd as autograd
from torch.autograd import Variable
import torchvision.transforms as T
class Policy(nn.Module):
    def __init__(self):
        super(Policy, self).__init__()
        self.fc1 = nn.Linear(5, 5)
        self.lstm = nn.LSTMCell(5, 2)
        self.fc2 = nn.Linear(2, 1)

    def forward(self, x, hidden):
        y = self.fc1(x)
        hx, cx = self.lstm(y, hidden)
        y = self.fc2(hx)
        return y, hx, cx

model = Policy()
optimizer = optim.Adam(model.parameters())
step = 1
for i in range(100):
    yhat = Variable(torch.zeros(step, 1))
    target = Variable(torch.zeros(step, 1))
    target[-1, 0] = 1
    cx = Variable(torch.zeros(1, 2))
    hx = Variable(torch.zeros(1, 2))
    hidden = [hx, cx]
    for j in range(step):
        x = Variable(torch.zeros(1, 5))
        if j is 0:
            x += 1
        x = Variable(x.data)
        y, hx, cx = model(x, hidden)
        # print (hx.data.numpy())
        hidden = (hx, cx)
        print ('y', y)
        print ('hidden', hidden)
        yhat[j] = y
    print ('done - the last output should be one')
    # learning
    optimizer.zero_grad()
    error = (yhat-target).pow(2).mean()
    print (error)
    error.backward()
    optimizer.step()
|
st119136
|
The problem is when you do yhat[j] = y:
>>> print 'yhat[j] size: ', yhat[j].size(), 'y size: ', y.size()
yhat[j] size: torch.Size([1]) y size: torch.Size([1, 1])
so this is causing issues when backpropping.
If you just change that line to:
yhat[j] = y[0]
everything works properly.
Full code (same as yours, except for that one line):
import numpy as np
import torch
import random
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.autograd as autograd
from torch.autograd import Variable
import torchvision.transforms as T
class Policy(nn.Module):
    def __init__(self):
        super(Policy, self).__init__()
        self.fc1 = nn.Linear(5, 5)
        self.lstm = nn.LSTMCell(5, 2)
        self.fc2 = nn.Linear(2, 1)

    def forward(self, x, hidden):
        y = self.fc1(x)
        hx, cx = self.lstm(y, hidden)
        y = self.fc2(hx)
        return y, hx, cx

model = Policy()
optimizer = optim.Adam(model.parameters())
step = 1
for i in range(100):
    yhat = Variable(torch.zeros(step, 1))
    target = Variable(torch.zeros(step, 1))
    target[-1, 0] = 1
    cx = Variable(torch.zeros(1, 2))
    hx = Variable(torch.zeros(1, 2))
    hidden = [hx, cx]
    for j in range(step):
        x = Variable(torch.zeros(1, 5))
        if j is 0:
            x += 1
        x = Variable(x.data)
        y, hx, cx = model(x, hidden)
        # print (hx.data.numpy())
        hidden = (hx, cx)
        print ('y', y)
        print ('hidden', hidden)
        yhat[j] = y[0]
    print ('done - the last output should be one')
    # learning
    optimizer.zero_grad()
    error = (yhat-target).pow(2).mean()
    print (error)
    error.backward()
    optimizer.step()
Gives the following output:
('y', Variable containing:
-0.7611
[torch.FloatTensor of size 1x1]
)
('hidden', (Variable containing:
-0.1136 0.1655
[torch.FloatTensor of size 1x2]
, Variable containing:
-0.2690 0.3413
[torch.FloatTensor of size 1x2]
))
done - the last output should be one
Variable containing:
3.1013
[torch.FloatTensor of size 1]
('y', Variable containing:
-0.7580
[torch.FloatTensor of size 1x1]
)
('hidden', (Variable containing:
-0.1126 0.1623
[torch.FloatTensor of size 1x2]
, Variable containing:
-0.2672 0.3351
[torch.FloatTensor of size 1x2]
))
done - the last output should be one
Variable containing:
3.0906
[torch.FloatTensor of size 1]
('y', Variable containing:
-0.7549
[torch.FloatTensor of size 1x1]
)
('hidden', (Variable containing:
-0.1115 0.1591
[torch.FloatTensor of size 1x2]
, Variable containing:
-0.2654 0.3289
[torch.FloatTensor of size 1x2]
))
done - the last output should be one
Variable containing:
3.0798
[torch.FloatTensor of size 1]
....
I trained it for large number of iterations and the loss converged.
|
st119137
|
Well here’s a pretty simple problem, how do you go from a
i) classification problem with a single output from your model, and a
loss = nn.CrossEntropyLoss(output, labels)
ii) to a regression problem with a mu, and sigma2 (mean & variance) output from your model, which then goes through
y_pred = torch.normal( mu, sigma2.sqrt() )
and
loss = F.smooth_l1_loss(y_pred, labels)
Basically I want to change an MNIST classifier into a regression exercise which outputs a Gaussian distribution. The bit that’s tripping me up is that the output y_pred is now stochastic, so I guess I need a .reinforce() on it, but I still don’t get how to do this.
Here’s the relevant bit of my code,
def forward(self, x):
    # Set initial states
    h0 = Variable(torch.zeros(self.num_layers*2, x.size(0), self.hidden_size))  # 2 for bidirection
    c0 = Variable(torch.zeros(self.num_layers*2, x.size(0), self.hidden_size))
    # Forward propagate RNN
    out, _ = self.lstm(x, (h0, c0))
    # Decode hidden state of last time step
    mu = self.mu( out[:, -1, :] )
    sigma2 = self.sigma2( out[:, -1, :] )
    return mu, sigma2

rnn = BiRNN(input_size, hidden_size, num_layers, num_classes)

# Loss and Optimizer
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)

# Train the Model
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        images = Variable(images.view(-1, sequence_length, input_size))
        labels = Variable( labels.float() )
        # Forward + Backward + Optimize
        optimizer.zero_grad()
        #outputs = rnn(images)
        mu, sigma2 = rnn(images)
        sigma2 = (1 + sigma2.exp()).log()  # ensure positivity
        y_pred = torch.normal( mu, sigma2.sqrt() )
        y_pred = y_pred.float()
        #y_pred = Variable( torch.normal(mu, sigma2.sqrt()).data.float() )
        loss = F.smooth_l1_loss( y_pred , labels )
        loss.backward()
        optimizer.step()
and the compile error,
File "main_v1.py", line 90, in <module>
loss.backward()
File "/home/ajay/anaconda3/envs/pyphi/lib/python3.6/site-packages/torch/autograd/variable.py", line 158, in backward
self._execution_engine.run_backward((self,), (gradient,), retain_variables)
File "/home/ajay/anaconda3/envs/pyphi/lib/python3.6/site-packages/torch/autograd/stochastic_function.py", line 13, in _do_backward
raise RuntimeError("differentiating stochastic functions requires "
RuntimeError: differentiating stochastic functions requires providing a reward
It’s modified from
yunjey/pytorch-tutorial/blob/master/tutorials/07%20-%20Bidirectional%20Recurrent%20Neural%20Network/main.py
OR, perhaps I’m making it more complicated than it needs to be with the Gaussian thing? Should I just stick an encoder on the output of the LSTM ???
Thanks a lot
|
st119138
|
I’m not sure I understand exactly what you want to do, but would the same reparametrisation trick as in the VAE paper and implementations (e.g. pytorch/examples) work with the “usual” procedure?
You would convert standard normal randoms to a variable and then transform them with mu and sigma2. That way, the randoms are fixed w.r.t. the differentiation.
|
st119139
|
Hi @tom that’s what I think too!
I’ll give it a try
I’m fed up with all this .reinforce stuff!
Just to be a bit more clear: what I want to learn is a mapping from images to single real numbers y_pred, and those real numbers should be as close to the labels/class indices of the images as possible, as measured by loss = F.smooth_l1_loss(y_pred, labels)
Cheers
|
st119140
|
Well, the error should be quite self-explanatory: you haven’t provided the reward to the stochastic output. Call .reinforce(reward) on y_pred, but before you cast it! Casts return a new Variable and it’s no longer a stochastic output!
|
st119141
|
Thanks @apaszke !!! That’s helpful.
My confusion is actually conceptual, in the way I set up the problem: I haven’t figured out what the reward should be in this context!
It’s nothing to do with PyTorch; I just haven’t thought carefully enough about what I’m actually trying to do here. I was carrying over an idea from continuous-action reinforcement learning, and it doesn’t seem to make sense in the context of regression.
|
st119142
|
Hi guys,
there are two multi-dimensional tensors,
a = [64, 10, 1, 1]; b = [64, 10, 28, 28]
I found that a*b can be computed in TensorFlow, and the result is [64, 10, 28, 28].
But when I used PyTorch, it didn’t work.
So what should I do?
Thanks
|
st119143
|
I assume that you are looking for element-wise multiplication where the last two dimensions are broadcasted for a?
If so, you can do this with a.expand_as(b) * b
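For example:
import torch

a = torch.randn(64, 10, 1, 1)
b = torch.randn(64, 10, 28, 28)
c = a.expand_as(b) * b   # element-wise product, result is 64 x 10 x 28 x 28
print(c.size())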
|
st119144
|
Hi,
This is a question or recommendation about the inputs of DataParallel.
Currently, DataParallel forces the inputs to have the same size. But is it possible to support data with different sizes?
I am using PyTorch to implement my detection and segmentation frameworks (computer vision). For example, in each image the number of objects is different. My current solution is to add some dummy entries so that the annotations of different images in the same batch have the same dimensions. I think this works for bounding boxes, but it is not a good solution for segmentation masks.
I would like to hear a better solution for this kind of problem.
|
st119145
|
I don’t think there’s any general way in which we could add support for batches of inputs of different sizes. You can always subclass DataParallel and override the scatter method, so that it splits the data differently.
|
st119146
|
Consider the following code:
from torch.autograd import Variable
t = torch.Tensor([2])
bool(torch.max(t) < 2)
Out[4]:
False
bool(torch.max(t) < 3)
Out[5]:
True
However, If you do the same with a Variable:
bool(torch.max(v) < 2)
Out[6]:
True
bool(torch.max(v) < 3)
Out[7]:
True
In the mean time I call v.data to overcome this.
|
st119147
|
It is because torch.max(v) returns another Variable containing a Tensor with a single element (so that you can use it and autograd will work as expected) while torch.max(t) returns a python number.
For conditions (that are not differentiable and thus you don’t want to keep a Variable) you can either do torch.max(v.data) or torch.max(v).data[0].
|
st119148
|
I want to initialize different convolution layers of a network with different methods. Is there any way to do this thing? Please help me.
|
st119149
|
I found a course on Deep Learning from Chinese University of Hong Kong useful. They have a tutorial on PyTorch that shows how we can initialize different layers.
Here is the link to the course.
Here is the link to the slides for the tutorial.
Here is the link to the code that is discussed during the tutorial.
Here is the link to the audio for the class session.
Slide number 14 talks about how to initialize parameters. In the audio at 36 minutes in, the instructor talks about this slide. I think this will help.
EDIT-1:
Here is the reply for the Weight Initialization questions that shows how to initialize a layer with Xavier init. Extrapolation from one layer to many layers should be simple.
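As a rough sketch (the layer layout and the particular init schemes below are just examples, not the course's code), you can walk over model.modules() and initialize each conv layer differently:
import math
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1),
)

convs = [m for m in model.modules() if isinstance(m, nn.Conv2d)]

# first conv layer: Gaussian init
convs[0].weight.data.normal_(0, 0.02)
convs[0].bias.data.zero_()

# second conv layer: Xavier-style uniform init, done by hand
fan_in = convs[1].in_channels * convs[1].kernel_size[0] * convs[1].kernel_size[1]
fan_out = convs[1].out_channels * convs[1].kernel_size[0] * convs[1].kernel_size[1]
bound = math.sqrt(6.0 / (fan_in + fan_out))
convs[1].weight.data.uniform_(-bound, bound)
convs[1].bias.data.zero_()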
|
st119150
|
To change the values of a subset of the elements of a tensor, in Theano we have inc_subtensor(); what is the equivalent in PyTorch?
|
st119151
|
Hi,
You should take a look at the set of functions called index_* they allow you to work with sub-tensors.
|
st119152
|
Yes, I noticed there is a torch.index_select() function. However, this function returns a new tensor, not a view, so if I do
t2 = torch.index_select(t1, axis, index)
t2 += 1.0
tensor t1 will stay unchanged. I eventually need t1 to be changed.
|
st119153
|
you can do standard numpy-like indexing:
Try this:
t1 = torch.randn(10, 5)
t2 = t1[:, 3]
t2.fill_(0)
print(t1)
|
st119154
|
@smth What if I need not just an integer index but a list of integers, e.g.
I want indexing like this
t1[:,[1,3,4]] += 1.0
However, this is not supported by PyTorch now; is there another way, or do I have to use a for loop?
|
st119155
|
And t1[:,[1,3,4]] += 1.0 is implemented, but instead of giving it [1, 3, 4] you need to wrap that in a LongTensor
|
st119156
|
@david-leon index_add_ is documented here with all the other index_* functions
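For instance, index_add_ does in-place addition on a subset of rows; a small sketch:
import torch

t1 = torch.zeros(5, 4)
index = torch.LongTensor([0, 2, 2])
src = torch.ones(3, 4)

# adds the rows of src into rows 0, 2, 2 of t1, in place (row 2 receives two additions)
t1.index_add_(0, index, src)
print(t1)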
|
st119157
|
@albanD Well, this is awkward how I missed it … and thanks a lot!
@apaszke I tried with t1[:,torch.LongTensor([1,3,4])] but no luck, error raised as
TypeError: indexing a tensor with an object of type LongTensor. The only supported types are integers, slices, numpy scalars and torch.LongTensor or torch.ByteTensor as the only argument.
My pytorch version is 0.1.10_2
|
st119158
|
Thanks, to be clear for future readers:
t1[torch.LongTensor([1,3,4])] works, but t1[torch.LongTensor([1,3,4]), :] does not.
And one more question which is related: if I want to do indexing like:
t1[[1,3,0], [1,3,4]]
what is the most efficient way to do this in PyTorch? In Theano we can do it the same as in numpy; however, PyTorch does not support this yet.
|
st119159
|
I can’t figure it out with gather, according to its syntax:
torch.gather(input, dim, index, out=None)
gather can only handle one dimensional indexing.
For the time being, I do the indexing via python loop:
a1 = torch.stack([t1[Idx1[i],Idx2[i]] for i in range(3)])
in which
Idx1 = torch.LongTensor([1,3,0]) and Idx2 = torch.LongTensor([1,3,4])
Apparently this is neither an efficient nor an elegant way.
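One way to avoid the Python loop (a sketch that assumes t1 is contiguous) is to flatten the tensor and turn the two index lists into flat offsets:
import torch

t1 = torch.randn(4, 5)
Idx1 = torch.LongTensor([1, 3, 0])   # row indices
Idx2 = torch.LongTensor([1, 3, 4])   # column indices

flat = Idx1 * t1.size(1) + Idx2      # offsets into the flattened tensor
a1 = t1.view(-1).index_select(0, flat)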
|
st119160
|
Hi Everyone,
@Veril @apaszke
I am new to PyTorch, and I would like to learn more about tensor operations by experimenting with the test_torch.py file, since it has lots of examples inside.
Basically, I want to run each test function on each tensor operation individually, so I can see how the operation works.
Here is how I tried, but failed:
# inside jupyter notebook
%cd /path to /pytorch-master/test
import torch
import test_torch
test1 = test_torch.TestTorch(methodName='runTest')
test1.test_dot()
However, there is no error, and nothing is returned at all. Then I went to test_torch.py and added two prints to the test_dot() function as below, but nothing showed.
def test_dot(self):
    types = {
        'torch.DoubleTensor': 1e-8,
        'torch.FloatTensor': 1e-4,
    }
    for tname, prec in types.items():
        v1 = torch.randn(100).type(tname)
        v2 = torch.randn(100).type(tname)
        res1 = torch.dot(v1, v2)
        print(res1)
        res2 = 0
        for i, j in zip(v1, v2):
            res2 += i * j
        print(res2)
        self.assertEqual(res1, res2)
Could anyone help me find a way to experiment on each test function individually?
Thanks a lot!
Daniel
|
st119161
|
I was having issues using Conv1d for a sequence learning application because of an assertion error about tensor dimensions. So I tried making a minimal code example which fails with this error.
Am I doing something wrong or is this a bug? The dimension of the input is [batchsize, channels, data] < 4, and this fits with the documentation if I can read the equation right.
There is a previous post here [which I can’t link because of the newbie link limit] that mentions the same error for conv2d without batch_size, and a possible hack around it with unsqueeze, but this shouldn’t be necessary for conv1d with batch_size.
|
st119162
|
this should be fixed on the master branch of pytorch.
v0.1.11 releasing in 2 days will have this fix.
|
st119163
|
Hi, guys
when I run my model on the CPU, the model occupies all CPU cores by default. When I export OMP_NUM_THREADS=1, it takes almost the same time for the same input. So I wonder why the former, which uses all CPU cores, gives no improvement over the latter?
|
st119164
|
I attempted to install both from source and from the binary, but there was no change. The OS is CentOS Linux release 7.3.16.11.
|
st119165
|
What’s your PyTorch version? Also, did you try running e.g. with 4 OMP threads? I think the problem appears because when you’re using all the cores, they’re competing over cache space and can’t proceed as effectively.
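You can also set the thread count from inside the script rather than through the environment variable; a small sketch:
import torch

torch.set_num_threads(4)   # limit CPU op parallelism; roughly the same effect as OMP_NUM_THREADS=4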
|
st119166
|
My PyTorch version is 0.1.10. As you said, setting OMP_NUM_THREADS=4 works well. Thanks for your reply.
|
st119167
|
I am writing some code for something similar to RoI pooling. The gradient propagates back correctly when I use the CPU but not on the GPU. Does anyone have any idea? Thanks a lot.
A demo is like this.
CPU:
out = torch.zeros(1, 3, 6, 6)
vout = Variable(out)
fmap = np.arange(3 * 6 * 6).reshape((1, 3, 6, 6))
tmap = Variable(torch.from_numpy(fmap).float(), requires_grad=True)
mask = torch.zeros(1, 6, 6).byte()
mask[0, 2:5, 2:5] = 1
mask = Variable(mask.expand(1, 3, 6, 6))
masked = tmap.masked_select(mask).view(3, -1)
pooled = torch.max(masked, 1)[0][:, 0]
vout[0, :, 0, 0] = pooled
# similar to the operation above
mask = torch.zeros(1, 6, 6).byte()
mask[0, 3:6, 3:6] = 1
mask = Variable(mask.expand(1, 3, 6, 6))
masked = tmap.masked_select(mask).view(3, -1)
pooled = torch.max(masked, 1)[0][:, 0]
vout[0, :, 1, 1] = pooled
a = torch.mean(vout)
a.backward()
print tmap.grad
GPU:
out = torch.zeros(1, 3, 6, 6)
vout = Variable(out).cuda()
fmap = np.arange(3 * 6 * 6).reshape((1, 3, 6, 6))
tmap = Variable(torch.from_numpy(fmap).float(), requires_grad=True).cuda()
mask = torch.zeros(1, 6, 6).byte().cuda()
mask[0, 2:5, 2:5] = 1
mask = Variable(mask.expand(1, 3, 6, 6))
masked = tmap.masked_select(mask).view(3, -1)
pooled = torch.max(masked, 1)[0][:, 0]
vout[0, :, 0, 0] = pooled
mask = torch.zeros(1, 6, 6).byte().cuda()
mask[0, 3:6, 3:6] = 1
mask = Variable(mask.expand(1, 3, 6, 6))
masked = tmap.masked_select(mask).view(3, -1)
pooled = torch.max(masked, 1)[0][:, 0]
vout[0, :, 1, 1] = pooled
a = torch.mean(vout)
a.backward()
print tmap.grad
The result is None.
I am using version 0.1.9.
|