st119868
|
Hello,
Given the following code:
def forward(self, x):
    x = x.transpose(2, 1)
    x = self.max_pool(x)
    x = x.view(x.size(0), -1)
    x = self.classifier(x)
Does exist an elegant way to get back the results of each layer? Or I should use a dictionary containing a key for each layer result?
Thank you
|
st119869
|
Hi @lcelona,
This post might be useful How to extract features of an image from a trained model
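One common approach, as a minimal sketch (model and x are placeholders; it assumes max_pool and classifier are registered submodules as in the snippet above, and the forward-hook API as in current PyTorch): register forward hooks that stash each layer's output in a dictionary keyed by name.
activations = {}

def save_activation(name):
    def hook(module, input, output):
        activations[name] = output
    return hook

model.max_pool.register_forward_hook(save_activation('max_pool'))
model.classifier.register_forward_hook(save_activation('classifier'))
out = model(x)
print(activations.keys())   # intermediate results, keyed by layer name
Alternatively, forward() can simply return a tuple (or dict) containing the intermediate tensors alongside the final output.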
|
st119870
|
Hi,
I saved a model of a simple convnet (created without using any loops in the class) using torch.save(). When I load the saved model using torch.load() and print the parameters (param.data, iterating through model.parameters()), different values are printed each time I run the code.
I used torch.cuda.manual_seed(1234) while training the model. But I guess it should not affect loading the saved model.
It might be a silly mistake on my part, but I could not figure it out.
|
st119871
|
Edit: There was a mistake in my code: I overwrote the loaded model with a model = ConvNet() definition. Please close this thread.
|
st119872
|
Hi,
Is there a way to use CUDA through numpy arrays inside the forward() and backward() functions in custom Modules? For example, if I want to compute fft2 on the GPU in the example at https://github.com/pytorch/tutorials/blob/master/Creating%20extensions%20using%20numpy%20and%20scipy.ipynb, how should I go about that?
Just converting the model and input to cuda results in a “RuntimeError: numpy conversion for FloatTensor is not supported” error at the result = abs(rfft2(numpy_input)) line.
class BadFFTFunction(Function):

    def forward(self, input):
        numpy_input = input.numpy()
        result = abs(rfft2(numpy_input))
        return torch.FloatTensor(result)

    def backward(self, grad_output):
        numpy_go = grad_output.numpy()
        result = irfft2(numpy_go)
        return torch.FloatTensor(result)
|
st119873
|
Numpy doesn’t support GPU arrays, so there’s no way to do this. If you know CUDA you can see how to use your own kernel in this gist.
|
st119874
|
Hi, I am using pytorch right now and there is one problem I encountered.
I previously used the torch7 framework, and when I used any criterion, I could explicitly get the df_do,
basically the error back-propagated to the network.
I am wondering, can I obtain that in pytorch?
I need to modify that during training.
When I see the example, the code is like this:
optimizer.zero_grad()
output = model(data)
loss = criterion(output, target)
loss.backward()
optimizer.step()
So the whole network is back-propagated when I call loss.backward()?
|
st119875
|
You can add hooks in the variables (or modules) of the network.
This is explained in this part of the documentation.
In your case, you could add a hook in the loss, to change the gradients as you wish.
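For example, a minimal sketch (assuming loss is the Variable returned by your criterion): a hook registered on a Variable receives the gradient flowing into it and may return a modified gradient.
loss.register_hook(lambda grad: grad * 0.5)   # scale every gradient flowing back from the loss
loss.backward()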
|
st119876
|
Thanks!
I wasn’t so sure what hook meant when I first read the document.
But as you point out, I think now I get it.
I will take a look at the document.
Thanks!
|
st119877
|
Hi Francisco ,
Can a user-defined hook function take any arguments?
For example, for different batches of input data, the hook might behave differently based on some input.
It seems I could not find related answers.
Thanks!
|
st119878
|
You need to make the hook a closure - just redefine it at every step, and use the data in its body:
optimizer.zero_grad()
output = model(data)

def hook(d_output):
    return d_output * data.mean()  # you can use the data here

output.register_hook(hook)
loss = criterion(output, target)
loss.backward()
optimizer.step()
|
st119879
|
There is an open issue on github tracking it: https://github.com/pytorch/pytorch/issues/494
|
st119880
|
So I read the inline documentation about mark_dirty() here:
github.com/pytorch/pytorch/blob/fb2d28f477c76bd94e3e3e9d2f424caa295d75c3/torch/autograd/function.py#L69

    def mark_dirty(self, *args):
        """Marks given tensors as modified in an in-place operation.
        This should be called at most once, only from inside the
        :func:`forward` method, and all arguments should be inputs.
        Every tensor that's been modified in-place in a call to :func:`forward`
        should be given to this function, to ensure correctness of our checks.
        It doesn't matter whether the function is called before or after
        modification.
        """
        self.dirty_tensors = args

    def mark_shared_storage(self, *pairs):
        """Marks that given pairs of distinct tensors are sharing storage.
        This should be called at most once, only from inside the
        :func:`forward` method, and all arguments should be pairs of [...]
I don’t quite understand what extra checks are needed for inplace operators. Would be great if the devs can give some hints. Thanks!
|
st119881
|
If you are doing an in-place operation and then further operate on the original Tensor, the backward gradients might be wrong.
Let’s take a small example: y = x^2 and z = x^2.
In this case the gradient is 2x, so the input is needed to compute the gradients in the backward pass.
If we do all the operations out-of-place, we can hold onto the value of x and it’s not a problem to compute correct gradients.
However, if we do the second operation in-place via z = x.pow_(2), where x is a Variable, we cannot compute the backward pass of y = x^2 correctly.
On all Variables we have an internal version counter to track these things, and mark_dirty ensures that this version counter is correctly updated.
If the user does an operation where the backward cannot be correctly computed, an error is thrown.
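A minimal sketch of the failure mode (the exact error message may differ between versions):
x = Variable(torch.randn(3), requires_grad=True)
y = x * 2
z = y * y            # the backward of z needs the value of y, so y is saved
y.add_(1)            # in-place update bumps y's version counter (via mark_dirty)
z.sum().backward()   # raises a RuntimeError because a saved Variable was modified in-place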
|
st119882
|
I would expect the output of RNN to be contiguous in memory. This doesn’t seem to be the case. For instance, the final output in this snippet has output.is_contiguous() == False.
train = True
num_layers = 1
bidirectional = True
bi = 2 if bidirectional else 1
x = Variable(torch.from_numpy(_x), volatile=not train)
batch_size, seq_length, input_dim = x.size()
rnn = nn.LSTM(input_dim, model_dim / bi, num_layers,
              batch_first=True,
              bidirectional=bidirectional,
              )
h0 = Variable(torch.zeros(num_layers * bi, batch_size, model_dim / bi), volatile=not train)
c0 = Variable(torch.zeros(num_layers * bi, batch_size, model_dim / bi), volatile=not train)
print(x.is_contiguous())
# True
# Expects (input, h_0):
# input => batch_size x seq_length x model_dim
# h_0 => (num_layers x bi[1,2]) x batch_size x model_dim
# c_0 => (num_layers x bi[1,2]) x batch_size x model_dim
output, (hn, cn) = self.encode(x, (h0, c0))
print(output.is_contiguous())
# False
|
st119883
|
Yeah I think that’s expected. Depending on the chosen backend, a contig or non-contig result may be returned. Why is that a problem?
|
st119884
|
Okay. I noticed the same behavior on CPU and GPU. I don’t have a specific problem, but assumed that if the input is contiguous then the output should/would be as well. Thanks for the response!
|
st119885
|
No, I don’t think we’ve ever guaranteed that. I’ll take a look at RNNs anyway, thanks for the notice!
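If a downstream op does need contiguous memory (e.g. a view), you can force a copy explicitly; a minimal sketch:
output = output.contiguous()              # no-op if already contiguous, otherwise copies
flat = output.view(output.size(0), -1)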
|
st119886
|
Let’s say I have this Template class
class Template(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, *input):
        pass
and I would like to build a model out of multiple instance of Template, given a specific number of layers.
class BuildModel(nn.Module):
    def __init__(self, number_of_layers):
        super().__init__()
        # self.layer_n = Template() for n in range(0, number_of_layers)

    def forward(self, *input):
        pass
I thought to implement this with a list of Template's instances, but soon realised this won’t go through the nn.Module's __setattr__. So, how to handle this situation of building a model out of multiple sub-modules given a layer count?
Idea
Hmm… Would setattr(self, 'layer_n', Template()) be a solution?
|
st119887
|
Hey @Atcold,
Have you looked at the torchvision models? I think the resnet model does what you want:
github.com/pytorch/vision/blob/master/torchvision/models/resnet.py

import torch.nn as nn
import math
import torch.utils.model_zoo as model_zoo


__all__ = ['ResNet', 'resnet18', 'resnet34', 'resnet50', 'resnet101',
           'resnet152']


model_urls = {
    'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth',
    'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',
    'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',
    'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',
    'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth',
}


def conv3x3(in_planes, out_planes, stride=1):
    """3x3 convolution with padding"""
(excerpt truncated)
|
st119888
|
So, if I got this right, are you suggesting to make a list of layers and expand it into a nn.Sequential()?
In my situation I need to instantiate a number of independent layers which I will later connect in a symmetrical fashion.
Would using setattr, which takes names as strings, do the job?
Let me jot down what I intend to do.
Say number_of_layers is 2.
b - a - A - B
If number_of_layers is 3, then
c - b - a - A - B - C
And so on. And each block is fed also with other inputs. So, it’s not a simple Sequential().
|
st119889
|
@Atcold you have reached the limit of my knowledge, can’t help you, sorry.
Just about the nn.Sequential point: I implemented a network with 3 branches in parallel without using nn.Sequential; I just end up summing the variables of my branches, which is pure autograd. Hope it can help.
|
st119890
|
what about
def template(num_layers):
    if num_layers == 1:
        return nn.Sequential(a, A)
    else:
        return nn.Sequential(f(num_layers), template(num_layers - 1), F(num_layers))
If I recall correctly, lists of modules/tensors within a module are tricky: https://pytorch.slack.com/archives/general/p1485447780001394
|
st119891
|
OK, maybe I’m simply confused (I’ve edited my previous posts, fixing some names).
My BuildModel class would like to build a model based on a variable number of layers Template (which come in two types, represented before with lowercase and uppercase single letters).
The most straightforward solution would be creating two lists lower and UPPER, where I can reference the instance of my Template class. Doing so would create troubles with the current non-recursive implementation of nn.Module.__setattr__().
OK, I’m going to write them down in an extended form and create a BuildOneLayerModel() and BuildTwoLayerModel(). Perhaps I can later see a pattern and generalise to a BuildModel(number_of_layer) class.
|
st119892
|
Oh, that would be great. I would have been mucking around with string based attribute settings, otherwise … which should be just fine, no?
|
st119893
|
Yes, string attributes are fine. I’ve just merged nn.ModuleList and nn.ParameterList. There are no docs right now, I’ll add them tomorrow. You can give a list of modules to the constructor and later access them using the regular indexing syntax with integers. Same goes for ParameterList.
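A minimal sketch of how that could look for the Template case above (assuming the nn.ModuleList just mentioned):
class BuildModel(nn.Module):
    def __init__(self, number_of_layers):
        super().__init__()
        # every Template is properly registered, so .parameters() and .cuda() see them
        self.layers = nn.ModuleList([Template() for _ in range(number_of_layers)])

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x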
|
st119894
|
From nvidia-smi, I can see that during training, my pyTorch script is only using one GPU. Is there a way to make it use the others? (I have two). Thanks.
|
st119895
|
Sure, but keep in mind the way to use multi-GPU depends on the application. You might be interested in the DataParallel module, or you can spread it over multiple GPUs by passing an additional argument to the .cuda() call - .cuda(1) will place the tensor/module on the 2nd GPU.
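A minimal sketch of both options (model, part1, part2, x and input_var are placeholder names):
# Option 1: data parallelism - each forward scatters the batch across GPUs 0 and 1
model = torch.nn.DataParallel(model, device_ids=[0, 1]).cuda()
output = model(input_var)

# Option 2: placing different parts of the model on different GPUs by hand
part1 = part1.cuda(0)
part2 = part2.cuda(1)
h = part1(x.cuda(0))
output = part2(h.cuda(1))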
|
st119896
|
Hi @apaszke, thanks, but just a few questions - I currently have a class where I define my net: So I have:
class DNN(nn.Module):
    def __init__(self):
        super(DNN, self).__init__()
        # stuff

    def forward(self, input):
        ...
Now in my main script, I call:
myDNN = DNN()
myDNN.cuda()
I took a look at the documentation for the DataParallel, and what I currently do is this:
myDNN = torch.nn.DataParallel(DNN(), device_ids=[0, 1, 2])
Now this seems to run, BUT it complains that it doesn’t know what “forward” is (the forward member function, that is).
So I am confused as to what exactly I should be wrapping in “DataParallel” - every member function of my DNN?
Thanks
|
st119897
|
No, you only wrap a single module in DataParallel and it will parallelize the whole subtree. Not sure what the error is, but it’s likely a bug in your model. I can’t help much without the message and stack trace.
|
st119898
|
I want to make sure that I am using the batchnorm functionality properly. I understand how to code up the layers to perform the batchnorm; however, one thing I want to get right is putting the net into “evaluation” mode, so that the parameters of the batch_norm do not keep getting updated when I am, say, periodically running through the validation set, or when I am done and running over new data, etc.
Therefore, as I understand it, I have a myNet object made from class Net(nn.Module). During my training loop I have to make sure that myNet.train() is called, and when I enter my validation loop every so often, I have to execute myNet.eval() and then run on that. Of course myNet.train() will be executed again when I go back to training. This also means, however, that once I have finished training the net, I should execute myNet.eval().
Is this correct? Thank you.
|
st119899
|
Exactly. This will affect dropout too. The activations at dropout layers need to be scaled during execution to account for the fact that certain units were deactivated during training - eval() takes care of this.
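A minimal sketch of the loop structure described above (myNet, num_epochs, train_loader and val_loader are assumed):
for epoch in range(num_epochs):
    myNet.train()               # batchnorm uses batch stats, dropout is active
    for data, target in train_loader:
        pass                    # forward / backward / optimizer.step()

    myNet.eval()                # batchnorm uses running stats, dropout is off
    for data, target in val_loader:
        pass                    # forward only

myNet.eval()                    # and once more before final inference on new data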
|
st119900
|
Hi,
I’m adapting the DCGAN models from the PyTorch examples and had a problem saving and loading the parameters.
Specifically, when I saved the parameters and then read them back in at a later stage, I would get a different output for a consistent input.
I debugged and found what I believed to be the problem, however I’m not sure of the cause. I am using a loop to define hidden layers of the MLP (I’m not using the conv nets):
self.nhl = 4
self.nh = [2, 2, 2, 2]
self.fc = []
for i in range(self.nhl - 1):
    self.fc.append(nn.Linear(self.nh[i], self.nh[i + 1]))
which gave the following output on repeated executions of the program (with a model saved earlier):
jordan [ src ] $ python main.py
First layer --> 0.0268119424582 0.0
hidden layer --> 0.0
hidden layer --> 0.558199584484
hidden layer --> 0.203323662281
output --> 0.305809795856 -0.139149516821 0.538217186928
jordan [ src ] $ python main.py
First layer --> 0.0268119424582 0.0
hidden layer --> 0.0
hidden layer --> 0.45568972826
hidden layer --> 0.0742047131062
output --> 0.425929844379 -0.366338938475 0.542093634605
jordan [ src ] $ python main.py
First layer --> 0.0268119424582 0.0
hidden layer --> 0.0
hidden layer --> 0.0
hidden layer --> 0.460717827082
output --> 0.345742940903 -0.0932856351137 0.694804787636
Note that the outputs at each of the layers changes on each execution (I confirmed the parameters loaded are all consistent on each iteration).
Forward is implemented as:
x = F.relu(self.input(_input))
print 'First layer -->', x.data[0][0], x.data[0][1]
for i in range(self.nhl - 1):
    x = F.relu(self.fc[i](x))
    print 'hidden layer -->', x.data[0][0]
output = self.output(x)
If I unroll the loops then it produces consistent output. Is there a bug in my loop or is there a reason you can’t use loops to define hidden layers?
|
st119901
|
Hi,
I think the reason is that nn.Module does not recognize any module/parameter stored in a plain Python list. You can check this by print(model).
Therefore, any parameters of those modules won’t show up when you call model.parameters(). I guess it also messes up the saving process, leading to the problem you described above.
Some solution can be found in this thread: List of nn.Module in a nn.Module
|
st119902
|
Is it possible to do the following matrix inplace modification using python for loop without breaking the autograd?
|
st119903
|
Yes it is. This should work:
L = Variable(torch.Tensor(i_size, j_size))
# it's important to not specify requires_grad=True
# it makes sense - you don't need grad w.r.t. the original L content,
# because it will be overwritten.
for i in range(i_size):
    for j in range(j_size):
        L[i, j] = ...  # compute the value here
But beware, it might be very very slow! Not only because you’ll be looping over elements in Python, but also because it will involve a lot of autograd ops to compute this, and there’s a constant overhead associated with each one. It’s not a huge problem if you’re doing relatively expensive computation like matrix multiplication or convolution, but for simple ops it can be more expensive than the computation alone.
In the vast majority of cases it is possible to rewrite the equations so that you don’t have to compute the individual elements in the loop, but you can use only a few matrix-matrix operations that achieve the same thing, but will compute the results in C using highly optimized routines. For examples you can look at how @fmassa rewrote the loss function in another thread.
|
st119904
|
Thanks for the explanation, it is very useful!
But I have another question, In my case, L is updated at each timestep, and the output at each timestep is calculated based on the new L and weights W.
loss = 0
L = Variable(torch.Tensor(i_size, j_size))
W = Parameter(torch.Tensor(20, 20))  # fake size
for t in range(time_step):
    for i in range(i_size):
        for j in range(j_size):
            L[i, j] = func(L[i, j])  # a function of the old L
    out = get_output(L, W)  # output computed from L and weights W
    loss += loss_func(out, label[t])
In this case, can I still get gradient of loss wrt the weights W using autograd? It seems that L is overwritten at each timestep.
|
st119905
|
Yes, of course, it will work. Autograd has built-in checks for in-place modifications, so if it doesn’t raise an error, it will work. Otherwise .clone() might help you. One important thing is that you shouldn’t reuse the same L Variable indefinitely - its history is going to get longer and longer. You can probably reuse it for a single training sequence, but then you should do something like repackage_hidden from the language modelling example, to allow the graph to get freed.
|
st119906
|
When running these code:
x = Variable(torch.FloatTensor([[1,2],[3,4]]))
z = torch.cat([x[0,1],x[1,0]],1)
I got an error message:
Traceback (most recent call last):
File "pra.py", line 85, in <module>
z = torch.cat([x[0,1],x[1,0]],1)
File "/usr/local/lib/python2.7/site-packages/torch/autograd/variable.py", line 748, in cat
return Concat(dim)(*iterable)
File "/usr/local/lib/python2.7/site-packages/torch/autograd/_functions/tensor.py", line 303, in forward
self.input_sizes = [i.size(self.dim) for i in inputs]
RuntimeError: dimension 2 out of range of 1D tensor at /Users/soumith/code/pytorch-builder/wheel/pytorch-src/torch/lib/TH/generic/THTensor.c:24
But similar code like this:
z = torch.cat([torch.FloatTensor([2]),torch.FloatTensor([3])],1)
or this:
x = Variable(torch.FloatTensor([[1,2],[3,4]]))
z = torch.cat([torch.diag(x[0,1]),torch.diag(x[1,0])],1)
works well.
I’m totally confused. I really need to extract some elements from one tensor and cat them into another tensor. What should I do?
Thanks!
|
st119907
|
The tensors you’re putting into cat are only 1D and you’re trying to concatenate them along dim=1 i.e. dimension 2. If you give 0 to cat it will work ok. Remember that python is 0-based.
I’m not sure why it works for regular tensors, I’ll need to look into that, but I think it’s unintended behaviour.
|
st119908
|
Thanks for the explanation!
So what should I do if I want to get a 2D tensor from a bunch of 1D tensor? I tried this:
x = Variable(torch.FloatTensor([[1,2],[3,4]]))
z = torch.cat([x[0,1],x[1,0]],0)
y = torch.cat([z,z],1)
But got this:
Traceback (most recent call last):
File “pra.py”, line 86, in
y = torch.cat([z,z],1)
File “/usr/local/lib/python2.7/site-packages/torch/autograd/variable.py”, line 748, in cat
return Concat(dim)(*iterable)
File “/usr/local/lib/python2.7/site-packages/torch/autograd/_functions/tensor.py”, line 303, in forward
self.input_sizes = [i.size(self.dim) for i in inputs]
RuntimeError: dimension 2 out of range of 1D tensor at /Users/soumith/code/pytorch-builder/wheel/pytorch-src/torch/lib/TH/generic/THTensor.c:24
update:
Never mind, I did that using tensor.unsqueeze().
|
st119909
|
Or, you can use torch.stack, e.g.:
>>> import torch
>>> x = torch.range(1, 2)
>>> y = torch.range(3, 4)
>>> torch.stack([x, y], 0)
1 2
3 4
[torch.FloatTensor of size 2x2]
|
st119910
|
I have a problem when I use dictionaries that contain parameters inside a module. Such parameters are not registered in the list of the module’s parameters.
is there a quick fix for this?
e.g. inside the init of a module:
self.waggrs = {}
self.list_relations = {}
for rel in range(10):
    self.waggrs[rel] = {'w': nn.Parameter(1. + torch.randn(2)), 'b': nn.Parameter(torch.zeros(2))}
    self.list_relations[rel] = nn.Linear(numfeatures, numrel)
model.parameters() does not contain these parameters.
|
st119911
|
Can you please explain what this function does?
def repackage_hidden(h):
    """Wraps hidden states in new Variables, to detach them from their history."""
    if type(h) == Variable:
        return Variable(h.data)
    else:
        return tuple(repackage_hidden(v) for v in h)
|
st119912
|
I see thank you. Is this a general practice for BPTT in pytorch or just memory efficiency?
|
st119913
|
A general practice. You want to cut off the history of the outputs, so that the memory eventually gets freed; otherwise there’s always going to be a reference to it. In the future it will be enough to call .detach(), but not yet.
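A minimal usage sketch in a truncated-BPTT training loop (assuming the repackage_hidden from the question above, a model whose forward returns (output, hidden), an init_hidden helper as in the language modelling example, and the usual train_loader/optimizer/criterion):
hidden = model.init_hidden(batch_size)
for data, targets in train_loader:
    hidden = repackage_hidden(hidden)   # detach from the previous batch's graph
    optimizer.zero_grad()
    output, hidden = model(data, hidden)
    loss = criterion(output, targets)
    loss.backward()
    optimizer.step()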
|
st119914
|
It seems that the torch topk and sort functions return both the indices and values for Tensor inputs, but they return only values for Variable inputs. When I look at the source code, the Topk function in autograd/_functions/tensor.py has an input parameter return_indices with a default value of False, but the topk function in autograd/variable.py does not use this parameter when calling Topk.
Is there a reason for this? I need to get the indices of smallest k elements of a Variable. Is there a way to do this currently in pytorch?
Thanks.
|
st119915
|
Ah, it is in fact a bug and we have to fix it. I’ve created an issue 360 to track it.
|
st119916
|
Hi All, I am having a strange problem on all our Titan X Maxwell (older one) with pytorch:
https://gist.github.com/culurciello/47acb81fde1d19082f2a59c73c3c2ce0
pytorch failure Titan X (Maxwell)
elab@gpu5 ~/pytorch-examples/imagenet [master*]$ python3 main.py -a alexnet /media/SuperSSD/test-dataset-train-val/
=> creating model 'alexnet'
Traceback (most recent call last):
File "main.py", line 286, in <module>
main()
File "main.py", line 129, in main
train(train_loader, model, criterion, optimizer, epoch)
File "main.py", line 165, in train
output = model(input_var)
File "/usr/local/lib/python3.4/dist-packages/torch/nn/modules/module.py", line 210, in __call__
(traceback truncated)
We are on: Ubuntu 14.04.5 LTS
I can confirm we can run Torch7 on those machines with no issues.
Also, on the new Pascal Titan X we do not have this problem. Maybe an issue with the driver?
Does anyone else have the same issue?
|
st119917
|
After @smth recommendation I updated nccl and was able to make one workstation work that way.
|
st119918
|
Hi - simple and dumb question, I have the loss that is computed on the GPU, but I want to extract it and use it back at the CPU. How do I transfer a [torch.cuda.FloatTensor of size 1 (GPU 0)] back to the CPU?
Thanks!
|
st119919
|
There’s a function .cpu() which is probably what you need.
Probably you could also just access by indexing: variable_lives_on_cpu = mycudatensor[0]
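A minimal sketch (assuming loss_gpu is the [torch.cuda.FloatTensor of size 1]):
loss_cpu = loss_gpu.cpu()   # copies the tensor to host memory
value = loss_cpu[0]         # plain Python number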
|
st119920
|
Dumb question, but how do I make a scalar Variable? I’d like to add a trainable parameter to a vector, but I keep getting size-mismatch problems.
# Works, but I can't make the 2 a
# parameter, so I can't do gradient descent on it
Variable(torch.from_numpy(np.ones(5))) + 2
Thanks!
|
st119922
|
Instead of having a number, you should instead have a one-element vector encapsulated in a Variable.
Note that we don’t have broadcasting implemented in pytorch yet, but it will be implemented soon, so for the moment you need to expand the tensor by hand.
x = Variable(torch.from_numpy(np.ones(5)))
y = Variable(torch.Tensor([2]).double()) # numpy is double by default
z = x + y.expand(x.size())
If you need to increase the number of dimensions of the tensor, you can use the unsqueeze function
|
st119923
|
Hi,
I am trying to implement SqueezeNet and train it on CIFAR-10 data. I have the code ready, but there seems to be some problem: my training set accuracy never increases, though the loss curve looks reasonable.
In SqueezeNet, the fire module requires us to concatenate the 1x1 convolution and 3x3 convolution outputs; to achieve this I have used the torch.cat function. Below is the code for the fire module; I want to know if it’s right.
class fire(nn.Module):
    def __init__(self, inplanes, squeeze_planes, expand_planes):
        super(fire, self).__init__()
        self.conv1 = nn.Conv2d(inplanes, squeeze_planes, kernel_size=1, stride=1)
        self.relu1 = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(squeeze_planes, expand_planes, kernel_size=1, stride=1)
        self.conv3 = nn.Conv2d(squeeze_planes, expand_planes, kernel_size=3, stride=1, padding=1)
        self.relu2 = nn.ReLU(inplace=True)

        # using MSR initialization
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))

    def forward(self, x):
        x = self.conv1(x)
        x = self.relu1(x)
        out1 = self.conv2(x)
        out2 = self.conv3(x)
        out = torch.cat([out1, out2], 1)
        out = self.relu2(out)
        return out
|
st119924
|
Thanks
should I upload the training and the complete model code as well?
https://github.com/gsp-27/pytorch_Squeezenet
|
st119925
|
I wrote it by mistake.
Did you get a chance to look at it? Is there any mistake?
|
st119926
|
Now I am trying to use the 55-epoch learning rate schedule used by @Soumith in his imagenet-multiGPU code, but I am facing a weird issue: it gives me a segfault. When I prepare the optimizer with a static learning rate it runs fine.
Though it gives me poor accuracy, there is no segfault.
Updated code:
training_code
|
st119927
|
I think the learning rate itself was the issue, it must be doing division by zero somewhere, I changed the learning rate and now it seems to be working fine.
|
st119928
|
No, segfaults are never caused by zero division. If you get a small repro script I can fix it.
|
st119929
|
Is there a chance they are caused by not zeroing the grad parameters?
The script that I used for training is:
def paramsforepoch(epoch):
    p = dict()
    regimes = [[1, 18, 1e-3, 5e-4],
               [19, 29, 5e-3, 5e-4],
               [30, 43, 1e-3, 0],
               [44, 52, 5e-4, 0],
               [53, 1e8, 1e-4, 0]]
    for i, row in enumerate(regimes):
        if epoch >= row[0] and epoch <= row[1]:
            p['learning_rate'] = row[2]
            p['weight_decay'] = row[3]
    return p

avg_loss = list()
fig1, ax1 = plt.subplots()
fig2, ax2 = plt.subplots()

# train the model
# TODO: Compute training accuracy and test accuracy
# TODO: train it on some data and see if it overfits.
# TODO: train the data on final model

# create a temporary optimizer
optimizer = optim.SGD(net.parameters(), lr=args.learning_rate, momentum=args.momentum, weight_decay=0.0005)

def adjustlrwd(p):
    for param_group in optimizer.state_dict()['param_groups']:
        param_group['lr'] = p['learning_rate']
        param_group['weight_decay'] = p['weight_decay']

# train the network
def train(epoch):
    # set the optimizer for this epoch
    if epoch > 0 or epoch > 18 or epoch > 29 or epoch > 43 or epoch > 52:
        p = paramsforepoch(epoch)
        print("Configuring optimizer with lr={:.3f} and weight_decay={:.3f}".format(p['learning_rate'], p['weight_decay']))
        adjustlrwd(p)
    ###########################################################################

    global avg_loss
    correct = 0
    net.train()
    for b_idx, (data, targets) in enumerate(train_loader):
        # trying to overfit a small data
        if b_idx == 100:
            break

        if args.cuda:
            data.cuda(), targets.cuda()
        # convert the data and targets into Variable and cuda form
        data, targets = Variable(data), Variable(targets)

        # train the network
        optimizer.zero_grad()
        scores = net.forward(data)
        loss = F.nll_loss(scores, targets)

        # compute the accuracy
        pred = scores.data.max(1)[1]  # get the index of the max log-probability
        correct += pred.eq(targets.data).cpu().sum()

        avg_loss.append(loss.data[0])
        loss.backward()
        optimizer.step()

        if b_idx % args.log_schedule == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, (b_idx+1) * len(data), len(train_loader.dataset),
                100. * (b_idx+1) / 100, loss.data[0]))
            # also plot the loss, it should go down exponentially at some point
            ax1.plot(avg_loss)
            fig1.savefig("Squeezenet_loss.jpg")

    # now that the epoch is completed plot the accuracy
    accuracy = correct / 6400.0
    print("training accuracy ({:.2f}%)".format(100*accuracy))
    ax2.plot(100*accuracy)
    fig2.savefig("Training-test-acc.jpg")
|
st119930
|
Hi,
I have an issue: when I pass the same image through the model twice, I get different outputs.
I definitely did something wrong, but can you help me?
a = transforms.ToTensor()(transforms.Scale(225)(imagesList[0]))
t = torch.Tensor(2,3,224,224)
t[0] = a
t[1] = a
mod = models.alexnet(pretrained=True)
out = mod(Variable(t))
print(out[0])
print(out[1])
print((out.data[0] == out.data[1]).all())
I have this output :
Variable containing:
-1.5060
0.1141
1.3188
⋮
-4.5860
-0.8714
4.6969
[torch.FloatTensor of size 1000]
Variable containing:
-1.6095e+00
-1.8007e-01
2.6349e+00
⋮
-5.1909e+00
-2.2386e+00
5.0429e+00
[torch.FloatTensor of size 1000]
False
EDIT: I have the same problem if I do two forward passes with a batch size of 1.
|
st119931
|
That’s because there’s Dropout inside the model and it applies a different mask to each batch element at each forward. If you want to disable it call model.eval() to put it in evaluation mode.
As a side note, don’t use == to compare floats. Floating point operations are slightly inaccurate and can differ by very small amounts. Use (tensor1 - tensor2).abs().max() to find the maximal difference between the outputs (and assume they’re equal if it’s very small, e.g. 1e-10).
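A minimal sketch of both points, reusing mod and t from the snippet above:
mod.eval()                                   # disable dropout for deterministic outputs
out = mod(Variable(t))
max_diff = (out.data[0] - out.data[1]).abs().max()
print(max_diff)                              # should be tiny (e.g. < 1e-6)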
|
st119932
|
How can I calculate gradients with respect to a loss defined by the gradients?
For example, in tensorflow, I could call the function tf.gradients() two times, using the first result in the second function call.
My specific problem is this, I am implementing TRPO, and I have:
flat_grad <-flattened gradients of network parameters w.r.t. a loss
x <- a tensor with the same shape as flat_grad
and I need the gradients of the network parameters w.r.t (flat_grad * x)
In the process of flattening the gradients, I had to convert everything into a numpy array, which broke the backprop chain. How can I solve this problem?
|
st119933
|
You can flatten the gradients using torch.cat([g.view(-1) for g in grads], 0). This is supported by autograd.
Regarding taking grad of grad, we will support it (that’s why var.grad is a Variable too), but it’s still work in progress. At this point there’s no way to do this, but hopefully it will be implemented in the next month. Sorry!
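A minimal sketch of the flattening (assuming model is your network and loss.backward() has already populated the .grad fields):
grads = [p.grad for p in model.parameters()]
flat_grad = torch.cat([g.view(-1) for g in grads], 0)   # still a Variable, so autograd-friendly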
|
st119934
|
Most torch examples have the input tensor assigned to a variable named input. This shadows the built-in Python function of the same name.
A discussion is needed about adopting common Python naming conventions in pytorch.
|
st119935
|
Are you going to be using the input builtin in the middle of your model? I doubt it. It’s a stupid name to reserve by a language, and I don’t think it’s ever going to be used in this context by anyone, so I’m 100% ok with overriding it in that case.
I know that many people will consider that a “bad practice”, but I seriously don’t think that giving up readability for the sake of satisfying a rule that doesn’t make a lot of sense in this context is worth it.
|
st119936
|
Regardless of discussions about the meaningfulness of reserving the symbol “input”, I always have a very bad feeling whenever I see “input” overwritten in a Python program. Quite often this results in meaningless error messages when, for instance, an undefined variable error should’ve been thrown.
Also, built-in linters in text editors tend to not highlight “input” as undefined or as unused.
|
st119937
|
Well, no one said it is a convention. We don’t encourage any particular naming convention when writing your own models, so feel free to use any other name if you find it better/safer. I think it mostly doesn’t matter, since the model logic tends to be quite simple, and any usage of input will likely immediately raise a TypeError. But as I said, we don’t plan to establish any conventions for naming arguments.
|
st119938
|
Sure, I was just suggesting that not overwriting reserved symbols will bring an increase in the quality of the examples (however small that increase is).
|
st119939
|
If someone sends a PR I’ll accept it, but it’s not a priority for us now, so we’ll do it later otherwise.
|
st119940
|
self.value_optimizer.zero_grad()
preValue = self.value_net(state).data
nextValue = self.value_net(next_state).data
expectedValue = (self.gamma * nextValue) + reward
preValue = Variable(preValue)
expectedValue = Variable(expectedValue)
loss = F.smooth_l1_loss(preValue, expectedValue)
loss.backward()
self.value_optimizer.step()
self._execution_engine.run_backward((self,), (gradient,), retain_variables)
RuntimeError: there are no graph nodes that require computing gradients
When I run this code, the error occurs in loss.backward(). What can I do to solve it?
|
st119941
|
self.value_optimizer.zero_grad()
# Here, when you unpack the data, you detach the data from the graph.
# No backpropagation through the model is possible, because you got rid
# of the reference to the graph.
preValue = self.value_net(state).data
nextValue = self.value_net(next_state).data
expectedValue = (self.gamma * nextValue) + reward
# Here, you repack the tensors in Variables, but the history of
# operations is not retained - they are leaf Variables.
# Also you didn't specify that they require gradients (they don't
# by default).
preValue = Variable(preValue)
expectedValue = Variable(expectedValue)
loss = F.smooth_l1_loss(preValue, expectedValue)
# At this point your whole graph looks like this - no model in it:
#   preValue   expectedValue
#        \        /
#      smooth_l1_loss
#            |
#          loss
loss.backward()
self.value_optimizer.step()
If I understand correctly, it seems that you want to do Q-learning. You might want to take a look at our DQN tutorial.
|
st119942
|
When I try to use only
preValue = self.value_net(state) # here without .data
preValue = Variable(preValue) # get rid of this line.
it works. Maybe it is the mechanism of the Variable and tensor is not matched for .backward()
|
st119943
|
Try this:
self.value_optimizer.zero_grad()
preValue = self.value_net(state)
nextValue = self.value_net(next_state).detach() # don't backprop this way
expectedValue = (self.gamma * nextValue) + reward
loss = F.smooth_l1_loss(preValue, expectedValue)
loss.backward()
self.value_optimizer.step()
|
st119944
|
I have tried it, but this problem occurs.
expectedValue = (self.gamma * nextValue) + reward
  File "/home/tommy/anaconda2/lib/python2.7/site-packages/torch/autograd/variable.py", line 676, in __add__
    return self.add(other)
  File "/home/tommy/anaconda2/lib/python2.7/site-packages/torch/autograd/variable.py", line 286, in add
    return self._add(other, False)
  File "/home/tommy/anaconda2/lib/python2.7/site-packages/torch/autograd/variable.py", line 282, in _add
    assert not torch.is_tensor(other)
AssertionError
|
st119945
|
Hi,
I was wondering if gradients are automatically computed if numpy is used in the forward() function of an nn.Module, i.e. torch tensors are converted to numpy arrays, a numpy op is applied, and then we convert back to torch tensors.
Are there any implications of doing so?
Thanks!
|
st119946
|
Hi,
No, if you use numpy operations inside the forward of your module, they won’t create nodes in the computation graph of the network, and thus won’t be differentiated.
You can write a new Function though that uses numpy internally, but you need to provide the backward computation for it. Here is a nice introductory tutorial explaining how to use numpy to create new Functions.
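For instance, a minimal sketch of such a Function that squares its input with numpy and supplies the matching backward by hand (an illustration only, not the tutorial's example):
import numpy as np
import torch
from torch.autograd import Function, Variable

class NumpySquare(Function):
    def forward(self, input):
        self.save_for_backward(input)
        return torch.from_numpy(input.numpy() ** 2)

    def backward(self, grad_output):
        input, = self.saved_tensors
        grad_input = 2 * input.numpy() * grad_output.numpy()   # d(x^2)/dx = 2x, chain rule
        return torch.from_numpy(grad_input)

x = Variable(torch.randn(5), requires_grad=True)
y = NumpySquare()(x)
y.sum().backward()
print(x.grad)   # should equal 2 * x.data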
|
st119947
|
I have downloaded and split the CIFAR-10 data using the given torch datasets API. Now I want to separate out training and validation data: say, out of the 50000 samples, I want 49000 as training examples and 1000 as validation examples. How do I achieve this?
Also, say I keep the batch size at 64; then with 50000 training samples the last batch will not have the same number of samples as the other batches. How is this case handled?
|
st119948
|
Never mind, thanks.
I figured out a way: the CIFAR-10 dataset loader has a len attribute, and if I run the loop only up to (len - 1) it should do the trick.
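Another option, as a sketch (assuming your torch version ships torch.utils.data.sampler.SubsetRandomSampler, and that cifar_train is the CIFAR-10 training set object): give two DataLoaders samplers over disjoint index ranges of the same dataset.
from torch.utils.data.sampler import SubsetRandomSampler

indices = list(range(50000))
train_loader = torch.utils.data.DataLoader(cifar_train, batch_size=64,
                                           sampler=SubsetRandomSampler(indices[:49000]))
val_loader = torch.utils.data.DataLoader(cifar_train, batch_size=64,
                                         sampler=SubsetRandomSampler(indices[49000:]))
As for the last batch: the DataLoader simply yields a final batch with fewer than 64 samples, so the loop handles it automatically.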
|
st119949
|
Hello all,
I train a simple RNN network to predict a label on each input timestep on a huge random dataset.
I record memory usage while training, and notice that it is increasing linearly with dataset size:
[plot: mem-graph.png, memory usage over training time]
(VSIZE = Virtual Memory recorded by Ubuntu, %MEM: How much % RAM it takes, x-axis = time in second)
My training script for reference:
class testNet(nn.Module):
    def __init__(self):
        super(testNet, self).__init__()
        self.rnn = nn.RNN(input_size=200, hidden_size=1000, num_layers=1)
        self.linear = nn.Linear(1000, 100)

    def forward(self, x, init):
        x = self.rnn(x, init)[0]
        y = self.linear(x.view(x.size(0)*x.size(1), x.size(2)))
        return y.view(x.size(0), x.size(1), y.size(1))

net = testNet()
init = Variable(torch.zeros(1, 4, 1000))
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.9)

total_loss = 0.0
for i in range(10000):  # 10000 mini-batches
    input = Variable(torch.randn(1000, 4, 200))  # Seqlen = 1000, batch_size = 4, feature = 200
    target = Variable(torch.LongTensor(4, 1000).zero_())
    optimizer.zero_grad()
    output = net(input, init)
    loss = criterion(output.view(-1, output.size(2)), target.view(-1))
    loss.backward()
    optimizer.step()
    total_loss += loss[0]
print(total_loss)
I expect memory usage not increasing per mini-batch. What might be the problem? (Correct me if my script is wrong)
|
st119950
|
hi @NgPDat
I’m trying to reproduce your results.
Can you tell me the units of VSIZE? Is it bytes?
And %MEM, is it a percentage of the system memory?
So far, my run is pretty stable at around 105MB, after 400 mini-batches, I will wait for some time.
|
st119951
|
I think I see the problem. You have to remember that loss is a Variable, and indexing Variables always returns a Variable, even if they’re 1D! So when you do total_loss += loss[0] you’re actually making total_loss a Variable, and adding more and more subgraphs to its history, making it impossible to free them, because you’re still holding a reference. Just replace total_loss += loss[0] with total_loss += loss.data[0] and it should be back to normal.
|
st119952
|
Work like a charm! Thank you.
I think I have better understanding of Variable now.
|
st119953
|
VSIZE is in kilobytes.
Yes, %MEM is percentage of the system memory.
The whole script for recording memory usage is from here: http://stackoverflow.com/questions/7998302/graphing-a-processs-memory-usage
|
st119954
|
Hello, I get this weird error while running my code.
class Residual(nn.Module):
    def __init__(self, dropout, shape, negative_slope, BNflag=False):
        super(Residual, self).__init__()
        self.dropout = dropout
        self.linear1 = nn.Linear(shape[0], shape[1])
        self.linear2 = nn.Linear(shape[1], shape[0])
        self.dropout = nn.Dropout(self.dropout)
        self.BNflag = BNflag
        self.batch_normlization = nn.BatchNorm1d(shape[0])
        self.leakyRelu = nn.LeakyReLU(negative_slope=negative_slope, inplace=False)

    def forward(self, X):
        x = X
        if self.BNflag:
            x = self.batch_normlization(x)
        x = self.leakyRelu(x)
        x = self.dropout(x)
        x = self.linear1(x)
        if self.BNflag:
            x = self.batch_normlization(x)
        x = self.leakyRelu(x)
        x = self.dropout(x)
        x = self.linear2(x)
        x = torch.add(x, X)
        return x

res = Residual(0.5, [100, 200], 0.2, False)
I haven’t passed any data yet.
|
st119955
|
I am trying to find out which Variable result is the biggest. However, I found that a “Variable” cannot be compared with other types, such as an ndarray or a FloatTensor. Is there anything wrong with Variable, or what should I do?
|
st119956
|
What are you trying to do? Comparison operators are broken on Variables right now (there’s already an issue on that), but all this should be achievable with regular methods too (e.g. eq, ne, le, gt, etc.).
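For example, a minimal sketch (assuming v and w are Variables of the same size):
mask = v.gt(w)            # elementwise v > w via the method form
mask2 = v.data > w.data   # or compare the underlying tensors directly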
|
st119957
|
Oh, thanks. Maybe another way to solve this problem is transforming the Variable into an ndarray. However, I think using comparison operators on a Variable is more convenient.
|
st119958
|
@cumttang, if you want to convert your Variable v into a ndarray you can type v.data.numpy().
If you simply want to compare it with a FloatTensor you can use v.data.
|
st119959
|
Is there a way one can inspect the gradInputs of the network?
Or is there no longer such a thing?
I mean, I just computed my loss J and would like to check what’s the signal that is driven into the network.
net = Net()
h = net(x)
J = loss(h, y)
J.backward()
|
st119960
|
What’s a signal driven into a network? You want to inspect gradients w.r.t. intermediate outputs? Use module or variable hooks.
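A minimal sketch with a Variable hook, reusing the snippet from the question (h is the network output, J the loss):
grads = {}

def save_grad(grad):
    grads['dJ_dh'] = grad   # gradient of the loss w.r.t. the network output

h.register_hook(save_grad)
J.backward()
print(grads['dJ_dh'])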
|
st119961
|
Signal: derivative of the loss w.r.t. the output of the network.
OK, I’ll check these hooks. Thank you.
|
st119962
|
Hello! In https://github.com/OpenNMT/CTranslate we are parsing the luatorch serialized format in C for complete inference in pure C code. Are the specifications of the pytorch serialized format available somewhere so that we can do the same with the new format?
|
st119963
|
I can write up the specs, but they’re much more complicated now. Serialized files are actually tars containing 4 other files - one listing tensors, one listing storages, one with storage data and one with system info (long size, endianness, etc.). I think for this use case it would be much simpler to take advantage of some more standardized format, with a C library supporting loading from it (such as HDF5 or protobuf). Do you think that would be ok?
|
st119964
|
Thanks for your answer. Yes, I think this would be great - but can’t you use torch native serialization, since you have it at hand in the TH* libraries? The 4 containers you are talking about could simply be put in a Lua table, and this would avoid dependencies on other libraries. I understand that the pytorch objects (variables) would not be compatible with lua module objects, but at least they would be readable.
|
st119965
|
No we can’t. These are not containers, these four are binary blobs.
In Lua torch serialization has been written from scratch, in PyTorch we depend on pickle to save the objects, and while it allows us to have very robust serialization that conforms to python standards, it gives us less control, and is unreadable by Lua. I don’t think it’s worth changing that. It’s a complex matter, and conforming even to some of the old Lua standards would hurt usability for all Python users.
HDF5 is quite widespread and will provide you with a way to save weights in Python and load them in Lua.
|
st119966
|
ok I understand - I thought it was a specific serialization. I untarred the .pt file and the unpickling recipe seems to be explicit in serialization.py. Since our end goal is reading from C, I think we can deal with untar/pickle, and from C dump back a lua format if we want to go back to lua/nngraph. In addition, we do need the compute graph structure, so you would need to make a new format just for that (hdf5 or other serialization). I will give the untarring/unpickling from C a shot.
|
st119967
|
Keep in mind that PyTorch checkpoints don’t contain the model structure - pickle doesn’t save the code, and it’s the runtime that defines the graph. You can’t go from a pickled model to nngraph or any other representation using only the .pt file. You’d need to dump the graph structure for a single iteration first (starting from output.creator).
|