st118168
|
Have you looked at this: https://github.com/longcw/yolo2-pytorch/blob/master/darknet.py 106
|
st118169
|
Hi! I’m reading up on PyTorch, and would like to understand a bit better how custom RNN cells work.
Does save_for_backward() work for operations like RNN cells where the same operation instance is forwarded multiple times before backward is called, or does anything special need to be done for this use case?
Are there any limitations as to what operations can be used in an RNN cell, or can it be assumed that all built-in operations will save/restore state as needed (incl. any cuDNN state, if accelerated) such that they work in this multiple forward, then multiple backward case?
Thanks!
|
st118170
|
Does save_for_backward() work for operations like RNN cells where the same operation instance is forwarded multiple times before backward is called, or does anything special need to be done for this use case?
Yes. It works fine.
This is because RNN cells are of type nn.Module, but save_for_backward is really implemented inside autograd.Function. All autograd Functions inside a graph only have single instantiations. If you have K time steps, you will have K instantiations of the relevant functions. nn.Module are abstractions on top of autograd Functions that make this seamless.
if you are referring to nn.RNN there are limitations documented by the API itself. But you can create your own RNN as you wish.
|
st118171
|
Thanks!
I noticed that the Dropout function has chosen to save its mask as self.noise rather than via self.save_for_backward(noise). Is there any difference between the two? I see plenty of other functions that are using save_for_backward() even when it's only a single tensor being saved.
|
st118172
|
self.save_for_backward (or ctx.save_for_backward in the new function API) is needed for saving any tensors that are one of the inputs to the function; for intermediate values (including noise masks) you should use assignment to self.attr or ctx.attr instead.
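For illustration, here is a minimal sketch in the new-style ctx API (this is not the actual Dropout source; the function name and scaling are illustrative):
import torch
from torch.autograd import Function

class ScaledDropout(Function):
    @staticmethod
    def forward(ctx, input, p=0.5):
        # the input tensor is one of the function's inputs, so it goes through save_for_backward
        ctx.save_for_backward(input)
        # the noise mask is an intermediate value, so it is stashed as a plain attribute
        ctx.noise = input.new(input.size()).bernoulli_(1 - p).div_(1 - p)
        return input * ctx.noise

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_tensors
        return grad_output * ctx.noise, None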
|
st118173
|
The state attribute in the opt. class is a defaultdict, which helps to add new parameter states to the state dict as they are used during training. But once you save the optimizer state_dict and load it back, the state attribute is a plain dict, not a defaultdict, which implicitly assumes that all the parameters in the network were already present in the state dict before saving.
I believe that when loading the opt. from a saved state_dict, the state attribute should be a defaultdict. Such behavior was also noticed by this post.
In the optimizer’s param key of the param_groups the order of the parameters (in which they were given to the optimizer’s init) matters.
In load_state_dict the snippet shows this :
id_map = {old_id: p for old_id, p in
          zip(chain(*(g['params'] for g in saved_groups)),
              chain(*(g['params'] for g in groups)))}
state = {id_map.get(k, k): v for k, v in state_dict['state'].items()}
Now consider a model (when using, say, the Adam optimizer):
class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.p1 = nn.Linear(2, 3, False)
        self.p2 = nn.Linear(3, 4, False)
Now after saving, if the order in which the parameters are defined in the model changes, i.e. if I change the class to have
    self.p2 = nn.Linear(3, 4, False)
    self.p1 = nn.Linear(2, 3, False)
the loaded optimizer’s state for p1 will be mapped to p2 and vice-versa. I tried this and this indeed happens which is wrong and now training cannot proceed (step() will, rightly so, give an error).
The nn.Module class is robust to such behavior as it uses parameter names instead of id ordering.
Shouldn’t the optimizer also use parameter names instead of ids and relying on the ordering in which they are supplied to the optimizer when initializing?
|
st118174
|
this is totally worth fixing IMO, can you open an issue with the exact contents on https://github.com/pytorch/pytorch 26
|
st118175
|
Hello,
I am trying a new algorithm that I have implemented outside of PyTorch. Is this a legit way to copy out and copy in PyTorch model parameters or am I causing problems under the hood?
My code assumes that modules/parameters returned from named_children() and parameters() methods always return those items in the same order.
Pack into a numpy vector doing something like:
Qmods = [mod for name, mod in self.named_children() if name in self.Qlayers]
Qparams = []
for mod in Qmods:
    Qparams += [parm for parm in mod.parameters()]
Ws = np.hstack([np.ravel(netp.data.numpy()) for netp in Qparams])
return Ws
Do stuff to Ws
Unpack from numpy vector doing something like:
startIdx = 0
for qmod in [mod for name, mod in self.named_children() if name in self.Qlayers]:
    for netp in qmod.parameters():
        stopIdx = startIdx + np.prod(netp.data.size())
        netp.data.copy_(Tensor(np.reshape(Ws[startIdx:stopIdx], netp.data.size())))
        startIdx = stopIdx
I also do something similar to grab gradients from the network which are used to update Ws outside of PyTorch. I realize this may be inefficient. Just want to make sure I am not breaking PyTorch in some way by accessing/overwriting network weights and gradients in this way.
Thank you, very much, for your guidance.
|
st118176
|
accessing the weights / gradients should be fine this way.
Something you can do that might be nicer is to get the model’s state_dict
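Roughly, that route could look like this sketch (the tiny model and names are illustrative, not from the post above):
import numpy as np
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 3), nn.Linear(3, 2))   # stand-in for the real network

# pack: flatten every tensor in the state_dict into one vector
params = model.state_dict()                                # OrderedDict: name -> tensor
Ws = np.hstack([p.cpu().numpy().ravel() for p in params.values()])

# ... modify Ws outside PyTorch ...

# unpack: copy slices of Ws back into the same named tensors
offset = 0
for name, p in params.items():
    n = p.numel()
    p.copy_(torch.from_numpy(Ws[offset:offset + n]).float().view(p.size()))
    offset += n
model.load_state_dict(params)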
|
st118177
|
def softmax(input, axis=1):
input_size = input.size()
trans_input = input.transpose(axis, len(input_size)-1)
trans_size = trans_input.size()
input_2d = trans_input.contiguous().view(-1, trans_size[-1])
soft_max_2d = F.softmax(input_2d)
soft_max_nd = soft_max_2d.view(*trans_size)
return soft_max_nd.transpose(axis, len(input_size)-1)
|
st118178
|
Thanks for this snippet!
Just in case, make sure you do import torch.nn.functional as F first
|
st118179
|
Are there any (theoretical) reasons for not taking the batch average loss in the VAE example?
Right now both the KL divergence and the BCE aren’t being averaged.
github.com
pytorch/examples/blob/master/vae/main.py#L84 15
return self.decode(z), mu, logvar
model = VAE()
if args.cuda:
model.cuda()
def loss_function(recon_x, x, mu, logvar):
BCE = F.binary_cross_entropy(recon_x, x.view(-1, 784))
# see Appendix B from VAE paper:
# Kingma and Welling. Auto-Encoding Variational Bayes. ICLR, 2014
# https://arxiv.org/abs/1312.6114
# 0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
# Normalise by same number of elements as in reconstruction
KLD /= args.batch_size * 784
return BCE + KLD
|
st118180
|
I don't think there are strong theoretical reasons. Joost (original author of that code) was porting some code over exactly.
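For anyone who wants the averaged version anyway, here is one way to average both terms over the batch (a sketch, not the repo's code, written with the current reduction keyword):
import torch
import torch.nn.functional as F

def loss_function_batch_mean(recon_x, x, mu, logvar):
    bce = F.binary_cross_entropy(recon_x, x.view(-1, 784), reduction='sum')
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return (bce + kld) / x.size(0)   # mean over the batch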
|
st118181
|
Hi
On CPU the model I implemented worked ok, but when converting it to cuda, it gives the following error
Traceback (most recent call last):
File “train_main.py”, line 281, in
main()
File “train_main.py”, line 270, in main
trn_loss, trn_acc = _trn_epoch(model, epoch, batchid)
File “train_main.py”, line 235, in _trn_epoch
loss.backward()
File “/home/hgodhia/miniconda2/envs/anlp/lib/python2.7/site-packages/torch/autograd/variable.py”, line 146, in backward
self._execution_engine.run_backward((self,), (gradient,), retain_variables)
File “/home/hgodhia/miniconda2/envs/anlp/lib/python2.7/site-packages/torch/nn/_functions/thnn/auto.py”, line 175, in backward
update_grad_input_fn(self._backend.library_state, input, grad_output, grad_input, *gi_args)
TypeError: CudaSoftMax_updateGradInput received an invalid combination of arguments - got (int, torch.cuda.FloatTensor, torch.FloatTensor, torch.cuda.FloatTensor, torch.cuda.FloatTensor), but expected (int state, torch.cuda.FloatTensor input, torch.cuda.FloatTensor gradOutput, torch.cuda.FloatTensor gradInput, torch.cuda.FloatTensor output)
I have checked thoroughly through the code, and wherever I used tensors I changed them to cuda tensors, like
if torch.cuda.is_available():
zero_t = zero_t.cuda(0)
end_idxs_flat = end_idxs_flat.cuda(0)
including the embedding layers etc
rest assured the input to loss
loss = loss_function(scores, a)
are cuda tensor variables
loss_function = nn.NLLLoss()
if torch.cuda.is_available():
loss_function = loss_function.cuda(0)
Silly error, RESOLVED.
Thanks
|
st118182
|
Hi everyone,
Simple question: How can I add regularization (especially dropout) for LSTMCell? It doesn’t have this option like LSTM layer
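One common workaround (a sketch, not a built-in option; the sizes are made up) is to apply dropout to the cell's output yourself between time steps:
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F

cell = nn.LSTMCell(10, 20)
x = Variable(torch.randn(5, 3, 10))      # (seq_len, batch, input_size)
h = Variable(torch.zeros(3, 20))
c = Variable(torch.zeros(3, 20))
for t in range(x.size(0)):
    h, c = cell(x[t], (h, c))
    h = F.dropout(h, p=0.5, training=True)   # dropout on the hidden state between steps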
|
st118183
|
I am new to PyTorch. When I try to create my own module with PyTorch, I meet some problems. First, I want to write a backward() function in Python, but I cannot use nn.SpatialAdaptiveMaxPooling().backward(). Then how can I get the grad_input? Thank you for your help.
|
st118184
|
You don’t need to write your own backward function. You just need to write your forward function and your backward function will be generated automatically via autograd. See http://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html 141 for an example.
|
st118185
|
Thank you for your reply; I am sorry that I did not express my meaning correctly. Actually, I want to write a new layer, not a module. Then how do I write the backward function?
|
st118186
|
If I want to implement some functions running on the GPU which, however, cannot be expressed entirely with the official functions and tensors, do I have to write the CUDA files myself? @smth
|
st118187
|
(you don't need to tag me in your question, don't do that).
Yes write your own files. See https://github.com/pytorch/extension-ffi 13
|
st118188
|
I can successfully calculate high order grad on a simple formula, but fail in nn.Module
x = autograd.Variable(torch.randn(2, 2), requires_grad=True)
y = x ** 2
x_grad = autograd.grad(outputs=y, inputs=x,
grad_outputs=torch.ones(y.size()),
create_graph=True, only_inputs=True)[0]
z = x_grad ** 2
autograd.grad(outputs=z, inputs=[x],
grad_outputs=torch.ones(z.size()),
only_inputs=False)
The above code will be correct
net = nn.Linear(2, 2)
x = autograd.Variable(torch.randn(2, 2), requires_grad=True)
y = net(x)
x_grad = autograd.grad(outputs=y, inputs=x,
grad_outputs=torch.ones(y.size()),
create_graph=True, only_inputs=True)[0]
z = x_grad ** 2
autograd.grad(outputs=z, inputs=[x],
grad_outputs=torch.ones(z.size()),
only_inputs=False)
This will raise error
RuntimeErrorTraceback (most recent call last)
<ipython-input-191-3b4da0254135> in <module>()
10 autograd.grad(outputs=z, inputs=[x],
11 grad_outputs=torch.ones(z.size()),
---> 12 only_inputs=False)
/home/users/gang.cao/env/lib/python2.7/site-packages/torch/autograd/__init__.pyc in grad(outputs, inputs, grad_outputs, retain_graph, create_graph, only_inputs)
144 return Variable._execution_engine.run_backward(
145 outputs, grad_outputs, retain_graph,
--> 146 inputs, only_inputs)
147
148
RuntimeError: there are no graph nodes that require computing gradients
|
st118189
|
When I translate the documents of PyTorch, I find some errors in them. As below:
1. In avg_pool3d (http://pytorch.org/docs/nn.html#avg-pool3d), the kt should be dt.
2. In avg_pool2d (http://pytorch.org/docs/nn.html#avg-pool2d), the explanation of the parameters ceil_mode and count_include_pad cannot be understood.
I want to keep updating this topic to improve the quality of the English and Chinese documents. What do you think? @smth @apaszke
|
st118190
|
thanks for pointing these out. Yes, please do send more issues with the docs by updating this topic / adding new comments.
I’ll fix these docs in master.
|
st118191
|
these suggestions are now fixed via https://github.com/pytorch/pytorch/commit/7dd8571bc6002315b4de72a2b5b03a189527a379 2
|
st118192
|
I opened a new issue in the repository. Then we (the Chinese document translation group) will gradually add the mistakes we find to this issue. And next week our translation will be finished.
|
st118193
|
Hello all,
I’ve never worked with Torch, only tf/th…, but the workflow seemed very pleasant to me so I decided to try to learn. I know that pytorch is a very new project, but it would be neat if someone wrote a “pytorch for dummies”.
For instance, the following code results in an error:
(class model above)
self.affine1(10, 100)
(...)
x = np.random.random(10)
ipt = torch.from_numpy(x)
probs = model(Variable(ipt))
Then
TypeError: addmm_ received an invalid combination of arguments - got (int, int, torch.DoubleTensor, torch.FloatTensor), but expected one of:
* (torch.DoubleTensor mat1, torch.DoubleTensor mat2)
(...)
What is the proper way to prepare data in PyTorch?
Thanks in advance and looking forward to use pytorch (the performance of the cart pole in openai gym was better than with other frameworks).
Obs: As I didn’t see any other mechanic-ish question topic, nor a question flag, I hope this is not off topic for the forum.
|
st118194
|
hi there.
In this particular case, thanks for your feedback. We will improve our onboarding process by making more newbie-friendly tutorials.
In this case, x = np.random.random(10) returns a numpy ndarray of dtype=float64.
When you call ipt = torch.from_numpy(x), ipt is now a torch.DoubleTensor.
However, your model is expecting a torch.FloatTensor, so you simply do:
ipt = torch.from_numpy(x)
ipt = ipt.float()
That’s all
|
st118195
|
Hi @gabrieldlm,
I’m planning to release a series of video tutorials on PyTorch, something along the line of my torch-Video-Tutorials 229.
Would this meet your request?
Please, let me know of anything you think it should be included, so that I can better plan the structure of my lessons.
|
st118196
|
Hi @Atcold,
Great material! I was thinking of something like video 2.2, a tutorial about how PyTorch works.
@smth,
Thanks for the reply, but after changing the tensor type to float I get the following error:
RuntimeError: matrices expected, got 1D, 2D tensors at /Users/soumith/anaconda/conda-bld/pytorch0.1.6_1484755992574/work/torch/lib/TH/generic/THTensorMath.c:857
and the output for printing the ipt variable is the following:
print ipt
(...)
[torch.FloatTensor of size 10]
|
st118197
|
@gabrieldlm pytorch does not support broadcasting yet. You are trying to send ipt, which is a 1D vector of size 10, into a matrix-multiply, which expects a 2D Tensor.
You can do: ipt = ipt.view(1, 10) if it fits your usecase.
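Putting the two fixes (float conversion and a 2D view) together, a quick sketch:
import numpy as np
import torch
from torch.autograd import Variable

x = np.random.random(10)
ipt = torch.from_numpy(x).float().view(1, 10)   # FloatTensor with an explicit batch dimension
# probs = model(Variable(ipt))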
|
st118198
|
Hey @gabrieldlm, I created a new repo: pytorch-Video-Tutorials 147.
Feel free to edit the wiki with your suggestions.
|
st118199
|
Haha, I’m reading the docs and the source code too! It’s been a nice journey so far
|
st118200
|
This is the first place I ended up in when searching the error: matrices expected, got 1D, 2D tensors at /py/conda-bld/pytorch_1490983232023/work/torch/lib/TH/generic/THTensorMath.c:1224
And even from smth’s comment and the documentation I’d say it’s not immediately obvious that the solution, at least in my case, is that the dimensionality of the input needed to be 3D rather than 2D. It does explain why .view() makes frequent appearances in some of the tutorials.
|
st118201
|
I use time.clock to evaluate the forward time, with code like:
start_t = clock()
output = net(Variable(input))
forward_time += (clock() - start_t) * 1000
I found the measured time is not very stable.
The forward time of
start_t = clock()
output = net(Variable(input))
forward_time += (clock() - start_t) * 1000
output = output.data[0][0].cpu().numpy()
is faster than the time of
start_t = clock()
output = net(Variable(input))
forward_time += (clock() - start_t) * 1000
out = output.data[0][0]
Although I just put the clock() before and after “output = net(xxx)”, the other code after the timing will also affect the measured forward time.
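If the net runs on the GPU, this is expected: CUDA kernels are launched asynchronously, so the clock can stop before the forward pass has actually finished, and whatever touches the output next pays the remaining cost. A sketch of a more robust measurement (assuming CUDA is used; net, input and forward_time are the variables from the snippets above):
import time
import torch
from torch.autograd import Variable

torch.cuda.synchronize()                 # make sure pending work is done
start_t = time.perf_counter()
output = net(Variable(input))
torch.cuda.synchronize()                 # wait for the forward pass to finish
forward_time += (time.perf_counter() - start_t) * 1000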
|
st118202
|
Suppose I have a model M with some parameters P. I train this using Adam and save the state_dict of the model and the optimizer. I now add a few more parameters to the model to make it Mn, and the parameters are Pn.
To load the variables from the partial model that is saved I do the following
state = Mn.state_dict()
lstate = torch.load(model_path)
state.update(lstate)
Mn.load_state_dict(state)
This ensures that the old variables are loaded from the saved model and the new ones are in their initialized state.
If I try to do the same with the optimizer, it complains that the number of parameters in the new optimizer’s state_dict() is more than the loaded state dict.
What is the recommended method to restore the partial optimizer variables?
|
st118203
|
we haven’t really thought this through for the optimizer.
Best to either look into the optimizer internals (they’re not that complicated and they’re in python), or just construct a new optimizer.
|
st118204
|
I have been looking into the internals, but it's kind of tricky as the optimizer uses ids, which change with each run, instead of parameter names as in the model state dict.
I guess for now the best/easiest solution is to make a new optimizer.
I have a couple of follow-up questions about the optimizer that I will ask in a separate post, as they are more generic.
|
st118205
|
torch.nn.functional.softmax requires an input with two dimensions. But now I have an input with three dimensions (0, 1, 2), and I want to softmax this input along dimension 2.
for example:
s
Variable containing:
(0 ,.,.) =
5 1 1 1
(1 ,.,.) =
1 1 1 1
(2 ,.,.) =
1 1 1 1
[torch.cuda.FloatTensor of size 3x1x4 (GPU 0)]
after softmax(s):
F.softmax(s)
Variable containing:
(0 ,.,.) =
0.9647 0.3333 0.3333 0.3333
(1 ,.,.) =
0.0177 0.3333 0.3333 0.3333
(2 ,.,.) =
0.0177 0.3333 0.3333 0.3333
[torch.cuda.FloatTensor of size 3x1x4 (GPU 0)]
this result is not what I want, because it applies softmax along dimension 0.
How can I apply softmax along dimension 2? In other words, I have a multi-dimensional input and want to apply softmax along a dimension which can be specified.
|
st118206
|
Why softmax function can't specify the dimension to operate
def softmax(input, axis=1):
input_size = input.size()
trans_input = input.transpose(axis, len(input_size)-1)
trans_size = trans_input.size()
input_2d = trans_input.contiguous().view(-1, trans_size[-1])
soft_max_2d = F.softmax(input_2d)
soft_max_nd = soft_max_2d.view(*trans_size)
return soft_max_nd.transpose(axis, len(input_size)-1)
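With that helper, the 3x1x4 example above becomes, for instance:
out = softmax(s, axis=2)   # softmax over the last dimension
(Newer PyTorch versions also accept a dim argument directly in F.softmax.)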
|
st118207
|
Hi,
I’m new to pytorch, and I find that pytorch’s model file size is much smaller than torch’s. For example, resnet-18:
pytorch’s model is 44.7 MB, torch’s model is 138 MB.
Why there is so much difference ?
Thanks in advance.
|
st118208
|
torch’s models still save gradWeight/gradBias, I think. So pytorch saves half the size on that.
|
st118209
|
Yes, that’s right; after I set the gradWeight/gradBias to nil, the two models are nearly the same size.
Thanks!
|
st118210
|
Hi, I am wondering if it is possible to include an IF-ELSE statement in the LSTM part of the code. Here is the section:
if flag == True:
print("reconstruction")
h_t, c_t = self.lstm1(z, (h_t, c_t))
h_t2, c_t2 = self.lstm2(c_t, (h_t2, c_t2))
else:
print("prediction")
h_t, c_t = self.lstm1(z_null, (h_t, c_t))
h_t2, c_t2 = self.lstm2(c_t, (h_t2, c_t2))
where z_null is a all-zero vector with the same shape as z.
So what I want to do is that at each time-step, the LSTM could either have an input or only use the information from previous hidden state.
Since PyTorch is a dynamic network tool, I assume it should be able to do this. But during my experiment, it seems like the LSTM actually gets the input at each time-step, regardless of the IF-ELSE statement.
Could someone help me with this question? Thanks!
|
st118211
|
You probably want to use nn.LSTMCell or explicitly pass one timestep of data at a time to nn.LSTM.
|
st118212
|
Hi James,
Thanks for your answer. Yes, I am using the LSTMCell. The structure is similar to
self.lstm = nn.LSTMCell(in_dim, out_dim)
The vanilla LSTM works fine. Can we use an IF-ELSE statement to control if the LSTMCell gets only the previous hidden state or gets both hidden state and an actual input?
Again, thanks for your help!
|
st118213
|
Hi Ruotian,
Sorry I just posted the core part of the code. The flag is randomly generated to be either True or False.
flag = random.choice([True, False])
And yes, it goes to both the “reconstruction” part and the “prediction” part of the code. Is this type of code supported by PyTorch? I noticed that there is an example code using the similar idea https://github.com/jcjohnson/pytorch-examples#pytorch-control-flow–weight-sharing 22.
Thank you for your help.
Eric
|
st118214
|
Yeah, what you’re trying to do is absolutely supported and should work. If you’re still running into problems, can you paste a little more of your code? Also make sure everything that needs to happen for every iteration (like the random.choice) is in forward and not __init__.
|
st118215
|
Hi James,
Thanks for your explanation. Here is more of my code. (sorry for some copyright issue, I can’t post the entire code here). From the printed information, I can see that the LSTM does go to both modes (flag == true or flag == false). But LSTM sees the input regardless of the mode.
class LSTM_MODEL(nn.Module):
def __init__(self):
super(LSTM_MODEL, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(3380, 1024)
self.fc12 = nn.Linear(2048,1024)
self.fc21 = nn.Linear(1024, 512)
self.fc22 = nn.Linear(1024, 512)
self.fc3 = nn.Linear(512,1024)
self.fc4 = nn.Linear(1024, 3380)
self.convtranspose1 = nn.ConvTranspose2d(10, 1, kernel_size = 5)
self.convtranspose2 = nn.ConvTranspose2d(20, 10, kernel_size = 5)
self.relu = nn.ReLU()
self.sigmoid = nn.Sigmoid()
self.unpool = nn.MaxUnpool2d(2)
self.lstm1 = nn.LSTMCell(512, 1000)
self.lstm2 = nn.LSTMCell(1000, 512)
def feature(self, x):
......
def decode(self, z, idx1, idx2):
.......
def forward(self, input, future = 0, train_flag = 1):
if train_flag == 1:
print("training")
else:
print("testing")
outputs = []
h_t = Variable(torch.zeros(200, 1000).float(), requires_grad=False)
c_t = Variable(torch.zeros(200, 1000).float(), requires_grad=False)
h_t2 = Variable(torch.zeros(200, 512).float(), requires_grad=False)
c_t2 = Variable(torch.zeros(200, 512).float(), requires_grad=False)
FEATURE_null = Variable(torch.zeros(200,512).float(),requires_grad=False)
if args.cuda:
h_t = h_t.cuda()
c_t = c_t.cuda()
h_t2 = h_t2.cuda()
c_t2 = c_t2.cuda()
FEATURE_null = FEATURE_null.cuda()
###############################
# LSTM
###############################
for i, input_t in enumerate(input.chunk(input.size(1), dim=1)):
input_t = input_t.squeeze(1).contiguous()
x_feature, idx1, idx2 = self.feature(input_t)
# important: arbitrarily choose 0 or 1.
if train_flag == 1: # training: arbitrarily choosing mode
flag = random.choice([True, False])
else: # test: prediction
flag = False
if i == 0: #( first time step always gets True Flag)
flag = True
## the following is the lstm part.
if flag == True:
print("flag is True")
h_t, c_t = self.lstm1(x_feature, (h_t, c_t))
h_t2, c_t2 = self.lstm2(c_t, (h_t2, c_t2))
else:
print("flag is False")
h_t, c_t = self.lstm1(FEATURE_null, (h_t, c_t))
h_t2, c_t2 = self.lstm2(c_t, (h_t2, c_t2))
recon_x = self.decode(c_t2, idx1, idx2)
outputs += [recon_x]
for i in range(future):# if we should predict the future
h_t, c_t = self.lstm1(FEATURE_null, (h_t, c_t))
h_t2, c_t2 = self.lstm2(c_t, (h_t2, c_t2))
recon_x = self.decode(c_t2, idx1, idx2)
outputs += [recon_x]
outputs = torch.stack(outputs, 1).squeeze(1)
return outputs, mu_list, logvar_list, lstm_hidden
model = VAE()
if args.cuda:
model.cuda()
def loss_function():
.........
|
st118216
|
Something equivalent to the following numpy code.
Thanks!
import numpy as np
data = np.random.rand(4,5,6)
first = np.array([0,1,2])
second = np.array([2,3,1])
res = data[first, second]
print(res)
|
st118217
|
this is not yet possible, but we are working on Advanced Indexing. We will have it by release 0.2
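In the meantime, one workaround sketch for the example above is to flatten the first two dimensions and use index_select on the combined offsets:
import torch

data = torch.rand(4, 5, 6)
first = torch.LongTensor([0, 1, 2])
second = torch.LongTensor([2, 3, 1])

flat = data.view(4 * 5, 6)                       # merge the first two dims
res = flat.index_select(0, first * 5 + second)   # rows (0,2), (1,3), (2,1) -> shape (3, 6)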
|
st118218
|
you can follow this issue for progress: https://github.com/pytorch/pytorch/issues/1080 632
|
st118219
|
Hi,
Is it possible to implement the following architecture in pytorch?
github.com
cmusatyalab/openface/blob/master/models/openface/nn4.def.lua 20
-- Model: nn4.def.lua
-- Description: Implementation of NN4 from the FaceNet paper.
-- Input size: 3x96x96
-- Number of Parameters from net:getParameters() with embSize=128: 6959088
-- Components: Mostly `nn`
-- Devices: CPU and CUDA
--
-- Brandon Amos <http://bamos.github.io>
-- 2015-09-18
--
I couldn’t find SpatialConvolutionMM, SpatialCrossMapLRN, and nn.Normalize.
Thanks
|
st118220
|
CrossMapLRN is missing and isn’t wrapped yet. ConvolutionMM is Conv2d, and nn.Normalize is straightforward to implement using torch.* operations.
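For instance, an L2 normalization over dim 1 can be written as a small sketch like this (not an official module):
import torch

def l2_normalize(x, eps=1e-10):
    # roughly what nn.Normalize(2) did in Lua torch: unit L2 norm per row
    return x / (x.norm(2, 1, keepdim=True) + eps)

x = torch.randn(4, 128)
print(l2_normalize(x).norm(2, 1))   # ~1.0 for every row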
|
st118221
|
Hi Smth, thanks for your reply.
But OpenFace has an inception layer; is there any such layer in PyTorch?
|
st118222
|
Hi Lee,
Have you worked this out? I have met the same situation these days; maybe we can discuss how to implement this in PyTorch.
|
st118223
|
Hey
I want to change (Vanilla) RNN model by applying Tensor Factorization method on it. The equations are as:
(Vanilla) RNN:
h(t) = tanh( W[hx].x(t) + U[hh].h(t-1) + b(h))
FTRNN:
h(t) = tanh( W[hfx] diag (W[fxi].I) W[fxx].x(t) + U[hfh] diag (U[fhi].I) U[fhh].h(t-1) + b(h))
whereas W[hfx] has dimensions nh x nf ; W[fxi] has dimensions nf x |I| ; W[fxx] has dimensions nf x nx, and similarly for the hidden-to-hidden layer
fx & fh are tensor factors for input and hidden layer respectively for input to hidden layer (as represented by W) and hidden to hidden layer ( as represented by U).
Hyperparameters are nh = hidden size, nfx = factor size (input-to-hidden layer), nfh = factor size (hidden-to-hidden layer).
Moreover, x = input features (given to each time step of the model) and I = constant/scalar (it is also given to the model).
I am a beginner in PyTorch. Would someone please help me or give any suggestions on implementing FTRNN in PyTorch, or do I have to change the source code of torch.nn.modules.rnn?
I shall be very thankful to you
|
st118224
|
look at Implementation of Multiplicative LSTM for some pointers on implementing custom rnns.
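For a rough idea of the shape of such a cell, here is a sketch of the factored update from the post above (all names and initializations are illustrative, not a vetted implementation):
import torch
import torch.nn as nn

class FTRNNCell(nn.Module):
    # h_t = tanh( W_hfx diag(W_fxi I) W_fxx x_t + U_hfh diag(U_fhi I) U_fhh h_{t-1} + b_h )
    def __init__(self, nx, nh, nfx, nfh, ni):
        super(FTRNNCell, self).__init__()
        self.W_hfx = nn.Parameter(torch.randn(nh, nfx) * 0.01)
        self.W_fxi = nn.Parameter(torch.randn(nfx, ni) * 0.01)
        self.W_fxx = nn.Parameter(torch.randn(nfx, nx) * 0.01)
        self.U_hfh = nn.Parameter(torch.randn(nh, nfh) * 0.01)
        self.U_fhi = nn.Parameter(torch.randn(nfh, ni) * 0.01)
        self.U_fhh = nn.Parameter(torch.randn(nfh, nh) * 0.01)
        self.b_h = nn.Parameter(torch.zeros(nh))

    def forward(self, x, h, I):
        # x: (batch, nx), h: (batch, nh), I: (ni,) constant vector
        fx = (x @ self.W_fxx.t()) * (self.W_fxi @ I)   # factored input path, (batch, nfx)
        fh = (h @ self.U_fhh.t()) * (self.U_fhi @ I)   # factored hidden path, (batch, nfh)
        return torch.tanh(fx @ self.W_hfx.t() + fh @ self.U_hfh.t() + self.b_h)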
|
st118225
|
Hi,
I am confused while reading the documentation of CrossEntropyLoss. Are n and nClasses the same thing? If not, what is n?
“It is useful when training a classification problem with n classes”
…
“input has to be a 2D Tensor of size batch x n.
This criterion expects a class index (0 to nClasses-1) as the target for each value of a 1D tensor of size n”
|
st118226
|
For instance, using a sequence of images to classify, for instance, a man running or walking?
thanks
joseph
|
st118227
|
If you search for PyTorch and “Image Captioning” you will find a lot of examples:
https://github.com/ritchieng/the-incredible-pytorch 782
https://github.com/ruotianluo/neuraltalk2.pytorch 432
https://github.com/yunjey/pytorch-tutorial/tree/master/tutorials/09%20-%20Image%20Captioning 451
|
st118228
|
Hello! I have a question concerning online learning with PyTorch. Usually, samples are supplied to networks as a list. In online learning, in contrast, training and prediction use single samples at a time.
With feed forward networks you can simply set a batch size of one and supply the network with a data set consisting of only one sample. With a recurrent network, however, I need to keep the activation of the recurrent neurons for the next incoming sample.
The pseudocode below illustrates what I want to do with a time series iterator that consists of individual samples.
input_value = 0.
while True:
target_value = time_series.next()
output_value = network.train(input_value, target_value)
error = output_value - target_value
input_value = target_value
I got the impression pyTorch might be appropriate to achieve this with minimal invasion. In every example I found, however, the samples were provided in one big data set.
Can you do this reasonably easy with pyTorch? A code example would be highly appreciated. Thanks a lot!
|
st118229
|
Your pseudocode is already basically correct - here’s a simplified case:
input = some_input
hidden = model.init_hidden()
for i in range(seq_len):
input, hidden = model(input, hidden)
# Do something with final states...
Here’s a more thorough example modified from a tutorial 68
for i in range(target_length):
output, hidden = decoder(input, hidden)
loss += criterion(output, target[i])
# Create new input from max value
top_v, top_i = output.data.topk(1)
top_i = top_i[0][0]
input = Variable(torch.LongTensor([[top_i]]))
In this case the input is an embedding and the output is from log_softmax, so to get the next input you have to create a new input from the maximum value.
|
st118230
|
Hello Sean! Thanks for your answer! Unfortunately, I am having a bit of trouble understanding it. First, I was expecting to read unsqueeze somewhere because of the notice box here. Second, although reading the tutorial you reference, I cannot see why the input is a maximum instead of simply the current value in the time series.
if it is possible, a minimal working example would help me out greatly. thanks a lot!
|
st118231
|
Here’s a working example that uses teacher forcing half of the time, and trains on its own outputs the other half: https://gist.github.com/spro/ef26915065225df65c1187562eca7ec4 63
You often see unsqueeze because Linear layers expect B x N tensors while RNNs expect S x B x N (sequence, batch, size). In the referenced tutorial the inputs are in a different shape from the outputs (inputs are character indexes in a LongTensor, outputs are probabilities in a FloatTensor) so you have to do some manual work to convert it.
|
st118232
|
maybe we have a different understanding of online learning? i think about this. 77
|
st118233
|
Yeah I misunderstood what you were asking about. If you had some blocking get_latest_sample function, this should work - mostly the same as offline training but using a input of size 1 (or some chosen chunk size). Importantly, keep the last input and hidden state around for future time steps (I also updated the above gist to pass hidden as an argument to rnn.forward)
last_targets = get_latest_sample()
hidden = None
while True:
inputs = last_targets
targets = get_latest_sample()
outputs, hidden = model(inputs, hidden)
optimizer.zero_grad()
loss = criterion(outputs, targets)
loss.backward()
optimizer.step()
last_targets = targets
|
st118234
|
Hello Sir,
In my project I’m trying to use 3D convloutional networks
on some video frames, but when I perform a forward propagation, the program
shows this error:
Traceback (most recent call last):
File "C3D_training.py", line 309, in <module>
main()
File "C3D_training.py", line 152, in main
train(train_loader, model, criterion, optimizer, epoch)
File "C3D_training.py", line 187, in train
output = model.forward(input_var)
File "C3D_training.py", line 41, in forward
x = F.relu(self.conv1a(x))
File "/usr/local/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/anaconda3/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 330, in forward
self.padding, self.dilation, self.groups)
File "/usr/local/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 88, in conv3d
return f(input, weight, bias)
RuntimeError: 2D or 4D weight tensor expected, but got: [64 x 3 x 3 x 3 x 3] at /py/conda-bld/pytorch_1490903321756/work/torch/lib/THNN/generic/SpatialConvolutionMM.c:15
I don’t know exactly what the origin of this error is,
or what it means.
any help will be appreciated.
Thank you,
|
st118235
|
you are sending 5D tensor for some reason instead of 4D. Maybe you are using nn.Conv2d where you actually need to use nn.Conv3d
|
st118236
|
I am actually using conv3d and maxpool3d throughout the model,
something like this:
self.conv1a = nn.Conv3d(3,64,(3,3,3),stride=(1,1,1),padding=(1,1,1),bias =True)
self.pool1 = nn.MaxPool3d((1,2,2),stride=(1,2,2))
|
st118237
|
is there something wrong with this model definition ?
class C3D(nn.Module):
def __init__(self):
super(C3D,self).__init__()
self.conv1a = nn.Conv3d(3,64,(3,3,3),stride=(1,1,1),padding=(1,1,1),bias =True)
self.pool1 = nn.MaxPool3d((1,2,2),stride=(1,2,2))
self.conv2a = nn.Conv3d(64,128,(3,3,3),stride=(1,1,1),padding=(1,1,1),bias =True)
self.pool2 = nn.MaxPool3d((2,2,2),stride=(2,2,2))
self.conv3a = nn.Conv3d(128,256,(3,3,3),stride=(1,1,1),padding=(1,1,1),bias =True)
self.conv3b = nn.Conv3d(256,256,3,stride=(1,1,1),padding=(1,1,1),bias =True)
self.pool3 = nn.MaxPool3d((2,2,2),stride=(2,2,2))
self.conv4a = nn.Conv3d(256,512,(3,3,3),stride=(1,1,1),padding=(1,1,1),bias =True)
self.conv4b = nn.Conv3d(512,512,(3,3,3),stride=(1,1,1),padding=(1,1,1),bias =True)
self.pool4 = nn.MaxPool3d((2,2,2),stride=(2,2,2))
self.conv5a = nn.Conv3d(512,512,(3,3,3),stride=(1,1,1),padding=(1,1,1),bias =True)
self.conv5b = nn.Conv3d(512,512,(3,3,3),stride=(1,1,1),padding=(1,1,1),bias =True)
self.pool5 = nn.MaxPool3d((2,2,2),stride=(2,2,2))
self.fc6 = nn.Linear(512,4096,bias=True)
self.fc7 = nn.Linear(4096,4096,bias=True)
self.fc8 = nn.Linear(4096,487,bias=True)
def forward(self,x):
x = F.relu(self.conv1a(x))
x = self.pool1(x)
x = F.relu(self.conv2a(x))
x = self.pool2(x)
x = F.relu(self.conv3a(x))
x = F.relu(self.conv3b(x))
x = self.pool3(x)
x = F.relu(self.conv4a(x))
x = F.relu(self.conv4b(x))
x = self.pool4(x)
x = F.relu(self.conv5a(x))
x = F.relu(self.conv5b(x))
x = self.pool5(x)
x = F.relu(self.fc6(x.view(1,512)))
x = F.dropout(x,p=0.5)
x = F.relu(self.fc7(x))
x = F.dropout(x,p=0.5)
return F.softmax(self.fc8(x))
and this is how I’m loading the data :
traindir = os.path.join(args.data, 'train')
valdir = os.path.join(args.data, 'test')
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
train_loader = torch.utils.data.DataLoader(
datasets.ImageFolder(traindir, transforms.Compose([
transforms.CenterCrop(112),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize,
])),
batch_size=50, shuffle=True, num_workers=args.workers, pin_memory=True)
I think I’m giving the model input tensors which are not 5D!?
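For reference, nn.Conv3d expects a 5D input of shape (N, C, D, H, W), while ImageFolder with this DataLoader yields 4D (N, C, H, W) batches of single frames. A sketch of stacking consecutive frames into a clip dimension (the sizes are illustrative):
import torch

frames = [torch.randn(3, 112, 112) for _ in range(16)]   # 16 RGB frames of one clip
clip = torch.stack(frames, dim=1)                         # (C, D, H, W) = (3, 16, 112, 112)
batch = clip.unsqueeze(0)                                 # (N, C, D, H, W) = (1, 3, 16, 112, 112)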
|
st118238
|
A seemingly very basic question but I’ve gone through a few tutorials and am still at a loss. :rimshot:
What are the basic rules of thumb for deciding when something should have an if statement with .cuda()?
It almost seems random in some of the tutorials. Parts of model classes have to return .cuda(), other parts don’t. Even when the whole class itself then gets .cuda(). The main tutorial I’ve spent time with is the really good seq2seq 8 one. Initially I downloaded the older version from the github repo that didn’t have the cuda options already included. So I spent some time changing random things to .cuda() until it worked. There must be a more sensible way of understanding what needs .cuda().
|
st118239
|
It should almost always be possible to get away with only two calls to .cuda(): one on the data and one on the model. Everywhere else you should use tensor.new to create tensors on the same device as other tensors you already have.
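A minimal sketch of that pattern (the module and sizes here are made up):
import torch
import torch.nn as nn
from torch.autograd import Variable

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        # x.data.new(...) allocates on the same device/type as x, so no .cuda() needed here
        noise = Variable(x.data.new(x.size()).normal_(0, 0.1))
        return self.fc(x + noise)

net = Net()
x = Variable(torch.randn(4, 10))
if torch.cuda.is_available():
    net, x = net.cuda(), x.cuda()   # the only two .cuda() calls
print(net(x).size())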
|
st118240
|
That’s great. Thanks for the suggestion. Just to clarify what you mean by judicious use of tensor.new: I haven’t been able to find any documentation for tensor.new. I’m guessing the general idea is, whenever defining a tensor within a Variable, to use tensor.new().zeros(x, y), and this will avoid having to use an is_cuda if statement.
As for the data, I’m going to experiment with this myself soon, but I thought I’d add a question in for anyone else who comes along. Torchtext and the regular dataloader both create iterators. Can you run .cuda() on the batch iterator, or do you have to run it on every Variable batch that comes out of the iterator?
There are examples in the seq2seq tutorial of the encoder and decoder models being cuda’d.
EDIT: I just noticed that torchtext does indeed already have a built in option for specifying device when creating the batch iterators
|
st118241
|
This is probably more of a stackoverflow kind of question but I’ve [found](https://stats.stackexchange.com/questions/235844/should-training-samples-randomly-drawn-for-mini-batch-training-neural-nets-be-dr) a [bunch](https://datascience.stackexchange.com/questions/10204/should-i-take-random-elements-for-mini-batch-gradient-descent) of links that I’m not really sure answer the question entirely. It’s kind of a practical issue as well.
In torchtext, the batch iterator shuffles the training data, puts it into batches, and then infinitely returns batches in a random order. The order of the observations within the batches is always the same. It seems unlikely this will cause any issues with training. All the same, I was wondering if it’s considered more optimal to shuffle within each batch after every epoch?
|
st118242
|
You can get both behaviors in torchtext (I believe it’s shuffle=True) but also I think it’s still unclear which is better for a given task – I’ve seen significant effects in both directions but haven’t carried out any sort of systematic analysis.
|
st118243
|
Good call, I missed the shuffle option. Good to know that it’s another thing to keep track of in terms of possible influences on training performance.
|
st118244
|
weights_dict = torch.load(weightspath)
File "/home/user/local/python3/anaconda3/lib/python3.6/site-packages/torch/serialization.py", line 229, in load
return _load(f, map_location, pickle_module)
File "/home/user/local/python3/anaconda3/lib/python3.6/site-packages/torch/serialization.py", line 377, in _load
result = unpickler.load()
File "/home/user/local/python3/anaconda3/lib/python3.6/site-packages/torch/serialization.py", line 348, in persistent_load
data_type(size), location)
File "/home/user/local/python3/anaconda3/lib/python3.6/site-packages/torch/serialization.py", line 85, in default_restore_location
result = fn(storage, location)
File "/home/user/local/python3/anaconda3/lib/python3.6/site-packages/torch/serialization.py", line 67, in _cuda_deserialize
return obj.cuda(device_id)
File "/home/user/local/python3/anaconda3/lib/python3.6/site-packages/torch/_utils.py", line 57, in _cuda
with torch.cuda.device(device):
File "/home/user/local/python3/anaconda3/lib/python3.6/site-packages/torch/cuda/__init__.py", line 127, in __enter__
torch._C._cuda_setDevice(self.idx)
RuntimeError: cuda runtime error (10) : invalid device ordinal at torch/csrc/cuda/Module.cpp:84
|
st118245
|
What’s weightspath? How was the model saved? If it was saved as a state_dict(), you may need to reconstruct the class that generated the model first before loading it.
|
st118246
|
Thanks for your reply.
Yes, it is saved as a state_dict;
weights_dict = torch.load(weightspath) simply loads it as a dictionary.
It seems to me that if the model is trained using GPU #3 on another machine, the saved model cannot be loaded on a different machine with only 2 GPUs.
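One way around this (a sketch, using torch.load's map_location argument; weightspath is from the snippet above) is to remap the saved CUDA storages when loading:
import torch

# load everything onto the CPU ...
weights_dict = torch.load(weightspath, map_location=lambda storage, loc: storage)
# ... or pin the storages that were on GPU 3 to GPU 0 instead:
# weights_dict = torch.load(weightspath, map_location={'cuda:3': 'cuda:0'})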
|
st118247
|
I want to optimise the gradient as in the paper “Improved Training of Wasserstein GANs”. Something like: D(input), D.backward(), loss = D_loss + ||input.grad - 1||. I found that input.grad doesn’t have a creator, so the graph of the grad isn’t connected to the net, and it won’t support something like input.grad.backward(). How could I do this? By the way, the authors of that paper use TensorFlow.
|
st118248
|
Currently not supported. Check here for more discussion: How to implement gradient penalty in PyTorch
|
st118249
|
It’s already implemented, but the PR is waiting for review. It’s probably going to be merged next week.
|
st118250
|
I want to modify the resnet model imported from torchvision.models, but the default input is a 3-channel matrix.
I tried to rebuild a Net class as below:
class Net(nn.Module):
def __init__(self, base):
super(Net, self).__init__()
num_ftrs = base.fc.in_features
self.conv1 = nn.Conv2d(4, 64, kernel_size=(7,7), stride=(2,2), padding=(3,3), bias=False)
self.base_model = nn.Sequential(
*list(base.children())[1:]
)
self.fc = nn.Linear(num_ftrs, 17)
def forward(self, x):
x = self.conv1(x)
x = x.view(10, 64, 128, 128) #batchsize is 10, image size is 256x256
feature = self.base_model(x)
out = self.fc(feature)
return F.sigmoid(out)
base_model = models.resnet50()
model = Net(base_model)
but errors are:
RuntimeError: matrix and matrix expected at /b/wheel/pytorch-src/torch/lib/THC/generic/THCTensorMathBlas.cu:237
|
st118251
|
Hi, I just transferred from TensorFlow to PyTorch. One quick question about the regularization loss in PyTorch:
Does Pytorch has something similar to Tensorflow to calculate all regularization loss automatically?
tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
Or we need to implement it by ourselves?
|
st118252
|
if you simply want to use it in optimization, you can use the weight_decay keyword of torch.optim.Optimizer.
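For example (a tiny sketch):
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)   # adds an L2 penalty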
|
st118253
|
Thanks. If I want to output the penalty, for example the L2 loss, do you think the following is workable, as a simple implementation?
def get_reg_loss(model):
    reg_loss = 0
    for param in model.parameters():
        reg_loss += (param ** 2).sum()
    _lambda = 0.001
    return _lambda * reg_loss
|
st118254
|
I think it’s ok, but notice that it will also penalize the bias.
Maybe it is better to use named_parameters; it was added in 0.1.12.
import torch
import torch.nn as nn
import torch.optim as optim
m = nn.Sequential(
nn.Linear(10, 20),
nn.ReLU(),
nn.Linear(20, 20),
nn.ReLU(),
)
weights, biases = [], []
for name, p in m.named_parameters():
if 'bias' in name:
biases += [p]
else:
weights += [p]
optim.SGD([
{'params': weights},
{'params': biases, 'weight_decay': 0}
], lr=1e-2, momentum=0.9, weight_decay=1e-5)
|
st118255
|
Hi,
Thank you very much. Does the pretrained model have this functionality?
model = models.resnet18(pretrained=True)
num_ftrs = model.fc.in_features
model.fc = nn.Linear(num_ftrs, 2)
model = model.cuda()
weights, biases = [], []
for name, p in model.named_parameters():
if 'bias' in name:
biases += [p]
else:
weights += [p]
This gives me an error:
AttributeError: 'ResNet' object has no attribute 'named_paramenters'
|
st118256
|
Hi,
I am quite new to PyTorch and am trying to build my own loss function. The key component in my model is the ability to create a max/min/mean of losses. Specifically, suppose we have a python list of losses, each of which is a PyTorch Variable:
losses = [Variable(torch.randn(1)) for _ in xrange(5)]
Is there a way for me to create the mean loss of this python list of losses? Note that the mean loss should also be a Variable, as at the end I need to call backward to compute the gradient.
|
st118257
|
You can use the built-in Python function sum for that.
total_loss = sum(losses) / len(losses)
|
st118258
|
Another way would be to concatenate all the losses into one tensor, and call the operation you want.
cat_losses = torch.cat(losses, 0)
mean_loss = cat_losses.mean()
...
Both approaches work with autograd
|
st118259
|
Hey,
torch.cumprod() does not have the “exclusive=True” flag as in TensorFlow, is that right?
|
st118260
|
From the tensorflow documentation:
By default, this op performs an inclusive cumprod, which means that the first element of the input is identical to the first element of the output:
tf.cumprod([a, b, c]) ==> [a, a * b, a * b * c]
By setting the exclusive kwarg to True, an exclusive cumprod is performed instead:
tf.cumprod([a, b, c], exclusive=True) ==> [1, a, a * b]
|
st118261
|
I can try to just divide the result from cumprod() by the first element from each row, but it’s not numerically stable if the first element is close to 0
|
st118262
|
Yeah, there is no such option in PyTorch.
But you can do something like
res = torch.cumprod(torch.Tensor([1, a, b, c]), 0)
res = res[:-1]
|
st118263
|
Hey,
another question: when I try to do cumprod() on a Variable, it says: Type Variable doesn’t implement stateless method cumprod. Can you explain why this method is stateless (which means it does not track the computations that happen in it, I guess)? Thanks a lot!
|
st118264
|
cumprod is not yet implemented in autograd, but there is a PR coming soon https://github.com/pytorch/pytorch/pull/1439 71
|
st118265
|
Hi,
Two issues are found for pytorch 0.1.11 when I run the following Encoder CNN.
if param.requires_grad = False, pytorch 0.1.10 takes only a small amount of GPU memory compared to 0.1.11. Pytorch 0.1.11 takes almost the same amount of GPU memory regardless of the requires_grad value.
Even if param.requires_grad = True, pytorch 0.1.10 takes about half the GPU memory compared to 0.1.11. When I dig into the code, I find the second batch forward doubles the GPU usage in 0.1.11, but the memory usage increases only negligibly in the second batch in 0.1.10.
class EncoderCNN(nn.Module):
def __init__(self, embed_size):
"""Load the pretrained ResNet-152 and replace top fc layer."""
super(EncoderCNN, self).__init__()
self.resnet = models.resnet152(pretrained=True)
for param in self.resnet.parameters():
param.requires_grad = False
# param.requires_grad = True
self.resnet.fc = nn.Linear(self.resnet.fc.in_features, embed_size)
self.bn = nn.BatchNorm1d(embed_size, momentum=0.01)
self.init_weights()
def init_weights(self):
"""Initialize the weights."""
self.resnet.fc.weight.data.normal_(0.0, 0.02)
self.resnet.fc.bias.data.fill_(0)
def forward(self, images):
"""Extract the image feature vectors."""
features = self.resnet(images)
features = self.bn(features)
return features
|
st118266
|
@apaszke
I also post the problem here with some background information
github.com/yunjey/pytorch-tutorial: [Image Captioning] High GPU memory usage when using pytorch 0.11 for image captioning (opened Apr 27, 2017; closed May 28, 2017; by hanzhanggit): "Hi, I am running the 09 image captioning model. When I used the pytorch 0.10, I can set the batchsize to be..."
|
st118267
|
I have the following model and training method. In simple terms, this is what nll_loss does, right? It takes a set of outputs from neurons and picks out the winning neuron (the index of the neuron).
class AND(nn.Module):
def __init__(self, input_count):
super(AND, self).__init__()
assert(input_count >= 2)
self.linear = nn.Linear(input_count, 2)
def forward(self, x):
return F.log_softmax(self.linear(x))
from itertools import product
def and_gate(*inputs):
for i in inputs:
if i == 0:
return 0
else:
return 1
def truth_table_for_and(size=2):
table = []
for sample in product([0, 1], repeat=size):
table.append([list(sample), and_gate(*sample)])
return table
Example output of truth table:
pprint(truth_table_for_and(2))
[[[0, 0], 0], [[0, 1], 0], [[1, 0], 0], [[1, 1], 1]]
and2 = AND(2)
truth_table2 = truth_table_for_and(2)
def train(epochs, print_every=10):
optimizer = optim.SGD(and2.parameters(), lr=0.1, momentum=0.01)
for epoch in range(epochs):
for i, o in truth_table2:
i , o = Variable(torch.Tensor([i])), Variable(torch.Tensor([o]))
o_ = and2(i)
optimizer.zero_grad()
print(o_.size(), o.size())
loss = F.nll_loss(o_, o)
loss.backward()
optmizer.step()
if epoch % print_every == 0:
print('loss: {}'.format(loss.data[0]))
train(100)
torch.Size([1, 2]) torch.Size([1]) <---- output of print(o_.size(), o.size())
In the F.nll_loss function, I get the following error. I have pondered over it for a while, but could not get it to work.
TypeError: FloatClassNLLCriterion_updateOutput received an invalid combination of arguments - got (int, torch.FloatTensor, torch.FloatTensor, torch.FloatTensor, bool, NoneType, torch.FloatTensor), but expected (int state, torch.FloatTensor input, torch.LongTensor target, torch.FloatTensor output, bool sizeAverage, [torch.FloatTensor weights or None], torch.FloatTensor total_weight)
|