st117468
|
If you don’t have skip-layer connections, the order should usually be the same. Just note that you need to prune both weight and bias in Conv2d and BatchNorm2d (if you use them).
If you have skip-layer connections, you can build a dependency graph of which layer passes its output to which. You can then use the dependency graph to trace which convs take input from which convs. One example that builds such a graph can be found here (look up pytorch_visualize.py). That example performs graph visualization.
A much simpler way, however, is to “hijack” the forward function of Conv2d, BatchNorm2d, activation functions and so on, like so:
old_forwards = {nn.Conv2d: nn.Conv2d.forward}

def hijack_forward(self, input):
    if hasattr(input, 'drop_mask'):
        drop_input_channels(self, input.drop_mask)  # modify weights and biases
    drop_mask = drop_output_channels(self)  # modify weights and biases
    output = old_forwards[self.__class__](self, input)
    output.drop_mask = drop_mask
    return output

nn.Conv2d.forward = hijack_forward
# perform one forward pass
# reset hijacked forward functions
|
st117469
|
Thanks for your answer!
The hijacking approach looks nice but I still need to keep track of dependencies between layers right?
In order to update the in_channels of some of them, I mean.
|
st117470
|
var.drop_mask is computed from a previous convolution layer. It tells you which input channels should be dropped.
|
st117471
|
Hi,
In some time series prediction tasks, e.g. stocks or weather forecasting, the sequences are generally very long. My current method is using a sliding window to slice a long sequence into many shorter, fixed-length sequences. It works but is very slow. I also noticed that since the differences among neighboring sequences are very small, most of the computation is wasted.
A natural method is feeding the long sequence directly to the network, iterating over it one step at a time for several steps as a batch, then calculating the loss, back-propagating the gradients and updating the weights, then doing the next batch… I think this can save over 99% of the computation. Is it possible? How can I implement this?
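A minimal sketch of what I have in mind (truncated BPTT over one long sequence, detaching the hidden state between chunks; the model and sizes are made up):
import torch
import torch.nn as nn
from torch.autograd import Variable

rnn = nn.LSTM(input_size=8, hidden_size=32)           # hypothetical model
head = nn.Linear(32, 1)
seq = Variable(torch.randn(10000, 1, 8))              # one very long sequence (T, B, features)
optimizer = torch.optim.SGD(list(rnn.parameters()) + list(head.parameters()), lr=0.01)
criterion = nn.MSELoss()
chunk = 50
hidden = None
for start in range(0, seq.size(0) - chunk, chunk):
    x = seq[start:start + chunk]                      # reuse the RNN state instead of re-reading the window
    target = seq[start + 1:start + chunk + 1, :, :1]  # predict the next step's first feature
    out, hidden = rnn(x, hidden)
    loss = criterion(head(out.view(-1, 32)).view(chunk, 1, 1), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    hidden = tuple(Variable(h.data) for h in hidden)  # detach: gradients stop at the chunk boundary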
Thanks!
Ben
|
st117472
|
With batch_first=True, the packed-input version still outputs a result with shape (T * B * dim):
import torch
from torch import nn
from torch.autograd import Variable
from torch.nn.utils.rnn import pad_packed_sequence as unpack
from torch.nn.utils.rnn import pack_padded_sequence as pack

def sort_batch(data, seq_len):
    batch_size = data.size(0)
    sorted_seq_len, sorted_idx = torch.sort(seq_len, dim=0, descending=True)
    sorted_data = data[sorted_idx]
    _, reverse_idx = torch.sort(sorted_idx, dim=0, descending=False)
    return sorted_data, sorted_seq_len, reverse_idx

data = torch.rand(4, 7, 10)
lens = torch.LongTensor([3, 7, 2, 1])
s_data, s_len, reverse_idx = sort_batch(data, lens)
emb = pack(Variable(s_data), list(s_len), batch_first=True)
rnn = nn.LSTM(10, 20, 2, batch_first=True, bidirectional=False)
input = emb
h0 = Variable(torch.randn(2, 6, 20))
c0 = Variable(torch.randn(2, 6, 20))
packed_output, hn = rnn(input)
result, lens = unpack(packed_output)
print(result.size())

import torch
from torch import nn
from torch.autograd import Variable
from torch.nn.utils.rnn import pad_packed_sequence as unpack
from torch.nn.utils.rnn import pack_padded_sequence as pack

data = Variable(torch.rand(4, 7, 10))
lens = torch.LongTensor([3, 7, 2, 1])
# s_data, s_len, reverse_idx = sort_batch(data, lens)
rnn = nn.LSTM(10, 20, 2, batch_first=True, bidirectional=False)
h0 = Variable(torch.randn(2, 6, 20))
c0 = Variable(torch.randn(2, 6, 20))
output, hn = rnn(data)
# result, lens = unpack(packed_output)
print(output.size())
|
st117473
|
Doesn’t pack_padded_sequence store the representation as T x B x dim?
So calling the lstm should be with batch_first=False?
Similarly with the pad_packed_sequence. No?
|
st117474
|
I had installed the latest PyTorch with cuDNN 5 last time. Now I have installed the new cuDNN 6. Do I need to reinstall PyTorch, or can I leave my old installation as it is? Thanks!
|
st117475
|
How can I do this?
I have a tensor A of size, say, 4x10x64x64.
I want to find all elements where A is bigger than 255, and set them to 255.
I did idx = torch.gt(A, 255) and obtained another tensor which contains ones and zeros. But I don’t understand how to use it to alter A.
|
st117476
|
I want to load several Torch-trained models in PyTorch. How can I do it, since the models from Torch do not have a graph?
|
st117477
|
Convert/import Torch model to PyTorch
Hi,
Great library! I’d like to ask if it is possible to import a trained Torch model to PyTorch…
Thanks
|
st117478
|
GitHub: fanq15/caffe_to_torch_to_pytorch
|
st117479
|
Hi,
I am trying to use pack and unpack for a bidirectional RNN with variable lengths. I noticed a significant performance loss: about 10x slower with the pack and unpack process. Is this normal, or am I using it wrong?
Here I attached some benchmark code:
run_lstm_bf() is 10x faster than run_lstm_bf_pack() using the %timeit magic in IPython.
import torch
from torch import nn
from torch.autograd import Variable
from torch.nn.utils.rnn import pack_padded_sequence as pack, pad_packed_sequence as unpack

x = Variable(torch.randn(20, 700, 512).cuda())
x_bs = Variable(x.data.clone().transpose(1, 0))
xlen = [690] * 20
llstm_bf = nn.LSTM(512, 512, bidirectional=True, batch_first=True)
llstm_bs = nn.LSTM(512, 512, bidirectional=True, batch_first=False)
llstmcell = nn.LSTMCell(512, 512)
llstmcell_bw = nn.LSTMCell(512, 512)
llstm_bf.cuda()
llstm_bs.cuda()
llstmcell.cuda()
llstmcell_bw.cuda()

def run_lstm_bf():
    res, _ = llstm_bf(x)
    res.sum().backward()

def run_lstm_bs():
    res, _ = llstm_bs(x_bs)
    res.sum().backward()

def run_lstm_bf_pack():
    xpack = pack(x, xlen, True)
    res, _ = llstm_bf(xpack)
    xunpack = unpack(res, True)
    res = xunpack[0]
    res.sum().backward()

def run_lstm_bs_pack():
    xpack = pack(x_bs, xlen, False)
    res, _ = llstm_bs(xpack)
    xunpack = unpack(res, False)
    res = xunpack[0]
    res.sum().backward()

init_h, init_c = Variable(torch.cuda.FloatTensor(20, 512).zero_()), Variable(torch.cuda.FloatTensor(20, 512).zero_())

def run_lstmcell():
    prev_h, prev_c = (init_h, init_c)
    all_h = []
    for ii in range(x.size(1)):
        (prev_h, prev_c) = llstmcell(x[:, ii], (prev_h, prev_c))
        all_h.append(prev_h)
    prev_h, prev_c = (init_h, init_c)
    all_h_bw = []
    for ii in range(x.size(1) - 1, -1, -1):
        (prev_h, prev_c) = llstmcell_bw(x[:, ii], (prev_h, prev_c))
        all_h_bw.append(prev_h)
    all_h = torch.stack(all_h)
    all_h_bw = torch.stack(all_h_bw)
    all_h_comb = torch.cat([all_h, all_h_bw], 2)
    all_h_comb.sum().backward()
|
st117480
|
You need to use synchronize (torch.cuda.synchronize, or something close to that) before and after the code being timed in order to get accurate timeit results; once you do that the speed penalty will probably be closer to 2x (depending on length/batch size). It’s still a lot, and it might be worth it to move the packing and unpacking to C eventually.
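For example, a minimal synchronized timing helper might look like this (a sketch; the helper is my own, not a library function):
import time
import torch

def cuda_timeit(fn, n_runs=10):
    fn()                          # warm-up run
    torch.cuda.synchronize()      # drain queued kernels before starting the clock
    start = time.time()
    for _ in range(n_runs):
        fn()
    torch.cuda.synchronize()      # CUDA calls are async; wait for the work to actually finish
    return (time.time() - start) / n_runs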
|
st117481
|
Hi @jekbradbury,
I didn’t do the synchronize, but I also noticed that using packing and unpacking with bidirectional RNN makes Pytorch almost 3 times slower than Tensorflow. Does pytorch plan to improve this part?
Thanks!
|
st117482
|
Yes, this performance could absolutely be improved, and it’s something the core devs have talked about. But they’re fairly busy right now with distributed etc. so the devs are also open to PRs that reimplement pack and pad in C++ if someone wants to fix it quickly.
|
st117483
|
Some other numbers: I have a 4x320 bidir LSTM network followed by a linear transform (typical setup with the CTC loss function).
The network computation takes 1.2 sec, while packing takes 1.3 sec, i.e. more than the network computation itself.
A C++ implementation would be much appreciated.
|
st117484
|
Hi,
I get the following error - RuntimeError: Expected hidden size (1, 32, 300), got (1L, 16L, 300L).
The input however is of shape [torch.cuda.FloatTensor of size 476x300 (GPU 0)]
, batch_sizes=[32, 32, 32, 32, 32, 32, 32, 32, 31, 29, 26, 23, 19, 16, 14, 12, 8, 7, 5, 5, 3, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
Both pack_wv and the LSTM are initialised with batch_first=False. Any help would be highly appreciated.
|
st117485
|
gradients = autograd.grad(outputs=disc_interpolates, inputs=interpolates, AttributeError: module 'torch.autograd' has no attribute 'grad'
|
st117486
|
Is there a list of torch operations for which backward has been implemented and those for which it isn’t?
|
st117487
|
this issue tracks that info and hopefully is up to date: https://github.com/pytorch/pytorch/issues/440
|
st117488
|
I get this error upon calling .step() of an optimizer on my model after backprop:
Traceback (most recent call last):
File "gru_model_biliear.py", line 259, in <module>
optimizer.step()
File "/home/nilabhra/miniconda2/envs/pytorch/lib/python2.7/site-packages/torch/optim/adam.py", line 74, in step
p.data.addcdiv_(-step_size, exp_avg, denom)
RuntimeError: sizes do not match at /py/conda-bld/pytorch_1493676237139/work/torch/lib/THC/generated/../generic/THCTensorMathPointwise.cu:566
The final layer of the model is a nn.Bilinear layer. The error goes away if I replace it with a nn.Linear layer.
Possible bug in the backward function?
|
st117489
|
Can you run under PDB and report what the runtime sizes of p.data and exp_avg are?
|
st117490
|
It might be a while before I can get the tensor sizes via PDB. I did check the grad sizes after backprop: the Bilinear layer weight.data was of shape 5x5x5 while the weight.grad was of shape 5x5. The shape of the bias and its gradient was consistent.
|
st117491
|
Alright, learning PDB enough to get the tensor sizes was a piece of cake.
It seems they are indeed of different shape:
|
st117492
|
That certainly sounds like the issue. Here’s the implementation: https://github.com/pytorch/pytorch/blob/5bb13485b8484a37f9afad67582512cf53ed13cb/torch/nn/_functions/linear.py#L35. I can see a few tricky lines but there isn’t (at least to me) an obvious bug. Maybe you can add some print statements to it locally and try to narrow things down? Or feel free to just open a GitHub issue.
|
st117493
|
I think I should open a github issue. I have seen the implementation. Not sure if the loop body is correct, will try to understand it a bit more to see if I can come up with a fix myself.
|
st117494
|
this should shortly be fixed in master. (thanks @griffinliang for the proposed fix, it was correct).
github.com/pytorch/pytorch: fix incorrect grad_weight in Bilinear (pytorch:master ← pytorch:bilinearfix, opened Jun 6, 2017 by soumith, +20 -12)
|
st117495
|
Hi,
I am starting with pytorch, and I have seen that my implementation requires more GPU memory than my tensorflow implementation of the same architecture.
Here’s a part of my code:
class seg_GAN(object):
    def __init__(self, batch_size=10, height=512, width=512, channels=3, wd=0.0005,
                 nfilters_d=64, checkpoint_dir=None, path_imgs=None, learning_rate=2e-8,
                 lr_step=30000, lam_fcn=1, lam_adv=1, adversarial=False, nclasses=5):
        self.adversarial = adversarial
        self.channels = channels
        self.lam_fcn = lam_fcn
        self.lam_adv = lam_adv
        self.lr_step = lr_step
        self.wd = wd
        self.learning_rate = learning_rate
        self.batch_size = batch_size
        self.height = height
        self.width = width
        self.checkpoint_dir = checkpoint_dir
        self.path_imgs = path_imgs
        self.nfilters_d = nfilters_d
        self.organ_target = 1  # 1 eso, 2 heart, 3 trach, 4 aorta
        self.nclasses = nclasses
        self.netG = UNet(self.nclasses, self.channels)
        self.netG.apply(weights_init)
        if self.adversarial:
            self.netD = Discriminator(self.nclasses, self.nfilters_d, self.height, self.width)
            self.netD.apply(weights_init)
        self.dst = myDataSet(self.path_imgs, is_transform=True)
        self.trainloader = data.DataLoader(self.dst, batch_size=self.batch_size,
                                           shuffle=True, num_workers=2)

    def train(self, config):
        print 'version ', torch.__version__
        start = 0  # TODO change this so that it can continue when loading a model
        print("Start from:", start)
        label_ones = torch.ones(self.batch_size)
        label_zeros = torch.zeros(self.batch_size)
        y_onehot = torch.FloatTensor(self.batch_size, self.nclasses, self.height, self.width)
        if self.adversarial:
            self.netD.cuda()
        self.netG.cuda()
        label_ones, label_zeros, y_onehot = label_ones.cuda(), label_zeros.cuda(), y_onehot.cuda()
        y_onehot_var = Variable(y_onehot)
        label_ones_var = Variable(label_ones)
        label_zeros_var = Variable(label_zeros)
        if self.adversarial:
            optimizerD = optim.Adam(self.netD.parameters(), lr=self.learning_rate, betas=(0.5, 0.999))
        optimizerG = optim.Adam(self.netG.parameters(), lr=self.learning_rate, betas=(0.5, 0.999))
        for it in range(start, config.iterations):  # epochs
            for i, (images, GT) in enumerate(self.trainloader):
                y_onehot.resize_(GT.size(0), self.nclasses, self.height, self.width)
                y_onehot.zero_()
                label_ones.resize_(GT.size(0))
                label_zeros.resize_(GT.size(0))
                images = Variable(images.cuda())
                GT = GT.cuda()
                # add a singleton dim so the number of dims matches y_onehot
                y_onehot.scatter_(1, GT.view(GT.size(0), 1, GT.size(1), GT.size(2)), 1)
                GT = Variable(GT)  # N,H,W
                if self.adversarial:
                    ##########################
                    # Update Discriminator
                    ##########################
                    # train with real samples
                    self.netD.zero_grad()
                    output = self.netD(y_onehot_var)  # this must be in one-hot form
                    errD_real = F.binary_cross_entropy(output, label_ones_var)  # loss_D
                    errD_real.backward()  # update grads of netD
                    # train with fake
                    fake = self.netG(images)  # a prob map which we want to be similar to y_onehot
                    output = self.netD(fake.detach())  # detach so grads of netG are not computed
                    errD_fake = F.binary_cross_entropy(output, label_zeros_var)
                    errD_fake.backward()
                    optimizerD.step()  # update the parameters of netD
                ############################
                # Update G network
                ###########################
                self.netG.zero_grad()
                if self.adversarial:
                    output_D = self.netD(fake)
                    errG = self.loss_G(fake, GT, label_ones_var, output_D)  # use ones with the fakes
                else:
                    fake = self.netG(images)
                    errG = self.loss_G(fake, GT)
                errG.backward()  # backprop errors
                optimizerG.step()  # optimize only netG params
I guess I am not converting tensors to Variables in the correct way, or maybe it is because I am doing it in the training loop. Could you please take a look and let me know what I can change to gain efficiency and save memory if possible?
Thanks!
|
st117496
|
Does it have anything to do with the fact that PyTorch doesn’t use static buffers, and re-allocates buffers in every pass? Like to get an answer from someone who knows this better.
|
st117497
|
Try setting requires_grad=False on all Variables that do not require gradient computation (i.e. input images and labels). For instance, replace
images = Variable(images.cuda())
by
images = Variable(images.cuda(), requires_grad=False)
During inference (i.e. when you do not want to backpropagate gradients), you should also set volatile=True.
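For example, an inference-only pass might look like this (a sketch, pre-0.4 API):
# volatile=True prevents the autograd graph from being built at all
images = Variable(images.cuda(), volatile=True)
outputs = model(images)  # no intermediate buffers are kept for backward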
These things will help reduce memory consumption a bit. For more info on this, check http://pytorch.org/docs/notes/autograd.html.
|
st117498
|
When a module is used multiple times, for example in a siamese network, are the gradients averaged?
[...]
output1 = net(image1)
output2 = net(image2)
loss = ...
loss.backward()
[...]
|
st117499
|
I upgraded my PyTorch version and since then it is asking me for NVIDIA drivers for my computer for:
torch.load(...)
It didn’t ask me for those before, and as I don’t have a GPU and I am not running any code with .cuda(), I don’t see why it is asking me for this…
Any idea?
Justin
|
st117500
|
I tried to reinstall pytorch and torchvision but it keeps asking me for an NVIDIA GPU, even though I don’t have a GPU and it was working without one before…
|
st117501
|
Are you sure you are installing the correct package? If you have no NVIDIA GPU you must not install the PyTorch package for CUDA 8.0. For instance, if you are using pip and python 3.6, you should install PyTorch by running:
pip install http://download.pytorch.org/whl/cu75/torch-0.1.12.post2-cp36-cp36m-linux_x86_64.whl
|
st117502
|
I found the solution. Apparently if I save a model on a machine with .cuda(), I have to open it on a machine which has a GPU, so I just needed to call .cpu() on the model before saving it.
|
st117503
|
There is a WGAN implementation in PyTorch (the official one): https://github.com/martinarjovsky/WassersteinGAN/blob/master/models/dcgan.py
I am unsure what’s happening around line 52, where we are supposed to do a global average pooling.
The last layer is:
main.add_module('final.{0}-{1}.conv'.format(cndf, 1), nn.Conv2d(cndf, 1, 4, 1, 0, bias=False))
and in the forward function, there is:
output = output.mean(0)
return output.view(1)
What does output = output.mean(0) do?
I read the docs: mean returns the average over the supplied dimension. So I created a tensor shaped like what I would expect from the last convolutional layer in the model above.
The shape was (1x1x10x10): batch size * one output channel * width * height.
When I run .mean(0) on this tensor, it does NOT return the global mean of the 10x10 matrix.
> In [38]: a
> Out[38]:
> (0 ,0 ,.,.) =
> 0.3912 0.8033 0.3859
> 0.0037 0.1687 0.2818
> 0.2725 0.4355 0.2085
> [torch.FloatTensor of size 1x1x3x3]
> In [39]: a.mean(0)
> Out[39]:
> (0 ,0 ,.,.) =
> 0.3912 0.8033 0.3859
> 0.0037 0.1687 0.2818
> 0.2725 0.4355 0.2085
> [torch.FloatTensor of size 1x1x3x3]
I can't understand what's wrong. Help is greatly appreciated.
|
st117505
|
If you have tensor A with shape of [1, a, b, c] then calling A.mean(0) will return you new tensor with averaged over zero-axis elements.
Since you have A.size(0) = 1 nothing will happen after doing so.
If you want to average whole tensor you have to do it by A.mean()
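A quick sketch to illustrate (note that whether mean(0) keeps the size-1 dimension depends on the PyTorch version):
import torch

A = torch.randn(1, 2, 3, 3)
print(A.mean(0))  # same values as A: averaging over a dimension of size 1 changes nothing
print(A.mean())   # a single number: the mean over every element of the tensor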
|
st117506
|
Hi,
I did an experiment on how batchnorm works in PyTorch. I wrote the following code:
import torch
import torch.nn as nn
bn = nn.BatchNorm2d(4)
bn.weight.data = torch.arange(0,4)
bn.bias.data = torch.arange(4,8)
bn.running_mean.data = torch.arange(8,12)
bn.running_var.data = torch.arange(12,16)
inp = torch.autograd.Variable(torch.randn(1,4,4,4))
out = bn(inp)
I set a breakpoint at functional.py#L501-L504 and printed running_mean and running_var, like this:
def batch_norm(input, running_mean, running_var, weight=None, bias=None,
               training=False, momentum=0.1, eps=1e-5):
    import pdb
    pdb.set_trace()
    f = torch._C._functions.BatchNorm(running_mean, running_var, training, momentum, eps,
                                      torch.backends.cudnn.enabled)
    return f(input, weight, bias)
I saw a strange malfunction: their values were always 0 and 1 respectively, regardless of any initialization (such as mine above):
(Pdb) running_mean
0
0
0
0
[torch.FloatTensor of size 4]
(Pdb) running_var
1
1
1
1
[torch.FloatTensor of size 4]
Could you please check it?
|
st117507
|
If you have training as False, then it’s most likely not updating the running values, since it thinks that it is doing inference only, in which case the running mean and var are conceptually read-only.
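For example (a sketch, continuing the snippet above):
bn.train()              # training mode: running_mean / running_var are updated on each forward
out = bn(inp)
print(bn.running_mean)  # now changed
bn.eval()               # eval mode: running stats are frozen and used to normalize
out = bn(inp)
print(bn.running_mean)  # unchanged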
|
st117508
|
bn.running_mean and bn.running_var are tensors
so it is the same as the problem in this post:
How to convert a keras layer to equivalent pytorch layer?
|
st117509
|
Did you get the above values in the batch_norm function? I have installed PyTorch using Anaconda and the version is:
pytorch 0.1.12 py35_2cu80 [cuda80] soumith
|
st117510
|
Depending on the model.eval() call, you can choose between training mode and testing mode. Here the training flag is True, so updating these variables is possible.
|
st117511
|
running_mean is a tensor rather than a Variable, which means you should not set the attribute data on the tensor; it makes no sense. Assign the tensor directly:
import torch
import torch.nn as nn
bn = nn.BatchNorm2d(4)
bn.weight.data = torch.arange(0,4)
bn.bias.data = torch.arange(4,8)
bn.running_mean = torch.arange(8,12)
bn.running_var = torch.arange(12,16)
inp = torch.autograd.Variable(torch.randn(1,4,4,4))
out = bn(inp)
print bn.running_mean
|
st117512
|
So it causes an error:
TypeError: cannot assign 'torch.FloatTensor' as parameter 'weight' (torch.nn.Parameter or None expected)
|
st117513
|
Hi,
So I noticed an OOM issue with RNNs when I run in open-loop mode. Interestingly, for my particular use-case, I can’t set the inputs as volatile because I will be making a backward pass.
This is a sample test case:
rnn1 = nn.GRU(63, 512).cuda()
output = nn.Linear(512, 63).cuda()
input_frame = Variable(torch.randn(16, 1, 63)).cuda()
hidden = Variable(torch.randn(1, 1, 512)).cuda()
for i in xrange(2000):
    state, hidden = rnn1(input_frame, hidden)
    input_frame = output(state.squeeze()).unsqueeze(1)
And no, 2000 isn’t some random large input i’m trying, it’s very much necessary for my particular problem.
So one particular fix i noticed was using this variant instead:
rnn2 = nn.GRUCell(63, 512).cuda()
output = nn.Linear(512, 63).cuda()
input_frame = Variable(torch.randn(16, 63)).cuda()
hidden = Variable(torch.randn(16, 512)).cuda()
for i in xrange(2000):
    hidden = rnn2(input_frame, hidden)
    input_frame = output(hidden)
I did not get OOM errors when using the GRUCell instead (I’m guessing this doesn’t use the cudnn rnn kernels?).
So in summary, I use the nn.GRU() for training (in teacher-forcing mode) and the nn.GRUCell() layer in open-loop mode (I’ll still need to make a backward pass here though).
Is there a simple way to tie weights between the GRUCell and GRU layers? Something like:
rnn2.load_state_dict(rnn1.state_dict())
(or is there another simpler fix to this issue?)
Any help is much appreciated. Thanks in advance.
Update:
So a temporary fix I found was this:
gru_state_dict = rnn1.state_dict()
for key in gru_state_dict.keys():
    gru_state_dict[key.replace('_l0', '')] = gru_state_dict.pop(key)
rnn2.load_state_dict(gru_state_dict)
Do you have any other elegant solutions? I guess OOM errors occur specifically in PyTorch because it by default caches the intermediate states during the forward pass and reuses them during the efficient backward pass? Would it be possible to temporarily switch off this caching to trade off memory against time?
Best,
Rithesh
|
st117514
|
this is long gone I’m sure, but wanted to say that https://github.com/pytorch/pytorch/pull/1691 will improve memory usage (especially in your case).
|
st117515
|
I would like to get the individual loss for each sample, but loss functions such as CrossEntropyLoss output the average loss over a minibatch. Can I still get per-sample losses while using minibatches?
I couldn’t find any Python code defining these loss functions. Do I need to modify the C backend?
Thanks!
|
st117516
|
You are right, it’s in the C backend. But you can write your own cross entropy loss: use gather to extract the log probability of the correct class label.
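A sketch of the gather approach (names and shapes are made up):
import torch
import torch.nn.functional as F
from torch.autograd import Variable

logits = Variable(torch.randn(8, 10))                # (batch, num_classes)
targets = Variable(torch.LongTensor(8).random_(10))  # class index per sample
log_probs = F.log_softmax(logits)
# gather picks each sample's log-probability of its correct class; negate for the loss
per_sample_loss = -log_probs.gather(1, targets.view(-1, 1)).squeeze(1)  # shape (batch,)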
github.com/pytorch/pytorch/blob/master/torch/nn/functional.py#L559
|
st117517
|
Hi,
I wrote the following code to use a custom layer.
class binary_activate(Function):
    def forward(self, x):
        positives = torch.ge(x, 0)
        negatives = torch.le(x, 0)
        le0xmin1 = torch.mul(negatives.float(), -1)
        binary_output = positives.float() + le0xmin1.float()
        self.save_for_backward(x)
        return binary_output

    def backward(self, grad_output):
        result = self.saved_tensors[0]
        grad_input = F.hardtanh(Variable(result)).data * grad_output
        return grad_input

class Model1(nn.Module):
    def __init__(self):
        super(Model1, self).__init__()
        self.fc1 = nn.Linear(INPUT_SIZE, HIDDEN1)
        self.fc2 = nn.Linear(HIDDEN1, HIDDEN2)
        self.fc3 = nn.Linear(HIDDEN2, HIDDEN3)
        self.fc4 = nn.Linear(HIDDEN3, OUTPUT_SIZE)
        self.binary = binary_activate()

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = self.fc4(x)
        x = self.binary(x)
        return x

model = Model1()
model.cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0002)
model.train()
criterion = nn.HingeEmbeddingLoss().cuda()
for i in range(EPOCH):
    x = Variable(torch.FloatTensor(torch.randn(batch_size, INPUT_SIZE)).cuda())
    logits = model(x)
    loss = criterion(logits, x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
Then, I got error which said
result = self.saved_tensors[0]
RuntimeError: Trying to backward through the graph second time, but the buffers have already been freed. Please specify retain_variables=True when calling backward for the first time.
Though the error message describes the problem somehow, I couldn’t figure out how to solve it.
How can I make backward work correctly?
Thanks
|
st117518
|
By the way, what I want to do is:
Forward: output a binary value in [-1, 1]; when the input is x, the output should be sign(x).
Backward: return the cost gradient multiplied by hardtanh(x), where x is the same input as in forward.
I would really appreciate your help.
Thanks.
|
st117519
|
It seems to me that you use self.binary twice in the forward pass. (Although I don’t see that in your code.)
To avoid all possible conflicts, I would suggest not saving the Function instance as a class attribute (especially if the Function has buffers).
Another suggestion is to replace your binary_activate with another simple Function (that also uses save_for_backward), and see whether it’s a problem with your implementation of binary_activate, or with the way you use the Function.
|
st117520
|
Thank you for your comment.
I tried to replace the binary_activate as follows, but I still get the same error,
so it seems to be about the way the Function is used.
class binary_activate(Function):
    def forward(self, x):
        self.save_for_backward(x)
        return x

    def backward(self, grad_output):
        result = self.saved_tensors[0]
        grad_input = result
        return grad_input
"I would suggest not to save the Function instance as a class attribute(especially if the Function has buffer)."
Excuse me, but I don’t understand what this means. I also tried declaring and calling the binary_activate class outside the Model class, but I still get the same error.
Is this the way you meant? If not, could you tell me more in detail?
Thanks.
|
st117521
|
The lines self.binary = binary_activate, and then the usage later, x = self.binary(x), is what is causing the problem. You are saving one instance of your binary_activate class and reusing that one instance every time you forward through Model1. Instead, create a new instance of binary_activate on each forward, i.e. get rid of self.binary and replace the line using it in forward with x = binary_activate()(x).
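A minimal sketch of that fix applied to the forward above:
def forward(self, x):
    x = F.relu(self.fc1(x))
    x = F.relu(self.fc2(x))
    x = F.relu(self.fc3(x))
    x = self.fc4(x)
    x = binary_activate()(x)  # fresh instance per call, so saved tensors are never shared
    return x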
|
st117522
|
The parameter affine in nn.BatchNorm2d() is True when I train the model,
and I need to set affine=False when I test the model.
Is my understanding correct?
|
st117523
|
No. affine only switches the gamma and beta transform that you can see in the docs. Use module.eval() to switch the module to evaluation mode.
Also, remember that using input Variables created with volatile=True will make inference much faster and will greatly reduce memory usage.
|
st117524
|
No, it doesn’t. But you only need the input to be volatile to perform inference efficiently. No need to touch the parameters, as volatile=True takes precedence over all flags, and doesn’t even consider parameter flags. Just create the input like that: Variable(input, volatile=True)
|
st117525
|
No. It’s never recommended to reuse the same Variables between iterations. This will likely lead to graphs growing indefinitely and increasing memory usage. Just recreate them every time, it’s extremely cheap to do.
|
st117526
|
What I did in the WassersteinGAN code is not an optimal approach, i have to fix that code (will do now).
|
st117527
|
@Veril note that these are Tensors not Variables. Modifying tensors as many times as you want is ok - they don’t remember what you did with them. Variables do, and this makes the graphs longer and longer.
|
st117528
|
It would be a great slowdown, but not necessarily a leak. We free the buffers once you call backward so you’d be only using up CPU memory for the graph nodes.
|
st117529
|
I’ve fixed the WassersteinGAN code via https://github.com/martinarjovsky/WassersteinGAN/commit/e553093d3b2a44a1b6d0c8739a973598af6aa535 161
@apaszke in my (old code) case I’ve hacked up reusing variables carefully, but it’s a hack.
|
st117530
|
One problem occurred when I called model.eval(): the output of the network grew exponentially and the sigmoid function at the end of the network in my cost function overflowed. When I don’t apply model.eval() the network does not generate any warning or error, but when I do, it does. Could you please tell me why this problem happens? Here is my network structure:
YoloV2 (
(path1): ModuleList (
(0): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True)
(2): LeakyReLU (0.1)
(3): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
(4): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(5): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
(6): LeakyReLU (0.1)
(7): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
(8): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(9): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True)
(10): LeakyReLU (0.1)
(11): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(12): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
(13): LeakyReLU (0.1)
(14): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(15): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True)
(16): LeakyReLU (0.1)
(17): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
(18): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(19): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
(20): LeakyReLU (0.1)
(21): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(22): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True)
(23): LeakyReLU (0.1)
(24): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(25): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
(26): LeakyReLU (0.1)
(27): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
(28): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(29): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
(30): LeakyReLU (0.1)
(31): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(32): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
(33): LeakyReLU (0.1)
(34): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(35): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
(36): LeakyReLU (0.1)
(37): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(38): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
(39): LeakyReLU (0.1)
(40): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(41): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
(42): LeakyReLU (0.1)
)
(parallel1): ModuleList (
(0): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
(1): Conv2d(512, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(2): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True)
(3): LeakyReLU (0.1)
(4): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(5): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
(6): LeakyReLU (0.1)
(7): Conv2d(512, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(8): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True)
(9): LeakyReLU (0.1)
(10): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(11): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
(12): LeakyReLU (0.1)
(13): Conv2d(512, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(14): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True)
(15): LeakyReLU (0.1)
(16): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(17): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True)
(18): LeakyReLU (0.1)
(19): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(20): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True)
(21): LeakyReLU (0.1)
)
(parallel2): ModuleList (
(0): Conv2d(512, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
(2): LeakyReLU (0.1)
(3): space_to_depth (
)
)
(path2): ModuleList (
(0): Conv2d(1280, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True)
(2): LeakyReLU (0.1)
(3): Conv2d(1024, 425, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
)
Here is a comprehensive description of my problem:
https://github.com/pytorch/pytorch/issues/1725
|
st117531
|
Hey
Is there a PyTorch equivalent to Theano’s hard sigmoid?
http://deeplearning.net/software/theano/library/tensor/nnet/nnet.html
Thanks
|
st117532
|
nn.Hardtanh has optional parameters for the min and max linear values; depending on the details of your problem this might be sufficient.
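A sketch, assuming Theano’s definition hard_sigmoid(x) = clip(0.2 * x + 0.5, 0, 1):
import torch.nn.functional as F

def hard_sigmoid(x):
    # piecewise-linear approximation of the sigmoid, clipped to [0, 1]
    return F.hardtanh(0.2 * x + 0.5, 0.0, 1.0)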
|
st117533
|
I wanted to implement NLP from Scratch (Collobert et al., 2011). NLP from Scratch has two training models:
word-level log likelihood and
sentence-level log likelihood.
Is there any package in PyTorch for computing sentence-level log likelihood and its parameter updates?
Or could anybody give some hints for such an implementation?
Thanks in advance!
|
st117534
|
Hi,
I am searching for ways to compute the loss using slices. The code below is an example that works, but it has a fixed length (1:30). What I need requires variable lengths, one for each element of my dataset.
To explain the context: I have a convnet, but one dimension of the examples in my dataset has variable size. I am not sure what I have to do, but I am using padding to equalize the lengths of my data. I think it would be easier to slice y_pred and y after the conv to calculate the loss.
Any ideas how I could do this? Are there better ways to do this?
I can’t transform the data.
x = Variable(torch.randn(160, 21, 60, 1))
y = Variable(torch.LongTensor(160, 60, 1).random_(0, 4), requires_grad=False)
model = Model()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
for t in range(500):
    y_pred = model(x)
    loss = criterion(y_pred.narrow(2, 1, 30), y.narrow(1, 1, 30))
    print(t, loss.data[0])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
|
st117535
|
I don’t get what you mean. Do you want
size2 = y_pred.size(2)
loss = criterion(y_pred, y[:, :size2, :])
It should work; you can pass inputs of various lengths to the criterion.
|
st117536
|
@chenyuntc Thank you for answering me.
Each example in my dataset has a different width. I am not sure if I could use this in a convnet, so I made all examples equal in width by padding with 0s; all examples now have the width of the widest one.
I am trying to understand how I could do this. Sorry if the answer to my problem is very obvious; I’m still studying all this stuff.
If I understand your example, I should use minibatch=1, right?
|
st117537
|
Is it an NLP problem? Inputs must have the same width within a batch. You can use pack_padded_sequence or handle it with your own code (it seems you have done this with 0-padding).
Maybe you want to reshape y and y_pred to calculate the loss, as below:
loss = criterion(y_pred.view(-1, 4), y.view(-1))
Sorry if I got it wrong; I am not familiar with this kind of problem.
|
st117538
|
What should I refer to understand the concepts of multiprocessing package of Torch?
|
st117539
|
As you might guess, the documentation is a great starting point:
http://pytorch.org/docs/notes/multiprocessing.html
http://pytorch.org/docs/multiprocessing.html
Since it’s almost the same as Python’s native module, its docs are useful too.
|
st117540
|
Time for your regularly scheduled custom loss-function question !
I’m trying to create a variant of MSE. My target and my prediction are RGB color images. What I’d like to do is weigh certain target colors differently, so that pixels that were originally black don’t get penalized as much as non-black pixels.
For grayscale images, this can be done rather easily:
# Pixels originally black should only contribute 50%
scale = 0.5
y_true_f = y_true.view(-1)
y_pred_f = y_pred.view(-1)
diff = y_true_f - y_pred_f
zero = y_true.eq(0).float()
result = diff.addcmul(-scale, zero, y_true_f)
mse = torch.mean(result**2,dim=0)
The problem: My images are normalized + standardized, per channel, before being fed into the net. So the new 0-value for red might be -0.3, the 0-value for green might be 0.01, and the 0-value for blue might be -0.1.
How can I alter the above so that instead of checking for 0 directly, I check for the ‘transformed’ 0-value in the R,G,B channels…simultaneously? y_true comes in as [batch, channel, y, x] so I can use indexing to split it out… But I don’t just want to perform the above if the Red value is equal to the transformed 0… or if the G value == the transformed 0… but rather if All 3 RGB channels equal their respective transformed zero.
I hope the question is clear
|
st117541
|
Figured this out.
Though as a human I understand all 3 channel components as being inherently tied together, for the machine they need not be. If my target is (0,0,0) and I have two predictions, (100,100,100) and (50,100,100), the loss assigned to the latter would of course be less than the former. Tl;dr: handle each channel individually, since they each have their own transformed zero values.
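For reference, a per-channel sketch of what I ended up with (the transformed zero values zr, zg, zb are hypothetical and come from my normalization stats):
scale = 0.5
losses = []
for c, zc in enumerate([zr, zg, zb]):        # each channel has its own "black" value
    t = y_true[:, c].contiguous().view(-1)
    p = y_pred[:, c].contiguous().view(-1)
    black = t.eq(zc).float()                 # 1 where this channel equals its transformed zero
    w = 1.0 - scale * black                  # originally-black pixels only contribute 50%
    losses.append(torch.mean((w * (t - p)) ** 2))
loss = sum(losses) / 3.0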
|
st117542
|
I have a two-stream CNN. One stream is on cuda:0 and outputs feature f0. The other is on cuda:1 and outputs feature f1. The features are moved to the CPU by f0 = f0.cpu(), f1 = f1.cpu(). After that, the loss = L2(f0, f1) is computed on the CPU.
In the backward stage, errors are reported as follows:
File “bilinear.py”, line 585, in <module>
train(i)
File “bilinear.py”, line 517, in train
loss.backward()
File “/home/zhengyun.zy/anaconda/lib/python2.7/site-packages/torch/autograd/variable.py”, line 146, in backward
self._execution_engine.run_backward((self,), (gradient,), retain_variables)
File “/home/zhengyun.zy/anaconda/lib/python2.7/site-packages/torch/nn/_functions/thnn/auto.py”, line 175, in backward
update_grad_input_fn(self.backend.library_state, input, grad_output, grad_input, *gi_args)
RuntimeError: Assertion `THCTensor(checkGPU)(state, 3, input, gradInput, gradOutput)’ failed. Some of weight/gradient/input tensors are located on different GPUs. Please move them to a single one. at /py/conda-bld/pytorch_1493676237139/work/torch/lib/THCUNN/generic/Threshold.cu:49
|
st117543
|
Hello there,
May someone provide some codes to help me how can I convert keras convolution, batchnorm, leakyRelu and maxpooling layers to pytorch equivalent? Are there any differences between these layers in keras and pytorch? I have wrote some codes to do this but not sure about the correctness.
I have a .h5 file which has the pretrained model in keras. What I want to do is to create a dictionary variable in python which is indexed by the layer name and has the definition of each layer in pytorch. Here is the code:
def loadWeights(self):
    model = load_model('keras_model.h5')
    j = json.loads(model.to_json())
    for i, layer in enumerate(j['config']['layers']):
        ln = layer['name']
        l = model.get_layer(name=layer['name'])
        if layer['class_name'] != 'Concatenate':
            self.lid[ln] = l.input_shape[3]
        else:
            self.lid[ln] = l.input_shape[0][3]
        self.lod[ln] = l.output_shape[3]
        w = l.get_weights()
        if layer['class_name'] == 'Conv2D':
            filter_size = layer['config']['kernel_size'][0]
            if filter_size == 3:
                self.layers[ln] = nn.Conv2d(self.lid[ln], self.lod[ln],
                                            filter_size, padding=1, stride=1, bias=False)
            elif filter_size == 1:
                self.layers[ln] = nn.Conv2d(self.lid[ln], self.lod[ln],
                                            filter_size, padding=0, stride=1, bias=False)
            self.layers[ln].weight.data = torch.from_numpy(w[0].transpose((3, 2, 0, 1)))
        elif layer['class_name'] == 'BatchNormalization':
            self.layers[ln] = nn.BatchNorm2d(self.lid[ln])
            self.layers[ln].weight.data = torch.from_numpy(w[0])
            self.layers[ln].bias.data = torch.from_numpy(w[1])
            self.layers[ln].running_mean.data = torch.from_numpy(w[2])
            self.layers[ln].running_var.data = torch.from_numpy(w[3])
        elif layer['class_name'] == 'LeakyReLU':
            self.layers[ln] = nn.LeakyReLU(.1)
        elif layer['class_name'] == 'MaxPooling2D':
            self.layers[ln] = nn.MaxPool2d(2, 2)
        elif layer['class_name'] == 'Lambda':
            self.layers[ln] = scale_to_depth(2)
May someone verify this code for me?
Thanks by the way!
|
st117544
|
I have a list of tensors from which I am sampling using the torch.multinomial function. I should mention that the list is formed from the last layer of a convnet. I am sampling from this list and forwarding the sampled feature cube through an lstm. The LSTM solves a regression problem by fitting the sampled feature cube to some labeled data set which I provide.
When I call backward() on my loss however, I get
13 def _do_backward(self, grad_output, retain_variables):
14 if self.reward is _NOT_PROVIDED:
---> 15 raise RuntimeError("differentiating stochastic functions requires "
global RuntimeError = undefined
16 "providing a reward")
17 result = super(StochasticFunction, self)._do_backward((self.reward,), retain_variables)
RuntimeError: differentiating stochastic functions requires providing a reward
> /home/lex/anaconda2/envs/py27/lib/python2.7/site-packages/torch/autograd/stochastic_function.py(15)_do_backward()
13 def _do_backward(self, grad_output, retain_variables):
14 if self.reward is _NOT_PROVIDED:
---> 15 raise RuntimeError("differentiating stochastic functions requires "
16 "providing a reward")
17 result = super(StochasticFunction, self)._do_backward((self.reward,), retain_variables)
What does it expect as reward?
FWIW, I am training like so:
clsfx_crit = nn.CrossEntropyLoss()
regress_crit = nn.MSELoss()
clsfx_optimizer = torch.optim.Adam(resnet.parameters(), clr)
rnn_optimizer = optim.SGD(regressor.parameters(), rlr)

# Train classifier
for epoch in range(maxIter):  # run through the images maxIter times
    for i, (train_X, train_Y) in enumerate(train_loader):
        images = Variable(train_X)
        labels = Variable(train_Y)
        # rnn input
        rtargets = targ_X[:, i:i + regressLength, :]
        # reshape targets for inputs
        rtargets = Variable(rtargets.view(regressLength, -1))
        # Forward + Backward + Optimize
        clsfx_optimizer.zero_grad()
        rnn_optimizer.zero_grad()
        # predict classifier outs and regressor outputs
        outputs = resnet(images)
        routputs = regressor(rtrain_X)
        # compute loss
        loss = clsfx_crit(outputs, labels)
        rloss = regress_crit(routputs, rtargets)
        # backward pass
        loss.backward()
        rloss.backward()
        # step optimizer
        clsfx_optimizer.step()
        rnn_optimizer.step()
|
st117545
|
When using a stochastic function in the graph (such as torch.multinomial), you have to give it a reward before backproping through the graph.
Here’s an example:
github.com
pytorch/examples/blob/master/reinforcement_learning/reinforce.py#L58-L70
def finish_episode():
    R = 0
    policy_loss = []
    rewards = []
    for r in policy.rewards[::-1]:
        R = r + args.gamma * R
        rewards.insert(0, R)
    rewards = torch.Tensor(rewards)
    rewards = (rewards - rewards.mean()) / (rewards.std() + np.finfo(np.float32).eps)
    for log_prob, reward in zip(policy.saved_log_probs, rewards):
        policy_loss.append(-log_prob * reward)
    optimizer.zero_grad()
|
st117546
|
Thank you!
I have not done a lot of work in gym. And to be honest, I do not understand why the concept of reward should come up in a logistic regression problem.
Granted that the input to the LSTM is sampled using a multinomial function, the LSTM should not have to think of long term rewards as in the reinforce episodic setting example that you gave above.
If you are saying that I should cast the regression learning problem into the form of a reinforcement learning problem, in my case, what would you suggest my reward should be before I backprop? Targets I predicted during the forward pass? How exactly would you recommend I call rloss.backward() as in the code snippet I gave early on?
Please pardon my many questions but I am a little confused about the introduction of a reward variable into the graph and a little more clarification would greatly suffice.
|
st117547
|
Sampling from a multinomial distribution is a discrete stochastic operation that you can’t backpropagate through. In other words, if you get exact gradients for your LSTM parameters by backpropagating through the regression and LSTM parts of your network, you can’t continue this process and get exact gradients for your convnet parameters.
You can do either of two other things, though. You can set the parameters of the convnet to requires_grad=False, so they are fixed and don’t need gradients. Or you can use a strategy for estimating the gradients of the convnet parameters; in order to do this you need to provide the reward (or the negative of the loss) for each sample directly to the stochastic operation node. This is a process that’s typically used in policy gradient reinforcement learning, so the API uses terms and concepts from that world.
The math behind this is laid out in a paper called Stochastic Computation Graphs, by Schulman et al.
|
st117548
|
Thank you very much indeed for the explanation and the paper reference.
I just read through the Schulman’s paper. I am however unclear about certain aspects of your answers:
(i) why did you mention that I have to use the negative of the loss function as an argument when I am calling rloss.backward()?
(ii) What are best practices for choosing reward scalars if I decide against using the negative of the loss for estimating the gradients?
Would appreciate your response.
Thanks!
|
st117549
|
Loss goes down, reward goes up. Hence the reward can be the negative of the loss function.
|
st117550
|
Thanks!
And sorry for the much bother. I am doing the backprop like this:
# predict classifier outs and regressor outputs
outputs = resnet(images)
routputs = regressor(rtrain_X)
# compute loss
loss = clsfx_crit(outputs, labels)
rloss = regress_crit(routputs, rtargets)
# backward pass
loss.backward()
rloss.backward(-rloss)
# step optimizer
clsfx_optimizer.step()
rnn_optimizer.step()
print("Epoch [%d/%d], Iter [%d] cLoss: %.8f, rLoss: %.4f" % (epoch + 1, maxIter, i + 1,
                                                             loss.data[0], rloss.data[0]))
My rloss is clearly not a tuple but the stack trace gives this error:
rloss = Variable containing:
1.00000e+05 *
4.8973
[torch.cuda.DoubleTensor of size 1 (GPU 0)]
470
471 # step optimizer
/home/lex/anaconda2/envs/py27/lib/python2.7/site-packages/torch/autograd/variable.pyc in backward(self=Variable containing:
1.00000e+05 *
4.8973
[torch.cuda.DoubleTensor of size 1 (GPU 0)]
, gradient=Variable containing:
1.00000e+05 *
-4.8973
[torch.cuda.DoubleTensor of size 1 (GPU 0)]
, retain_variables=False)
144 'or with gradient w.r.t. the variable')
145 gradient = self.data.new().resize_as_(self.data).fill_(1)
--> 146 self._execution_engine.run_backward((self,), (gradient,), retain_variables)
self._execution_engine.run_backward = <built-in method run_backward of torch._C._EngineBase object at 0x7f8dd92b9210>
self = Variable containing:
1.00000e+05 *
4.8973
[torch.cuda.DoubleTensor of size 1 (GPU 0)]
gradient = Variable containing:
1.00000e+05 *
-4.8973
[torch.cuda.DoubleTensor of size 1 (GPU 0)]
retain_variables = False
147
148 def register_hook(self, hook):
RuntimeError: element 0 of gradients tuple is not a Tensor or None
> /home/lex/anaconda2/envs/py27/lib/python2.7/site-packages/torch/autograd/variable.py(146)backward()
144 'or with gradient w.r.t. the variable')
145 gradient = self.data.new().resize_as_(self.data).fill_(1)
--> 146 self._execution_engine.run_backward((self,), (gradient,), retain_variables)
147
148 def register_hook(self, hook):
What type of data does backward really expect?
|
st117551
|
calling the backward with an explicitly specified Tensor variable does not help either:
rloss.backward(torch.Tensor([1]).cuda())
gives
147
148 def register_hook(self, hook):
/home/lex/anaconda2/envs/py27/lib/python2.7/site-packages/torch/autograd/stochastic_function.pyc in _do_backward(self=<torch.autograd._functions.stochastic.Multinomial object>, grad_output=(), retain_variables=True)
13 def _do_backward(self, grad_output, retain_variables):
14 if self.reward is _NOT_PROVIDED:
---> 15 raise RuntimeError("differentiating stochastic functions requires "
global RuntimeError = undefined
16 "providing a reward")
17 result = super(StochasticFunction, self)._do_backward((self.reward,), retain_variables)
RuntimeError: differentiating stochastic functions requires providing a reward
> /home/lex/anaconda2/envs/py27/lib/python2.7/site-packages/torch/autograd/stochastic_function.py(15)_do_backward()
13 def _do_backward(self, grad_output, retain_variables):
14 if self.reward is _NOT_PROVIDED:
---> 15 raise RuntimeError("differentiating stochastic functions requires "
16 "providing a reward")
17 result = super(StochasticFunction, self)._do_backward((self.reward,), retain_variables)
|
st117552
|
see the example that i pointed you to, you are doing this wrong. you need to call .reinforce on the stochastic outputs before calling backward.
|
st117553
|
Thank you very kindly. I indeed see my error.
Also, I noticed in the example that the saved_actions are drawn from a multinomial distribution (hence it is the stochastic variable). I am not all too familiar with gym’s API calls but it seems the rewards variable is the result of a deterministic env.step(...) call. Am I correct?
I have a similar code structure. I sample from a list of tensors using the multinomial function and then save the results from the sampling into a list of actions in my network model class.
What should the reward variable in my situation be? The output when I call forward on my stochastic variable by my lstm regressor? If I do that, it rightly raises a runtime error:
raise RuntimeError("reinforce() can be only called on outputs "
RuntimeError: reinforce() can be only called on outputs of stochastic functions
So action in your example is a stochastic variable, I agree. But you called action.reinforce on r and everything is hunky-dory. I guess my question is: what is the rationale for doing this
R = 0
rewards = []
for r in policy.rewards[::-1]:
    R = r + args.gamma * R
    rewards.insert(0, R)
rewards = torch.Tensor(rewards)
rewards = (rewards - rewards.mean()) / (rewards.std() + np.finfo(np.float32).eps)
and r in policy.rewards[::-1] is stochastic?
|
st117554
|
In the RL example, action is the immediate result of multinomial (i.e., it’s a stochastic variable) and reward is an ordinary tensor.
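Schematically, the pre-0.4 pattern from that example looks like this (a sketch):
from torch import autograd

probs = policy(state)             # Variable of action probabilities
action = probs.multinomial()      # stochastic node: backward needs a reward, not a gradient
# ... step the environment / compute a reward r (an ordinary tensor) ...
action.reinforce(r)               # attach the reward to the stochastic node
autograd.backward([action], [None])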
|
st117555
|
Could you point me to where this reinforce function is defined and implemented? For some weird reason, I keep getting
RuntimeError: reinforce() can be only called on outputs of stochastic functions
when I call reinforce as follows:
"""
regress_input is the output of a multinomial sampling, i.e. a Variable containing a torch.DoubleTensor of size 4096x1
r is a Variable containing: 0.2610 [torch.DoubleTensor of size 1]
"""
regress_input.reinforce(r)
|
st117556
|
Looks like the problem is coming from me. r was a LongTensor in my case. Casting it to double seems to fix the issue.
Still, I do not understand how rewards are generated as in the reinforce.py example that @smth earlier mentioned. In the gym example, they seem to the the output of the env.step() function. In my case, I am extracting the last layer of a convnet, sampling from this layer to generate the input to a regressor network. In order to call backward on the stochastic input, I have learnt I have to compute the rewards by calling .reinforce(). I am not sure where the rewards should be populated in my framework. For now, I am doing something like this:
R, rewards = 0, []
for r in policy.rewards[::-1]:  # my policy.rewards is initially an empty list, as in the example
    R = r + args.gamma * R
    rewards.insert(0, R)
rewards = torch.Tensor(rewards)
# ... ... ... .... ...
From Ronald Williams’ paper, the REINFORCE algorithm is given by:
\Delta w_{ij} = \alpha_{ij} (r - b_{ij}) e_{ij}
When we call the StochasticVariable.reinforce(someTensor) function in pytorch, what is going on under the hood? How are the reinforcement baseline b_{ij}, characteristic eligibility, e_{ij} and the learning rate factor, \alpha_{ij} computed?
I would like to know to be sure I am doing the correct thing. Sorry for my troubles but thanks for your help so far. I seem to be getting there.
|
st117557
|
I am trying to calculate the gradient of J(f(x)/t, y) with respect to the input x, where J is the cross-entropy loss function and f(x) is the neural network output, y is the label and t is the scaling factor. So I am using the following code to implement it,
inputs = Variable(images.cuda(CUDA_DEVICE), requires_grad=True)
outputs = net(inputs)
outputs.data = outputs.data / t
labels = Variable(labels.cuda(CUDA_DEVICE))
loss = criterion(outputs, labels)
loss.backward()
But it seems the gradient with respect to the input is wrong. When t is large, then inputs.grad.data.norm(1) should be very small, but it is not.
|
st117558
|
Don’t operate on the data; any change to .data will not be recorded in the graph.
Instead, if you want to do an in-place division, you can do outputs.div_(t). However, as suggested in the reference, it is best to just avoid in-place operations.
Reference:
http://pytorch.org/docs/notes/autograd.html#in-place-operations-on-variables
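A sketch of the out-of-place version for the snippet above:
inputs = Variable(images.cuda(CUDA_DEVICE), requires_grad=True)
outputs = net(inputs) / t         # the division is now recorded by autograd
labels = Variable(labels.cuda(CUDA_DEVICE))
loss = criterion(outputs, labels)
loss.backward()                   # inputs.grad now reflects d J(f(x)/t, y) / dx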
|
st117559
|
I inserted a 128-dim fc layer (activated by sigmoid) before the last fc layer and changed the output to be 10-dim, froze all the earlier layers, and trained the last 2 layers on my dataset (about 5000 pics from ImageNet). Before the first training iteration I checked the output of the layer right before the 128-dim layer; the values were normal, but after 10 iterations they became all negative, and the loss increased and didn’t decrease in the succeeding 10 iterations. I don’t know why.
|
st117560
|
I’ve implemented cosine normalization [1], attempting to match the paper [2].
However, when I use this code, I end up getting NaNs in _grad.data when calling .backward() after a few thousand samples presented to my network (exact count varies). Without using CosNorm (either with stock ResNet, or just switching to BatchNorm), the problem seems to go away (can’t prove a negative, though, since it might just be slower to happen).
My primary question is: How can I reliably investigate issues like this? I registered hooks, and nothing passed in there had NaNs, and so I eventually resorted to randomly disabling or changing parts of the code until I got enough clues to find a culprit (and even now, all the evidence I have is circumstantial).
My secondary question is: Any ideas why this normalization would be unstable? I was exclusively wrapping Conv2d instances.
Thanks!
[1] https://gist.github.com/elistevens/5383d51c6c3b3f756ce3b312ef53f3a8
[2] https://arxiv.org/pdf/1702.05870v2.pdf
|
st117561
|
if x.norm or w.norm is 0, then you essentially have 1e-5 in the denominator.
If you pass that gradient through a couple of BatchNorm layers’ backwards, then it might start generating NaNs, and not at the CosNorm itself.
|
st117562
|
While I wasn’t using BatchNorm in the network when using CosNorm, I can see the general point of small numbers divided by other small numbers might be unstable.
Adding an additional + 1.0 to the denominator has been stable for about 40 epochs of training now, though I can’t say if it’s had any significant impact on the ability of the layer to normalize or not. I worry that repeated layers of this will end up pushing the norm of x towards zero.
I whipped up a hack of a script to see the numerical effects, and it seems stable for a variety of weight scalings and normalization constants, though:
import random

x = random.random()
w = random.random() + 0.5

def simcos(x, layer_count):
    avg_list = [[] for n in range(layer_count)]
    for n in range(100):
        w = [(random.random() + 0.5) * 3 for n in range(layer_count)]
        xin = x
        for i in range(layer_count):
            xout = xin * w[i] * random.random()
            xnorm = xout * 2 / (xin * w[i] + 1)
            avg_list[i].append(xnorm)
            xin = xnorm
    return [sum(l) / 100 for l in avg_list]

for x in [0.1, 0.5, 0.9, 1.0, 1.5, 2.0, 10.0]:
    print(x, simcos(x, 50)[::5])
|
st117563
|
I saved my pre-trained model and load it when re-training is needed, but sometimes I modify the net’s structure, and I want the program to automatically check whether the saved parameters still fit; if not, train the net from scratch. Is there a method that can do this?
|
st117564
|
i did it this way:
# https://discuss.pytorch.org/t/how-to-load-part-of-pre-trained-model/1113/3
def load_valid(model, pretrained_file, skip_list=None):
    pretrained_dict = torch.load(pretrained_file)
    model_dict = model.state_dict()
    # 1. filter out unnecessary keys
    pretrained_dict1 = {k: v for k, v in pretrained_dict.items()
                        if k in model_dict and k not in skip_list}
    # 2. overwrite entries in the existing state dict
    model_dict.update(pretrained_dict1)
    model.load_state_dict(model_dict)

## -------------------------
# example of usage
pretrained_file = '/root/share/project/pytorch/data/pretrain/inception/inception_v3_google-1a9a5a14.pth'
net = Inception3(num_classes=10)
if pretrained_file is not None:  # pretrain
    skip_list = ['fc.weight', 'fc.bias']
    load_valid(net, pretrained_file, skip_list=skip_list)
|
st117565
|
I am writing my own transforms for data augmentation. For example:
def my_transform_1(x, params=default_values):
    # do something
    return x

def my_transform_2(x, params=default_values):
    # do something
    return x
Following the documentation, I should have used the following code in the Dataset:
train_dataset = MyCustomtDataset(...,
    transform=transforms.Compose([
        transforms.Lambda(lambda x: my_transform_1(x)),
        transforms.Lambda(lambda x: my_transform_2(x)),
    ]),
    ....)
However, my code still works if I use:
train_dataset = MyCustomtDataset(...,
    transform=[
        lambda x: my_transform_1(x),
        lambda x: my_transform_2(x),
    ],
    ....)

# note:
class MyCustomtDataset(Dataset):
    ...
    def __getitem__(self, index):
        img = self.images[index]
        if self.transform is not None:
            for t in self.transform:
                img = t(img)  # taking care of multiple transforms here
What is the use of transforms.Compose and transforms.Lambda? I looked at their code, but found that they are just empty wrappers. Is my code OK?
class Lambda(object):
    '''Applies a lambda as a transform.'''
    def __init__(self, lambd):
        assert isinstance(lambd, types.LambdaType)
        self.lambd = lambd

    def __call__(self, img):
        return self.lambd(img)

class Compose(object):
    '''Composes several transforms together.'''
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, img):
        for t in self.transforms:
            img = t(img)
        return img
|
st117566
|
Yep, they are empty wrappers, much like torch.utils.data.Dataset.
They are around to make composing and creating new transforms trivial.
BTW, there is no requirement to pass transform as an argument to class MyCustomtDataset(torch.utils.data.Dataset). The only requirement is that __getitem__ and __len__ are implemented; see the sketch below.
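A minimal sketch of such a dataset (no transform argument required):
from torch.utils.data import Dataset

class MyCustomtDataset(Dataset):
    def __init__(self, images, transforms=None):
        self.images = images
        self.transforms = transforms or []

    def __getitem__(self, index):
        img = self.images[index]
        for t in self.transforms:  # any callables work; Compose/Lambda are optional sugar
            img = t(img)
        return img

    def __len__(self):
        return len(self.images)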
|
st117567
|
Hi, I want to convert AlexNet to a fully convolutional AlexNet. I’m trying to reshape the fc weights into convolutional form using the following line:
fcn_model.conv1.load_state_dict({"weight": alexnet.classifier.1.state_dict()["weight"].view(6, 6, 4096, 1), "bias"...
but I get a syntax error at this place: alexnet.classifier.1.
So, how should I access the state_dict of the modules inside an nn.Sequential?
Thanks
Saeed
|