st119268
Hello, I am trying to play around with the multi-process implementation and I ran into a problem with the grad being None when I initialize my model. Here is a simple example:

from __future__ import print_function
import argparse
import os
import sys
import torch
import torch.multiprocessing as mp
import torch.nn as nn
import torch.nn.functional as F

class MyModel(torch.nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.main = nn.Sequential(nn.Linear(256, 6), nn.Softmax())

    def forward(self, x):
        return self.main(x)

def train(model):
    # This for loop will break sharing of gradient buffers. It's not
    # necessary but it reduces the contention, and has a small memory cost
    # (equal to the total size of parameters).
    for param in model.parameters():
        param.grad.data = param.grad.data.clone()
    # Construct data_loader, optimizer, etc.
    for data, labels in data_loader:
        input1 = torch.ones(256)
        optimizer.zero_grad()
        loss_fn(model(input1), torch.zeros(6)).backward()
        optimizer.step()  # This will update the shared parameters

if __name__ == '__main__':
    num_processes = 4
    model = MyModel()
    # NOTE: this is required for the ``fork`` method to work
    model.share_memory()
    processes = []
    for rank in range(num_processes):
        p = mp.Process(target=train, args=(model,))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()

I get the following error (the same traceback is printed by each of the four processes):

Process Process-1:
Traceback (most recent call last):
  File "/home/jtremblay/anaconda2/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/home/jtremblay/anaconda2/lib/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jtremblay/code/pytorch-a3c/grad_example.py", line 25, in train
    param.grad.data = param.grad.data.clone()
AttributeError: 'NoneType' object has no attribute 'data'

I know I could create the gradients manually, but I am not sure this is the intended behaviour.
st119269
param.grad can now be None where it used to be zero. We made this change to better support sparse gradients. When you share a model, the gradients are not shared. That means you don't need to do param.grad.data = param.grad.data.clone(); it won't do anything and, as you saw, it breaks when .grad is None. So if you just remove the cloning, your code should work fine.
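For reference, a minimal sketch of the corrected worker with the gradient-cloning loop removed; the optimizer, loss and data below are placeholders, since the original data_loader construction isn't shown:

import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable

def train(model):
    # placeholders: construct your real data_loader, optimizer and loss here
    optimizer = optim.SGD(model.parameters(), lr=0.01)
    criterion = nn.MSELoss()
    for _ in range(100):                      # stand-in for iterating over a real data_loader
        input1 = Variable(torch.ones(1, 256))
        target = Variable(torch.zeros(1, 6))
        optimizer.zero_grad()
        criterion(model(input1), target).backward()
        optimizer.step()                      # this updates the shared parameters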
st119270
I’m working on annotating time series with recurrent neural networks (in particular, I’m trying to replicate Alex Graves’s experiments from his book Supervised Sequence Labelling with Recurrent Neural Networks), but I’m still a bit confused about the seq_len dimension in recurrent layers in PyTorch. As far as I understand, that length corresponds to the length of the unfolding in time for BPTT during training. Since PyTorch models have dynamic computational graphs, I can use the trained model on sequences of different lengths. What happens if I do so? E.g., if I have a network with two LSTM hidden layers, what is the difference in terms of the state of the LSTM units and their output if I feed a trained network with a sequence of 100 time steps at once, compared to feeding the same network with the same time series but one sample at a time? I understand there might be two questions here, one related to LSTM networks in general, the other about the specific PyTorch implementation.
st119271
It will be equivalent in terms of the output. You can use different seq_lengths with a single module, but passing in a larger tensor allows the backend to batch some operations, and that leads to a 3-10x speedup in most cases. Also, note that if you want to process each time step separately, we recommend using *Cell modules (the batched ones use too much memory at the moment). Another trick is to forward the largest possible sequence you’re going to use at the beginning. This will allow the allocator to prepare large enough buffers that can be reused throughout training.
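As a rough illustration of the two calling patterns (not an equivalence check, since the two modules below have independent weights):

import torch
import torch.nn as nn
from torch.autograd import Variable

lstm = nn.LSTM(input_size=10, hidden_size=20)
cell = nn.LSTMCell(input_size=10, hidden_size=20)

seq = Variable(torch.randn(100, 3, 10))    # (seq_len, batch, features), all 100 steps at once
out_full, _ = lstm(seq)                    # lets the backend batch the per-step operations

h = Variable(torch.zeros(3, 20))
c = Variable(torch.zeros(3, 20))
outputs = []
for t in range(seq.size(0)):               # one step at a time with the *Cell module
    h, c = cell(seq[t], (h, c))
    outputs.append(h)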
st119272
Hi, I am trying to load DenseNet-BC .t7 model using load_lua('densenet-121.t7'), but it throws an exception saying T7ReaderException: don't know how to deserialize Lua class cudnn.SpatialConvolution. How do I fix this?
st119273
Load your model in Lua, convert all cudnn modules to nn modules, save it, and then load it in PyTorch.
st119274
Hi, I really like the library so far. However, I was wondering if broadcasting is on the roadmap, and what would be your current suggestion for work-arounds? For a toy example, say I want to train an OLS linear regression model and want to compute the net input from a 2D dataset (w_1 * x_1 + w_2 * x_2 + bias) with 6 training instances (just to have a toy example for illustrative purposes). The following wouldn't work, since there's no broadcasting when the bias is added (x.mm(weights) + bias):

x = Variable(torch.Tensor([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0],
                           [4.0, 4.0], [5.0, 5.0], [6.0, 6.0]]))
weights = Variable(torch.zeros(2, 1))
bias = Variable(torch.zeros(1))
net_input = x.mm(weights) + bias

A workaround would be to add a column of 1s to the input tensor, I guess:

x = Variable(torch.Tensor([[1.0, 1.0, 1.0], [1.0, 2.0, 2.0], [1.0, 3.0, 3.0],
                           [1.0, 4.0, 4.0], [1.0, 5.0, 5.0], [1.0, 6.0, 6.0]]))
weights = Variable(torch.zeros(3, 1))
net_input = x.mm(weights)

What would be your thoughts on that?
st119275
Adding broadcasting to most operations is definitely on our roadmap, and will hopefully be ready quite soon. Since it's so often asked for, we'll probably reprioritize it and have it implemented soonish. For now there are two solutions: one that works already, and another that will be implemented very soon (presented in the same order).

You can do broadcasting by manually adding singleton dimensions and expanding along them. This doesn't do any memory copy, and only does some stride tricks:

net_input = x.mm(weights)
net_input += bias.unsqueeze(0).expand_as(net_input)

Another way, only slightly more convenient, would be a .broadcast function that combines possibly many unsqueezes and expands into a single call. You'd just pass in the desired size.

I realize that it might not be a suggestion for a tutorial, where you want to show all the ops, but if you want to do this in your code right now, I'd use the linear function from torch.nn.functional:

import torch.nn.functional as F
output = F.linear(input, weights, bias)  # bias is optional
st119276
Expanding on singleton dims is cool, but I still want to know: is .broadcast not available right now?
st119277
I decided to add that functionality to .expand, instead of adding a new function. This should work now:

x = torch.randn(1)
y = torch.randn(2, 3, 4, 5, 6)
print(y + x.expand_as(y))
st119278
That’s very handy! You can also avoid that entirely and just use the broadcasting ops I provided directly, which are largely compatible with numpy / theano / keras / tensorflow: Tip: using keras compatible tensor dot product and broadcasting ops
st119279
@jphoward actually it seems that the align function from your code could be replaced with expand_as, right?
st119280
apaszke: actually it seems that the align function from your code could be replaced with expand_as, right? @apaszke, aligning two tensors for broadcasting requires more than what expand_as provides. In particular, it’s not at all unusual to have each tensor have a unit axis in a different place, so each tensor needs to be expanded along different axes. expand_as does not do this, and the API doesn’t make it possible, since it has to be able to change both tensors and return them both. Let me know if you need more info - I’m not sure I’ve done a great job of explaining!
st119281
Hi everyone, I’m currently trying to reproduce the results from “Residual networks behave like ensembles of relatively shallow networks” (https://arxiv.org/abs/1605.06431) with pytorch, and was wondering if there is any way to select specific layers (e.g. inputs / outputs of the 40th residual block) within a network. The closest to an answer that I can think of is either using loss.creator.previous_functions[0][0] in a chained manner, or manually listing the hundreds of residual layers when building a model and assigning a return function to each of them. Any advice outside of pytorch would also be greatly appreciated. Thanks
st119282
I think this is an answer to your question: How to extract features of an image from a trained model

Hi all, I try examples/imagenet of pytorch. It is awesome and easy to train, but I wonder how I can forward an image and get the feature extraction result? After I train with examples/imagenet/main.py, I get a model file, model_best.pth.tar, and I load this file with model = torch.load('model_best.pth.tar'), which gives me a dict. How can I use the forward method to get a feature (like the fc7 layer's output of AlexNet) of an image with this model?
st119283
Thanks for the link. This seems to be a good starting point. I also learned that one can easily reach the weight values using model.state_dict().keys(), which lists the keys of all weights and biases belonging to the model, and then access their values with model.state_dict()['key_name_of_weight'].
st119284
Hi guys, I was wondering if someone can help me out on this one. My problem is the following. I load the mnist dataset using the data loader. I transform the data to numpy to do some operations and transform it back to torch.Tensor. then I do the following: train = torch.utils.data.TensorDataset(train_data, train_ds.train_labels) train_loader = torch.utils.data.DataLoader(train, batch_size=args.batch_size, shuffle=True) But when I run the training function I get the following errors: Has anyone faced a similar problem. TypeError Traceback (most recent call last) /home/user/mnist.py in <module>() 141 142 for epoch in range(1, args.epochs + 1): --> 143 train(epoch) 144 test(epoch) 145 /home/mnist.py in train(epoch) 111 data, target = Variable(data), Variable(target) 112 optimizer.zero_grad() --> 113 output = model(data) 114 loss = F.nll_loss(output, target) 115 loss.backward() /home/site-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs) 208 209 def __call__(self, *input, **kwargs): --> 210 result = self.forward(*input, **kwargs) 211 for hook in self._forward_hooks.values(): 212 hook_result = hook(self, input, result) /home/ in forward(self, x) 88 89 def forward(self, x): ---> 90 x = F.relu(self.linear1(x)) 91 x = F.linear_drop(x) 92 x = F.relu(self.linear2(x)) /home/lib/python2.7/site-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs) 208 209 def __call__(self, *input, **kwargs): --> 210 result = self.forward(*input, **kwargs) 211 for hook in self._forward_hooks.values(): 212 hook_result = hook(self, input, result) /home/lib/python2.7/site-packages/torch/nn/modules/linear.pyc in forward(self, input) 51 return self._backend.Linear()(input, self.weight) 52 else: ---> 53 return self._backend.Linear()(input, self.weight, self.bias) 54 55 def __repr__(self): /home/miniconda2/lib/python2.7/site-packages/torch/nn/_functions/linear.pyc in forward(self, input, weight, bias) 8 self.save_for_backward(input, weight, bias) 9 output = input.new(input.size(0), weight.size(0)) ---> 10 output.addmm_(0, 1, input, weight.t()) 11 if bias is not None: 12 # cuBLAS doesn't support 0 strides in sger, so we can't use expand TypeError: addmm_ received an invalid combination of arguments - got (int, int, torch.DoubleTensor, torch.FloatTensor), but expected one of: * (torch.DoubleTensor mat1, torch.DoubleTensor mat2) * (torch.SparseDoubleTensor mat1, torch.DoubleTensor mat2) * (float beta, torch.DoubleTensor mat1, torch.DoubleTensor mat2) * (float alpha, torch.DoubleTensor mat1, torch.DoubleTensor mat2) * (float beta, torch.SparseDoubleTensor mat1, torch.DoubleTensor mat2) * (float alpha, torch.SparseDoubleTensor mat1, torch.DoubleTensor mat2) * (float beta, float alpha, torch.DoubleTensor mat1, torch.DoubleTensor mat2) * (float beta, float alpha, torch.SparseDoubleTensor mat1, torch.DoubleTensor mat2) > /home/miniconda2/lib/python2.7/site-packages/torch/nn/_functions/linear.py(10)forward() 8 self.save_for_backward(input, weight, bias) 9 output = input.new(input.size(0), weight.size(0)) ---> 10 output.addmm_(0, 1, input, weight.t()) 11 if bias is not None: 12 # cuBLAS doesn't support 0 strides in sger, so we can't use expand
st119285
Solved by jhjungCode in post #2.
st119286
The type of the data from the dataloader seems to be float64, so you can change this to float32 like below:

# if img is a numpy array
img = img.astype('float32')
# if img is a torch.Tensor
img = img.float()
st119287
I implemented my model with pytorch framework for its concise and efficient design, and I wonder if there is any BibTex for citing?
st119288
No, we don’t have any papers published yet. You can use a link to our github repository.
st119289
I am trying to implement recurrent weighted average (RWA) in pytorch code 9. I just changed the RNN class to RWA from the practical pytorch’s char-rnn-classification example. But it is giving me the error below: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) in () 20 for epoch in range(1, n_epochs + 1): 21 category, line, category_tensor, line_tensor = training_pair() —> 22 output, loss = train(category_tensor, line_tensor) 23 current_loss += loss 24 <ipython-input-214-4bee22e367f7> in train(categroy_tensor, line_tensor) 7 loss = criterion(output, category_tensor) 8 print("loss:" , loss) ----> 9 loss.backward() 10 11 for p in rnn.parameters(): /usr/local/lib/python3.5/dist-packages/torch/autograd/variable.py in backward(self, gradient, retain_variables) 143 'or with gradient w.r.t. the variable') 144 gradient = self.data.new().resize_as_(self.data).fill_(1) --> 145 self._execution_engine.run_backward((self,), (gradient,), retain_variables) 146 147 def register_hook(self, hook): /usr/local/lib/python3.5/dist-packages/torch/autograd/_functions/basic_ops.py in backward(self, grad_output) 37 38 def backward(self, grad_output): ---> 39 a, b = self.saved_tensors 40 return grad_output.mul(b), grad_output.mul(a) 41 RuntimeError: Trying to backward through the graph second time, but the buffers have already been freed. Please specify retain_variables=True when calling backward for the first time. The code I changed looks like below: import torch.nn as nn from torch.autograd import Variable import torch.nn.functional as F class RWA(nn.Module): def __init__(self, input_size, hidden_size, output_size): super(RWA, self).__init__() self.max_steps = 1 self.batch_size = 1 self.hidden_size = hidden_size self.n = Variable(torch.Tensor(self.batch_size, hidden_size), requires_grad=True) self.d = Variable(torch.Tensor(self.batch_size, hidden_size), requires_grad=True) self.x2u = nn.Linear(input_size, hidden_size) self.c2g = nn.Linear(input_size + hidden_size, hidden_size) self.c2q = nn.Linear(input_size + hidden_size, hidden_size) self.out = nn.Linear(hidden_size, output_size) def forward(self, input, hidden): h = F.tanh(hidden) for i in range(len(input)): combined = torch.cat((input[i], h), 1) u = self.x2u(input[i]) g = self.c2g(combined) q = self.c2q(combined) q_greater = F.relu(q) scale = torch.exp(-q_greater) a_scale = torch.exp(q-q_greater) self.n = (self.n * scale) + ((u * F.tanh(g)) * a_scale) self.d = (self.d * scale) + a_scale h = F.tanh(torch.div(self.n, self.d)) output = self.out(h) return output, h def init_hidden(self): return Variable(torch.randn(1, self.hidden_size)) n_hidden = 128 rwa = RWA(n_letters, n_hidden, n_categories) print("n_letters:", n_letters, "n_hidden:", n_hidden, "n_categories:", n_categories) print(rwa) n_letters: 57 n_hidden: 128 n_categories: 18 RNN ( (x2u): Linear (57 -> 128) (c2g): Linear (185 -> 128) (c2q): Linear (185 -> 128) (out): Linear (128 -> 18) ) I tried the solution to a similar problem but it does not work. It should be working because i am re-initializing initial hidden state at every iterations. def train (categroy_tensor, line_tensor): hidden = rwa.init_hidden() rwa.zero_grad() output, hidden = rwa(line_tensor, hidden) loss = criterion(output, category_tensor) print("loss:" , loss) loss.backward() for p in rwa.parameters(): p.data.add_(-learning_rate, p.grad.data) return output, loss.data[0] I suspect the self.n and self.d. 
So I tried changing them to nn.Parameter, but now it complains when I try to assign a value here: self.n = (self.n * scale) + ((u * F.tanh(g)) * a_scale). What should I do to solve this problem?
st119290
Author of the RWA model here. I saw your post, and I wanted to let you know a flaw has been discovered in my code. The flaw deals with the numerical stability of the RWA model. If left uncorrected it prevents the model from forming long-term memories. Once you fix your code you may discover the issue you are having go away. Maybe so, maybe not. I just posted the corrected code on my repo (https://github.com/jostmey/rwa 56). I owe a special thanks to some dude named Alex Nichol (a.k.a. unixpickle) for finding the bug. I am re-running all my results. So far the corrected model appears to run at least as well as before
st119291
If self.n and self.d are initial states which you want to optimize as additional parameters, then they should be nn.Parameter instances and shouldn’t be assigned to during forward. If they’re just temporary variables to hold state during the forward pass, they should not be attributes of self and should be created and assigned to as ordinary local variables during forward.
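For what it's worth, here is a rough sketch of the second option, with n and d as plain local variables inside forward(); the layer sizes and structure are illustrative rather than the exact RWA configuration from the repository above:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

class RWACell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(RWACell, self).__init__()
        self.hidden_size = hidden_size
        self.x2u = nn.Linear(input_size, hidden_size)
        self.c2g = nn.Linear(input_size + hidden_size, hidden_size)
        self.c2q = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, inputs, hidden):
        # inputs: (seq_len, batch, input_size); n and d are local variables,
        # so nothing from a previous call stays alive on the module
        h = F.tanh(hidden)
        n = Variable(torch.zeros(inputs.size(1), self.hidden_size))
        d = Variable(torch.zeros(inputs.size(1), self.hidden_size))
        for i in range(inputs.size(0)):
            combined = torch.cat((inputs[i], h), 1)
            u = self.x2u(inputs[i])
            g = self.c2g(combined)
            q = self.c2q(combined)
            q_max = F.relu(q)
            scale = torch.exp(-q_max)
            a_scale = torch.exp(q - q_max)
            n = n * scale + u * F.tanh(g) * a_scale
            d = d * scale + a_scale
            h = F.tanh(n / d)
        return h

# usage sketch: rwa = RWACell(57, 128); h = rwa(line_tensor, rwa_init_hidden)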
st119292
Thank you for opening my new conversation. I have few chances to speak English, so my English is bad and maybe the way I express myself is not appropriate; I am sorry for this. I put some pictures in a folder and I use ImageFolder to get my dataset. What I would like to know is whether the dataset can be used in a net without changing it to a torch.utils.data.TensorDataset?
st119293
If you have a folder structure similar to the imagenet one (each folder represents a class, and has images in it), then you can directly use the torchvision.dataset.ImageFolder, and no need to go through the torch.utils.data.TensorDataset. Here is an example 63.
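A minimal sketch of that setup (the path is hypothetical, and the transform names follow the torchvision release of the time, where Resize was still called Scale):

import torch.utils.data
import torchvision.datasets as datasets
import torchvision.transforms as transforms

dataset = datasets.ImageFolder(
    'path/to/train',                          # hypothetical root, one sub-folder per class
    transform=transforms.Compose([
        transforms.Scale(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ]))
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

for images, labels in loader:
    pass                                      # feed `images` straight into the network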
st119294
Hi there, I am a newbie to PyTorch and I am really impressed by its speed and flexibility. Amazing work! But I have one question about memory usage. I noticed that PyTorch usually stores intermediate variables when building the graph. The memory requirement can be very intense in the case where a large network is called many times in a for loop. Below is a piece of sample code. There is only one network, AutoEncoderDecoder, and we iteratively feed data through it many times. My understanding is that PyTorch will store all intermediate results in AutoEncoderDecoder and thus requires a lot of memory. Compared with its equivalent Theano implementation, PyTorch requires more GPU memory: in Theano I can use a batch size of 10, but only 2 in PyTorch. I am wondering whether there is anything special to take care of in order to reduce the memory usage in this case. Any suggestions would be welcome! Thank you!

model = AutoEncoderDecoder()
output = Variable(torch.zeros(256, 256))
for i in range(iterations):
    output = model(output)
st119295
Hi, just to be sure I understand what you are trying to do: you have an auto-encoder model and a variable output, and you want to apply your auto-encoder iterations times to it and then backpropagate all the way through these iterations, to learn an auto-encoder that repeatedly encodes and decodes multiple times before producing the final output? Here, your complete network is the auto-encoder stacked iterations times. When you run your model, we just keep in memory what is necessary to compute the backward pass, nothing more. If you do not need to compute the backward pass, you can use the volatile option as follows:

output = Variable(torch.zeros(256, 256), volatile=True)

to tell pytorch that it does not need to keep the buffers for the backward. If you do so, calling .backward() on the output will obviously raise an error.
st119296
We never store all intermediate variables, but only the ones that are absolutely necessary to compute the backward. Everything else is freed immediately. What kind of modules/operations are you using in your network? The increased memory usage might be coming from the fact that we’re always trying to pick fast algorithms in cuDNN, even at a cost of memory usage. We’ll be trying to make it possible to affect these choices in the future.
st119297
@albanD Thanks for your suggestion! Yes, your understanding is correct. Actually I do need to compute the backward pass, so the volatile option is not applicable here. @apaszke Thanks for the info! Currently my module functions like an RNN, so only simple matrix multiplications, slicing, and nonlinearity functions are used, but they are repeated many times due to the iteration.
st119298
@qianguih I don’t know of any memory issues of these ops, they shouldn’t differ that much from what Theano needs. Are you using 0.1.10?
st119299
Yes, it is 0.1.10. I guess Theano optimizes the computation graph a lot during function compilation.
st119300
I have encountered the following error with gradient = torch.ge(inputs.grad.data, 0):

File "/Users/liangshiyu/anaconda2/lib/python2.7/site-packages/torch/tensor.py", line 331, in __sub__
    return self.sub(other)
TypeError: sub received an invalid combination of arguments - got (torch.cuda.FloatTensor), but expected one of:
 * (float value)
      didn't match because some of the arguments have invalid types: (torch.cuda.FloatTensor)
 * (torch.FloatTensor other)
      didn't match because some of the arguments have invalid types: (torch.cuda.FloatTensor)
 * (float value, torch.FloatTensor other)
st119301
Problem solved, thanks. Here is another question: how do I convert a torch.cuda.FloatTensor to a torch.FloatTensor? Thanks.
st119302
I have installed PyTorch from source using an Anaconda environment. I did setup CMAKE_PREFIX_PATH to point to anaconda’s root directory. After installation it seems that Torch dependencies are linked to system shared libraries, and not to those in anaconda3/lib/. Is this normal? More precisely, should libiomp5.so.1 be linked to [...]/anaconda3/lib/libiomp5.so.1 and/or should libgomp.so.1 be linked to [...]/anaconda3/lib/libgomp.so.1 instead of /usr/lib/x86_64-linux-gnu/libgomp.so.1? $ ldd torch/lib/libTHC.so.1 linux-vdso.so.1 => (0x00007fffeebe7000) libcudart.so.8.0 => /usr/local/cuda-8.0/targets/x86_64-linux/lib/libcudart.so.8.0 (0x00007fb1aac3a000) libcublas.so.8.0 => /usr/local/cuda-8.0/targets/x86_64-linux/lib/libcublas.so.8.0 (0x00007fb1a8289000) libTH.so.1 => /home/tudor/pytorch/torch/lib/libTH.so.1 (0x00007fb1a7c92000) libcurand.so.8.0 => /usr/local/cuda-8.0/targets/x86_64-linux/lib/libcurand.so.8.0 (0x00007fb1a3d29000) libcusparse.so.8.0 => /usr/local/cuda-8.0/targets/x86_64-linux/lib/libcusparse.so.8.0 (0x00007fb1a121a000) libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fb1a0e98000) libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fb1a0b8f000) libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fb1a0978000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fb1a05af000) /lib64/ld-linux-x86-64.so.2 (0x0000562627347000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fb1a03ab000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fb1a018d000) librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007fb19ff85000) libmkl_intel_lp64.so => not found libmkl_intel_thread.so => not found libmkl_core.so => not found libiomp5.so => not found libgomp.so.1 => /usr/lib/x86_64-linux-gnu/libgomp.so.1 (0x00007fb19fd61000)
st119303
Which libgomp will be used depends on the compiler you used to build PyTorch. If you install gcc from anaconda, it will be linked with /anaconda3/lib/libgomp.so.1, if you use a system compiler, it will be linked with a system-wide lib dir. Both ways should work ok.
st119304
Hello, I’m a newbie to PyTorch. I would like to concatenate two variables of different sizes, for the DepthConcat module of the inception network. For example:

# a: 1x2x2, b: 1x4x4
a = Variable(torch.FloatTensor([[[1, 2], [3, 4]]]))
b = Variable(torch.FloatTensor([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]]))

What I want to get from cat([a, b], 0) is:

(0 ,.,.) =
  0  0  0  0
  0  1  2  0
  0  3  4  0
  0  0  0  0

(1 ,.,.) =
  1  1  1  1
  2  2  2  2
  3  3  3  3
  4  4  4  4

I may try to use narrow() and copy_(), but I'm not sure copying into a Variable in forward() is ok.
st119305
Constant pad should do it. There’s no way to copy one variable into another one (if you take out .data and work with that, these ops won’t be registered by autograd).
st119306
Thank you! What I was looking for was constant padding. I used torch.nn.functional.pad(), which is not in the API documentation.
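For anyone landing here later, a minimal sketch of that approach for the example above, assuming F.pad zero-pads the last two dimensions given a (left, right, top, bottom) tuple:

import torch
import torch.nn.functional as F
from torch.autograd import Variable

a = Variable(torch.FloatTensor([[[1, 2], [3, 4]]]))                # 1x2x2
b = Variable(torch.FloatTensor([[[1, 1, 1, 1], [2, 2, 2, 2],
                                 [3, 3, 3, 3], [4, 4, 4, 4]]]))    # 1x4x4

a_padded = F.pad(a, (1, 1, 1, 1))    # zero-pad the last two dims to 4x4
out = torch.cat([a_padded, b], 0)    # sizes now match, so the concat works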
st119307
Hi. For my final project, I should define a Cost Function which gets 2 inputs. one of them is a 3D Tensor with size 3077 and another one has size NumberOfObjectInImage*5. Actually, My project is related to a detection problem. NumberOfObjectInImage denotes to the number of objects which are in one image. The 5 number denotes to the Bounding Box properties. And here is my code and I should say that I just have implemented the forward path of the cost.(It is based on Autograd Extension Tutorial): class MyLoss(Function): def __init__(self, S, B, l_coord, l_nobj): super(MyLoss, self).__init__() self.S = S # Number of Cell self.B = B # Number of Bouning Box self.l_coord = l_coord self.l_nobj = l_nobj def forward(self, pred_out, real_out): # pred_out: is 30*7*7 # real_out: is NumObject*5 self.save_for_backward(pred_out, real_out) po = torch.LongTensor([2]).float() sum = torch.sum pow = torch.pow sqr = torch.sqrt print(type(pred_out)) rt = real_out # Real_out pt = pred_out # Pred_out numObj = rt.size()[0] print(numObj) interval = np.linspace(0, 1, self.S + 1) cost = torch.FloatTensor([0]) for index in range(numObj): cls = rt[index,0] x = rt[index,1] y = rt[index,2] w = rt[index,3] h = rt[index,4] # Original Ground Truth box1 = (x-(w/2), y-(h/2), x+(w/2), h+(h/2)) # Select cell colS = self.indices(interval, lambda q: q > x)[0]-1 rowS = self.indices(interval, lambda q: q > y)[0]-1 # Select BBox IOU = np.ndarray(shape=(1,B)) for ind in range(B): px = pt[0, 0 + (5*ind),rowS, colS] py = pt[0, 1 + (5*ind),rowS, colS] pw = pt[0, 2 + (5*ind),rowS, colS] ph = pt[0, 3 + (5*ind),rowS, colS] box2 = (px - (pw/2), py - (ph/2), px + (pw/2), py +(ph/2)) IOU[0,ind] = bb_intersection_over_union(box1, box2) # Select Best BBoc sel = IOU.argmax() x_hat = pt[0, 0 + (5*sel),rowS, colS] y_hat = pt[0, 1 + (5*sel),rowS, colS] w_hat = pt[0, 2 + (5*sel),rowS, colS] h_hat = pt[0, 3 + (5*sel),rowS, colS] c_hat_obj = pt[0, 4 + (5*sel),rowS, colS] if sel == 0: c_hat_noobj = pt[0, 4 + (5),rowS, colS] else: c_hat_noobj = pt[0, 4 + (0),rowS, colS] p = torch.zeros(1,20).view(-1) p[int(cls)] = 1 p_hat = pt[0,10:,rowS, colS] cost1 = self.l_coord*(pow(x-x_hat, po)) + self.l_coord*(pow(y-y_hat, po)) print("cost1:", cost1) cost2 = pow(1-c_hat_obj,po) + self.l_nobj*pow(0-c_hat_noobj,po) print("cost2:", cost2) cost3 = self.l_coord*(pow(sqr(torch.FloatTensor([w]))-sqr(torch.FloatTensor([w_hat])),po)) + self.l_coord*(pow(sqr(torch.FloatTensor([h]))-sqr(torch.FloatTensor([h_hat])),po)) cost += (cost1 + cost2 + cost3) del cost1, cost2, cost3, p return V(cost) def backward(self, grad_cost): pred_out, real_out = self.saved_tensors grad_pred_out = grad_real_out = None return grad_pred_out, grad_real_out def indices(self, a, func): return [i for (i, val) in enumerate(a) if func(val)] I Would like to know this Error (The kernel appears to have died. It will restart automatically.) is related to Anaconda or related to my code. Could you please help me? Thanks
st119308
Try running the code outside of an iPython notebook. It should print the full error then
st119309
Try running this:

gdb python
# <some output printed here>
> r your_script.py
# <some more output. It will tell you that you got SIGSEGV and drop into the shell again>
> where
# <a few lines looking like `#0 THPFloatTensor...`. Paste them in a GitHub gist and put a link in this thread>
st119310
@apaszke, finally I have found the source of the error: it was the return value of the forward function. It should actually be a tensor value, not a Variable.
st119311
rnn = nn.LSTM(10, 20, 2)
input = Variable(torch.randn(5, 3, 10))
output, hn = rnn(input)

In the above example, the input to nn.LSTM should be a 3-dimensional tensor. However, for tasks such as image captioning and machine translation, the LSTM's input cannot be predetermined at the evaluation step. During the evaluation phase, the LSTM should take a 2D tensor, generate an output and feed it back at the next time step. How can I deal with this problem? I used nn.LSTM in my image captioning model.
st119312
I don’t understand what the problem is. If you only have a single time step and a single batch you can just give a 1x1x10 Variable to the LSTM.
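A rough sketch of that pattern for step-by-step decoding; the 1x1x10 inputs after the first step are random placeholders for whatever embedding your captioning model would actually feed back:

import torch
import torch.nn as nn
from torch.autograd import Variable

rnn = nn.LSTM(10, 20, 2)

inp = Variable(torch.randn(1, 1, 10))     # first step, e.g. the image feature / <start> embedding
out, hidden = rnn(inp)                    # hidden state starts at zero
for _ in range(5):
    # map `out` to a word, embed it, and build the next (1, 1, 10) input;
    # a random placeholder is used here
    inp = Variable(torch.randn(1, 1, 10))
    out, hidden = rnn(inp, hidden)        # feed the previous state back in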
st119313
Please check out the following short program that reproduces the bug I was facing: https://gist.github.com/rguthrie3/2ac45905be0cc4277361084335df87f7

Imagine in each module that self.params is just some transition matrix. The goal is to sum up the scores of each transition, where params[i][j] is the score of transitioning to i from j. Both WorkingModule and BuggyModule have a forward() function that correctly computes this score. WorkingModule does what you would expect: if you check its gradient after calling backward(), you will see 1's in the places where there was a transition, and 0's elsewhere. BuggyModule, though, doesn't backpropagate to self.params! The difference is that in this case the sequence was wrapped in an autograd.Variable, and the transition indices are accessed with .data. I understand the dangers of .data and how it might cut off your backprop, but how is it cutting off the backprop from score to params? The only way sequence is ever involved is just providing an index. In principle, score should not be cut off from params in the computation graph unless I am missing something. In addition, I think that sequence[i].data should be evaluated before being passed to the indexing function, so I am not sure how there is any difference at all as far as constructing the "score" computation graph is concerned.
st119314
It's a problem that we've fixed recently; the second module will raise an error now. We decided to roll back some of the support for LongTensor indexing, because it wasn't consistent with numpy and had these issues in autograd. Your first module indexes the Variable with two int objects, the second one uses two torch.LongTensors (we now only allow using a single torch.LongTensor as the last index). Updating to 0.1.10 should fix it.
st119315
Hi, there When I use a Function multiple time in an iteration, the CUDA memory continuously increases. It is worth noting the Function calls save_for_backward(). The problem disappears when replacing the Function with one does not call save_for_backward(). Any ideas? Sample code as below: import torch from torch.autograd import Function from torch.autograd import Variable class Identity(Function): def forward(self, input): return input def backward(self, grad_output): return grad_output class Linear(Function): def forward(self, input, weight): self.save_for_backward(input, weight) return input.mm(weight.t()) def backward(self, grad_output): input, weight = self.saved_tensors grad_input = grad_weight = None if self.needs_input_grad[0]: grad_input = grad_output.mm(weight) if self.needs_input_grad[1]: grad_weight = grad_output.t().mm(input) return grad_input, grad_weight x = Variable(torch.rand(4000, 3000).cuda(), requires_grad=True) w = Variable(torch.rand(3000, 3000).cuda(), requires_grad=True) grad_output = torch.rand(4000, 3000).cuda() lr = 0.01 for i in range(10000): # (1) cuda memory stays the same # identity = Identity() # loss1 = identity(x) # loss2 = identity(x) # (2) cuda memory continuously increase linear = Linear() loss1 = linear(x, w) loss2 = linear(x, w) loss = loss1 + loss2 loss.backward(grad_output) x.data = x.data - lr * x.grad.data x.grad.data.zero_() if i % 100 == 0: print(i)
st119316
Modules can be reused. However, when the same Module's forward() is called multiple times, it actually creates a new Function object each time; only the Parameters in it (e.g. weight in nn.Linear) are shared. Am I right? Thanks.
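A small self-contained check of that sharing behaviour with a toy nn.Linear (reusing one module applies the same weights twice, and the gradients from both uses accumulate into the single shared parameter):

import torch
import torch.nn as nn
from torch.autograd import Variable

linear = nn.Linear(4, 4)
x = Variable(torch.randn(2, 4))
y = linear(linear(x))              # the same weight Parameter is used in two forward calls
y.sum().backward()
print(linear.weight.grad)          # one gradient, accumulated over both uses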
st119317
github.com mattmacy/torchbiomed/blob/master/torchbiomed/loss.py#L32 27 result_ = torch.squeeze(result_) if input.is_cuda: result = torch.cuda.FloatTensor(result_.size()) self.target_ = torch.cuda.FloatTensor(target.size()) else: result = torch.FloatTensor(result_.size()) self.target_ = torch.FloatTensor(target.size()) result.copy_(result_) self.target_.copy_(target) target = self.target_ # print(input) intersect = torch.dot(result, target) # binary values so sum the same as sum of squares result_sum = torch.sum(result) target_sum = torch.sum(target) union = result_sum + target_sum + (2*eps) # the target volume can be empty - so we still want to # end up with a score of 1 if the result is 0/0 IoU = intersect / union print('union: {:.3f}\t intersect: {:.6f}\t target_sum: {:.0f} IoU: result_sum: {:.0f} IoU {:.7f}'.format( I’ve implemented the Dice IoU function as a loss function as suggested by the Vnet paper. I’ve copied the last couple of lines from the backward function as generated by _make_function_class_criterion and I’ve implemented a separate loss function as a wrapper around the call to forward. Since I’m just imitating what I see elsewhere there’s a few things I don’t understand about loss functions. In class Function you set __call__ = _C._FunctionBase._do_forward - presumably that just calls down in to whatever forward function that a derived class implements. Given a call down in to the forward function in the training code that returns a tensor, how does the torch know what the corresponding backward function is when you call backward() on that tensor? e.g. loss = F.nll_loss(output, target) loss.backward() My second question is what the last two lines are doing. What does the view(*repeat(… do? And what does mul_(tensor.expand_as(…) do? Last question is, does the DiceLoss class make sense? When I first started looking at adding a new loss function a few days ago it seemed overwhelming, but this seems like a rather trivial piece of code. Thanks in advance.
st119318
Yes, Function's __call__ will prepare the object, call the forward method (implemented by the user), and do some postprocessing of the results (e.g. wrap them in Variables). These two steps are when the graph is built, and the function object is saved in there, so it can be found during backward and asked to compute the derivative. *repeat(...) is just Python syntax, nothing specific to PyTorch; look for the docs in the itertools package. expand_as is necessary to make sure that the multiplied tensors are of the same size. It will fake that the object on which it was called is in fact a bigger tensor, using some tricks (no memory copy needed). Note that you can only expand along new dimensions at the front (pretend that the tensor has more dimensions than it really has), or along dims of size 1.
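A tiny illustration of that expand rule (a dimension of size 1 is expanded without any memory copy):

import torch

t = torch.Tensor([[1, 2, 3]])   # size (1, 3)
e = t.expand(4, 3)              # size (4, 3); the row is repeated with no extra memory
print(e)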
st119319
I’m currently trying to implement deep neural decision forest, however, I met some problems. It seem that the submodule’s parameters haven’t add to the model’s parametes. I wonder if it is because of the module list. Here is my definition of the model: ` class DeepNeuralDecisionForest(nn.Module): def __init__(self, p_keep_conv, p_keep_hidden, n_leaf, n_label, n_tree, n_depth): super(DeepNeuralDecisionForest, self).__init__() self.conv = nn.Sequential() self.conv.add_module('conv1', nn.Conv2d(1, 32, kernel_size=3, padding=1)) self.conv.add_module('relu1', nn.ReLU()) self.conv.add_module('pool1', nn.MaxPool2d(kernel_size=2)) self.conv.add_module('drop1', nn.Dropout(1 - p_keep_conv)) self.conv.add_module('conv2', nn.Conv2d(32, 64, kernel_size=3, padding=1)) self.conv.add_module('relu2', nn.ReLU()) self.conv.add_module('pool2', nn.MaxPool2d(kernel_size=2)) self.conv.add_module('drop2', nn.Dropout(1 - p_keep_conv)) self.conv.add_module('conv3', nn.Conv2d(64, 128, kernel_size=3, padding=1)) self.conv.add_module('relu3', nn.ReLU()) self.conv.add_module('pool3', nn.MaxPool2d(kernel_size=2)) self.conv.add_module('drop3', nn.Dropout(1 - p_keep_conv)) self._nleaf = n_leaf self._nlabel = n_label self._ntree = n_tree self._ndepth = n_depth self._batchsize = 100 self.treelayers = [] self.pi_e = [] for i in xrange(self._ntree): treelayer = nn.Sequential() treelayer.add_module('sub_linear1', nn.Linear(1152, 625)) treelayer.add_module('sub_relu', nn.ReLU()) treelayer.add_module('sub_drop1', nn.Dropout(1 - p_keep_hidden)) treelayer.add_module('sub_linear2', nn.Linear(625, self._nleaf)) treelayer.add_module('sub_sigmoid', nn.Sigmoid()) pi = Parameter(self.init_pi()) self.treelayers.append(treelayer) self.pi_e.append(nn.Softmax()(pi)) def init_pi(self): return torch.ones(self._nleaf, self._nlabel)/float(self._nlabel) def init_weights(self, shape): return torch.randn(shape) * 0.01 def init_prob_weights(self, shape, minval=-5, maxval=5): return torch.Tensor(shape[0], shape[1]).uniform_(minval, maxval) def compute_mu(self, flat_decision_p_e): n_batch = self._batchsize batch_0_indices = torch.range(0, n_batch * self._nleaf - 1, self._nleaf).unsqueeze(1).repeat(1, self._nleaf).long() in_repeat = self._nleaf / 2 out_repeat = n_batch batch_complement_indices = torch.LongTensor( np.array([[0] * in_repeat, [n_batch * self._nleaf] * in_repeat] * out_repeat).reshape(n_batch, self._nleaf)) # First define the routing probabilistics d for root nodes mu_e = [] indices_var = Variable((batch_0_indices + batch_complement_indices).view(-1)) indices_var = indices_var.cuda() #indices_var = indices_var.typeas(flat_decision_p_e[0]) # iterate over each tree for i, flat_decision_p in enumerate(flat_decision_p_e): mu = torch.gather(flat_decision_p, 0, indices_var).view(n_batch, self._nleaf) mu_e.append(mu) # from the scond layer to the last layer, we make the decison nodes for d in xrange(1, self._ndepth + 1): indices = torch.range(2 ** d, 2 ** (d + 1) - 1) - 1 tile_indices = indices.unsqueeze(1).repeat(1, 2 ** (self._ndepth - d + 1)).view(1, -1) batch_indices = batch_0_indices + tile_indices.repeat(n_batch, 1).long() in_repeat = in_repeat / 2 out_repeat = out_repeat * 2 # Again define the indices that picks d and 1-d for the nodes batch_complement_indices = torch.LongTensor( np.array([[0] * in_repeat, [n_batch * self._nleaf] * in_repeat] * out_repeat).reshape(n_batch, self._nleaf)) mu_e_update = [] indices_var = Variable((batch_indices + batch_complement_indices).view(-1)) indices_var = indices_var.cuda() for mu, flat_decision_p in 
zip(mu_e, flat_decision_p_e): mu = torch.mul(mu, torch.gather(flat_decision_p, 0, indices_var).view( n_batch, self._nleaf)) mu_e_update.append(mu) mu_e = mu_e_update return mu_e def compute_py_x(self, mu_e): py_x_e = [] n_batch = self._batchsize for mu, leaf_p in zip(mu_e, self.pi_e): py_x_tree = mu.unsqueeze(2).repeat(1, 1, self._nlabel).mul(leaf_p.unsqueeze(0).repeat(n_batch, 1, 1)).mean(1) py_x_e.append(py_x_tree) py_x_e = torch.cat(py_x_e, 1) py_x = py_x_e.mean(1).squeeze() return py_x def forward(self, x): feat = self.conv.forward(x) feat = feat.view(-1, 1152) self._batchsize = x.size(0) #py_x = self.fc.forward(feat) flat_decision_p_e = [] for i in xrange(len(self.treelayers)): decision_p = self.treelayers[i].forward(feat) decision_p_comp = 1 - decision_p decision_p_pack = torch.cat((decision_p, decision_p_comp), 1) flat_decision_p = decision_p_pack.view(-1) flat_decision_p_e.append(flat_decision_p) mu_e = self.compute_mu(flat_decision_p_e) py_x = self.compute_py_x(mu_e)` return py_x
st119320
I met a similar but not same problem. I defined a new module as follow: class RecursiveNN(nn.Module): def __init__(self, word_embedding, hidden_dim): super(RecursiveNN, self).__init__() self.word_dim = word_embedding.embeddings.size(1) self.hidden_dim = hidden_dim self.embedding = nn.Embedding(word_embedding.embeddings.size(0), self.word_dim) self.embedding.weight = nn.Parameter(word_embedding.embeddings) self.word2hidden = nn.Linear(self.word_dim, self.hidden_dim) self.hidden2hidden = nn.Linear(2 * self.hidden_dim, self.hidden_dim) def forward(self, node): if not node.val is None: node.calculate_result = self.word2hidden(self.embedding(Variable(torch.LongTensor([node.word_id])))) return node.calculate_result else: assert len(node.children) == 2 node.calculate_result = self.hidden2hidden(torch.cat((node.children[0].calculate_result, node.children[1].calculate_result), 1)) return node.calculate_result And, this module is used by another module whose definition is shown as below: class RootAlign(nn.Module): def __init__(self, word_embedding, config): super(RootAlign, self).__init__() self.rnn = RecursiveNN(word_embedding, config['hidden_dim']) self.linear = nn.Linear(config['hidden_dim'] * 2, config['relation_num']) def forward(self, p_tree, h_tree): p_tree.postorder_traverse(self.rnn) h_tree.postorder_traverse(self.rnn) out = F.softmax(self.linear(F.sigmoid(torch.cat((p_tree.calculate_result, h_tree.calculate_result), 1)))) return out What I wonder is how to add the parameters of RecursiveNN into RootAlign so that their parameters can be trained together. I would be very grateful if you could help me
st119321
The code seems correct; can't you see the parameters from RecursiveNN in .parameters()? Also, I'd recommend against caching node.calculate_result if you don't need it. It will prevent PyTorch from freeing the graph that created node.calculate_result until it is overwritten or manually deleted.
st119322
Thank you very much for your reply. After checking RootAlign.parameters(), I'd say that RootAlign does have the parameters of RecursiveNN. Yet I don't understand why the model can't be trained well. The train function is as follows:

for _data in snli.train:
    p_tree = _data['p_tree']
    h_tree = _data['h_tree']
    target = Variable(torch.LongTensor([_data['label']]))
    optimizer.zero_grad()
    output = root_align(p_tree, h_tree)
    loss = F.nll_loss(output, target)
    loss.backward()
    optimizer.step()
    train_loss += loss

The whole project can be obtained here. The train function can be found in "train.py". By the way, I don't know how to cache node.calculate_result. Can you please show me an example? Thank you again for your help.
st119323
There's one problem with your training loop, but it shouldn't affect correctness. Don't do train_loss += loss, because you'll be keeping the graphs for each iteration around. Do train_loss += loss.data[0], so that you only accumulate the value, not the Variable that records each iteration. The project is quite large, so I'm afraid I won't be able to help you; there's probably a bug somewhere. Maybe this example could help you somehow.
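A self-contained sketch of that accumulation pattern, with a toy model standing in for the real tree network and random data standing in for snli.train:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable

model = nn.Linear(10, 3)
optimizer = optim.SGD(model.parameters(), lr=0.1)

train_loss = 0.0
for _ in range(100):                          # stand-in for iterating over real data
    data = Variable(torch.randn(8, 10))
    target = Variable((torch.rand(8) * 3).long())
    optimizer.zero_grad()
    loss = F.nll_loss(F.log_softmax(model(data)), target)
    loss.backward()
    optimizer.step()
    train_loss += loss.data[0]                # accumulate the value, not the Variable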
st119324
Now, I know where the problem is. In RootAlign.forward(), the F.softmax should be replaced by F.log_softmax as I use F.nll_loss to calculate losses of each sample. Anyway, thanks for your help.
st119325
Is it possible to use nn.Embedding with half precision models? The half precision code below crashes in backward.

# Long: works
embedding = nn.Embedding(embedding_dim=5, num_embeddings=10)
embedding.cuda()
x_device = torch.LongTensor([1, 2, 0, 1]).cuda()
xv = Variable(x_device)
o = embedding(xv)
t = torch.zeros(o.size()).cuda()
o.backward(t)

# Half: crashes
embedding = nn.Embedding(embedding_dim=5, num_embeddings=10)
embedding.cuda().half()
x_device = torch.LongTensor([1, 2, 0, 1]).cuda()
xv = Variable(x_device)
o = embedding(xv)
print(o)
t = torch.zeros(o.size()).cuda().half()
o.backward(t)
st119326
I think there is a bug in the PyTorch CUDA code. I think this line should instead read if grad_output.is_cuda:
st119327
Is there a way to implement your own padding algorithm? For example use reflect padding instead of the seemingly solely available one: zero padding. Numpy offers it for example: https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html 57
st119328
It’s going to be included in the core soon: https://github.com/pytorch/pytorch/pull/856/files#diff-c66288b9ce36978f377a1a20d32ec53dR537 314
st119329
That would be a welcome addition. Do you know if there is a way to apply a Gaussian convolution individually onto each feature map? Something to use after nearest neighbor upsampling, ex use in building laplacian pyramids. It could certainly be done in a hack-y way where you cut up variables, but that doesn’t seem like it would be very efficient.
st119330
Numpy has apply_along_axis, but it doesn't look like PyTorch has that, unless I'm missing something. I'm looking for a way to apply a custom (maybe learned, too) convolution onto each feature map individually. So if you have a (64, 32, 128, 128) tensor, you want, say, a (5, 5) filter to be applied to all 64*32 (128, 128) images. Maybe I'm not thinking in arrays right: what if I reshape it to (64*32, 1, 128, 128) and do convolutions the usual way, then reshape back? It seems like it should work; would reshaping it naively break things, though?
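For what it's worth, a quick sketch of that reshape idea; the kernel here is just a placeholder box filter standing in for a real Gaussian, applied as an explicit weight through the functional conv2d:

import torch
import torch.nn.functional as F
from torch.autograd import Variable

x = Variable(torch.randn(64, 32, 128, 128))
n, c, h, w = x.size()

kernel = Variable(torch.ones(1, 1, 5, 5) / 25.0)   # placeholder for a real 5x5 Gaussian

flat = x.view(n * c, 1, h, w)                      # every feature map becomes its own image
blurred = F.conv2d(flat, kernel, padding=2)        # one shared 5x5 filter per map
out = blurred.view(n, c, h, w)                     # fold back to (N, C, H, W)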
st119331
Is there a way to use these new padding options directly in conv2d somehow? Is that integration planned? These functions also aren’t featured in the docs. It’s not exactly clear how one would go about using this padding prior to conv2d, can you give an example? The one in the tests doesn’t appear to be informative. Does it return an enlarged tensor based on the requested padding?
st119332
No. Conv2d supports only basic zero-padding. If you need anything more complicated use F.pad. I don’t understand the problem, can’t you apply F.pad to the input, and pass the output to Conv2d?
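A rough sketch of that pattern, assuming the reflection mode from the pull request above is available in your build (otherwise the same structure works with the default constant mode):

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

conv = nn.Conv2d(3, 16, kernel_size=3, padding=0)   # no built-in zero padding
x = Variable(torch.randn(1, 3, 32, 32))

x_padded = F.pad(x, (1, 1, 1, 1), mode='reflect')   # pad by hand instead
out = conv(x_padded)                                 # output keeps the 32x32 spatial size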
st119333
When I install PyTorch on my Mac Pro (cylinder), OS: El Capitan, installing from source with Python 2.7, I have encountered the following problem. For every CUDA file being compiled, the build prints the same error:

In file included from :332:
In file included from :15:
In file included from /usr/local/cuda/include/cuda_runtime.h:116:
/usr/local/cuda/include/common_functions.h:65:10: fatal error: 'string.h' file not found
#include <string.h>
         ^
1 error generated.

(this block is repeated for every .cu file), followed by a CMake error for each of the corresponding object files, for example:

CMake Error at THC_generated_THCBlas.cu.o.cmake:207 (message):
  Error generating /Users/liangshiyu/Desktop/pytorch-master/torch/lib/build/THC/CMakeFiles/THC.dir//./THC_generated_THCBlas.cu.o
make[2]: *** [CMakeFiles/THC.dir/THC_generated_THCBlas.cu.o] Error 1

and the same for THCSleep, THCReduceApplyUtils, THCTensor, THCTensorCopy, THCTensorTypeUtils, THCStorageCopy, THCTensorConv, THCTensorMathBlas, THCTensorScatterGather, THCTensorMathCompareTByte, THCTensorMathMagma, THCTensorMathPairwise, THCTensorIndex, THCHalf, THCTensorTopK, THCTensorSort, THCStorage, THCTensorMathReduce, THCTensorSortByte, THCTensorMath2, THCTensorMathScan, THCTensorMath and THCTensorRandom, ending with:

make[1]: *** [CMakeFiles/THC.dir/all] Error 2
make: *** [all] Error 2
st119334
Are you using conda? It requires setting an additional env variable on OS X. Please check the README.
st119335
As we mention in the README, on OS X you need to set the env flag: export MACOSX_DEPLOYMENT_TARGET=10.9
st119336
I’m trying out PyTorch by comparing a model with its Theano equivalent, and I noticed:

The PyTorch model runs much faster than the Theano version, ~0.9s vs 1.6s per batch
The PyTorch model however converges much slower than the Theano version, error rate ~83% vs ~62% after 250 batches.

The PyTorch model is defined by:

class model_5_1(nn.Module):
    def __init__(self, batchsize=None, channel=1, imsize=(256, 256), Nclass=16, kernel_size=3, border_mode='same'):
        super(model_5_1, self).__init__()
        self.batchsize = batchsize
        self.channel = channel
        self.imsize = imsize
        self.Nclass = Nclass
        self.kernel_size = kernel_size
        self.border_mode = border_mode
        if border_mode == 'same':
            pad = kernel_size // 2
        else:
            pad = 0
        self.conv0 = nn.Conv2d(channel, 32, kernel_size, padding=pad)
        self.conv1 = nn.Conv2d(32, 64, kernel_size, padding=pad)
        self.conv2 = nn.Conv2d(96, 128, kernel_size, padding=pad)
        self.conv3 = nn.Conv2d(128, 128, kernel_size, padding=pad)
        self.conv4 = nn.Conv2d(128, 128, kernel_size, padding=pad)
        self.conv5 = nn.Conv2d(256, 512, kernel_size, padding=pad)
        self.bn0 = nn.BatchNorm2d(128)
        self.bn1 = nn.BatchNorm2d(256)
        self.bn2 = nn.BatchNorm2d(512)
        self.rnn0 = nn.LSTM(input_size=512, hidden_size=100, batch_first=True, bidirectional=True)
        self.rnn1 = nn.LSTM(input_size=200, hidden_size=100, batch_first=True, bidirectional=True)
        self.fc0 = nn.Linear(200, Nclass)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv0(x)), (2, 2))
        x1 = F.relu(self.conv1(x))
        x = tr.cat((x, x1), 1)
        x = F.max_pool2d(x, (2, 2))
        x = F.relu(self.bn0(self.conv2(x)))
        x = F.max_pool2d(F.relu(self.conv3(x)), (2, 2))
        x1 = F.relu(self.conv4(x))
        x = tr.cat((x, x1), 1)
        x = self.bn1(x)
        x = F.max_pool2d(F.relu(self.conv5(x)), (4, 4))
        x = self.bn2(x)
        x = x.view(x.size(0), x.size(1), x.size(2) * x.size(3))
        x = tr.transpose(x, 1, 2)
        x, _ = self.rnn0(x)
        x = F.tanh(x)
        x, _ = self.rnn1(x)
        x = x[:, -1, :]
        x = F.tanh(x)
        x = F.softmax(self.fc0(x))
        return x

I use cross_entropy for the loss and Adadelta for the optimizer. For the Theano version I use categorical_crossentropy for the loss and Adadelta with the same parameters for the optimizer. Does anyone have any thoughts on this problem?
st119337
You don’t need to use softmax in the forward operation. Maybe it is the cause of the problem.
st119338
Precisely, cross_entropy already has a log_softmax internally, so you should remove the F.softmax from your model
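To make this concrete, here is a minimal sketch of the fix; the training-step variables (model, inputs, targets) are hypothetical, only the layer name fc0 comes from the model above:

# in model_5_1.forward, return raw scores instead of probabilities:
x = self.fc0(x)   # no F.softmax here
return x

# training step: F.cross_entropy applies log_softmax + NLL loss internally
loss = F.cross_entropy(model(inputs), targets)
loss.backward()

If you need actual probabilities at inference time, apply F.softmax only there, outside the loss computation.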
st119339
Thanks @kim.seonghyeon @fmassa, the problem indeed arises from the duplicate softmax op. Now the convergence of the PyTorch model is pretty close to its Theano counterpart.
st119340
Hi, My question is: if I have a 2D tensor and a 1D LongTensor that stores a list of indices, I want to select one entry from each row of the 2D tensor based on the 1D LongTensor. How can I achieve that in PyTorch? For example, if a = [[1,2,3],[4,5,6],[7,8,9]] and b = [2,1,0], then I would like to get [3, 5, 7]. Also, if I then torch.sum([3, 5, 7]) and take the derivative of it, will the partial derivatives be calculated correctly? Thanks a lot!
st119341
This should do it: x.gather(1, b.unsqueeze(1)) The gradient will be correct, as long as all values in b are unique.
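A minimal sketch with the numbers from the question, just for illustration:

import torch
from torch.autograd import Variable

a = Variable(torch.Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), requires_grad=True)
b = torch.LongTensor([2, 1, 0])

picked = a.gather(1, Variable(b.unsqueeze(1)))  # column vector [[3], [5], [7]]
picked.sum().backward()
print(a.grad)  # 1 at each selected position, 0 elsewhere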
st119342
The values in b shouldn’t have to be unique, since they’re indexing into separate rows?
st119343
Yes, of course. You’re right. They should be unique within a row, but it’s not a problem here.
st119344
If I create a Function and apply it by calling its forward method directly, the computed gradient seems independent of my backward() method and comes out correct, even though my backward() is wrong. For example, with this code:

class Cube(Function):
    def forward(self, input):
        self.save_for_backward(input)
        return input * input * input

    def backward(self, grad_output):
        input, = self.saved_tensors
        # wrong backward function:
        return grad_output

cube = Cube()

input = Variable(torch.ones(2, 2).double(), requires_grad=True)
output = cube(input).sum()
output.backward()
print(input.grad)  # gives [[1,1],[1,1]], what my backward does

input.grad.data.zero_()
output = cube.forward(input).sum()
output.backward()
print(input.grad)  # gives [[3,3],[3,3]], the good gradient ?!
st119345
You can’t apply a function by directly calling its forward method; what that ends up doing is just calling that forward method on the Variable objects, as if it were the forward of a Module, and building a standard autograd graph without ever using the function’s backward implementation.
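For completeness, a sketch of what a working version could look like; it is the same setup as the question with only the backward formula changed:

class Cube(Function):
    def forward(self, input):
        self.save_for_backward(input)
        return input * input * input

    def backward(self, grad_output):
        input, = self.saved_tensors
        # d/dx x^3 = 3 * x^2, chained with the incoming gradient
        return 3 * input * input * grad_output

input = Variable(torch.ones(2, 2).double(), requires_grad=True)
output = Cube()(input).sum()  # calling the instance, not .forward(), uses the custom backward
output.backward()
print(input.grad)             # [[3, 3], [3, 3]]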
st119346
I am trying to write a custom CUDA kernel for PyTorch for a specific computation. Is there any documentation available for writing custom CUDA kernels for PyTorch?
st119347
Hi all, I just wondered if there’s an easy way to apply, for example, an addition to each row of a 2D tensor using the values of a 1D tensor. In TensorFlow, it is as simple as:

result = t1 + t2  # t1 is (10, 10) and t2 is (1, 10)

I know this might sound trivial/easy and there might already be something for it, but I’ve looked into the docs and couldn’t find anything. I could create a 2D matrix by repeating the 1D tensor, or iterate over the 2D matrix, but I thought there might be something more optimized. Thanks a lot!
st119348
Hi, You can do: result = t1 + t2.expand_as(t1) See http://pytorch.org/docs/tensors.html#torch.Tensor.expand_as for more details.
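A tiny sketch of what this does, using the sizes from the question:

import torch

t1 = torch.rand(10, 10)
t2 = torch.rand(1, 10)

result = t1 + t2.expand_as(t1)  # t2 is viewed as (10, 10) without copying its data
print(result.size())            # torch.Size([10, 10])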
st119349
albanD: You can do: result = t1 + t2.expand_as(t1) Great, works like a charm! I should have looked harder into the docs >.>
st119350
A possible implementation is

def kmax(self, x, k):
    return x.sort(dim=3)[0][:, :, :, -k:]

However, this does not preserve the information about the relative positions of the selected values.
st119351
If you remove the [0] part you’ll get a tuple of (sorted_values, sorted_indices), so it should be possible to get the position. Also, you might want to use topk instead of sort.
st119352
Could you provide some demonstration code? I cannot find any differentiable API for this kind of index selection. index_select does not seem suitable for this. @apaszke
st119353
Then I think I don’t understand the problem. What’s the exact formula for k-pooling and what’s the problem with your implementation?
st119354
I found the exact solution. The key API is torch.gather:

import torch

def kmax_pooling(x, dim, k):
    index = x.topk(k, dim=dim)[1].sort(dim=dim)[0]
    return x.gather(dim, index)

x = torch.rand(4, 5, 6, 10)
y = kmax_pooling(x, 3, 5)
print(x[0, 0])
print(y[0, 0])

Output:

 0.2762  0.3788  0.5708  0.3251  0.0568  0.2483  0.3930  0.1249  0.1874  0.1113
 0.9230  0.7428  0.0957  0.2301  0.6187  0.8898  0.3007  0.2653  0.5313  0.1032
 0.6376  0.9639  0.6584  0.1502  0.0250  0.5792  0.9283  0.1783  0.9545  0.1681
 0.8456  0.6135  0.2860  0.9366  0.5178  0.0113  0.4864  0.9308  0.3005  0.5403
 0.3280  0.8755  0.2290  0.0899  0.9093  0.6971  0.1557  0.2412  0.7991  0.9169
 0.5389  0.4603  0.7291  0.4070  0.0113  0.3571  0.3860  0.3354  0.4081  0.0209
[torch.FloatTensor of size 6x10]

 0.2762  0.3788  0.5708  0.3251  0.3930
 0.9230  0.7428  0.6187  0.8898  0.5313
 0.6376  0.9639  0.6584  0.9283  0.9545
 0.8456  0.6135  0.9366  0.9308  0.5403
 0.8755  0.9093  0.6971  0.7991  0.9169
 0.5389  0.4603  0.7291  0.4070  0.4081
[torch.FloatTensor of size 6x5]

Can also be used on autograd.Variable.
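As a sanity check that the gradient only flows through the selected entries, a small sketch (not from the original post):

from torch.autograd import Variable

x = Variable(torch.rand(2, 3, 4, 10), requires_grad=True)
y = kmax_pooling(x, 3, 5)
y.sum().backward()
# x.grad is 1 exactly at the k selected positions of each row, 0 everywhere else
print(x.grad.data.sum())  # equals the number of pooled elements, 2 * 3 * 4 * 5 = 120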
st119355
Hi, When I use the PyTorch compiled from source (pytorch-src), the CPU usage is much higher than with the PyTorch installed directly through conda (pytorch-conda). In addition, when using pytorch-src, GPU usage is lower than with pytorch-conda. I just followed the instructions on GitHub to install from source. I’m using Python 2.7. Is there anything wrong with my installation? Best, Yikang
st119356
What OS are you on? And by higher CPU usage do you mean that PyTorch is able to use more cores effectively and run faster?
st119357
I’m on Ubuntu 14.04. It uses more cores, but is neither faster nor more efficient, I’m afraid. When I change back to the PyTorch installed through conda, it seems OK.
st119358
Maybe your pytorch that was compiled from source is not using cudnn? And maybe the compiled version is linking against OpenBLAS instead of MKL?
st119359
Hi Massa, Thank you for your help. I’m not very familiar with Linux. If you don’t mind, could you tell me how to check them? Best, Yikang
st119360
You can check if cudnn is being used by typing torch.backends.cudnn.is_acceptable(torch.cuda.FloatTensor(1)) in your Python interpreter. To check whether the library was linked against OpenBLAS or MKL, type ldd libTH.so, where libTH.so is the library file that was compiled.
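Roughly, the Python-side check looks like this (the ldd check above is run from a shell against the libTH.so you built, whose path depends on your install):

import torch

x = torch.cuda.FloatTensor(1)
# True means cuDNN was found and is usable for this tensor type
print(torch.backends.cudnn.is_acceptable(x))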
st119361
Check this thread; the discussion provides some details on BLAS and Torch, plus some additional settings and flags you can look into. Maybe run the scripts I provided there so we can see if it is indeed a BLAS-related issue? Also, posting the compilation logs could help if you indeed see performance differences between the two installs.
st119362
Hi, I recently started learning deep learning and was experimenting with the language model example provided by PyTorch here - https://github.com/pytorch/examples/tree/master/word_language_model

In the generate.py script here - https://github.com/pytorch/examples/blob/master/word_language_model/generate.py#L65 - I don’t understand how we get the output word from all the word weights by sampling from a multinomial distribution.

output, hidden = model(input, hidden)
word_weights = output.squeeze().data.div(args.temperature).exp().cpu()
word_idx = torch.multinomial(word_weights, 1)[0]
input.data.fill_(word_idx)
word = corpus.dictionary.idx2word[word_idx]

If I interpret word_weights as the probabilities of all words, then shouldn’t we pick the word with the highest probability (thinking about softmax here)? I could not understand, logically, what the benefit/reason is behind sampling from a multinomial distribution instead. I played around a bit, tried to sample 2 word indices and print their corresponding word_weights, and noticed that we don’t necessarily take the word with the higher weight (due to sampling), but I don’t understand the reason behind it. I understand that this is not necessarily a PyTorch question, but I would appreciate it if someone could share the reasoning behind the sampling.
st119363
A guess would be that it makes the output richer. Sticking to the most probable word would restrict the model to always using the most common words, while if you sample from the softmax distribution, it should end up using words approximately as often as they appear in natural language (so it will sometimes insert some more complex ones too).
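A standalone sketch of what the temperature-scaled sampling does (the numbers are made up):

import torch

logits = torch.Tensor([2.0, 1.0, 0.1])        # unnormalized scores for 3 words
temperature = 0.8

word_weights = logits.div(temperature).exp()      # same transform as in generate.py
greedy_idx = torch.max(word_weights, 0)[1]        # always picks word 0
sampled_idx = torch.multinomial(word_weights, 1)  # usually word 0, sometimes 1 or 2
print(greedy_idx, sampled_idx)

Lower temperatures sharpen the distribution, so sampling behaves almost like argmax; higher temperatures flatten it and the generated text gets more diverse (and noisier).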
st119364
I would like to partially reset a Variable, for a specific batch index. At the moment I am using an in-place operation, which works “fine”.

def selective_zero(s, new):
    for b, reset in enumerate(new):
        if reset:
            for state_layer in s:
                state_layer.data[b].zero_()

selective_zero(state, y[t + 1] != y[t])

To complete this, I was thinking of registering a hook in order to zero the corresponding gradient as well. I am now wondering whether returning a new state Variable multiplied by an appropriate mask would instead have the combined effect I am actually after. This, though, would involve a larger amount of computation. Hmm… Something like this:

for state_layer in s:
    state_layer.data[b].zero_()
    state_layer.register_hook(lambda grad: grad[b].zero_())
st119365
Are you sure you want to do state_layer.data[b].zero_() and not state_layer[b].zero_()? In the first case, the operation is not registered with autograd and can have unexpected behaviour. If what you want is to reset the content of the tensor so you can use it again independently of how you were using it before, you should repack it in a new Variable; otherwise it will still carry the history of the previous usage.
st119366
That’s why I am registering a backward hook: to kill the history for a given batch index. Repackaging the whole Variable would kill all the history, which is unacceptable. I am now wondering whether

state_layer.data[b].zero_()
state_layer.register_hook(lambda grad: grad[b].zero_())

is equivalent to your state_layer[b].zero_().
st119367
If you have the indices of the layers you want to reset (indices as a LongTensor), I think the cleanest way would be state_layer.index_fill_(0, indices, 0). That would remove one for loop. I guess if state is a Python list, you cannot avoid that outer loop. If state is actually a single Variable, you can do state.index_fill_(1, indices, 0), which would be the most efficient, I think. Whether or not the two formulations are equivalent, I don’t know enough to say yes for sure; @apaszke would have to confirm.
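For reference, a tiny sketch of index_fill_ on a plain tensor; the shapes and names are made up:

import torch

state = torch.rand(3, 5, 7)          # e.g. (layers, batch, hidden)
indices = torch.LongTensor([1, 3])   # batch entries to reset

state.index_fill_(1, indices, 0)     # zeroes state[:, 1, :] and state[:, 3, :] in place
print(state[:, 1, :].abs().sum())    # 0.0

Whether doing this in-place on the Variable itself keeps or kills the history for those batch entries the way you want is exactly the point that would need confirmation above.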