id
stringlengths
3
8
text
stringlengths
1
115k
st115768
Thank you! I wasn’t understanding the dimensions and didn’t realize that I had put 4 input channels.
st115769
So I’m building an autoencoder structure using GRUs, and it currently works with 1 layer, but I am trying to make it work with many layers. Currently, my input is a tensor of size (no. of batches * length of sequence * no. of features). The first two are variable, but for my data, I have only 3 features (I know this is small). So for the sake of an example, let’s say my input is (100 * 250 * 3.) Looking at the seq2seq translation tutorial, the common step in NLP is to create an embedding layer in the encoder, which takes the input size and outputs in the hidden size, this can then be fed through the GRU for any number of layers as the input is always the same as the output. However, I’m not sure an embedding is particularly right for my application as my inputs are just 3 numeric time series, so I’m left in a situation where my initial input to the GRU is (100 * 250 * 3), it goes through the GRU and comes out as a (100 * 250 * hidden size) tensor, which I can no longer put back through the GRU because it is expecting a (i * j * 3) tensor. Any ideas on how I should approach this (code below). def forward(self, input_value, hidden): recursive_input = input_value for i in range(self.n_layers): recursive_input, hidden = self.enc(recursive_input, hidden) for i in range(self.n_layers): recursive_input = F.relu(recursive_input) recursive_input, hidden = self.dec(recursive_input, hidden) seq_output = self.out(recursive_input) return (seq_output, recursive_input, hidden, None) whereby self.enc = nn.GRU(input_size, hidden_size, batch_first=True) self.dec = nn.GRU(hidden_size, hidden_size, batch_first=True) self.out = nn.Linear(hidden_size, output_size)
st115770
you could do a reprojection, either by adding a Linear layer that maps the hidden_size back to 3, or by doing AvgPooling for example.
st115771
I toyed with the idea of reprojection for a while. I’m trying it with setting num_layers in GRU() for the encoder first, and if that fails, I’ll just using an additional linear layer after each encoder GRU layer. Thanks! def forward(self, input_value, hidden): recursive_input = input_value recursive_input, hidden = self.enc(recursive_input, hidden) hidden = hidden[-1].view(1, hidden.size(1), hidden.size(2)) for i in range(self.n_layers): recursive_input = F.relu(recursive_input) recursive_input, hidden = self.dec(recursive_input, hidden) seq_output = self.out(recursive_input) return (seq_output, recursive_input, hidden, None)
st115772
Hi, I’ve been thinking about implementing factorization machines algorithms (the basic one, or more advanced such as in libraries like LightFM and LibFFM) in pytorch. Does someone knows if it was already done somehow? if not, do you think the speed-up will be wothwhile? Thanks!
st115773
it is definitely not done yet (I am trying to keep tabs on user-implemented pytorch repositories)
st115774
Thanks! I’ll put it in my todo list Is the user implemented models list is available somewhere?
st115775
Hi there, I recently started using PyTorch (more familiar with Keras) and here is 1st my attempt at Factorization Machine: GitHub mzaradzki/factorization-machine-for-prediction 135 Factorization Machine for regression and classification - mzaradzki/factorization-machine-for-prediction Remark ; I implemented it using the O(k.N) formulation found in Steffen Rendle paper
st115776
If you’re interested, I have also implemented a factorization machine in pytorch – I cythonized the forward and backward passes, and so it’s relatively fast. Definitely a work-in-progress still! GitHub jmhessel/fmpytorch 107 fmpytorch - A PyTorch implementation of a Factorization Machine module in cython.
st115777
I saved a model using save_state_dict on a machine with 4 GPU’s and I was using the device with id 3. Later, I tried to load the model in a machine with 2 GPU’s, which means id=3 does not work. This throws an error cuda runtime error (10) : invalid device ordinal at torch/csrc/cuda/Module.cpp:80 Is this a bug? I believe we should be able to save and load models in different machines. We’re often sharing machines in labs and use the one which is available at that point in time! Thanks!
st115778
Hi, Personally, I always save my models on cpu to be able to load them easily anywhere, and put them on gpu later if needed. However, there is an option in torch.load to “remap storages to be loaded on a different device”. See this post explaining it: Loading weights for CPU model while trained on GPU And the doc: http://pytorch.org/docs/master/torch.html?highlight=load#torch.load 209 Quoting from doc: torch.load('tensors.pt') # Load all tensors onto the CPU torch.load('tensors.pt', map_location=lambda storage, loc: storage) # Map tensors from GPU 1 to GPU 0 torch.load('tensors.pt', map_location={'cuda:1':'cuda:0'})
st115779
I did: F.relu(np.ones(3)) but it threw an error: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/brandomiranda/miniconda3/envs/hbf_env/lib/python3.6/site-packages/torch/nn/functional.py", line 463, in relu return _functions.thnn.Threshold.apply(input, 0, 0, inplace) File "/Users/brandomiranda/miniconda3/envs/hbf_env/lib/python3.6/site-packages/torch/nn/_functions/thnn/auto.py", line 126, in forward ctx._backend = type2backend[type(input)] File "/Users/brandomiranda/miniconda3/envs/hbf_env/lib/python3.6/site-packages/torch/_thnn/__init__.py", line 15, in __getitem__ return self.backends[name].load() KeyError: <class 'numpy.ndarray'> why? do torch ops not process numpy stuff innately?
st115780
F.relu operates on a Variables containing a Tensor object and not on a numpy ndarray. If you want to use torch Functions (such as F.relu), convert your numpy ndarray to a torch Tensor as: np_arr = np.array([0,1,2,3]) tensor_arr = torch.from_numpy(np_arr) tensor_var = Variable(tensor_arr) F.relu(tensor_var)
st115781
yea but now I need to wrap F.relu and change types and check types etc seems unnecessary, but why would it need to be like this?
st115782
that’s how the library works. We dont process numpy arrays without conversion into torch Tensors. numpy arrays are not even defined for the GPU.
st115783
Hello to everybody, I’m new at PyTorch and I have a question about moving the computations on GPU. Suppose we have two different operations that involves different tensors, for example something like this: z1 = x * y # first operation z2 = x + y # second operation where x and y are two tensors in the same device. My question is: will the GPU calculate both z1 and z2 in parallel? If not, how can I force the GPU to performs these two operations at the same time? Thanks in advance!
st115784
it doesn’t do both computations at the same time. There is no exposed mechanism to do it parallely. (there is an advanced mechanism using CUDA streams that allows to do this in pytorch, but it is too error-prone for most users)
st115785
Hi all, I have a computational graph x -> y -> loss. I want to compute d_loss/d_y, but NOT d_loss/d_x In theano I’d use disconnected_grad 54. How can I do this in PyTorch? I’ve heard you can somehow use the register_backward_hook method, but I have not found an example of this working. I’ve attached an attempt below, but it seems that the backward hook function, which is supposed to block the gradient computation, is never called. import torch import numpy as np """ We have the following equations: x = 3 y = 4*x loss = (v2-10)**2 This should yield d_loss_d_y == 2*(3*4-10) == 4 d_loss_d_x == d_loss_d_y*4 == 16 Now - we ONLY want to calcuate d_loss_d_y. We want to block the gradient backpropagation so that we don't bother calculating d_loss_d_x. """ x = torch.autograd.Variable(torch.from_numpy(np.array([3.])), requires_grad=True) class MyOp(torch.nn.Module): def __call__(self, x): return x*4 op = MyOp() def my_hook(mod, in_grad, out_grad): print 'Backward Hook Called (THIS NEVER HAPPENS)' return None # (Should stop further backpropagation?) op.register_backward_hook(my_hook) y = op(x) intermediate_grads = {} y.register_hook(lambda grad: intermediate_grads.setdefault(y, grad)) loss = ((y - 10.) ** 2).sum() loss.backward() assert np.array_equal(intermediate_grads[y].data.numpy(), [4.]) # The following should raise some kind of exception .. because d_loss_d_x shouldn't be computed assert not np.array_equal(x.grad.data.numpy(), [16.])
st115786
Use torch.autograd.grad 106 and don’t call backward() on loss >>> x = torch.autograd.Variable(torch.FloatTensor([3.0]), requires_grad=True) >>> y = 4*x >>> loss = (y - 10)**2 >>> torch.autograd.grad(loss, y) (Variable containing: 4 [torch.FloatTensor of size 1] ,) >>> x.grad == None True
st115787
Ah, so it appears that you’re both kind of right. Grad does seem to calculate only the needed gradient (leaving other variables .grad as None). But still, to avoid ever-growing computation with larger graphs, you need detach'. The following snippet demonstrates the need for detach: import time import torch from torch.autograd import grad torch.manual_seed(1234) h = torch.autograd.Variable(torch.zeros(10, 100)) wt = torch.nn.Linear(100, 100) last_time = time.time() for i in xrange(1000): h = torch.tanh(wt(h.detach())) # Maintains a constant speed # h = torch.tanh(wt(h)) # Gets slower and slower loss = (h**2).sum() (dl_dh, ) = grad(loss, (h, )) if (i+1)%100==0: this_time = time.time() print 'Iterations {} to {}: {:.3g}s'.format(i-100, i, this_time - last_time) last_time = this_time assert 103.32 < dl_dh.abs().sum().data.numpy()[0] < 103.33 When h.detach() is used, it shows that the rate of computation stays roughly fixed: Iterations 0 to 100: 0.0367s Iterations 100 to 200: 0.032s Iterations 200 to 300: 0.0321s Iterations 300 to 400: 0.032s Iterations 400 to 500: 0.0319s Iterations 500 to 600: 0.032s Iterations 600 to 700: 0.0321s Iterations 700 to 800: 0.0429s Iterations 800 to 900: 0.0328s Iterations 900 to 1000: 0.032s Whereas when it is not used, it slows down: Iterations 0 to 100: 0.0736s Iterations 100 to 200: 0.158s Iterations 200 to 300: 0.279s Iterations 300 to 400: 0.367s Iterations 400 to 500: 0.37s Iterations 500 to 600: 0.467s Iterations 600 to 700: 0.581s Iterations 700 to 800: 0.717s Iterations 800 to 900: 0.74s Iterations 900 to 1000: 0.793s What I don’t know is what all this computation is doing (I guess just some overhead due to the ever-growing graph).
st115788
I would like to train my Autoencoder in a special way: First only the decoding part (encoding fixed) and when it has finished I want to train the encoding part (decoding fixed). I set model.fc1.weight.requires_grad = False and train the decoder (enocder is only one layer). Everything is fine until it finishes training the decoder. But when I reset the parameters for p in model.parameters(): p.requires_grad = False model.fc1.weight.requires_grad = True and define a new optimizer to train my encoder, I instantly get NaNs in my model. Still after testing different things, I cannot figure out why. I attached my coda rewritten as a minimum-working example. import numpy as np import numpy.random as rnd import matplotlib.pyplot as plt import torch import torch.nn as nn import torch.utils.data as data_utils import torch.optim as optim from torch.autograd import Variable ########################################################### # Define model ########################################################### class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.fc1 = nn.Linear(100, 80, bias=False) self.fc2 = nn.Linear(80, 100) self.fc3 = nn.Linear(100, 100) self.fc4 = nn.Linear(100, 100) self.fc5 = nn.Linear(100, 100) self.rl = nn.LeakyReLU(negative_slope=0.2) def forward(self, x): x = self.fc1(x) x = self.rl(self.fc2(x)) x = self.rl(self.fc3(x)) x = self.rl(self.fc4(x)) x = self.fc5(x) return x ########################################################### # Train Recovery ########################################################### def train(epochs): epoch = 1 while epoch <= epochs: # Train on train set train_loss = 0 for batch_idx, (data, _) in enumerate(train_loader): data = Variable(data.type(torch.FloatTensor).cuda()) optimizer.zero_grad() output = model(data) loss = criterion(output, data) train_loss += loss loss.backward() optimizer.step() train_loss /= len_trainset train_error.append(train_loss.data[0]) # Eval validation set val_loss = 0 val_loss_l0 = 0 for data, _ in val_loader: data = Variable(data.type(torch.FloatTensor)).cuda() output = model(data) val_loss += criterion(output, data) val_loss /= len_valset val_error.append(val_loss.data[0]) if epoch % 10 == 0: print('Train Epoch: {} \tLoss: {:.6f}'.format(epoch, train_loss.data[0])) print(' Test Epoch: {} \tLoss: {:.6f}\n'.format(epoch, val_loss.data[0])) epoch += 1 ########################################################### # Main ########################################################### if __name__ == '__main__': train_error = [] val_error = [] len_trainset = 25000 len_valset = 5000 torch.manual_seed(0) rnd.seed(0) def sparse_data(N, k, num): X = np.zeros((N, num)) X[0:k,:] = rnd.normal(0, 1, size=(k, num)) idx_1 = rnd.sample(X.shape).argsort(axis=0) idx_2 = np.tile(np.arange(X.shape[1]), (X.shape[0], 1)) return np.transpose(X[idx_1, idx_2]) # Prepare data kwargs = {'num_workers': 1, 'pin_memory': True} X_train = sparse_data(100, 20, len_trainset) X_val = sparse_data(100, 20, len_valset) Strain = data_utils.TensorDataset(torch.from_numpy(X_train), torch.from_numpy(X_train)) Sval = data_utils.TensorDataset(torch.from_numpy(X_val), torch.from_numpy(X_val)) train_loader = data_utils.DataLoader(Strain, batch_size=128, shuffle=True, **kwargs) val_loader = data_utils.DataLoader(Sval, batch_size=128, shuffle=False, **kwargs) model = Net().cuda() model.train() # Train Decoding Part model.fc1.weight.requires_grad = False criterion = nn.MSELoss(size_average=False).cuda() optimizer = optim.SGD(filter(lambda p: p.requires_grad, 
model.parameters()), lr=0.001) train(150) # Train Complete Net for p in model.parameters(): p.requires_grad = False model.fc1.weight.requires_grad = True optimizer = optim.SGD(filter(lambda p: p.requires_grad, model.parameters()), lr=0.001) train(150)
st115789
The detailed problem is as follows: I have a mixture of Labelled Sample Group and Unlabeled Sample Group. For some special reason, I have to mix those samples in a same mini-batch to train a classification network, with CrossEntropyLoss. So I need to ignore these unlabeled samples when computing the CrossEntropyLoss. I tried to assign a pseudo label to these unlabeled samples and set the option weight in CrossEntropyLoss 2 to be (zero) corresponding to the pseudo label , but found that it wouldn’t satisfy my demand. First the output turns to be C+1 vector. Second and most importantly, though the loss on the pseudo class is ignored, if the prediction of an unlabeled data have large values on other indexes, it still generate large loss. I need those samples contribute exactly 0 to the CrossEntropyLoss. So will someone kindly help me with this issue? Thx a lot.
st115790
Define a custom loss. I had an example where I wanted to treat different samples in batch differently. I’m assuming you want them to go through the same forward prop i.e. forward() function in the model file. Write your data loader such that you know the positions of the labelled samples and unlabelled samples. This won’t be hard. Then at the time of your loss calculation, simply re-write a custom version of CrossEntropyLoss based on Pytorch docs and for the unlabelled samples you can neglect them by not adding them to the equation. Writing custom loss functions is not that hard so don’t worry
st115791
Yes, I need the unlabeled samples to go through the model.forward() and generate an unsupervised loss together with the labeled samples in the mini batch. Thank you for the suggestion. I will try it.
st115792
The pretrained resnet34 works well. However, I am confused by the downsample dim in the basic block of the layer2. The original subnet is written as follows. (layer2): Sequential ( (0): BasicBlock ( (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True) (relu): ReLU (inplace) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True) (downsample): Sequential ( (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True) ) ) Why the input dim of Conv2d in the downsample layer is 64, not 128? May some one tell me the truth? Thanks a lot!
st115793
Not entirely sure but taking a lot shot. It’s possible this is happening because of skip connects? As opposed to simple feed forward nature of networks like VGG. Will have to read up though.
st115794
Hi, I was using torch.smm for cuda sparse matrix multiplication a few weeks back, after I move to the 0.2.0. the same code will complain, RuntimeError: WARNING: Sparse Cuda Tensor op sspaddmm is not implemented at /pytorch/torch/lib/THCS/generic/THCSTensorMath.cu:156 are the old sparse kernels removed from 0.2.0. Or API has been changed? Thanks.
st115795
Given a container model = nn.Sequential( nn.Conv2d(1,20,5), nn.ReLU(), nn.Conv2d(20,64,5), nn.ReLU() nn.Conv2d(64,128,5), nn.ReLU() nn.Conv2d(128,128,5), nn.ReLU() ) I would like to extract some of the features with indexing. feats = model[:2] TypeError: ‘<’ not supported between instances of ‘slice’ and ‘int’ The current best solution I’m guessing is to do this: feats = nn.Sequential( *(model[i] for i in range(2)) ) Will the former solution be possible later or is there an alternative solution?
st115796
The way to extract is not really to index in PyTorch as far as I know. Your model is only a series of operations, not the values stored in it during forward prop. So, you need to delete the last layer - model = models.resnet152(pretrained=True) modules = list(model.children())[:-1] # delete the last fc layer. model = nn.Sequential(*modules) Now you have a model with the same features but the last layer removed. Now, if you do model(x) your forward prop will give you the features. Similarly, if you want to extract output with 2 layers removed, just change that line to model.children())[:-2] and it will remove the last 2 layers. Hope this helps!
st115797
Coming from Torch7 I am missing logical operations such as the any() and all() functions to check whether a Boolean Tensor contains either only zeros or ones (e.g. after another logical operation): github.com torch/torch7/blob/master/doc/maths.md#torchanya 25 <a name="torch.maths.dok"></a> # Math Functions # Torch provides MATLAB-like functions for manipulating [`Tensor`](tensor.md) objects. Functions fall into several types of categories: * [Constructors](#torch.construction.dok) like [`zeros`](#torch.zeros), [`ones`](#torch.ones); * Extractors like [`diag`](#torch.diag) and [`triu`](#torch.triu); * [Element-wise](#torch.elementwise.dok) mathematical operations like [`abs`](#torch.abs) and [`pow`](#torch.pow); * [BLAS](#torch.basicoperations.dok) operations; * [Column or row-wise operations](#torch.columnwise.dok) like [`sum`](#torch.sum) and [`max`](#torch.max); * [Matrix-wide operations](#torch.matrixwide.dok) like [`trace`](#torch.trace) and [`norm`](#torch.norm); * [Convolution and cross-correlation](#torch.conv.dok) operations like [`conv2`](#torch.conv2); * [Basic linear algebra operations](#torch.linalg.dok) like [`eig`](#torch.eig); * [Logical operations](#torch.logical.dok) on `Tensor`s. By default, all operations allocate a new `Tensor` to return the result. However, all functions also support passing the target `Tensor`(s) as the first argument(s), in which case the target `Tensor`(s) will be resized accordingly and filled with result. This property is especially useful when one wants have tight control over when memory is allocated. The *Torch* package adopts the same concept, so that calling a function directly on the `Tensor` itself using an object-oriented syntax is equivalent to passing the `Tensor` as the optional resulting `Tensor`. This file has been truncated. show original I understand I could probably do something like a = torch.rand(3,3) torch.sum(torch.lt 38(a,1)) / torch.numel(a) and check whether this is 1 or 0 (result is a byte tensor so this should be integer division), but it feels rather inconvenient. Is there some equivalent in PyTorch and if not, would it be possible to add these functions?
st115798
thanks for the super quick response. That’s exactly what I need. Sorry if I didn’t look hard enough and my question was perhaps trivial, but I don’t seem to be able to find it in the documentation neither under tensor, nor where all the other comparison ops are listed in math: http://pytorch.org/docs/master/torch.html#comparison-ops 912
st115799
it’s not in the docs. we’ll add it (I recently got to know that it’s exposed but we missed the docs on them).
st115800
Hi, I’ve got a small question regarding fine tuning a model i.e. How can I download a pre-trained model like VGG and then use it to serve as the base of any new layers built on top of it. In Caffe there was a ‘model zoo’, does such a thing exist in PyTorch? If not, how do we go about it?
st115801
this section will help you (yes we have a big modelzoo) http://pytorch.org/docs/torchvision/models.html 928 Also, this thread How to perform finetuning in Pytorch? Can anyone tell me how to do finetuning in pytorch? Suppose, I have loaded the Resnet 18 pretrained model. Now I want to finetune it on my own dataset which contain say 10 classes. How to remove the last output layer and change to as per my requirement?
st115802
Hey thanks, I went through that thread and got a pretty good idea of how to fine-tune stuff, however is there a way to manually remove a layer of the pre-trained network? From what I’ve learnt, even if I set the required_grad in the layers I don’t want in my graph, they still will have the pre-trained weights in them. So essentially I want to do something like this(in pseudo-code): ` vgg = models.vgg16(pretrained=True) remove the last two pooling layers vgg.layer1 = nn.Conv2d(…dilation=2) ` Any advice?
st115803
Hey! Not sure if you’re still stuck on this. But I just wrote a short tutorial on fine tuning resnet in pytorch. Here - https://github.com/Spandan-Madan/Pytorch_fine_tuning_Tutorial 580 Hope this helps!
st115804
Hi, That’s a nice tutorial. However, finetuning in PyTorch is about understanding what a computational graph does. So after reading the docs, it is clear that in order to finetune the gradients must not be backpropagated back to the pre-trained weights. So if you’re finetuning off say vgg you can do something like this: vgg_base = models.vgg16(pretrained=True) net.feat = nn.Sequential(*list(vgg_base.features.children())) net.clf = nn.Linear(in_feat_1, out_feat_1) net.op = nn.Linear(out_feat_1, out_feat_2) for params in net.feat.parameters(): params.requires_grad = False optim = optim.SGD(chain(net.clf.parameters(), net.op.parameters()), lr=lr, momentum=0.9) I hope that adding this to your tutorial may help.
st115805
Usually, it’s a matter of choice in fine tuning to decide how many layers are frozen. Most people tend to freeze most layers because it slows the system down. Ideally, it’s best if layers are not frozen. Which is why I left it like that on purpose!
st115806
Suppose we have @Tudor_Berariu code in Manually feeding trainable parameters to the optimizer : import torch import torch.optim as optim from torch.autograd import Variable w = Variable(torch.randn(3, 5), requires_grad=True) b = Variable(torch.randn(3), requires_grad=True) x = Variable(torch.randn(5)) optimizer = optim.SGD([w,b], lr=0.01) optimizer.zero_grad() y = torch.mv(w, x) + b y.backward(torch.randn(3)) optimizer.step() Is there an automated way to find the total number of trainable parameters used to construct y? In the above the total number of trainable parameters used to construct y will be the sum of the total number of trainable of parameters in w and the total number of trainable parameters in b i.e. 3*5 + 3 = 18.
st115807
I tried this myself. My conclusion is that when you wrap parameters directly as Variables with requires_grad=True you cannot always distinguish between model parameters and inputs. Sometimes you might need to compute gradients for other reasons like training a contractive autoencoder or searching for adversarial examples. Therefore using nn.Parameter and isinstance(var, nn.Parameter) to check variables seems to be a better approach. There are some properties of variables like creator or previous_functions that you might exploit to go back through the computational graph, but I’m not sure it’s a robust approach. Here’s an example that works in some cases, but it fails for convolutional layers for example. import torch import torch.nn as nn import torch.optim as optim from torch.nn import Parameter from torch.autograd import Variable x1 = Variable(torch.rand(10, 7)) x2 = Variable(torch.rand(10, 9)) l1 = nn.Linear(7, 5) l2 = nn.Linear(9, 5) l3 = nn.Linear(10, 5) y = l3(torch.cat([l1(x1), l2(x2)], 1)) def get_all_params(var, all_params): if isinstance(var, Parameter): all_params[id(var)] = var.nelement() elif hasattr(var, "creator") and var.creator is not None: if var.creator.previous_functions is not None: for j in var.creator.previous_functions: get_all_params(j[0], all_params) elif hasattr(var, "previous_functions"): for j in var.previous_functions: get_all_params(j[0], all_params) all_params = {} get_all_params(y, all_params) print(sum(all_params.values()))
st115808
Hi everyone, I have a short and possibly really stupid question regarding the official time sequence example 13 I don’t understand why c_t is stored as the output of the LSTMCell? Comparing with elementary introduction to LSTM cells, for example http://colah.github.io/posts/2015-08-Understanding-LSTMs/ 2, it seems to me that h_t corresponds to the output of the cell? Thank you very much!
st115809
Many thanks for your reply! Doesn’t this contradict the docs - see example (and formulae which suggests that c_t is cell state)? http://pytorch.org/docs/nn.html?highlight=lstmcell#torch.nn.LSTMCell 5
st115810
sorry for misleading you earlier. h_t is the output of the cell, c_t is the internal state of the LSTMCell. The example is wrong. We’ll fix it.
st115811
I want to implement a residual network, and I see that they work best if you start with an initial negative bias for the skip-connections (for example b = -1, -3, … ). My skip connections are 1x1 convolutions (since I need them for resizing) and I want to somehow initialize the biases of these layers with a negative value, for example: self.skip_connection = nn.Conv2d(in_channels=3 , out_channels=16, kernel_size=1, stride=2, padding=0, BIAS_INITIALIZER= -3) This does not work, since the BIAS_INITIALIZER part is taken from tensorflow, but how can I do that here?
st115812
You can access it as: BIAS_INIT = -3 self.skip_connection.bias.data.fill_(BIAS_INIT)
st115813
@vabh Thank you for your answer, can I place this in the init method of the network?
st115814
Yes, you can do this there. You can probably do something like: for m in self.modules(): if isinstance(m, nn.Conv2d) and m.kernel_size == 1: # bias init code For general weight initialisation methods, have a look at: http://pytorch.org/docs/master/nn.html#torch-nn-init 158
st115815
Looking at Yunjey’s example here 121, the net is saved as a .pkl file. I have read on forums here with people trying to access models that are saved with .pth extension. Does it count if they are saved as .pkl files? If it does, what is the correct way to load a saved net? Also, is there a way to import a pytorch model into c++?
st115816
Hi! pkl is python pickled file according to this 532 while pth is a saved pytorch tensor according to this. When I saved the result as pkl, it doesn’t seem to be as the same result from training.
st115817
I have been working on a project which runs the same VGG-based CNN on a common-dataset (CIFAR-10), and unfortunately have not really been able to match the performance of other frameworks with PyTorch. Chainer 1 reaches 0.78 after 4min 16s, CNTK reaches 0.77 after 2min 48s. However, PyTorch only averages 0.73 and after nearly 6 minutes. For example, here is a PyTorch extract: def create_symbol(): class SymbolModule(nn.Module): def __init__(self): super(SymbolModule, self).__init__() self.conv1 = nn.Conv2d(3, 50, kernel_size=(3, 3), padding=(1, 1)) self.conv2 = nn.Conv2d(50, 50, kernel_size=(3, 3), padding=(1, 1)) self.conv3 = nn.Conv2d(50, 100, kernel_size=(3, 3), padding=(1, 1)) self.conv4 = nn.Conv2d(100, 100, kernel_size=(3, 3), padding=(1, 1)) # feature map size is 8*8 by pooling self.fc1 = nn.Linear(100*8*8, 512) self.fc2 = nn.Linear(512, N_CLASSES) def forward(self, x): x = F.relu(self.conv2(F.relu(self.conv1(x)))) x = F.max_pool2d(x, kernel_size=(2, 2), stride=(2, 2)) x = F.dropout(x, 0.25) x = F.relu(self.conv4(F.relu(self.conv3(x)))) x = F.max_pool2d(x, kernel_size=(2, 2), stride=(2, 2)) x = F.dropout(x, 0.25) x = x.view(-1, 100*8*8) # reshape Variable x = F.dropout(F.relu(self.fc1(x)), 0.5) x = self.fc2(x) return F.log_softmax(x) return SymbolModule() def init_model(m): # Implementation of momentum: # v = \rho * v + g \\ # p = p - lr * v opt = optim.SGD(m.parameters(), lr=LR, momentum=MOMENTUM) return opt Chainer extract: class SymbolModule(chainer.Chain): def __init__(self): super(SymbolModule, self).__init__( conv1=L.Convolution2D(3, 50, ksize=(3,3), pad=(1,1)), conv2=L.Convolution2D(50, 50, ksize=(3,3), pad=(1,1)), conv3=L.Convolution2D(50, 100, ksize=(3,3), pad=(1,1)), conv4=L.Convolution2D(100, 100, ksize=(3,3), pad=(1,1)), # feature map size is 8*8 by pooling fc1=L.Linear(100*8*8, 512), fc2=L.Linear(512, N_CLASSES), ) def __call__(self, x): h = F.relu(self.conv2(F.relu(self.conv1(x)))) h = F.max_pooling_2d(h, ksize=(2,2), stride=(2,2)) h = F.dropout(h, 0.25) h = F.relu(self.conv4(F.relu(self.conv3(h)))) h = F.max_pooling_2d(h, ksize=(2,2), stride=(2,2)) h = F.dropout(h, 0.25) h = F.dropout(F.relu(self.fc1(h)), 0.5) return self.fc2(h) def init_model(m): optimizer = optimizers.MomentumSGD(lr=LR, momentum=MOMENTUM) optimizer.setup(m) return optimizer Since I’m comparing the same mathematical operations (albeit on a randomly initialised matrix) I believe I should get the same accuracies when averaged across runs (roughly). So I’m not sure what else there is to set with PyTorch. I have experimented a bit with different weight initialisations (other frameworks use glorot/xavier uniform) and checking gradient-clipping params but this hasn’t made any real difference. For example: self.conv1 = nn.Conv2d(3, 50, kernel_size=3, padding=1) init.xavier_uniform(self.conv1.weight, gain=np.sqrt(2.0)) init.constant(self.conv1.bias, 0.1)
st115818
Well first off you should do max pooling before relu as they are mathematically equivalent either order but it’s nearly 40% computationally more efficient to do maxpool before relu. So look like this: F.relu(F.max_pooling2d(nn.conv2d(x))) Are u purposely only maxpooling twice or do want 4times on 4conv2d layers?
st115819
There are also a lot of potential differences in the code you haven’t published, the training/validation loops, dataset handling, image preprocessing, etc. You’ve got a log_softmax on the output of your pytorch model. I assume you are using NLLLoss and not CrossEntropy (which includes the log softmax like Chainer’s SoftmaxCrossEntropy)?
st115820
Thanks for the feedback. I have moved relu after max_pool2d and indeed was applying softmax on the output twice (didn’t realise CrossEntropyLoss() includes it). The updated script is here 1 however the script still gets 0.72 accuracy and takes around 6 minutes, so weirdly not much has chained. I was purposefully only max-pooling twice because my feature-maps are already quite small (and do the same thing with Chainer). Ross, the data is the same for all the frameworks (it comes from a common function, instead of using the library’s versions of it). If it’s easier here are the relevant code extracts: def create_symbol(): class SymbolModule(nn.Module): def __init__(self): super(SymbolModule, self).__init__() self.conv1 = nn.Conv2d(3, 50, kernel_size=(3, 3), padding=(1, 1)) self.conv2 = nn.Conv2d(50, 50, kernel_size=(3, 3), padding=(1, 1)) self.conv3 = nn.Conv2d(50, 100, kernel_size=(3, 3), padding=(1, 1)) self.conv4 = nn.Conv2d(100, 100, kernel_size=(3, 3), padding=(1, 1)) # feature map size is 8*8 by pooling self.fc1 = nn.Linear(100*8*8, 512) self.fc2 = nn.Linear(512, N_CLASSES) def forward(self, x): x = self.conv2(F.relu(self.conv1(x))) # Apply relu after max-pool x = F.relu(F.max_pool2d(x, kernel_size=(2, 2), stride=(2, 2))) x = F.dropout(x, 0.25) x = self.conv4(F.relu(self.conv3(x))) # Apply relu after max-pool x = F.relu(F.max_pool2d(x, kernel_size=(2, 2), stride=(2, 2))) x = F.dropout(x, 0.25) x = x.view(-1, 100*8*8) # reshape Variable x = F.dropout(F.relu(self.fc1(x)), 0.5) return self.fc2(x) return SymbolModule() def init_model(m): # Implementation of momentum: # v = \rho * v + g \\ # p = p - lr * v opt = optim.SGD(m.parameters(), lr=LR, momentum=MOMENTUM) # Combines softmax output with negative log-likelihood criterion = nn.CrossEntropyLoss() return opt, criterion x_train, x_test, y_train, y_test = cifar_for_library(channel_first=True) y_train = y_train.astype(np.int64) y_test = y_test.astype(np.int64) sym = create_symbol() sym.cuda() # CUDA! optimizer, criterion = init_model(sym) sym.train() for j in range(EPOCHS): for data, target in yield_mb(x_train, y_train, BATCHSIZE, shuffle=True): # Get samples data = Variable(torch.FloatTensor(data).cuda()) target = Variable(torch.LongTensor(target).cuda()) # Init optimizer.zero_grad() # Forwards output = sym(data) # Loss loss = criterion(output, target) # Back-prop loss.backward() optimizer.step()
st115821
Hi I am new to PyTorch I wanted to know how concatenate Variable of different size like 1x96x96x128 and 1x96x96x256 to single variable something like which is used squeezenet.pls help ? ty
st115822
adithya1: Hi I am new to PyTorch I wanted to know how concatenate Variable of different size like 1x96x96x128 and 1x96x96x256 to single variable something like which is used squeezenet.pls help ? ty Context would help. Why do you want concatenate them? What are these variables? Torch.cat(tensor1,tensor2,dimension) let’s you concatenate two tensors along a specified dimension. Here’s a link on how to use it - https://github.com/torch/torch7/blob/master/doc/maths.md 373. Look up for torch.cat on that page!
st115823
These are the feature maps but anyway it worked , your link helped me out man thanks.
st115824
My network accepts multi input data. The data has the same image content, but the resolution is different, like the first input contains the 320*320 image, the second one contains the 160*160 image, the third one contains the 80*80 image. How could I get the batch data from the dataloader?
st115825
You can actually define your own dataset by inheriting the torch.utils.data.Dataset class. So maybe in your __getitem__ function, you can do return image320, image160, image80. For more details, you can look at this tutorial: http://pytorch.org/tutorials/beginner/data_loading_tutorial.html 562
st115826
Have any of you successfully used SELU with either a residual or skip connection network architecture? If so, what are your techniques for setting up and implementing it / what changes to the architecture did you have to make to get it working and converging without exploding the gradients?
st115827
Same question here. I was able to get similar performance to batchnorm with SELU but not better.
st115828
Yes, SELU definitely runs faster. I would like to learn the use case where it substantially improved the score of the experiment.
st115829
Hey everyone, I get some behavior for which I am not sure whether it’s a bug or not. I have the following piece of code for dataset_no, (inputs, targets) in enumerate(trainloader): inputs = Variable(inputs.cuda(), requires_grad=False) targets = Variable(targets.cuda(), requires_grad=False) # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = model(inputs) objective = criterion(outputs, targets) \ + model.readout.l1() * gamma_readout \ + model.group_sparsity() * gamma_hidden objective.backward() optimizer.step() The first loop runs fast, but the second time, the lines inputs = Variable(inputs.cuda(), requires_grad=False) targets = Variable(targets.cuda(), requires_grad=False) are execute it takes forever. The network I use is NetWork ( (core): Sequential ( (conv0): Conv3d(1, 32, kernel_size=(5, 5, 5), stride=(1, 1, 1)) (f0): CustomNonLinearity ( ) (conv1): Conv3d(32, 32, kernel_size=(3, 3, 3), stride=(1, 1, 1), dilation=(2, 2, 2)) (f1): CustomNonLinearity ( ) (conv2): Conv3d(32, 32, kernel_size=(3, 3, 3), stride=(1, 1, 1), dilation=(2, 2, 2)) ) (readout): FullyConnected (32 x 24 x 52 -> 452) (nonlinearity): CustomNonLinearity ( ) ) Does anyone see an obvious mistake? Thanks a lot.
st115830
In the code of macroTH_TENSOR_DIM_APPLY2(TYPE1, TENSOR1, TYPE2, TENSOR2, DIMENSION, CODE) I am confusing about the part of follows: if(TH_TENSOR_DIM_APPLY_counter[TH_TENSOR_DIM_APPLY_i] == TENSOR1->size[TH_TENSOR_DIM_APPLY_i]) \ \ if(TH_TENSOR_DIM_APPLY_i == TENSOR1->nDimension-1) \ { \ TH_TENSOR_DIM_APPLY_hasFinished = 1; \ break; \ } \ else \ { \ TENSOR1##_data -= TH_TENSOR_DIM_APPLY_counter[TH_TENSOR_DIM_APPLY_i]*TENSOR1->stride[TH_TENSOR_DIM_APPLY_i]; \ TENSOR2##_data -= TH_TENSOR_DIM_APPLY_counter[TH_TENSOR_DIM_APPLY_i]*TENSOR2->stride[TH_TENSOR_DIM_APPLY_i]; \ TH_TENSOR_DIM_APPLY_counter[TH_TENSOR_DIM_APPLY_i] = 0; \ } \ } \ else \ break; \ Like the tensor: [[1,1,1;1,1,1],[1,1,1;1,1,1]](2x3x2), use the THTensor_(cumsum) operation in dimension 1(the size is 3), after finish the dimension 0,this code will set TH_TENSOR_DIM_APPLY_counter[TH_TENSOR_DIM_APPLY_i] = 0; ??? And When we operate in dimenison 2, the dimension 0 need to be run again ? (this will increase the steps of loop) Might I misundestand something. thank you advance. Extra: (is this code can be changed as follows?) if(TH_TENSOR_DIM_APPLY_counter[TH_TENSOR_DIM_APPLY_i]>TENSOR1->size[TH_TENSOR_DIM_APPLY_i]) \ continue;\ TH_TENSOR_DIM_APPLY_counter[TH_TENSOR_DIM_APPLY_i]++; \ TENSOR1##_data += TENSOR1->stride[TH_TENSOR_DIM_APPLY_i]; \ TENSOR2##_data += TENSOR2->stride[TH_TENSOR_DIM_APPLY_i]; \ \ if(TH_TENSOR_DIM_APPLY_counter[TH_TENSOR_DIM_APPLY_i] == TENSOR1->size[TH_TENSOR_DIM_APPLY_i]) \ { \ if(TH_TENSOR_DIM_APPLY_i == TENSOR1->nDimension-1) \ { \ TH_TENSOR_DIM_APPLY_hasFinished = 1; \ break; \ } \ else \ { \ TENSOR1##_data -= TH_TENSOR_DIM_APPLY_counter[TH_TENSOR_DIM_APPLY_i]*TENSOR1->stride[TH_TENSOR_DIM_APPLY_i]; \ TENSOR2##_data -= TH_TENSOR_DIM_APPLY_counter[TH_TENSOR_DIM_APPLY_i]*TENSOR2->stride[TH_TENSOR_DIM_APPLY_i]; \ TH_TENSOR_DIM_APPLY_counter[TH_TENSOR_DIM_APPLY_i] =TENSOR1->size[TH_TENSOR_DIM_APPLY_i]+1 ; \ } \ } \ else \ break; \
st115831
After rebuild, I found two method can out the same answer (test on cumsum), however, there is no “speed increase”.
st115832
Hello all, I have a pretrained network with a 28x28 input(MNIST) image and 10 outputs. I want to get the gradient of one of those outputs wrt the input.(how the image should change to increase the score of one digit). If I input a single image with a variable that has requires_grad = True, I can do output[digit].backward(retain_variables=True) to do this. So, slicing the output of a network does not affect the backward prop. However, if I input a batch of images, slice the input by x=input[k,:,:,:] and do output[k,digit].backward(retain_variables=True), x.grad remains empty(it is still None). Could you explain me why this happens and is there a way to do this without getting the grad from the whole input? Thank you in advance!
st115833
I would slice the input and then wrap it in Variable, because autograd will only return gradients to the user for leaf nodes.
st115834
Yes, but in this case I have to pass the new Variable from the network once more, right?
st115835
I think I had the same confusion as the OP. Basically, I thought the slicing operation didn’t compute gradients, but x.grad (in the OP’s example) was None because intermediate variables don’t store gradients and not because slicing doesn’t compute gradients.
st115836
FYI, Just saw this blog post comparing pytorch and tensorflow ( I didnt write it): awni.github.io PyTorch or TensorFlow? 52 This is a guide to the main differences I’ve found between PyTorch and TensorFlow. This post is intended to be useful for anyone considering starting a new project or making the switch from one deep learning framework to another. The focus is on... PyTorch is my favorite deep learning framework! Looking forward to seeing more improvements.
st115837
Currently, my input is generated by dataloader, and each batch is not easy to split into several small batches. I was wondering if there is any way to directly feed several batches from dataloader instead of splitting each batch. Any comment would be appreciated. Thank you! update: just got an idea: just delete the nn.parallel.scatter function def data_parallel(module, inputs, device_ids, output_device=None): if not device_ids: return module(input) if output_device is None: output_device = device_ids[0] replicas = nn.parallel.replicate(module, device_ids) replicas = replicas[:len(inputs)] outputs = nn.parallel.parallel_apply(replicas, inputs) return nn.parallel.gather(outputs, output_device)
st115838
Hi, I have a 3d Tensor M of shape bs x N x N. I also have 2d index tensor I of size bs x N. The Matrix M represents distances between 2 sets of points of size N, and does so for bs batches. Matrix I is a bipartite matching between the 2 sets, where I[b][i] = j means that for batch b, i is matched with j. I need to fetch the distances of for all point matchings, and do so across all the batches. A way to do it -badly- would be something like this : total_distance = 0 for batch in range(bs): index = I[batch] # matching for the batch : 1 x N vector distances_for_this_batch = M[batch, torch.arange(0,N), index].sum() total_distance += distances_for_this_batch Is there a better/faster way to proceed ? Thanks, Lucas
st115839
Hi, I was trying to implement a paper originally implemented in Caffe: https://github.com/leongatys/DeepTextures/tree/master/DeepImageSynthesis 21 For some reason, I am not being able to replicate in Pytorch exactly the same results. I have checked my code several times and I found no code error. If someone else has implemented this paper on Pytorch, could he/she share the code for it? Otherwise, here I give you my code, in case someone had some feedback to give me. Thank you a lot for your time! from __future__ import print_function import torch import torch.nn as nn from torch.autograd import Variable import torch.optim as optim import numpy as np from PIL import Image import matplotlib.pyplot as plt import torchvision.transforms as transforms import torchvision.models as models import copy use_cuda = torch.cuda.is_available() dtype = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor imsize = 512 if use_cuda else 128 prep = transforms.Compose([ transforms.Scale(imsize), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[1, 1, 1]) ]) undo = transforms.Compose([ transforms.Normalize(mean=[-0.485, -0.456, -0.406], std=[1, 1, 1]), transforms.ToPILImage(), # change to PIL image ]) def image_loader(image_name): image = Image.open(image_name) image = Variable(prep(image)) image = image.unsqueeze(0) return image def imshow(tensor, title=None, preprocess=True): image = tensor.clone().cpu() image = image.view(3, imsize, imsize) image = undo(image) if preprocess else transforms.ToPILImage()(image) plt.figure() plt.imshow(image) if title is not None: plt.title(title) plt.pause(0.005) def get_min_max(image): min= [0,0,0] max= [0,0,0] for k in range(3): A = np.squeeze(np.asarray(image[0,k,:,:].data.cpu().numpy())) min[k]=np.amin(A) max[k]=np.amax(A) min [k]= min[k].astype(type('float', (float,), {})) max [k]= max[k].astype(type('float', (float,), {})) return min,max unloader = transforms.ToPILImage() # reconvert into PIL image plt.ion() texture = image_loader("images/pebbles.jpg").type(dtype) noise = Variable(torch.randn(texture.data.size())).type(dtype) min,max=get_min_max(texture) imshow(texture.data, title='Texture') imshow(noise.data, title='Noise', preprocess=False) cnn = models.vgg19(pretrained=True).features cnn = cnn.eval() class GramMatrix(nn.Module): def forward(self, input): # get and define parameters b, N, h, w = input.size() M = h * w const = 4 # compute Gram matrix and normalize F = input.view(b, N, M) G = torch.bmm(F, F.transpose(1, 2)) G.div_(M * N * const) return G class textureLoss(nn.Module): def __init__(self, target, weight): super(textureLoss, self).__init__() self.target = target.detach() * weight self.weight = weight self.gram = GramMatrix() self.criterion = nn.MSELoss() def forward(self, input): self.output = input.clone() self.G = self.gram(input) self.G.mul_(self.weight) self.loss = self.criterion(self.G, self.target) return self.output def backward(self, retain_variables=True): self.loss.backward(retain_variables=retain_variables) return self.loss # move it to the GPU if possible: if use_cuda: cnn = cnn.cuda() texture_layers_default = ['conv_1', 'pool_5', 'pool_10', 'pool_19', 'pool_28' ] i = 1 for layer in list(cnn): if isinstance(layer, nn.Conv2d): name = "conv_" + str(i) if isinstance(layer, nn.ReLU): name = "relu_" + str(i) if isinstance(layer, nn.MaxPool2d): name = "pool_" + str(i) + '\n' i += 1 print(name) def get_texture_model_and_losses(cnn, texture_img, texture_weight, texture_layers=texture_layers_default): cnn = 
copy.deepcopy(cnn) texture_losses = [] model = nn.Sequential() gram = GramMatrix() if use_cuda: model = model.cuda() gram = gram.cuda() i = 1 for layer in list(cnn): if isinstance(layer, nn.Conv2d): name = "conv_" + str(i) model.add_module(name, layer) if name in texture_layers: print(name) target_feature = model(texture_img).clone() target_feature_gram = gram(target_feature) texture_loss = textureLoss(target_feature_gram, texture_weight) model.add_module("texture_loss_" + str(i), texture_loss) texture_losses.append(texture_loss) i += 1 if isinstance(layer, nn.ReLU): name = "relu_" + str(i) model.add_module(name, layer) if name in texture_layers: print(name) target_feature = model(texture_img).clone() target_feature_gram = gram(target_feature) texture_loss = textureLoss(target_feature_gram, texture_weight) model.add_module("texture_loss_" + str(i), texture_loss) texture_losses.append(texture_loss) i += 1 if isinstance(layer, nn.MaxPool2d): name = "pool_" + str(i) model.add_module(name, layer) if name in texture_layers: print(name) target_feature = model(texture_img).clone() target_feature_gram = gram(target_feature) texture_loss = textureLoss(target_feature_gram, texture_weight) model.add_module("texture_loss_" + str(i), texture_loss) texture_losses.append(texture_loss) # *** i += 1 return model, texture_losses def get_input_param_optimizer(input_img): input_param = nn.Parameter(input_img.data) optimizer = optim.LBFGS([input_param]) return input_param, optimizer def run_style_transfer(cnn, texture_img, input_img, num_steps=400, texture_weight=1e9): print('Building the style transfer model..') model, texture_losses = get_texture_model_and_losses( cnn, texture_img, texture_weight) input_param, optimizer = get_input_param_optimizer(input_img) print('Optimizing..') run = [0] while run[0] <= num_steps: def closure(): for k in range(3): input_param.data[0,k,:,:] = torch.clamp(input_param.data[0,k,:,:], max=max[k], min=min[k]) optimizer.zero_grad() model(input_param) texture_score = 0 for tl in texture_losses: texture_score += tl.backward() run[0] += 1 if run[0] % 50 == 0: print("run {}:".format(run)) print('Style Loss : {:4f} '.format(texture_score.data[0])) print() return texture_score optimizer.step(closure) for k in range(3): input_param.data[0,k,:,:] = torch.clamp(input_param.data[0,k,:,:], max=max[k], min=min[k]) return input_param.data output = run_style_transfer(cnn, texture, noise, num_steps=800, texture_weight = 1e9) plt.figure() imshow(output, title='Output Image',preprocess=True) plt.ioff() plt.show()
st115840
In Stanford CS 231N course assignment 3 (http://cs231n.github.io/assignments2017/assignment3/ 20) you are invited to implement Gatys et al.'s style transfer algorithm (https://arxiv.org/abs/1508.06576 6). Doing it in PyTorch is one of the possibilities (the other one is TensorFlow) and they provide an ipython notebook with some initial code and correctness checking routines. You may find it helpful.
st115841
I am trying to build pytorch from master. I end up with the following error, which I am unable to debug. -- Build files have been written to: /home/ram/pytorch/torch/lib/build/THNN Scanning dependencies of target THNN [ 50%] Building C object CMakeFiles/THNN.dir/init.c.o [100%] Linking C shared library libTHNN.so [100%] Built target THNN Install the project... -- Install configuration: "Release" -- Installing: /home/ram/pytorch/torch/lib/tmp_install/lib/libTHNN.so.1 -- Installing: /home/ram/pytorch/torch/lib/tmp_install/lib/libTHNN.so -- Set runtime path of "/home/ram/pytorch/torch/lib/tmp_install/lib/libTHNN.so.1" to "" -- Installing: /home/ram/pytorch/torch/lib/tmp_install/include/THNN/THNN.h -- Installing: /home/ram/pytorch/torch/lib/tmp_install/include/THNN/generic/THNN.h -- The C compiler identification is GNU 4.8.5 -- The CXX compiler identification is GNU 4.8.5 -- Check for working C compiler: /home/ram/anaconda2/bin/cc -- Check for working C compiler: /home/ram/anaconda2/bin/cc -- works -- Detecting C compiler ABI info -- Checking if C linker supports --verbose -- Checking if C linker supports --verbose - yes -- Detecting C compiler ABI info - done -- Detecting C compile features -- Detecting C compile features - done -- Check for working CXX compiler: /home/ram/anaconda2/bin/c++ -- Check for working CXX compiler: /home/ram/anaconda2/bin/c++ -- works -- Detecting CXX compiler ABI info -- Checking if CXX linker supports --verbose -- Checking if CXX linker supports --verbose - yes -- Detecting CXX compiler ABI info - done -- Detecting CXX compile features -- Detecting CXX compile features - done -- Removing -DNDEBUG from compile flags -- TH_LIBRARIES: /home/ram/pytorch/torch/lib/tmp_install/lib/libTH.so.1 -- Found CUDA: /usr/local/cuda-7.5 (found suitable version "7.5", minimum required is "5.5") -- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2 - True -- Compiling with MAGMA support -- MAGMA INCLUDE DIRECTORIES: /home/ram/anaconda2/include -- MAGMA LIBRARIES: /home/ram/anaconda2/lib/libmagma.a -- MAGMA V2 check: 1 -- Automatic GPU detection failed. Building for common architectures. 
-- Autodetected CUDA architecture(s): 3.0;3.5;5.0;5.2;5.2+PTX -- got cuda version 7.5 -- Found CUDA with FP16 support, compiling with torch.CudaHalfTensor -- CUDA_NVCC_FLAGS: -DTH_INDEX_BASE=0 -I/home/ram/pytorch/torch/lib/tmp_install/include -I/home/ram/pytorch/torch/lib/tmp_install/include/TH -I/home/ram/pytorch/torch/lib/tmp_install/include/THC -I/home/ram/pytorch/torch/lib/tmp_install/include/THS -I/home/ram/pytorch/torch/lib/tmp_install/include/THCS -I/home/ram/pytorch/torch/lib/tmp_install/include/THPP -I/home/ram/pytorch/torch/lib/tmp_install/include/THNN -I/home/ram/pytorch/torch/lib/tmp_install/include/THCUNN;-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_52,code=compute_52;-DCUDA_HAS_FP16=1 -- THC_SO_VERSION: 1 -- Configuring done -- Generating done CMake Warning: Manually-specified variables were not used by the project: ATEN_LIBRARIES NO_CUDA THCS_LIBRARIES THCUNN_LIBRARIES THCUNN_SO_VERSION THC_LIBRARIES THD_SO_VERSION THNN_LIBRARIES THNN_SO_VERSION THPP_LIBRARIES THS_LIBRARIES TH_SO_VERSION -- Build files have been written to: /home/ram/pytorch/torch/lib/build/THC [ 1%] Building NVCC (Device) object CMakeFiles/THC.dir/THC_generated_THCHalf.cu.o [ 4%] Building NVCC (Device) object CMakeFiles/THC.dir/THC_generated_THCReduceApplyUtils.cu.o [ 4%] Building NVCC (Device) object CMakeFiles/THC.dir/THC_generated_THCStorage.cu.o [ 4%] Building NVCC (Device) object CMakeFiles/THC.dir/THC_generated_THCStorageCopy.cu.o [ 6%] Building NVCC (Device) object CMakeFiles/THC.dir/THC_generated_THCSleep.cu.o [ 7%] Building NVCC (Device) object CMakeFiles/THC.dir/THC_generated_THCTensorCopy.cu.o [ 8%] Building NVCC (Device) object CMakeFiles/THC.dir/THC_generated_THCTensor.cu.o [ 9%] Building NVCC (Device) object CMakeFiles/THC.dir/THC_generated_THCTensorMath2.cu.o [ 10%] Building NVCC (Device) object CMakeFiles/THC.dir/THC_generated_THCBlas.cu.o [ 12%] Building NVCC (Device) object CMakeFiles/THC.dir/THC_generated_THCTensorMath.cu.o [ 13%] Building NVCC (Device) object CMakeFiles/THC.dir/THC_generated_THCTensorMathBlas.cu.o [ 14%] Building NVCC (Device) object CMakeFiles/THC.dir/THC_generated_THCTensorMathMagma.cu.o Segmentation fault CMake Error at THC_generated_THCSleep.cu.o.cmake:267 (message): Error generating file /home/ram/pytorch/torch/lib/build/THC/CMakeFiles/THC.dir//./THC_generated_THCSleep.cu.o make[2]: *** [CMakeFiles/THC.dir/THC_generated_THCSleep.cu.o] Error 1 make[2]: *** Waiting for unfinished jobs.... 
Segmentation fault CMake Error at THC_generated_THCBlas.cu.o.cmake:267 (message): Error generating file /home/ram/pytorch/torch/lib/build/THC/CMakeFiles/THC.dir//./THC_generated_THCBlas.cu.o make[2]: *** [CMakeFiles/THC.dir/THC_generated_THCBlas.cu.o] Error 1 Segmentation fault CMake Error at THC_generated_THCReduceApplyUtils.cu.o.cmake:267 (message): Error generating file /home/ram/pytorch/torch/lib/build/THC/CMakeFiles/THC.dir//./THC_generated_THCReduceApplyUtils.cu.o make[2]: *** [CMakeFiles/THC.dir/THC_generated_THCReduceApplyUtils.cu.o] Error 1 Segmentation fault CMake Error at THC_generated_THCTensor.cu.o.cmake:267 (message): Error generating file /home/ram/pytorch/torch/lib/build/THC/CMakeFiles/THC.dir//./THC_generated_THCTensor.cu.o make[2]: *** [CMakeFiles/THC.dir/THC_generated_THCTensor.cu.o] Error 1 Segmentation fault CMake Error at THC_generated_THCStorageCopy.cu.o.cmake:267 (message): Error generating file /home/ram/pytorch/torch/lib/build/THC/CMakeFiles/THC.dir//./THC_generated_THCStorageCopy.cu.o make[2]: *** [CMakeFiles/THC.dir/THC_generated_THCStorageCopy.cu.o] Error 1 Segmentation fault CMake Error at THC_generated_THCTensorMathBlas.cu.o.cmake:267 (message): Error generating file /home/ram/pytorch/torch/lib/build/THC/CMakeFiles/THC.dir//./THC_generated_THCTensorMathBlas.cu.o make[2]: *** [CMakeFiles/THC.dir/THC_generated_THCTensorMathBlas.cu.o] Error 1 Segmentation fault CMake Error at THC_generated_THCTensorMathMagma.cu.o.cmake:267 (message): Error generating file /home/ram/pytorch/torch/lib/build/THC/CMakeFiles/THC.dir//./THC_generated_THCTensorMathMagma.cu.o make[2]: *** [CMakeFiles/THC.dir/THC_generated_THCTensorMathMagma.cu.o] Error 1 Segmentation fault CMake Error at THC_generated_THCHalf.cu.o.cmake:267 (message): Error generating file /home/ram/pytorch/torch/lib/build/THC/CMakeFiles/THC.dir//./THC_generated_THCHalf.cu.o make[2]: *** [CMakeFiles/THC.dir/THC_generated_THCHalf.cu.o] Error 1 Segmentation fault CMake Error at THC_generated_THCTensorCopy.cu.o.cmake:267 (message): Error generating file /home/ram/pytorch/torch/lib/build/THC/CMakeFiles/THC.dir//./THC_generated_THCTensorCopy.cu.o make[2]: *** [CMakeFiles/THC.dir/THC_generated_THCTensorCopy.cu.o] Error 1 Segmentation fault CMake Error at THC_generated_THCTensorMath2.cu.o.cmake:267 (message): Error generating file /home/ram/pytorch/torch/lib/build/THC/CMakeFiles/THC.dir//./THC_generated_THCTensorMath2.cu.o make[2]: *** [CMakeFiles/THC.dir/THC_generated_THCTensorMath2.cu.o] Error 1 Segmentation fault CMake Error at THC_generated_THCStorage.cu.o.cmake:267 (message): Error generating file /home/ram/pytorch/torch/lib/build/THC/CMakeFiles/THC.dir//./THC_generated_THCStorage.cu.o make[2]: *** [CMakeFiles/THC.dir/THC_generated_THCStorage.cu.o] Error 1 Segmentation fault CMake Error at THC_generated_THCTensorMath.cu.o.cmake:267 (message): Error generating file /home/ram/pytorch/torch/lib/build/THC/CMakeFiles/THC.dir//./THC_generated_THCTensorMath.cu.o make[2]: *** [CMakeFiles/THC.dir/THC_generated_THCTensorMath.cu.o] Error 1 make[1]: *** [CMakeFiles/THC.dir/all] Error 2 make: *** [all] Error 2
st115842
For example, the network has two outputs: output1 is from an intermediate layer and output2 is from the last layer, and my total loss is:

loss_total = loss_1 + loss_2

where loss_1 is calculated using output1 and loss_2 is calculated using output2. Now the problem is that loss_total seems to be dominated by loss_1, and loss_2 doesn't play a role. I tried to put some weights on loss_1 and loss_2, like:

loss_total = loss_1 * 0.001 + loss_2 * 0.01

However, this does not seem to work. Does anyone have some ideas on similar problems?
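For what it's worth, here is a minimal runnable sketch of such a two-head setup (the layer sizes, loss choices and targets are made up for illustration, not taken from the question):

import torch
import torch.nn as nn
from torch.autograd import Variable

# Hypothetical two-head model: out1 from an intermediate layer, out2 from the last.
class TwoHead(nn.Module):
    def __init__(self):
        super(TwoHead, self).__init__()
        self.fc1 = nn.Linear(8, 16)
        self.fc2 = nn.Linear(16, 4)

    def forward(self, x):
        h = self.fc1(x)
        return h, self.fc2(h)   # intermediate output and final output

model = TwoHead()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = Variable(torch.randn(5, 8))
t1 = Variable(torch.randn(5, 16))                  # target for the intermediate head
t2 = Variable(torch.LongTensor(5).random_(0, 4))   # class target for the final head

out1, out2 = model(x)
loss_1 = nn.MSELoss()(out1, t1)
loss_2 = nn.CrossEntropyLoss()(out2, t2)
loss_total = 0.001 * loss_1 + 0.01 * loss_2        # the weights tried above

opt.zero_grad()
loss_total.backward()   # one backward pass covers both terms
opt.step()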
st115843
I have no idea which specific losses you are using. Anyway, the following may help solve your problem. Just define your loss_total as:

loss_total = alpha*loss_1 + (1-alpha)*loss_2

and tune the hyperparameter alpha in the interval (0,1). If you say that loss_1 dominates loss_total, then you'll have to choose an alpha that is closer to 0 than to 1. Maybe you can start with alpha = 0.4 and, if it still does not work, decrease it by a fixed step of 0.05 or so, until you get reasonable results. However, before doing this cross-validation, you may wish to try training with alpha = 0, just to make sure that you can in fact minimize loss_2. There could be a bug in the definition of loss_2, and this procedure allows you to check that.
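A minimal sketch of that search (build_model, train and validate are hypothetical helpers you would write for your own model and data):

best_alpha, best_score = None, float('inf')
for alpha in [0.40, 0.35, 0.30, 0.25, 0.20, 0.15, 0.10, 0.05]:
    model = build_model()   # fresh model for each candidate alpha
    train(model, loss_fn=lambda l1, l2: alpha * l1 + (1 - alpha) * l2)
    score = validate(model) # e.g. validation loss or error rate
    if score < best_score:
        best_alpha, best_score = alpha, score
print('best alpha:', best_alpha)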
st115844
Hi @dpernes. I have met the same problem as above and appreciate your solution. I hope the network can learn alpha itself, just like the other parameters of the network. But the parameters in the network are optimized by an optimizer like SGD, so what can I do to optimize alpha automatically?
st115845
Hi @Pongroc. In principle, alpha is a hyperparameter, so it is not learned by gradient-based optimization, but rather by cross-validation. (You know what cross-validation is, right?) You may ask why that is so - and that's precisely what I am going to try to explain. Suppose that loss_1 is typically much larger than loss_2. Then, if you try to learn alpha and the remaining parameters jointly, in such a way that loss_total is minimized, most likely you will get a value for alpha that is very close to 0, i.e. the optimization simply removes the contribution of loss_1 to your loss_total. This does not mean, however, that loss_1 will be low - on the contrary, it will probably be high. If both loss_1 and loss_2 are measurements of how well your model behaves (in two different senses), then probably you want both to be relatively low instead of having one that is very low and the other one that is very high. The role of the hyperparameter alpha is, therefore, to weight these two losses in some way that optimizes the performance of your model, not necessarily minimizing loss_total. If it makes it easier for you to understand, you may also think of it in the context of regularization. You usually do not learn your regularization hyperparameter together with the model parameters, do you? Note that the usual regularized loss has a form that is very similar to the loss I proposed:

L_total(theta) = L(theta) + lambda * R(theta),

where R(theta) is some regularization function (e.g. the L2 norm). Dividing the RHS by a constant (independent of theta) does not change the minimizer of the function, so I may equivalently redefine the loss as:

L_total(theta) = 1/(1+lambda) * L(theta) + lambda/(1+lambda) * R(theta),

which, by setting alpha = 1/(1+lambda), has exactly the same form as the loss_total that I proposed in my first reply to this topic. Now, note that if we set alpha to a value that is very close to 0 (or, equivalently, if we set lambda to a very large value), our loss_total will be totally dominated by the regularization term, and so we'll get a model with a very poor performance (high bias), because it basically learns nothing related with the specific problem we are trying to solve. On the other hand, if we set alpha to a value that is very close to 1 (or, equivalently, if we set lambda to a very small value), our loss_total will be almost unregularized, and so we are likely to have a model that does not generalize well (high variance).
st115846
I reinstalled PyTorch, and out of curiosity decided to run the PyTorch tests from the Github repo. I had 1 failure and 2 errors. Does it mean my PyTorch installation is broken?

root@4ef8202959fd:~# git clone https://github.com/pytorch/pytorch.git
Cloning into 'pytorch'...
remote: Counting objects: 34197, done.
remote: Compressing objects: 100% (6/6), done.
remote: Total 34197 (delta 12), reused 12 (delta 12), pack-reused 34179
Receiving objects: 100% (34197/34197), 13.77 MiB | 6.33 MiB/s, done.
Resolving deltas: 100% (25739/25739), done.
Checking connectivity... done.
root@4ef8202959fd:~# cd pytorch/test/
root@4ef8202959fd:~/pytorch/test# bash run_test.sh
~/pytorch/test ~/pytorch/test
Running torch tests
......................................sss.............F........s.............E...........................................E...............................................................................ss.
======================================================================
ERROR: test_has_storage_numpy (__main__.TestTorch)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_torch.py", line 142, in test_has_storage_numpy
    self.assertIsNotNone(torch.DoubleTensor(arr).storage())
RuntimeError: tried to construct a tensor from a float sequence, but found an item of type numpy.float32 at index (0)

======================================================================
ERROR: test_neg (__main__.TestTorch)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_torch.py", line 397, in test_neg
    self._test_neg(self, lambda t: t)
  File "test_torch.py", line 385, in _test_neg
    res_neg.neg_()
AttributeError: 'torch.LongTensor' object has no attribute 'neg_'

======================================================================
FAIL: test_dim_reduction (__main__.TestTorch)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_torch.py", line 245, in test_dim_reduction
    self._test_dim_reduction(self, lambda t: t)
  File "test_torch.py", line 242, in _test_dim_reduction
    test_multidim(x, singleton_dim)
  File "test_torch.py", line 221, in test_multidim
    self.assertEqual(fn(x, dim).unsqueeze(dim), fn(x, dim, keepdim=True))
  File "/root/pytorch/test/common.py", line 207, in assertEqual
    assertTensorsEqual(x, y)
  File "/root/pytorch/test/common.py", line 187, in assertTensorsEqual
    super(TestCase, self).assertEqual(a.size(), b.size())
AssertionError: torch.Size([1, 1, 4, 5]) != torch.Size([1, 4, 5])

----------------------------------------------------------------------
Ran 204 tests in 23.488s

FAILED (failures=1, errors=2, skipped=6)
st115847
Ahhh, I am an idiot. Checking out tag v0.2.0 and running the tests passed all of them. Fist bump for the PyTorch devs.
st115848
I am looking at the same error output. Would you tell me how to solve it? What is the meaning of "checking out tag v0.2.0"?
st115849
$ git clone --branch v0.2.0 https://github.com/pytorch/pytorch.git

Thanks @FuriouslyCurious
st115850
I have trained a model using my own dataset; now I want to fine-tune it on another dataset. Does anyone know how to do this?
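A minimal sketch of the usual recipe, assuming your checkpoint and architecture ('my_model.pth' and MyNet are hypothetical stand-ins, and the final layer is assumed to be called fc):

import torch
import torch.nn as nn
import torch.optim as optim

model = MyNet()
model.load_state_dict(torch.load('my_model.pth'))   # restore the trained weights

# Replace the task-specific head for the new dataset (here: 5 new classes).
model.fc = nn.Linear(model.fc.in_features, 5)

# Optionally freeze everything except the new head.
for name, p in model.named_parameters():
    if not name.startswith('fc'):
        p.requires_grad = False

optimizer = optim.SGD(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3)
# ...then run your usual training loop on the new dataset.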
st115851
Hello, I found that @apaszke said torch.utils.trainer would be removed and recommended using tnt instead, back in April. But trainer still exists and there is no doc for tnt. I'd like to know the plan and the future of these two. Thank you.
st115852
My code is:

x = F.relu(self.fc1(x))   # name the output "x1"
x = self.bn1(x)           # name the output "x2"

The results for x1 and x2 are shown in the attached picture (QQ图片20170822103301.png), and I do not know why they differ.
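If I read the question correctly, x2 differs from x1 simply because BatchNorm normalizes the activations over the batch. A small sketch of that effect (all sizes are made up; affine=False isolates the normalization itself):

import torch
import torch.nn as nn
from torch.autograd import Variable

bn = nn.BatchNorm1d(4, affine=False)
x1 = Variable(torch.randn(8, 4) * 5 + 3)   # batch of 8 with mean ~3, std ~5
x2 = bn(x1)                                # training mode: uses batch statistics

print(x1.data.mean(), x1.data.std())       # roughly 3 and 5
print(x2.data.mean(), x2.data.std())       # roughly 0 and 1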
st115853
Hi, I see that Facebook has a package for exactly that:
https://github.com/facebookresearch/SentEval/blob/master/senteval/tools/classifier.py

# Copyright (c) 2017-present, Facebook, Inc.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
#

"""
Pytorch Classifier class in the style of scikit-learn
Classifiers include Logistic Regression and MLP
"""
from __future__ import absolute_import, division, unicode_literals

import numpy as np
import copy
from senteval import utils

import torch
from torch import nn

and other useful stuff in the repo itself:
https://github.com/facebookresearch/SentEval (a python tool for evaluating the quality of sentence embeddings)

Why isn't this an integral part of PyTorch? Thanks,
st115854
Is the original torch VolumetricBatchNormalization exactly the same as BatchNorm3d? Likewise, is VolumetricConvolution the same as Conv3d? Thank you
st115855
I am installing pytorch on Linux, with conda, python 3.5 and CUDA 8.0. I just ran:

conda install pytorch torchvision cuda80 -c soumith

I get the following:

The following NEW packages will be INSTALLED:

    cffi:        1.10.0-py35_0
    cuda80:      1.0-0                     soumith
    pillow:      3.4.2-py35_0
    pycparser:   2.18-py35_0
    pytorch:     0.2.0-py35h1a0f79e_2cu80  soumith [cuda80]
    torchvision: 0.1.9-py35h72e4c6f_1      soumith

Proceed ([y]/n)? y
pytorch-0.2.0- 100% |###...###| Time: 0:09:15 818.23 kB/s
pytorch-0.2.0- 100% |###...###| Time: 0:01:58   3.83 MB/s
pytorch-0.2.0- 100% |###...###| Time: 0:01:45   4.30 MB/s

CondaError: CondaHTTPError: HTTP None None for url <None>
Elapsed: None

An HTTP error occurred when trying to retrieve this URL.
ConnectionError(ReadTimeoutError("HTTPSConnectionPool(host='binstar-cio-packages-prod.s3.amazonaws.com', port=443): Read timed out.",),)

(the CondaHTTPError block above is printed three times)

I need to install it with conda.
st115856
Hi! How do I use the new pytorch bitwise operations (bitwise and, or, xor, lshift, rshift)? I cannot find them in the documentation or in the source code.
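In case it helps anyone who lands here: as far as I can tell, they are exposed through the standard Python operators on integer tensor types rather than as named functions. A quick sketch (this assumes the operator bindings are available in your version; check yours):

import torch

a = torch.LongTensor([0b1100, 0b1010])
b = torch.LongTensor([0b1010, 0b0110])

print(a & b)    # elementwise AND
print(a | b)    # elementwise OR
print(a ^ b)    # elementwise XOR
print(a << 1)   # left shift by 1
print(a >> 1)   # right shift by 1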
st115857
model_0 = AlexNet()
model_1 = torch.nn.DataParallel(AlexNet(), device_ids=[0, 1])
model_1_dict = model_1.state_dict()
model = model_0.load_state_dict(model_1_dict)  # Here will be an error

The keys of model_0 and model_1 are different, as shown in the attached screenshot (微信图片_20170821171951.png): DataParallel adds 'module.' to each key's name.
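A common workaround, building on the variables above, is to strip the 'module.' prefix from the DataParallel state dict before loading it into the plain model:

from collections import OrderedDict

# 'module.features.0.weight' -> 'features.0.weight', etc.
stripped = OrderedDict((k.replace('module.', '', 1), v)
                       for k, v in model_1_dict.items())
model_0.load_state_dict(stripped)

# Alternatively, wrap the target model in DataParallel first, so the keys match:
# torch.nn.DataParallel(model_0).load_state_dict(model_1_dict)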
st115858
I am using the pretrained network Resnet-18. All goes well, but at the time of testing an image it gives the error: Expected 4D tensor as input, got 3D tensor instead.

Code:

params = torch.load('resnet18-5c106cde.pth')
model_ft = models.resnet18(pretrained=False)
model_ft.load_state_dict(params)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 2)
if use_gpu:
    model_ft = model_ft.cuda()
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

Testing code:

tn = transforms.Compose([
    transforms.RandomSizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
print(tn)
img = Image.open('test/horlicks1.jpg')
test1 = tn(img)
print(test1)
predict = model_ft1(test1)
#print(predict)

Error:

<torchvision.transforms.Compose object at 0x7fd2c7ddfe10>

( 0 ,.,.) =
 -1.0048 -1.0219 -1.0562  ...  -1.4843 -1.5014 -1.5185
 -1.0048 -1.0219 -1.0562  ...  -1.4843 -1.5014 -1.5185
 -1.0219 -1.0390 -1.0562  ...  -1.4672 -1.4843 -1.5014
           ...       ⋱        ...
  0.4508  0.2111 -0.0972  ...  -0.8678 -1.1760 -1.4158
  0.4851  0.2624 -0.0458  ...  -0.9020 -1.2103 -1.4329
  0.5022  0.2967 -0.0116  ...  -0.9192 -1.2274 -1.4500

( 1 ,.,.) =
 -1.0903 -1.1078 -1.1429  ...  -1.1429 -1.1429 -1.1429
 -1.0903 -1.1078 -1.1429  ...  -1.1429 -1.1429 -1.1429
 -1.1078 -1.1253 -1.1429  ...  -1.1429 -1.1604 -1.1604
           ...       ⋱        ...
 -0.8102 -1.0378 -1.3179  ...  -0.4251 -0.8452 -1.1779
 -0.7927 -1.0028 -1.2829  ...  -0.4951 -0.9153 -1.2304
 -0.7927 -0.9853 -1.2654  ...  -0.5301 -0.9503 -1.2654

( 2 ,.,.) =
 -1.4210 -1.4384 -1.4733  ...  -1.5430 -1.5430 -1.5430
 -1.4210 -1.4384 -1.4733  ...  -1.5430 -1.5430 -1.5430
 -1.4384 -1.4559 -1.4559  ...  -1.5430 -1.5604 -1.5604
           ...       ⋱        ...
 -0.8110 -1.0027 -1.2641  ...   0.3393 -0.0964 -0.4275
 -0.7936 -0.9678 -1.2119  ...   0.2522 -0.1835 -0.4973
 -0.7761 -0.9504 -1.1944  ...   0.1999 -0.2184 -0.5321
[torch.FloatTensor of size 3x224x224]

ValueError                                Traceback (most recent call last)
in <module>()
     12 #X = test1.resize_(1, 3, 224, 224)
     13 test1 = Variable(test1.cpu())
---> 14 predict = model_ft1(test1)
     15 #print(predict)

/home/rj/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)
    222         for hook in self._forward_pre_hooks.values():
    223             hook(self, input)
--> 224         result = self.forward(*input, **kwargs)
    225         for hook in self._forward_hooks.values():
    226             hook_result = hook(self, input, result)

/home/rj/anaconda2/lib/python2.7/site-packages/torchvision-0.1.9-py2.7.egg/torchvision/models/resnet.py in forward(self, x)
    137
    138     def forward(self, x):
--> 139         x = self.conv1(x)
    140         x = self.bn1(x)
    141         x = self.relu(x)

/home/rj/anaconda2/lib/python2.7/site-packages/torch/nn/modules/conv.pyc in forward(self, input)
    252     def forward(self, input):
    253         return F.conv2d(input, self.weight, self.bias, self.stride,
--> 254                         self.padding, self.dilation, self.groups)
    255
    256

/home/rj/anaconda2/lib/python2.7/site-packages/torch/nn/functional.pyc in conv2d(input, weight, bias, stride, padding, dilation, groups)
     46     """
     47     if input is not None and input.dim() != 4:
---> 48         raise ValueError("Expected 4D tensor as input, got {}D tensor instead.".format(input.dim()))
     49
     50     f = ConvNd(_pair(stride), _pair(padding), _pair(dilation), False,

ValueError: Expected 4D tensor as input, got 3D tensor instead.
st115859
If you are passing one image as the input, you will have to reshape it so that it has a batch dimension, which would be 1 in this case. Or better:

test1 = test1.unsqueeze(0)
print(test1.size())
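Putting it together with the code from the question, a sketch of the full test path (model_ft is the fine-tuned model from above; calling eval() matters so batchnorm uses its running statistics):

img = Image.open('test/horlicks1.jpg')
test1 = tn(img)                # 3 x 224 x 224
test1 = test1.unsqueeze(0)     # 1 x 3 x 224 x 224, a batch of one
test1 = Variable(test1)

model_ft.eval()                # switch off training-time behaviour
predict = model_ft(test1)
print(predict)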
st115860
Hi! I'm trying to optimise memory requirements for a seq2seq decoder when every input to the decoder is taken from the previous step's output (non-teaching mode). In that case, I can't use the pack_padded_sequence method and execute the RNN on the full batch; instead, I iterate over sequence offsets, accumulating the loss from every step.

From my experiments, I found that in this mode GPU memory consumption becomes almost linear in the input sequence length, which, even for small LSTM nets, limits sequences to 300-400 steps. I've implemented a small demonstration tool, which generates a random batch of sequences and iterates the loop over their offsets: https://gist.github.com/Shmuma/614d3dbe0ad2805d048ff0e6129682aa

Results from its run on a GTX 1080 Ti and pytorch 0.1.12:

seq_len=50 -> 1.5GB
seq_len=200 -> 5.0GB
seq_len=300 -> 7.2GB
seq_len=400 -> 9.5GB

I guess such high memory consumption is due to gradients accumulated during loss summing. So, my question is: is it possible to optimise memory consumption in such a case? My understanding is that gradients for every LSTM matrix should be aggregated somehow among all sequence steps, but they are retained in separate buffers until the final .backward() call. Is it possible to achieve this?

The other option would be to call .backward() for every sequence step, but that doesn't look viable here, as the decoding step is preceded by an encoder run, and I'm not sure that the encoder's gradients will be valid.

Thanks!
st115861
I don't think so. My issue doesn't look like a bug; it's more about the way PyTorch calculates gradients (tape-based gradient calculation) and how I'm using my RNN.
st115862
Something I really don't understand is whether RNNs are unrolled or not, and how to choose. In Keras with the Theano backend you can specify unroll=True or False depending on the memory-vs-speed tradeoff you want to make. By the way, in the specific case described here, whatever the rolling status of the LSTM, the memory consumption will grow due to the output layer you use:

net_out = nn.Linear(in_features=HIDDEN_SIZE, out_features=INPUT_SIZE)

Indeed:

print(net_out)

gives:

Linear (512 -> INPUT_SIZE)
st115863
As far as I understand the whole machinery, PyTorch has a dynamic graph, which allows you to decide how many steps to unroll your RNN. With simple architectures, it lets you stop time batching altogether (which you have to do in Keras) and process a whole batch of variable-length sequences using one RNN call preceded by pack_padded_sequence. Unfortunately, it does not work as smoothly in the case of a seq2seq architecture.
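For readers who have not used it, a minimal sketch of the pack_padded_sequence path mentioned above (all sizes are made up; sequences must be sorted by decreasing length):

import torch
import torch.nn as nn
from torch.autograd import Variable
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

rnn = nn.LSTM(input_size=4, hidden_size=8, batch_first=True)

# Batch of 3 padded sequences with true lengths 5, 3 and 2.
padded = Variable(torch.randn(3, 5, 4))
lengths = [5, 3, 2]   # must be in decreasing order

packed = pack_padded_sequence(padded, lengths, batch_first=True)
packed_out, (h, c) = rnn(packed)   # the RNN skips the padded steps
out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)
print(out.size())                  # (3, 5, 8), zeros past each true length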
st115864
I had the same problem for an encoder-decoder model used for summarization: encoder sequences of ~400 and decoder of ~120 max length for my 12GB TitanX. Also, this was on v0.2. You could try using a different optimizer… one that doesn't track gradients…
st115865
Hm, that's an interesting suggestion. Could you give more information about such optimizers? I thought all of them just use gradients gathered from the computation graph built on Variable operations.

Today I had a different idea: split both input and output sequences into chunks of fixed size and update gradients at the end of each chunk. It should be similar to "RNN unrolling" in Keras or TF, and in theory it can have a negative effect on convergence, but it looks like a solution to the memory limitation. I haven't tried it yet.
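A sketch of what that chunked scheme could look like (sizes, loss and target are placeholders I made up; the key line is re-wrapping the hidden state to cut the graph between chunks):

import torch
import torch.nn as nn
from torch.autograd import Variable

CHUNK = 50
rnn = nn.LSTM(input_size=10, hidden_size=512, batch_first=True)
net_out = nn.Linear(512, 10)
opt = torch.optim.Adam(list(rnn.parameters()) + list(net_out.parameters()))
crit = nn.MSELoss()

batch = Variable(torch.randn(16, 400, 10))   # made-up batch: 16 seqs of length 400
hidden = None
for ofs in range(0, batch.size(1), CHUNK):
    x = batch[:, ofs:ofs + CHUNK, :]
    out, hidden = rnn(x, hidden)
    # dummy target: reconstruct the input of this chunk
    loss = crit(net_out(out.contiguous().view(-1, 512)),
                x.contiguous().view(-1, 10))
    opt.zero_grad()
    loss.backward()   # backprop only through this chunk
    opt.step()
    # cut the graph: memory no longer grows with total sequence length
    hidden = tuple(Variable(h.data) for h in hidden)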
st115866
I tried your example without the enormous net_out = nn.Linear(in_features=HIDDEN_SIZE, out_features=INPUT_SIZE) layer, and it is not so terrible. If you want to have a Linear layer after the LSTM, you have to repeat INPUT_SIZE times a single nn.Linear(in_features=HIDDEN_SIZE, out_features=1). It doesn't change our conceptual problem, but you can build a much longer seq2seq architecture this way.
st115867
That's something I don't understand. Could you please explain? My understanding is that net_out is not too large (512*10 parameters) compared to the LSTM unit itself, which has 4*(512*10 + 512^2), i.e. about 200 times more. Additionally, in the training mode of seq2seq, on every step of the decoder RNN I need to apply the linear layer to calculate the next token for this step, to feed it in as the input on the next step. Of course, in real-world applications, when the set of output tokens is a full vocabulary (hundreds of thousands or even millions of words), the output layer will dominate. But this is a completely different story, which can be solved by hierarchical or sampled softmax.
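A quick way to check these counts (the printed numbers also include the bias terms, which the estimate above ignores):

import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=512)
net_out = nn.Linear(512, 10)

count = lambda m: sum(p.nelement() for p in m.parameters())
print(count(lstm))      # 4*(10*512 + 512*512 + 2*512) = 1,073,152
print(count(net_out))   # 512*10 + 10 = 5,130
print(count(lstm) // count(net_out))   # 209, i.e. roughly 200x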