st119168
It’s because you called .cuda() before the assignment to tmap. Casting or copying the Variable to another device returns a different Variable, so tmap is not a leaf and won’t have the gradient accumulated. Either do this: tmap_leaf = Variable(...) tmap = tmap_leaf.cuda() or, better: tmap = Variable(torch.from_numpy(...).float().cuda(), requires_grad=True)
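Here is a minimal sketch of the difference (made-up array, assuming a CUDA device is available):

import numpy as np
import torch
from torch.autograd import Variable

data = np.random.rand(3, 4).astype(np.float32)   # stand-in for your numpy array

# non-leaf: calling .cuda() after wrapping returns a different Variable,
# so gradients accumulate in tmap_leaf, not in tmap
tmap_leaf = Variable(torch.from_numpy(data), requires_grad=True)
tmap = tmap_leaf.cuda()

# leaf: move the tensor to the GPU before wrapping it
tmap = Variable(torch.from_numpy(data).float().cuda(), requires_grad=True)
tmap.sum().backward()
print(tmap.grad)   # populated, because tmap is a leaf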
st119169
I have a list of sentences and I am converting the list to a 3d tensor. The first dimension represents the number of sentences, the second dimension represents the number of words and the third dimension represents the word embedding size. The problem is that the number of words can vary across sentences. I tried to create a 3d tensor as follows. all_sentences1 = torch.FloatTensor(len(instances), None, args.emsize) But this gives me the following error. TypeError: torch.FloatTensor constructor received an invalid combination of arguments - got (int, NoneType, int), but expected one of: * no arguments * (int ...) didn't match because some of the arguments have invalid types: (int, NoneType, int) * (torch.FloatTensor viewed_tensor) * (torch.Size size) * (torch.FloatStorage data) * (Sequence data) How can I declare a 3d tensor?
st119170
Tensors must be declared with specific sizes in each dimension, so you need to pad the sentences so that they all have the same length, then use pack_padded_sequence if you’re running them through an RNN.
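A rough sketch of the padding + packing flow (made-up sizes; the sequences must be sorted by decreasing length before packing):

import torch
import torch.nn as nn
from torch.autograd import Variable
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

lengths = [5, 3, 2]                                  # 3 sentences, longest has 5 words
padded = Variable(torch.zeros(5, 3, 10))             # (max_len, batch, emsize); zeros are the padding

packed = pack_padded_sequence(padded, lengths)       # the RNN will skip the padded positions
rnn = nn.LSTM(10, 20)
h0 = Variable(torch.zeros(1, 3, 20))
c0 = Variable(torch.zeros(1, 3, 20))
packed_out, hidden = rnn(packed, (h0, c0))
output, out_lengths = pad_packed_sequence(packed_out)  # back to a padded (max_len, batch, 20) tensor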
st119171
I can pad the sequences manually, but is there any way to pad using a torch function? Moreover, can you point me to any tutorial that explains what pack_padded_sequence is, because I have no idea about it. Edit: I have just seen the documentation of pack_padded_sequence and pad_packed_sequence, but it’s not clear to me. Can anyone explain how to use these two functions?
st119172
The notes here may be helpful for using the packing functions: https://github.com/pytorch/pytorch/releases/tag/v0.1.10 Depending on your use case, the torchtext library may be helpful with creating an iterable dataset object containing your sentences; let me know if you have issues with using it (documentation is in the code’s docstrings).
st119173
You need to understand that PyTorch works differently than static graph frameworks. You can’t define “placeholders” with unspecified sizes, but you can pass in tensors of different sizes to your model without any modifications. Instead of reusing a single placeholder, you always have to operate on real data.
st119174
I use pip to install however run into this error. Traceback (most recent call last): File “/home/gaop/env/local/lib/python2.7/site-packages/pip/basecommand.py”, line 122, in main status = self.run(options, args) File “/home/gaop/env/local/lib/python2.7/site-packages/pip/commands/install.py”, line 278, in run requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle) File “/home/gaop/env/local/lib/python2.7/site-packages/pip/req.py”, line 1197, in prepare_files do_download, File “/home/gaop/env/local/lib/python2.7/site-packages/pip/req.py”, line 1375, in unpack_url self.session, File “/home/gaop/env/local/lib/python2.7/site-packages/pip/download.py”, line 546, in unpack_http_url resp = session.get(target_url, stream=True) File “/home/gaop/env/local/lib/python2.7/site-packages/pip/_vendor/requests/sessions.py”, line 395, in get return self.request(‘GET’, url, **kwargs) File “/home/gaop/env/local/lib/python2.7/site-packages/pip/download.py”, line 237, in request return super(PipSession, self).request(method, url, *args, **kwargs) File “/home/gaop/env/local/lib/python2.7/site-packages/pip/_vendor/requests/sessions.py”, line 383, in request resp = self.send(prep, **send_kwargs) File “/home/gaop/env/local/lib/python2.7/site-packages/pip/_vendor/requests/sessions.py”, line 486, in send r = adapter.send(request, **kwargs) File “/home/gaop/env/local/lib/python2.7/site-packages/pip/_vendor/requests/adapters.py”, line 385, in send raise SSLError(e) SSLError: [Errno 1] _ssl.c:510: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure Storing debug log for failure in /home/gaop/.pip/pip.log
st119175
this is the script I use pip install https://download.pytorch.org/whl/cu80/torch-0.1.10.post2-cp27-none-linux_x86_64.whl pip install torchvision
st119176
It must be some kind of a network error at your side (maybe you don’t have some trusted SSL certificates installed)?
st119177
I had the same issue; it seems to be an SSL problem. See: https://github.com/kennethreitz/requests/issues/2022 Isemel’s answer: sudo apt-get install libffi-dev pip install pyOpenSSL ndg-httpsclient pyasn1 Worked for me.
st119178
I noticed that there is a ReLU here in an MNIST example. It turns a usual softmax into a softmax with non-negative input. I wonder if I’m missing something or whether it is a reasonable thing to do.
st119179
I think it is not an unreasonable thing to do, but it probably makes sense to remove it, as that might speed up learning a bit.
st119180
Hi, I’ve gone through the PyTorch tutorials, and looked at a couple examples, and I’m still having trouble getting started – I’m just trying to make a basic MLP for now. What I have below is my (existing) Keras version, and then an attempt at a PyTorch version, cobbled together from trying to read the docs and posts on this forum…still not finished because I’m not sure I’m doing this right. Can someone tell me if I’m on the right track? This MLP is intended to take, as input, a vector that’s 3 elements long (X.shape[1]=3) go through two hidden layers with n_hidden neurons each, with different activations, and then output a single number. (It is a regression problem.) def make_model(X, n_hidden, weights_file="weights.hdf5"): if ('keras' == library): model = Sequential() model.add(Dense(n_hidden, input_shape=(X.shape[1],), activation='relu', init='he_uniform')) model.add(Dense(n_hidden,activation='tanh')) model.add(Dense(1)) model.compile(loss='mse', optimizer='adam', lr=0.001) elif ('pytorch' == library): class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.hidden = nn.Linear(n_hidden, X.shape[1]) self.hidden2 = nn.Linear(n_hidden, X.shape[1]) self.out = nn.Linear(1) def forward(self, x): x = F.relu(self.hidden(x)) x = F.tanh(self.hidden2(x)) x = self.out(x) return x model = Net() else: raise ValueError('Invalid library selection') return model …And then for training, if ('keras' == library): model.fit(X_train, Y_train, nb_epoch=1, batch_size=batch_size, verbose=1, validation_data=(X_test, Y_test), callbacks =[ProgbarLogger(),ModelCheckpoint(filepath=weights_file, verbose=1, save_best_only=True)]) elif ('pytorch' == library): input = Variable(X_train, requires_grad=True) result = model(input) result.backward(torch.randn(result.size())) …I don’t see where I actually feed my “Y_train” (target, true) data to PyTorch, with which to compute a loss function. And then for predicting… if ('keras' == library): Y_pred = model.predict(X_test) elif ('pytorch' == library): input = Variable(X_train, requires_grad=True) Y_pred = model(input).numpy # after which I plot Y_pred along with Y_test using matplotlib ?
st119181
You always need to specify both the number of input and output features in the Linear layer constructor. It seems that the order is reversed, and you forgot to add a second argument to self.out. Additionally, hidden2 will be applied to an input having n_hidden features. Try this: class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.hidden = nn.Linear(X.shape[1], n_hidden) self.hidden2 = nn.Linear(n_hidden, n_hidden) self.out = nn.Linear(n_hidden, 1) def forward(self, x): x = F.relu(self.hidden(x)) x = F.tanh(self.hidden2(x)) x = self.out(x) return x For the target, you need to give the output of your network together with the targets to a loss function and optimize that.
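A minimal sketch of that last step, reusing the Net class above (the batch and data here are made up; it assumes X.shape[1] == 3 and n_hidden are defined as in your code):

import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable

model = Net()                                   # the class defined above
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

inputs = Variable(torch.randn(32, 3))           # stand-in for a batch of X_train
targets = Variable(torch.randn(32, 1))          # stand-in for the matching Y_train

optimizer.zero_grad()
outputs = model(inputs)
loss = criterion(outputs, targets)              # this is where Y_train enters
loss.backward()
optimizer.step()
print(loss.data[0])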
st119182
Thanks Adam! I incorporated what you wrote, and reviewed the examples/mnist/main.py file, and I’m close to a complete code now. The forward part of the model is generating an error, and I’m wondering if you or anyone else can offer a suggestion: (py35) ~/parameter$ ./pytorch_mlp_param.py Setting up data Defining model X_train.shape= (10000, 3) Using CUDA, number of devices = 2 (Outer) Epoch 0 of 10000 : Traceback (most recent call last): File “./pytorch_mlp_param.py”, line 160, in main() File “./pytorch_mlp_param.py”, line 154, in main train(model, epoch, trainloader, optimizer) File “./pytorch_mlp_param.py”, line 101, in train output = model(data) File “/opt/anaconda/envs/py35/lib/python3.5/site-packages/torch/nn/modules/module.py”, line 202, in call result = self.forward(*input, **kwargs) File “./pytorch_mlp_param.py”, line 64, in forward x = F.relu(self.hidden(x)) File “/opt/anaconda/envs/py35/lib/python3.5/site-packages/torch/nn/modules/module.py”, line 202, in call result = self.forward(*input, **kwargs) File “/opt/anaconda/envs/py35/lib/python3.5/site-packages/torch/nn/modules/linear.py”, line 54, in forward return self.backend.Linear()(input, self.weight, self.bias) File "/opt/anaconda/envs/py35/lib/python3.5/site-packages/torch/nn/functions/linear.py", line 10, in forward output.addmm(0, 1, input, weight.t()) TypeError: addmm received an invalid combination of arguments - got (int, int, torch.cuda.DoubleTensor, torch.cuda.FloatTensor), but expected one of: (torch.cuda.DoubleTensor mat1, torch.cuda.DoubleTensor mat2) (torch.cuda.sparse.DoubleTensor mat1, torch.cuda.DoubleTensor mat2) (float beta, torch.cuda.DoubleTensor mat1, torch.cuda.DoubleTensor mat2) (float alpha, torch.cuda.DoubleTensor mat1, torch.cuda.DoubleTensor mat2) (float beta, torch.cuda.sparse.DoubleTensor mat1, torch.cuda.DoubleTensor mat2) (float alpha, torch.cuda.sparse.DoubleTensor mat1, torch.cuda.DoubleTensor mat2) (float beta, float alpha, torch.cuda.DoubleTensor mat1, torch.cuda.DoubleTensor mat2) (float beta, float alpha, torch.cuda.sparse.DoubleTensor mat1, torch.cuda.DoubleTensor mat2) . . If I print out “x” at the beginning for forward(), I find that, as I was intending, it’s x = [torch.cuda.DoubleTensor of size 20x3 (GPU 0)] (where 20 is the batch size). I don’t understand what is the extra torch.cuda.FloatTensor it says it got. Any thoughts? . . . . For completeness, in case it helps, I’ll put the full code (with Keras references removed) below. . #! /usr/bin/env python # Multilayer Perceptron to learn "f" in "y = f(x,p)", given lots of (x,y) pairs # and a set of parameters p=[...] 
which affect f(x) from __future__ import print_function import numpy as np import matplotlib.pyplot as plt import argparse import torch import torch.nn as nn import torch.nn.functional as F import torch.utils.data import torch.optim as optim from torch.autograd import Variable def myfunction(x,p=[1,0]): # function to be learned, with its parameters return p[0]*np.sin(100*p[1]*x) # try just a sine wave, with amplitude & frequency def myfunc_stacked(X): Y = [] for i in range(X.shape[0]): x = X[i,0] p1 = X[i,1] p2 = X[i,2] p = [p1,p2] Y.append( myfunction(x,p)) return np.array(Y) def stack_params(X, p=None): # encapsulates parameters with X if p is None: p0 = np.random.rand(len(X)) # random values throughout X p1 = np.random.rand(len(X)) else: p0 = np.ones(len(X)) * p[0] # stack copies of params with X p1 = np.ones(len(X)) * p[1] return np.array(list(zip(X,p0,p1))) def gen_data(n=1000, n_params=2, rand_all=False): X = np.linspace(-1.0,1.0,num=n) if (not rand_all): p = np.random.random(n_params)-0.5 else: p = None X = stack_params(X,p) Y = myfunc_stacked(X) return X, Y, p def make_model(X, n_hidden): class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.hidden = nn.Linear(X.shape[1], n_hidden) self.hidden2 = nn.Linear(n_hidden, n_hidden) self.out = nn.Linear(n_hidden, 1) def forward(self, x): x = F.relu(self.hidden(x)) x = F.tanh(self.hidden2(x)) x = self.out(x) return x # Note: "backward" is automatically defined by torch.autograd model = Net() if torch.cuda.is_available(): print("Using CUDA, number of devices = ",torch.cuda.device_count()) model.cuda() return model def plot_prediction(X_test, Y_test, Y_pred, epoch, n_epochs, p_test): fig=plt.figure() plt.clf() ax = plt.subplot(1,1,1) ax.set_ylim([-1,1]) plt.title("Epoch #"+str(epoch)+"/"+str(n_epochs)+", p = "+str(p_test)) plt.plot(X_test[:,0],Y_test,'b-',label="True") plt.plot(X_test[:,0],Y_pred,'r-',label="Predicted") plt.legend() plt.savefig('progress.png') plt.close(fig) return def train(model, epoch, trainloader, criterion, optimizer): model.train() for batch_idx, (data, target) in enumerate(trainloader): if torch.cuda.is_available(): data, target = data.cuda(), target.cuda() data, target = Variable(data), Variable(target) optimizer.zero_grad() output = model(data) loss = criterion(output,target) loss.backward() optimizer.step() if batch_idx % args.log_interval == 0: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_idx * len(data), len(train_loader.dataset), 100. 
* batch_idx / len(train_loader), loss.data[0])) def predict(model,testloader, X_test, Y_test, epoch, n_epochs, p_test): model.eval() print(" Plotting....") Y_pred = [] for data, target in testloader: if torch.cuda.is_available(): data = data.cuda() data = Variable(data, volatile=True) output = model(data) Y_pred.append(output.numpy) Y_pred = np.array(Y_pred) plot_prediction(X_test, Y_test, Y_pred, epoch, n_epochs, p_test) def main(): np.random.seed(2) # parameters for 'size' of run n_hidden = 100 batch_size = 20 n_train = 10000 n_test =1000 print("Setting up data") X_train, Y_train, p_train = gen_data(n=n_train, rand_all=True) X_test, Y_test, p_test = gen_data(n=n_test) trainset = torch.utils.data.TensorDataset(torch.from_numpy(X_train),torch.from_numpy(Y_train)) trainloader = torch.utils.data.DataLoader(trainset, batch_size=20, shuffle=True, num_workers=2) testset = torch.utils.data.TensorDataset(torch.from_numpy(X_test),torch.from_numpy(Y_test)) testloader = torch.utils.data.DataLoader(testset, batch_size=1, shuffle=False, num_workers=2) print("Defining model") model = make_model(X_train, n_hidden) optimizer = optim.Adam(model.parameters(), lr=0.001) criterion = nn.MSELoss() n_epochs= 10000 predict_every = 20 for epoch in range(n_epochs): print("(Outer) Epoch ",epoch," of ",n_epochs,":") train(model, epoch, trainloader, criterion, optimizer) if (0 == epoch % predict_every): predict(model,testloader, X_test, Y_test, epoch, n_epochs, p_test) if __name__ == '__main__': main()
st119183
A guess would be that you’re using from_numpy on a float64 array, and this gives you a DoubleTensor. PyTorch uses FloatTensors as the default tensor type. Once you send the double input and float weights to the GPU and try to multiply them, it will raise an error. I’d suggest you convert all numpy arrays to the float32 type.
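For example (made-up array; either conversion below gives you a FloatTensor):

import numpy as np
import torch

X_train = np.random.rand(10000, 3)                     # float64 by default
data = torch.from_numpy(X_train.astype(np.float32))    # convert before wrapping
# or, equivalently, convert the tensor afterwards:
data = torch.from_numpy(X_train).float()
print(type(data))                                      # torch.FloatTensor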
st119184
Thank you. So that extra float array was the bias! So, doing that conversion of everything to float32, then for some reason the line for batch_idx, (data, target) in enumerate(trainloader): is producing a target which is still DoubleTensor, although data is a FloatTensor. This is a problem when it comes to compute the loss. Is there a way to recast DoubleTensor as a FloatTensor? (When I try target = torch.FloatTensor(target) I get an error.) These are loaded by converting both data and target to FloatTensor, and I even added a numpy .astype(float) call to make sure…but somehow the dataloader is generating a double. Why might that be? The loader is initialized via… trainset = torch.utils.data.TensorDataset(torch.FloatTensor(X_train),torch.FloatTensor(Y_train.astype(float))) trainloader = torch.utils.data.DataLoader(trainset, batch_size=20, shuffle=True, num_workers=2)
st119185
Wait, just saw this post: DataLoader gives double instead of float?, did what it said. I’m now up & running! Thanks @apaszke! Now I can get on to trying to actually solve the intended problem itself. So far this simple network does not learn well!
st119186
My problem relates to a masked forward pass where I ignore zeros in the input. I believe what’s happening is related to the optimizer (torch.optim.Adam below) not recording any parameter updates in the case of masking. If I have a simple layer that has an option to ignore zeros in the input: import torch import torch.nn as nn import torch.optim as onn import torch.autograd as ann import torch.nn.functional as fnn torch.manual_seed(123) class SimpleLayer(nn.Module): def __init__(self, size_in, size_out, ignore_zero=True): super(SimpleLayer, self).__init__() self.weight = nn.Parameter( torch.randn(size_in, size_out) * 1e-5, requires_grad=True ) self.ignore_zero = ignore_zero def forward(self, input_var): if self.ignore_zero: nz_inds = input_var.data.nonzero()[:, 1] return input_var[:, nz_inds].mm(self.weight[nz_inds]) else: return input_var.mm(self.weight) And I create a training stack and some sparse data: layer = SimpleLayer(10, 5, ignore_zero=True) loss_func = fnn.smooth_l1_loss optimizer = onn.Adam(layer.parameters()) sparse_input = torch.zeros(1, 10) sparse_input[0][2] = 0.2 sparse_input[0][5] = 0.3 sparse_input = ann.Variable(sparse_input) The output is identical whether ‘ignore_zero’ is set or not: layer.ignore_zero = True print layer.forward((sparse_input)) layer.ignore_zero = False print layer.forward((sparse_input)) Outputs: Variable containing: 1.00000e-06 * -2.2359 5.9174 3.7352 -3.4771 1.3588 [torch.FloatTensor of size 1x5] Variable containing: 1.00000e-06 * -2.2359 5.9174 3.7352 -3.4771 1.3588 [torch.FloatTensor of size 1x5] On the other hand, results start to diverge after some training steps: layer.ignore_zero = False print 'ignore_zero False:' for i in range(5): outp = layer.forward((sparse_input)) loss = fnn.smooth_l1_loss(outp, ann.Variable(torch.randn(1, 5))) loss.backward() optimizer.step() print loss.data[0] Gives: ignore_zero False: 0.297815024853 0.872213542461 0.316926777363 0.0565339252353 0.746583342552 layer.ignore_zero = True print 'ignore_zero True:' for i in range(5): outp = layer.forward((sparse_input)) loss = fnn.smooth_l1_loss(outp, ann.Variable(torch.randn(1, 5))) loss.backward() optimizer.step() print loss.data[0] Gives: ignore_zero True: 0.297815024853 0.871760487556 0.316960245371 0.056279104203 0.747062385082 The parameters in optimizer.param_groups do not update at all in the layer.ignore_zero = True case. Is there some way to get the optimizer to agree with the masking step in the module’s forward pass? Thanks!
st119188
I found a solution and will leave it here for other users. First I updated PyTorch. Then I realized that Numpy-style indexing is not allowed in the latest release. Instead of Numpy indexing, I used .index_select. Like this the optimizer parameters update when .step() is called. On a side note, Numpy style indexing is a super-cool feature, hopefully on the horizon class SimpleLayer(nn.Module): def __init__(self, size_in, size_out, ignore_zeros=True): super(SimpleLayer, self).__init__() self.weight = nn.Parameter( torch.randn(size_in, size_out) * 1e-5, requires_grad=True ) self.ignore_zeros = ignore_zeros def forward(self, input_var): if self.ignore_zeros: nz_inds = ann.Variable(input_var.data.nonzero()[:, 1]) inp_nz = input_var.index_select(1, nz_inds) weight_nz = self.weight.index_select(0, nz_inds) out = inp_nz.mm(weight_nz) return out else: return input_var.mm(self.weight)
st119189
Yes, we’re definitely planning to add it. Good to hear your problem is fixed in the newer version
st119190
It seems that copy.deepcopy(module) is recommended for cloning a module without parameter sharing. Then, what would be the best approach for cloning with parameter sharing? I mean, weights and grads will be automatically shared if we just forward the module with different inputs and dealing with the outputs. However, if there are other non-module variables that I want to share, I am not sure what would be the most elegant way. Thanks!
st119191
I’d say it really depends on the use case. I can’t think of any general recipe right now.
st119192
So far I’ve been adding layers to my neural nets by using nn.Linear, but suppose I wanted to do a forward pass on my layer by doing self.nlin(self.Psi(x)), where nlin would be a nonlinearity, say nn.Softshrink, parametrized by some lambda, and I wanted to make my lambda a training parameter. How would I do this? Do I have to write my own version of softshrink? If so, what will I need to do in order to play nice with the other nn modules and get autodifferentiation to work properly?
st119193
It all depends on how you want to change it; there’s no single good recipe. You should be able to implement it as a Python function that operates on Variables, and the gradient will be computed automatically. Another approach would be to implement your own autograd Function. You can find some notes on that in the docs.
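For example, a soft-shrink with a learnable threshold can be written with plain Variable ops inside a Module, so autograd differentiates through lambda as well. This is only a sketch (not the built-in nn.Softshrink), with a per-feature threshold so the shapes line up without broadcasting:

import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableSoftshrink(nn.Module):
    def __init__(self, num_features, init_lambda=0.5):
        super(LearnableSoftshrink, self).__init__()
        # one trainable threshold per feature; picked up by model.parameters()
        self.lambd = nn.Parameter(torch.ones(num_features) * init_lambda)

    def forward(self, x):
        # x is assumed to be (batch, num_features)
        lam = self.lambd.unsqueeze(0).expand_as(x)
        # equivalent to sign(x) * max(|x| - lambda, 0), written with ReLUs
        return F.relu(x - lam) - F.relu(-x - lam)

You could then use it as self.nlin = LearnableSoftshrink(hidden_dim) and call self.nlin(self.Psi(x)) in forward.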
st119194
I got the DataParallel() example for multi-GPU working successfully for my model. However, I happen to have a 64-core Xeon Phi CPU, and I can’t stand looking at it sitting idle. How can I saturate the CPU by assigning it some work? In other words, can I split the workload on GPU0, GPU1, and CPU0-255 and sum the gradients on CPU? Thank you!
st119195
I don’t know very much about Xeon Phi, but I believe that Intel has an MKL version that will parallelize BLAS calls over the whole chip, making it work like a single GPU device. I would do some experiments to compare speed for each part of your network, then use model parallelism to put submodules on the device they work best on. Or you could subclass/modify the code for DataParallel to allow the CPU (Phi) to be one of the included devices.
st119196
Thanks James. I had a subclassing script for Keras based on Kuza55’s script: it replicated models to /gpu0, /gpu1, and /cpu0-255. I will look into subclassing data_parallel.py to use both CPU and GPU. What are the device IDs for CPUs inside PyTorch? Keras kuza55 script: https://github.com/kuza55/keras-extras/blob/master/utils/multi_gpu.py data_parallel.py script: https://github.com/pytorch/pytorch/blob/master/torch/nn/parallel/data_parallel.py
st119197
CPUs don’t have device IDs; there’s just one kind of CPU tensor and operations with them are implemented in TH and farmed out to MKL, which would then have its own strategy for parallelizing over the Phi.
st119198
Trying to install x86_64 version of Python 2.7 pytorch as advertised: pip install https://download.pytorch.org/whl/cu80/torch-0.1.10.post2-cp27-none-linux_x86_64.whl 35 Output of uname -a : Linux 1a2eb1ca1481 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux Output python --version: Python 2.7.12 For Python 3, similar procedure works without problem. Getting wheel not supported error - renaming wheel from none to cp27m does not work either (shouldn’t anyway). What’s wrong?
st119199
OK, simple mistake - I forgot that pip will now be pip3 by default. Installing with pip2.7 works. You may want to update the install instructions to use pip3 / pip2.7 explicitly!
st119200
I’ve been trying to figure out how to resolve this error for a couple of hours to no luck. I’ve combed through the documentation, but couldn’t find anything. Can someone explain to me why the AssertionError is being thrown? Here’s the code: import torch import torch.nn as nn import torchvision.transforms as transforms from torch.autograd import Variable num_epochs = 15 batch_size = 500 learning_rate = 0.003 class CNN(nn.Module): def __init__(self): super(CNN, self).__init__() self.layer1 = nn.Sequential( nn.Conv2d(1, 16, kernel_size=5), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2)) def forward(self, x): out = self.layer1(x) return out cnn = CNN() criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(cnn.parameters(), lr=learning_rate) img = torch.Tensor(img) #img & labl are numpy ndarrays labl = torch.Tensor(labl) image = Variable(img) label = Variable(labl) optimizer.zero_grad() output = cnn(image) Here’s the error message: Traceback (most recent call last): File "conv_net.py", line 84, in <module> output = cnn(image) File "/home/randy/.virtualenv/ml-pytorch/local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 202, in __call__ result = self.forward(*input, **kwargs) File "conv_net.py", line 50, in forward out = self.layer1(x) File "/home/randy/.virtualenv/ml-pytorch/local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 202, in __call__ result = self.forward(*input, **kwargs) File "/home/randy/.virtualenv/ml-pytorch/local/lib/python2.7/site-packages/torch/nn/modules/container.py", line 64, in forward input = module(input) File "/home/randy/.virtualenv/ml-pytorch/local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 202, in __call__ result = self.forward(*input, **kwargs) File "/home/randy/.virtualenv/ml-pytorch/local/lib/python2.7/site-packages/torch/nn/modules/conv.py", line 237, in forward self.padding, self.dilation, self.groups) File "/home/randy/.virtualenv/ml-pytorch/local/lib/python2.7/site-packages/torch/nn/functional.py", line 38, in conv2d return f(input, weight, bias) if bias is not None else f(input, weight) File "/home/randy/.virtualenv/ml-pytorch/local/lib/python2.7/site-packages/torch/nn/_functions/conv.py", line 35, in forward output = self._update_output(input, weight, bias) File "/home/randy/.virtualenv/ml-pytorch/local/lib/python2.7/site-packages/torch/nn/_functions/conv.py", line 99, in _update_output output = self._thnn('update_output', input, weight, bias) File "/home/randy/.virtualenv/ml-pytorch/local/lib/python2.7/site-packages/torch/nn/_functions/conv.py", line 159, in _thnn impl = _thnn_convs[self.thnn_class_name(input)] File "/home/randy/.virtualenv/ml-pytorch/local/lib/python2.7/site-packages/torch/nn/_functions/conv.py", line 140, in thnn_class_name assert input.dim() == 4 or input.dim() == 5 AssertionError
st119201
nn.Conv2d expects a dimension for the batch size in the input. So you have to feed your network a tensor of dimensions (batch_size, nb_channels, height, width). You can simply add a fake batch dimension with unsqueeze(): img = torch.Tensor(img).unsqueeze(0)
st119202
[Figure: model.png, a diagram of the intended architecture] I want to build a model like that, but I don't know how to use PyTorch's basic modules to do it. I plan to train the horizontal parameters (CNN) first, then train the vertical parameters (RNN). But I don't know how to combine them together and train this.
st119203
This model looks like a Stacked Convolutional Recurrent Neural Network. I couldn’t find an implementation of this in PyTorch but it should be easily done. If you see cell 8 in this iPython Notebook 41 you have: import torch.nn as nn from torch.autograd import Variable class RNN(nn.Module): def __init__(self, input_size, hidden_size, output_size): super(RNN, self).__init__() self.hidden_size = hidden_size self.i2h = nn.Linear(input_size + hidden_size, hidden_size) self.i2o = nn.Linear(input_size + hidden_size, output_size) self.softmax = nn.LogSoftmax() def forward(self, input, hidden): combined = torch.cat((input, hidden), 1) hidden = self.i2h(combined) output = self.i2o(combined) output = self.softmax(output) return output, hidden def initHidden(self): return Variable(torch.zeros(1, self.hidden_size)) n_hidden = 128 rnn = RNN(n_letters, n_hidden, n_categories) Here the i2h and i2o are Linear layers. You’d have to change them to be Convolutional Layers to get a ConvRNN (or if you apply the LSTM update equation a ConvLSTM)
st119204
Hi, all. To make experiments more convenient, I’ve tried to write a Dataset that can be fed to a dataloader. However, my dataset reads the data correctly while the dataloader distorts the images as follows. The images in my dataset are 9000×16×16 (gray images). I want to find out how the dataloader transforms the dataset into 9000×1×16×16, but I can’t find the code. [Image: a sample digit as read from my dataset] [Image: the same sample as returned by the dataloader, distorted] Here is my code: # training set transform = transforms.Compose([transforms.Scale(28), transforms.ToTensor(), ]) trainset = USPS(root='./data', train=True, download=False, transform=transform) # split data trainloader = torch.utils.data.DataLoader(trainset, batch_size=100, shuffle=False, num_workers=1) # kernel code in USPS def load(self): # process and save as torch files print('Processing...') data = sio.loadmat(os.path.join(self.root,'usps_train.mat')) traindata = torch.from_numpy(data['data'].transpose()) traindata = traindata.view(16,16,-1).permute(2,1,0) trainlabel = torch.from_numpy(data['labels']) data = sio.loadmat(os.path.join(self.root,'usps_test.mat')) testdata = torch.from_numpy(data['data'].transpose()) testdata = testdata.view(16,16,-1).permute(2,1,0) testlabel = torch.from_numpy(data['labels']) training_set = (traindata, trainlabel) test_set = (testdata, testlabel) with open(os.path.join(self.root, self.training_file), 'wb') as f: torch.save(training_set, f) with open(os.path.join(self.root, self.test_file), 'wb') as f: torch.save(test_set, f) print('Done!')
st119205
I have an attention decoder whose forward function is as follows. def forward(self, input, hidden, encoder_outputs): embedded = self.embedding(input).view(1, 1, -1) embedded = self.drop(embedded) attn_weights = F.softmax(self.attn(torch.cat((embedded[0], hidden[0]), 1))) When ever the forward function gets called, I get the following error. RuntimeError: inconsistent tensor sizes at /data/users/soumith/miniconda2/conda-bld/pytorch-0.1.9_1487344852722/work/torch/lib/THC/generic/THCTensorMath.cu:141 How to resolve this problem? The problem is associated with torch.cat(). Please help.
st119206
As reflected by the thread “How can I print the shape of a tensor inside the forward function?”, you can add prints to figure out the problematic tensor shape for torch.cat.
st119207
Thanks for your reply. I printed the shapes and found the following. embedded = self.embedding(input).view(1, 1, -1) embedded = self.drop(embedded) print(embedded[0].size(), hidden[0].size()) I am getting torch.Size([1, 300]) and torch.Size([1, 1, 300]). Why am I getting a [1, 300] shape for the embedded tensor even though I have used the view method as view(1, 1, -1)?
st119208
According to the docs 4 Embedding layer returns a Tensor of the shape (N,W, embedding_dim) where N is the mini-batch size and W is number of indices to extract per mini-batch. After performing the view operation on that, you would get a tensor of the shape (1,1, N x W x embedding_dim). It is important to note that this is a 3 dimensional tensor. But since you are doing embedded[0].size(), you are essentially asking for the shape of the remaining two dimensions, which explains the result you are getting via the print statements. Hope this helps!
st119209
Then can you tell me why hidden[0].size() works fine? hidden is the output of torch.nn.LSTM, which is also a 3d tensor, but whenever I try to print hidden.size(), I get an error which says: ‘tuple’ object has no attribute ‘size’. Where am I making a mistake?
st119210
It is probably because you are using LSTMs. Pytorch’s implementation returns to you both h_n and c_n (hidden state and cell state for the last time step) in the hidden variable as a tuple. In comparison, GRU would just return to you h_n. As a result for LSTMs, hidden[0] is giving you h_n.
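A small illustration of the difference (made-up sizes):

import torch
import torch.nn as nn
from torch.autograd import Variable

inp = Variable(torch.randn(5, 3, 10))            # (seq_len, batch, input_size)
h0 = Variable(torch.zeros(1, 3, 20))
c0 = Variable(torch.zeros(1, 3, 20))

lstm = nn.LSTM(10, 20)
output, hidden = lstm(inp, (h0, c0))
h_n, c_n = hidden                                # LSTM: hidden is the tuple (h_n, c_n)
print(h_n.size(), c_n.size())                    # both (num_layers, batch, hidden_size)

gru = nn.GRU(10, 20)
output, h_n = gru(inp, h0)                       # GRU: hidden is just h_n
print(h_n.size())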
st119211
I’m running into a strange issue where loading weights is quite slow. Specifically for the DC-GAN example in the repo, loading the weights for a DC-GAN with 10 latent variables takes 150 seconds which doesn’t seem right given the size of the model. The code to create/load the model is below; is anything obviously wrong here? Thanks! Edit: the slow part is torch.load – load_state_dict is almost instantaneous. Edit 2: this was fixed by upgrading to Cuda 8.0 class _netG(nn.Module): def __init__(self, ngpu): super(_netG, self).__init__() self.ngpu = ngpu self.main = nn.Sequential( # input is Z, going into a convolution nn.ConvTranspose2d( nz, ngf * 8, 4, 1, 0, bias=False), nn.BatchNorm2d(ngf * 8), nn.ReLU(True), # state size. (ngf*8) x 4 x 4 nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf * 4), nn.ReLU(True), # state size. (ngf*4) x 8 x 8 nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf * 2), nn.ReLU(True), # state size. (ngf*2) x 16 x 16 nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf), nn.ReLU(True), # state size. (ngf) x 32 x 32 nn.ConvTranspose2d( ngf, nc, 4, 2, 1, bias=False), nn.Tanh() # state size. (nc) x 64 x 64 ) def forward(self, input): # [omitted for previty] netG = _netG(ngpu) netG.apply(weights_init) with rtk.timing.Logger('load weights'): if opt.netG != '': netG.load_state_dict(torch.load(opt.netG)) print(netG)
st119212
Hi! When I call, for example, print(net.conv1.bias.grad), I get a vector of bias gradients, but we use batches. Is my understanding correct that we sum up the gradients over the batch and then divide the resulting vector by batch_size, and that this is what print(net.conv1.bias.grad) shows? Thanks!
st119213
yes, your understanding is correct. Whether you divide the resulting vector by batch_size or not depends on the size_average=True option present in most loss functions. By default, you do divide by the batch size.
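For example, with one of the standard losses (a sketch):

import torch.nn as nn

# size_average controls whether the loss (and therefore the accumulated
# gradients) is averaged over the batch or summed
criterion_mean = nn.MSELoss(size_average=True)    # default: divide by the batch size
criterion_sum = nn.MSELoss(size_average=False)    # sum over the batch instead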
st119214
In the pytorch examples repository, the word language model is being fed batches of size bptt x batch_size, however in the training loop the code iterates over the dataset with a step of length bptt. In my understanding this means that the dataset is being spliced as follows: Given the sequence of characters: “a” “b” “c” “d” … “z” and bptt equal to 3 and ignoring batching for simplicity: first sequence: src=“a”,“b”,“c”; trg=“d” second sequence: src=“d”, “e”, “f”; trg=“g” Perhaps, I am wrong but doesn’t it mean that an amount of data proportional to the value of bptt isn’t being used during training (in the example above sequences src=“b” “c” “d”, trg=“e” and src=“c” “d” “e”, trg=“f” aren’t in the training set)?
st119215
b,c,d target=“e” is covered by carrying the hidden state forward between sequences.
st119216
Thanks for your reply. I am aware that those characters would also be taken into account for the prediction; however, my concern is more related to the error signal during training and the evaluation during testing. In the current approach there is only one backpropagation every bptt characters and, similarly for testing, perplexity is only estimated on a fraction of the characters in the test set.
st119217
In other words, the parameter bptt seems to be tweaking two things: (1) how many steps back to include in the RNN computational graph, and (2) how many examples to estimate perplexity on. But perhaps I am understanding things in the wrong way…
st119218
x is a Variable which has 3 dims: x.size()[0] is the sequence length and x.size()[1] is the batch size. I did something like the following (for an LSTM’s input): temp = [] for i in xrange(len(x)): temp.append(nn.Linear(512, 256)(x[i])) As you can see, I end up with many Variables stored in temp. But what I want is to combine these new Variables into one Variable. How can I do it? Many thanks.
st119219
I tried to train alexnet in transfer learning with ‘–pretrained’ option with increasing input size 224x224–>512x512. with chainging variables in ‘alexnet.py 33’ like this. nn.Linear(256 * 15 * 15, 4096) in __init__(...) and x = x.view(x.size(0), 256 * 15 * 15) in forward() The non-pretraining case works, but pretraining one doesn’t. Is it a limitation of current version? (may be from new graph structure…?) Error Message in pretraining case: Traceback (most recent call last): File "main.py", line 314, in <module> main() File "main.py", line 68, in main model = models.__dict__[args.arch](pretrained=True) File "/home/dylee/.conda/envs/pytorch/lib/python2.7/site-packages/torchvision/models/alexnet.py", line 57, in alexnet model.load_state_dict(model_zoo.load_url(model_urls['alexnet'])) File "/home/dylee/.conda/envs/pytorch/lib/python2.7/site-packages/torch/nn/modules/module.py", line 315, in load_state_dict own_state[name].copy_(param) RuntimeError: inconsistent tensor size at /data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.10_1488756735684/work/torch/lib/TH/generic/THTensorCopy.c:51
st119220
Thanks, your decisive answer helped. I fixed this problem like this: in def load_state_dict(self, state_dict) (in ...{pytorch}/lib/python2.7/site-packages/torch/nn/modules/module.py) FROM own_state[name].copy_(param) TO if name != 'classifier.1.weight' and name != 'classifier.1.bias': own_state[name].copy_(param) i.e. by skipping the parameter copy. Though it works functionally, I also included classifier.4 and classifier.6 to clear the classifiers. I hope it’s helpful for someone.
st119221
What’s object? You didn’t show that in your snippet. You don’t need to implement torch.load. Just call it with the file name. Also, note that we recommend using state_dict() and load_state_dict() for serialization.
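A minimal sketch of the recommended approach (the path and the model constructor are placeholders, assuming net is your model instance):

import torch

# saving: serialize only the parameters, not the whole module object
torch.save(net.state_dict(), "/home/ubuntu/pics/test3.pth")

# loading: rebuild the same architecture first, then restore the weights
net = make_model()    # hypothetical: however you construct your resnet50-based model
net.load_state_dict(torch.load("/home/ubuntu/pics/test3.pth"))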
st119222
I edited the code above to include all of the model, including training code. It looks like I am using torch.save incorrectly. What is object supposed to refer to? I am still unsure of what would be the correct way to use torch.save to save it and to load it, while keeping the resnet50 architecture? Would this be correct? torch.save(net.new, "/home/ubuntu/pics/test3.pth") Hmm, ok I will look into state_dict() It looks like the documentation on state_dict() is a bit sparse. Do you have any pointers on how to edit the code above to save and load using state_dict and load_state_dict()? Thanks again
st119223
You still haven’t included the definition of object in the snippet. I have no idea what it is
st119224
I have been using pytorch for several days and everything was fine. But now when I import torch, it gives the following error: RuntimeError: module compiled against API version 0xa but this version of numpy is 0x6 Traceback (most recent call last): File “”, line 1, in File “/usr/lib64/python2.7/site-packages/torch/init.py”, line 53, in from torch._C import * ImportError: numpy.core.multiarray failed to import However, if I directly import numpy, there is no error. I have tried uninstall pytorch and install it again, but the error still exists.
st119225
This part is the problem: RuntimeError: module compiled against API version 0xa but this version of numpy is 0x6 Your numpy is too old. You need to update it.
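For example, something along these lines should pull in a newer numpy, depending on how it was installed (generic commands, not specific to your setup): pip install --upgrade numpy or, with Anaconda: conda update numpy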
st119226
Hi, I have reproduced my issue below. I have defined my net as a class in sample.py 2 import torch.nn as nn class Classifier_Module(nn.Module): def __init__(self,dilation_series,padding_series): super(Classifier_Module, self).__init__() self.conv2d_list = [] for dilation,padding in zip(dilation_series,padding_series): self.conv2d_list.append(nn.Conv2d(2048,5,kernel_size=3,stride=1, padding =padding, dilation = dilation,bias = True)) def forward(self, x): out = self.conv2d_list[0](x) for i in range(len(self.conv2d_list)-1): out = self.conv2d_list[i+1](x)+out return out class Module1(nn.Module): def __init__(self): super(Module1, self).__init__() self.layer = self._make_pred_layer(Classifier_Module, [6,12,18,24],[6,12,18,24]) def _make_pred_layer(self,block, dilation_series, padding_series): return nn.Sequential(block(dilation_series,padding_series)) def forward(self, x): x = self.layer(x) return x The state dictionary of the net does not contain any keys corresponding to the conv2d layers of the Classifier_Module. import sample model = getattr(sample,'Module1')() print model # 1 does not show conv2d list for keys in model.state_dict().keys(): print keys #2 does not show con2d list print model.layer._modules['0'].conv2d_list # this shows the conv2d list How can I fix this issue? Is there any other way to perform a similar function without writing code for each conv2d layer?
st119227
You need to use nn.ModuleList rather than a plain Python list of modules. This is a very common trap for new users to fall into, and I’m curious if there’s a particular place in the docs/tutorials/etc. where we should add an additional note about it.
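For the Classifier_Module above, that would look something like this (a sketch of just the changed line):

self.conv2d_list = nn.ModuleList(
    [nn.Conv2d(2048, 5, kernel_size=3, stride=1, padding=padding, dilation=dilation, bias=True)
     for dilation, padding in zip(dilation_series, padding_series)])
# registered as submodules, so they appear in state_dict() and move with .cuda()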
st119228
Thank you for your quick reply. I think nn.ModuleList could be mentioned in the 60-minute Blitz tutorial. I also wanted to know one more thing: is it okay to use python lists if I want my net to return more than one output? I will calculate a loss for each of these outputs and sum them up. Specifically, would autograd work properly when I use python lists to return more than one output?
st119229
I’m trying to understand the philosophy of pytorch, and want to make sure what’s the right way to implement running mean logic like in batch normalization with pytorch. I’ve read the source code of _BatchNorm class, unfortunately the python code stops at F.batch_norm(), from there code goes into binary. Is this the right way to implement running mean in a module simply as: self.register_buffer('running_mean', torch.Tensor(feature_dim)) then in the forward function: self.running_mean += alpha * batch_mean I’m cautious about this because I’m an old Theano user, where the update of a variable must be explicitly defined and passed to theano.function() interface, and directly += is not allowed there.
st119230
Yes. you are on the right track. You register a buffer with self.register_buffer for your running mean, and then you will take the mean of your input batch via: batch_mean = input.data.mean(...). Please note the part input.data, which will give you direct access to the Tensor backing the input Variable.
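Putting those pieces together, a minimal sketch of the pattern might look like this (illustrative only, not the actual BatchNorm code; the exponential-moving-average form of the update is one common choice):

import torch
import torch.nn as nn

class RunningMean(nn.Module):
    def __init__(self, feature_dim, alpha=0.1):
        super(RunningMean, self).__init__()
        self.alpha = alpha
        # buffers are saved in state_dict() and moved by .cuda(), but are not parameters
        self.register_buffer('running_mean', torch.zeros(feature_dim))

    def forward(self, input):
        # input.data is the Tensor behind the Variable, so this in-place update
        # is not tracked by autograd
        batch_mean = input.data.mean(0).view(-1)
        self.running_mean.mul_(1 - self.alpha).add_(self.alpha * batch_mean)
        return input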
st119231
Hi, I saw the PackedSequence class, which presumably can speed up the RNN computation by ignoring the padding. My question is: if I want to use my own cell, how can I build an RNN module which can accept a PackedSequence? I’ve looked at the code, but it seems nontrivial. I feel like it should be easily done by a wrapper. Is there any example use case or any suggestions?
st119232
This code takes in a PackedSequence as input and computes the forward correctly. Hopefully this makes it simpler to understand and write your own code that takes in a PackedSequence: github.com pytorch/pytorch/blob/a462edd0f6696a4cac4dd04c60d1ad3c9bc0b99c/torch/nn/_functions/rnn.py#L118-L154 124 def VariableRecurrent(batch_sizes, inner): def forward(input, hidden, weight): output = [] input_offset = 0 last_batch_size = batch_sizes[0] hiddens = [] flat_hidden = not isinstance(hidden, tuple) if flat_hidden: hidden = (hidden,) for batch_size in batch_sizes: step_input = input[input_offset:input_offset + batch_size] input_offset += batch_size dec = last_batch_size - batch_size if dec > 0: hiddens.append(tuple(h[-dec:] for h in hidden)) hidden = tuple(h[:-dec] for h in hidden) last_batch_size = batch_size if flat_hidden: This file has been truncated. show original
st119233
I want to make a model based on ResNet to do semantic segmentation, and I upsampled the last several layers with bilinear interpolation which is similar to the methods in FCN. However, I don’t know how to process the ignored_label in the nn.NLLLoss2d layer. Does anyone know how to do that ?
st119234
nn.NLLLoss2d takes in a weights parameter in its constructor, which is the weight given to each class. For the labels to ignore you can give a weight of 0, and for the ones to count you can give a weight of 1. For example: import torch import torch.nn as nn nClasses = 10 ignore_classes = torch.LongTensor([4, 7]) # ignore class 5 and 8 weights = torch.ones(nClasses) weights[ignore_classes] = 0.0 loss = nn.NLLLoss2d(weights)
st119235
Since I am doing machine translation, so I want to train my network with LSTM module. But in testing phase, I want to copy the variable in LSTM to LSTMCell and roll-out the sequence by feeding the last lstm output to the next input.
st119236
For this, you don’t need to convert the LSTM to an LSTMCell, from my understanding; you just need to run the LSTM with a sequence length of 1 and use it repeatedly.
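A rough sketch of that roll-out (sizes are made up, and input_size equals hidden_size here so the output can be fed straight back in; a real decoder would project/argmax/embed between steps):

import torch
import torch.nn as nn
from torch.autograd import Variable

lstm = nn.LSTM(input_size=8, hidden_size=8, num_layers=1)

inp = Variable(torch.randn(1, 1, 8))      # (seq_len=1, batch=1, input_size)
h = Variable(torch.zeros(1, 1, 8))
c = Variable(torch.zeros(1, 1, 8))

outputs = []
for t in range(10):
    out, (h, c) = lstm(inp, (h, c))       # one timestep at a time, same weights as training
    outputs.append(out)
    inp = out                             # feed the last output back in as the next input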
st119237
In the DQN tutorial (https://github.com/pytorch/tutorials/blob/master/Reinforcement%20(Q-)Learning%20with%20PyTorch.ipynb 29), at a point, it is suggested to crop the image around the object of interest, while we expect the algorithm to be able to extract this feature. If I try to remove this “cheating” trick (commented in the following code)… resize = T.Compose([T.ToPILImage(), T.Scale(40, interpolation=Image.CUBIC), T.ToTensor()]) # This is based on the code from gym. screen_width = 600 def get_cart_location(): world_width = env.x_threshold * 2 scale = screen_width / world_width return int(env.state[0] * scale + screen_width / 2.0) # MIDDLE OF CART def get_screen(): screen = env.render(mode='rgb_array').transpose((2, 0, 1)) # transpose into torch order (CHW) # Strip off the top and bottom of the screen ''' # this is the trick : screen = screen[:, 160:320] view_width = 320 cart_location = get_cart_location() if cart_location < view_width // 2: slice_range = slice(view_width) elif cart_location > (screen_width - view_width // 2): slice_range = slice(-view_width,None) else: slice_range = slice(cart_location - view_width // 2, cart_location + view_width // 2) # Strip off the edges, so that we have a square image centered on a cart screen = screen[:, :, slice_range] ''' # Convert to float, rescare, convert to torch tensor (this doesn't require a copy) screen = np.ascontiguousarray(screen, dtype=np.float32) / 255 screen = torch.from_numpy(screen) # Resize, and add a batch dimension (BCHW) return resize(screen).unsqueeze(0) env.reset() plt.imshow(get_screen().squeeze(0).permute(1, 2, 0).numpy(), interpolation='none') plt.show() … I can’t obtain any learning. I tried several different combinations of parameters, and I also try to change the structure of the network, but no way to reach any acceptable result. I there a way to make it work ?
st119238
That’s the primary reason why the input is cropped. That’s how it is with RL, it’s quite unstable.
st119239
Hi guys, I have a short question: where is the appropriate place to post feature requests? My last question is: how do you register for the Slack channel if you don’t have one of those pre-defined email addresses? Is it by invite?
st119240
Hello there, I wrote below customized Cost Function for my project: import torch from torch.autograd import Function import numpy as np def bb_intersection_over_union(boxA, boxB): # determine the (x, y)-coordinates of the intersection rectangle xA = max(boxA[0], boxB[0]) yA = max(boxA[1], boxB[1]) xB = min(boxA[2], boxB[2]) yB = min(boxA[3], boxB[3]) interArea = (xB - xA ) * (yB - yA) boxAArea = (boxA[2] - boxA[0] ) * (boxA[3] - boxA[1]) boxBArea = (boxB[2] - boxB[0] ) * (boxB[3] - boxB[1]) iou = interArea / float(boxAArea + boxBArea - interArea) return iou class Criterion(Function): def __init__(self, S, B, l_coord, l_nobj): super(Criterion, self).__init__() self.S = S # Number of Cell self.B = B # Number of Bouning Box self.l_coord = l_coord self.l_nobj = l_nobj print("hello") def forward(self, pred_out, real_out): self.save_for_backward(pred_out, real_out) po = torch.LongTensor([2]).float() sum = torch.sum pow = torch.pow sqr = torch.sqrt rt = real_out # Real_out pt = pred_out # Pred_out numObj = rt.size()[0] interval = np.linspace(0, 1, self.S + 1) cost = torch.FloatTensor([0]) for index in range(numObj): cls = rt[index,0] x = rt[index,1] y = rt[index,2] w = rt[index,3] h = rt[index,4] # Original Ground Truth box1 = (x-(w/2), y-(h/2), x+(w/2), h+(h/2)) # Select cell colS = self.indices(interval, lambda q: q > x)[0]-1 rowS = self.indices(interval, lambda q: q > y)[0]-1 # Select BBox IOU = np.ndarray(shape=(1,self.B)) for ind in range(self.B): px = pt[0, 0 + (5*ind),rowS, colS] py = pt[0, 1 + (5*ind),rowS, colS] pw = pt[0, 2 + (5*ind),rowS, colS] ph = pt[0, 3 + (5*ind),rowS, colS] box2 = (px - (pw/2), py - (ph/2), px + (pw/2), py +(ph/2)) IOU[0,ind] = bb_intersection_over_union(box1, box2) # Select Best BBoc sel = IOU.argmax() x_hat = pt[0, 0 + (5*sel),rowS, colS] y_hat = pt[0, 1 + (5*sel),rowS, colS] w_hat = pt[0, 2 + (5*sel),rowS, colS] h_hat = pt[0, 3 + (5*sel),rowS, colS] c_hat_obj = pt[0, 4 + (5*sel),rowS, colS] if sel == 0: c_hat_noobj = pt[0, 4 + (5),rowS, colS] else: c_hat_noobj = pt[0, 4 + (0),rowS, colS] p = torch.zeros(1,20).view(-1) p[int(cls)] = 1 p_hat = pt[0,10:,rowS, colS] cost1 = self.l_coord*(pow(x-x_hat, po)) + self.l_coord*(pow(y-y_hat, po)) cost2 = pow(1-c_hat_obj,po) + self.l_nobj*pow(0-c_hat_noobj,po) cost3 = self.l_coord*(pow(sqr(torch.FloatTensor([w]))-sqr(torch.FloatTensor([w_hat])),po)) + self.l_coord*(pow(sqr(torch.FloatTensor([h]))-sqr(torch.FloatTensor([h_hat])),po)) cost += (cost1 + cost2 + cost3) del cost1, cost2, cost3, p return cost def backward(self, grad_cost): pt, rt = self.saved_tensors #pred_out is FloatTensor not a variable grad_pred_out = torch.zeros(pt.size()) po = torch.FloatTensor([0.5]) sum = torch.sum pow = torch.pow numObj = rt.size()[0] interval = np.linspace(0, 1, self.S + 1) for index in range(numObj): cls = rt[index,0] x = rt[index,1] y = rt[index,2] w = rt[index,3] h = rt[index,4] # Original Ground Truth box1 = (x-(w/2), y-(h/2), x+(w/2), h+(h/2)) # Select cell colS = self.indices(interval, lambda q: q > x)[0]-1 rowS = self.indices(interval, lambda q: q > y)[0]-1 # Select BBox IOU = np.ndarray(shape=(1,self.B)) for ind in range(self.B): px = pt[0, 0 + (5*ind),rowS, colS] py = pt[0, 1 + (5*ind),rowS, colS] pw = pt[0, 2 + (5*ind),rowS, colS] ph = pt[0, 3 + (5*ind),rowS, colS] box2 = (px - (pw/2), py - (ph/2), px + (pw/2), py +(ph/2)) IOU[0,ind] = bb_intersection_over_union(box1, box2) # Select Best BBoc sel = IOU.argmax() #print(x,y,w,h, box1, IOU) x_hat = pt[0, 0 + (5*sel),rowS, colS] y_hat = pt[0, 1 + (5*sel),rowS, colS] w_hat = pt[0, 2 
+ (5*sel),rowS, colS] h_hat = pt[0, 3 + (5*sel),rowS, colS] c_hat_obj = pt[0, 4 + (5*sel),rowS, colS] if sel == 0: nonsel = 1 c_hat_noobj = pt[0, 4 + (5),rowS, colS] else: nonsel = 0 c_hat_noobj = pt[0, 4 + (0),rowS, colS] p = torch.zeros(1,20).view(-1) p[int(cls)] = 1 p_hat = pt[0,10:,rowS, colS] grad_pred_out[0,0 + (5*sel), rowS, colS] += -2*self.l_coord*(x - x_hat) grad_pred_out[0,1 + (5*sel), rowS, colS] += -2*self.l_coord*(y - y_hat) grad_pred_out[0,2 + (5*sel), rowS, colS] += ((-self.l_coord/pow(w_hat,po))*(pow(w,po) - pow(w_hat,po)))[0] grad_pred_out[0,3 + (5*sel), rowS, colS] += ((-self.l_coord/pow(h_hat,po))*(pow(h,po) - pow(h_hat,po)))[0] grad_pred_out[0,4 + (5*sel), rowS, colS] += -2*(1-c_hat_obj) grad_pred_out[0,4 + (5*nonsel), rowS, colS] += -2*self.l_nobj*c_hat_obj grad_pred_out[0,10:, rowS, colS] += -2*(p - p_hat) grad_real_out = None return grad_pred_out, grad_real_out def indices(self, a, func): return [i for (i, val) in enumerate(a) if func(val)] My cost works well on cpu based run, but when I changed my mode from cpu to gpu by cuda(), I receive this error: AttributeError: 'Criterion' object has no attribute 'cuda'. My codes to benefit from these written cost are: criterion = Criterion(S, B, landa_coord, landa_nobj) if cuda: criterion.cuda() net.cuda() Could you please help my where is the source of problem? Thanks!
st119241
torch.autograd.Function does not know anything about converting to CPU or GPU. If you want to make your code run fine on both CPU and GPU, you need to make it generic. A few tips: calling torch.FloatTensor will always put your tensor on the CPU; you probably want to allocate on whatever device the input tensor is on by doing something like pred_out.new(), which will generate a new tensor on the same device as pred_out. You are using np.ndarray in parts of your code; while it’s possible to do that, it will prevent those parts of your code from running on the GPU. Performing for loops for the computations you want will probably be slower on the GPU than on the CPU; you’d probably want to rewrite some inner loops using only batched mathematical operations on tensors so that they can be computed efficiently on the GPU, so that instead of accepting one bounding box, you accept a bunch of them at the same time.
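A tiny illustration of the first tip (the function name here is made up):

import torch

def new_accumulator(pred_out):
    # pred_out.new(...) allocates a tensor of the same type and on the same device
    # as pred_out, so the same code works for CPU and CUDA inputs
    return pred_out.new(1).zero_()

cost = new_accumulator(torch.randn(4, 5))           # torch.FloatTensor
# cost = new_accumulator(torch.randn(4, 5).cuda())  # torch.cuda.FloatTensor, if available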
st119242
@fmassa Thanks for your response! Could you help me more (I am so fresh in pytorch)? I have got one more question. Is it possible to run network on gpu and run criterion on cpu separately. I mean after forwarding input through network with gpu, I change the type of the tensor from CudaFloatTensor to FloatTensor, then, do the calculation via criterion and when backwarding from cost function I just return CudaFloatTensor?
st119243
Yes, it’s possible to compute the criterion on the CPU without problems. Just do output = output.cpu() loss = criterion(output, target) and that should work out of the box.
st119244
Hi, below is my definition of network: class CNN_Text(nn.Module): def __init__(self, args): super(CNN_Text,self).__init__() self.args = args V = args.embed_num D = args.embed_dim C = args.class_num Ci = 1 Co = args.kernel_num Ks = args.kernel_sizes self.embed = nn.Embedding(V, D) self.convs1 = [nn.Conv2d(Ci, Co, (K, D)) for K in Ks] self.dropout = nn.Dropout(args.dropout) self.fc1 = nn.Linear(len(Ks)*Co, C) However, when i called cuda() on the model, the modules in the list self.convs1 will not shift to the GPU, how to solve this problem, any ideas ?
st119245
jekbradbury: nn.ModuleList Thanks, but it gives me the error AttributeError: 'module' object has no attribute 'ModuleList' Is it a problem related to the version of pytorch?
st119246
Many Thanks, it solves my problem. And Updating my pytorch makes ModuleList available.
st119247
I uninstalled pytorch from anaconda and tried to compile the source code from github because I want to use the latest part, “nn.init”. But things went strangely: my python cannot find the torch package, which means that when I try to type in import torch, Python 3.6.0 | Anaconda 4.3.0 cannot find the torch module. I have no choice but to install it from Anaconda again, and I still cannot use the nn.init part. What should I do? P.S. Is there any other way to set up the weight initialization? Thanks.
st119248
If you install from source using python setup.py build install, it should be installed for whatever Python instance would run if you type python at the same command line. If that’s not the same as your Anaconda install, make sure to call setup.py using the Anaconda instance of Python.
st119249
Thank you. I made two mistakes: I uninstalled pytorch in Anaconda before I recompiled the new one, and I forgot to update my Anaconda. Hope this is useful to others.
st119250
Unlike the torch imagenet example, where we load both the pretrained model and the optimState, in the pytorch example we only load the model and not the optimState. Is that intentional?
st119251
Hello everyone. Recently, I implemented a simple recursive neural network. When training this model on sample/small data set, everything works fine. However, when training it on large data and on GPUs, “out of memory” is raised. Along with the training goes on, usage of GPU memory keeps growing up. So, I want to know, why does this happen? I would be grateful if you could help. The model and training procedure are defined as follow: def train_step(self, data): train_loss = 0 for _data in data: p_tree = _data['p_tree'] h_tree = _data['h_tree'] if args.cuda: target = Variable(torch.LongTensor([_data['label']]).cuda()) else: target = Variable(torch.LongTensor([_data['label']])) self.optimizer.zero_grad() # self.model is an instance of class RootAlign output = self.model(p_tree, h_tree) loss = F.nll_loss(output, target) loss.backward() self.optimizer.step() train_loss += loss.data[0] return train_loss class RootAlign(nn.Module): def __init__(self, word_embedding, config): super(RootAlign, self).__init__() self.rnn = VanillaRecursiveNN(word_embedding, config['hidden_dim'], config['cuda_flag']) self.linear = nn.Linear(config['hidden_dim'] * 2, config['relation_num']) def forward(self, p_tree, h_tree): p_tree.postorder_traverse(self.rnn) h_tree.postorder_traverse(self.rnn) out = F.log_softmax(self.linear(F.sigmoid(torch.cat((p_tree.calculate_result, h_tree.calculate_result), 1)))) return out class VanillaRecursiveNN(nn.Module): def __init__(self, word_embedding, hidden_dim, cuda_flag=False): super(VanillaRecursiveNN, self).__init__() self.word_dim = word_embedding.embeddings.size(1) self.hidden_dim = hidden_dim self.embedding = nn.Embedding(word_embedding.embeddings.size(0), self.word_dim) self.embedding.weight = nn.Parameter(word_embedding.embeddings) self.word2hidden = nn.Linear(self.word_dim, self.hidden_dim, False) self.hidden2hidden = nn.Linear(2 * self.hidden_dim, self.hidden_dim) self.cuda_flag = cuda_flag def forward(self, node): if not node.val is None: if self.cuda_flag: node.calculate_result = self.word2hidden( self.embedding(Variable(torch.LongTensor([node.word_id]).cuda()))) else: node.calculate_result = self.word2hidden( self.embedding(Variable(torch.LongTensor([node.word_id])))) return node.calculate_result else: assert len(node.children) == 2 node.calculate_result = self.hidden2hidden(torch.cat((node.children[0].calculate_result, node.children[1].calculate_result), 1)) return node.calculate_result
st119252
Do you need to save node.calculate_result? Are you using these values later on? If not I’d discourage saving them. If you need them, but don’t want to backprop through them (it seems to be the case), you should save only the tensor, not the Variable that wraps it. This will allow the graph that holds the buffers necessary for backward to be freed and release held memory. Just replace node.calculate_result = ... with result = ...; node.calculate_result = result.data.
st119253
I observed similar GPU memory behavior when I tested two different implementations of the same function, shown below, where A is a cuda tensor:

def function1(A):
    B = A**2 - 2*A
    C = torch.sqrt(B)
    return C

def function2(A):
    return torch.sqrt(A**2 - 2*A)

Both functions compute the same thing. However, function1 seems to assign GPU memory to the local variables B and C, while function2 seems to only assign the GPU memory needed to calculate torch.sqrt(A**2 - 2*A), which is presumably the same size as A. Thus, in terms of memory usage, it seems that function2 is twice as efficient as function1. This doesn’t apply to all cases, but in many of my programs removing intermediate variables reduces GPU memory usage a lot. This suggests that the underlying CUDA memory is not freed immediately once it is no longer needed. I think some GPU memory garbage collection method in pytorch is needed for efficient GPU memory management.
st119254
Both functions will consume the same amount of memory. The execution will look like this (in parentheses you have current/peak memory usage in multiples of A's size):

Assume A is allocated (1/1)
Compute A**2 (2/2)
Compute 2*A (3/3)
Compute A**2 - 2*A (4/4)
Free A**2 and 2*A (2/4)
Compute torch.sqrt(B) (3/4)
Return and free everything except the input and result (2/4)

Nevertheless, it is good advice to try to minimize the number of local variables - the sooner things go out of scope, the sooner the memory will be available for reuse (btw you can use del to free locals you don’t need). There’s no way for the framework to know when a tensor won’t be needed anymore, we don’t have that knowledge upfront, and this is why it’s impossible to implement any garbage collection. The memory management is already very efficient and all tensors are freed as soon as you let them go.
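As a small illustration of the advice above (my own sketch, not code from the thread), deleting an intermediate as soon as it is no longer needed lets its memory be reused right away:

import torch

A = torch.randn(1000, 1000).cuda()

B = A**2 - 2*A      # intermediate, only needed to compute C
C = torch.sqrt(B)
del B               # drop the reference so that memory can be reused for later allocations

print(C.size())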
st119255
Thanks for the help. I did the replacement as you stated above in the following ways.

# way No.1
if not node.val is None:
    if self.cuda_flag:
        variable = Variable(torch.LongTensor([node.word_id]).cuda())
    else:
        variable = Variable(torch.LongTensor([node.word_id]))
    result = self.word2hidden(self.embedding(variable))
    node.calculate_result = result.data
    return node.calculate_result

# way No.2
if not node.val is None:
    if self.cuda_flag:
        node.calculate_result = self.word2hidden(self.embedding(
            Variable(torch.LongTensor([node.word_id]).cuda()))).data
    else:
        node.calculate_result = self.word2hidden(self.embedding(
            Variable(torch.LongTensor([node.word_id])))).data
    return node.calculate_result

# way No.3
if not node.val is None:
    if self.cuda_flag:
        result = self.word2hidden(
            self.embedding(Variable(torch.LongTensor([node.word_id]).cuda())))
    else:
        result = self.word2hidden(
            self.embedding(Variable(torch.LongTensor([node.word_id]))))
    node.calculate_result = result.data
    return node.calculate_result

However, they all raised the same TypeError; the traceback information is:

Traceback (most recent call last):
  File "/home/shawnguo/PythonWS/KnowledgeEnhancedTE/trainer.py", line 172, in <module>
    t.train()
  File "/home/shawnguo/PythonWS/KnowledgeEnhancedTE/trainer.py", line 111, in train
    train_loss = self.train_step(self.data.train)
  File "/home/shawnguo/PythonWS/KnowledgeEnhancedTE/trainer.py", line 143, in train_step
    output = self.model(p_tree, h_tree)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 210, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/shawnguo/PythonWS/KnowledgeEnhancedTE/align_model.py", line 18, in forward
    p_tree.postorder_traverse(self.rnn)
  File "/home/shawnguo/PythonWS/KnowledgeEnhancedTE/tree.py", line 186, in postorder_traverse
    c.postorder_traverse(func)
  File "/home/shawnguo/PythonWS/KnowledgeEnhancedTE/tree.py", line 186, in postorder_traverse
    c.postorder_traverse(func)
  File "/home/shawnguo/PythonWS/KnowledgeEnhancedTE/tree.py", line 186, in postorder_traverse
    c.postorder_traverse(func)
  File "/home/shawnguo/PythonWS/KnowledgeEnhancedTE/tree.py", line 187, in postorder_traverse
    func(self)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 219, in __call__
    var = var[0]
TypeError: 'float' object has no attribute '__getitem__'

It seems that torch.LongTensor([node.word_id]) should be replaced with torch.LongTensor([[node.word_id]]). However, after I fix that, a new RuntimeError is raised.
The traceback is:

Traceback (most recent call last):
  File "/home/shawnguo/PythonWS/KnowledgeEnhancedTE/trainer.py", line 172, in <module>
    t.train()
  File "/home/shawnguo/PythonWS/KnowledgeEnhancedTE/trainer.py", line 111, in train
    train_loss = self.train_step(self.data.train)
  File "/home/shawnguo/PythonWS/KnowledgeEnhancedTE/trainer.py", line 143, in train_step
    output = self.model(p_tree, h_tree)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 210, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/shawnguo/PythonWS/KnowledgeEnhancedTE/align_model.py", line 18, in forward
    p_tree.postorder_traverse(self.rnn)
  File "/home/shawnguo/PythonWS/KnowledgeEnhancedTE/tree.py", line 186, in postorder_traverse
    c.postorder_traverse(func)
  File "/home/shawnguo/PythonWS/KnowledgeEnhancedTE/tree.py", line 186, in postorder_traverse
    c.postorder_traverse(func)
  File "/home/shawnguo/PythonWS/KnowledgeEnhancedTE/tree.py", line 186, in postorder_traverse
    c.postorder_traverse(func)
  File "/home/shawnguo/PythonWS/KnowledgeEnhancedTE/tree.py", line 187, in postorder_traverse
    func(self)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 210, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/shawnguo/PythonWS/KnowledgeEnhancedTE/tree_models.py", line 26, in forward
    self.embedding(Variable(torch.LongTensor([[node.word_id]]).cuda())))
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 210, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/linear.py", line 52, in forward
    return self._backend.Linear()(input, self.weight)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/linear.py", line 10, in forward
    output.addmm_(0, 1, input, weight.t())
RuntimeError: matrix and matrix expected at /home/soumith/local/builder/wheel/pytorch-src/torch/lib/THC/generic/THCTensorMathBlas.cu:235

I don’t know why the forward function raises these errors or how to fix them. And, as I need to optimize self.word2hidden = nn.Linear(self.word_dim, self.hidden_dim, False) and self.hidden2hidden = nn.Linear(2 * self.hidden_dim, self.hidden_dim) in class VanillaRecursiveNN, it seems that node.calculate_result needs to be saved for backprop. So, your solution may not address the problem I mentioned above, the usage of GPU memory growing ceaselessly. Should I manually free the GPU memory? If yes, how? Anyway, thanks again for your help. Looking forward to your reply.
st119256
I think it’s just a wrong input size for the fc-layer and the usage of the cat() function:

import torch

hidden_dim = 10
x = torch.randn(hidden_dim, 1).cuda()
print(x.size())
y = torch.cat((x, x), 1)
print(y.size())
y = torch.cat((x, x), 0)
print(y.size())

The result is below:

torch.Size([10, 1]) <-- hidden_dim x 1
torch.Size([10, 2])
torch.Size([20, 1]) <-- 2 * hidden_dim x 1

So the fixed code is:

self.hidden2hidden = nn.Linear(2 * self.hidden_dim, self.hidden_dim)
...
node.calculate_result = self.hidden2hidden(torch.cat((node.children[0].calculate_result,
                                                      node.children[1].calculate_result), 0))
st119257
Thanks for your help. However, the problem occurs in the following code:

if not node.val is None:
    if self.cuda_flag:
        node.calculate_result = self.word2hidden(
            self.embedding(Variable(torch.LongTensor([node.word_id]).cuda())))
    else:
        node.calculate_result = self.word2hidden(
            self.embedding(Variable(torch.LongTensor([node.word_id]))))
    return node.calculate_result

And I’ve found a puzzling phenomenon: if the above code is changed to

if not node.val is None:
    if self.cuda_flag:
        variable = Variable(torch.LongTensor([node.word_id]).cuda())
    else:
        variable = Variable(torch.LongTensor([node.word_id]))
    node.calculate_result = self.word2hidden(self.embedding(variable))
    return node.calculate_result

the TypeError is raised. Aren’t these two implementations the same? If not, what’s the difference?
st119258
You should check the connectivity between network layers. Try self.embedding(Variable(torch.LongTensor(node.word_id).cuda())), or add a squeeze to the variable: variable = variable.squeeze()
st119259
Thanks for your reply. The first solution is obviously impracticable: torch.LongTensor(node.word_id) would give a Tensor whose length is node.word_id (an integer). As for the second one, I don’t know what .squeeze() does. But as the problem occurs in self.embedding(Variable(torch.LongTensor([node.word_id]).cuda())), it may not help solve the problem either. Anyway, thanks for your help.
st119260
You don’t need to save anything for backprop, autograd will take care of that, and my solution is valid. The problems you’re having are only due to giving inputs of invalid sizes to different modules. You can print them inside your module and see if they are what you expect, and what matches the requirements specified in the docs.
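A small standalone check along those lines (my own sketch, not code from the thread): nn.Linear expects a 2-D (batch, features) input, so printing sizes before each layer quickly shows whether the cat dimension is right:

import torch
import torch.nn as nn
from torch.autograd import Variable

hidden_dim = 10
hidden2hidden = nn.Linear(2 * hidden_dim, hidden_dim)

left = Variable(torch.randn(1, hidden_dim))     # stands in for one child's result
right = Variable(torch.randn(1, hidden_dim))

combined = torch.cat((left, right), 1)          # concatenate along dim 1
print(combined.size())                          # torch.Size([1, 20]), matches the layer's in_features
print(hidden2hidden(combined).size())           # torch.Size([1, 10])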
st119261
Yes, you’re right. The computation in the case “not node.val is None” is correct; the problem is in the computation of the other case. I’m trying to fix it. Thank you very much!
st119262
Now, it seems that the return in the following code has something wrong:

if not node.val is None:
    if self.cuda_flag:
        variable = Variable(torch.LongTensor([node.word_id]).cuda())
    else:
        variable = Variable(torch.LongTensor([node.word_id]))
    result = self.word2hidden(self.embedding(variable))
    node.calculate_result = result.data
    return node.calculate_result

Now, I guess I know the problem. In module.py, the abstract class Module has a method __call__; in lines 210-211 there is a loop:

while not isinstance(var, Variable):
    var = var[0]

As I return a torch.FloatTensor, the loop keeps going until the error is raised. After I make the following changes, everything works again, except that the weights of the model aren’t updated.

No.1, change in RootAlign:

class RootAlign(nn.Module):
    def __init__(self, word_embedding, config):
        super(RootAlign, self).__init__()
        self.rnn = VanillaRecursiveNN(word_embedding, config['hidden_dim'], config['cuda_flag'])
        self.linear = nn.Linear(config['hidden_dim'] * 2, config['relation_num'])

    def forward(self, p_tree, h_tree):
        p_tree.postorder_traverse(self.rnn)
        h_tree.postorder_traverse(self.rnn)

        p_result = Variable(p_tree.calculate_result)
        h_result = Variable(h_tree.calculate_result)
        out = F.log_softmax(self.linear(F.sigmoid(
            torch.cat((p_result, h_result), 1))))
        return out

No.2, change in VanillaRecursiveNN:

def forward(self, node):
    if not node.val is None:
        if self.cuda_flag:
            result = self.word2hidden(self.embedding(
                Variable(torch.LongTensor([node.word_id]).cuda())))
        else:
            result = self.word2hidden(self.embedding(
                Variable(torch.LongTensor([node.word_id]))))
        node.calculate_result = result.data
        return result
    else:
        assert len(node.children) == 2
        l_result = Variable(node.children[0].calculate_result)
        r_result = Variable(node.children[1].calculate_result)
        result = self.hidden2hidden(torch.cat((l_result, r_result), 1))
        node.calculate_result = result.data
        return result

It seems that the usage of GPU memory still grows ceaselessly.
st119263
Ah I now see that you’re storing the Variables in the tree, because you need to read them from the higher parts. In this case you can’t unpack the .data, because you need the backprop to include the lower parts of the tree too. The problem I see is that you’re not clearing the Variables/tensors stored in the trees after you finish the iteration. You could try writing a simple function that traverses the tree and dels all outputs after computing output = self.model(p_tree, h_tree) (you need to clean both trees). This should reduce the memory usage.
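A possible shape for that cleanup (a sketch under the assumption that each node caches its output in calculate_result and that postorder_traverse(func) calls func(node) on every node, as in the code above):

def clear_results(node):
    # drop the cached Variable so the autograd graph (and the GPU buffers
    # it keeps alive for backward) can be freed after the parameter update
    if hasattr(node, 'calculate_result'):
        del node.calculate_result

# in train_step, after self.optimizer.step():
#     p_tree.postorder_traverse(clear_results)
#     h_tree.postorder_traverse(clear_results)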
st119264
Well, I guess this is the key to addressing the problem. I’ll try it immediately. Cool!!! The problem is addressed. Thank you very much!!!
st119265
I have an additional question: how do I batch tree data when training the model? Every tree has its own structure, so how can I batch them under the current implementation of the recursive model? P.S. Each epoch takes about 4.5 hr on SNLI (GPU: Titan X), which is too long.
st119266
You can’t easily batch trees with this approach. You would need to use something like SPINN https://github.com/jekbradbury/examples/tree/spinn/snli/spinn.py (or in general batch before compute-heavy ops and unbatch after)
st119267
Your implementation is cool~ I’ll learn from it and try to batch data on my model. Thanks very much.