st117568
If you need the 1st element of alexnet.classifier, just alexnet.classifier[1].state_dict() will give you an OrderedDict containing the weight and bias.
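For illustration, a minimal sketch (assuming torchvision's AlexNet; index 1 is the first Linear layer of its classifier):

import torchvision.models as models

alexnet = models.alexnet(pretrained=False)
# state_dict() of a single sub-module returns an OrderedDict with its tensors
params = alexnet.classifier[1].state_dict()
print(params.keys())            # ['weight', 'bias']
print(params['weight'].size())  # the weight matrix of that Linear layer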
st117569
I am using an LSTM to model time-series data. Since the data I am using are long and of variable length (5,000 ~ 20,000 data points per day, depending on the day), I would like to limit back-propagation through time to a controlled maximum, say 100 steps. That is, regardless of where you stand in time, back-prop goes at most 100 time steps backward. Is there a way to do that?
st117570
You can call detach() every hundred timesteps to detach the hidden state from its previous states. That way you end up with:

without detach: x -> x -> x -> x -> x -> x -> ... -> x -> x -> x (one long chain)

after detaching every T timesteps: x -> ... -> x   x -> ... -> x   x -> ... -> x (separate chains of length T)
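A minimal sketch of that pattern (lstm, inputs and the loss computation are placeholders; the point is repackaging both h and c with detach() every T steps):

T = 100
hidden = None
for t, x_t in enumerate(inputs):        # x_t: one time step of shape (1, batch, features)
    output, hidden = lstm(x_t, hidden)
    if (t + 1) % T == 0:
        # cut the graph: back-propagation will not flow further back than this point
        hidden = (hidden[0].detach(), hidden[1].detach())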
st117571
Thank you. Sounds great. Just one more question. The final hidden state (from the final time step) is fed to the next layer of my network and eventually fed to the final loss function. Detaching the hidden states at every T steps wouldn’t distort this in any strange way, would it ?
st117572
I have a UNet autoencoder that branches at the bottom of the U to a dense layer that does classification. When I train my model, everything works fine and my network produces two outputs: the output based off of the bottleneck branch, and the final autoencoded output. However, when I test my model using:

model.eval()
inputs = Variable(inputs.cuda(), volatile=True)

what I've noticed is that the variables coming out of the autoencoder are fine, but the results coming off of the linear layer are all NaN. I tried removing volatile=True, no difference. I altered my model's .forward() so that the classifier branch's Linear is computed before / after the autoencoder, no difference. Now, if I remove model.eval() and leave it with model.train(True), everything works! I don't call loss.backward() or step through my optimizer in train mode, and since I can use volatile=True I don't really care either way which 'mode' the model is in. But I found the behavior interesting =). Maybe someone can shed some light.
st117573
The problem is that one of your input samples during training contained a NaN value. The running_mean and running_var buffers of the BatchNorm layers hence likely contain NaNs. Because of this, in inference mode the outputs of the linear layer are NaNs.
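A quick way to check whether that is what happened (a sketch; in PyTorch the buffers are named running_mean and running_var, and NaN is the only value that is not equal to itself):

for m in model.modules():
    if hasattr(m, 'running_mean'):   # BatchNorm layers
        bad_mean = (m.running_mean != m.running_mean).sum() > 0
        bad_var = (m.running_var != m.running_var).sum() > 0
        if bad_mean or bad_var:
            print(m, 'has NaNs in its running statistics')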
st117574
I was trying to use autograd.grad, but torch said there is no such function. I didn't find it in the source either (there is only backward), but I have seen other people use this function. How can I fix it?
st117575
This is only available on master right now and not in the releases (because the feature is very recent). If you want to use it right now, you will need to install from source.
st117576
Thanks for your reply. I have installed it from source, but there is a usage of run_backward with 5 arguments, while this function only takes 3 args (def run_backward(self, variable, grad, retain_variables)). Do you know where I can get the source of the changed run_backward?
st117577
No, there are no new releases yet. You can check here: https://github.com/pytorch/pytorch/releases
st117578
Hi all, from the source code I could not find why a Parameter is automatically registered in a Module. Can anyone explain? Best wishes, Qiuqiang
st117579
It's because of the following code in __getattr__, but also in the rest of this file (https://github.com/pytorch/pytorch/blob/d1a44676828ef65067414c938b15412f85d1a39e/torch/nn/modules/module.py#L225-L283):

def __getattr__(self, name):
    if '_parameters' in self.__dict__:
        _parameters = self.__dict__['_parameters']
        if name in _parameters:
            return _parameters[name]
    if '_buffers' in self.__dict__:
        _buffers = self.__dict__['_buffers']
        if name in _buffers:
            return _buffers[name]
    if '_modules' in self.__dict__:
        modules = self.__dict__['_modules']
        if name in modules:
            return modules[name]
    raise AttributeError("'{}' object has no attribute '{}'".format(
        type(self).__name__, name))

def __setattr__(self, name, value):
    def remove_from(*dicts):
        for d in dicts:
            if name in d:

(snippet truncated in the original preview)
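In short, Module.__setattr__ intercepts attribute assignment: when the value is an nn.Parameter it is stored in self._parameters, and __getattr__ later looks it up there, which is why a Parameter assigned as an attribute shows up in parameters() automatically. A small sketch of the effect:

import torch
import torch.nn as nn
from torch.autograd import Variable

class Toy(nn.Module):
    def __init__(self):
        super(Toy, self).__init__()
        self.w = nn.Parameter(torch.randn(3))   # stored in self._parameters
        self.v = Variable(torch.randn(3))       # a plain Variable is not registered

toy = Toy()
print([p.size() for p in toy.parameters()])     # only w is listed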
st117580
Dear Soumith, Many thanks for the reply! I am clear now! Best wishes, Qiuqiang
st117581
I really like the feature that I can read a 2D matrix row-by-row using a for-loop:

a = torch.FloatTensor(4, 4).random_()
for b in a:
    print(b)

However, if the size of the 2D matrix is 0, this throws an error:

a = torch.FloatTensor(0, 4).random_()
for b in a:
    print(b)

RuntimeError: dimension 0 out of range of 0D tensor at /py/conda-bld/pytorch_1493677666423/work/torch/lib/TH/generic/THTensor.c:24

I tried with numpy and it works fine with a matrix of size 0:

a = np.random.uniform(size=(0, 4))
for b in a:
    print(b)

I am not sure if this is a bug. If it can be "fixed", it would be helpful, as we wouldn't need to check the size of the matrix before the for-loop.
st117582
We don't have zero-dimensional Tensors.

a = torch.FloatTensor(0, 4)
print(a.size())
# torch.Size([])
st117583
I wrote a custom vector similarity loss function as I wanted to experiment with different vector similarity heuristics. This is the class:

class CosineLoss(torch.nn.Module):
    '''
    Loss calculated on the cosine distance between batches of vectors:
        loss = 1 - label * a.b / (|a|*|b|)
    '''
    def __init__(self):
        super(CosineLoss, self).__init__()

    def cosine_similarity(self, mat1, mat2):
        return mat1.unsqueeze(1).bmm(mat2.unsqueeze(2)).squeeze() / \
            (torch.norm(mat1, 2, 1) * torch.norm(mat2, 2, 1))

    def forward(self, input_tensor, target_tensor, labels):
        sim = self.cosine_similarity(input_tensor, target_tensor)
        loss = (1.0 - labels * sim).sum() / labels.size(0)
        return loss

This has very similar behaviour to nn.CosineEmbeddingLoss: it takes two tensors and a set of labels, and calculates a positive or negative similarity loss depending on the labels' sign. One difference is that I have not used a margin (equivalent to margin = 0 in nn.CosineEmbeddingLoss). On two batches of vectors enc and dec, the loss calculation is:

self.error_f = CosineLoss()
labels = autograd.Variable(torch.ones(batch_size))
loss = self.error_f(enc, dec, labels) + \
    self.error_f(enc, dec[torch.randperm(batch_size)], -labels)

Here, I use the ground-truth batch as a positive batch, and a shuffled batch as the negative batch (to avoid the easy minimum of zero-valued parameters). I am able to train successfully with this loss and begin to converge, but after some time (30-40 epochs on a small dataset) the loss seems to get polluted with NaNs when calculating the negative batch loss (the second term above). Using the cosine loss from the nn library I am able to train without NaNs. However, I don't see anything immediately wrong with my implementation. Is there some trick I have missed that was used when implementing nn.CosineEmbeddingLoss?
st117584
Does adding an epsilon in your cosine_similarity function when you divide by the norms help? These norms can go to 0 during training, which would result in NaN values.
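For reference, a sketch of that change in the class above (the eps value is arbitrary):

def cosine_similarity(self, mat1, mat2, eps=1e-8):
    dot = mat1.unsqueeze(1).bmm(mat2.unsqueeze(2)).squeeze()
    # eps keeps the denominator away from zero when a vector norm collapses
    return dot / (torch.norm(mat1, 2, 1) * torch.norm(mat2, 2, 1) + eps)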
st117585
@albanD adding an epsilon to the norms worked like a charm. Thanks for the tip, great help!
st117586
I was reading Improved Training of Wasserstein GANs and thinking about how it could be implemented in PyTorch. It seems not so complex, but how to handle the gradient penalty term in the loss (LAMBDA * E[(||grad_{x_hat} D(x_hat)||_2 - 1)^2] added to the critic loss) troubles me. In the tensorflow implementation, the author uses tf.gradients (https://github.com/igul222/improved_wgan_training/blob/master/gan_cifar.py#L132-L136):

gradients = tf.gradients(Discriminator(interpolates), [interpolates])[0]
slopes = tf.sqrt(tf.reduce_sum(tf.square(gradients), reduction_indices=[1]))
gradient_penalty = tf.reduce_mean((slopes-1.)**2)
disc_cost += LAMBDA*gradient_penalty

I wonder if there is an easy way to handle the gradient penalty. Here is my idea of implementing it; I don't know whether it will work, and work in the way I think:

optimizer_D = optim.adam(model_D.parameters())
x = Variable()
y = Variable()
x_hat = (alpha*x+(1-alpha)*y).detach()
x_hat.requires_grad = True
loss_D = model_D(x_hat).sum()
loss_D.backward()
x_hat.grad.volatile = False
loss = model_D(x).sum() - model_D(y).sum() + ((x_hat.grad -1)**2 * LAMBDA).sum()
loss.backward()
optimizer_D.step()
st117587
For the moment, it’s not yet possible to have gradients of gradients in PyTorch, but there is a pending PR 612 that will implement that and should be merged soon.
st117588
Thank you, I had a look into this, but from what I see torch does not yet have support for higher-order derivatives of the non-linear functions present in the DCGAN model. Or am I wrong?
st117589
You are right, most functions are still old-style and don't support grad of grad. There is a temporary fix: use a finite difference rather than the differential, where x_1 and x_2 are sampled around x_hat (idea from 郑华滨).
st117590
Have been struggling with this as well, could you provide an example of how it can be used?
st117591
Ajay and I discussed that a bit a while ago, and there is a link to a blog post and Jupyter notebook doing the toy examples from the improved training article in pytorch, in the thread "Wasserstein loss layer/criterion":

    Hi @AjayTalati, thanks for the pointers! I will definitely look into implementing an application or two. In the meantime, I jotted down a few thoughts regarding the Improved Training paper for your amusement while we're waiting for the grad of grad to be merged. Or maybe we find something to do with the original WGAN code, too. Have good holidays, best regards Thomas

Best regards
Thomas
st117592
@caogang is working on it (https://github.com/caogang/wgan-gp, a pytorch implementation of the paper "Improved Training of Wasserstein GANs"), looking forward to that.
st117593
Now I am working on gan_language; gan_toy is finished. Hope it will be helpful: https://github.com/caogang/wgan-gp (a pytorch implementation of the paper "Improved Training of Wasserstein GANs").
st117594
The idea seems more likely to come from Thomas's Semi-Improved Training of Wasserstein GANs, or is it just a coincidence?
st117595
Hi @orashi, thank you for the credit. I might be among the first to discuss this in detail in this specific context and with a pytorch implementation, but certainly the identification of 1-Lipschitz (in the classical definition) with unit sphere in $W_{1,\infty}$ in the Sobolev scale (which is the fancy mathematician talk for the gradient being bounded by 1) is very standard just as the approximation of the derivative by finite differences (actually, one could fancy-talk that into a different norm, but let’s not), so I would expect many other people to have the same idea independently, so I’d go for coincidence. (Actually sampling two points is a bit different to sampling the center and using the two side points as I did, too.) What struck me as particularly curious in this case is why the authors of Improved Training chose to do a point-wise derivative test instead of testing the Lipschitz constant directly, but I have not asked them yet, so I don’t know. Best regards Thomas
st117596
I first found the idea on Zhihu (the Chinese Quora). The author seems to simply use the difference as an approximation of the differential, but someone commented on the article that the difference is actually a better way for the Kantorovich dual problem. The blog post from @tom seems both more insightful and more intuitive. Excellent work!
st117597
Hi, just a quick update: as discussed in the SLOGAN blog post, the difference is generally not the gradient itself, but only a projection of it (along the direction between the sampled points). Thus it would seem more prudent to use a one-sided penalty in this formulation. Best regards Thomas
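To make the earlier suggestion concrete, here is a rough sketch of such a finite-difference, one-sided penalty for the critic. This is an approximation under the assumptions discussed above, not the exact gradient penalty from the paper; netD, real, fake, batch_size and LAMBDA are placeholders, inputs are assumed to be 2-D of shape (batch, features), and netD is assumed to return one score per sample of shape (batch, 1):

# two random interpolates between real and fake samples
alpha1 = torch.rand(batch_size, 1).expand(real.size())
alpha2 = torch.rand(batch_size, 1).expand(real.size())
x1 = Variable(alpha1 * real.data + (1 - alpha1) * fake.data)
x2 = Variable(alpha2 * real.data + (1 - alpha2) * fake.data)

# difference quotient |D(x1) - D(x2)| / ||x1 - x2|| as a stand-in for the gradient norm
dist = Variable((x1.data - x2.data).norm(2, 1) + 1e-8)
slope = (netD(x1) - netD(x2)).abs() / dist

# one-sided penalty: only punish slopes larger than 1
penalty = LAMBDA * torch.clamp(slope - 1.0, min=0).pow(2).mean()
d_loss = netD(fake).mean() - netD(real).mean() + penalty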
st117598
When I am using torch.add(), I have encountered the following error:

inputs = torch.add(inputs, -0.02, noise)
TypeError: add() takes exactly 2 arguments (3 given)
st117599
The error should be quite self explanatory, you’ve given torch.add too many arguments.
st117600
I have the same problem, and while it seems indeed self-explanatory, it contradicts the documentation at http://pytorch.org/docs/torch.html, which says:

torch.add(input, value, out=None)
Adds the scalar value to each element of the input input and returns a new resulting tensor.

So I will be using torch.mul, but the question stands, and someone has to either implement the documented function or fix the documentation.
st117601
I've tried to reproduce Shiyu's error, but I can't. Can one of you give me a repro?

In [1]: import torch
In [2]: a = torch.randn(10)
In [3]: torch.add(a, 0.1, a)
Out[3]:
 0.9079
 1.1069
-1.5801
-0.3657
-0.6019
-0.5571
 0.5797
 0.2054
-0.9112
-0.7749
[torch.FloatTensor of size 10]
st117602
I am not near a PC now, but the thing is: it worked when I used python in a shell, and showed the problem when I ran it from a file with some imports and so on. I will be able to send details tomorrow.
st117603
Now it is working, but doesn't do anything. The code I wrote in a file:

import torch
q = torch.randn(10)
print "1 ", q
torch.add(q, 0.1, q)
print "2 ", q
exit()

The output I get:

1
 1.6000
-0.5714
-2.1857
 0.9774
 2.3266
-2.2662
-1.2868
-0.7741
-0.3532
-0.0593
[torch.FloatTensor of size 10]
2
 1.6000
-0.5714
-2.1857
 0.9774
 2.3266
-2.2662
-1.2868
-0.7741
-0.3532
-0.0593
[torch.FloatTensor of size 10]
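For what it's worth, torch.add is out-of-place: it returns a new tensor and leaves q unchanged unless you assign the result back, e.g. (a sketch):

q = torch.randn(10)
out = torch.add(q, 0.1, q)   # out = q + 0.1 * q; q itself is left unchanged
q = torch.add(q, 0.1, q)     # assign the result back if you want q to change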
st117604
Hello, I am working on an implementation of the Grad-CAM paper. The problem is that it is coded in pure torch, and as you probably know, with torch we can backward a model/Sequential directly, like model.backward(input, target). I am stuck at this stage, as my input is the output of a Conv2d ([torch.FloatTensor of size 1x256x13x13]) and the target is a one-dimensional tensor with the targeted class set to 1 and the rest to 0. How could I do this backward? I tried to use an optimizer and a loss function, but it seems not possible with such tensors. Any ideas? Justin
st117605
You cannot do model.backward(input, target), whether in LuaTorch or PyTorch. LuaTorch's actual interface is model.backward(input, gradients_wrt_output). You need a loss function, whether in Lua or PyTorch, to measure the distance between output and target.
st117606
Ohh ok, I understand the Lua interface now. How could I translate it into pytorch then? With a loss function? If I understand it well, there is a gradInput attribute in torch Lua modules, but I don't understand how I could access it with pytorch. Thank you for taking the time to answer; I am a beginner concerning implementation with pytorch.
st117607
Try:

output = model(input)
gradInput = torch.autograd.grad(output, input, target)

?
st117608
Oh, I found out it was only available on master 21 days ago. Is it still the case?

EDIT: I installed from source and I have the grad function now, but I get the following error:

Traceback (most recent call last):
  File "grad_cam.py", line 96, in <module>
    Gradinput = torch.autograd.grad(logit, Variable(model1_output), doutput)
  File "/home/lelouedec/anaconda2/lib/python2.7/site-packages/torch/autograd/__init__.py", line 153, in grad
    inputs, only_inputs)
RuntimeError: One of the differentiated Variables appears to not have been used in the graph

What I want to achieve is the following piece of code from Lua:

model2:zeroGradParameters()
model2:backward(model1.output, doutput)
-- Get the activations from model1 and gradients from model2
local activations = model1.output:squeeze()
local gradients = model2.gradInput:squeeze()

where model1 is the first half of AlexNet and model2 the other half.
st117609
I personally haven't really tried torch.autograd.grad in practice, so I'm not sure if I'm correct. My suggestion is: model1_output should be a Variable with requires_grad set to True, and logit should be a function of model1_output. For example:

model1_output = model1(input)   # input can not be volatile
logit = model2(model1_output)
Gradinput = torch.autograd.grad(logit, model1_output, doutput)

BTW, there is a pytorch Grad-CAM implementation which just came out; it uses register_hook to save the gradients: https://github.com/jacobgil/pytorch-grad-cam/blob/master/grad-cam.py
st117610
Well, thank you, I was on the right track and stopped working at the last step of the algorithm yesterday... I used hooks to get the gradInput and output of intermediate layers. Easier than his technique, I think. But anyway, thank you; I maybe didn't do it first, but at least I learned.
st117611
I have a net that does not use the embedding or RNN modules. I just create a Variable, but it seems that the variable does not change during training. How can I adjust the variable I want to tune?
st117612
You are a bit vague, but maybe you forgot to set requires_grad?

v = Variable(..., requires_grad=True)
st117613
Thank you, I just found what I was looking for: nn.Parameter, a kind of Variable that is added to the model's parameters() by default.
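A minimal sketch of that pattern (names and sizes are placeholders):

import torch
import torch.nn as nn
import torch.optim as optim

class MyNet(nn.Module):
    def __init__(self):
        super(MyNet, self).__init__()
        # an nn.Parameter attribute is returned by parameters(), so the optimizer updates it
        self.scale = nn.Parameter(torch.ones(1))

    def forward(self, x):
        return x * self.scale.expand_as(x)

net = MyNet()
optimizer = optim.SGD(net.parameters(), lr=0.1)   # includes scale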
st117614
I am having some weird issues with running my model on the GPU. I have a model that takes in multiple inputs, computes features, concats them and makes some predictions. When I run CUDA_VISIBLE_DEVICES=0 python -m supervised.model.train --lr 1e-3 --batch_size 32 --cuda my code ends up with this error Train Epoch: 1 [27/6204 (0%)] | lr 1.00e-03 | s/batch 0.010, 0.060, 0.006 | Losses (r, q, total) 27.5339, 21.0737, 48.6076 Train Epoch: 1 [28/6204 (0%)] | lr 1.00e-03 | s/batch 0.010, 0.058, 0.006 | Losses (r, q, total) 27.3787, nan, nan Train Epoch: 1 [29/6204 (0%)] | lr 1.00e-03 | s/batch 0.010, 0.060, 0.006 | Losses (r, q, total) nan, nan, nan or Train Epoch: 1 [11/6204 (0%)] | lr 1.00e-03 | s/batch 0.010, 0.061, 0.005 | Losses (r, q, total) 29.9488, 23.4458, 53.3945 Train Epoch: 1 [12/6204 (0%)] | lr 1.00e-03 | s/batch 0.010, 0.061, 0.005 | Losses (r, q, total) inf, 23.1003, inf Train Epoch: 1 [13/6204 (0%)] | lr 1.00e-03 | s/batch 0.010, 0.060, 0.005 | Losses (r, q, total) inf, 23.1479, inf Train Epoch: 1 [14/6204 (0%)] | lr 1.00e-03 | s/batch 0.010, 0.061, 0.005 | Losses (r, q, total) inf, 22.9168, inf Train Epoch: 1 [15/6204 (0%)] | lr 1.00e-03 | s/batch 0.011, 0.078, 0.005 | Losses (r, q, total) inf, 22.8032, inf Train Epoch: 1 [16/6204 (0%)] | lr 1.00e-03 | s/batch 0.009, 0.087, 0.005 | Losses (r, q, total) inf, 22.7152, inf Train Epoch: 1 [17/6204 (0%)] | lr 1.00e-03 | s/batch 0.010, 0.063, 0.005 | Losses (r, q, total) inf, 22.4327, inf Train Epoch: 1 [18/6204 (0%)] | lr 1.00e-03 | s/batch 0.010, 0.061, 0.005 | Losses (r, q, total) inf, 22.4060, inf Train Epoch: 1 [19/6204 (0%)] | lr 1.00e-03 | s/batch 0.009, 0.069, 0.004 | Losses (r, q, total) inf, inf, inf If I use a larger batch size (128), I get Train Epoch: 1 [16/1551 (1%)] | lr 1.00e-03 | s/batch 0.034, 0.185, 0.011 | Losses (r, q, total) 29.1557, 22.7390, 51.8947 Train Epoch: 1 [17/1551 (1%)] | lr 1.00e-03 | s/batch 0.034, 0.185, 0.011 | Losses (r, q, total) 29.0046, 22.5440, 51.5485 /b/wheel/pytorch-src/torch/lib/THC/THCTensorIndex.cu:321: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2]: block: [26,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed. ... File "/usr/local/lib/python3.5/dist-packages/torch/autograd/_functions/tensor.py", line 310, in forward return torch.cat(inputs, self.dim) RuntimeError: cuda runtime error (59) : device-side assert triggered at /b/wheel/pytorch-src/torch/lib/THC/generic/THCTensorMath.cu:226 The code runs perfectly if I run it on the CPU. Moreover, if I use an extra flag CUDA_LAUNCH_BLOCKING=1 before the script, everything works just fine and nothing crashes, nothing goes to inf or nan. What can the issue be and how do I fix it? This seems like a Pytorch kernel. Versions: Python 3.5.2 Cuda compilation tools, release 8.0, V8.0.44 Ubuntu 16.04.2 LTS pip install http://download.pytorch.org/whl/cu80/torch-0.1.12.post2-cp35-cp35m-linux_x86_64.whl
st117615
I think adding CUDA_LAUNCH_BLOCKING=1 and everything working is probably a false positive; in that case random memory is just filled with zeros. Same with CPU (the illegal memory being hit is probably all zeros). This is most definitely an index-out-of-bounds issue (indexing with < 0 or >= size). Can you run this repeatedly with CUDA_LAUNCH_BLOCKING=1 and see if you can get a good stack trace to identify the location of your issue?
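A quick sanity check you can drop in right before the op that triggers the assert, to catch the bad batch (a sketch; idx stands for whatever LongTensor you index with, and num_rows for the size of the dimension being indexed, both placeholders):

assert idx.min() >= 0 and idx.max() < num_rows, \
    'index out of range: min=%d max=%d allowed=[0, %d)' % (idx.min(), idx.max(), num_rows)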
st117616
Turns out the issue was with h5py and the DataLoader: using 2 or more workers with DataLoader when reading from an HDF5 input results in corrupted data. The read data was randomly corrupted, and that resulted in errors in the network, with losses going to NaN or illegal indexing being performed.
st117617
I would like to apply L2 regularization toward the initial values for an embedding model. Code snippet in Theano (https://github.com/jwieting/iclr2016/blob/master/sentiment/lstm_model_sentiment.py):

l2 = 0.5*params.LC*sum(lasagne.regularization.l2(x) for x in self.network_params)
if params.updatewords:
    return l2 + 0.5*params.LW*lasagne.regularization.l2(We-initial_We)
else:
    return l2

In the paper, the authors say "All models use L2 regularization on all parameters, except for the word embeddings, which are regularized back to their initial values with an L2 penalty". But I don't know how to "regularize back to their initial values". I have tried this, but it did not work as expected:

optimizer = optim.Adam([
    {'params': model.parameters(), 'lr': args.lr, 'weight_decay': args.wd},
    {'params': embedding_model.parameters(), 'lr': args.emblr, 'weight_decay': args.embwd}
])
st117618
You can use the optimizer's weight_decay option for L2 regularization, but it won't pull the weights towards their initial values; it only decays the t-1 weight values. You'll have to implement something like the Theano snippet yourself, right after the optim.step call.
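A rough sketch of doing that by hand after each update (assuming embedding_model is an nn.Embedding; the step below mimics what plain SGD would do for the penalty 0.5 * lambda * ||W - W_init||^2, and the names and coefficients are placeholders):

# once, right after building the model
initial_emb = embedding_model.weight.data.clone()

# in the training loop, right after optimizer.step()
w = embedding_model.weight.data
w.add_(-args.emblr * args.embwd, w - initial_emb)   # w <- w - lr * lambda * (w - w_init)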
st117619
Here is how I implement my custom L2. Can anybody verify if it is correct? Here is the getParameters function, which takes all parameters of a sub-model and flattens them so I can get the norm easily:

def getParameters(self):
    """
    Get flat parameters.
    Note that getParameters and parameters are not equal in this case:
    getParameters does not get the parameters of the output module.
    :return: 1d tensor
    """
    params = []
    for m in [self.ix, self.ih, self.fx, self.fh, self.ox, self.oh, self.ux, self.uh]:
        # we do not get params of the output module
        l = list(m.parameters())
        params.extend(l)
    one_dim = [p.view(p.numel()) for p in params]
    params = F.torch.cat(one_dim)
    return params

I add my custom L2 to err before I call backward, then step. Only err is a Variable (err is the output of criterion(output, target)); l2_model, l2_emb_params and batch_size are not Variables (they are floats and an int).

params = self.model.getParameters()
params_norm = params.data.norm()
l2_model = 0.5*self.args.reg*params_norm*params_norm
emb_params = list(self.embedding_model.parameters())[0]
emb_params_norm = (emb_params.data - self.emb_params_init).norm()
l2_emb_params = 0.5 * self.args.embreg * emb_params_norm * emb_params_norm
err = (err + l2_model + l2_emb_params) / batch_size
err.backward()
# after the batch loop
optim.step()
optim.zero_grad()
st117620
Is it possible to resize the pretrained models to work with smaller images (40x40) instead of 256x256? I want to try decreasing training time by first doing my experiments on smaller images (40x40). Is it even possible to take a pretrained network, change it to process 40x40, and then fine-tune on top of that? Are there any other issues with doing this? And if it is possible, how would one go about doing it? I read all the docs on pretrained models and didn't see a lot of information.
st117621
All the pre-trained models require at least 224x224 images as input. If you have smaller images, you might have to upsample them to 224x224.
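For example, with the usual torchvision preprocessing (a sketch; bilinearly upsampled 40x40 images will be blurry, but the pretrained weights can still be reused, and the normalization constants below are the standard ImageNet ones):

from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Scale(224),                     # upsample so the smaller side is 224
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])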
st117622
Hi, in my resnet class I defined a list of layers:

for k in range(4):
    self.layers[k] = self.make_layer(...)

When running, it showed "RuntimeError: tensors are on different GPUs", although I have only one GPU. I modified the code and added something like:

self.layer1 = self.layers[0]
self.layer2 = self.layers[1]
...

in the init function, ran it again, and everything is OK. I guess the current .cuda() function cannot copy the contents of complex data structures, such as lists, to the GPU. Am I right?
st117623
you are correct. .cuda() will only copy parameters and buffers of a model on to GPU, not all the other custom attributes.
st117624
@smth You are really quick! In my case, if .cuda() could copy the parameters of the list of layers, the code would be much nicer, especially when there are many layers. So, is there any solution for my case? Many thanks! Ben
st117625
Hi, for a list of Modules you have http://pytorch.org/docs/nn.html#modulelist, and for a list of Parameters you have http://pytorch.org/docs/nn.html#parameterlist. Using them will make sure that all methods like .cuda() or .parameters() work as expected!
st117626
Thanks @albanD! Now my code is like this and it works:

self.layer_list = []
for k in range(4):
    self.layer_list.append(self.make_layer(...))
self.layers = nn.ModuleList(self.layer_list)

I didn't use ParameterList. Please tell me if there is anything wrong!
st117627
ParameterList is for when you want to store a list of Parameters; it is not needed in your case. You can do even better:

self.layers = nn.ModuleList()
for k in range(4):
    self.layers.append(self.make_layer(...))
st117628
Cool! Thanks! I found some similar posts here. This should be added to the document and made easy to find.
st117629
So I am trying to use a vgg16 for simple image classification. I converted a PIL image to a tensor, then to a Variable, and passed it to the model, but I still get a "not 3d tensor" error. What could be the problem?

edit: never mind, the image was 3d, but to forward it you need to pass it as a 4d tensor.

def transform_image(pil_image):
    loader = transforms.Compose([
        transforms.Scale(imsize),  # scale imported image
        transforms.ToTensor()])    # transform it into a torch tensor
    return loader(pil_image)

def load_model():
    return models.vgg16(pretrained=True)  # todo: load finetuned model only for hotdogs

def get_label(path):
    one_image = load_image(path)
    image_tensor = transform_image(one_image)
    image_as_variable = Variable(image_tensor)
    print(image_as_variable)
    model = load_model()
    label = model.forward(image_as_variable)
    return label

def load_image(path):
    return Image.open(path)
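That is, adding the missing batch dimension before the forward pass, roughly:

image_tensor = transform_image(one_image)                  # 3 x H x W
image_as_variable = Variable(image_tensor.unsqueeze(0))    # 1 x 3 x H x W, a batch of one
label = load_model()(image_as_variable)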
st117630
I am fresh here…I want to use 8 gpu for DataParallel in both forward(success) and backward(failed).I don’t know why. If I only use one GPU for backward like that( 72.criterion = nn.CrossEntropyLoss().cuda()), it can work. However ,I want to play with 8 gpu.Here is my code. from future import print_function import argparse import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torchvision import datasets, transforms from torch.autograd import Variable Training settings parser = argparse.ArgumentParser(description=‘PyTorch MNIST Example’) parser.add_argument(’–batch-size’, type=int, default=64, metavar=‘N’, help='input batch size for training (default: 64)') parser.add_argument(’–test-batch-size’, type=int, default=1000, metavar=‘N’, help='input batch size for testing (default: 1000)') parser.add_argument(’–epochs’, type=int, default=10, metavar=‘N’, help='number of epochs to train (default: 10)') parser.add_argument(’–lr’, type=float, default=0.01, metavar=‘LR’, help='learning rate (default: 0.01)') parser.add_argument(’–momentum’, type=float, default=0.5, metavar=‘M’, help='SGD momentum (default: 0.5)') parser.add_argument(’–no-cuda’, action=‘store_true’, default=False, help='disables CUDA training') parser.add_argument(’–seed’, type=int, default=1, metavar=‘S’, help='random seed (default: 1)') parser.add_argument(’–log-interval’, type=int, default=10, metavar=‘N’, help='how many batches to wait before logging training status') args = parser.parse_args() args.cuda = not args.no_cuda torch.manual_seed(args.seed) if args.cuda: torch.cuda.manual_seed(args.seed) kwargs = {‘num_workers’: 1, ‘pin_memory’: False} if args.cuda else {} train_loader = torch.utils.data.DataLoader( datasets.MNIST('../data', train=True, download=True, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=args.batch_size, shuffle=True, **kwargs) test_loader = torch.utils.data.DataLoader( datasets.MNIST('../data', train=False, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=args.batch_size, shuffle=True, **kwargs) class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 10, kernel_size=5) self.conv2 = nn.Conv2d(10, 20, kernel_size=5) self.conv2_drop = nn.Dropout2d() self.fc1 = nn.Linear(320, 50) self.fc2 = nn.Linear(50, 10) def forward(self, x,target): x = F.relu(F.max_pool2d(self.conv1(x), 2)) x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2)) x = x.view(-1, self.num_flat_features(x)) x = F.relu(self.fc1(x)) x = F.dropout(x, training=self.training) x = self.fc2(x) output = F.log_softmax(x) return output def num_flat_features(self, x): size = x.size()[1:] # all dimensions except the batch dimension num_features = 1 for s in size: num_features *= s return num_features model = Net() if args.cuda: model=torch.nn.DataParallel(model, device_ids=[0,1,2,3,4,5,6,7]).cuda() criterion = torch.nn.DataParallel(nn.CrossEntropyLoss(), device_ids=[0,1,2,3,4,5,6,7]).cuda() optimizer = optim.SGD(model.parameters(), lr=args.lr 1, momentum=args.momentum) def train(epoch): model.train() for batch_idx, (data, target) in enumerate(train_loader): if args.cuda: data, target = data.cuda(), target.cuda() data, target = torch.autograd.Variable(data), torch.autograd.Variable(target) optimizer.zero_grad() output = model(data,target) loss = criterion(output, target) loss.backward() optimizer.step() if batch_idx % args.log_interval == 0: 
correct = 0 pred = output.data.max(1)[1] # get the index of the max log-probability correct += pred.eq(target.data).sum() print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}\t Accuracy: {}/{} ({:.0f}%)'.format( epoch, batch_idx * len(data), len(train_loader.dataset), 100. * batch_idx / len(train_loader), loss.data[0], correct, len(target), 100. * correct / len(target))) def test(epoch): model.eval() test_loss = 0 correct = 0 for data, target in test_loader: if args.cuda: data, target = data.cuda(), target.cuda() data, target = Variable(data, volatile=True), Variable(target) output = model(data) test_loss += F.nll_loss(output, target).data[0] pred = output.data.max(1)[1] # get the index of the max log-probability correct += pred.eq(target.data).cpu().sum() test_loss = test_loss test_loss /= len(test_loader) # loss function already averages over batch size print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format( test_loss, correct, len(test_loader.dataset), 100. * correct / len(test_loader.dataset))) for epoch in range(1, args.epochs + 1): train(epoch) test(epoch)
st117631
your code has screwed up formatting. You can look at our examples (dcgan or imagenet) for correct usage of DataParallel: https://github.com/pytorch/examples
st117632
Thanks for your help. I have run the examples. But it is too hard for me to understand the key step for DataParallel in backward. Could you teach me in a simple example like mnist? Here is my code.It 7 can run , but can only realize the DataParallel in forward. from __future__ import print_function import argparse import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torchvision import datasets, transforms from torch.autograd import Variable parser = argparse.ArgumentParser(description='PyTorch MNIST Example') parser.add_argument('--batch-size', type=int, default=64, metavar='N', help='input batch size for training (default: 64)') parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N', help='input batch size for testing (default: 1000)') parser.add_argument('--epochs', type=int, default=10, metavar='N', help='number of epochs to train (default: 10)') parser.add_argument('--lr', type=float, default=0.01, metavar='LR', help='learning rate (default: 0.01)') parser.add_argument('--momentum', type=float, default=0.5, metavar='M', help='SGD momentum (default: 0.5)') parser.add_argument('--no-cuda', action='store_true', default=False, help='disables CUDA training') parser.add_argument('--seed', type=int, default=1, metavar='S', help='random seed (default: 1)') parser.add_argument('--log-interval', type=int, default=10, metavar='N', help='how many batches to wait before logging training status') args = parser.parse_args() args.cuda = not args.no_cuda torch.manual_seed(args.seed) if args.cuda: torch.cuda.manual_seed(args.seed) kwargs = {'num_workers': 7, 'pin_memory': True} if args.cuda else {} train_loader = torch.utils.data.DataLoader( datasets.MNIST('../data', train=True, download=True, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=args.batch_size, shuffle=True, **kwargs) test_loader = torch.utils.data.DataLoader( datasets.MNIST('../data', train=False, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=args.batch_size, shuffle=True, **kwargs) class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 10, kernel_size=5) self.conv2 = nn.Conv2d(10, 20, kernel_size=5) self.conv2_drop = nn.Dropout2d() self.fc1 = nn.Linear(320, 50) self.fc2 = nn.Linear(50, 10) def forward(self, x): x = F.relu(F.max_pool2d(self.conv1(x), 2)) x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2)) x = x.view(-1, self.num_flat_features(x)) x = F.relu(self.fc1(x)) x = F.dropout(x, training=self.training) x = self.fc2(x) output = F.log_softmax(x) return output def num_flat_features(self, x): size = x.size()[1:] # all dimensions except the batch dimension num_features = 1 for s in size: num_features *= s return num_features model = Net() if args.cuda: model=torch.nn.DataParallel(model, device_ids=[0,1,2,3,4,5,6,7]).cuda() optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum) def train(epoch): model.train() for batch_idx, (data, target) in enumerate(train_loader): if args.cuda: data, target = data.cuda(), target.cuda() data, target = torch.autograd.Variable(data), torch.autograd.Variable(target) optimizer.zero_grad() output = model(data) loss = F.nll_loss(output, target) loss.backward() optimizer.step() if batch_idx % args.log_interval == 0: correct = 0 pred = output.data.max(1)[1] # get the index of the max log-probability correct += pred.eq(target.data).sum() 
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}\t Accuracy: {}/{} ({:.0f}%)'.format( epoch, batch_idx * len(data), len(train_loader.dataset), 100. * batch_idx / len(train_loader), loss.data[0], correct, len(target), 100. * correct / len(target))) def test(epoch): model.eval() test_loss = 0 correct = 0 for data, target in test_loader: if args.cuda: data, target = data.cuda(), target.cuda() data, target = Variable(data, volatile=True), Variable(target) output = model(data) test_loss += F.nll_loss(output, target).data[0] pred = output.data.max(1)[1] # get the index of the max log-probability correct += pred.eq(target.data).cpu().sum() test_loss = test_loss test_loss /= len(test_loader) # loss function already averages over batch size print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format( test_loss, correct, len(test_loader.dataset), 100. * correct / len(test_loader.dataset))) for epoch in range(1, args.epochs + 1): train(epoch) test(epoch)
st117633
First: don't double post. You've posted in the other thread with the same huge code block; it's not helpful. Second: why do you think DataParallel doesn't work in backward? Of course it works in backward too.
st117634
Oh, thanks!! Please forgive me; in fact it is my first time posting a topic on a coding forum, and I won't double-post again. I printed the loss and found it is a scalar. I'm curious about how DataParallel works in the backward pass. Your docs say that data parallelism is when we split the mini-batch of samples into multiple smaller mini-batches and run the computation for each of the smaller mini-batches in parallel. So I thought the loss should be a tensor of shape (8, 1), with each smaller mini-batch corresponding to one loss. Why is there only one scalar?
st117635
if you notice the examples, DataParallel is not applied to the entire network + loss. It is only applied to part of the network.

Before adding DataParallel:

network = features (conv layers) -> classifier (linear layers)
error = loss_function(network(input), target)
error.backward()

After adding DataParallel:

network = DataParallel(features (conv layers)) -> classifier (linear layers)
error = loss_function(network(input), target)
error.backward()
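For the MNIST script above, a minimal sketch of that pattern (wrap the network, or only part of it, and keep the loss outside on the gathered output; device_ids is a placeholder for however many GPUs you have):

model = torch.nn.DataParallel(Net(), device_ids=range(8)).cuda()

output = model(data)                  # forward is split across the GPUs, outputs gathered on GPU 0
loss = F.nll_loss(output, target)     # the loss is computed on the gathered output, a single scalar
loss.backward()                       # gradients flow back through the scatter/gather and are reduced
optimizer.step()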
st117636
So, how does DataParallel work in backward when I only wrap my network (without the loss) with it? By the way, I am following this discussion: https://discuss.pytorch.org/t/is-the-loss-function-paralleled-when-using-dataparallel/3346/2?u=bigxiuixu I have also tried computing the loss as part of the forward function in the model, here is the code:

def forward(self, x, target):
    x = F.relu(F.max_pool2d(self.conv1(x), 2))
    x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
    x = x.view(-1, self.num_flat_features(x))
    x = F.relu(self.fc1(x))
    x = F.dropout(x, training=self.training)
    x = self.fc2(x)
    output = F.log_softmax(x)
    return F.nll_loss(output, target), output

and wrapped the network + loss with DataParallel:

model = torch.nn.DataParallel(model, device_ids=[0,1,2,3,4,5,6,7]).cuda()

But then it says:

Traceback (most recent call last):
  File "main.py", line 135, in <module>
    train(epoch)
  File "main.py", line 98, in train
    loss.backward()
  File "/home/lab/anaconda2/lib/python2.7/site-packages/torch/autograd/variable.py", line 143, in backward
    'backward should be called only on a scalar (i.e. 1-element tensor) '
RuntimeError: backward should be called only on a scalar (i.e. 1-element tensor) or with gradient w.r.t. the variable
st117637
The code of dcgan is:

if opt.cuda:
    netD.cuda()
    netG.cuda()
    criterion.cuda()
    input, label = input.cuda(), label.cuda()
    noise, fixed_noise = noise.cuda(), fixed_noise.cuda()

So, does the criterion use just one GPU? If it uses just one GPU, what about the backward? Does it use one GPU too?
st117638
In my finetune models, I want to parallelize my model over multiple GPUs. My code is shown below:

class FinetuneModel(nn.Module):
    def __init__(self, pretrained_model, ngpu=opt.gpuids):
        self.ngpu = ngpu
        super(FinetuneModel, self).__init__()
        self.features = pretrained_model
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(512 * 4 * 4, 2048),
            ....
        )

    def forward(self, x):
        gpuids = None
        if self.ngpu:
            gpuids = range(self.ngpu)
        features = self.features(x)  # self.features has already implemented data parallel
        return nn.parallel.data_parallel(self.classifier, features, device_ids=gpuids)

As far as I know, when doing

features = self.features(x)  # self.features.forward has already implemented data parallel
score = nn.parallel.data_parallel(self.classifier, features, device_ids=gpuids)

the GPU first broadcasts the batch data to GPU0 and GPU1; after executing self.features, pytorch copies the result to GPU0. When executing self.classifier, pytorch again broadcasts the data to multiple GPUs. Is there a pytorchic way to reduce data copies, like

score = nn.parallel.data_parallel([self.features, self.classifier], features, device_ids=gpuids)

which only does one broadcast?
st117639
Nice, thanks. I'll try it later. Another question: why is only model.features parallelized here, and not the whole model?
st117640
AlexNet and VGG contain lots of parameters in the FC layers. Syncing the parameters of these layers would have a large overhead, so it's faster to compute the FC layers on only one GPU.
st117641
A Pythonic way to do data parallel over a sequence of modules is to group them in a container and use data parallel on that container; you could just remove the data parallel code from features. And if you really need to do it differently, we have gather, scatter, replicate and parallel_apply inside torch.nn.parallel. Just keep in mind that they're not documented right now and they still might change.
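For the model above, a sketch of that grouping (the Flatten helper is hypothetical glue added here so the classifier gets 2-D input; sizes are placeholders):

class Flatten(nn.Module):
    def forward(self, x):
        return x.view(x.size(0), -1)

class FinetuneModel(nn.Module):
    def __init__(self, pretrained_model, ngpu=2):
        super(FinetuneModel, self).__init__()
        self.ngpu = ngpu
        # one container for both stages, so there is a single scatter/gather per forward
        self.net = nn.Sequential(
            pretrained_model,            # conv features
            Flatten(),
            nn.Dropout(),
            nn.Linear(512 * 4 * 4, 2048),
        )

    def forward(self, x):
        if self.ngpu:
            return nn.parallel.data_parallel(self.net, x, device_ids=range(self.ngpu))
        return self.net(x)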
st117642
Hi, I know there is a zero_grad function for the network, e.g.:

scores = vgg16(image)
vgg16.zero_grad()

When I do the backward pass for the network, the image also gets some gradient, right? If I want to zero all the gradients of the image, how can I do it? image is a Variable, and image.zero_grad() gives an error. Thanks!
st117643
Hello all, I am basically trying to do mean-variance normalization in embedding space. I have a RNN that embeds a sequence, and then calculates the mean and standard deviation. These are used to normalize the input of another sub-network. This model seems to work (atleast my training and validation losses behave themselves). I now wanted to try to learn weights corresponding to the mean and variance. Sort of like batch-norm (in the most hand-wavy way possible) Q. Ofcourse, pytorch is magical and my model seems to be training. But am I doing it correctly? Q. Do I need to do something else with the 2 new weight matrices I have introduced, or will they automatically be added to the parameter list of the RNN class? Q. What would be the best way to check if these weights are actually learning something? some sort of check for the gradients w.r.t to these parameters? My code : class RNN(nn.Module): def __init__(self, input_size, hidden_size, num_layers): super(RNN, self).__init__() self.hidden_size = hidden_size self.num_layers = num_layers self.input_size = input_size self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True) #project down to feature dimension self.proj = nn.Linear(hidden_size, 840) #Define weights for the channel mean and variance self.W_mean = nn.Parameter(torch.randn(840,840)) self.W_std = nn.Parameter(torch.randn(840,840)) def forward(self,seq,x): # Set initial states h0 = Variable(torch.zeros(self.num_layers, seq.size(0), self.hidden_size).cuda()) c0 = Variable(torch.zeros(self.num_layers, seq.size(0), self.hidden_size).cuda()) out, _ = self.lstm(seq, (h0,c0)) #project lstm embeddings to feature size out = out.contiguous().view(out.size(0)*out.size(1),self.hidden_size) proj = self.proj(out) proj = proj.view(-1,seq.size(1),x.size(1)) #840 dimensional average embedding avg_emb = torch.mean(proj,1) std_emb = torch.std(proj,dim=1) #subtract the avg embedding from the speech frames avg_emb = avg_emb.view(-1,x.size(1)) std_emb = std_emb.view(-1,x.size(1)) #mean vaiance normalization x_norm = (x - torch.mm(avg_emb,self.W_mean))/torch.mm(std_emb,self.W_std) To summarize, I am projecting the LSTM output to have the same dimension as the input of my other network i.e. 840. This also corresponds to the size of the W_mean and W_std weight matrices, i.e. (840x840) Thanks, Gautam
st117644
PS. Training is not going as well as I thought. Very jumpy as compared to the model without weights.
st117645
I'm not able to calculate the inverse of tensors without it throwing an Intel MKL fatal error. In general, all other modules I use seem to be working fine.

>>> import torch
>>> a = torch.rand(5, 5)
>>> a.sqrt()
 0.7312  0.9105  0.5279  0.7337  0.8724
 0.6445  0.9235  0.5381  0.9239  0.8973
 0.7807  0.8782  0.3331  0.6407  0.2607
 0.7194  0.9982  0.4491  0.7978  0.7362
 0.7902  0.4041  0.9476  0.5784  0.7240
[torch.FloatTensor of size 5x5]
>>> a.inverse()
Intel MKL FATAL ERROR: Error on loading function mkl_lapack_ps_avx2_sgetrf_small.

CudaFloatTensor has the same problem. Any ideas?
st117646
did you install pytorch from source (instead of binaries)? If not, how did you install the binaries? through pip wheel or conda?
st117647
Pip install, Python 2, cuda 8. pip install http://download.pytorch.org/whl/cu80/torch-0.1.12.post2-cp27-none-linux_x86_64.whl
st117648
Re-installing with conda resolved this for me. It seems the issue is with pip package dependencies.
st117649
looks like a screw-up on my side when packaging binaries (sorry about that). I’ve made a note of it and will fix it in the next pip wheels (I didn’t ship the AVX2 extensions of MKL)
st117650
If I refer to the documentation, we have:

def __len__(self):
    return len(self._modules)

in the Sequential class. But if I try len(sequential_object) I get:

TypeError: object of type 'Sequential' has no len()

Am I doing something wrong? I am quite new to python and I might be wrong.
st117651
np.float32(1.) + Tensor([1.]) works as expected (returns a Tensor). But:

Tensor([1.]) + np.float32(1.) fails with TypeError: add received an invalid combination of arguments - got (numpy.float32)

np.float32(1.) + Variable(Tensor([1.])) returns a very strange numpy array:

array([[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[Variable containing: 2 [torch.FloatTensor of size 1] ]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]], dtype=object)

Variable(Tensor([1.])) + np.float32(1.) fails with TypeError: add received an invalid combination of arguments - got (numpy.float32)

I expected to get a Variable when adding a float32 to a Variable, and a Tensor when adding a float32 to a Tensor. Is it then a bug?
st117652
We do not support mixed-type addition (np/torch or torch/np). In the 1st case, np treats the Tensor as an iterable, and it kind of magically worked out. In the 3rd case, np treats the Variable as an iterable and a Variable's x[1] is x itself, so there's this weird recursive indexing. There's not much we can do to fix it on the PyTorch side, but we can introduce an autograd.Scalar (which we are planning to do), and then a proper error message will be generated.
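In the meantime, the safe pattern is to convert the numpy scalar to a Python number first, e.g. (continuing the imports from the question):

x = torch.Tensor([1.]) + float(np.float32(1.))              # a Tensor containing 2
v = Variable(torch.Tensor([1.])) + float(np.float32(1.))    # a Variable containing 2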
st117653
Hi there, I have got the following ModuleList named path1 in my network definition: ModuleList ( (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True) (2): LeakyReLU (0.1, inplace) (3): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1)) (4): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (5): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True) (6): LeakyReLU (0.1, inplace) (7): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1)) (8): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (9): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True) (10): LeakyReLU (0.1, inplace) (11): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (12): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True) (13): LeakyReLU (0.1, inplace) (14): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (15): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True) (16): LeakyReLU (0.1, inplace) (17): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1)) (18): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (19): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True) (20): LeakyReLU (0.1, inplace) (21): Conv2d(256, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (22): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True) (23): LeakyReLU (0.1, inplace) (24): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (25): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True) (26): LeakyReLU (0.1, inplace) (27): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1)) (28): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (29): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True) (30): LeakyReLU (0.1, inplace) (31): Conv2d(512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (32): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True) (33): LeakyReLU (0.1, inplace) (34): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (35): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True) (36): LeakyReLU (0.1, inplace) (37): Conv2d(512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (38): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True) (39): LeakyReLU (0.1, inplace) (40): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (41): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True) (42): LeakyReLU (0.1, inplace) ) and also here is my input: inputs = autograd.Variable(torch.randn(1,3,416,416)) In my forward function, I do below loop: def forward(self, input): out = input for layer in self.path1: out = layer(out) return out At first iteration of the loop, I mean for the first convolution layer, I receive below error: RuntimeError: Need input of dimension 4 and input.size[1] == 32 but got input to be of shape: [1 x 3 x 416 x 416] at /py/conda-bld/pytorch_1493677666423/work/torch/lib/THNN/generic/SpatialConvolutionMM.c:47 Could you please tell me how can I solve this problem? I think something went wrong with pytorch. By the way. I would like to say that I jointly work with keras with tensorflow backend and pytorch (Both of them were installed on anaconda). Does it make sense that this joint working causes the error
st117654
there is a clear error message there that should help you: RuntimeError: Need input of dimension 4 and input.size[1] == 32 but got input to be of shape: [1 x 3 x 416 x 416]
st117655
Hi, does anyone know the difference between these two definitions? Because when I tried to train the network, they had very different performances. class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3, 16, 5, padding=2) self.pool = nn.MaxPool2d(2, 2) self.dropout = nn.Dropout2d(p=0.5) self.conv2 = nn.Conv2d(16, 16, 5, padding=2) self.conv3 = nn.Conv2d(16, 400, 11, padding=5) self.conv4 = nn.Conv2d(400, 200, 1) self.conv5 = nn.Conv2d(200, 1, 1) def forward(self, x): x = self.dropout(self.pool(F.relu(self.conv1(x)))) x = self.dropout(self.pool(F.relu(self.conv2(x)))) x = self.conv3(x) x = self.conv4(x) x = F.relu(self.conv5(x)) return x and class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.single5 = nn.Sequential( nn.Conv2d(3, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2, 2), nn.Dropout2d(p=0.5), nn.Conv2d(16, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2, 2), nn.Dropout2d(p=0.5), nn.Conv2d(16, 400, 11, padding=5), nn.Conv2d(400, 200, 1) ) self.conv = nn.Conv2d(200, 1, 1) def forward(self, x): x = self.single5(x) x = F.relu(self.conv(x)) return x
st117656
both of them are exactly equivalent. Maybe both of them had different weight initializations and hence they got different performance.
st117657
What is the correct usage of torch.cuda.device? set_device has the comment "Usage of this function is discouraged in favor of device". But when I try to use it to set the current device, it doesn't work, whereas set_device does:

(Pdb) torch.cuda.device_count()
2
(Pdb) torch.cuda.current_device()
0
(Pdb) torch.cuda.device(1)
<torch.cuda.device object at 0x2afd1ce77390>
(Pdb) torch.cuda.current_device()
0
(Pdb) torch.cuda.set_device(1)
(Pdb) torch.cuda.current_device()
1
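For reference, torch.cuda.device is a context manager, so it changes the current device only inside a with block, which is why calling it bare as above has no lasting effect. A sketch:

with torch.cuda.device(1):
    a = torch.cuda.FloatTensor(10)        # allocated on GPU 1
    print(torch.cuda.current_device())    # 1 inside the block
print(torch.cuda.current_device())        # back to the previous device outside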
st117658
Hello, I am doing the following:

loss = criterion(input, target)

with both being Variables, but I end up with the following error, which I am not understanding:

File "grad_cam.py", line 93, in <module>
  loss = criterion(input, target)
File "/home/lelouedec/.local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 202, in __call__
  result = self.forward(*input, **kwargs)
File "/home/lelouedec/.local/lib/python2.7/site-packages/torch/nn/modules/loss.py", line 316, in forward
  self.weight, self.size_average)
File "/home/lelouedec/.local/lib/python2.7/site-packages/torch/nn/functional.py", line 452, in cross_entropy
  return nll_loss(log_softmax(input), target, weight, size_average)
File "/home/lelouedec/.local/lib/python2.7/site-packages/torch/nn/functional.py", line 367, in log_softmax
  return _functions.thnn.LogSoftmax()(input)
File "/home/lelouedec/.local/lib/python2.7/site-packages/torch/nn/_functions/thnn/auto.py", line 110, in forward
  self._backend = type2backend[type(input)]
File "/home/lelouedec/.local/lib/python2.7/site-packages/torch/_thnn/__init__.py", line 15, in __getitem__
  return self.backends[name].load()
KeyError: <class 'torch.LongTensor'>

Any idea why?
st117659
Is the input Variable containing a torch.LongTensor? I think CrossEntropyLoss is only implemented for FloatTensor and DoubleTensor.
st117660
I tried with a FloatTensor for the input and I got the following:

TypeError: FloatClassNLLCriterion_updateOutput received an invalid combination of arguments - got (int, torch.FloatTensor, torch.FloatTensor, torch.FloatTensor, bool, NoneType, torch.FloatTensor), but expected (int state, torch.FloatTensor input, torch.LongTensor target, torch.FloatTensor output, bool sizeAverage, [torch.FloatTensor weights or None], torch.FloatTensor total_weight)

That is why I used a LongTensor.
st117661
I am just adding this last question: I get this assertion failure:

Assertion `THIndexTensor_(size)(target, 0) == batch_size' failed

Do target and input need to have the same size?
st117662
input should be (batch_size, n_label) and target should be (batch_size) with values in [0, n_label-1].
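That is, a minimal sketch:

criterion = nn.CrossEntropyLoss()
input = Variable(torch.randn(4, 10))                  # batch_size=4, n_label=10, FloatTensor
target = Variable(torch.LongTensor([1, 0, 9, 3]))     # batch_size=4, class indices in [0, 9]
loss = criterion(input, target)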
st117663
Hi, I am new to PyTorch. I am trying to use nn.Sequential to build a single-layer LSTM (just for the sake of trial):

rnn = nn.Sequential(
    nn.LSTM(10, 20, 2)
)
input = Variable(torch.randn(100, 3, 10))
h0 = Variable(torch.randn(2, 3, 20))
output, hn = rnn(input, (h0))

However, this gives the error "forward() takes exactly 2 arguments (3 given)", although the same example works if I don't use Sequential:

rnn = nn.LSTM(10, 20, 2)
input = Variable(torch.randn(100, 3, 10))
h0 = Variable(torch.randn(2, 3, 20))
output, hn = rnn(input, (h0))

Can you please advise on this? Thank you.
st117664
An LSTM has two internal states: h and c. I don't know why Sequential behaves that way, but try either passing both h0 and c0, or not passing them at all and letting the model do the default initialisation. Let me know how it goes...
EDIT: not passing any state value works for me...
st117665
You're right, I just tried it, and it works without passing h and c. However, I don't understand whether this is correct or not. In the examples in the documentation, you have to pass h and c. How can I do this in the case of a Sequential model?
st117666
nn.Sequential is not meant for building a model that operates on time sequences; nn.LSTM will do that out of the box. nn.Sequential is for stringing together several layers that don’t use time sequences into one model that runs the layers one after another.
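So the usual pattern for mixing an LSTM with other layers is a small custom nn.Module whose forward handles the sequence dimension and the hidden state explicitly, for example (a sketch with arbitrary sizes; the Conv1d and the transposes are just one way to apply a convolution over the time axis):

class ConvLSTMClassifier(nn.Module):
    def __init__(self):
        super(ConvLSTMClassifier, self).__init__()
        self.conv = nn.Conv1d(10, 16, 3, padding=1)
        self.lstm = nn.LSTM(16, 20, num_layers=2)
        self.fc = nn.Linear(20, 5)

    def forward(self, x):                                    # x: (seq_len, batch, 10)
        c = x.transpose(0, 1).transpose(1, 2).contiguous()   # (batch, 10, seq_len) for Conv1d
        c = self.conv(c)                                     # (batch, 16, seq_len)
        c = c.transpose(1, 2).transpose(0, 1).contiguous()   # back to (seq_len, batch, 16)
        out, (h, cell) = self.lstm(c)                        # default zero initial (h, c)
        return self.fc(out[-1])                              # prediction from the last time step

model = ConvLSTMClassifier()
output = model(Variable(torch.randn(100, 3, 10)))            # shape (3, 5)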
st117667
@jekbradbury I see. I thought it was similar to the Sequential model in Keras. What if I want to stack Convolution, LSTM and Dense layers in one model? Is there an example of how to do this in PyTorch? Also, what if I want to stack LSTM layers with different numbers of hidden units? How can I do that?