st117868
I wonder if there is a batched version of torch.eig, or a way to call it asynchronously on a GPU. For example:

aaa = torch.randn((5, 5))
mat = torch.mm(torch.t(aaa), aaa)
mat = mat.pin_memory().cuda(async=True)
for ii in range(1000):
    torch.eig(mat, eigenvectors=True)

How can I run these 1000 torch.eig computations simultaneously on the GPU?
st117869
The CUDA kernel queue is of size 1023, I think. So maybe you can do 10 or 100 of these asynchronously, but possibly not 1000 (as each .eig call might itself launch multiple kernels).
st117870
Cool, thanks! That's good to know. Am I calling it the right way, or does a new function need to be written?
st117871
Is this a bug?

import torch
t = torch.HalfTensor([0])
t = torch.autograd.Variable(t)

This code causes the following error:

Traceback (most recent call last):
  File "a.py", line 4, in <module>
    t = torch.autograd.Variable(t)
RuntimeError: Variable data has to be a tensor, but got HalfTensor
st117872
Hi, CPU half tensors do not actually exist. Using cuda HalfTensors works as expected:

import torch
t = torch.cuda.HalfTensor([0])
t = torch.autograd.Variable(t)
st117873
Thank you for the reply. Do you intend to implement a CPU HalfTensor? I think torch.HalfTensor should either be removed or raise a warning until it is supported… this is very unexpected behaviour.
st117874
Dear guys, I found that some functions in PyTorch like torch.max() or torch.sum() don't accept a keepdim argument, even though it appears in the documentation. For example, if I run the following code:

import numpy as np
import torch
a = torch.Tensor(np.arange(6).reshape(2, 3))
print torch.max(a, keepdim=True)

I get the error:

print torch.max(a, keepdim=True)
TypeError: torch.max received an invalid combination of arguments - got (torch.FloatTensor, keepdim=bool), but expected one of:
 * (torch.FloatTensor source)
 * (torch.FloatTensor source, torch.FloatTensor other)
      didn't match because some of the keywords were incorrect: keepdim
 * (torch.FloatTensor source, int dim)

Do you have any solution for this problem? Moreover, when I run torch.max(a) I get a float number, but I expect a 2-tuple: a tensor with the max values and a tensor with the indices. I think the code should be checked.
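For reference, on releases where keepdim is supported it is only meaningful together with a dim argument, which also explains why torch.max(a) without a dim returns a single value rather than a tuple. A small sketch, assuming a recent PyTorch API:

import torch

a = torch.Tensor([[0., 1., 2.], [3., 4., 5.]])

# Reduce over one dimension: returns (values, indices); keepdim keeps the reduced dim as size 1
values, indices = torch.max(a, 1, keepdim=True)
print(values)    # 2x1 tensor: [[2.], [5.]]
print(indices)   # 2x1 tensor: [[2], [2]]

# Without a dim argument, the reduction is over all elements and yields a single value
print(torch.max(a))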
st117875
Can I use the numbapro or some other python cuda library to write the cuda python code for the pytorch extension? I find that the present way to write pytorch extension code running on gpu is mostly writing the cuda file with C language. I want to directly write the cuda file with python in pytorch.
st117876
Not numba/numbapro, but you can use cupy: https://gist.github.com/szagoruyko/dccce13465df1542621b728fcc15df53 (cupy-pytorch-ptx.py). The gist starts like this:

import torch
from cupy.cuda import function
from pynvrtc.compiler import Program
from collections import namedtuple

a = torch.randn(1,4,4).cuda()
b = torch.zeros(a.size()).cuda()
kernel = '''
extern "C"
...

(see the gist for the full kernel)
st117877
I want to pad the output of Conv2d with feature maps of zeros, but the only mode available for 5-D input is 'replicate'. Is there any other way to pad with zeros?
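One way to append zero feature maps along the channel dimension is simply to concatenate a zeros tensor; a minimal sketch (the shapes and number of padded channels are made up):

import torch
from torch.autograd import Variable

x = Variable(torch.randn(8, 64, 32, 32))        # output of a Conv2d: N x C x H x W
pad_channels = 16
zeros = Variable(torch.zeros(x.size(0), pad_channels, x.size(2), x.size(3)))
x_padded = torch.cat([x, zeros], 1)             # N x (C + pad_channels) x H x W
print(x_padded.size())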
st117878
Given a feature map x of size n*c*w*h, for each sample n I want to set some specific channels to zero. I multiply x by a mask, but the accuracy is not satisfactory. Code:

def forward(self, x):
    mask = Variable(torch.ones(x.size()))
    mask = mask.cuda()
    x = x * mask
    x = F.avg_pool2d(x, x.size()[2:])
    x = x.view(x.size(0), -1)
    x = self.classifier(x)
    return x

x = x * mask does not change the value of x, so why do I get a lower accuracy than when I do not do this multiplication? Is there anything wrong in the backward pass during training?
st117879
You are multiplying by the value 1, because mask is torch.ones. The values will not change in that case, as expected.
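To actually drop channels, the mask needs zeros at the channels you want to remove; a small sketch with hypothetical channel indices:

import torch
from torch.autograd import Variable

x = Variable(torch.randn(4, 8, 5, 5))                 # N x C x H x W
mask = torch.ones(x.size())
mask.index_fill_(1, torch.LongTensor([1, 3]), 0)      # zero out channels 1 and 3 (arbitrary choice)
x = x * Variable(mask)                                # those channels (and their gradients) become zero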
st117880
My ResNet is not working on some images. Here is my probe of sizes at the input layer, the layer before avgpool, and the layer after avgpool:

Right1.jpg: torch.Size([1, 3, 256, 382]) -> torch.Size([1, 2048, 8, 12]) -> torch.Size([1, 2048])
Right2.jpg: torch.Size([1, 3, 256, 341]) -> torch.Size([1, 2048, 8, 11]) -> torch.Size([1, 2048])
wrong.jpg: torch.Size([1, 3, 256, 454]) -> torch.Size([1, 2048, 8, 15]) -> torch.Size([1, 4096])

It seems that the avgpool layer goes wrong. Are there any remedies?
st117881
Hello Guys, I have got a simple question. Here is the architecture of resnet18. ResNet ( (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True) (relu): ReLU (inplace) (maxpool): MaxPool2d (size=(3, 3), stride=(2, 2), padding=(1, 1), dilation=(1, 1)) (layer1): Sequential ( (0): BasicBlock ( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True) (relu): ReLU (inplace) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True) ) (1): BasicBlock ( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True) (relu): ReLU (inplace) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True) ) ) (layer2): Sequential ( (0): BasicBlock ( (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True) (relu): ReLU (inplace) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True) (downsample): Sequential ( (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True) ) ) (1): BasicBlock ( (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True) (relu): ReLU (inplace) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True) ) ) (layer3): Sequential ( (0): BasicBlock ( (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True) (relu): ReLU (inplace) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True) (downsample): Sequential ( (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True) ) ) (1): BasicBlock ( (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True) (relu): ReLU (inplace) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True) ) ) (layer4): Sequential ( (0): BasicBlock ( (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True) (relu): ReLU (inplace) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True) (downsample): Sequential ( (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True) ) ) (1): BasicBlock ( (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True) (relu): ReLU (inplace) (conv2): Conv2d(512, 512, kernel_size=(3, 3), 
stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True) ) ) (avgpool): AvgPool2d () (fc): Linear (512 -> 1000) ) I would like to make a branch in layer1, BasicBlock 1, after conv2. That is, I would like to take the output of that layer and branch off it. One important point for me is to keep using the pretrained ResNet weights. Could you please help me with how to do that?
st117882
Thanks for your response, but I could not follow what you mean. Could you please write a small snippet here showing how to do that?
st117883
Hi, ResNet model definitions are here: github.com pytorch/vision/blob/master/torchvision/models/resnet.py#L106 40 class ResNet(nn.Module): def __init__(self, block, layers, num_classes=1000): self.inplanes = 64 super(ResNet, self).__init__() self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False) self.bn1 = nn.BatchNorm2d(64) self.relu = nn.ReLU(inplace=True) self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) self.layer1 = self._make_layer(block, 64, layers[0]) self.layer2 = self._make_layer(block, 128, layers[1], stride=2) self.layer3 = self._make_layer(block, 256, layers[2], stride=2) self.layer4 = self._make_layer(block, 512, layers[3], stride=2) self.avgpool = nn.AvgPool2d(7, stride=1) self.fc = nn.Linear(512 * block.expansion, num_classes) for m in self.modules(): if isinstance(m, nn.Conv2d): n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels m.weight.data.normal_(0, math.sqrt(2. / n)) Look at _make_layer function and how the residual blocks were coded: github.com pytorch/vision/blob/master/torchvision/models/resnet.py#L57 14 if self.downsample is not None: residual = self.downsample(x) out += residual out = self.relu(out) return out class Bottleneck(nn.Module): expansion = 4 def __init__(self, inplanes, planes, stride=1, downsample=None): super(Bottleneck, self).__init__() self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False) self.bn1 = nn.BatchNorm2d(planes) self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False) self.bn2 = nn.BatchNorm2d(planes) self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False) So you could adapt these however you want.
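For illustration only, one way to branch off an intermediate activation while keeping the pretrained weights is to wrap the stock model and expose the extra output. This is a rough sketch; the layer chosen, the branch head, and its sizes are assumptions, not taken from the thread:

import torch.nn as nn
import torchvision.models as models

class BranchedResNet(nn.Module):
    def __init__(self):
        super(BranchedResNet, self).__init__()
        base = models.resnet18(pretrained=True)          # pretrained weights are reused
        self.stem = nn.Sequential(base.conv1, base.bn1, base.relu, base.maxpool)
        self.layer1 = base.layer1
        self.rest = nn.Sequential(base.layer2, base.layer3, base.layer4, base.avgpool)
        self.fc = base.fc
        self.branch = nn.Conv2d(64, 32, kernel_size=3, padding=1)  # new, randomly initialised head

    def forward(self, x):
        x = self.layer1(self.stem(x))
        branch_out = self.branch(x)                       # branch taken after layer1
        x = self.rest(x)
        x = x.view(x.size(0), -1)
        return self.fc(x), branch_out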
st117884
Thanks for your response! Could you please write a concrete example? I mean, please add a conv after bn2 in basic block 0 of layer2. I would like to make a branch around that point.
st117885
"I mean please add a conv after bn2 in basic block 0 of layer2."

Examples are given as pointers for you to get going. You can't expect someone to write the exact example you want (or rather, do the work for you).
st117886
I only have a year or so of experience doing ML, but it has all been in TensorFlow. I'd like some help expanding my vision to the possibilities allowed (or facilitated) by dynamic computation graphs. I imagine you could come up with some crazy network structures that weren't possible/easy before. Some ideas I've had - these aren't meant to be good ideas, just different:

- graph-structured neural networks - each node is a few layers, and nodes can conditionally pass messages to each other depending on their output, etc. Similarly, state machines of some kind
- networks that change structure partway through - after you get a [batch size, 512, 13, 13] tensor out of a CNN, maybe you could cluster those vectors and choose different fully connected layers
- pick a random activation function every iteration
- layers that adjust size
- skipping layers
- re-running things through portions of a network

Am I on the right track? What other (potentially wacky) things could I do with dynamic graphs?
st117887
Hi, I implemented a many-to-one RNN to predict a score from a given sequence. The model does regression on the hidden vector of the last RNN (GRU) layer at the last time step, and the input sequences have various lengths. It runs without errors and I trained the model, but after training I found that the output of the model didn't change despite different inputs. Here is the code that defines the model:

embedding_dim = 200
hidden_size = 200
num_layer = 2

class RNNReg(nn.Module):
    def __init__(self, num_layer=2, hidden_size=200, bidirectional=True):
        super(RNNReg, self).__init__()
        self.num_layer = num_layer
        self.hidden_size = hidden_size
        self.bidirectional = bidirectional
        self.embedding = nn.Embedding(vocaNum, embedding_dim, padding_idx=0, max_norm=1)
        self.gru = nn.GRU(input_size=embedding_dim, hidden_size=hidden_size,
                          num_layers=num_layer, batch_first=False,
                          bidirectional=bidirectional, dropout=0.2)
        self.fc = nn.Linear(in_features=(int(bidirectional)+1)*hidden_size, out_features=1)

    def forward(self, x, lengths):
        input = self.embedding(x)
        input = pack_padded_sequence(input, lengths, batch_first=True)
        output, hidden = self.gru(input)
        #output = pad_packed_sequence(output, batch_first=True)
        #output = self.fc(output[0][:, -1, :])
        output = hidden[(self.num_layer-1)*(int(self.bidirectional)+1):, :, :]
        output = output.permute(1, 0, 2).contiguous()
        output = output.view(-1, (int(self.bidirectional)+1)*self.hidden_size)
        output = self.fc(output)
        return output

And here is the test code that computes the output for the sequence variable test:

test = u"영화 재미 있다"
test = twitter.morphs(test)
test = [[voca2index[word] if word in voca2index else voca2index['<UNK>'] for word in test]]
print(" ".join([index2voca[t] for t in test[0]]))
test, lengths = addPad(test)
test = Variable(test).cuda()
predict = reg(test, lengths)
print(predict)

How can I solve this problem?
st117888
It's difficult to say why the output is not changing. Investigate where in the network the outputs stop differing, comparing the behaviour on training and test inputs.
st117889
When I follow the signature for torch.cat described on the docs 11, and try to pass a dim argument, I get the error TypeError: cat() takes no keyword arguments I also noticed that when I do help(torch.cat), the documentation that pops up uses dimension instead of dim as the name of the argument which indicates along which axis to concatenate the tensors. I understand that the function works by passing the dimension as a non-keyword argument, but then isn’t documenting it with dim=0 misleading?
st117890
Well, I also hit this error whether I used dim or dimension; if I just drop the keyword and pass the dimension as a plain positional argument, it works.
st117891
@pietromarchesi I believe this was fixed in https://github.com/pytorch/pytorch/issues/1028 to support keyword arguments, and might have gone into the 0.1.12 release.
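For reference, a small sketch of both call styles; the positional form works on older releases, and the keyword form should work once the fix above is in your installed version:

import torch

a = torch.randn(2, 3)
b = torch.randn(2, 3)
c = torch.cat((a, b), 1)         # positional dimension argument
# c = torch.cat((a, b), dim=1)   # keyword form, on releases that include the fix
print(c.size())                  # (2, 6)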
st117892
I am looking for example code for a simple Bayesian NN. Is it generally implemented in PyTorch in a similar way to a traditional CNN?
st117893
Yes, from what I understand about Bayesian NNs, you don't have to do anything drastically different in terms of implementation.
st117894
Hi everyone, I have an RNN model that takes as input 64 (batch size) x 100 (time steps) x 3 (3 labels to be predicted; 2 of them have 64 classes, and the 3rd has 2 classes). The model output has the same shape. I tried the CrossEntropyLoss loss function, and it gave an error that it needs only 2-D or 4-D tensors. What is wrong in what I am doing? What is a good loss function for my problem?
st117895
Hi @osm3000, the cross entropy loss doesn't know about timesteps or multiple classes. Last time I needed that for a single class, I used

loss = lossfn(scores.view(-1, batch_size*time_steps), labels.contiguous().view(-1))

(the contiguous was needed because the view failed without it, due to the minibatch preparation method; you could try to do without). If you have three labels, you might just hand back three score vectors and add three cross entropy losses. But this is only one way to do it, and you might look at what best fits your purpose. For example, Sean Robertson just adds the losses over the sequence steps in his RNN-for-Shakespeare tutorial (the notebook is an excellent read, too, but it is harder to link to specific lines), probably because the outputs are generated one by one anyway. Best regards, Thomas
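As an illustration of the reshaping idea for one of the three heads (the shapes and the three-head split are assumptions about the model, not taken from the original post):

import torch
import torch.nn as nn
from torch.autograd import Variable

batch_size, time_steps = 64, 100
scores1 = Variable(torch.randn(batch_size, time_steps, 64))                 # head 1: 64 classes
labels1 = Variable(torch.LongTensor(batch_size, time_steps).random_(0, 64)) # dummy targets

loss_fn = nn.CrossEntropyLoss()
# flatten batch and time so that each row is one (prediction, target) pair
loss1 = loss_fn(scores1.view(-1, 64), labels1.view(-1))
# repeat for the other two heads and sum the three losses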
st117896
Thank you @tom for your reply. I really don’t know, I tried everything with it. It only worked when I used Log Softmax instead of Softmax, and used NLLLoss instead of CrossEntropy
st117897
If you use CrossEntropyLoss, you don't need to put Softmax or LogSoftmax at the end.
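The two formulations should give the same value, for example:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

logits = Variable(torch.randn(4, 10))
target = Variable(torch.LongTensor([1, 0, 3, 9]))

loss_a = nn.CrossEntropyLoss()(logits, target)          # raw scores go in directly
loss_b = nn.NLLLoss()(F.log_softmax(logits), target)    # equivalent two-step version (pass dim=1 on newer releases)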
st117898
Related: are model.zero_grad() and optimizer.zero_grad() equivalent when using an optimizer?
st117899
@Nick_Young yes, the gradient buffers are never zeroed out automatically. @lgelderloos only if you created your optimizer as optimizer = optim.some_optim_func(model.parameters(), ...). Basically, model.zero_grad() will zero the gradients of all parameters in the model, while optimizer.zero_grad() will zero the gradients of all parameters associated with this optimizer. Depending on how you created the optimizer, they will be the same or not.
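Concretely, a small sketch of the case where the two are equivalent:

import torch.nn as nn
import torch.optim as optim

model = nn.Linear(4, 2)
optimizer = optim.SGD(model.parameters(), lr=0.01)
# Because the optimizer was built from all of model.parameters(),
# these two calls clear the same gradient buffers.
optimizer.zero_grad()
model.zero_grad()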
st117900
Hi there, I’m very new to PyTorch so please bear with me. I’ve been following and reading tutorials to get familiar with pytorch. The tutorials all use torchvision package which contains dataloaders for CIFAR-10/100, COCO etc. I wanted to know if torchvision’s functionality can be extended to any non-standard dataset that I may have. If not, then can one write their own custom dataloaders and still use the transforms features defined in torchvision? Thanks
st117901
Of course, as long as you write your own Dataset, which is very easy to implement. Then you can utilize the speedup of multiprocessing by using DataLoader. You may refer to ImageFolder, a standard implementation of Dataset: github.com/pytorch/vision/blob/master/torchvision/datasets/folder.py#L62-L91

def default_loader(path):
    from torchvision import get_image_backend
    if get_image_backend() == 'accimage':
        return accimage_loader(path)
    else:
        return pil_loader(path)

class ImageFolder(data.Dataset):
    """A generic data loader where the images are arranged in this way: ::

        root/dog/xxx.png
        root/dog/xxy.png
        root/dog/xxz.png

        root/cat/123.png
        root/cat/nsdf3.png
        root/cat/asd932_.png
    """

(excerpt; see the linked file for the full implementation)
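A minimal custom Dataset might look like this (the data source and field names are made up); DataLoader and the torchvision transforms can then be used on top of it:

import torch
from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    def __init__(self, samples, labels, transform=None):
        self.samples = samples          # e.g. a list of PIL images or file paths
        self.labels = labels
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, index):
        x = self.samples[index]
        if self.transform is not None:
            x = self.transform(x)       # torchvision transforms plug in here
        return x, self.labels[index]

# loader = DataLoader(MyDataset(samples, labels), batch_size=32, shuffle=True, num_workers=2)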
st117902
Thanks, just one more question - does this class only support image data as of now, or can it be used without any modifications for cases like text data for training RNNs?
st117903
It supports all kinds of datasets, and it could also be used to load raw text files. But you need to write your own loader (read the file into memory) and transform (turn the text data into tensors). As for text datasets, try pytorch/text on GitHub: data loaders and abstractions for text and NLP. https://github.com/pytorch/text
st117904
hi pytorch I have a fully convolutional network that operates on some training data that contains invalid values. How can I mask out these regions for the purpose of calculating loss, given the binary masks?
st117905
You can mask like this:

mask = (output != invalid_value)
mask_target = target[mask]
mask_output = output[mask]
loss = calculate_loss(mask_output, mask_target)
st117906
In the following procedure, how should the numpy arrays be transformed into tensors? Thanks.

OUT = np.zeros((N, F, Ho, Wo))
for f in xrange(F):
    for i in xrange(Ho):
        for j in xrange(Wo):
            OUT[:, f, i, j] = np.sum(x_pad[:, :, i*S : i*S+HH, j*S : j*S+WW] * w[f, :, :, :], axis=(1, 2, 3))
st117907
Yes, I know that, but I need it in tensor form to complete the above code. Could you help me with it? Thanks a lot.
st117908
Do something like this whenever you need to create a tensor from a numpy array, after the loop completes:

OUT_tensor = torch.from_numpy(OUT)
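If the goal is the convolution itself in tensor form, the whole loop can usually be replaced by the built-in conv. A sketch reusing x_pad, w, and S from the snippet above (so the input is assumed to be already zero-padded, with no dilation):

import torch
import torch.nn.functional as F
from torch.autograd import Variable

x_pad_t = Variable(torch.from_numpy(x_pad).float())   # N x C x Hp x Wp
w_t = Variable(torch.from_numpy(w).float())           # F x C x HH x WW
OUT_t = F.conv2d(x_pad_t, w_t, stride=S)              # N x F x Ho x Wo, same result as the loop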
st117909
I use BCELoss, but the output of my model is a torch.FloatTensor; how do I change it?
st117910
train_dataset = dsets.CIFAR100(root='../../data/', train=True, transform=transform, download=True)
images, labels = train_dataset[0]
...
images = Variable(image.cuda())  # error!
labels = Variable(label.cuda())
outputs = resnet(images)
loss = criterion(outputs, labels)

I want to turn this into the net's inputs and compute the net's gradients. How do I convert the image to a Variable? Thanks.
st117911
In general I use torch.utils.data.DataLoader to load images, but I have to filter out some images from the dataset. How can I exclude the images I don't want to use with torch.utils.data.DataLoader? Thanks.
st117912
I don't know how to do it with the DataLoader directly; you probably need to do it yourself by writing a function that filters the images.
st117913
Make sure your transform contains torchvision.transforms.ToTensor. torch.utils.data.DataLoader is not necessary; just rewrite your dsets.CIFAR100's __getitem__ and __len__ to filter out some images.
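Alternatively, one way to filter is to wrap an existing dataset and keep only the indices you want; a sketch, with a made-up keep-condition (note that this scans the whole dataset once at construction time):

import torch.utils.data as data

class FilteredDataset(data.Dataset):
    def __init__(self, base_dataset, keep_fn):
        self.base = base_dataset
        # precompute which indices survive the filter
        self.indices = [i for i in range(len(base_dataset)) if keep_fn(base_dataset[i])]

    def __len__(self):
        return len(self.indices)

    def __getitem__(self, idx):
        return self.base[self.indices[idx]]

# e.g. keep only samples whose label is below 10 (hypothetical condition):
# filtered = FilteredDataset(train_dataset, lambda sample: sample[1] < 10)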
st117914
In the official documents I find this method: torch.nn.init.xavier_uniform(tensor, gain=1), but when I use it, it raises an error:

xavier = nn.init.xavier_normal
AttributeError: 'module' object has no attribute 'init'

I also imported torch in IPython to test it, and I didn't find init there either. Why?
st117915
You need to import it before you can use it. It's a submodule:

import torch.nn.init as weight_init
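Usage then looks something like this (sketch):

import torch.nn as nn
import torch.nn.init as weight_init

layer = nn.Linear(20, 10)
weight_init.xavier_uniform(layer.weight)   # named xavier_uniform_ on newer releases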
st117916
I was reading the documentation of the torch.multinomial() method in the online documentation and tried this in an IPython environment:

weights = torch.Tensor([0, 10, 3, 0])
print weights  # out: [torch.FloatTensor of size 4]
torch.multinomial(weights, 5)

Of course, an error occurs because 5 is larger than the size of weights and I didn't set replacement=True. The error message is:

RuntimeError: cannot sample n_sample > prob_dist:size(1) samples without replacement at /home/me/pytorch/pytorch/torch/lib/TH/generic/THTensorRandom.c:94

But after this, the dimensions of weights changed:

print weights  # out: [torch.FloatTensor of size 1x4]

If I run the following, which raises no runtime exception, the dimensions of weights do not change:

weights = torch.Tensor([0, 10, 3, 0])
print weights  # out: [torch.FloatTensor of size 4]
torch.multinomial(weights, 4)
print weights  # out: [torch.FloatTensor of size 4]
st117917
Hi, yes, it is a bug in the C implementation: it does not clean up properly when errors are raised. Could you open an issue on GitHub to track this, and say that this operation is not cleaned up when an error is raised, please? Thanks for the report!
st117918
My pleasure, and thank you for the explanation! Here is the issue: "Operation is not cleaned up when exception raised in file THTensorRandom.c".
st117919
I’d like to pass a custom object as undifferentiable input (tuple of tensors representing batched list of ROIs) to a custom (ROI pooling) Function’s new-style forward. In practice, this object in my case would be a tuple of tensors, but unfortunately a Variable can’t hold non-tensors. Is there a nice way to do it without resorting to workarounds? (As a workaround, I can pass this tuple as *args, but for other cases this wouldn’t work) P.S. would be cool if this forum supported more topic categories, like “autograd” etc
st117920
there isn’t a good way right now (as you know) for a Variable to hold non-Tensors. We are planning scalars, but that’s about it. I’ve added an autograd topic.
st117921
Sure, understood about Variables holding only tensors. I was thinking more about allowing functions to accept arguments if they implement __iter__ returning Variables. That would allow hacking up lists of Variables, dicts with Variables (or even custom objects holding Variables) and passing them to autograd functions.
st117922
I would like to use a padding module which is already implemented in Torch (nn.Padding), but I get an error message when I use the padding module which exists in torch.legacy.nn: TypeError: ‘Padding’ object is not callable Does anyone know how to use this? Or is there a module which can replace it?
st117923
I used code similar to the one below:

import torch
import torch.legacy.nn as L

pad = L.Padding(1, 2)
output = pad(torch.randn(1, 3, 4, 4))
st117924
The legacy package does not have a call operator. You need to construct the module and call its forward method explicitly, i.e. pad = L.Padding(1, 2) and then output = pad.forward(input). The legacy package behaves EXACTLY like LuaTorch.
st117925
What if my input is a CUDA tensor — how can I do it then? It seems that the legacy Padding forward() only supports CPU tensors.
st117926
In Ben Graham's paper, the output_ratio is rounded to the nearest integer, but the PyTorch code floors it to the lower integer.
st117927
Greetings all! I am fairly new to pytorch, but I have experience running very large models in caffe, keras, theano, and others. My system has 3 GPUs and I am running some tests on my Quadro GP100 at the moment (16.3 GB video ram). I wrote the below model to see how memory behaves in pytorch. Probably should have started smaller. Ah well, go big or go home! Anyway, the following code crashes as a result of running out of memory on the Quadro (holy crap).

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.multiprocessing as mp
from torch.autograd import Variable
import numpy as np
import math
import time
import random
import matplotlib.pyplot as plt

def DimCalcConv2D(Ishape, N, Kshape, Sshape, Pshape):
    Dimx = N
    Dimy = int(math.floor((Ishape[0] + 2*Pshape[0] - Kshape[0])/Sshape[0] + 1))
    Dimz = int(math.floor((Ishape[1] + 2*Pshape[1] - Kshape[1])/Sshape[1] + 1))
    return (Dimx, Dimy, Dimz)

class Discriminator(nn.Module):
    def __init__(self):
        super(Discriminator, self).__init__()
        self.ishape = (1, 256, 256)
        self.conv1 = nn.Conv2d(1, 64, 3, stride=1, padding=0)
        self.conv2 = nn.Conv2d(64, 64, 3, stride=1, padding=0)
        self.conv3 = nn.Conv2d(64, 64, 3, stride=1, padding=0)
        D1 = DimCalcConv2D((256, 256), 64, (3, 3), (1, 1), (0, 0))
        D2 = DimCalcConv2D(D1[1:], 64, (3, 3), (1, 1), (0, 0))
        D = DimCalcConv2D(D2[1:], 64, (3, 3), (1, 1), (0, 0))
        self.ConvOutShape = D
        N = int(D[0]*D[1]*D[2])
        self.linear1 = nn.Linear(N, 256)
        self.linear2 = nn.Linear(256, 256)
        self.linear3 = nn.Linear(256, 2)
        self.train()

    def forward(self, Image):
        x = F.relu(self.conv1(Image))
        x = F.relu(self.conv2(x))
        x = F.relu(self.conv3(x))
        D = self.ConvOutShape
        N = int(D[0]*D[1]*D[2])
        x = x.view(-1, N)
        x = F.relu(self.linear1(x))
        x = F.relu(self.linear2(x))
        x = F.relu(self.linear3(x))
        return F.softmax(x)

if __name__ == "__main__":
    image = np.ones((1, 1, 256, 256))
    Image = Variable(torch.from_numpy(image).float()).cuda(0)
    Truth = Variable(torch.from_numpy(np.array([1, 0]).reshape(1, 2)).float()).cuda(0)
    net = Discriminator()
    net.cuda(0)  # 4.4 GB video ram. With float32 this is expected for this model
    net.train()
    optimizer = optim.Adam(net.parameters())
    BCE = torch.nn.BCELoss()
    out = net(Image)
    loss = BCE(out, Truth)
    loss.backward()  # 12.5 GB video ram. Gradient buffer populated? Why isn't this 8.8 GB? one grad per parameter...
    optimizer.step()  # CRASH. out of ram. 16.3 GB video ram

I have run much, MUCH larger models than this on my current system with other libraries (VGG 18, for example) and haven't had many memory issues. At the moment I am assuming this is because other libraries do some kind of memory management and distribution across multiple GPUs behind the scenes, while pytorch (I hope) leaves this to the user. This raises some questions:

1. Did I do something dumb with this example 4.4 GB model that somehow causes it to balloon to 16+ GB? If it's a simple fix, what is it? (Note: by dumb I mean something I missed in pytorch. The model is purposefully huge so changes in memory are easier to distinguish.)
2. Why does the memory increase during optimizer.step()? The backward call makes sense to me, as you need to populate the gradient buffer, but doesn't step() just use the buffer to update the parameters? At most I would expect this to increase by another 4.4 GB for temp variables for each parameter.
3. If I didn't do something dumb, is there a good "rule of thumb" for predicting how much memory a model will occupy on a card during training?
4. Is there a quick / easy way to tell pytorch "hey, I've got these two other GPUs. You should use them intelligently!"? How do I distribute my model across my other GPUs?

Thanks! Gus
st117928
I’m sorry for the late reply. This last week has been hectic. I’ve made a note to look into this once I get back to the office on Wednesday.
st117929
No worries! I’m just still learning this. I wouldn’t be surprised if it’s something trivial i’m missing =D Thanks!
st117930
Dumb answer, but Adam has to keep a running average and a squared running average of the gradients, so are you accounting for this as well?
st117931
Ah! Interesting! Yes, that indeed makes a difference. Using RMSprop instead of Adam reduces the total allocated vram: 11.8 GB at loss.backward() and 15.6 GB at optimizer.step(). So the optimization algorithm makes a difference. Still not sure why optimizer.step() shows such a large increase, though. Still, good to know! Any idea how to distribute such a large model across multiple GPUs easily? My initial thought was to manually place parts of the model on different devices and transfer the activations from device to device in forward(), but I am curious if there is a more elegant way. Thanks!
st117932
Different optimisers require different amounts of memory. Normal SGD only involves the gradients, but if you want to store the momentum of the weights this is going to double the memory requirement because of the optimisation step. If you look at the Adam code, you’ll see it literally allocates the two buffers I mentioned at the first call of optim.step(), so different bits can be allocated at different parts of the overall optimisation. I’ll have to leave practices of model parallelism to someone with more than one GPU though~
st117933
I was about to ask the same kind of question when I saw this post. I had a (relatively) small model built in Keras (Theano backend) that only took ~200-300 MB of GPU memory. However, the same model implemented in pytorch takes 2200 MB of GPU memory. I'm using Adam, but then again I was using it in Keras as well…
st117934
miguelvr, in your case it's probably different, and the memory measurement is probably also not accurate because of the caching allocator.
st117935
Michael, you can do model parallelism simply by using the with torch.cuda.device(…) context manager, as described in the link.
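For illustration, a minimal model-parallel sketch that places parts of a model on different devices and moves the activation inside forward (this assumes two visible GPUs; the layer sizes are made up):

import torch
import torch.nn as nn
from torch.autograd import Variable

class TwoGPUNet(nn.Module):
    def __init__(self):
        super(TwoGPUNet, self).__init__()
        self.part1 = nn.Linear(1024, 1024).cuda(0)   # lives on GPU 0
        self.part2 = nn.Linear(1024, 10).cuda(1)     # lives on GPU 1

    def forward(self, x):
        x = self.part1(x)    # computed on GPU 0
        x = x.cuda(1)        # copy the activation over to GPU 1
        return self.part2(x)

net = TwoGPUNet()
out = net(Variable(torch.randn(8, 1024)).cuda(0))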
st117936
Is there any suggestion for triplet data loader that can feed a TripletMarginLoss?
st117937
There is no triplet DataLoader (it does not make sense as a built-in). Depending on your dataset, you can write a custom collate_fn for your DataLoader as well as a custom Dataset to achieve this. See http://pytorch.org/docs/data.html for details on collate_fn and Dataset.
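A sketch of a Dataset that yields (anchor, positive, negative) tuples, which a plain DataLoader can then batch and feed to TripletMarginLoss; the sampling strategy and data layout are made up, and each class is assumed to have at least two samples:

import random
import torch.utils.data as data

class TripletDataset(data.Dataset):
    def __init__(self, samples_by_class):
        # samples_by_class: dict mapping class label -> list of feature tensors
        self.samples_by_class = samples_by_class
        self.classes = list(samples_by_class.keys())

    def __len__(self):
        return sum(len(v) for v in self.samples_by_class.values())

    def __getitem__(self, idx):
        pos_class, neg_class = random.sample(self.classes, 2)
        anchor, positive = random.sample(self.samples_by_class[pos_class], 2)
        negative = random.choice(self.samples_by_class[neg_class])
        return anchor, positive, negative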
st117938
I have access to a cluster with multiple GPU nodes each having 1 GPU device on-board. Can I use multiple nodes to run my code rather than a single GPU? If yes, how can I do that? Usually we set devices through CUDA_VISIBLE_DEVICES but I am not sure how to run code in a cluster with multiple GPU nodes.
st117939
Not yet. We are going to have distributed support in the next major release of pytorch.
st117940
Can you give any indication of what technology you will use for the multi-node implementation? Will it follow an MPI approach such as https://github.com/pytorch/pytorch/issues/241 (that would be useful for me - but perhaps not for others)? Also, the timescale for that release would be interesting to know.
st117941
Here's a minimal example (never mind that it looks strange):

import torch.nn as nn
x = Variable( t.rand(10000000,1)).cuda()
bn = nn.BatchNorm1d(1)
bn.cuda()
xbn = bn(x)

I get the stack-trace below. I recall from some other thread that I would need to build PyTorch from branch R4 to get rid of this? Is that still the case? I'm using PyTorch built from master a month ago on the AWS Deep Learning Ubuntu AMI instance, and I get this error on that instance. Thanks!

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-25-6584c2ec408e> in <module>()
      6 bn = nn.BatchNorm1d(1)
      7 bn.cuda()
----> 8 x1cbn = bn(x)
      9 x1cbn.size()

/home/ubuntu/src/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)
    200
    201     def __call__(self, *input, **kwargs):
--> 202         result = self.forward(*input, **kwargs)
    203         for hook in self._forward_hooks.values():
    204             hook_result = hook(self, input, result)

/home/ubuntu/src/anaconda2/lib/python2.7/site-packages/torch/nn/modules/batchnorm.pyc in forward(self, input)
     41         return F.batch_norm(
     42             input, self.running_mean, self.running_var, self.weight, self.bias,
---> 43             self.training, self.momentum, self.eps)
     44
     45     def __repr__(self):

/home/ubuntu/src/anaconda2/lib/python2.7/site-packages/torch/nn/functional.pyc in batch_norm(input, running_mean, running_var, weight, bias, training, momentum, eps)
    387                training=False, momentum=0.1, eps=1e-5):
    388     f = torch._C._functions.BatchNorm(running_mean, running_var, training, momentum, eps)
--> 389     return f(input, weight, bias)
    390
    391

RuntimeError: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input.
st117942
This is with Cuda 8.0, cudnn 6, Ubuntu 14.04 (the Amazon Deep Learning AMI), with PyTorch installed from source a couple months ago. And incidentally it works fine if I use a smaller tensor size, like say 100000 instead of 10000000
st117943
Hi, You may want to wait for a response from an nvidia guy, but from what I remember, some very weird input shapes are not supported by cudnn (for various reasons like for example your gpu not having enough memory for the required workspace). Given that in your case feeding a smaller tensor works, it may be the reason. You can try disabling cudnn with torch.backends.cudnn.enabled=False and see if it works.
st117944
Thanks @albanD. I did this and it did not fix it:

import torch
import torch.nn as nn
torch.backends.cudnn.enabled=False
x = Variable( t.rand(1000000,1).contiguous()).cuda()
print torch.backends.cudnn.version()
bn = nn.BatchNorm1d(1)
bn.cuda()
xbn = bn(x)
xbn.size()
st117945
Hi, I cannot reproduce your problem. Running the code sample that you gave does not raise any cudnn error anymore for me.
st117946
Yes, this exact code works for me (after freezing my computer for a few seconds due to memory usage) and outputs 6021 and (100000000L, 1L):

import torch
from torch.autograd import Variable
import torch.nn as nn
torch.backends.cudnn.enabled=False
x = Variable( torch.rand(100000000,1).contiguous()).cuda()
print torch.backends.cudnn.version()
bn = nn.BatchNorm1d(1)
bn.cuda()
xbn = bn(x)
print(xbn.size())
st117947
Ah my version shows 5110, so looks like I’m still on cudnn 5, although I thought I installed 6.0
st117948
Turned out I had forgotten to re-build PyTorch from source after installing the cudnn 6.0 files. Now it works fine.
st117949
I implemented a subclass of the torch.autograd.Function class with its forward and backward functions, using the PyCharm IDE. I added a breakpoint in the backward function and wanted to debug the script, but I found the program did not stop at the breakpoint, and I don't know why. Breakpoints work fine in other parts of the script, even in the forward function. I thought it was PyCharm's problem, but when I ran the script on another computer with PyCharm, the problem disappeared: it stopped right at the breakpoint. The problem drives me mad, so if anyone knows how to solve it, please help me.
st117950
Do you have different PyCharm versions on the two machines? Or maybe different PyTorch versions?
st117951
I first used PyCharm 2017.1.2 and pytorch 0.1.11_5, then I ran the code on the other computer with PyCharm 2016.2.3 and pytorch 0.1.9. The debugger could stop at the breakpoint when I ran the code on the latter machine, where both pieces of software are older. That confused me.
st117952
Pycharm can be buggy at times, I’ve encountered the problem of breakpoints being ignored in the past, I’d guess it is not related to pytorch.
st117953
Hi all, a strange error occurred when loading a pre-trained model. The pre-trained model was trained in PyTorch DataParallel mode, and I try to load it to test on new data.

model = torch.nn.DataParallel(model).cuda()
model.load_state_dict(checkpoint['state_dict'])

The pre-trained model is loaded correctly. However, if the code is modified to

model = model.cuda()
model.load_state_dict(checkpoint['state_dict'])

an error occurs:

File "/mnt/lustre/zhangyi/pytorch-laneseg/gen_png_result.py", line 47, in main
    model.load_state_dict(checkpoint['state_dict'])
File "/usr/lib64/python2.7/site-packages/torch/nn/modules/module.py", line 331, in load_state_dict
    .format(name))
KeyError: 'unexpected key "module.backbone.conv1.weight" in state_dict'

Does DataParallel make a difference when loading a pre-trained model?
st117954
[solved] KeyError: 'unexpected key "module.encoder.embedding.weight" in state_dict'

I was thinking about something like the following:

# original saved file with DataParallel
state_dict = torch.load('myfile.pth.tar')

# create new OrderedDict that does not contain `module.`
from collections import OrderedDict
new_state_dict = OrderedDict()
for k, v in state_dict.items():
    name = k[7:]  # remove `module.`
    new_state_dict[name] = v

# load params
model.load_state_dict(new_state_dict)
st117955
I installed pytorch with conda install pytorch torchvision -c soumith.

The problem 1: _rebuild_tensor inside _utils is not in the following output — why is that?

", ".join([t for t in dir(torch._utils)])
### inside _utils we have the following function, but why does it not show up???
# def _rebuild_tensor(storage, storage_offset, size, stride):
#     class_name = storage.__class__.__name__.replace('Storage', 'Tensor')
#     module = importlib.import_module(storage.__module__)
#     tensor_class = getattr(module, class_name)
#     return tensor_class().set_(storage, storage_offset, size, stride)
# output is the following:
# '__builtins__, __cached__, __doc__, __file__, __loader__, __name__, __package__, __spec__,
# _accumulate, _cuda, _import_dotted_name, _range, _type, torch'

The problem 2: _rebuild_tensor is also not in the following output, even though it is imported inside the torch/tensor.py file — why is that?

", ".join([t for t in dir(torch.tensor)])
# output is the following:
# '_TensorBase, __builtins__, __cached__, __doc__, __file__, __loader__, __name__, __package__,
# __spec__, _cuda, _range, _tensor_str, _type, sys, torch'

I think the two problems above are really the same one: I can't access torch._utils._rebuild_tensor, and I wonder why.

Guesses: I just did conda install pytorch torchvision -c soumith to install — was this the problem? When I ran ./run_test.sh, I got the following error:

ERROR: test_serialization_map_location (__main__.TestTorch)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_torch.py", line 2711, in test_serialization_map_location
    tensor = torch.load(test_file_path, map_location=map_location)
  File "/Users/Natsume/miniconda2/envs/dlnd-tf-lab/lib/python3.5/site-packages/torch/serialization.py", line 222, in load
    return _load(f, map_location, pickle_module)
  File "/Users/Natsume/miniconda2/envs/dlnd-tf-lab/lib/python3.5/site-packages/torch/serialization.py", line 370, in _load
    result = unpickler.load()
AttributeError: Can't get attribute '_rebuild_tensor' on <module 'torch._utils' from '/Users/Natsume/miniconda2/envs/dlnd-tf-lab/lib/python3.5/site-packages/torch/_utils.py'>

Solution that worked this time: git clone pytorch, create a new env, conda install numpy setuptools cmake cffi, pip install -r requirements.txt, python setup.py install — but the 5th step had a lot of failures and warnings and I couldn't run import torch. Then I did pip install -e ., after which I can run import torch and the problems above are gone. But when I run cd test/ and ./run_test.sh, the previous error is gone and I get the following exception instead:

Exception ignored in: <function WeakValueDictionary.__init__.<locals>.remove at 0x114c5da60>
Traceback (most recent call last):
  File "/Users/Natsume/miniconda2/envs/pytorch-experiment/lib/python3.5/weakref.py", line 117, in remove
TypeError: 'NoneType' object is not callable

Should I be worried about this one? Thanks a lot!
st117956
I met the same problem. The reason may be the version of the pretrained model you load doesn’t match with your pytorch.
st117957
Hi, does anyone know how to differentiably 'select' indices that fall on a 2D line within a 2D tensor / rectangle? Something like this:

buffer = Variable(torch.FloatTensor(128, 128))
p0 = Variable(torch.FloatTensor([2.4, 9.4]))
p1 = Variable(torch.FloatTensor([5, 11.5]))
# magic happens here
buffer.lineSelect(p0, p1).fill(1)

Ideally lineSelect would automatically round float values to their proper discrete indices and would be differentiable. My plan is to use this method to implement a differentiable 3D renderer in pytorch. The nearest insight I found on the forum is the thread "Indexing a variable with a variable", but it does not really show how to 'select' from a buffer.
st117958
You can use index_select, and yes, it is differentiable w.r.t. buffer (but not w.r.t. p0 or p1, because indexing is not continuous):

buffer = Variable(torch.FloatTensor(128, 128), requires_grad=True)
p0 = Variable(torch.FloatTensor([2.4, 9.4]))
p1 = Variable(torch.FloatTensor([5, 11.5]))
buf2 = buffer.index_select(0, p0.long()).index_select(1, p1.long())
buf2.backward(torch.ones(2, 2))
st117959
Nice, thanks! I need wrt to p0 and p1 so I can train a network on the generation of these points. Still, I really appreciate the response!
st117960
W.r.t. p0 it is a non-differentiable operation, but you can use reinforcement learning, by using the reward(…) function on stochastic variables. You would first get float indices, stochastically sample from them via a multinomial or similar, and then give this sampled node a reward to propagate gradients back. See pytorch/examples or read up on the REINFORCE algorithm.
st117961
Hi all, I installed PyTorch successfully using the command conda install pytorch torchvision -c soumith. After installation, torch.cuda.is_available() returns False; this may be due to the older CUDA version 7.0. Is it possible to set the CUDA path manually for PyTorch to use the GPU? I have tried two ways to deal with the problem:
- Update CUDA (failed, no root privilege)
- Install from source (failed, no root to update cmake)
Unfortunately both of them failed for the above reasons.
st117962
What happens if you use conda install cuda80 pytorch torchvision -c soumith? PyTorch ships an updated copy of CUDA that you can use without root.
st117963
Thanks for your reply, but I do not have the root privilege to update the nvidia driver.
st117964
I was trying to look at the underlying implementation of nn.Conv2d and ended up at ConvForward::apply of convolution.cpp . I wanted to check if there are test cases or examples using which I can directly play around with torch/csrc code .
st117965
Hi, the ConvForward cpp class is directly exposed to Python as torch._C._functions.ConvNd and is used here. There is no example right now on how to use this function directly from cpp. What do you want to do with it?
st117966
@albanD thanks for the reply. I was just curious to understand the underlying implementation. I thought it would be easy if there was something directly exposed in cpp.
st117967
Hi PyTorch, I'm trying to implement a custom piecewise loss function in pytorch — specifically the reverse Huber loss with an adaptive threshold (loss = |x| if |x| < c, (x^2 + c^2) / (2*c) otherwise). There doesn't seem to be a great way to do this. Below is my code:

adiff = torch.abs(output - target)
batch_max = 0.2 * torch.max(adiff).data[0]
t1_mask = adiff.le(batch_max).float()
t2_mask = adiff.gt(batch_max).float()
t1 = adiff * t1_mask
t2 = (adiff*adiff + batch_max*batch_max) / (2*batch_max)
t2 = t2 * t2_mask
return (torch.sum(t1) + torch.sum(t2)) / torch.numel(output.data)

Not the most straightforward. I'm struggling to figure out how I can implement a piecewise loss function without this cumbersome masking (in fact, I'm struggling to determine whether this is actually correct). Can someone help, point me towards an example, or otherwise provide guidance?
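On newer PyTorch releases the same piecewise loss can be written without explicit masks via torch.where; this is a sketch of that idea, not a drop-in replacement for the 0.x-era API used in this thread:

import torch

def berhu_loss(output, target):
    adiff = torch.abs(output - target)
    c = 0.2 * torch.max(adiff).detach()           # adaptive threshold, no gradient through c
    quadratic = (adiff * adiff + c * c) / (2 * c)
    # element-wise: |x| where |x| <= c, quadratic branch otherwise
    return torch.where(adiff <= c, adiff, quadratic).mean()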