st117768
Oh I think I got it. In the CBOW model, we want to look at the nearby words, but we don’t want to be constrained by any particular order of those words. So it’s both better and worse than n-gram model, because we throw away order information, but we gain flexibility of the context. Cool!
st117769
Oh I see how you added the tensors in your solution:

embedding = self.embedding(x).sum(dim=0)

That's definitely better than how I did it:

vectors_sum = autograd.Variable(torch.Tensor(1, dimensionality).zero_())
for word in inputs:
    vectors_sum += self.embed(word).view(1, -1)
st117770
I would like to apply a function to each row of a tensor. Is there a simple and efficient way to do this without using an index for each row? I am looking for the equivalent of numpy.apply_along_axis if there is one for pytorch.
st117771
You could try (if you haven't already):

torch.stack([
    function(x_i, other_input[i])
    for i, x_i in enumerate(torch.unbind(x, dim=axis), 0)
], dim=axis)

I would also like to hear about it if there's a better way to do this!
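For a concrete illustration (my own example, not from the original reply), here is a minimal, self-contained sketch of the unbind/stack pattern applied row-wise; row_softmax is a hypothetical per-row function used only for demonstration:

import torch

def row_softmax(row):
    # hypothetical per-row function: numerically stable softmax of a 1-D tensor
    shifted = row - row.max()
    exps = shifted.exp()
    return exps / exps.sum()

x = torch.randn(4, 5)
# split into rows, apply the function to each row, and stack the results back together
out = torch.stack([row_softmax(r) for r in torch.unbind(x, dim=0)], dim=0)
print(out.shape)  # torch.Size([4, 5])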
st117772
As a preprocessing step, I need to scale 3D images. Does pytorch have a 3D bilinear interpolation tool or any other useful upsample/downsample tools for this purpose?
st117773
torchvision.transforms.Scale is based on PIL's interpolation tools (PIL.Image.resize), which work in 2D only. For your purpose, you should rather have a look at scipy.ndimage: https://docs.scipy.org/doc/scipy/reference/ndimage.html
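As a rough sketch of what that could look like (my own example, not part of the original reply), scipy.ndimage.zoom interpolates along all three axes of a volume; the target shape here is purely illustrative:

import numpy as np
from scipy.ndimage import zoom

volume = np.random.rand(64, 64, 32)       # example 3D image
target_shape = (128, 128, 64)             # desired output size (illustrative)
factors = [t / s for t, s in zip(target_shape, volume.shape)]
resized = zoom(volume, factors, order=1)  # order=1 -> trilinear-like interpolation
print(resized.shape)                      # (128, 128, 64)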
st117774
I wrote a PyTorch GPU-accelerated 3D resampler a while back that uses Catmull-Rom spline interpolation: https://gist.github.com/ajbrock/86fd41cf468839fe804ce28df145d170 (resample3d.py — "This code resamples a 3d grid using catmull-rom spline interpolation, and is GPU accelerated.")
st117775
Is there a reason why the encoder and decoder parts of the seq2seq model are defined in separate classes and separate optimizers are used for them? I have built and trained both the encoder and decoder in the same class - the loss is not decreasing - but when I tried to use two separate optimizers, the loss decreases well. Is there any difference in grad computation in the latter?
st117776
Does anybody know how to add intermediate supervision in PyTorch, i.e. the network produces multiple predictions? This is a common technique used in GoogLeNet and the stacked hourglass network.
st117777
Me too. Stacking multiple stage networks while keeping the intermediate output of each stage is a problem. It seems that a for loop can't be used.
st117778
I don’t know if I fully understand the problem. You can just create a list, and append all the intermediate states you want to it.
st117779
There's no reason a for loop couldn't be used. Assuming skip_layers and hourglass_layers contain the necessary modules:

skips = []
for skip_layer, hourglass_layer in zip(skip_layers, hourglass_layers):
    skips.append(skip_layer(input))
    input = hourglass_layer(input)

Do that for the first half of the hourglass, then another for loop for the second half where you combine the output in skips and use that as input for the rest of the "stem" layers.
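As a rough sketch of what that second half could look like (my own illustration, not from the original reply — upsample_layers, stem, and the element-wise addition of the skip branches are assumptions about the architecture):

# second half: walk back up, merging the stored skip outputs (illustrative only)
for upsample_layer, skip in zip(upsample_layers, reversed(skips)):
    input = upsample_layer(input) + skip  # combine upsampled features with the skip branch
# 'input' now feeds the remaining "stem" layers
output = stem(input)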
st117780
Thanks a lot for the hint. But I’m still a little bit confused about how to implement it ( sorry a newbie here ). It would be great if you would share a full hourglass model here for reference.
st117781
Hi. I am trying to train two large networks (net1 and net2) by turns. For example, train net1 for one epoch, and then net2 for one epoch, and repeat this process. Is there a way to free the memory consumed by a network after its training epoch finishes? Thanks!
st117782
Hi, I want to convert the FC layers in the VGG-16 model to CONV layers while keeping and using the pretrained weights. How should I go about this? Thanks, Saeed
st117783
Hi, do you mean how to convert fc weights to a conv kernel's weights? If so, I did it like below:

fc_net.conv.load_state_dict({
    "weight": resnet.fc.state_dict()["weight"].view(1000, 2048, 1, 1),
    "bias": resnet.fc.state_dict()["bias"],
})
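Since the question was about VGG-16 specifically, here is a hedged sketch of the same idea applied to its first classifier layer (my own example, not from the original reply; it assumes torchvision's VGG-16, where classifier[0] is Linear(25088, 4096) acting on 512x7x7 feature maps):

import torch.nn as nn
from torchvision import models

vgg = models.vgg16(pretrained=True)
fc6 = vgg.classifier[0]                      # Linear(512*7*7 -> 4096)

conv6 = nn.Conv2d(512, 4096, kernel_size=7)  # equivalent convolution over the 7x7 feature map
conv6.load_state_dict({
    "weight": fc6.state_dict()["weight"].view(4096, 512, 7, 7),
    "bias": fc6.state_dict()["bias"],
})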
st117784
I used a forward_hook; it seems to only print the input and output. If I want to store the input and output data in variables, what should I do? I tried to return input in the printnorm function, but failed.

def printnorm(self, input, output):
    print('Inside ' + self.__class__.__name__ + ' forward')
    print('')
    print('input: ', input)

model.conv2.register_forward_hook(printnorm)
out = model(input)
st117785
Do you want to fetch the input and output of conv2?

tmp_input = torch.zeros(input.size())  # buffer; assumes it matches the size of conv2's input

def fun(model, input, output):
    # note: 'input' inside a forward hook is a tuple of the layer's inputs
    tmp_input.copy_(input[0].data)

hook = model.conv2.register_forward_hook(fun)
model(x)
hook.remove()

Do not change variables in the hook, that is unsafe. If you really need a variable, maybe you need to write it in forward. The hook returns nothing; remember to remove the hook after using it.
st117786
Yes, I want to fetch the input and output of each layer. I tried to use a global variable in the fun function to fetch it, but the fetched variable cannot be converted to numpy — it complains that 'tuple' object has no attribute 'numpy'. In the following code:

def fun(self, input, output):
    print('Inside ' + self.__class__.__name__ + ' forward')
    print('')
    # print('input: ', input)
    global xg
    xg = input

model.conv2.register_forward_hook(fun)
out = model(input)
xi = xg
xi = xi.numpy()

So, if a hook cannot solve my problem, are there other ways to solve it? Can forward fetch the input and output of each layer? Thank you very much.
st117787
Thank you for your reply. I solved the above problem! I also have a question that I hope you can help me with. I have seen this question about ImageFolder (Questions about ImageFolder), but I don't really understand it. I have a picture dataset, divided into train and val folders. Each folder has 0 and 1 subfolders. I also created labels.txt for train and val. How do I load this dataset into PyTorch?
st117788
Do the images in subfolder 0 all belong to class 0? If so, simply use ImageFolder:

train_dataset = ImageFolder('train', transforms)
data, label = train_dataset[100]  # the 100th picture
train_loader = DataLoader(train_dataset)
for data, label in train_loader:
    train()

Have a look at the ImageNet example: https://github.com/pytorch/examples/blob/master/imagenet/main.py#L97-L121 (it shows how the train/val directories are wrapped in ImageFolder with the usual normalization transforms and fed to a DataLoader).
st117789
yes, subfolders 0,1 represent class 0,1. So I only need to use imagefolder without having to use torch.utils.data.DataLoader?
st117790
A dataset only returns one sample at a time; you need the loader to collect samples into a batch. Also, the DataLoader can use multiprocessing to speed up the program.
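A minimal sketch of the full pipeline (my own illustration; the folder layout and the particular transforms are assumptions):

from torchvision import datasets, transforms
from torch.utils.data import DataLoader

transform = transforms.Compose([
    transforms.Scale(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

train_dataset = datasets.ImageFolder('train', transform)  # subfolders 0/ and 1/ become classes 0 and 1
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True, num_workers=2)

for data, labels in train_loader:  # data: (32, 3, 224, 224), labels: (32,)
    pass  # training step goes here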
st117791
Hi all, I have tested the following code using cuda tensors, but it failed.

from torch.autograd import gradgradcheck, gradcheck

inputs = [autograd.Variable(torch.randn(5, 12).double(), requires_grad=True),
          autograd.Variable(torch.randn(10, 12).double(), requires_grad=True),
          autograd.Variable(torch.randn(10).double(), requires_grad=True)]
for i in xrange(len(inputs)):
    inputs[i] = inputs[i].cuda()

test = gradcheck(lambda i, w, b: F.linear(i, w, b), inputs)
print(test)

It returns False. When I remove the cuda() operation, it returns True. Is this a bug?
st117792
I'd probably do the .cuda() before the Variable wrapping:

from torch.nn import functional as F
from torch import autograd
from torch.autograd import gradcheck
import torch

inputs = [autograd.Variable(torch.randn(5, 12).cuda().double(), requires_grad=True),
          autograd.Variable(torch.randn(10, 12).cuda().double(), requires_grad=True),
          autograd.Variable(torch.randn(10).cuda().double(), requires_grad=True)]
for i in range(len(inputs)):
    inputs[i] = inputs[i]

test = gradcheck(lambda i, w, b: F.linear(i, w, b), inputs)
print(test)

prints True for me. (Don't know if it should also work for single, but that is another thing.)

Best regards
Thomas
st117793
[in PyTorch]

fc1 = Sequential (
  (0): Linear (147 -> 1000)
  (1): BatchNorm1d(1000, eps=1e-05, momentum=0.1, affine=True)
  (2): LeakyReLU (0.2, inplace)
)
conv1 = Sequential (
  (0): Conv1d(100, 250, kernel_size=(13,), stride=(1,), padding=(6,))
  (1): BatchNorm1d(250, eps=1e-05, momentum=0.1, affine=True)
  (2): LeakyReLU (0.2, inplace)
)

a = Variable(torch.randn(10, 147))
b = fc1(a)
b.size()   # torch.Size([10, 1000])
b = b.view(10, 100, 10)
c = conv1(b)
c.size()   # torch.Size([10, 250, 10])

In Keras:

x = Input(shape=(147,), name="input")   # <tf.Tensor 'input:0' shape=(?, 147) dtype=float32>
x = Dense(10 * 100)(x)
x = BatchNormalization()(x)
x = LeakyReLU(0.2)(x)        # output shape is 10*100
x = Reshape((100, 10))(x)    # shape is 100 x 10
x = Conv1D(filters=250, kernel_size=13, padding='same')(x)
# x: <tf.Tensor 'conv1d_4/add:0' shape=(?, 100, 250) dtype=float32>

The question is that the output after applying the PyTorch conv1d is (?, 250, 10), whereas the Keras output is (?, 100, 250). Could you tell me why they are different and how to write conv1d in PyTorch to match Keras's Conv1D? Thanks,
st117794
Hello, this is due to the different memory layout conventions between PyTorch and TensorFlow (as a Keras backend). PyTorch (and e.g. Theano, Keras+Theano) is "channel first", while TensorFlow and Keras+TensorFlow are "channel last". Keras adapts to the backend and has keras.backend.image_data_format() to differentiate between the two. Best regards Thomas
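To feed data stored in Keras's channel-last layout into PyTorch's channel-first Conv1d (or to compare output shapes), the channel and length dimensions have to be swapped explicitly; a minimal sketch of that (my own example, not part of the original reply):

import torch
import torch.nn as nn
from torch.autograd import Variable

conv = nn.Conv1d(in_channels=10, out_channels=250, kernel_size=13, padding=6)

x_channel_last = Variable(torch.randn(32, 100, 10))   # (batch, steps, channels) — Keras convention
x_channel_first = x_channel_last.transpose(1, 2)      # (batch, channels, steps) — PyTorch convention
y = conv(x_channel_first)                             # (32, 250, 100)
y_channel_last = y.transpose(1, 2)                    # (32, 100, 250) — matches the Keras shape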
st117795
I think all pip commands listed on your main website point to CUDA-compiled versions, even when None is selected. [screenshot: Screen Shot 2017-05-23 at 16.40.39.png]
st117796
Regardless of whether you wish to use CUDA or not, the same binary works on CPU and GPU nodes.
st117797
I tried using data_parallel for an LSTM:

input = ...  # (50, 99, 100)
h0 = ...     # (4, 50, 500)
c0 = ...     # (4, 50, 500)
encoder = nn.LSTM(100, 500, 2, bidirectional=True)
output, (h_t, c_t) = nn.parallel.data_parallel(encoder, (input, (h0, c0)), device_ids=[0, 1])

[screenshot: log.jpg showing the error]

The hidden dimensions are (nlayers*directions, batch_size, hidden_size), unlike the input size which is (batch_size, seq_length, embed_size). For data_parallel, the first dimension of all the inputs needs to be batch_size, which is not true for h0 and c0, hence the error above. How can I solve this issue? Thanks!
st117798
I converted the GoogLeNet (inception_v1) model from Torch to PyTorch using https://github.com/clcarwin/convert_torch_to_pytorch and am trying to run the ImageNet validation (https://github.com/pytorch/examples/blob/master/imagenet/main.py), but I am getting the following error:

Traceback (most recent call last):
  File "main.py", line 287, in <module>
    main()
  File "main.py", line 122, in main
    validate(val_loader, model, criterion)
  File "main.py", line 211, in validate
    output = model(input_var)
  File "/home/me/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/me/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 61, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/home/me/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 71, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs)
  File "/home/me/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 46, in parallel_apply
    raise output
  File "/home/me/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 25, in _worker
    output = module(*input, **kwargs)
  File "/home/me/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/me/anaconda3/lib/python3.6/site-packages/torch/nn/modules/container.py", line 64, in forward
    input = module(input)
  File "/home/me/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/me/E/pytorch_examples/example7_imagenet/googlenet_v1.py", line 19, in forward
    return self.lambda_func(self.forward_prepare(input))
  File "/media/me/E/pytorch_examples/example7_imagenet/googlenet_v1.py", line 34, in <lambda>
    Lambda(lambda x, lrn=torch.legacy.nn.SpatialCrossMapLRN((11, 0.00109999999404, 0.5, 2)): Variable(lrn.forward(x.data))),
  File "/home/me/anaconda3/lib/python3.6/site-packages/torch/legacy/nn/Module.py", line 33, in forward
    return self.updateOutput(input)
  File "/home/me/anaconda3/lib/python3.6/site-packages/torch/legacy/nn/SpatialCrossMapLRN.py", line 25, in updateOutput
    self._backend.SpatialCrossMapLRN_updateOutput(
  File "/home/me/anaconda3/lib/python3.6/site-packages/torch/_thnn/utils.py", line 22, in __getattr__
    raise NotImplementedError
NotImplementedError

Looks like PyTorch doesn't have SpatialCrossMapLRN (Torch architecture in PyTorch). Is there any work-around for this? Or does anybody have a pretrained model for inception-v1 in PyTorch?
st117799
Hello, You may want to temporarily use https://github.com/thnkim/OpenFacePytorch/blob/master/SpatialCrossMapLRN_temp.py. This is a simple modification of PyTorch's SpatialCrossMapLRN in legacy.
st117800
In my code, there are many places where a variable is transferred to the GPU with a .cuda() call like x = x.cuda(). When I begin training, the program always crashes at some point, but at different such calls, randomly. One example is like this:

h = h.cuda()
  return CudaTransfer(device_id, async)(self)
  return i.cuda(async=self.async)
  return new_type(self.size()).copy_(self, async)
RuntimeError: cuda runtime error (59) : device-side assert triggered at /data/users/soumith/miniconda2/conda-bld/pytorch-0.1.10_1488755368782/work/torch/lib/THC/generic/THCTensorCopy.c:18

I really cannot understand what is going on here. I also tried to catch the exception and check the variable before the .cuda() call. It seems the variable is normal. Can anyone help?
st117801
A device-side assert is usually triggered when you are doing out-of-bounds indexing. To get the exact location of the crash, you can try to run your program after setting the environment variable:

export CUDA_LAUNCH_BLOCKING=1
python myprogram.py
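As an illustration of the usual culprit (my own example, not from the original reply): a class label that is outside the valid range of the loss, or an index outside an embedding table, will fire this assert on the GPU. Because CUDA calls are asynchronous, the error often only surfaces at a later, unrelated operation unless CUDA_LAUNCH_BLOCKING=1 is set:

import torch
import torch.nn as nn
from torch.autograd import Variable

criterion = nn.NLLLoss()
log_probs = Variable(torch.randn(4, 10).cuda())                  # 10 classes (indices 0..9)
bad_targets = Variable(torch.LongTensor([1, 3, 10, 2]).cuda())   # 10 is out of range

loss = criterion(log_probs, bad_targets)  # triggers the device-side assert on CUDA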
st117802
I am looking to use pre-trained word vectors to start a text classifier. There seem to be several pre-trained sets available, including word2vec, and my question has two parts:

1. Are there any word vectors that are more suited to PyTorch than others? I saw FastText mentioned and wondered whether that is a good starting point.
2. The usual pretrained vector files are very large, containing millions of words. Is there a way to manage this? In reality I only need a fairly small fraction of these and don't want all of my memory being consumed in loading and storing masses of data I don't need.

Apologies if these are novice questions; I have looked at earlier posts but they don't seem to answer quite the questions I have raised. Many thanks in advance, John
st117803
torchtext (http://github.com/pytorch/text) will load only the subset of vectors you're using, and has GloVe built in, so that is likely the easiest way to get word vectors. But FastText is newer and probably a little better.
st117804
Many thanks James, will give it a try. It will be good to see how GloVe works; it might not be as up to date as FastText but it's going to be better than my previous model. John
st117805
Hi, As I mentioned in the last post (gradient checking), a new function SolveTrianguler is defined:

class SolveTrianguler(Function):
    # aware of btrisolve, btrifact, and more will come
    # solves A * x = b
    def __init__(self, lower=True):
        super(SolveTrianguler, self).__init__()
        # lower=False: use data contained in the upper triangular; the default is lower
        self.lower = lower
        self.needs_input_grad = (True, False)

    def forward(self, matrix, rhs):
        x = torch.from_numpy(
            solve_triangular(matrix.numpy(), rhs.numpy(),
                             trans=self.trans, lower=self.lower))
        self.save_for_backward(matrix, x)
        return x

    def backward(self, grad_output):
        # grad_matrix = grad_rhs = None
        matrix, x = self.saved_tensors
        # formula from Giles 2008, 2.3.1
        if self.lower == True:
            return torch.tril(-matrix.inverse().t().mm(grad_output).mm(torch.t(x))), None
        else:
            return torch.triu(-matrix.inverse().t().mm(grad_output).mm(torch.t(x))), None

When I called this function just once, the gradients after backward are the same as the ones from TensorFlow. In the forward function, torch.Tensor is converted into a numpy array, then converted back after calling scipy.linalg.solve_triangular. All of this happens on the RHS of the assignment (otherwise it breaks, I don't know why). The problem is that when I called it twice in a row, it gave me the wrong answer.

init_K = np.cov(np.random.rand(3, 6))

# TF
L = tf.Variable(np.tril(init_K))
d = tf.placeholder(tf.float64, shape=(3, 1))
alpha = tf.matrix_triangular_solve(L, d, lower=True)
alpha2 = tf.matrix_triangular_solve(tf.transpose(L), alpha, lower=False)
y = tf.reduce_sum(alpha2)
grads = tf.gradients(y, [L, alpha, alpha2])
with tf.Session() as sess:
    tf.global_variables_initializer().run()
    grads = sess.run(grads, feed_dict={d: [[1.], [2], [3]]})
    print(grads)

# pytorch
L = Variable(th.from_numpy(np.tril(init_K)), requires_grad=True)
d = Variable(th.from_numpy(np.array([[1.], [2], [3]])), requires_grad=False)
alpha = SolveTrianguler(lower=True)(L, d)
# alpha.register_hook(print_grad) - not working here!
alpha2 = SolveTrianguler(lower=False)(L.t(), alpha)
# alpha2.register_hook(print_grad)
y = th.sum(alpha2)
y.backward()
print(L.grad)
print(alpha.grad)
print(alpha2.grad)

Calling SolveTrianguler twice gave the wrong gradients (I assume TensorFlow is correct) and alpha.grad, alpha2.grad are None. Why is it?
st117806
autograd functions are not meant to be used twice. You create a new function for every forward+backward call.
st117807
Could you elaborate more in terms of my code? What should I do if I need to SolveTrianguler twice within one forward computation?

# pytorch
L = Variable(th.from_numpy(np.tril(init_K)), requires_grad=True)
d = Variable(th.from_numpy(np.array([[1.], [2], [3]])), requires_grad=False)
alpha = SolveTrianguler(lower=True)(L, d)
# alpha.register_hook(print_grad) - not working here!
alpha2 = SolveTrianguler(lower=False)(L.t(), alpha)
# alpha2.register_hook(print_grad)
y = th.sum(alpha2)
y.backward()
st117808
Yes, you create a new one each time you invoke the function. Each of the SolveTrianguler functions will be a node in the autograd graph, so sharing them would end up overriding the saved_tensors of the first one, etc. Your snippet now looks correct.
st117809
Thanks for your reply, but the gradients from PyTorch and TensorFlow do not agree, and alpha.grad, alpha2.grad are None. I don't know whether it has anything to do with the implementation of the function - the conversion between Tensor and np.ndarray within forward. Looking forward to the update on more matrix functions…
st117810
Thanks for the replies so far. Will there be a way in the future to call a custom autograd.Function several times, where each of these function calls in the graph knows which saved_tensors belong to which call? This would make it easier to write custom activation functions, as in the example given here: http://pytorch.org/tutorials/beginner/pytorch_with_examples.html#pytorch-defining-new-autograd-functions and reuse them several times, as is possible with F.relu etc. Of course the function would just depend on the input, without internal parameters.
st117811
Hello @magz, Note that what you see as torch.nn.relu is not a function instance, but rather a "factory", similar to what is done with Variables when you use somevar = Variable(...); b = somevar.someop(...). If you look at the source code of Variable, you see what is done internally to make Variable.someop work (which you can do manually as well). The way the Function class works is that you record state in the forward and compute the gradient in the backward at the point specified by the inputs of the forward. This used to be done in objects of the class, but has been separated into contexts for the new-style autograd, which will allow higher-order derivatives. Best regards Thomas
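A minimal sketch of what such a new-style, context-based function looks like (my own illustration of the pattern Thomas describes, not code from the thread; exact attribute names such as saved_tensors vs. saved_variables have varied across PyTorch versions). Because state lives on the ctx object rather than on the Function instance, the same function can be applied many times in one graph:

import torch
from torch.autograd import Function, Variable

class MyReLU(Function):
    @staticmethod
    def forward(ctx, input):
        ctx.save_for_backward(input)  # state goes on the context, not on self
        return input.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[input < 0] = 0
        return grad_input

x = Variable(torch.randn(3, 4), requires_grad=True)
y = MyReLU.apply(MyReLU.apply(x))  # safe to reuse within one forward pass
y.sum().backward()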
st117812
When I install PyTorch, something goes wrong like this: CondaHTTPError: HTTP None None for url https://repo.continuum.io/pkgs/free/linux-64/repodata.json.bz2 Elapsed: None
st117813
This is a conda-related issue with the proxy: https://github.com/conda/conda/issues/4793 https://github.com/pytorch/pytorch/issues/1214
st117814
I am implementing one neural network using Pytorch. I want some layers of my network to be trained using the cross entropy loss, some layer using the REINFORCE gradient update. In Tensorflow, we can define the variable scope and then compute gradients of output w.r.t. different variable scopes using different optimizers. Can I implement the network I mentioned above in pytorch ? Thanks in advance !!
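One common way to get this kind of split is simply to build separate optimizers over different parameter subsets; a rough sketch under the assumption that the network exposes its two parts as submodules (my own illustration, not an official recipe — model.features, model.policy_head, ce_loss, and reinforce_loss are hypothetical names):

import torch.optim as optim

# hypothetical model with two submodules: 'features' trained with cross entropy,
# 'policy_head' trained with a REINFORCE-style surrogate loss
opt_ce = optim.Adam(model.features.parameters(), lr=1e-3)
opt_rl = optim.Adam(model.policy_head.parameters(), lr=1e-4)

opt_ce.zero_grad()
opt_rl.zero_grad()
(ce_loss + reinforce_loss).backward()  # gradients flow into both parameter sets
opt_ce.step()                          # each optimizer only updates its own parameters
opt_rl.step()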
st117815
on the master branch, we have torch.autograd.differentiate that provides this. We will have it in the next release.
st117816
Hey guys, I am trying to use the SNLI classifier (https://github.com/pytorch/examples/tree/master/snli) on a different QA dataset, TrecQA, as a baseline model. I am having trouble importing the dataset. The task is more or less the same (premise -> question, hypothesis -> answer) and there are 2 labels instead of 3. This dataset has 4 files for each of the train/dev/test sets: ids.txt, questions.txt, answers.txt, labels.txt. How do I import the dataset in train, dev, test splits and build the vocabulary like they do in the SNLI example: https://github.com/pytorch/examples/blob/master/snli/train.py Some help will be much appreciated. Thank you!
st117817
SNLI is provided as a JSONL file, which means the torchtext JSON loader can be used more or less unmodified; it looks like the TREC dataset doesn't quite match an existing torchtext loader, so you'd have to write a small loader that subclasses torchtext.data.Dataset. I'd look at the TranslationDataset code, which is quite similar to what you'd need to do.
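For orientation only, a rough sketch of what such a subclass could look like with the torchtext API of that time (my own guess at the shape of the loader — the file naming, parallel line-by-line format, and field names are assumptions, so treat it as a starting point rather than working code):

from torchtext import data

class TrecQADataset(data.Dataset):
    def __init__(self, path_prefix, text_field, label_field, **kwargs):
        fields = [('question', text_field), ('answer', text_field), ('label', label_field)]
        # assumes the three files are aligned line by line
        with open(path_prefix + 'questions.txt') as fq, \
             open(path_prefix + 'answers.txt') as fa, \
             open(path_prefix + 'labels.txt') as fl:
            examples = [data.Example.fromlist([q.strip(), a.strip(), l.strip()], fields)
                        for q, a, l in zip(fq, fa, fl)]
        super(TrecQADataset, self).__init__(examples, fields, **kwargs)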
st117818
I just wanted to share something I've been working on. It's in the very early stages, but I was hoping to get some assistance in building it up - I think it could be really useful. I realize that a full OpenCL port would be a huge amount of work (and I would like to be able to use models on OpenCL devices), so I decided to take matters into my own hands. This project allows you to train up a network in PyTorch, then save each of the tensors (for now, individually, but I'd love to change that) for weights/filters/bias and then load them into ArrayFire. Using ArrayFire as our OpenCL library, we then perform the forward pass as usual. The easiest way I've found to go Python -> C++ is through numpy and using their API. I have to do some index gymnastics because of the ArrayFire conventions, but so far I can initialize tensors for Conv2d and Linear layers in Python, save them, load them into C++ and perform an inference. I realize it's more cumbersome and not ideal for training and development, but it's meant as a tool for deployment. I would love to have help from people who know ArrayFire/PyTorch better than I do - the next layers to do are pooling layers (maxpool, avgpool) and batchnorm (especially batchnorm). Help/suggestions for optimizing ArrayFire code (I'm a total newb with AF) would be awesome as well. Project link: pytorch-inference. Please be gentle, I've only been working on it for about a day at this point, so it's still pretty rough.
st117819
Great. You might also be interested to see: https://github.com/mvitez/thnets (a basic library that can run networks created with Torch) and https://github.com/lantiga/pytorch2c (a Python module for compiling PyTorch graphs to C).
st117820
I hadn’t seen the second one - thanks! I saw the first one but only briefly, I’ll take another look. Correct me if I’m wrong on the second one (pytorch2c) - you have to compile the graph for each input it seems?
st117821
not for each input. it uses the trace of the graph generated via an input (because pytorch uses tape-based autodiff)
st117822
Just a quick update - I've managed to implement many of the layers that are in mvitez/thnets, and the documentation (while admittedly incomplete) has its own doxygen site and everything. I'm having issues with unpooling efficiently, and for some reason softmax is slow. If someone who knows ArrayFire well has advice I'd welcome it! The pytorch2c repo is fascinating, but it doesn't look like it has any acceleration at all (unless it supports CUDA, and even then I need OpenCL for embedded devices). I'm looking into the transfer techniques that both of them use to see if there's an approach that's more portable and cleaner than what I'm using now.
st117823
In order for pytorch2c to work cleanly we will need forward traces to be implemented on top of the new autograd. That’s what I tasked myself to do now in PyTorch, although I had to slow down due to other commitments. I’ll start working actively on it once this crazy week is over. Also, ideally I’d like to change the way pytorch2c works: first generate an intermediate representation, then compilable code. This way we’ll be able to easily add new backends (cuda, opencl, …) in addition to THNN.
st117824
Hello, I am working on a network implementation where I start from a network model (AlexNet) already trained on ImageNet and I train it on my own dataset. I wanted to know if there is a way to print the classes the network has been trained on previously? And also, can someone explain how the network keeps track of the classes it has been trained on, to retrieve them when it is tested on an image, for example? Thank you in advance, Justin
st117825
Based on what you've said, I think you misinterpret what the network "learns" during training (sorry if I am wrong). The network doesn't know anything specific about a class (a name or anything); it only knows that a picture may belong to the i-th class, where the order of classes is decided when training the network. If it's trained on ImageNet, you can find the ground truth by visiting ImageNet's website. Also, you can "test" it (if it has acceptable accuracy) by feeding it the dataset it is pretrained on, so you can recover the order of the classes.
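As a small illustration of the point (my own example, not from the original reply): the model only ever produces a score per class index, and mapping that index back to a human-readable name requires a class list kept outside the model:

import torch
from torch.autograd import Variable
from torchvision import models

model = models.alexnet(pretrained=True)
model.eval()

x = Variable(torch.randn(1, 3, 224, 224))  # stand-in for a preprocessed image
scores = model(x)                           # shape (1, 1000): one score per class index
_, pred_idx = scores.max(1)                 # the network only gives you this integer index
# pred_idx must be looked up in an external list of the 1000 ImageNet class names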
st117826
Thank you for the answer; in fact I didn't express myself correctly, and I understood the stupidity of my question after posting it. Sorry for this. Justin
st117827
I have a 2D input [Observations x Features] that I am trying to expand to 3D using nonlinear transformations so that I can use various convolution functions and architectures on it. I did some searching and found the "stack" function, which seems to achieve this. In my "forward" function I have:

x = tc.stack([tc.atan(x), tc.exp(x), x], dim = 1)

This results in an output that looks like [Observations x Channels x Features]. However, when I try to run my code (pasted below) I get an "expected 3D tensor" error. I tried using different "dim" positions and I still get the error. What is the most correct way to go about expanding a 2D input into a 3D input for use inside the model?

# Setting up the net
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.elu = nn.ELU(alpha = 1.0, inplace = False)
        self.conv1 = nn.Conv2d(in_channels = 3, out_channels = 8, kernel_size = 1, padding = 0)
        self.conv2 = nn.Conv2d(in_channels = 8, out_channels = 12, kernel_size = 3, padding = 0)
        self.fc1 = nn.Linear(19*19*12, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 2)

    def forward(self, x):
        x = tc.stack([tc.atan(x), tc.exp(x), x], dim = 1)
        x = self.elu(self.conv1(x))
        x = self.elu(self.conv2(x))
        x = x.view(-1, 19*19*12)
        x = self.elu(self.fc1(x))
        x = self.elu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()

# Loss function and optimizer
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr = 0.001, betas = (0.9, 0.99), weight_decay = 1e-4)

# Training
for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i in range(10000):
        # Mini batch indices
        inx = np.random.choice(a = list(range(len(train_out))), size = 32, replace = False)
        # get the inputs
        inputs, labels = train_in[tc.LongTensor(inx)], train_out[tc.LongTensor(inx)]
        # wrap them in Variable
        inputs, labels = Variable(inputs), Variable(labels)
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.data[0]
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')

Side note: I am using Spyder and no PyTorch variables seem to appear in the "Variable explorer". Is there a quick guide on how to figure out dimensionality, types, memory size, etc. of torch arrays?
st117828
I just tried this in an interpreter:

import torch
a = torch.randn(10, 20)
torch.stack([torch.atan(a), torch.exp(a), a], dim=1)

It seems to have worked as expected. What's the full error and stack-trace that you see, and what version of pytorch are you on?

print(tc.__version__)
st117829
print(tc.__version__)  # 0.1.12_2

The command works outside of the model just fine; it's giving me a problem when I try to use it inside of "forward". Here is the full stack-trace:

Traceback (most recent call last):
  File "<ipython-input-145-6f218d24ab72>", line 76, in <module>
    outputs = net(inputs)
  File "/home/ml-comp-01/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "<ipython-input-145-6f218d24ab72>", line 29, in forward
    x = self.elu(self.conv1(x))
  File "/home/ml-comp-01/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ml-comp-01/anaconda3/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 237, in forward
    self.padding, self.dilation, self.groups)
  File "/home/ml-comp-01/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 40, in conv2d
    return f(input, weight, bias)
RuntimeError: expected 3D tensor
st117830
From the stack-trace it looks like your problem is not stack. You are sending 3D inputs to Conv2d, which wants 4D inputs (mini-batch x channels x height x width). You need to use Conv1d for your purposes. I've modified the definition so that it'll work for you:

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.elu = nn.ELU(alpha = 1.0, inplace = False)
        self.conv1 = nn.Conv1d(in_channels = 3, out_channels = 8, kernel_size = 1, padding = 0)
        self.conv2 = nn.Conv1d(in_channels = 8, out_channels = 12, kernel_size = 3, padding = 0)
        self.fc1 = nn.Linear(96, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 2)

    def forward(self, x):
        x = tc.stack([tc.atan(x), tc.exp(x), x], dim = 1)
        x = self.elu(self.conv1(x))
        x = self.elu(self.conv2(x))
        x = x.view(-1, 96)
        x = self.elu(self.fc1(x))
        x = self.elu(self.fc2(x))
        x = self.fc3(x)
        return x
st117831
Thank you for the quick reply. The model is essentially supposed to be something like: [32 observations x 21 features] --> [32 observations x 3 non-linear expansions * 21 features] == input --> layers… What I'm trying to do with the second step is to get PyTorch to treat this like an image of depth 3 or deeper. I am able to successfully run and modify code from the CIFAR-10 tutorial. When I run your modified code from above I get a massive set of errors with the title "An error occurred while starting the kernel":

*** Error in `/home/ml-comp-01/anaconda3/bin/python': free(): invalid pointer: 0x00007f17ac63dae0 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x7908b)[0x7f17e7afd08b]
/lib/x86_64-linux-gnu/libc.so.6(+0x826fa)[0x7f17e7b066fa]
/lib/x86_64-linux-gnu/libc.so.6(cfree+0x4c)[0x7f17e7b0a12c]
…

Here are some system details:
Processor: Intel® Core™ i7-6700K CPU @ 4.00GHz × 8
Graphics: GeForce GTX 970/PCIe/SSE2
OS: Ubuntu 17.04 64-bit
nvcc --version: Cuda compilation tools, release 8.0, V8.0.44
Python: 3.6.0
GCC: 4.4.7
Spyder IDE: 3.1.4
st117832
I think I figured it out. My target output data were formatted as a one-dimensional DoubleTensor and the input data were also formatted as a 2D DoubleTensor. Formatting the inputs as a FloatTensor and the outputs as a LongTensor seems to have fixed the issue. I understand that this currently limits the precision to 32 bits. If you'd like some more information on my error I can give you any relevant details outside of this post. Thanks for the above suggestion regarding convolutions!
st117833
I wanted to do something like this:

out1 = net1(input)
out2 = net2(out1)
other_out = net3(out1)

I want to update net1 with a weighted sum of the gradient from out2 (i.e. out2.backward()) and other_out. The similar implementation in Torch7 would be:

dnet2 = net2:backward(...some arguments...)
net1:backward(input, l1 * dnet2 + l2 * other_out)

where l1 and l2 are some constants. The dimensions of dnet2 and other_out are the same. How should I do something like this in PyTorch? @smth
st117834
assuming out2 and other_out are scalars you can do this:

total_loss = out2 * l1 + other_out * l2
total_loss.backward()
st117835
For this to work as I want, I would have to set requires_grad of net2 and net3 to False. And I do not want the gradient of net3 to backpropagate into net1, but rather the output of net3. I doubt just setting requires_grad of net3 to False will do this?
st117836
Well you can’t get the gradient out of nowhere. You’ll still need to differentiate all of net3, and the only thing that changing requires_grad can save you is computing grads w.r.t. weights.
st117837
Ah I think I misread what you wrote. If you want to do the backward with grad_output as a linear combination of these outputs you can do this:

out1 = net1(input)
out2 = net2(out1)
other_out = net3(out1)
out1.backward(l1 * out2 + l2 * other_out)
st117838
The thing is that out2 and other_out are not of the same dimension. other_out has the same dimension as out1, while out2 has some other dimension. So I wanted to backpropagate just net2 and extract the gradient that would be the input to net1. Then I can form a linear combination of that gradient with other_out and do a backward on net1 with the combined gradient. How do I do this in PyTorch? Thanks
st117839
Then you can do this:

out1 = net1(input)
out2 = net2(out1)
other_out = net3(out1)
# grad_out2 can be Variable(torch.ones(1)) if out2 is a scalar, or it can be None if you use the master branch
torch.autograd.backward([out1, l2 * out2], [l1 * other_out, grad_out2])
st117840
Thanks for the help. I believe I can multiply the constants l1 and l2 in either the variables or the grad_variables argument, right or is it specific?
st117841
I am getting the same error as before, which is:

RuntimeError: Trying to backward through the graph second time, but the buffers have already been freed. Please specify retain_variables=True when calling backward for the first time.

I think when it does the backward on out1 with other_out, it frees the buffers of net1 and is not able to do the backward while doing backward on out2. Would using a cloned variable in the second argument work? I wrote something like this:

torch.autograd.backward([out1, out2], [l2 * other_out.data, l1 * grad_out2])

where grad_out2 is a Tensor. Let me know if it's wrong.

UPDATE: If I do retain_variables=True I get an error like this:

RuntimeError: dependency not found for N5torch8autograd12ConvBackwardE

Please let me know if it's a PyTorch issue or if something is wrong with my implementation? I am using the master branch version of PyTorch. Any updates on the issue @smth @apaszke?
st117842
If you are using the master branch, the "dependency not found" is a compile issue on your side. Ummm, give me a small snippet and I can try to help you (I don't have a lot of time right now either).
st117843
Thanks for the help. I understand you must be busy. I just wanted to know what the error was. So:

out1 = net1(input)
err1 = net2(out1)
out2 = net3(out1)
newd = out2.data - out1.data
torch.autograd.backward([err1, out1], [grad_out, -someWeight * newd], retain_variables=True)

Here, out2 and out1 are of the same dimension and grad_out is a tensor. Let me know if you need more info. Thanks!
st117844
can you give me a script with this exact code snippet in your comment (but with net1, net2, etc. defined to something), so that I can run it.
st117845
Yeah it is. The codebase consists of a whole lot of things. I can upload it on GitHub and let you run it. I can share it with you since it is a private repository.
st117846
I don't need the entire codebase; based on the snippet in this comment (Backpropagate a fixed gradient through a network), just make a small fake script that reproduces the error.
st117847
I sent you a message with the gist of the code snippet. Please take a look at it.
st117848
we’re looking into this. it is a bug on the master branch. let me see if we can issue a quick patch or find a workaround for you.
st117849
This is a failure case because fake here is a non-leaf Variable and torch.autograd.backward has a dependency analysis bug. I am going to open a bug report for this. For now, here is your workaround, and good luck with your deadline:

noise.resize_(opt.batchSize, nz, 1, 1).normal_(0, 1)
noisev = Variable(noise)
fake_v = netG(noisev)
fake = Variable(fake_v.data, requires_grad=True)
errG = netD(fake)
rec = netA(fake)
newd = rec.data - fake.data
errG.backward()
fake_v.backward(fake.grad.data + (-opt.daeWeight * newd))

For reference, here's the opened issue: https://github.com/pytorch/pytorch/issues/1605
st117850
This is a long-winded description of a simple problem. I'm trying to create a very simple dictionary-based training scheduler for resnet18. Inside the loop I'm setting requires_grad = True, but upon the 2nd iteration the model parameter settings are not taking. It's like I'm holding onto a reference for a different set of layers, but I'm not really sure how to proceed. Here's a simplified version of the code with debug prints. If you look at "debug section 2" for the second iteration, you'll see that the model parameter flags did not change.

schedule = [
    {'label': 'fc', 'tries': 1, 'epochs': 1, 'params': [model.fc]},
    {'label': 'fc, layer4', 'tries': 1, 'epochs': 1, 'params': [model.fc, model.layer4]}]

best_acc = 0.0
for param in model.parameters():
    param.requires_grad = False

for s in schedule:
    print('\n----------------------------------------------------------------')
    print(f"Training layers {s['label']}")

    print('[Debug section 1]')
    for layer in s['params']:
        for param in layer.parameters():
            print(param.requires_grad, end=' ')
            param.requires_grad = True
            print(param.requires_grad)

    print('[Debug section 2]')
    for param in model.parameters():
        print(param.requires_grad, end=' ')
    print()

    for t in range(1, s['tries'] + 1):
        print(f"Training layers {s['label']} try {t}/{s['tries']}")
        model, best_acc = train_model(model, criterion, params=s['params'],
                                      num_epochs=s['epochs'], best_acc=best_acc,
                                      init_lr=1e-4, lr_decay_epoch=75)

Edited output:

Training layers fc
[Debug section 1]
False True
False True
[Debug section 2]
False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False True True
Training layers fc try 1/1
----------------------------------------------------------------
Training layers fc, layer4
[Debug section 1]
True True
True True
False True
[13 more like this]
False True
[Debug section 2]
False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False True True
Training layers fc, layer4 try 1/1
st117851
I figured out the solution. Working under the assumption that a reference was changed (probably at model = train_model), I changed the loop to use layer names instead of references to layers.
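A rough sketch of what that change could look like (my own illustration of the fix described above, not the poster's actual code):

schedule = [
    {'label': 'fc', 'tries': 1, 'epochs': 1, 'params': ['fc']},
    {'label': 'fc, layer4', 'tries': 1, 'epochs': 1, 'params': ['fc', 'layer4']},
]

for s in schedule:
    # look the layers up on the *current* model object each iteration,
    # so a reassigned model (e.g. returned by train_model) is still the one being updated
    layers = [getattr(model, name) for name in s['params']]
    for layer in layers:
        for param in layer.parameters():
            param.requires_grad = True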
st117852
How do you correctly ship an LSTM model class to the GPU? I have a model defined as:

class StackRegressive(nn.Module):
    def __init__(self, **kwargs):
        super(StackRegressive, self).__init__()
        self.criterion = nn.MSELoss(size_average=False)

        # Backprop Through Time (Recurrent Layer) Params
        self.noutputs = kwargs['noutputs']
        self.num_layers = kwargs['numLayers']
        self.input_size = kwargs['inputSize']
        self.hidden_size = kwargs['nHidden']
        self.batch_size = kwargs['batchSize']
        self.noutputs = kwargs['noutputs']
        self.cuda = kwargs['cuda']
        self.criterion = nn.MSELoss(size_average=False)
        self.fc = nn.Linear(32, self.noutputs)

        # define the recurrent connections
        self.lstm1 = nn.LSTM(self.input_size, self.hidden_size[0], self.num_layers,
                             bias=False, batch_first=False, dropout=0.3)
        self.lstm2 = nn.LSTM(self.hidden_size[0], self.hidden_size[1], self.num_layers,
                             bias=False, batch_first=False, dropout=0.3)
        self.fc = nn.Linear(self.hidden_size[1], self.noutputs)

        if self.cuda:
            self.lstm1 = self.lstm1.cuda()
            self.lstm2 = self.lstm2.cuda()
            self.fc = self.fc.cuda()

    def forward(self, x):
        nBatch = x.size(0)
        # Forward propagate RNN layer 1
        out, state_0 = self.lstm1(x)
        # Forward propagate RNN layer 2
        out, state_1 = self.lstm2(out)
        # Decode hidden state of last time step
        out = self.fc(out[:, -1, :])
        out = out.view(nBatch, -1)
        return out

When I construct the model's instance, e.g.

regressor = StackRegressive(res_cube=res_classifier, inputSize=128, nHidden=[64, 32, 12], noutputs=12,
                            batchSize=args.cbatchSize, cuda=args.cuda, numLayers=2)

I am able to run the program. But occasionally, I get a bad_cast runtime error:

Traceback (most recent call last):
  File "./main.py", line 390, in <module>
    main(args)
  File "./main.py", line 384, in main
    trainClassifierRegressor(train_loader, bbox_loader, resnet, args)
  File "./main.py", line 322, in trainClassifierRegressor
    rloss.backward()
  File "/home/lex/anaconda2/envs/py27/lib/python2.7/site-packages/torch/autograd/variable.py", line 146, in backward
    self._execution_engine.run_backward((self,), (gradient,), retain_variables)
RuntimeError: std::bad_cast

Why does it fail during the backward() call? When I run the model on the CPU, I do not have this problem.
st117853
I am pretty new, so there is a very good chance I am wrong. Have you tried not doing the .cuda() in the class definition and instead doing it after you instantiate the class?

myawesomenet = StackRegressive()
myawesomenet.cuda()
st117854
That gives:

File "train.py", line 283, in trainClassifierRegressor
    regressor = regressor.cuda()
TypeError: 'bool' object is not callable
st117855
and if you just do : regressor.cuda() ? (no regressor = ) I am out of ideas after that.
st117856
The former is not an inplace transfer of an object to gpu. So it must be the latter, I am sure.
st117857
It is an in-place operation (unfortunately not obvious from the function name though). http://pytorch.org/docs/_modules/torch/nn/modules/module.html#Module
st117858
You can do this at the end of the __init__ function:

if torch.cuda.is_available():
    self.cuda()

instead of the multiple calls… However, I don't know if it's the source of the error.

EDIT: Also, it is probably not a good idea to have an attribute with the same name as a class method (self.cuda and self.cuda()).
st117859
Thank you! The self.cuda variable was the problem. Do not know why I did not foresee it would be an issue.
st117860
I noticed that, while training my model, the GPU usage constantly increases. The only different thing I do, compared to what I was already training on, is adding some different types of noise to my input batch:

idx_gaussian = np.random.choice(np.arange(4), 2, replace=False)
lst = np.array([0, 1, 2, 3])
idx_masking = np.setdiff1d(lst, idx_gaussian)

noisy_input = input_tensor.data
noisy_input = Variable(noisy_input)
noisy_input[idx_masking[0]] = self.noise1(noisy_input[idx_masking[0]])
noisy_input[idx_masking[1]] = self.noise1(noisy_input[idx_masking[1]])
noisy_input[idx_gaussian[0]] = self.noise2(noisy_input[idx_gaussian[0]])
noisy_input[idx_gaussian[1]] = self.noise2(noisy_input[idx_gaussian[1]])

and use the noisy_input tensor as my input. I tried adding the .data part since I read that I would be able to discard redundant info that way. This does, however, not work, and I end up with an out-of-memory error about 20 epochs in.
st117861
Hi everyone, I need to reconstruct an image tensor from its patches and a one-hot tensor by transpose convolution. Let's say the original image tensor, I, is 1x1x4x4 large:

(0 ,0 ,.,.) =
   0   1   2   3
   4   5   6   7
   8   9  10  11
  12  13  14  15

Its patches P (size 3x3, stride 1):

(0 ,0 ,.,.) =
   0   1   2
   4   5   6
   8   9  10
(1 ,0 ,.,.) =
   1   2   3
   5   6   7
   9  10  11
(2 ,0 ,.,.) =
   4   5   6
   8   9  10
  12  13  14
(3 ,0 ,.,.) =
   5   6   7
   9  10  11
  13  14  15

And I have a one-hot tensor K corresponding to P:

(0 ,0 ,.,.) =
  1  0
  0  0
(0 ,1 ,.,.) =
  0  1
  0  0
(0 ,2 ,.,.) =
  0  0
  1  0
(0 ,3 ,.,.) =
  0  0
  0  1

Transpose convolving K and P (as filter) reconstructs I', but it also sums up values from overlapping patches:

(0 ,0 ,.,.) =
   0   2   4   3
   8  20  24  14
  16  36  40  22
  12  26  28  15

To work around this, I record the overlap counts in a tensor O with the same size as I when making patches, using the helper function below, and at the end perform an elementwise division I' = I'/O.

def compute_overlaps(tensor, patch_size=(3, 3), patch_stride=(1, 1)):
    n, c, h, w = tensor.size()
    px, py = patch_size
    sx, sy = patch_stride
    nx = ((w-px)//sx)+1
    ny = ((h-py)//sy)+1
    overlaps = torch.zeros(tensor.size()).type_as(tensor.data)
    for i in range(ny):
        for j in range(nx):
            overlaps[:, :, i*sy:i*sy+py, j*sx:j*sx+px] += 1
    overlaps = Variable(overlaps)
    return overlaps

But I am wondering if the averaging could be done in a more efficient way?
st117862
Look at torch.unfold (it is an autograd-supported operation). You can extract patches from an image using that, and then convolve on the extracted patches. In the backward, it automatically computes the correct gradients. For example see https://github.com/pytorch/pytorch/pull/1523#issue-227526015
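A small sketch of what patch extraction with unfold looks like (my own example, not from the original reply):

import torch

img = torch.arange(0, 16).view(1, 1, 4, 4)     # the 4x4 example image from the question
patches = img.unfold(2, 3, 1).unfold(3, 3, 1)  # slide a 3x3 window with stride 1 over H and W
# patches has shape (1, 1, 2, 2, 3, 3): a 2x2 grid of 3x3 patches
patches = patches.contiguous().view(-1, 3, 3)  # (4, 3, 3), matching the P tensor above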
st117863
My code is like this:

transform = transforms.Compose([transforms.Scale(32, 32),
                                transforms.ToTensor(),
                                transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
                                transforms.ToPILImage()])
trainset = dset.ImageFolder(root='./imgs', transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2)

and when I iterate through the trainset I get the error: ValueError: unknown resampling filter. I have no idea how to wrap the transform functions, and I found no tutorial or documentation that explains this in detail.
st117864
You don't need the last transforms.ToPILImage(). See https://github.com/pytorch/examples for a few examples to help you.
st117865
When I use criterion = torch.nn.CrossEntropyLoss() to get the loss between the predicted value and the label, there is an error like this:

TypeError: CudaClassNLLCriterion_updateOutput received an invalid combination of arguments - got (int, torch.cuda.FloatTensor, !torch.cuda.FloatTensor!, torch.cuda.FloatTensor, bool, NoneType, torch.cuda.FloatTensor), but expected (int state, torch.cuda.FloatTensor input, torch.cuda.LongTensor target, torch.cuda.FloatTensor output, bool sizeAverage, [torch.cuda.FloatTensor weights or None], torch.cuda.FloatTensor total_weight)

I have checked my inputs many times; the two inputs seem fine to me, yet this error persists. Can anyone give some suggestions?
st117866
Hi, Did you check that the target tensor (which should contain class numbers, not one-hot vectors) is actually a torch.cuda.LongTensor? If your class numbers happen to be in a FloatTensor, you can convert them to a LongTensor with labels.long() (preferably before wrapping them in a Variable). Best regards Thomas
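A minimal sketch of the expected shapes and dtypes (my own example, not from the original reply):

import torch
import torch.nn as nn
from torch.autograd import Variable

criterion = nn.CrossEntropyLoss()

logits = Variable(torch.randn(8, 5).cuda())           # (batch, num_classes), FloatTensor
labels = torch.FloatTensor([0, 3, 1, 4, 2, 2, 0, 1])  # class indices accidentally stored as floats
targets = Variable(labels.long().cuda())              # convert to LongTensor before the loss
loss = criterion(logits, targets)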
st117867
Time counts on AWS when you're paying, and a lot of people have used this AMI so far, but I hadn't included PyTorch until today, as I'm now an ardent supporter of PyTorch. You can launch and get up and running with PyTorch in less than 5 minutes with DLAMI. It's open-source and entirely free to use. I originally started it because I was frustrated running commands to install stuff before I could get PyTorch running on a GPU. Simply search for DLAMI.V1 or ami-7e3a5b1e. It's currently only available in the Oregon region, but there are a lot of spot instances there, I believe. I will be expanding to more regions. Feel free to contribute here or head to the FAQs in case you run into anything, although it's unlikely: https://github.com/ritchieng/dlami Cheers