st98568
This might be something straightforward, but I'm not sure why I'm getting this error, since the documentation clearly shows there should be a BCELoss() class. Any ideas will be helpful! Thanks! Here is the code that generates the problem:

import torch
import torch.nn as nn

# Loss and optimizer
criterion = nn.BCEloss()
st98569
It seems the l is lowercase in your example. Try nn.BCELoss with an uppercase L.
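For reference, a minimal sketch of how nn.BCELoss is typically used; the shapes and the final nn.Sigmoid are illustrative assumptions, not taken from the original post:

import torch
import torch.nn as nn

criterion = nn.BCELoss()  # note the uppercase "L"

# BCELoss expects probabilities in [0, 1], so the model usually ends in a sigmoid
model = nn.Sequential(nn.Linear(10, 1), nn.Sigmoid())

x = torch.randn(4, 10)               # batch of 4 samples with 10 features each
target = torch.rand(4, 1).round()    # float targets of 0.0 or 1.0

loss = criterion(model(x), target)
loss.backward()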
st98570
Hi, I'm using the following two functions to find the accuracy of my semantic segmentation network. I found this code on GitHub and it seems to work, but I don't know exactly how. I am trying to understand what each line is doing. I have commented each line with what I think is going on; if my understanding is wrong, can you please correct me? The lines I need help understanding are values, indices = tensor.cpu().max(1) and incorrect = preds.ne(targets).cpu().sum().

def get_predictions(output_batch):
    bs, c, h, w = output_batch.size()  # size returns [batchsize, channels, rows, columns]
    tensor = output_batch.data
    values, indices = tensor.cpu().max(1)  # get the values and indices of the max values in every channel (dim=1), why are we finding the maximum value in RGB channels?
    indices = indices.view(bs, h, w)  # reshape it to this, as this is how 'targets' is shaped
    return indices

def error(preds, targets):
    assert preds.size() == targets.size()
    bs, h, w = preds.size()
    n_pixels = bs * h * w
    incorrect = preds.ne(targets).cpu().sum()  # I cannot find out what 'ne' is doing here and what we are summing
    err = incorrect.numpy() / n_pixels  # converted this tensor to numpy as the tensor was int and division was giving 0 every time
    # return err
    return round(err, 5)

Many Thanks
st98571
Solved by ptrblck in post #2.
st98572
Let's walk through the code using your explanations:

def get_predictions(output_batch):
    bs, c, h, w = output_batch.size()  # size returns [batchsize, channels, rows, columns]

    # Gets the underlying data. I would prefer to use .detach(), but that shouldn't be a problem here.
    tensor = output_batch.data

    # Gets the maximal value in every channel, right.
    # As this will most likely be your model's prediction, the channels correspond to the classes, i.e.
    # channel0 represents the logits of class0. indices will therefore contain the predicted class for each pixel location.
    values, indices = tensor.cpu().max(1)

    # .squeeze() would probably do the same.
    # Basically you want to get rid of dim1, which is a single channel now with the class predictions.
    indices = indices.view(bs, h, w)
    return indices

def error(preds, targets):
    assert preds.size() == targets.size()
    bs, h, w = preds.size()
    n_pixels = bs * h * w

    # You are comparing the predictions of your model with the target tensor element-wise
    # using the "not equal" operation. In other words, you'll get a ByteTensor with 1s for all pixel locations
    # where the predictions do not equal the target. Summing it gives you the number of falsely predicted pixels.
    incorrect = preds.ne(targets).cpu().sum()

    # Divide the number of incorrectly classified pixels by the number of all pixels.
    err = incorrect.numpy() / n_pixels
    return round(err, 5)

Let me know if some aspects are still unclear.
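A small, self-contained illustration of the two operations discussed above (the tensor values are made up for the example):

import torch

# a fake "logit" tensor of shape [batch=1, classes=3, h=2, w=2]
logits = torch.randn(1, 3, 2, 2)
values, indices = logits.max(1)   # indices holds the argmax over the class dimension
print(indices.shape)              # torch.Size([1, 2, 2])

preds = torch.tensor([[0, 1], [2, 2]])
targets = torch.tensor([[0, 2], [2, 2]])
incorrect = preds.ne(targets).sum()  # element-wise "not equal", then count the mismatches
print(incorrect.item())              # 1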
st98573
Good afternoon, I am using torch.clamp to clip a tensor between its minimum and 1e-10, but it seems to be very slow. Is there an alternative, or a better way to do it? I am doing: torch.clamp(dist, torch.min(dist), 1e-10)
st98574
To save time, don’t take the min! torch.clamp(dist, max=1e-10) I don’t think you can improve clamp by much (unless fusing it with other pointwise ops, which the jit will do). Best regards Thomas
st98575
I am getting the following error: RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time. I am moving my code from 0.2.0.4 to 0.4.1. With a few version checks the code runs on both for a few epochs, but eventually 0.4.1 fails with this error. I've seen a couple of other posts where this can be related to custom Variables, of which I have many. Just posting the question to see if anyone has thoughts on how to begin looking at the differences between versions before I go crazy looking at every single Variable in my system.
st98576
Solved by albanD in post #2.
st98577
Hi, I think the best practice is to just remove all Variables and .data from your code. Variables are not needed anymore, and you can pass requires_grad=True when you create a tensor if needed. The .data should be replaced either by .detach(), if the goal is to prevent gradients from flowing back, or by moving the operations inside a with torch.no_grad(): block, if the goal is to perform operations not tracked by the autograd engine. The error you're seeing is most likely due to a change in the Variable wrapping and .data semantics. Removing them with the right tool should fix it.
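A minimal before/after sketch of the migration described above (the variable names are illustrative, not from the original code):

import torch

# old (pre-0.4) style
# x = torch.autograd.Variable(torch.randn(3), requires_grad=True)
# frozen = x.data * 2

# 0.4+ equivalent
x = torch.randn(3, requires_grad=True)

frozen = x.detach() * 2      # gradients will not flow back through "frozen"

with torch.no_grad():        # or: run untracked operations in a no_grad block
    frozen2 = x * 2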
st98578
Thanks for the info! I'm in the process of doing that. Should I also remove Parameters, or are those still OK?
st98579
Hi, No, nn.Parameters remain the same. They are used by the nn internals to detect parameters for .parameters()-like operations!
st98580
Awesome! Getting rid of all the Variables and replacing all the .datas with "with torch.no_grad():" makes it all run! I still have to check it's doing the same thing, but no more errors. Thanks!
st98581
This may be a bit of a random question, but it relates to the inputs I want to give to my neural network. Does anyone know how to create a 2-dimensional lognormal distribution and then visualize it as a 3D surface plot in Python? I need to do this to understand a component of my network. Any help is gratefully appreciated.
st98582
Solved by LinjX in post #8.
st98583
First you need to create a function logNormal(x, y) that returns the value. Then use the matplotlib module like this example:

from matplotlib import pyplot as plt
import numpy as np
from mpl_toolkits.mplot3d import Axes3D

fig = plt.figure()
ax = Axes3D(fig)
X = np.arange(-4, 4, 0.25)
Y = np.arange(-4, 4, 0.25)
X, Y = np.meshgrid(X, Y)
R = np.sqrt(X**2 + Y**2)
Z = np.sin(R)
ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap='rainbow')
plt.show()

Please test this script first, then edit it to R = logNormal(X, Y), and finally ax.plot_surface(X, Y, R, ...)
st98584
Thank you for your reply, that approach makes a lot of sense, but I'm confused as to how I would create the function that returns the lognormal value for an (x, y) position. Could you lend some help in that regard? I understand how to visualize it; I just don't know how to create that function.
st98585
dx = 90 - (-90)
dy = 90 - (-90)
c = [dx + dx/2.0, dy + dy/2.0]
z = np.zeros((400, 400))
x = np.linspace(-90, 90, 400)
y = x.copy()
for i in range(len(x)):
    for j in range(len(y)):
        p = [x[i], y[j]]
        d = math.sqrt((p[0]-c[0])**2 + (p[1]-c[1])**2)
        t = d
        z[i][j] = lognorm.pdf(t, 1.2)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(x, y, z, cmap='viridis')
plt.show()

This is the code that I have so far, but it produces an image like the one I have attached. This isn't what I'd like; the whole plane is skewed rather than just the z values. Any ideas as to how I can fix this? (attached image: "testing", 690×355)
st98586
Ideally I'd like it to look something like the attached image. Any help from you smart people is highly appreciated; I'm in a bit of a time crunch with this.
st98587
Sorry for replying late. I can draw a figure like the attached one (image.jpg, 756×578) using this code:

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import copy

def Goldstein_price(x, y):
    '''
    range: [-2, 2]
    f(0,-1)=3
    '''
    A = (1 + ((x+y+1)**2) * (19 - 14*x + 3*(x**2) - 14*y + 6*x*y + 3*(y**2)))
    B = (30 + ((2*x-3*y)**2) * (18 - 32*x + 12*(x**2) + 48*y - 36*x*y + 27*y**2))
    return A*B

def main():
    max_rg = 2
    num = 200
    rg = np.arange(-max_rg, max_rg, float(max_rg)*2/num)
    result = draw(rg, Goldstein_price)

if __name__ == '__main__':
    main()
st98588
Thank you!! How could I shift this such that the peak is at a different position, say (-0.5, 0)?
st98589
If you want to draw the figure on the horizontal range [-1, 1] (actually a square), you can change max_rg to 1, and num means how many points to draw from -1 to 1.
st98590
I understand that, but can I shift it such that the peak value (the highest z value) is centered at a different position?
st98591
Yes you can. What you need to do is use two ranges for x and y: if you want to center on (-0.5, 0), then y is range(-n, n) and x can be range(-1, 0).
st98592
I see! Thank you! Would you mind plotting this using the plt library? I tried using your code and I get an error about the draw() function.
st98593
def draw(rg, func):
    fig = plt.figure()
    ax = Axes3D(fig)
    X = copy.deepcopy(rg)  # np.arange(-5, 5, 5/100.0)
    Y = copy.deepcopy(rg)  # np.arange(-5, 5, 5/100.0)
    X, Y = np.meshgrid(X, Y)
    R = func(X, Y)
    ax.plot_surface(X, Y, R, rstride=1, cstride=1, cmap='rainbow')
    plt.show()
    return R

The missing part of my code.
st98594
One final request, I'm very sorry for this. Could you show me an example of shifting the peak point to near the center of the meshgrid? Thank you very much!!
st98595
The code above draws a figure in the range x[-2, 2] and y[-2, 2]. I changed it to x[-1, 1] and y[-2, 2] by editing draw() like this:

X = np.arange(-1, 1, 2/100.0)
Y = copy.deepcopy(rg)  # np.arange(-5, 5, 5/100.0)

Hope this helps you.
st98596
Thank you, I'm going to try to use it, but I still can't seem to shift the peak to the centre position, like in the attached image.
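Since the thread doesn't show a final answer, here is one hedged sketch of shifting the distribution, built on the same distance-based lognorm.pdf construction used earlier in the thread; the chosen center (-0.5, 0) and the shape parameter 1.2 are assumptions:

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from scipy.stats import lognorm

cx, cy = -0.5, 0.0                    # desired center (assumed)
x = np.linspace(-2, 2, 200)
y = np.linspace(-2, 2, 200)
X, Y = np.meshgrid(x, y)

# distance of every grid point from the chosen center
d = np.sqrt((X - cx)**2 + (Y - cy)**2)
# radially symmetric lognormal "bump" around (cx, cy);
# note lognorm.pdf(0) is 0, so the maximum forms a narrow ring around the center,
# and moving (cx, cy) moves the whole structure with it
Z = lognorm.pdf(d, 1.2)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(X, Y, Z, cmap='viridis')
plt.show()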
st98597
I have a set of networks out of which only some are used in one forward pass, and hence only their weights should be updated by a call to backward. The caveat is that which of these networks are selected for the forward pass depends on the input, and hence when initialising the optimiser I am passing the parameters of all the networks. This creates a problem when using an optimiser like Adam, which keeps running averages and would update weights even when their grads are 0. For example: if N1 & N2 are used for the first input, then their grad is initialised to a number. If for the next input networks N2 & N3 are used, then naively taking an optimiser step after zero_grad won't prevent updates in N1, since its gradients would be 0, not None. In the code of Adam, a network isn't updated only if its grad is None, which is as expected. But to solve the issue above, I believe it would be useful to have something like a none_grad function for the optimiser and networks. Suggestions for any alternative methods to do this task are welcome.
st98598
No such function exists. I am sure it could be implemented easily at the nn.Module level, like zero_grad() is done. @smth do you think this is something we want in the core, or more of a user-specific usage?
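A sketch of what such a helper could look like, modeled on how nn.Module.zero_grad() iterates over parameters; the function name none_grad is hypothetical, not an existing PyTorch API:

import torch.nn as nn

def none_grad(module: nn.Module):
    # Setting .grad to None (instead of zeroing it) makes optimizers such as Adam
    # skip these parameters entirely on the next step() call.
    for p in module.parameters():
        p.grad = None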
st98599
Would it be possible to create an optimizer for each model and then select the model with the corresponding optimizer using your condition?
st98600
Yes, but that would defeat the purpose of autograd in providing an easy backward pass. The networks are selected in the forward pass based on some characteristics of the input, so I would have to repeat these computations to select the corresponding optimisers for each input. Having a function that makes all grads None keeps the code simple.
st98601
I am trying feature visualization, and noticed a curious phenomenon. when i compare the norm of the gradient w.r.t to a input with the gradient w.r.t. a clone of this, i am getting different answers. I have not been able to figure this out. I am pasting the google colab code here: imports etc., initial code: !pip install --no-cache-dir -I pillow # http://pytorch.org/ from os.path import exists from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag()) cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/' accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu' !pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision import torch import torchvision import numpy as np from matplotlib import pyplot as plt import time import pdb !git clone http://github.com/tumble-weed/images import os os.listdir('images') from skimage import io from PIL import Image The meat of the code: im = io.imread('images/ILSVRC2012_val_00000013.JPEG') model = torchvision.models.vgg16(pretrained=True) model.eval() s = 224 mean = [0.5,0.5,0.5] std = [0.225,0.225,0.225] transform = torchvision.transforms.Compose([torchvision.transforms.Resize((s,s)), torchvision.transforms.ToTensor(), torchvision.transforms.Normalize(mean=mean,std=std)]) ref = transform(Image.fromarray(im)) class fwdHook(): def __init__(self): self.feat = None def hook(obj,input,output): self.feat = output pass self.hook = hook pass model_layers = [l for n in model.children() for l in n] hooks = [fwdHook() for l in model_layers] hooked_layers = [l.register_forward_hook(h.hook) for l,h in zip(model_layers,hooks)] if False: print(hooks[0].feat) model(ref.unsqueeze(0)) if False: print(hooks[0].feat) mag = 10 x_ = mag*np.random.randn(*ref.unsqueeze(0).shape).astype(np.float32) x = torch.from_numpy(x_) x = torch.autograd.Variable(x,requires_grad = True) lidx = 3 model(ref.unsqueeze(0)) ref_feat = hooks[lidx].feat model(x) x_feat = hooks[lidx].feat get_dist_from_ref = lambda feat:torch.sum((ref_feat - feat)**2)/torch.sum((ref_feat)**2) loss = get_dist_from_ref(x_feat) print(loss) # loss.zero_grad() loss.backward(retain_graph=True) x_grad = x.grad.clone() print(torch.norm(x_grad)) im_x_grad = x_grad.permute(0,2,3,1)[0] im_x_grad = im_x_grad.detach().cpu().data.numpy() # print(x_grad) '''----------------------''' xx = x.clone() xx = torch.autograd.Variable(xx,requires_grad = True) model.forward(xx) xx_feat = hooks[lidx].feat loss_xx_ref = torch.dist(ref_feat,xx_feat)/torch.norm(ref_feat.view(-1)) print(f'loss_x2_ref {loss_xx_ref}') loss_xx_ref.backward() xx_grad = xx.grad.clone() print(xx_grad.norm()) Output: Downloading: "https://download.pytorch.org/models/vgg16-397923af.pth" to /root/.torch/models/vgg16-397923af.pth 100%|██████████| 553433881/553433881 [00:05<00:00, 97564919.92it/s] tensor(543.5735, grad_fn=<DivBackward1>) tensor(0.4398) loss_x2_ref 23.292146682739258 tensor(0.0094)
st98602
If you make your example more minimal, it will be easier to see. Also, I’d recommend against using Variable (and against using torch versions where you have to). Then, xx = x.clone() will give you xx that is connected to x for the backward, i.e. losses calculated from xx will backward into x, too, whereas losses calculated from x will only show in x. (This is cumulative, i.e. all gradients from backwards are added.) On the other hand, xx = x.detach().clone().requires_grad_() will give you something that is completely separate. Best regards Thomas
st98603
Thanks. I thought it would be better to give executable code; I will give a snippet next time. My problem seems to have been solved after doing x.grad.data.zero_(); it seems that the gradients were accruing in the variable of interest. I will do some more tests to verify. Thanks for the info on clone, it was causing a short circuit of gradients. A question: why allow the clone to affect the original gradients? If someone wanted x to be connected to it, they would have used x, not a clone of it. I was thinking it was just a numeric copy of x, but otherwise disconnected from it, like in Theano.
st98604
tumble-weed: I thought it would be better to give executable code, will give snippet next time. Ideally, you would come up with a minimal runnable example. If you look at the time economics of a forum like this, when we all do that it helps all of us get better answers to our questions, because the “answer time” is a relatively scarce resource. tumble-weed: my problem seems to have gotten solved after doing x.grad.data.zero_(), Usually you would call opt.zero_grad() or model.zero_grad(). tumble-weed: why allow the clone to affect the original gradients? Because people want that and use .detach() when they don’t: .clone() means “different memory” but connected in autograd, .detach() means “disconnect in autograd”, but same memory. Best regards Thomas
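A small demonstration of the difference described above (toy tensors, assumed for illustration):

import torch

x = torch.ones(3, requires_grad=True)

# .clone(): new memory, but still connected to x in autograd
y = x.clone()
y.sum().backward()
print(x.grad)        # tensor([1., 1., 1.]) - the gradient flowed back into x

x.grad.zero_()

# .detach().clone(): a completely separate leaf tensor
z = x.detach().clone().requires_grad_()
z.sum().backward()
print(x.grad)        # tensor([0., 0., 0.]) - x is unaffected
print(z.grad)        # tensor([1., 1., 1.])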
st98605
There is a precision difference between the convolutions executed on CPU and GPU using Conv2d(). In the worst case, the results of a forward pass on GPU and CPU are identical only up to 3 digits. If the output channel count is 1, summing over the different channels is required, which ends up in even lower precision; when the number of input channels is greater than zero and output channels and groups are one, the results of the forward pass on GPU and CPU are identical only up to 1 digit. What is the reason for that?
st98606
That sounds like too big a difference. Do you check that on a single conv layer? Could you send a small code sample to reproduce this please?
st98607
Yes sure. I just check on a single conv layer (e.g., Conv2d) with random hyperparameters. num_iter = 6000 torch.set_printoptions(precision=6) for i in range(num_iter): padVal = round(random.uniform(0, 10), 6) padAmount = randint(1, 5) # starts from 1 weights = round(random.uniform(0, 10), 6) dilated = randint(1, 4) size_input = randint(15, 25) size_kernel = randint(1, 5) channel = randint(2, 5) input_channel = 1 output_channel = 1 group_num = 1 stride_num = randint(1, 5) print ('Pad value: %6f Pad amount: %s Weights: %6f Size input: %s Size kernel: %s Dilated: %s' %(padVal, padAmount, weights, size_input, size_kernel, dilated)) # random inputs non_padded_input = torch.randn(1, input_channel, size_input, size_input) padder = nn.ConstantPad2d(padAmount, padVal) padded_input = padder(non_padded_input) # now declare nets net_gpu = nn.Conv2d(input_channel, output_channel, size_kernel, padding = 0, stride = stride_num, dilation = dilated, groups = group_num, bias = 0).cuda() net_cpu = nn.Conv2d(input_channel, output_channel, size_kernel, padding = 0, stride = stride_num, dilation = dilated, groups = group_num, bias = 0) # can be outside # initialize weights with same random value net_gpu.weight.data.fill_(weights).cuda(); net_cpu.weight.data.fill_(weights); # forward pass output_gpu = net_gpu(padded_input.cuda()).cuda() output_cpu = net_cpu(padded_input) # compare with threshold of second argument in inner function if torch.all(torch.lt(torch.abs(torch.add(output_gpu, -output_cpu.cuda())), 1e-3)): pass else: print('!!!!!!!!!!!!!!!!! BUG IN CODE !!!!!!!!!!!!!!!!!!!!') #print ('Output gpu:'); print (output_gpu); print ('Output cpu:');print (output_cpu); assert(False) print ('Test is done, good to go!')
st98608
Hi, I ran your code sample 10 times and it always returned no issue. Do you have a special setting where it fails for you? It seems to work fine on my install.
st98609
Hi, I'm trying to create a model that is a kind of ensemble of more than 1000 small models. Each small model takes the same vector as input, but processes it with a different mask (input vectors are very sparse).

class MiniModel(nn.Module):
    def __init__(self, clusters_set):
        super(MiniModel, self).__init__()
        self.fc = nn.Sequential(
            nn.Linear(len(clusters_set), 1),
            nn.Sigmoid()
        )
        self.mask = nn.Parameter(self.get_cluster_mask(clusters_set), requires_grad=False)
        ....

    def apply_mask(self, x):
        mask = self.mask.expand(x.shape[0], self.mask.shape[-1])
        return torch.masked_select(x, self.mask).view(x.shape[0], len(self.mask.nonzero()))

    def forward(self, x):
        x = self.apply_mask(x)
        x = self.fc(x)
        return x

mask is a binary tensor which is different for every MiniModel. Then there is a model which creates them and ensembles them together:

class Tree(nn.Module):
    def __init__(self, settings_dict):
        super(Tree, self).__init__()
        self.tree = {}
        for category, clusters in settings_dict.items():
            self.tree[str(category)] = MiniModel(clusters)
        self.tree_nn = nn.ModuleList(self.tree.values())

    def forward(self, x):
        return torch.cat([model(x.clone()) for model in self.tree_nn], dim=1)

Question: the problem is that the training process is super inefficient - because the number of models is so big, it trains very slowly. Is there any possibility to run the forward pass through all of the MiniModels in Tree simultaneously, just like one big vectorized multiplication?
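No answer appears in the thread, but here is one hedged sketch of the kind of vectorization the question asks about: stack all per-model masks and weights into single tensors so the whole ensemble becomes one matrix multiplication. The class name and shapes below are assumptions for illustration, not the original model:

import torch
import torch.nn as nn

class VectorizedEnsemble(nn.Module):
    def __init__(self, masks, in_features):
        # masks: float tensor of shape [n_models, in_features] with 0/1 entries (assumed)
        super().__init__()
        n_models = masks.shape[0]
        self.register_buffer("masks", masks)
        # one weight row and one bias per mini-model
        self.weight = nn.Parameter(torch.randn(n_models, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(n_models))

    def forward(self, x):
        # x: [batch, in_features]
        # out[b, m] = sigmoid( sum_f x[b, f] * masks[m, f] * weight[m, f] + bias[m] ),
        # which matches a per-model Linear over the masked feature subset
        masked_weight = self.masks * self.weight   # [n_models, in_features]
        out = x @ masked_weight.t() + self.bias    # [batch, n_models]
        return torch.sigmoid(out)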
st98610
Hi, I'm trying to do something that I don't know is possible. I have 2 parts in my network: 1. a 3D CNN, 2. another part (not important for now). The 3D CNN gets 4 frames and outputs a vector of 256, which represents the 4 frames. Now I want to use a big batch, because I want to train the second part of the network (the CNN is pretrained), but I don't have enough memory. The problem is that the input to the CNN is too big, so I thought of doing a loop only over the CNN, and in each iteration, instead of a batch of 64, running a batch of 16 (just an example). But that means I'm getting new inputs while my second part is not running, which I don't think is possible. Another option I thought of is to load the batch of 64 (i.e. 64*4 = 256 frames), save it somewhere and then run a few at a time in a loop, but I'm not sure I'll have memory for this either. Hope that you'll have something in mind. Thanks a lot!
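The thread has no answer here, so treat this as a hedged sketch with assumed names and shapes: since the 3D CNN is pretrained and frozen, it can be run chunk by chunk under torch.no_grad() and the 256-d features concatenated before feeding the trainable second part:

import torch

def extract_features(cnn3d, frames, chunk_size=16):
    # frames: [64, C, 4, H, W] (assumed layout); cnn3d returns [chunk, 256]
    feats = []
    with torch.no_grad():                 # no graph is kept, so memory stays low
        for chunk in frames.split(chunk_size, dim=0):
            feats.append(cnn3d(chunk))
    return torch.cat(feats, dim=0)        # [64, 256]

# features = extract_features(cnn3d, frames)
# loss = criterion(second_part(features), targets)   # only second_part receives gradients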
st98611
Hello, I want to evaluate results on the COCO test set. To do this, the detected results need to be written in JSON format. Is there a PyTorch API for COCO evaluation? Thank you
st98612
The recently released maskrcnn-benchmark by FAIR (@fmassa) has some testing functions using the COCO dataset. Maybe you could use some of these functions or re-use some snippets.
st98613
I want to train a densenet121 model from the torchvision package, but the error message indicates that a module name can't contain '.'. I checked the code and found that self.add_module('norm.1', nn.BatchNorm2d(num_input_features)) seems to be the source of the error. How can I solve this problem and import the pretrained model correctly? Thank you.
st98614
You might be using an older version of torchvision, since this issue was fixed some time ago. Could you install the current version and try it again?
st98615
I would like to know what are the best practices/tools to visualize what happens during loss.backward(). I am confident in my understanding of the forward pass of my model, how can I control its backward pass?
st98616
You can try to set up hooks on your layers to see what's happening. That might be a good start, as well as learning about backpropagation.
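A minimal sketch of the kind of hooks meant here, using a tensor hook and module backward hooks to print gradients while loss.backward() runs; the toy model is an assumption for illustration:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

# module hook: fires once the gradients for this layer have been computed
def print_grad_norm(module, grad_input, grad_output):
    print(module.__class__.__name__, "grad_output norm:", grad_output[0].norm().item())

for layer in model:
    layer.register_backward_hook(print_grad_norm)

x = torch.randn(2, 4, requires_grad=True)
# tensor hook: fires when x's gradient is computed
x.register_hook(lambda g: print("input grad:", g))

loss = model(x).sum()
loss.backward()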
st98617
Ok, thanks, I’ll look into what hooks are. I have a good theoretical knowledge of back-propagation. My question is a practical one: How can I easily/efficiently track the operations performed during loss.backward()?
st98618
What’s the difference between torch.jit.trace and torch.onnx.export ? Can’t libtorch support training ?
st98619
Hi, I have just started working with neural networks. I am having trouble mapping the concepts given in papers to code. I have some questions:
1. Does the nn module in PyTorch accept a batch as its input by default? I.e. I prepare my input in the form batch x features x feature_length, and then when defining my network class I can just ignore the batch dimension?
2. Isn't a seq2seq model just a normal LSTM with a softmax decoder? I can put another layer before the softmax and it would still be an encoder, so what is the difference between seq2seq and just more layers?
3. How do I implement attention - is it a layer? I think in Keras they just take the dot product of the hidden states of the LSTM to get scores, which are then multiplied back into the inputs and added to the final hidden state. Is attention any different? I don't think it has any tunable parameters, but still, in many places in PyTorch it is implemented as a layer, i.e. a class. Why and how is that, and what is the benefit?
4. How do I implement self-attention? Here I don't even have a clue. Is it a linear layer? How do I do this?
This is very hard, it appears; thanks for helping! Even some pointers would be great.
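Since the thread has no answer, here is a hedged sketch of the dot-product attention described in point 3: scores from a dot product between the decoder state and the encoder states, softmaxed and used to weight the encoder states. All names and shapes are assumptions; this is the parameter-free variant, which is why it can live either in a plain function or wrapped in an nn.Module:

import torch
import torch.nn.functional as F

def dot_product_attention(decoder_state, encoder_states):
    # decoder_state:  [batch, hidden]
    # encoder_states: [batch, seq_len, hidden]
    scores = torch.bmm(encoder_states, decoder_state.unsqueeze(2)).squeeze(2)  # [batch, seq_len]
    weights = F.softmax(scores, dim=1)                                         # attention weights
    context = torch.bmm(weights.unsqueeze(1), encoder_states).squeeze(1)       # [batch, hidden]
    return context, weights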
st98620
Hi all, I wrote an in-depth tutorial on machine learning last year which went viral, getting almost 3K stars on GitHub and making me the #1 trending developer on GitHub for a week! A lot of people found it useful, but it was written in Keras at the time. Now it has been upgraded to PyTorch. It was based off tutorials I wrote for a class at Harvard University while I was a TA there. You can find the PyTorch version here: https://spandan-madan.github.io/DeepLearningProject/docs/Deep_Learning_Project-Pytorch.html and the accompanying IPython notebook here: https://github.com/Spandan-Madan/DeepLearningProject/blob/master/docs/Deep_Learning_Project-Pytorch.ipynb
st98621
In my network, there are two models A and B. It runs like:

a = A(input)
b = B(a)

I want to freeze model A and only train model B. I want to know: is this enough to freeze model A, which has batchnorm layers and dropout layers?

for param in A.parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(B.parameters()……)

During my tests, I found this code is also needed: A.eval(). When I tell it to my friends, we are both confused. We think the parameters in batchnorm layers will not be changed if we don't optimize them (that means use an optimizer). Batchnorm layers perform differently in eval and train mode. How about their parameters? Do the parameters change along with each prediction?
st98622
Solved by tom in post #2.
st98623
There are three things to batchnorm:
1. (Optional) Parameters (weight and bias, aka scale and location, aka gamma and beta) that behave like those of a linear layer, except they are per-channel. Those are trained using gradient descent, and by disabling gradients you inhibit their updates.
2. There are (optional again) the running mean and variance, which are a form of average over the batch statistics for each channel. These are not parameters but buffers, and are updated during the forward pass when batch norm is used in training mode. They don't require grad. They don't affect the outputs in training mode, but do change while you feed data through the layer in training mode.
3. In training mode, the batch statistics are taken and each channel is mean/variance standardized. In eval mode, the running mean and variance are used in place of the batch statistics to "standardize" the input.
Best regards Thomas
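Putting the thread's conclusion into a short sketch - freeze model A's parameters and keep it in eval mode so that the batchnorm running statistics (and dropout) are also frozen; the model names follow the question:

for param in A.parameters():
    param.requires_grad = False   # stops gradient updates of weight/bias (incl. batchnorm gamma/beta)

A.eval()                          # use the running stats and stop updating them; also disables dropout

optimizer = torch.optim.Adam(B.parameters())

# note: call A.eval() again after any .train() call that might flip A back to training mode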
st98624
Thanks a lot! I know what happened: model A is actually loaded from state_dict_A. Then I train model B. Finally I save model B as new_B. After that, I load state_dict_A and new_B. As a result, the prediction changes because I hadn't completely frozen model A. I will never forget to use model.eval() when I have to freeze it.
st98625
I installed CUDA 9.0 and cuDNN 7.3, but torch.backends.cudnn.version() gives 7005. May I ask how to fix it? I am using an RTX 2080 Ti and it gives me a CUDNN_STATUS_EXECUTION_FAILED error, so I suspect it may be the cuDNN version. Thanks!
st98626
I am trying to integrate PyTorch into an existing project. I needed to rename the package to avoid a clash with Torch 7. (I know…) I am getting confused by three undefined references. I have gone so far as to manually reference all of the .so files by absolute path in CMake and have confirmed that one of them contains the first undefined reference using nm, but my linker still gives me undefined references to:

torch::jit::load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
c10::Symbol::fromQualString(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)

Does anybody have any suggestions? Converting the existing Torch 7 models to PyTorch is sadly non-trivial, and I'm not sure that's even the cause. (I do of course get even more torch-related undefined references if I don't link manually to the .so files, so that part is working.)
st98627
Hi, I’m not sure about the undefined reference, but to convert an old Torch7 model that you saved, you can just do torch.load_lua("your_file.t7") to load it as a pytorch nn.Module.
st98628
I'm looking to get the byte-decomposed values of a Float32 tensor on GPU. Would anyone know how to do this? In numpy, this would be:

float_nums = [0.16474093, -0.06143471, 0.09829687]
float_arr = np.array(float_nums, dtype=np.float32)
uint8_arr = float_arr.view(np.uint8)
# uint8_arr is now 4 times the length,
# and I can perform various bit operations,
# like bit-masking the float mantissa, etc.

PyTorch's bit operators (^/&/etc.) require ByteTensors - this is my underlying use case. Unfortunately, the .byte() function doesn't reinterpret the same binary bytes; instead it uses the floating point value, rounds it and wraps around to the nearest byte value. For example, float_arr.byte() in the above snippet would result in an all-zeroes tensor. I could interconvert to numpy and back, but these tensors are on GPU, and shuffling all these tensors back and forth to main memory is wasteful and begins to dominate my inference and training time. Any suggestions on how to achieve this functionality would be appreciated. Thanks.
st98629
Hi, I'm afraid there is no way to do this at the moment. Would a reinterpret function be an interesting feature, @smth?
st98630
Thanks for the quick response. If this is the case, I might look into writing a CUDA extension for this little bit manipulation and the feasibility of doing the re-interpreting inside a custom kernel. If so desired, I could look into extending this to a more general reinterpret bytes to submit upstream.
st98631
@albanD yes a reinterpret function would be nice, though not sure how far the rabbit-hole goes in implementing it
st98632
Hi, Yes, a simple CUDA extension with the new cpp extension should be very simple to do, and the easiest way to do this. @smth There is definitely no API that lets you do that easily. I'll take a look when I have a bit of free time and I'm done with the hook thing.
st98633
I am getting this error: "RuntimeError: cuda runtime error (8) : invalid device function at /pytorch/torch/lib/THC/THCTensorCopy.cu:204" and also the message: "THCudaCheck FAIL file=/pytorch/torch/lib/THC/THCTensorCopy.cu line=204 error=8 : invalid device function". In a different thread, I was told that PyTorch will take care of properly installing everything needed for CUDA if it is available on a computer. So my code simply checks if CUDA is available and then either uses CUDA or not. This seems to work on some computers, but not on the one where I get the error message above. This seems to indicate that PyTorch first tells me that CUDA is available but then has a problem actually using it?
st98634
I have met the same problem. It might be because I am using an Nvidia GeForce 840M with 8GB, whose compute capability is 5.0. It is strange because I am using pytorch-cpu, which doesn't need CUDA.
st98635
Hey everyone, I have a feeling I am going to be straight out of luck on this one, but thought I’d throw it out there and see. System Info: Cuda Version: 9.0.176 Cudnn Version: 7 OS: CentOS Linux 7 Pytorch: 0.4.1 Python: 3.6 I am encountering segmentation faults when I try to use torch.utils.checkpoint.checkpoint in a DataParallel module across multiple GPUs. Didn’t find anything for this specific problem so thought I’d create a new thread, apologies if I’ve missed something. Posted a demo script here: dp_segfault.py 3 The diamond pattern comes from the model I’m basing this on - it’s based on this paper Convolutional Neural Fabrics - so I can’t just use the normal sequential. There seem to be a couple of weird things: Running the script on 1 GPU works fine. Running the script without any checkpointing works fine Running the script without the diamond pattern (i.e. conv0(x) -> checkpoint(conv1, x) -> conv3(x)) works fine Faulthandler output: Current thread 0x00007f7bc2dff700 (most recent call first): File "/home/{user}/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 90 in backward File "/home/{user}/anaconda3/lib/python3.6/site-packages/torch/utils/checkpoint.py", line 45 in backward File "/home/{user}/anaconda3/lib/python3.6/site-packages/torch/autograd/function.py", line 76 in apply File "/home/{user}/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 90 in backward File "/home/{user}/anaconda3/lib/python3.6/site-packages/torch/utils/checkpoint.py", line 45 in backward File "/home/{user}/anaconda3/lib/python3.6/site-packages/torch/autograd/function.py", line 76 in apply Thread 0x00007f7bc3ffe700 (most recent call first): Thread 0x00007f7bc47ff700 (most recent call first): Thread 0x00007f7c57b1a740 (most recent call first): File "/home/{user}/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 90 in backward File "/home/{user}/anaconda3/lib/python3.6/site-packages/torch/tensor.py", line 93 in backward File "dp.py", line 62 in <module> Segmentation fault gdb output: Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 0x7fff649ff700 (LWP 39737)] std::__push_heap<__gnu_cxx::__normal_iterator<torch::autograd::FunctionTask*, std::vector<torch::autograd::FunctionTask> >, long, torch::autograd::FunctionTask, __gnu_cxx::__ops::_Iter_comp_val<torch::autograd::CompareFunctionTaskTime> > (__first=..., __holeIndex=1, __topIndex=__topIndex@entry=0, __value=..., __comp=...) at /opt/rh/devtoolset-3/root/usr/include/c++/4.9.2/bits/stl_heap.h:129 129 /opt/rh/devtoolset-3/root/usr/include/c++/4.9.2/bits/stl_heap.h: No such file or directory. Obviously the latter part gives some pointers - I probably need to find and install stl_heap.h - but I don’t have admin access (university cluster) so I’d like to really understand what needs to be done, and why this is happening, before going and pestering the sysadmins.
st98636
Glad to know it wasn't just me being dumb. For the benefit of anyone who reads this later (https://xkcd.com/979/), my hacky solution was to checkpoint all layers in the model. Not ideal, but it seems to get the job done, and the run time is not -too- bad.
st98637
Just downloaded the jupyter notebook for deep Q-learning and I would get error for the code block that is suppose to extract and process the rendered images from the environment. --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) <ipython-input-7-db314a5502f8> in <module>() 39 env.reset() 40 plt.figure() ---> 41 plt.imshow(get_screen().cpu().squeeze(0).permute(1, 2, 0).numpy(), 42 interpolation='none') 43 plt.title('Example extracted screen') <ipython-input-7-db314a5502f8> in get_screen() 14 15 def get_screen(): ---> 16 screen = env.render(mode='rgb_array').transpose( 17 (2, 0, 1)) # transpose into torch order (CHW) 18 # Strip off the top and bottom of the screen C:\Anaconda3\lib\site-packages\gym\envs\classic_control\cartpole.py in render(self, mode) 105 if self.viewer is None: 106 from gym.envs.classic_control import rendering --> 107 self.viewer = rendering.Viewer(screen_width, screen_height) 108 l,r,t,b = -cartwidth/2, cartwidth/2, cartheight/2, -cartheight/2 109 axleoffset =cartheight/4.0 C:\Anaconda3\lib\site-packages\gym\envs\classic_control\rendering.py in __init__(self, width, height, display) 49 self.width = width 50 self.height = height ---> 51 self.window = pyglet.window.Window(width=width, height=height, display=display) 52 self.window.on_close = self.window_closed_by_user 53 self.isopen = True C:\Anaconda3\lib\site-packages\pyglet\window\__init__.py in __init__(self, width, height, caption, resizable, style, fullscreen, visible, vsync, display, screen, config, context, mode) 502 503 if not screen: --> 504 screen = display.get_default_screen() 505 506 if not config: C:\Anaconda3\lib\site-packages\pyglet\canvas\base.py in get_default_screen(self) 71 :rtype: :class:`Screen` 72 ''' ---> 73 return self.get_screens()[0] 74 75 def get_windows(self): C:\Anaconda3\lib\site-packages\pyglet\canvas\base.py in get_screens(self) 63 :rtype: list of :class:`Screen` 64 ''' ---> 65 raise NotImplementedError('abstract') 66 67 def get_default_screen(self): NotImplementedError: abstract <matplotlib.figure.Figure at 0x202fd020278> I am not sure what I need to do can some one please help? Link to the tutorial where you can download the notebook. https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html#sphx-glr-download-intermediate-reinforcement-q-learning-py 4
st98638
This is the issue and how to fix it. If you don't want to read the thread: the issue is caused by pyglet not playing nice with Jupyter notebooks. You either have to downgrade to 1.2.4 or do this: after some checking, in python3.6/site-packages/pyglet/__init__.py change

if 'sphinx' in sys.modules:
    setattr(sys, 'is_epydoc', True)

to

if 'sphinx' in sys.modules:
    setattr(sys, 'is_epydoc', False)

pyglet has a problem with Jupyter: Jupyter imports sphinx by default, and a sphinx import leads pyglet to think it is generating documentation, so it cannot find the display correctly. https://github.com/openai/gym/issues/775
st98639
I am trying to run a simple RNN on my dataset, which has dimensions trainX = (480, 3) and trainY = (480, 1). In order to pass the input to the model I converted 2D to 3D, which changed (480, 3) to (1, 480, 3). I am getting RuntimeError: input must have 3 dimensions, got 4, but I am already passing 3D input. The following is a snippet of my code:

class Model(torch.nn.Module):
    def __init__(self, input_size, rnn_hidden_size, output_size):
        super(Model, self).__init__()
        self.rnn = torch.nn.RNN(input_size, rnn_hidden_size, num_layers=2,
                                nonlinearity='relu', batch_first=True)
        self.h_0 = self.initialize_hidden(rnn_hidden_size)
        self.linear = torch.nn.Linear(rnn_hidden_size, output_size)

    def forward(self, x):
        x = x.unsqueeze(0)
        self.rnn.flatten_parameters()
        out, self.h_0 = self.rnn(x, self.h_0)
        out = self.linear(out)
        # third_output = self.relu(self.linear3(second_output))
        # fourth_output = self.relu(self.linear4(third_output))
        # output = self.rnn(lineared_output)
        # output = self.dropout(output)
        return out

    def initialize_hidden(self, rnn_hidden_size):
        # n_layers * n_directions, batch_size, rnn_hidden_size
        return Variable(torch.randn(2, 1, rnn_hidden_size), requires_grad=True)

def Train(X, Y):
    input_size = 3
    hidden_size = 32
    output_size = 1
    model = Model(input_size, hidden_size, output_size)
    criterion = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
    trainX = torch.from_numpy(X).float()
    trainY = torch.from_numpy(Y).float()
    trainX = trainX[:, np.newaxis]  # shape (samples, time_step, features)
    trainY = trainY[:, np.newaxis]
    for ep in range(5000):
        model.train()
        optimizer.zero_grad()
        output = model(trainX)
        loss = criterion(output, trainY)
        loss.backward()
        optimizer.step()
        lossTrain = loss.data[0]
st98640
Hi all, According to Wikipedia (https://en.wikipedia.org/wiki/GeForce_10_series), fp16 should be 32 times slower than fp32. However, on my GPU (1080) I observe that fp16 is about 2 times faster. It runs mostly matrix-vector multiplication (torch.mv). I researched the internet and found some tests on the Pascal architecture which show a speedup for fp16, with no further explanation. Can someone explain that?
st98641
Hi, is there any way to concatenate tensors in place to use memory efficiently, like Tensor:cat() in Torch7?
st98642
I use the pretrained vgg16_bn downloaded from torchvision for image classification. First, I extract the features from the first fully connected layer. Then, some of the 4096-d features are used to train liblinear. After the SVM training, the rest of the features are tested. The same procedure is implemented in Caffe and MatConvNet. Comparing the three results, I find that the model from torchvision performs the worst. I want to know whether other people face the same problem, or whether my method is wrong. And one more question: how can I use a pretrained model from another deep learning platform in PyTorch? Thank you in advance.
st98643
It seems this also happens to me. Can you tell me how bad the performance was, e.g. top1 or top5? Thanks.
st98644
Could you post a link to the repo of the Caffe and Matconvnet implementation of these models? We’ve had this discussion in the past, but I can’t find the thread. The conclusion was, as far as I remember, that the Caffe model was trained differently, i.e. an advanced model instead of the original architecture. But as I said, I try to find the thread where I compared the implementations. So take this statement with a grain of salt.
st98645
If I create a random initialized embedding using torch.randn((vocab_size, depth), requires_grad=True), Will pytorch save it to disk automatically after each training epoch is done? Will pytorch load it from disk rather than initializing another random embedding ?
st98646
The modules and state_dicts won’t be saved automatically. You would have to save and restore them. Have a look at the serialization semantics 4 on how to do that.
st98647
Hi, getting this error when I attempt to convert a BLSTM audio model to ONNX. Is this not supported? File “convert_model.py”, line 27, in main() File “convert_model.py”, line 23, in main torch_out = torch.onnx._export(model, spect, ‘deepspeech.onnx’, export_params=True, verbose=True) File “/miniconda/envs/py36/lib/python3.6/site-packages/torch/onnx/init.py”, line 21, in _export return utils._export(*args, **kwargs) File “/miniconda/envs/py36/lib/python3.6/site-packages/torch/onnx/utils.py”, line 226, in _export example_outputs, propagate) File “/miniconda/envs/py36/lib/python3.6/site-packages/torch/onnx/utils.py”, line 180, in _model_to_graph graph = _optimize_graph(graph, operator_export_type) File “/miniconda/envs/py36/lib/python3.6/site-packages/torch/onnx/utils.py”, line 107, in _optimize_graph graph = torch._C._jit_pass_onnx(graph, operator_export_type) File “/miniconda/envs/py36/lib/python3.6/site-packages/torch/onnx/init.py”, line 56, in _run_symbolic_method return utils._run_symbolic_method(*args, **kwargs) File “/miniconda/envs/py36/lib/python3.6/site-packages/torch/onnx/utils.py”, line 291, in _run_symbolic_method return symbolic_fn(*args) File “/miniconda/envs/py36/lib/python3.6/site-packages/torch/onnx/symbolic.py”, line 906, in symbolic_flattened_wrapper return sym(g, input, weights, hiddens, batch_sizes) File “/miniconda/envs/py36/lib/python3.6/site-packages/torch/onnx/symbolic.py”, line 974, in symbolic weight_ih_f, weight_hh_f, bias_f = transform_weights(2 * i) File “/miniconda/envs/py36/lib/python3.6/site-packages/torch/onnx/symbolic.py”, line 961, in transform_weights [reform_weights(g, w, hidden_size, reform_permutation) for w in all_weights[layer_index]] ValueError: not enough values to unpack (expected 4, got 2)
st98648
Hi, I am trying to run my script on the GPU, but I am getting an error. The data loading part goes like:

class_sample_count = np.array([len(np.where(y_train == t)[0]) for t in np.unique(y_train)])
weight = 1. / class_sample_count
samples_weight = np.array([weight[t] for t in y_train])
samples_weight = torch.from_numpy(samples_weight)
sampler = WeightedRandomSampler(samples_weight.type('torch.cuda.DoubleTensor'), len(samples_weight), replacement=True)
trainDataset = torch.utils.data.TensorDataset(torch.cuda.FloatTensor(X_train), torch.cuda.FloatTensor(y_train.astype(int)))
trainLoader = torch.utils.data.DataLoader(dataset=trainDataset, batch_size=mb_size, shuffle=False, num_workers=1, sampler=sampler)

My model is:

class AEE(nn.Module):
    def __init__(self):
        super(AEE, self).__init__()
        self.EnE = torch.nn.Sequential(
            nn.Linear(IE_dim, h_dim),
            nn.BatchNorm1d(h_dim),
            nn.ReLU(),
            nn.Dropout(),
            nn.Linear(h_dim, Z_dim),
            nn.Sigmoid(),
            # nn.BatchNorm1d(Z_dim))
        )

    def forward(self, x):
        output = self.EnE(x)
        return output

model = AEE()
model.cuda()

I am getting this error in the for loop over my trainLoader: RuntimeError: CUDA error (3): initialization error. Any ideas?
st98649
I often need to conduct scalar operation (e.g. add or multiply) on a subset of tensor elements, where the subset is specified by another tensor in a form of index or mask. There are multiple ways of doing this, and I would like to know which to use or which is better. I myself conducted a few experiments of speed comparison (details below). In short, the result was that indexing (using long or byte) tends to be slower than updating the whole tensor. Question I would like to know pros and cons of different ways to update a subset of tensor. In particular, should we avoid subset updating in a form x[cond] = ... if other options are available? Below describes the experiments that I made. Operation on subset of rows Let X be a float tensor on which changes are made, and a is a scalar. Suppose we want to add a to as subset rows of X. As far as I know, there are following ways of doing this: X[idx] += a, where idx is a long 1-D tensor of indices. X[mask] += a, where mask is a byte 1-D tensor of conditions. X += mask_f * a, where mask_f is a float 1-D tensor of conditions. Graph below is the time for 1000 operations. “true” ratio is the fraction of rows satisfying the condition. Size is the size of rows and columns of X. X += mask_f * a tends to be faster for many cases, but “indexing” may outperform with small p (only few rows satisfy condition) and large tensor size. Operation on subset of elements Now suppose we have a element-wise condition in a form of mask. And we would like to add a only to the elements of X where condition is satisfied. I know the following two ways. X[mask] += a, where mask is a byte tensor of conditions with same size as X. X += mask_f * a, where mask_f is a float tensor of conditions with same size as X. Again, X += mask_f * a is faster. Code to reproduce (up to randomness). 
import timeit import random import torch import numpy as np number = 1000 y = 4.0 setup = """from __main__ import x, y, mask, idx, mask_f""" out = [] for p in [0.1, 0.5, 0.9]: for s in [10, 50, 100, 250, 500]: x0 = np.random.random((s, s)) mask = np.array(random.choices([True, False], weights=[p, 1-p], k=s), dtype=np.uint8) idx = np.where(mask)[0] mask = torch.tensor(mask, dtype=torch.uint8) idx = torch.tensor(idx, dtype=torch.long) mask_f = mask.float().view(-1, 1) x = torch.tensor(x0, dtype=torch.float32) t1 = timeit.timeit("x[mask] += y", setup=setup, number=number) x1 = x.numpy() x = torch.tensor(x0, dtype=torch.float32) t2 = timeit.timeit("x[idx] += y", setup=setup, number=number) x2 = x.numpy() x = torch.tensor(x0, dtype=torch.float32) t3 = timeit.timeit("x += mask_f * y", setup=setup, number=number) x3 = x.numpy() assert np.all(x1 == x2) assert np.all(x1 == x3) out.append([s, p, t1, t2, t3]) import pandas as pd import matplotlib.pyplot as plt import seaborn as sns sns.set() df = pd.DataFrame(out, columns=["size", "true_ratio", "mask", "index", "mask_f"]).set_index(["true_ratio", "size"]) fig, axes = plt.subplots(3, 1, figsize=(7, 7), sharey=True) for i, p in enumerate(df.index.levels[0]): ax = axes[i] df.loc[p].plot(kind="bar", ax=ax) ax.set_yscale("log") ax.set_title('"true" ratio = ' + str(p)) fig.tight_layout() y = 4.0 number = 1000 setup = """from __main__ import x, y, mask, mask_f""" out = [] for p in [0.1, 0.5, 0.9]: for s in [10, 50, 100, 250, 500]: x0 = np.random.random((s, s)) mask = np.array(random.choices([True, False], weights=[p, 1-p], k=s*s), dtype=np.uint8).reshape((s, s)) mask = torch.tensor(mask, dtype=torch.uint8) mask_f = mask.float() x = torch.tensor(x0, dtype=torch.float32) t1 = timeit.timeit("x[mask] += y", setup=setup, number=number) x1 = x.numpy() x = torch.tensor(x0, dtype=torch.float32) t2 = timeit.timeit("x += mask_f * y", setup=setup, number=number) x2 = x.numpy() assert np.all(x1 == x2) out.append([s, p, t1, t2]) df = pd.DataFrame(out, columns=["size", "true_ratio", "mask", "mask_f"]).set_index(["true_ratio", "size"]) fig, axes = plt.subplots(3, 1, figsize=(6, 7), sharey=True) for i, p in enumerate(df.index.levels[0]): ax = axes[i] df.loc[p].plot(kind="bar", ax=ax) ax.set_yscale("log") ax.set_title('"true" ratio = ' + str(p)) fig.tight_layout()
st98650
Hello, I have a server equipped with 4 Titan X GPUs. The problem is: when I run my code, it reports "cuda runtime error (2) : out of memory". The output of gpustat tells me that only gpu0's memory is being used. The corresponding TensorFlow code can use all the memory from the 4 GPUs, which is 48G in my case. Can anyone help? Best
st98651
Hi, You most certainly want to use a DataParallel wrapper around your network.
st98652
Yes, either model or data parallelism. See also Model parallelism in Multi-GPUs: forward/backward graph.
st98653
Hi albanD, I just noticed there is a need to use DataParallel. However, simply wrapping my network with it causes trouble:

File: …/torch/nn/parallel/parallel_apply.py, line 67, in parallel_apply
    raise output
IndexError: index 5 is out of range for dimension 0 (of size 5)

How I wrap it:

net = torch.nn.DataParallel(net, device_ids=[0, 1, 2, 3]).cuda()

I am tracing back through the parallel_apply.py code. Please let me know if you have any idea. Thanks
st98654
Maybe this extra .cuda() on the DataParallel wrapper is causing the problem. Take a look at the CUDA semantics and DataParallel docs. Have you tried this way?

device = ("cuda" if torch.cuda.is_available() else "cpu")
model = net()
if torch.cuda.device_count() > 1:
    # device_ids has a default: all
    model = torch.nn.DataParallel(model, device_ids=[0, 1, 2, 3])
model.to(device)
st98655
It reports the same error, even if I try the code snippet you provided. Thanks anyway. I suspect the reason is that I am using a for loop in the code. My input format is [batch, time, height, width, channel], and my code does a for loop over the time axis. FYI, the code runs fine on CPU. Best
st98656
Thanks for following up with me. First, the code takes up so much memory because I save the result of every epoch into a list and forget to release it, like below:

class Network(nn.Module):
    def __init__(self):
        self.conv1 = nn.Conv2d()
        self.conv2 = nn.Conv2d()
        self.conv3 = nn.Conv2d()
        self.feature_list = []

    def forward(self, X):
        x = self.conv3(self.conv2(self.conv1(X)))
        self.feature_list.append(x)
        return self.feature_list

For the DataParallel issue, as I suspected, my code has a for loop over the temporal/time axis, which I put in the first axis like [time, batch, channel, height, width]. So I have 20 time steps in total, and with 4 GPUs DataParallel splits my input into 4 shares, each of length 5. Thus when I try to index the 6th element, it reports an index-out-of-range error. Thanks a lot
st98657
Great. I couldn't see how you released the memory, but it's very nice to know you found the problem.
st98658
I have noticed model = torch.nn.DataParallel(model, device_ids=[0, 2]) must be executed within the module that is using the model. That is, I cannot pass a model to another external module. Why is that and how can I pass models around without getting the RuntimeError: all tensors must be on devices[0]? (Yes, I do place my input on device 0).
st98659
Hi, I have trained my CNN model on a dataset and now want to extract features for a query image from the FC layer. How can I do that? Thanks
st98660
You could register a forward hook to the layer you would like to get the activations from. Have a look at this example. Another approach would be to modify the forward method and return the desired activation.
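A hedged sketch of the forward-hook approach for a torchvision model; the choice of resnet18 and of the layer to hook are assumptions for illustration:

import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True)
model.eval()

features = {}

def hook(module, inp, output):
    features['avgpool'] = output.detach()   # store the activation of the hooked layer

handle = model.avgpool.register_forward_hook(hook)

query = torch.randn(1, 3, 224, 224)         # preprocessed query image (assumed)
with torch.no_grad():
    model(query)

feat = features['avgpool'].flatten(1)       # e.g. a 512-d feature vector for resnet18
handle.remove()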
st98661
Would a PyTorch model be able to import weights produced by a lua torch model of the exact same architecture? Particularly I’m curious whether I could pick up these 4 weights and import into an identical PyTorch model.
st98662
Hi, progressing on porting the Python book examples, I am now stuck on the implementation of a trivial network. I have this:

struct MyFirstNetwork : torch::nn::Module {
  using Linear = torch::nn::Linear;
  MyFirstNetwork(size_t input_size, size_t hidden_size, size_t output_size)
      : layer1(register_module("layer1", Linear(input_size, hidden_size))),
        layer2(register_module("layer2", Linear(hidden_size, output_size))) {};
  Linear layer1;
  Linear layer2;
};

I understand that I have to register the layers, but I don't know what the C++ equivalent of implementing the forward member function in Python is. I didn't find anything when browsing through the source code of the integration tests. Any help will be appreciated. If I am missing any good documentation source, don't hesitate to tell me to RTFM. Thanks.
st98663
Thank you. It works now, except that I had to add a 3rd parameter to the dropout layer like this: x = torch::dropout(x, /*p=*/0.5, /*train*/true);
st98664
Hi, I am porting the code from "Deep Learning with PyTorch" from Python to C++ and learning the C++ frontend API at the same time. I am facing a difficulty when porting this snippet:

loss = nn.CrossEntropyLoss()
input = Variable(torch.randn(3, 5), requires_grad=True)
target = Variable(torch.LongTensor(3).random_(5))
output = loss(input, target)
output.backward()

I have written this:

auto input2 = torch::randn({3, 5}, torch::requires_grad(true).dtype(torch::kLong));
auto target2 = torch::tensor({3}, torch::kLong).random_(5);
auto output2 = torch::binary_cross_entropy(input2, target2);
output2.backward();

It compiles, but I get a runtime exception which says:

terminate called after throwing an instance of 'at::Error'
what(): normal_ is not implemented for type CPULongType (normal_ at /pytorch/build/aten/src/ATen/TypeDefault.cpp:1652)
frame #0: at::native::randn(at::ArrayRef<long>, at::Generator*, at::TensorOptions const&) + 0x44 (0x7f1ffc68a704 in /home/inglada/local/libtorch/lib/libcaffe2.so)
frame #1: at::native::randn(at::ArrayRef<long>, at::TensorOptions const&) + 0xe (0x7f1ffc68a7be in /home/inglada/local/libtorch/lib/libcaffe2.so)
frame #2: ./Chapter03/chapter03() [0x428c8a]
frame #3: torch::randn(at::ArrayRef<long>, at::TensorOptions const&) + 0x177 (0x42ade2 in ./Chapter03/chapter03)
frame #4: main + 0x20a (0x427494 in ./Chapter03/chapter03)
frame #5: __libc_start_main + 0xf1 (0x7f1ffb7342e1 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: _start + 0x2a (0x42687a in ./Chapter03/chapter03)

I understand that I cannot create a normal random tensor with long ints, but if I do

auto input2 = torch::randn({3, 5}, torch::requires_grad(true));

then the exception appears when computing the cross entropy:

terminate called after throwing an instance of 'at::Error'
what(): Expected object of scalar type Float but got scalar type Long for argument #2 'target' (checked_tensor_unwrap at /pytorch/aten/src/ATen/Utils.h:74)

What can I do to get the same behaviour as in Python? Thank you.
st98665
nn.BCELoss expects FloatTensors for both the input and the target. As you are using nn.CrossEntropyLoss, I think you should use torch::log_softmax and torch::nll_loss as your criterion. I can't find CrossEntropyLoss on the C++ side.
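For reference, a small Python-side check of the equivalence being suggested (random toy tensors; this is the relationship a C++ port would rely on):

import torch
import torch.nn.functional as F

logits = torch.randn(3, 5, requires_grad=True)
target = torch.randint(0, 5, (3,))

loss_a = F.cross_entropy(logits, target)
loss_b = F.nll_loss(F.log_softmax(logits, dim=1), target)
print(torch.allclose(loss_a, loss_b))   # True - cross entropy is log_softmax followed by nll_loss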
st98666
Hi, thanks for your answer. Does that mean that I can't port exactly this Python code to C++?

loss = nn.CrossEntropyLoss()
input = Variable(torch.randn(3, 5), requires_grad=True)
target = Variable(torch.LongTensor(3).random_(5))
output = loss(input, target)
output.backward()
st98667
I’m not sure. @goldsborough might have a better answer here. Let’s wait for his wisdom.