st49068
I tried the piece of code posted by apaszke, but I got the following error: RuntimeError: bitwise_xor(): functions with out=... arguments don't support automatic differentiation, but one of the arguments requires grad. So I am confused how @big_tree and @jdhao managed to get this code to work. I read the Extending PyTorch documentation, but the extending torch.nn (adding a module) section uses the LinearFunction defined above, which explicitly defines both forward and backward functions; that doesn't seem to be the case in apaszke's sample code above. So is there a way to make his code work without explicitly coding the gradient myself?
st49069
I figured out that the problem arises from the power calculation in Python: I changed ^2 to **2 and it works now.
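For anyone hitting the same error, a minimal sketch of the difference (the variable names are illustrative): in Python, ^ is the bitwise XOR operator, which PyTorch maps to the non-differentiable torch.bitwise_xor, while ** is elementwise power (torch.pow), which autograd supports.

import torch

x = torch.randn(3, requires_grad=True)
y = (x ** 2).sum()  # elementwise power, differentiable
y.backward()        # works, x.grad is populated
# (x ^ 2)           # bitwise XOR: fails on float tensors and is not differentiable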
st49070
Hello there, I am using the following code in my model:

I = torch.randn(40, 1, 128, 186)
m = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5, stride=1),
    nn.BatchNorm2d(16),
    nn.ReLU())
O = m(I)

I want my output to be the same size as the input, i.e. (40, 1, 128, 186). I tried using O = m(F.pad(I, (0, 0, 2, 2))) but got (40, 16, 128, 182). Can anyone help, please?
st49071
From what I know, PyTorch doesn't support this as an inbuilt option, while TensorFlow does. Check out this discussion, which mentions how dynamic loading makes it hard. However, there could be ways to hack it by combining asymmetric padding layers with conv2d layers. I wouldn't bother doing that unless it's super useful, and would just go with the inbuilt padding options. More discussion here.
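For the symmetric case in the question above, a minimal sketch of the padding arithmetic (the asymmetric discussion in the links applies to even kernels): F.pad takes pads for the last dimensions first, as (left, right, top, bottom), and a stride-1 conv with kernel_size=5 needs 2 pixels on every side to keep height and width. Note the output will still have 16 channels, since that is the conv's out_channels.

import torch
import torch.nn as nn

I = torch.randn(40, 1, 128, 186)
m = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),  # pads 2 on all four sides
    nn.BatchNorm2d(16),
    nn.ReLU())
print(m(I).shape)  # torch.Size([40, 16, 128, 186]) -- spatial size preserved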
st49072
Hello, I am experiencing a strange issue with replicating the same results on two different platforms. I trained a neural network for medical imaging segmentation on a supercomputer using an Nvidia Volta V100 GPU, torch version 1.4.0, CUDA 10.1, evaluating its results using Dice scores. When a reviewer asked for additional measures of the network’s accuracy, I downloaded the code on my machine and generated its outputs again, obtaining an entirely different, higher Dice score result (average of ~0.95 vs ~0.82). I can easily replicate this result on both platforms: running exactly the same code (by copying and pasting the full directory), loading the same weights, I still obtain two different values. In my local environment I am also using version 1.4.0 and CUDA 10.1, of course using a different GPU (GeForce RTX 2080 Ti). I also made sure to test this by having the same version of numpy in both environments (1.18.1). Edit: I am also running the same python version: 3.7.6, with the only difference that I am running it in a conda environment on my local machine. The only difference could be the GCC version: 7.3.0 on the V100 machine, 7.5.0 on mine. Do you have any idea where this difference could be coming from?
st49073
Hi, this is expected behavior, I'm afraid. Different machines have different hardware/software stacks that can lead to very small differences in floating-point results. When training a neural net, such errors are usually amplified by the training process, leading to different final results. But if your training is stable, it should converge to very similar loss values.
st49074
My problem here is that there is no training involved. I am quite literally just loading the same weights and running the inference on the test set.
st49075
How large is the difference, then? A small(ish) difference is expected even for inference-only runs.
st49076
By replicating the evaluation on the same system (V100) I do get a very small difference: an average Dice overlap of 0.8229 vs 0.8243. That would be fine. On my local system I get 0.9527. Now, I like having a better result, but this difference looks pretty big.
st49077
There might be other issues as well if you don't have exactly the same data on both machines (e.g. if you did preprocessing on one dataset but not the other). But from the PyTorch side you should expect to see small differences, nothing very big. In particular, you can check that by ensuring that the forward passes give almost the same result for the same input.
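A minimal sketch of that check (the file name, input shape and model are placeholders): save one input batch and its output on the first machine, then compare against the second machine's output.

import torch

# on machine A, with the model loaded and in eval mode:
x = torch.randn(1, 1, 64, 64)
torch.save({'x': x, 'out': model(x)}, 'ref.pt')

# on machine B, with the same weights loaded:
ref = torch.load('ref.pt')
out_b = model(ref['x'])
print(torch.allclose(out_b, ref['out'], atol=1e-5))
print((out_b - ref['out']).abs().max())  # size of the numerical difference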
st49078
I eventually found out that while the code and files were exactly the same, the ordering of the files was different on the remote server, which caused the discrepancy. As usual, the actual issue was a simple mistake. Thank you for your time!
st49079
I have a tensor x of size BxCxHxW and a tensor y of size B/2 xCxHxW. I want to multiply x and y such that the first part of x (from 0 to B/2) is modified by the result of the multiplication during the forward pass, while the last part of x is unchanged, as in the figure below. How can I do this in PyTorch? My attempt:

import torch
bx, cx, h, w = 4, 2, 3, 3
by, cy, h, w = 2, 2, 3, 3
tensor_x = torch.rand((bx, cx, h, w), requires_grad=True)
tensor_y = torch.rand((by, cy, h, w), requires_grad=True)
tensor_x_first = tensor_x[:by, ...] * tensor_y
tensor_x[:by, ...] = tensor_x_first

I got this error during backward: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [4, 2, 3, 3]], which is output 0 of SliceBackward, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
st49080
Solved by KFrank in post #2.
st49081
Hello Johnson! Try:

tensor_z = tensor_x * torch.cat((tensor_y, torch.ones_like(tensor_y)), dim=0)

Multiplying the second half of the batch by ones leaves it unchanged, and the whole operation is out-of-place, so autograd can track it. Best. K. Frank
st49082
Hello, I am working with 3D CT images. I am trying to change the pixel dimension from [1 1.36 1.36 1.36] to [1 1 1 1] using the MONAI package. I used:

Spacingd(keys=["image", "label"], pixdim=(1, 1, 1), mode=("bilinear", "nearest"))

When I print the pixel dimension using print(f":\n{data_dict['image_meta_dict']['pixdim']}"), the output is still [1 1.36 1.36 1.36], while the affine matrix reflects the new pixel size. Would appreciate any comments on this. @MONAI
st49083
Solved by wyli in post #2.
st49084
Hi! Those transforms currently rely only on the "affine" key to track coordinate-system changes; 'pixdim' is not directly used at the moment.
st49085
I just started coding in PyTorch. I have converted my wav files into text using the glob library. But now I want to split that data into train and test sets. The dataset is very small and imbalanced: to be more clear, it has 7 classes (encoded in the file names only), and different classes have different numbers of samples, like 100, 50, etc. Now, how do I split it into train and test? Can anybody help, please?
st49086
Solved by ptrblck in post #2.
st49087
If you want to randomly split your Dataset, you could use torch.utils.data.random_split. Alternatively, if you want to apply a stratified split, you could use sklearn.model_selection.train_test_split and pass the targets as the stratify argument.
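A minimal sketch of both options (the tensors are placeholders for your own data):

import torch
from torch.utils.data import TensorDataset, random_split
from sklearn.model_selection import train_test_split

data = torch.randn(150, 20)
targets = torch.randint(0, 7, (150,))  # 7 classes, imbalanced in practice
dataset = TensorDataset(data, targets)

# plain random split, 80/20
train_set, test_set = random_split(dataset, [120, 30])

# stratified split: each class keeps roughly the same proportion in both sets
train_idx, test_idx = train_test_split(
    range(len(dataset)), test_size=0.2, stratify=targets.numpy())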
st49088
Is there an existing PyTorch function to generate n equally spaced items starting from x with an interval of k, without having to specify the end? E.g.:
n = 5, x = 1, k = 0.1 -> [1.0, 1.1, 1.2, 1.3, 1.4]
n = 5, x = 1, k = 2 -> [1, 3, 5, 7, 9]
So the end is determined by the interval (k) and the number of items (n).
st49089
Is it not good enough to calculate the end from n, x and k and use the available functions? end = k*(n-1) + x
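A minimal sketch of that suggestion with torch.linspace, which takes the number of steps explicitly, so only the end needs to be computed:

import torch

def spaced(x, k, n):
    # end = x + k*(n-1); linspace then returns exactly n evenly spaced values
    return torch.linspace(x, x + k * (n - 1), n)

print(spaced(1, 0.1, 5))  # tensor([1.0000, 1.1000, 1.2000, 1.3000, 1.4000])
print(spaced(1, 2, 5))    # tensor([1., 3., 5., 7., 9.])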
st49090
That’s okay. I only wanted to avoid extra calculations as much as possible, no matter how trivial it seems to calculate. But I guess for now, I cannot avoid it. Thanks
st49091
If I have a 3D tensor T1 of size torch.Size([196, 14, 14]) and a 1D tensor T2 of size torch.Size([196]), how do I multiply each [14, 14] matrix in T1 by the corresponding scalar in T2? E.g., if T2[0] == 3, then the first multiplication would be T1[0] * 3, etc. The output should have the original shape of T1.
st49092
Solved by Nikronic in post #2.
st49093
Hi, you just need to change the view of the second tensor so the shapes match in the corresponding dimensions; PyTorch's tensor broadcasting will take care of the rest. E.g.:

t1 = torch.arange(0, 196*14*14).view((196, 14, 14))  # shape (196, 14, 14)
t2 = torch.arange(196).view(-1, 1, 1)                # shape (196, 1, 1)
result = t1 * t2

Bests
st49094
I have a tensor T of shape [100, 10, 10]. I want to find the max of each of the 100 matrices and get an output of shape [100]. When I run out = torch.max(T, dim=0)[0] I get a 2D output of size [10, 10]. What does that mean, and why don't I get a 1D output of size 100? Thanks!
st49095
Hi Nadia!
T.max(dim=-1)[0].max(dim=-1)[0] should work, or you can use the equivalent torch.max() version if you prefer. (This could also be written T.max(dim=2)[0].max(dim=1)[0].) (If you want to be courageous, you could try the unstable nightly build, e.g. version "1.8.0.dev20201021", and run T.amax(dim=(1, 2)).)
As for your second question: dim=0 doesn't mean "keep dim 0"; it means take the max() over dim 0. So it builds a tensor of shape [10, 10], each element of which is the max() of 100 values, namely the max() of the corresponding elements of the 100 shape-[10, 10] slices of your original tensor.
Best. K. Frank
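An equivalent sketch that avoids chaining two max() calls, by flattening the two matrix dimensions first:

import torch

T = torch.randn(100, 10, 10)
per_matrix_max = T.view(100, -1).max(dim=1)[0]  # flatten each 10x10 matrix, then max
print(per_matrix_max.shape)  # torch.Size([100])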
st49096
Hi everyone, I'm working on an autoencoder. My problem is that the output of the decoder is always flat (all pixels have the same value) and does not reconstruct the image well. I've inspected the output of each layer and noticed that the tensor values become zero as the image goes through the encoder layers. Does somebody have any idea why this is happening? Thanks!
st49097
Say we have a tensor T of size [s, s]. How do I create a mask tensor of size [s*s, s, s] where in each [s, s] slice only one entry is equal to 1? E.g., for s = 3 the mask tensor would look like:
[[[1, 0, 0], [0, 0, 0], [0, 0, 0]],
 [[0, 1, 0], [0, 0, 0], [0, 0, 0]],
 [[0, 0, 1], [0, 0, 0], [0, 0, 0]],
 ...
 [[0, 0, 0], [0, 0, 0], [0, 0, 1]]]
Thanks!
st49098
Solved by pchandrasekaran in post #4.
st49099
mask_setup = torch.ones(s, s)   # Shape -> [s, s]
mask = torch.diag(mask_setup)   # Shape -> [s*s, s*s]
mask = mask.reshape(s*s, s, s)  # Shape -> [s*s, s, s]

This should give you what you need. There may be a simpler way to get it done, though.
st49100
pchandrasekaran: "mask = mask.reshape(s*s, s, s)"
So in my case s = 14:
mask = mask.reshape(s * s, s, s)
RuntimeError: shape '[196, 14, 14]' is invalid for input of size 14
mask = torch.diag(mask_setup) gives torch.Size([14]), so it does not work. But maybe you can explain your idea?
st49101
I apologize. The first line is wrong.

mask_setup = torch.ones(s*s)    # Shape -> [s*s]
mask = torch.diag(mask_setup)   # Shape -> [s*s, s*s]
mask = mask.reshape(s*s, s, s)  # Shape -> [s*s, s, s]

The idea is to create a diagonal matrix of size (s*s, s*s) and then reshape it to (s*s, s, s).
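For what it's worth, a slightly shorter sketch of the same idea uses torch.eye, which builds that diagonal matrix directly:

import torch

s = 3
mask = torch.eye(s * s).reshape(s * s, s, s)  # one 1 per [s, s] slice, scanning row-major
print(mask.shape)  # torch.Size([9, 3, 3])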
st49102
Hi there, could anyone please provide an example of how to set up mypy to properly work with PyTorch? I'm currently using:

[mypy-torch.*]
# https://github.com/pytorch/pytorch/issues/42787#issuecomment-672419289
implicit_reexport = True

I am aware that typing in PyTorch is still a work in progress, so it would be very much appreciated if additional suggested flags could come with some explanation of why we currently need them instead of pure strict mode. If you also feel this could be useful in your project, please comment on or like this post. Thanks!
st49103
Solved by albanD in post #2.
st49104
Hi, you can check our own mypy ini file here: https://github.com/pytorch/pytorch/blob/master/mypy.ini It is updated every time we add typing to a new part of our code base.
st49105
I have a loss function that requires multiple internal passes:

def my_loss_func(logits, sigma, labels, num_passes):
    total_loss = 0
    img_batch_size = logits.shape[0]
    logits_shape = list(logits.shape)
    vol_std = np.zeros((img_batch_size, num_passes))
    for fpass in range(num_passes):
        noise_array = torch.normal(mean=0.0, std=1.0, size=logits_shape, device=torch.device('cuda:0'))
        stochastic_output = logits + sigma * noise_array
        del noise_array
        temp_vol = torch.softmax(stochastic_output, dim=1)
        temp_vol = temp_vol[:, 0, ...]
        vol_std[:, fpass] = temp_vol.view(4, -1).sum(1).detach().cpu().numpy()
        del temp_vol
        exponent_B = torch.log(torch.sum(torch.exp(stochastic_output), dim=-1, keepdim=True))
        inner_logits = exponent_B - stochastic_output
        soft_inner_logits = labels * inner_logits
        total_loss += torch.exp(soft_inner_logits)
        del exponent_B, inner_logits, soft_inner_logits
    mean_loss = total_loss / num_passes
    actual_loss = torch.mean(torch.log(mean_loss))
    batch_std = np.std(vol_std, axis=1)
    return actual_loss, batch_std

Both logits and sigma are network outputs and therefore have associated gradients. I run into memory issues when num_passes exceeds 50. Are there any other ways in which I could fully optimise memory allocation to allow for a greater number of passes? I'm not at all concerned with readability/ugly solutions, anything will do.
st49106
@PedsB, I am not exactly sure if this is the most efficient way to do the multiple internal passes, but first, have you checked for other running processes that are taking up GPU memory? Maybe kill them all:

kill -9 $(nvidia-smi | sed -n 's/|\s*[0-9]*\s*\([0-9]*\)\s*.*/\1/p' | sort | uniq | sed '/^$/d')

Secondly, the problem could be at this line of code:

vol_std = np.zeros((img_batch_size, num_passes))

What's your batch size? Maybe use a smaller one. Thanks
st49107
Hi @YASJAY, I should've clarified: I'm running this job on a GPU cluster, so I'm confident the memory is completely free prior to job submission. My batch size cannot be changed at all, but it is fairly small: four. My issue is that I quickly run out of memory, likely because of the total_loss += torch.exp(soft_inner_logits) step; I believe that keeps piling computation graphs on top of each other, using up a lot of memory. I was more looking for ways to rewrite what I've done in a more memory-efficient way, e.g. backpropagating inside the loop to free up intermediate memory (which unfortunately I can't do in this case, since the loss cannot be calculated until I've gone through all N passes).
st49108
I want to create a masking tensor of shape [N, K] that contains ones between different ranges of K for each N and zeros outside. To create this, I have two tensors of shape [N, 1] that contain the lower and upper index of the range that should be one for each example. In other words, assuming N=3 and K=5, with the lower-limit tensor [[2], [0], [4]] and the upper-limit tensor [[3], [2], [4]], the masking tensor should look like this:
[[0, 0, 1, 1, 0],
 [1, 1, 1, 0, 0],
 [0, 0, 0, 0, 1]]
Is there a way of creating this masking tensor with the provided PyTorch functions?
st49109
Hi, you can use torch.ge (greater or equal) and torch.le (lower or equal) with:

lower = torch.LongTensor([2, 0, 4]).unsqueeze(dim=-1)
upper = torch.LongTensor([3, 2, 4]).unsqueeze(dim=-1)
idx = torch.arange(5)
mask = idx.ge(lower) & idx.le(upper)

It will give you a tensor with True and False. Use mask.long() if you need ones and zeros.
st49110
My loader is:

class FeatureLoader(Dataset):
    def __init__(self, data_path, device='cpu'):
        torch.manual_seed(0)
        self.data_path = data_path
        self.device = device
        self.total_length = 0
        self.files = sorted(glob.glob(os.path.join(data_path, "*.pt")))
        self.samples = np.array([])
        for data_file in self.files:
            (audio_pt, feature_pt) = torch.load(data_file, map_location=self.device)
            self.total_length = self.total_length + audio_pt.shape[0]
            self.samples = np.append(self.samples, {"file": data_file, "length": audio_pt.shape[0]})

    def __len__(self):
        return self.total_length

    def __getitem__(self, index):
        samples_index = index % len(self.files)
        sample = self.samples[samples_index]
        (audio_pt, feature_pt) = torch.load(sample['file'], map_location=self.device)
        input_feature = feature_pt[index % sample['length']].detach()
        output_audio = audio_pt[index % sample['length']].detach()
        return input_feature, output_audio

I have 1.5M samples, but my GPU usage doesn't increase at all; it stays low, which means I can probably load more per batch. Currently I have a batch_size of 8192, but if I increase it to 16384, I get some CUDA errors. So I'm wondering what I can do to speed it up.
st49111
Your data loader looks OK; do you suspect that it is slow at loading the data? Increasing the batch size might increase GPU utilization, but it also affects the learning process: mini-batches play an important part in training, as they provide generalization (some would argue that batch_size=1 would be best, but no need to go to extremes). So keep an eye on the accuracy. Roy
st49112
I'm only using 2313MiB of GPU RAM (out of 24GB) with a batch_size of 8192. If I increase my batch_size to 16384, it sometimes randomly crashes. As for my model and training samples, I have:

Total params: 10,562,592
Trainable params: 10,562,592
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.32
Params size (MB): 40.29
Estimated Total Size (MB): 40.62
----------------------------------------------------------------
Training Samples: 28975661
Validation Samples: 1552227
st49113
Can you provide more information about the crash? Paste the crash report and call stack. Is it an OOM error? Also, please give more information about the data: are all samples the same size? Roy
st49114
Next time the error pops up, I'll copy/paste it. As for the data: yes, all the same size. The input is a 512-dim vector and the output is a 400-dim vector.
st49115
Hi everyone, I am working on a project where I want to compare the performance of CNNs on RGB images and the converted greyscale images. In PyTorch there is a handy transform, torchvision.transforms.Grayscale(num_output_channels=1), to convert an RGB image into its greyscale version. In the source code I found that they use the luma transform to do so: L = R * 299/1000 + G * 587/1000 + B * 114/1000. Essentially, this converts an RGB image into the Y plane of a YCbCr image (if I am not mistaken). My question: do the mean and the standard deviation of a dataset of converted greyscale images change compared to the same dataset with RGB images? The mean should be the same, right, since grey just means that R=G=B. What about the standard deviation? Any help is very much appreciated! All the best, snowe
st49116
Thank you @RaLo4! I just tried it out, and the mean and the std for the RGB and greyscale datasets seem to be the same up to a difference of maybe 0.01 to 0.02. I am just curious: from a theoretical perspective, does that make sense? All the best, snowe
st49117
Hello, I'm new to PyTorch and I'm currently working on my first CNN for image recognition. My images have the size 28*28 and are grayscale, so they have 1 channel. I get a size-mismatch message every time and I can't figure out what is wrong. Can anyone help me?

Error:

Traceback (most recent call last):
  File "C:\Users\jessi\Downloads\Code (1)\BA\main1.py", line 238, in <module>
    train(model, device, train_loader, optimizer, epoch)
  File "C:\Users\jessi\Downloads\Code (1)\BA\main1.py", line 183, in train
    output = model(data)
  File "C:\Users\jessi\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\jessi\Downloads\Code (1)\BA\main1.py", line 62, in forward
    x = self.fc1(x)
  File "C:\Users\jessi\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\jessi\anaconda3\lib\site-packages\torch\nn\modules\linear.py", line 91, in forward
    return F.linear(input, self.weight, self.bias)
  File "C:\Users\jessi\anaconda3\lib\site-packages\torch\nn\functional.py", line 1674, in linear
    ret = torch.addmm(bias, input, weight.t())
RuntimeError: size mismatch, m1: [1 x 512], m2: [256 x 128] at ...\aten\src\TH/generic/THTensorMath.cpp:41

Code:

from __future__ import print_function
import numpy as np
import pandas as pd
import torch
from PIL import Image
from torch.utils.data.dataset import Dataset
from torchvision import transforms
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.optim.lr_scheduler import StepLR

class Net(nn.Module):
    def __init__(self):
        # init is the constructor for a class. The self parameter refers to the instance of the object
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=(5, 5))  # 3 channels for a color image
        torch.nn.init.ones_(self.conv1.weight)
        self.pool1 = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=(5, 5))
        self.pool2 = nn.MaxPool2d(2, 2)
        self.conv3 = nn.Conv2d(64, 128, kernel_size=(5, 5))
        self.conv4 = nn.Conv2d(128, 64, kernel_size=(5, 5))
        self.conv5 = nn.Conv2d(64, 32, kernel_size=(5, 5))
        self.conv6 = nn.Conv2d(32, 16, kernel_size=(5, 5))
        self.pool3 = nn.MaxPool2d(2, 2)
        self.conv7 = nn.Conv2d(16, 32, kernel_size=(5, 5))
        self.conv8 = nn.Conv2d(32, 64, kernel_size=(5, 5))
        # self.dropout1 = nn.Dropout2d(0.25)
        # self.dropout2 = nn.Dropout2d(0.5)
        # ((I - K + 2*P) / S) + 1
        self.fc1 = nn.Linear(64*2*2, 128)
        self.fc2 = nn.Linear(128, 64)
        self.fc3 = nn.Linear(64, 32)
        self.fc4 = nn.Linear(32, 16)
        self.fc5 = nn.Linear(16, 8)
        self.fc6 = nn.Linear(8, 4)
        self.fc7 = nn.Linear(4, 2)
        self.fc8 = nn.Linear(2, 1)

    def forward(self, x):
        x = self.conv1(x)
        x = F.max_pool2d(x, 2)
        x = F.relu(x)
        x = self.conv2(x)
        x = F.max_pool2d(x, 2)
        x = F.relu(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        x = F.relu(x)
        x = self.fc3(x)
        x = F.relu(x)
        x = self.fc4(x)
        x = F.relu(x)
        x = self.fc5(x)
        x = F.relu(x)
        x = self.fc6(x)
        x = F.relu(x)
        x = self.fc7(x)
        x = F.relu(x)
        x = self.fc8(x)
        '''
        x = self.conv2(x)
        x = F.relu(x)
        x = F.max_pool2d(x, 2)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        '''
        # output = F.log_softmax(x, dim=1)
        return x
st49118
Solved by ptrblck in post #2.
st49119
Check the shape of the activation after flattening it and before passing it to self.fc1:

x = torch.flatten(x, 1)
print(x.shape)

This should give you a shape of [batch_size, num_features], where num_features should be equal to the input features of self.fc1. Based on the error message you are seeing, I guess x has the shape [batch_size, 512], while you set in_features=256 for self.fc1, which causes the shape mismatch.
PS: you can post code snippets by wrapping them in three backticks ```, which makes debugging easier.
st49120
ptrblck: This should give you a shape as [batch_size, num_features], where num_features should be equal to the input features of self.fc1. Thank you so much! It’s working !
st49121
I am running a PyTorch ANN model (for a classification task) and I am using skorch's GridSearchCV to search for the optimal hyperparameters. When I run GridSearchCV with n_jobs=1, it runs really slowly. When I set n_jobs greater than 1, I get a memory blow-out error. So I am now trying to see if I could use PyTorch's DataLoader to split the dataset into batches to avoid the memory blow-out when I set n_jobs greater than 1. According to this other PyTorch Forum question (How to use Skorch for data that does not fit into memory?), it appears we could use SliceDataset. My code for this is below:

# Setting up artificial neural net model
class TabularModel(nn.Module):
    # Initialize parameters embeds, emb_drop, bn_cont and layers
    def __init__(self, emb_szs, n_cont, out_sz, layers, p=0.5):
        super().__init__()
        self.embeds = nn.ModuleList([nn.Embedding(ni, nf) for ni, nf in emb_szs])
        self.emb_drop = nn.Dropout(p)
        self.bn_cont = nn.BatchNorm1d(n_cont)
        # Create empty list for each layer in the neural net
        layerlist = []
        # Number of all embedded columns for categorical features
        n_emb = sum((nf for ni, nf in emb_szs))
        # Number of inputs for each layer
        n_in = n_emb + n_cont
        for i in layers:
            # Set the linear function for the weights and biases, wX + b
            layerlist.append(nn.Linear(n_in, i))
            # Using ReLU activation function
            layerlist.append(nn.ReLU(inplace=True))
            # Normalise all the activation function output values
            layerlist.append(nn.BatchNorm1d(i))
            # Set some of the normalised activation function output values to zero
            layerlist.append(nn.Dropout(p))
            # Reassign number of inputs for the next layer
            n_in = i
        # Append last layer
        layerlist.append(nn.Linear(layers[-1], out_sz))
        # Create sequential layers
        self.layers = nn.Sequential(*layerlist)

    # Function for feedforward
    def forward(self, x_cat_cont):
        x_cat = x_cat_cont[:, 0:cat_train.shape[1]].type(torch.int64)
        x_cont = x_cat_cont[:, cat_train.shape[1]:].type(torch.float32)
        # Create empty list for embedded categorical features
        embeddings = []
        # Embed categorical features
        for i, e in enumerate(self.embeds):
            embeddings.append(e(x_cat[:, i]))
        # Concatenate embedded categorical features
        x = torch.cat(embeddings, 1)
        # Apply dropout rates to categorical features
        x = self.emb_drop(x)
        # Batch normalize continuous features
        x_cont = self.bn_cont(x_cont)
        # Concatenate categorical and continuous features
        x = torch.cat([x, x_cont], 1)
        # Feed categorical and continuous features into neural net layers
        x = self.layers(x)
        return x

# Use cross entropy loss function since this is a classification problem
# Assign class weights to the loss function
criterion_skorch = nn.CrossEntropyLoss
# Use Adam solver with learning rate 0.001
optimizer_skorch = torch.optim.Adam

from skorch import NeuralNetClassifier
# Random seed chosen to ensure results are reproducible by using the same initial random weights and biases,
# and applying dropout rates to the same random embedded categorical features and neurons in the hidden layers
torch.manual_seed(0)
net = NeuralNetClassifier(module=TabularModel,
                          module__emb_szs=emb_szs,
                          module__n_cont=con_train.shape[1],
                          module__out_sz=2,
                          module__layers=[30],
                          module__p=0.0,
                          criterion=criterion_skorch,
                          criterion__weight=cls_wgt,
                          optimizer=optimizer_skorch,
                          optimizer__lr=0.001,
                          max_epochs=150,
                          device='cuda')

from sklearn.model_selection import GridSearchCV
param_grid = {'module__layers': [[30], [50, 20]],
              'module__p': [0.0],
              'max_epochs': [150, 175]}

from torch.utils.data import TensorDataset, DataLoader
from skorch.helper import SliceDataset

# cat_con_train and y_train are PyTorch tensors
tsr_ds = TensorDataset(cat_con_train.cpu(), y_train.cpu())
torch.manual_seed(0)  # Set random seed for shuffling results to be reproducible
d_loader = DataLoader(tsr_ds, batch_size=100000, shuffle=True)
d_loader_slice_X = SliceDataset(d_loader, idx=0)
d_loader_slice_y = SliceDataset(d_loader, idx=1)

models = GridSearchCV(net, param_grid, scoring='roc_auc', n_jobs=2).fit(d_loader_slice_X, d_loader_slice_y)

However, when I ran this code, I got the following error message:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-47-df3fc792ad5e> in <module>()
    104
--> 105 models = GridSearchCV(net, param_grid, scoring='roc_auc', n_jobs=2).fit(d_loader_slice_X, d_loader_slice_y)
    106
6 frames
/usr/local/lib/python3.6/dist-packages/skorch/helper.py in __getitem__(self, i)
    230     def __getitem__(self, i):
    231         if isinstance(i, (int, np.integer)):
--> 232             Xn = self.dataset[self.indices_[i]]
    233             Xi = self._select_item(Xn)
    234             return self.transform(Xi)
TypeError: 'DataLoader' object does not support indexing

I am now confused, because based on the other PyTorch Forum question mentioned above, someone there said that SliceDataset could work with DataLoader, but I am getting this error message. What is wrong and/or how do I fix this? Many thanks in advance!
st49122
Solved by ptrblck in post #2.
st49123
The linked post mentions wrapping the Dataset in SliceDataset, not the DataLoader, which cannot be directly indexed. I don't know skorch's internals and don't know if this would solve your issue, but it explains the raised error.
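A minimal sketch of that change against the code in the question (whether this also resolves the memory blow-out is a separate matter): wrap the TensorDataset, not the DataLoader.

from skorch.helper import SliceDataset

# tsr_ds is the TensorDataset from the question; a Dataset is indexable, a DataLoader is not
X_slice = SliceDataset(tsr_ds, idx=0)
y_slice = SliceDataset(tsr_ds, idx=1)
models = GridSearchCV(net, param_grid, scoring='roc_auc', n_jobs=2).fit(X_slice, y_slice)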
st49124
In my script the model trains successfully and I save it at the end. When I try to load and evaluate it, this is what happens:

Traceback (most recent call last):
  File ".\GRU.py", line 139, in <module>
    gru_outputs, targets, gru_sMAPE = evaluate(gmodel, X_test, Y_test, label_scalars)
  File ".\GRU.py", line 115, in evaluate
    model.eval()
AttributeError: 'NoneType' object has no attribute 'eval'

This is the last portion of the code:

def evaluate(model, X_test, Y_test, label_scalars):
    model.eval()
    outputs = []
    targets = []
    start_time = time.clock()
    for i in X_test.keys():
        inp = torch.from_numpy(np.array(X_test[i]))
        labs = torch.from_numpy(np.array(Y_test[i]))
        h = model.init_hidden(inp.shape[0])
        out, h = model(inp.to(device).float(), h)
        outputs.append(label_scalars[i].inverse_transform(out.cpu().detach().numpy()).reshape(-1))
        targets.append(label_scalars[i].inverse_transform(labs.numpy()).reshape(-1))
    print("Evaluation Time: {}".format(str(time.clock()-start_time)))
    sMAPE = 0
    for i in range(len(outputs)):
        sMAPE += np.mean(abs(outputs[i]-targets[i])/(targets[i]+outputs[i])/2)/len(outputs)
    print("sMAPE: {}%".format(sMAPE*100))
    return outputs, targets, sMAPE

lr = 0.002
# Gru_model = train(train_loader, lr)
gmodel = torch.load('./grunet.pkl')
gru_outputs, targets, gru_sMAPE = evaluate(gmodel, X_test, Y_test, label_scalars)
st49125
Solved by ptrblck in post #2.
st49126
What does print(gmodel) return after the torch.load call? Based on the error it seems it’s None, which indicates the loading fails somehow.
st49127
As far as I know, COCO only provides annotations for 80 classes. My question is: how was the Faster R-CNN ResNet-50 FPN from torchvision trained with 91 classes?
st49128
I am guessing this is due to the 91 "stuff" categories the COCO dataset has (see the category list on the COCO website). Not 100% sure, though…
st49129
I implemented a simple CNN architecture in Keras and PyTorch, and trained both using exactly the same hyperparameters on the same CIFAR10 data. However, the training behaved very differently. It only took about 1s per epoch for the Keras model, while for the PyTorch model it was about 15 seconds. Overfitting happened in the Keras case but not in the PyTorch case (I did not use weight decay, though). Also, PyTorch only used a small fraction of the GPU memory, while Keras occupied all GPU memory during training. All experiments were done on the same machine with a single GeForce RTX 2080 Ti. Here is part of the scripts for the torch and keras experiments. The full scripts can be found at https://gist.github.com/Xiuyu-Li/cd99c7d75e9b705c599d25b412593fed

PyTorch:

def train(trainloader, model, criterion, optimizer, epoch, device):
    model.train()
    train_loss = 0
    correct = 0
    total = 0
    for batch_idx, (inputs, targets) in enumerate(trainloader):
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()
        train_loss += loss.item() * targets.size(0)
        _, predicted = outputs.max(1)
        total += targets.size(0)
        correct += predicted.eq(targets).sum().item()
    return train_loss/total, 100.*correct/total

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = small_cnn(num_classes, num_conv)
model = model.to(device)
optimizer = optim.SGD(model.parameters(), lr=lr, momentum=momentum)
criterion = nn.CrossEntropyLoss()

Keras:

input_shape = x_train.shape[1:]
model = small_cnn(input_shape, num_classes, num_conv=num_conv)
optimizer = tf.keras.optimizers.SGD(lr=lr, momentum=momentum)
loss = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
model.compile(loss=loss, optimizer=optimizer, metrics=['accuracy'])
model.summary()
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          validation_data=(x_test, y_test),
          shuffle=True)

I checked the correctness of the implemented model architectures, and they seem to be the same:

PyTorch:
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1           [-1, 32, 30, 30]             896
         MaxPool2d-2           [-1, 32, 15, 15]               0
            Conv2d-3           [-1, 32, 13, 13]           9,248
         MaxPool2d-4             [-1, 32, 6, 6]               0
            Conv2d-5             [-1, 32, 4, 4]           9,248
         MaxPool2d-6             [-1, 32, 2, 2]               0
            Linear-7                   [-1, 64]           8,256
            Linear-8                   [-1, 10]             650
================================================================
Total params: 28,298
Trainable params: 28,298
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.01
Forward/backward pass size (MB): 0.33
Params size (MB): 0.11
Estimated Total Size (MB): 0.45
----------------------------------------------------------------

Keras:
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d (Conv2D)              (None, 30, 30, 32)        896
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 15, 15, 32)        0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 13, 13, 32)        9248
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 6, 6, 32)          0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 4, 4, 32)          9248
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 2, 2, 32)          0
_________________________________________________________________
flatten (Flatten)            (None, 128)               0
_________________________________________________________________
dense (Dense)                (None, 64)                8256
_________________________________________________________________
dense_1 (Dense)              (None, 10)                650
=================================================================
Total params: 28,298
Trainable params: 28,298
Non-trainable params: 0
_________________________________________________________________

What is causing these huge differences in training? Could it be related to something like (potentially) different implementations of the optimizer and loss function in PyTorch and Keras?
st49130
Olorin: "while Keras occupied all GPU memory during the training"
That's the default behavior of TensorFlow and doesn't mean that the complete GPU memory is actually needed. By skimming through your code I cannot find any obvious issues, so you could load the Keras parameters into the PyTorch model and compare the outputs for a static input to make sure the architectures are equal.
st49131
Olorin: "Could it be related to something like the (potential) different implementations of optimizer and loss function in PyTorch and Keras?"
Since you're using SGD in both cases, that should not be the issue. Are the loss values aligned in both cases? Is the difference only in the training time and not in the loss values? What's your model like? Make sure the model parameters are initialized to the same values (e.g. constant values) for a fair comparison.
st49132
I can't help but notice that my PyTorch models are most likely to overfit after the first experiment. For example, when using k-fold cross-validation, the first fold trains and tests normally (the loss decreases slowly until it saturates), but the second, third, … folds already start with a low loss value, reach their best performance in the very first epochs, and then the model starts to overfit. Is there any explanation for this behaviour? Thanks!
st49133
I guess you might not be resetting the experiment setup properly, i.e. the model, optimizer, lr_scheduler, or any other object which "learns" during the first fold.
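A minimal sketch of such a reset (MyModel, k_folds and the hyperparameters are placeholders): build fresh objects inside the fold loop so nothing learned in fold 1 leaks into fold 2.

import torch

for fold in range(k_folds):
    model = MyModel()                                         # fresh weights per fold
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # fresh optimizer state
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)
    # ... train and evaluate this fold using only these fresh objects ...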
st49134
During model training I ran into a GPU memory overflow, so I used only PyTorch's nn.DataParallel for the setup and wrote a simple program to test it. The test works, but real model training gets stuck at the model input, that is, at self.model(inputs). There is no error message; it just hangs. Can you help me? Thanks. The code is as follows: [screenshot of the code]
st49135
Which PyTorch version are you using? If it’s an older one, could you update to the latest stable version (1.5)? Also, is your model working on a single GPU?
st49136
I have the same problem. After reading @ptrblck's reply, I updated PyTorch to version 1.6, but it is still not fixed.
st49137
I found that it was caused by a deadlock in another part of the code. Now I have fixed my problem. Maybe you need to check the other parts too.
st49138
Hi, I am working on finding the gradients of an image using the Sobel operator and convolutional filters. Right now my approach is:

kernel_x = [[1., 0., -1.], [2., 0., -2.], [1., 0., -1.]]
kernel_x = torch.FloatTensor(kernel_x)
kernel_y = [[1., 2., 1.], [0., 0., 0.], [-1., -2., -1.]]
kernel_y = torch.FloatTensor(kernel_y)
kernel_x = kernel_x.unsqueeze(0).unsqueeze(0)
kernel_y = kernel_y.unsqueeze(0).unsqueeze(0)
self.weight_x = nn.Parameter(data=kernel_x, requires_grad=False)
self.weight_y = nn.Parameter(data=kernel_y, requires_grad=False)
grad_x = F.conv2d(x, self.weight_x)
grad_y = F.conv2d(x, self.weight_y)

The main reason I am confused is that the weights are transposed during convolutions, and I want to calculate the x and y gradients specifically. Is this approach right? I.e., would grad_x calculate the x-directional gradients and grad_y the y-directional ones? Input format: [1, 1, 64, 64].
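One way to sanity-check filters like these (a minimal sketch, not an authoritative answer to the question): apply them to an image that increases linearly along x only, where the x-filter should return a single constant value and the y-filter zero everywhere.

import torch
import torch.nn.functional as F

x = torch.arange(64.).repeat(64, 1).view(1, 1, 64, 64)  # pixel value = column index
kernel_x = torch.tensor([[1., 0., -1.], [2., 0., -2.], [1., 0., -1.]]).view(1, 1, 3, 3)
kernel_y = torch.tensor([[1., 2., 1.], [0., 0., 0.], [-1., -2., -1.]]).view(1, 1, 3, 3)
print(F.conv2d(x, kernel_x).unique())  # one constant value (the x-gradient response)
print(F.conv2d(x, kernel_y).unique())  # tensor([0.])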
st49139
Here's my simple NN structure:

class DNN(nn.Module):
    def __init__(self, input_layer_size: int, hidden_layer_sizes: List[int],
                 dropout_rate: float, debug: bool = False):
        '''
        Set up the network.
        Args:
            input_layer_size: size of the input layer
            hidden_layer_sizes: sizes of the hidden linear layers
                e.g. [5,5,3,2] -> linear layers 5,5 -> 5,3 -> 3,2
            dropout_rate: dropout rate
        '''
        super().__init__()
        self.debug = debug
        self.linear_layers_list: List[nn.Module] = []
        self.linear_layers_list.append(nn.Linear(input_layer_size, hidden_layer_sizes[0]))
        self.linear_layers_list.append(nn.LeakyReLU(0.04))
        self.linear_layers_list.append(nn.BatchNorm1d(hidden_layer_sizes[0]))
        self.linear_layers_list.append(nn.Dropout(p=dropout_rate))
        # hidden layers
        for in_size, out_size in zip(hidden_layer_sizes[:-2], (hidden_layer_sizes[1:-1])):
            self.linear_layers_list.append(nn.Linear(in_size, out_size))
            self.linear_layers_list.append(nn.LeakyReLU(0.04))
            self.linear_layers_list.append(nn.BatchNorm1d(out_size))
            self.linear_layers_list.append(nn.Dropout(p=dropout_rate))
        # output layer
        output_layer_mean = nn.Linear(hidden_layer_sizes[-1], 1)
        output_layer_sigma = nn.Linear(hidden_layer_sizes[-1], 1)
        # make sure that sigma > 0
        output_layer = torch.cat(output_layer_mean, F.softplus(output_layer_sigma))  # <--- NO GOOD
        self.linear_layers_list.append(output_layer)
        self.linear_layers = torch.nn.ModuleList(self.linear_layers_list)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.linear_layers_list:
            x = layer(x)
        return x

I need the output of output_layer_sigma to be positive, so I'm running it through a softplus. However, I get the following error at run time:

File "(...)", line 56, in __init__
    output_layer = torch.cat(output_layer_mean, F.softplus(output_layer_sigma))
TypeError: softplus(): argument 'input' (position 1) must be Tensor, not Linear

How do I make sure the 2nd value in my output tensor is always positive? Should I call F.softplus in forward instead of __init__()? Or use the nn.Softplus module? Or is there a better, "pytorch-standard" way of doing things? Thanks!
PS: Here's what this would look like in tf.keras:

# output layer - enforce a positive sigma
mean_output = keras.layers.Dense(1)(last_hidden_layer)
sigma_output = keras.layers.Dense(1, activation='softplus')(last_hidden_layer)
output_layer = keras.layers.concatenate([mean_output, sigma_output])
st49140
Solved by KFrank in post #2.
st49141
Hello Mishoo!
Yes. As you see, you can't apply softplus() to a Linear; you need to apply it to the output of the Linear, which is a tensor. I would not append output_layer (nor output_layer_mean nor output_layer_sigma) to linear_layers_list. Something like this:

output_layer_mean = nn.Linear(hidden_layer_sizes[-1], 1)
output_layer_sigma = nn.Linear(hidden_layer_sizes[-1], 1)
# do this stuff in forward ...
# # make sure that sigma > 0
# output_layer = torch.cat(output_layer_mean, F.softplus(output_layer_sigma))  <--- NO GOOD
# self.linear_layers_list.append(output_layer)
...

def forward(self, x: torch.Tensor) -> torch.Tensor:
    for layer in self.linear_layers_list:
        x = layer(x)
    # haven't yet applied output_layer_mean nor output_layer_sigma
    x = torch.cat((output_layer_mean(x), F.softplus(output_layer_sigma(x))), dim=-1)
    return x

Best. K. Frank
st49142
When I use a PyTorch DataLoader to load my test data, I set shuffle to False, like this:

test_loader = DataLoader(test_set, batch_size=batch_size, shuffle=False)

It then returns labels that are all 1, which is impossible because my data includes samples from different classes (and thus different labels). When I changed the shuffle argument of the DataLoader to shuffle=True,

test_loader = DataLoader(test_set, batch_size=batch_size, shuffle=True)

the labels are finally returned normally:

Labels: tensor([11, 0, 12, 5, 13, 16, 12, 16, 18, 7, 10, 10, 8, 18, 14, 16, 14, 3, 15, 6, 0, 10, 6, 10, 0, 18, 14, 0, 7, 10, 5, 15, 11, 7, 0, 9, 11, 13, 8, 11, 6, 16, 10, 8, 10, 18, 9, 4, 7, 10, 5, 18, 3, 12, 5, 9, 8, 6, 15, 3, 14, 12, 17, 14])
Labels: tensor([14, 14, 8, 12, 15, 7, 6, 14, 8, 9, 17, 12, 16, 0, 17, 1, 7, 2, 16, 14, 10, 15, 7, 8, 14, 16, 4, 17, 9, 15, 6, 6, 6, 18, 5, 0, 8, 10, 2, 0, 8, 6, 5, 17, 16, 18, 10, 9, 11, 7, 7, 10, 18, 7, 4, 7, 9, 4, 18, 6, 18, 6, 5, 10])

The problem is solved, but I don't understand why the DataLoader returns all the same labels when shuffle is set to False. Can anyone explain this to me?
st49143
Maybe this is the reason: when the folder contains images whose names have specific meaningful prefixes, the DataLoader loads the images sorted alphabetically. Take the two classes dog and cat as an example, giving cat label 1 and dog label 0. Suppose 1000 images are dog_001.jpg, dog_002.jpg, etc., and 200 images are cat_001.jpg, … Then you get labels that are all 1s, because images with the cat prefix are returned first alphabetically. So this explains your case.
st49144
Sorry, I don’t understand your explanation. Can you elaborate more on that? (I can read Japanese, so you can just reply in Japanese too)
st49145
Isn't this the expected behavior when setting shuffle=False? shuffle=False means the data is no longer shuffled but returned in order. The labels tensor you printed above is probably only your first batch and thus only contains label 1; if you were to print further batches, they would probably contain your other labels. I am guessing your test_set is a torchvision.datasets.ImageFolder() with your images inside different folders representing your labels? If so, then this is indeed the expected behavior.
st49146
Is there a way to ensure that the weights of the network stay in a positive range throughout training?
st49147
milesg: Is there a way to ensure that the weights of the network stay in a positive range throughout training? Did you find any solution for this question?
st49148
After you've updated the weights, add the following lines to your code:

for p in mdl.parameters():
    p.data.clamp_(0)

Example:

import torch
import torch.nn as nn

x = torch.arange(10.).view(-1, 1)  # float input, as nn.Linear expects
y = -3 * x

class NN_Linear_Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.lm = nn.Linear(1, 1)

    def forward(self, X):
        out = self.lm(X)
        return out

mdl = NN_Linear_Model()
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(mdl.parameters(), lr=0.0001)

for i in range(100):
    optimizer.zero_grad()
    y_pred = mdl(x)
    loss = loss_fn(y_pred, y)
    loss.backward()
    optimizer.step()
    for p in mdl.parameters():
        p.data.clamp_(0)
    if i % 10 == 0:
        print(f'loss is {loss} at iter {i}, weight: {list(mdl.parameters())[0].item()}')

list(mdl.parameters())

One caveat is that your model may not converge to the optimal point, since you are restricting where your parameters can go.
st49149
Does the loss function need to be changed to a specific one, or something? When I applied the proposed solution, I got stuck at an accuracy of 10%. I am using CrossEntropy as the loss function.
st49150
Yes: softmax won't converge when negative inputs are not expressible while the gradients push them there (such use of clamp_ is invisible to autograd). Maybe softmax(x-10) would work, but it is still a hack. Not touching the parameters of some final layers may work better. PS: actually, softmax/CrossEntropy mainly exist to force positivity; if the network outputs are always positive anyway, something like -CategoricalDistribution(probs=output).log_prob(target) may work as a loss.
st49151
Sorry I did not return here… I already solved it, and it converged with softmax with no problem (just less accuracy; it dropped from 95% to 82%).
st49152
The trick is to parameterize the weights by their logarithms. The log weights are allowed to vary freely among real numbers. An exponential map will convert the log weights to positive-definite weights before the weight is applied to the input data. Example code:

import torch
import torch.nn as nn

class PositiveLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super(PositiveLinear, self).__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.log_weight = nn.Parameter(torch.Tensor(out_features, in_features))
        self.reset_parameters()

    def reset_parameters(self):
        nn.init.xavier_uniform_(self.log_weight)

    def forward(self, input):
        return nn.functional.linear(input, self.log_weight.exp())
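A small usage sketch of that module (the shapes are illustrative): since the effective weight is exp(log_weight), it stays positive no matter what the optimizer does to log_weight.

layer = PositiveLinear(4, 2)
x = torch.randn(8, 4)
out = layer(x)  # shape (8, 2)
print((layer.log_weight.exp() > 0).all())  # tensor(True)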
st49153
Hello, I have used a 2D GAN to synthesize CT images from MR. Since both MR and CT come in 3D, I had to slice the volumes into 2D images and feed them into the network. Now I want to try the same synthesis project in 3D, with the same GAN algorithm. How do I move the project from 2D to 3D? What should I be concerned about? How do the loss, optimization, forward and backward passes change as the data dimension increases?
st49154
Hello, can you let me know how you extracted the slices from the 3D image? I am working on the same thing. Or can you share any reference? Currently I have my images in 3D and I am stuck trying to slice them.
st49155
I just extracted them axially. Suppose the 3D image dimension is 172x220x156; I created 156 2D images of dimension 172x220 and saved them in .tif format. It should be pretty straightforward and can be done using Matlab or the PIL Python library. Let me know how it goes for you.
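In case it helps, a minimal sketch of that axial slicing with numpy and PIL (the random volume and file names are illustrative placeholders for your loaded data):

import numpy as np
from PIL import Image

vol = np.random.rand(172, 220, 156)  # stand-in for a loaded MR/CT volume
for k in range(vol.shape[2]):
    sl = vol[:, :, k].astype(np.float32)  # one axial 172x220 slice
    Image.fromarray(sl, mode='F').save(f'slice_{k:03d}.tif')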
st49156
Thank you, I was able to do that for a 2D network. Have you also tried to create 3D patches? I want to create 3D patches for my CT and PET images.
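I haven't compared this against other approaches, but a minimal sketch of non-overlapping 3D patch extraction with tensor.unfold (the patch size of 32 is illustrative):

import torch

vol = torch.randn(172, 220, 156)
patches = vol.unfold(0, 32, 32).unfold(1, 32, 32).unfold(2, 32, 32)
patches = patches.reshape(-1, 32, 32, 32)  # 5*6*4 = 120 patches of 32^3
print(patches.shape)  # torch.Size([120, 32, 32, 32])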
st49157
Hello Banik, were you successful in converting your 2D project to 3D? I am also trying to implement the same, but I am stuck with some errors.
st49158
Hi Enthu, I converted the 2D synthesis to 3D. If I remember correctly, I was having issues with the reconstruction from patches.
st49159
I want to do this:

grads = grad(loss, model.parameters())

But I am using nn.Module to define my model. It seems to run the backward() function automatically, but I want backward() not to calculate any grads; I want to compute the grads myself. How can I omit the backward() function and prevent any gradient calculation when I call my model?
st49160
Hi, plain nn.Modules don't call backward by themselves. You might need to update the library you're using on top of PyTorch to stop it from doing that. Then you can use autograd.grad() to do what you want.
st49161
I am using PyTorch version 1.6, and I am getting this error when calling grad manually: "RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling backward the first time."
st49162
As mentioned, if you want to backward through the graph multiple times, you have to specify retain_graph=True for all but the last call. Note that this can also happen if you do some computation before your training loop and re-use part of the graph in every iteration: you should recompute the whole graph at every iteration!
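A minimal sketch of the flag with autograd.grad (the graph here is a toy):

import torch
from torch.autograd import grad

w = torch.randn(3, requires_grad=True)
loss = (w ** 2).sum()
g1 = grad(loss, w, retain_graph=True)  # keep the graph alive for another call
g2 = grad(loss, w)                     # last call through this graph may free it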
st49163
So here is what I am doing: I defined the model class (myModel) using nn.Module. In training I call my model as outputs = myModel(inputs) and calculate the loss as loss = Crossentropy(outputs, targets). I am not calling loss.backward(); instead I use grads = grad(loss, model.parameters()). I am getting the error I mentioned above, which suggests that somehow backward() is already being called. I want to prevent that automatic first call of backward().
st49164
Hi, there is no automatic call to backward, and this error can also come from a previous call to autograd.grad. Can you share the code around your main training loop, or a small code sample (30-40 lines) that reproduces the error?
st49165
I am using an LSTM for the Penn TreeBank problem. I also have one doubt that came to mind: for LSTMs, do I have to initialize the hidden states in every loop iteration, or just at the start?

def train_loss(data, target, ht, ct):
    out, (ht, ct) = myModel(data, ht, ct)
    loss = F.cross_entropy(out, target)
    return loss, (ht, ct)

for epoch in range(EPOCHS):
    trainloss = 0.0
    t0 = time.time()
    for batch in train_loader:
        data, target = batch.text.t(), batch.target.t()
        loss, (ht, ct) = train_loss(data, target.reshape(-1), ht, ct)
        grads = grad(loss, Ws)
        trainloss += loss
        ..........................
st49166
You have to either re-initialize the hidden state or at least .detach() it. Otherwise you will try to backprop through all the previous iterations of your model, and that causes the error you see, since you have already backproped through that part of the graph before.
st49167
Yes, right. The problem is solved by using:

ht = ht.clone().detach()
ct = ct.clone().detach()

Thank you.