st49668
Hi, you can use this mask for slicing, so c = a[b] should return your expected output
st49669
Hi, I notice that when you use a bidirectional LSTM in PyTorch, it is common to do floor division on the hidden dimension, for example:

    def init_hidden(self):
        return (autograd.Variable(torch.randn(2, 1, self.hidden_dim // 2)),
                autograd.Variable(torch.randn(2, 1, self.hidden_dim // 2)))

http://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html#sphx-glr-beginner-nlp-advanced-tutorial-py
I'm not quite sure why we need this extra step when changing from a uni-directional to a bi-directional LSTM.
st49670
this is probably expected by the LSTM module (and in turn the CuDNN RNN API). It’s a convention more than anything from what I understand.
st49671
It's a good convention. In that example they create a custom model and hidden_dim defines the output size they want from the LSTM. Bidirectional has twice the amount of hidden variables, so if you want to keep the final output the same you have to divide hidden_dim by 2.

    hidden_size = 128
    lstm = nn.LSTM(10, hidden_size, num_layers=1, batch_first=True, bidirectional=False)
    hidden_vect_1 = (Variable(torch.zeros(1, 1, hidden_size)),
                     Variable(torch.zeros(1, 1, hidden_size)))
    output, hidden = lstm(Variable(torch.rand(1, 5, 10)), hidden_vect_1)
    print('Output size:', output.size(), '- Hidden size:', [h.size() for h in hidden])
    # Output size: torch.Size([1, 5, 128]) - Hidden size: [torch.Size([1, 1, 128]), torch.Size([1, 1, 128])]

    lstm = nn.LSTM(10, hidden_size, num_layers=1, batch_first=True, bidirectional=True)
    hidden_vect_1 = (Variable(torch.zeros(2, 1, hidden_size)),
                     Variable(torch.zeros(2, 1, hidden_size)))
    output, hidden = lstm(Variable(torch.rand(1, 5, 10)), hidden_vect_1)
    print('Output size:', output.size(), '- Hidden size:', [h.size() for h in hidden])
    # Output size: torch.Size([1, 5, 256]) - Hidden size: [torch.Size([2, 1, 128]), torch.Size([2, 1, 128])]
    # Note that the output is twice as large, and also that the hidden states are [2, 1, 128] instead of [1, 1, 128]
st49672
If you use the same hidden size in an LSTM while using bidirectional=True, it will produce twice the amount of hidden variables, where you won't even use half of them, so you are wasting computation. But you can keep the same hidden size and apply a fully connected layer at the end to convert the '2x' size to 'x' size if you want :))
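To make that concrete, here is a minimal sketch (sizes are made up) of keeping hidden_size unchanged and projecting the doubled bidirectional output back down with a linear layer:

    import torch
    import torch.nn as nn

    hidden_size = 128
    lstm = nn.LSTM(10, hidden_size, num_layers=1, batch_first=True, bidirectional=True)
    proj = nn.Linear(2 * hidden_size, hidden_size)  # maps the '2x' output back to 'x'

    x = torch.rand(1, 5, 10)
    output, (h_n, c_n) = lstm(x)   # output: [1, 5, 256]
    output = proj(output)          # output: [1, 5, 128]
    print(output.shape)            # torch.Size([1, 5, 128])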
st49673
Warm regards to all! I am currently trying to predict x and y locations (landmarks) from a 512x512 image for a circle detection problem. I have a segmented image which shows circles as black and background as white; for each image I have the x and y coordinate of the centroid of each circle, with a total of 435 points to predict (so 870 total values, x and y). I had previously worked on a facial landmark detection problem where I was only predicting 138 values instead of 870. However, I am not successful at the moment at predicting the correct values for the centroids, and the main question I have is regarding normalization. I have followed the following procedure for normalizing to [-1, 1]:

    Y_C /= 512
    Y_C = 2*Y_C - 1

I am using SmoothL1Loss, and after training it turns out the predicted values are outside of the range [-1, 1]. Is there a reason for this I am missing? Also, if you have some advice for predicting a large number of landmarks for this type of problem I would really appreciate the help, as I am currently not successful.
st49674
Solved by rzp0063 in post #5 @ptrblck Thank you so much for your response and guidance to solve this issue with several tips. This makes a lot more sense now why the the points are shifting towards zero and confirms the source of error with this approach. I will keep trying to solve it using ConvNets along with your suggestion…
st49675
rzp0063: the predicted values are outside of the range [-1 1] If you don’t clip the output or use any other condition, the outputs of e.g. a linear layer would be unbounded so that the values might be outside of [-1, 1]. That being said, I don’t think that e.g. using a sigmoid for a regression task would perform better than using no activation function so you might need to play around with some hyperparameters to improve the predictions.
st49676
@ptrblck Thank you so much for your reply! It is very helpful since it answers the main question I had in terms of normalization and its bounds. I'll try to describe my project better to see if you have any additional professional advice about which methods to use for this problem from your experience. I have 30,000+ images (256x256) of circular particles in space. I have already collected their ground truth using conventional computer vision segmentation, which also outputs the radius, perimeter, and x-y locations of each particle; it is very slow, therefore machine learning could be an attractive approach. My initial task, which I have already done successfully, was to implement a simpler version of U-Net to generate the same segmented (ground truth) figure. The results were great, nearly a perfect segmentation, as well as improving the speed of conventional computer vision methods by a substantial amount. Now I am trying to do the second portion, which is to predict the radius and x-y locations as well. I have tried using convolutional neural nets, even using transfer learning from pre-trained models, but the results are quite undesirable. [image] My first thought as to what is causing this is that since some images have more particles than others, in order to make the output vector of x-y locations and radius a fixed size (101 each for x, y and radius, so 303 total), some outputs will contain values of zeros. In other words, I made the output vector size fixed at 101 for x, y, radius, but some images might only have 51 particles, which means 50 entries (50*3 = 150 out of 303) will be 0. My second thought regarding the source of error is the sparsity of the x-y locations, being scattered all over the image. Is there an approach you would advise using that could be better for this specific problem? I am thinking of trying the YOLO network today by creating bounding boxes; however, can the YOLO network work with only 1 class (particle or no particle, where some even overlap)? Please, if you think there is a different route I should take, I would really appreciate your advice.
st49677
That’s an interesting use case. If I understand your current approach correctly, you are using the first output indices for the valid points and set the rest to the zero target. Assuming that’s the case, then this would explain the “drift” of your points towards the zero location. Some of the “later” outputs will get zero targets more often than the earlier ones and the model could try to output the “mean” location, which would be somewhere between a valid location and the zero point. I’m not sure what the best approach would be, but I guess you could try to reuse some approaches of object detection models, e.g. sorting the outputs based on their mse loss to the target value (in case your model just predicts the points in a different order) or work with proposals and keep the best ones.
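As a rough illustration of the matching idea (not code from this thread), one could pair each predicted point with a target point before computing the loss, for example with the Hungarian algorithm from SciPy; preds and targets are assumed to be [N, 2] point tensors with invalid slots filtered out beforehand:

    import torch
    from scipy.optimize import linear_sum_assignment

    def matched_mse(preds, targets):
        # pairwise squared distances between every predicted and every target point
        cost = torch.cdist(preds, targets, p=2) ** 2                 # [N, M]
        # solve the assignment on a detached copy, then index the differentiable cost
        row, col = linear_sum_assignment(cost.detach().cpu().numpy())
        return cost[torch.as_tensor(row), torch.as_tensor(col)].mean()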
st49678
@ptrblck Thank you so much for your response and guidance to solve this issue with several tips. This makes a lot more sense now, why the points are shifting towards zero, and confirms the source of error with this approach. I will keep trying to solve it using ConvNets along with your suggestion of sorting the outputs, that's a very clever idea; it would be very interesting to show that this problem could be solved with a simple ConvNets approach. I wanted to give you the good news since you have been so helpful and responsive to my question: I tried the YOLO network right away over the past couple of days since it allows a varying number of objects per image. The results using the YOLO network are nearly perfect! Again thank you so much for your help! I will keep trying with ConvNets though, as it would be a more attractive solution requiring less computational power.
st49679
That’s pretty cool. You could also try to use a “fast” YOLO model. If I remember it correctly, newer YOLO architectures are quite fast and might also work pretty well.
st49680
This seems really silly, but where is the forward() function of the Mask R-CNN? I have been looking through https://github.com/pytorch/vision/blob/2831f11abcb9ec7b951b6bbbcb7a85b79ee2fd79/torchvision/models/detection/mask_rcnn.py but can't find it at all. I am trying to modify the Mask R-CNN's forward pass https://pytorch.org/docs/stable/torchvision/models.html#torchvision.models.detection.maskrcnn_resnet50_fpn
st49681
Solved by albanD in post #2 Hi, It is inherited from its parent class. So all the way to this class: https://github.com/pytorch/vision/blob/2831f11abcb9ec7b951b6bbbcb7a85b79ee2fd79/torchvision/models/detection/generalized_rcnn.py#L15
st49682
Hi, It is inherited from its parent class. So all the way to this class: https://github.com/pytorch/vision/blob/2831f11abcb9ec7b951b6bbbcb7a85b79ee2fd79/torchvision/models/detection/generalized_rcnn.py#L15
st49683
What happens if loss.backwards() is called multiple times without optimizer.step() ? How would gradients be updated? Sum of gradients for each backwards call?
st49684
Solved by BramVanroy in post #2 Yes, gradients are accumulated (summed).
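A quick self-contained check of this accumulation behaviour (illustrative only):

    import torch

    w = torch.tensor(1., requires_grad=True)
    (2 * w).backward()
    print(w.grad)   # tensor(2.)
    (2 * w).backward()
    print(w.grad)   # tensor(4.) -- gradients were summed; they are cleared by optimizer.zero_grad()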
st49685
Hello, I found different time costs with different weights on the same MobileNetV3 model using PyTorch. The following code is used to measure time:

    transform_fn = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(
            mean=[0.485, 0.456, 0.406],
            std=[0.229, 0.224, 0.225],
        ),
    ])
    image = cv2.imread('test-image.jpg')
    image = transform_fn(cv2.resize(image, (96, 96), interpolation=cv2.INTER_LINEAR)).unsqueeze(0).cpu()
    model.load_state_dict(torch.load('model_path', map_location='cpu'))
    model.eval()
    count = 100
    with torch.no_grad():
        start_ts = time.time()
        for i in range(count):
            outputs = model(image)
        elapsed = (time.time() - start_ts) * 1000
    print('elapsed time:', elapsed / count)

I suspected it might be caused by the number of zero parameters in the model, so I counted the zero parameters in the different weights. But it turned out that the pretrained weights have fewer zero parameters.

Pretrained: Time: 8ms; Zero-params number (eps=1e-6): 541; Zero-params number (eps=1e-2): 114071
After 100 epochs based on Pretrained: Time: 20ms; Zero-params number (eps=1e-6): 1538906; Zero-params number (eps=1e-2): 1619014

Zero parameters counting code:

    eps = 1e-2
    zero_cnt = 0
    params = list(model.parameters())
    for param in params:
        zeros_count = torch.sum(torch.where(torch.abs(param) < eps,
                                            torch.ones_like(param),
                                            torch.zeros_like(param))).int().item()
        zero_cnt += zeros_count
    print('pretrained zero params:', zero_cnt)

Anybody know why this happens? Thanks.
st49686
You might see a performance hit, if you are handling a lot of denormal values. Set torch.set_flush_denormal(True) and run the code again.
st49687
Thanks. It's caused by denormalized floats. Both set_flush_denormal and setting all parameters close to zero to exactly zero can solve this problem.

    torch.set_flush_denormal(True)

    # Set all parameters close to zero to exactly zero
    for param in model.parameters():
        param.data = torch.where(torch.abs(param) < eps, torch.zeros_like(param), param)
st49688
Is there any way to prevent trained weights from having denormal values during training? We trained a model with ResNet50 as the backbone with PyTorch v1.0.0a0 and v1.4.0. The model trained with v1.4.0 has denormal values in the trained weights and the one trained with v1.0.0a0 does not; do you know the reason?
st49689
@bsting Did you find any useful resources for avoiding denormal values during training? I am also curious
st49690
I am doing a tutorial on classifying multiple face attributes. My idea is to create objects of the model multiple times to train each attribute. Can we use one model to train all attributes at a time? Because, as far as I can imagine, the CNN model will output binary classes (-1 or 1) for one attribute. Are there any ways to output 10 attributes with (-1 or 1) instead of assigning a model to one attribute?
st49691
If you would like to use a multi-label classification (multiple outputs can have an active or inactive class), then you could use nn.BCEWithLogitsLoss as the criterion and make sure your model outputs have the shape [batch_size, nb_classes]. I’m not sure if -1 and 1 would correspond to a positive and negative case, but the target for nn.BCEWithLogitsLoss should contain values in [0, 1].
st49692
Hi, thank you for your answers. I am building a ResNet-50 which outputs 2 features at the end, but I need to produce 40 face attributes for classification, each in class (0 or 1). I can do it for one attribute, which is very standard. I actually came across a question you answered in this forum which creates multiple fc layers and returns multiple outputs at the end, so I use a ModuleList to create 40 model outputs where each one has class probabilities for 1 and 0. It seems to work, but my training is ridiculously long (probably 10 hours for just one epoch); the dataset is the celebrity dataset, around 170,000 training images, 7k each in size. I am still trying to perfect the code. Is my approach correct? Appreciate your reply, thx.
st49693
As previously said, if these face attributes can be independently active (1) or inactive (0), then you are dealing with a multi-label approach and could use a single output layer with out_features=40 and nn.BCEWithLogitsLoss as the loss function.
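A minimal sketch of that single-head setup (the backbone and sizes are placeholders):

    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet50(pretrained=True)
    model.fc = nn.Linear(model.fc.in_features, 40)   # one logit per attribute

    criterion = nn.BCEWithLogitsLoss()
    images = torch.randn(8, 3, 224, 224)             # dummy batch
    targets = torch.randint(0, 2, (8, 40)).float()   # 0/1 per attribute

    logits = model(images)                           # [8, 40]
    loss = criterion(logits, targets)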
st49694
My main training and validation loop looks like this:

    import torch

    def train(net, dataloader, loss_func, optimizer, device):
        net.train()
        num_true_pred = 0
        total_loss = 0
        for images, labels in dataloader:
            images = images.to(device)
            labels = labels.to(device)
            optimizer.zero_grad()
            outputs = net(images)
            loss = loss_func(outputs, labels)
            loss.backward()
            optimizer.step()
            class_preds = outputs > 0  # for binary cross entropy
            num_true_pred += torch.sum(class_preds == labels)
            total_loss += loss
        train_loss = total_loss.item() / len(dataloader)
        train_acc = num_true_pred.item() / len(dataloader)
        return net, train_loss, train_acc

    def validate(net, dataloader, loss_func, device):
        net.eval()
        num_true_pred = 0
        total_loss = 0
        for images, labels in dataloader:
            images = images.to(device)
            labels = labels.to(device)
            with torch.no_grad():
                outputs = net(images)
            loss = loss_func(outputs, labels)
            class_preds = outputs > 0  # for binary cross entropy
            num_true_pred += torch.sum(class_preds == labels)
            total_loss += loss
        val_loss = total_loss.item() / len(dataloader)
        val_acc = num_true_pred.item() / len(dataloader)
        return val_loss, val_acc

    # GPU
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    # initialize datasets and dataloaders
    train_dataset = ...
    train_dataloader = ...
    val_dataset = ...
    val_dataloader = ...

    # initialize net and move to GPU
    net = ...
    net = net.to(device)

    # initialize loss function (e.g. binary cross entropy)
    loss_func = ...

    # initialize optimizer (e.g. SGD)
    optimizer = ...

    # number of epochs to train and validate for
    num_epochs = ...

    for epoch in range(num_epochs):
        net, train_loss, train_acc = train(net, train_dataloader, loss_func, optimizer, device)
        val_loss, val_acc = validate(net, val_dataloader, loss_func, device)

My main question is about the train function. Do I need to return the network net as well as the train_loss and train_acc? What I mean is, is the network net mutable such that any changes that are done to it inside the train function reflect outside of it? I should then be able to change the for loop at the end to:

    for epoch in range(num_epochs):
        train_loss, train_acc = train(net, train_dataloader, loss_func, optimizer, device)
        val_loss, val_acc = validate(net, val_dataloader, loss_func, device)

Also, please let me know if there are other ways to improve this code, since this is the template that I use for all my training and validation loops.
st49695
Solved by albanD in post #2 Hi, Yes the net is modified inplace by the optimizer. So no need to return it.
st49696
Thanks for the quick reply. Do you have any other suggestions for modifying the structure of my code? I use this structure a lot so just want to know if there is any way of making it more efficient.
st49697
I think it's quite good. The only comments I would make are: Do total_loss += loss.item() to convert the loss to a Python number directly. This will make sure you don't build the autograd graph for things that don't need it. You can wrap your validate function with @torch.no_grad() to disable the autograd in the whole function if you want (you already do it for the model, which is the most important part). Looks good otherwise.
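In code, the two suggestions could look like this (a sketch of the validate function only):

    import torch

    @torch.no_grad()                     # autograd disabled for the whole function
    def validate(net, dataloader, loss_func, device):
        net.eval()
        total_loss = 0.0
        for images, labels in dataloader:
            outputs = net(images.to(device))
            loss = loss_func(outputs, labels.to(device))
            total_loss += loss.item()    # .item() -> plain Python float, no graph kept around
        return total_loss / len(dataloader)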
st49698
I am trying to build a network in PyTorch with a very large fully connected top layer, on the order of 80000 input and 15000 output elements (there are more layers after). The top layer alone requires too much CUDA memory to fit on one GPU during training (even with batch size 1). PyTorch's DataParallel for GPU splitting still needs to put the model on all GPUs (as far as I know), and gather the data on one at the end, so it doesn't help me with the memory issue. Other posts discuss splitting a large model onto several GPUs (e.g. Split single model in multiple gpus). From what I understand, using something like (from the linked post):

    self.large_submodule1.cuda(0)
    self.large_submodule2.cuda(1)
    ...

only seems to be relevant when you can split whole modules, e.g. the first whole FC layer, the second FC layer, etc. I don't know if and how it can be used for splitting one module, here one FC layer. In the MWE below, that would mean splitting fc onto multiple GPUs.

    class network(nn.Module):
        def __init__(self, sizeIn, sizeOut):
            super(network, self).__init__()
            ### Linear FC layers
            self.fc = nn.Sequential(
                nn.Linear(sizeIn, sizeOut),
                nn.Tanh(),
            )

        def forward(self, x):
            x = self.fc(x)
            return x

Any tips are much appreciated! Thanks.
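For what it's worth, here is a rough, untested sketch of how a single huge linear layer could be sharded over two GPUs (output features split across devices, results concatenated on one of them); all names here are hypothetical and it assumes two visible GPUs:

    import torch
    import torch.nn as nn

    class ShardedLinear(nn.Module):
        def __init__(self, size_in, size_out):
            super().__init__()
            # each half of the output features lives on its own device
            self.fc0 = nn.Linear(size_in, size_out // 2).to('cuda:0')
            self.fc1 = nn.Linear(size_in, size_out - size_out // 2).to('cuda:1')

        def forward(self, x):
            y0 = self.fc0(x.to('cuda:0'))
            y1 = self.fc1(x.to('cuda:1'))
            # gather both shards on one device and concatenate along the feature dim
            return torch.cat([y0, y1.to('cuda:0')], dim=1)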
st49699
Hi All, I am a beginner in this field. Currently I would like to set up the environment for my GPU, and I am lost here and there. I tried to install driver 418 as I want CUDA 10.1, which has support for PyTorch, but I got driver 450.66 instead. When I type nvidia-smi, it shows CUDA version 11; I searched Google and it said the CUDA version shown here is the driver API compatibility. If I install another CUDA version (10.1), will my environment be messed up? When I install CUDA, shall I also install cuDNN altogether? However, I found a folder for cuDNN: "/usr/include/cudnn.h". Since the machine I got is not clean, I am not sure what has already been installed for the GPU, so I am a little bit confused about whether my environment is messed up. How can I know if the configuration is set up correctly for a deep learning environment using PyTorch?
st49700
Hi, If you have a new device, I would actually advise using conda to install these things. You only need to install the nvidia driver (which you already have) and everything else will come in the form of conda packages in a given environment. This means that if you mess things up, just remove that environment and create a new one and all is good again! Once you have the nvidia drivers and conda, you can just follow the instructions from the website https://pytorch.org/get-started/locally/ to install with conda. It will automatically install the right version of cuda for you (as well as all other required dependencies)!
st49701
Hi AlbanD, Thank you for your advice. So the steps would be: driver --> conda --> create environment --> install PyTorch? Therefore, every time we create a new environment we should install PyTorch with CUDA? Or shall I install PyTorch with CUDA first and then create the environment? My GPU is in a server that has several users. Would it be wise to install the driver and conda as root, then enable conda for all users? Thank you so much.
st49702
Especially for a shared server, I would recommend having as few things in the global install as possible. That gives users freedom to install whichever variant they want, and prevents them from breaking the whole system if they do bad things. In particular because installing cuda within conda is very simple and safe. So I would suggest: (for everyone) driver -> conda; (for each user) create env -> install pytorch (that will pull cuda automatically). Note that you can also make users install their own conda if you want to ensure more separation between users. The benefit of the shared conda is that packages will be re-used, reducing disk footprint.
st49703
I have a data sequence a which is of shape [seq_len, 2], where seq_len is the length of the sequence. There is time correlation among the elements of a[:, 0] and of a[:, 1], but a[:, 0] and a[:, 1] are independent of each other. For training I prepare data of shape [batch_size, seq_len, 2]. The initialization of the BRNN that I use is

    birnn_layer = nn.RNN(input_size=2, hidden_size=100, batch_first=True, bidirectional=True)

From the docs:
input_size – The number of expected features in the input x
hidden_size – The number of features in the hidden state h
What does "number of expected features" mean? Since there is correlation along the seq_len axis, should my input_size be set as seq_len and the input be permuted? Thanks.
st49704
Solved by suraj.pt in post #6 Let’s say your batch_size=5 and seq_len=3. So each batch looks like batch = [ [x1_1, x1_2, x1_3], [x2_1, x2_2, x2_3], ... [x5_1, x5_2, x5_3] ] # shape (batch_size, seq_len, input_size) where x{seq_id}_{timestep} For a given timestep t, the RNN reads the t-…
st49705
Hi, input_size or “no. of expected features” denotes the dimensionality of each observation; in this case, 2. Also, your input to nn.RNN should be in the shape of [seq_len, batch_size, input_size]. At every timestep, the RNN receives a [t, :, :] matrix that contains all the observations at timestep t from all the batches.
st49706
Hi, I have mentioned batch_first=True as one of the parameters, so the batch_size dimension should come first, right?
st49707
suraj.pt: the RNN receives a [t, :, :] matrix that contains all the observations at timestep t from all the batches. Can you please elaborate on this?
st49708
Let's say your batch_size=5 and seq_len=3. So each batch looks like

    batch = [
        [x1_1, x1_2, x1_3],
        [x2_1, x2_2, x2_3],
        ...
        [x5_1, x5_2, x5_3]
    ]  # shape (batch_size, seq_len, input_size)

where the naming is x{seq_id}_{timestep}. For a given timestep t, the RNN reads the t-th observations from all the sequences in the batch. So at timestep t, it will read and process input_t:

    input_1 = [x1_1, x2_1, ..., x5_1]  # shape (batch_size, input_size)
    input_2 = [x1_2, x2_2, ..., x5_2]
    input_3 = [x1_3, x2_3, ..., x5_3]

However, you don't actually need to split your data into time-stepped inputs like this, the RNN just needs

    rnn_input = [input_1, input_2, input_3]  # (seq_len, batch_size, input_size)

from you. Hope this helps!
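A small shape check along these lines, reusing the numbers from this thread (batch_size=5, seq_len=3, input_size=2) together with the batch_first=True, bidirectional=True setup from the original question:

    import torch
    import torch.nn as nn

    rnn = nn.RNN(input_size=2, hidden_size=100, batch_first=True, bidirectional=True)
    x = torch.randn(5, 3, 2)   # (batch_size, seq_len, input_size) because batch_first=True
    output, h_n = rnn(x)
    print(output.shape)        # torch.Size([5, 3, 200])  -- last dim is 2*hidden_size (bidirectional)
    print(h_n.shape)           # torch.Size([2, 5, 100])  -- (num_layers*num_directions, batch, hidden_size)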
st49709
I have one last question. In the above example what would be the shape of data? Thank you so much for your effort
st49710
Hello everyone, I have around 5 fields of ordinal data that have to be one-hot-encoded. These fields essentially describe the context in which a sentence was made. The one-hot-encoded fields would be sparse tensors. Although I am not sure, my understanding is that, due to the sparsity, traditional layers applied to such data would yield meaningless results. I was wondering if there is any way to extract features from these sparse tensors such that I would be able to "include the context" in the sentence.
st49711
Hi all, What is the quickest way to create some perturbations of the distributions already available in PyTorch distributions? For example, if I want to create a new distribution by doing $(1-\epsilon)N(\mu,\sigma^2) + \epsilon \delta_x$, what would be the quickest way to do it? My other goal is to perturb some pdfs with Gaussian noise. If $f(x)$ is a probability density function, then I would like to construct a new density by perturbing it with some Gaussian noise, essentially via the same kind of equation as above: $(1-\epsilon)f(x) + \epsilon N(\mu,\sigma^2)$. Thanks very much.
st49712
An affine-transformed Gaussian is another Gaussian distribution. Look at https://pytorch.org/docs/master/distributions.html#torch.distributions.transforms.AffineTransform and the other bijective transforms there. Generally, you can't mess with pdfs, as they have to integrate to 1. Valid approaches to construct pdfs include mixture distributions, compound distributions, and normalizing flows (an advanced version of bijective transforms).
st49713
Thank you for your answer. What I was describing is basically creating various mixture distributions. I still cannot figure out how to create mixture distributions, except for the mixture-within-the-same-family code in PyTorch.
st49714
Well, mixture’s log_prob() is not hard to write (logsumexp() can be handy). But there is not much else to put into distribution objects, they’d just contain Categorical and component distribution objects. And differentiable sampling won’t work because of Categorical.
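A minimal sketch of such a two-component mixture log-density via logsumexp (the component parameters and epsilon are arbitrary placeholders):

    import torch
    from torch.distributions import Normal

    def mixture_log_prob(x, eps=0.1):
        comp_a = Normal(0.0, 1.0)    # stands in for f(x)
        comp_b = Normal(2.0, 0.5)    # the perturbing Gaussian
        log_w = torch.log(torch.tensor([1 - eps, eps]))
        log_p = torch.stack([comp_a.log_prob(x), comp_b.log_prob(x)], dim=-1)
        return torch.logsumexp(log_w + log_p, dim=-1)

    x = torch.linspace(-3.0, 5.0, 5)
    print(mixture_log_prob(x))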
st49715
RuntimeError: Module 'SparseSequential' has no attribute '_modules':

    File "/home/gcy/code/spconv-master/spconv/modules.py", line 132
        def forward(self, input):
            print(888888888, self._modules)
                             ~~~~~~~~~~~~~ <--- HERE
            for module in self._modules:
    ......

Now I have a class SparseSequential derived from nn.Module. When I script this class, I get the error above. As we all know, _modules is a member variable (an OrderedDict) of nn.Module. So how can I solve it? Thanks. My environment is PyTorch 1.5.0.
st49716
Hi, Check out .modules() or .named_modules() - both return an iterator of modules in the network (see the Module page in the PyTorch documentation).
st49717
I have trained a model using CYCLEGAN: I have the model.pth file, now I am trying to load the model and look for the saliency map features and also I want to generate the output on each encoding layer. I have tried several things but got no luck. Can anyone help?
st49718
Using this code

    data_path = '.'
    train_dataset = torchvision.datasets.ImageFolder(
        root=data_path,
        transform=torchvision.transforms.ToTensor()
    )
    train_loader = torch.utils.data.DataLoader(
        train_dataset,
        batch_size=1,
        num_workers=1,
        shuffle=True
    )

I can load multiple png images, but all transparent parts turn into black. I want to keep all four channels. I searched how to do it, but people have the inverse problem - they have all four channels and they want to drop the alpha channel. So what shall I do so that the alpha channel is loaded as well?
st49719
I figured it out myself.

    from scipy import misc

    data_path = '.'

    def my_loader(path):
        return misc.imread(path)

    train_dataset = torchvision.datasets.ImageFolder(
        root=data_path,
        transform=torchvision.transforms.ToTensor(),
        loader=my_loader
    )
    train_loader = torch.utils.data.DataLoader(
        train_dataset,
        batch_size=1,
        num_workers=1,
        shuffle=True
    )
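As a side note, scipy.misc.imread is deprecated in newer SciPy releases; a PIL-based loader that keeps the alpha channel might look like this (sketch):

    from PIL import Image

    def rgba_loader(path):
        # keep all 4 channels; ToTensor() then produces a 4xHxW tensor
        return Image.open(path).convert('RGBA')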
st49720
I have a problem using categorical cross entropy loss. The target data is imported from a numpy array containing label indices for 3 classes (0, 1, 2).

Dataset definition:

    class Tr_dataset(Dataset):
        def __init__(self, windowed_input, classification_target):
            self.windowed_input = windowed_input
            self.classification_target = classification_target

        def __len__(self):
            return len(self.windowed_input)

        def __getitem__(self, index):
            x_input = self.windowed_input[index]
            x_target = self.classification_target[index]
            x_input_tensor = torch.Tensor(x_input)
            x_input_tensor = x_input_tensor.view(SEQUENCE_LENGTH, INPUT_SIZE)
            # Ground truth
            x_target_tensor = torch.LongTensor(x_target)
            return x_input_tensor, x_target_tensor

Model class:

    class classification_RNN(nn.Module):
        def __init__(self, input_size, hidden_size, num_layers):
            super(classification_RNN, self).__init__()
            self.input_size = input_size
            self.hidden_size = hidden_size
            self.num_layers = num_layers
            self.rnn = nn.RNN(input_size=input_size, hidden_size=hidden_size,
                              num_layers=num_layers, batch_first=True)
            self.out = nn.Linear(hidden_size, 3)

        def forward(self, x):
            # The first hidden state is automatically initialized to zeros if not passed
            rnn_out, hidden = self.rnn(x)
            class_label = self.out(rnn_out)
            return class_label

Model-relevant information:

    model = classification_RNN(input_size=INPUT_SIZE, hidden_size=HIDDEN_SIZE, num_layers=NUM_LAYERS)

where:

    # Model params
    BATCH_SIZE = 12
    # The sequence length of the windowed data input
    SEQUENCE_LENGTH = 12
    # This is actually the number of dimensions/features in the input
    INPUT_SIZE = 1
    # The number of features in the last hidden state (which should definitely be one)
    # This is also equal to the number of output time steps to predict?
    HIDDEN_SIZE = 1
    # The number of RNN layers to stack
    NUM_LAYERS = 3

Since it is already getting too long, here is a partial training loop:

    for bi, (x_input, x_target) in enumerate(train_loader):
        model.train()
        x_input_batch, x_target_batch = x_input.to(device), x_target.to(device)
        optimizer.zero_grad()
        output_batch = model(x_input_batch)
        loss = criterion(output_batch, x_target_batch)

The 2 inputs for the criterion seem to have a size mismatch. From what I read in many places, I seem to be doing everything fine. But I get the error:

    ValueError: Expected target size (12, 3), got torch.Size([12, 1])
st49721
Solved by pchandrasekaran in post #13 @bibekx That’s a bit weird. I can’t tell how a variable change may have caused that. What I would suggest is: Check the model weights in the test_model function to ensure that an updated model is in fact passed. Check that the dataloader in test_model is iterating through correctly. Play around w…
st49722
Can you print the shapes of output_batch and x_target_batch, before they are passed to the loss function?
st49723
The output batch has size torch.Size([12, 12, 3]) and the target batch has size torch.Size([12, 1]). I changed a line in def forward to

    class_label = self.out(rnn_out.contiguous().view(-1, self.hidden_size))

to meet the input expectation of the linear layer. Now the output batch has size torch.Size([144, 3]), but I want it to be [12, 3].
st49724
Let us assume you did not reshape and the original error was about the output shape: output batch of size torch.Size([12, 12, 3]) and target batch of size torch.Size([12, 1]).

    import torch
    import torch.nn as nn

    ce = nn.CrossEntropyLoss()

    a = torch.randn((12, 12, 3))
    b = torch.randn((12, 1))
    # Expected target size (12, 3), got torch.Size([12, 1])
    # print('Input Shape: ', a.shape)
    # print('Target Shape: ', b.shape)
    # print('Loss: ', ce(a, b))

    # This works as I reshape the input batch.
    a = torch.randn((12, 12, 1))
    b = torch.randint(0, 3, (12, 1))
    print('Input Shape: ', a.shape)
    print('Target Shape: ', b.shape)
    print('Loss: ', ce(a, b))

In your original code, can you change self.out = nn.Linear(hidden_size, 3) to self.out = nn.Linear(hidden_size, 1), print the shapes and try?
st49725
But, would I not want the logits as one of the inputs to Cross Entropy loss? Also, I am not trying to do binary classification, I have to predict among 3 classes I think the real problem I am facing is interfacing the output of RNN to a linear layer Do you think this is correct? self.rnn = nn.RNN(input_size = input_size, hidden_size = hidden_size,num_layers= num_layers, batch_first= True) self.out = nn.Linear(hidden_size, 3)
st49726
Although I'm not too familiar with the workings of RNNs, your implementation looks correct. CrossEntropyLoss expects an input of dim = (N, C) and a target of dim = (N,). Additional dimensions are used for the "K-dimensional loss" as stated in the docs. Since your output batch is of dim (12, 12, 3), the expected target shape is (12, 3), but your targets are (12, 1), which explains your error. You need to perform 2 reshapes: The first one is output_batch = output_batch[:, -1, :]. It works, but I have no idea why this specific "reshape". Here's a link to an RNN implementation for MNIST where I looked it up. The second one is x_target_batch = x_target_batch.view(-1). This one is to satisfy the target shape required for the loss function.
st49727
pchandrasekaran: It works, but I have no idea why this specific "reshape".
The RNN module returns 2 output tensors: the outputs after each iteration and the last hidden state. We only use the first, which is of shape [Batch, Seq, Hidden] with batch_first=True and num_directions=1. bibekx most likely only wants the output of the last iteration, so we slice it with [:, -1, :]. The best place for this slicing would be in the forward call in classification_RNN, right before we feed it into the linear layer.
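In code, inside forward() that would look roughly like this (shapes assume batch_first=True and num_directions=1):

    rnn_out, hidden = self.rnn(x)        # rnn_out: [batch, seq_len, hidden_size]
    last_step = rnn_out[:, -1, :]        # [batch, hidden_size] -- output of the final timestep only
    class_logits = self.out(last_step)   # [batch, 3]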
st49728
Caruso: bibekx most likely only wants the output of the last iteration, so we slice it with [:, -1, :] . That makes sense now. Thank You for the explanation!
st49729
Thank you guys for the help. I did implement it as recommended by @Caruso and @pchandrasekaran and it seems to have initiated the training, but it's not training as expected: as the training accuracy increases, so does the loss, which doesn't make sense. The link to the code (please note some class names might be different): Link to my GitHub
st49730
Hi @pchandrasekaran, Thanks! I fixed it. But now, the accuracy is not improving.
st49731
@bibekx That's a bit weird. I can't tell how a variable change may have caused that. What I would suggest is:
1. Check the model weights in the test_model function to ensure that an updated model is in fact passed.
2. Check that the dataloader in test_model is iterating through correctly.
3. Play around with the optimizer params (and different optimizers) and epochs. The loss flatlines at only the 4th epoch.
Personally, I don't think 1 or 2 is an issue. It has to be the optimizer. Also, before any retraining, run cells 6, 10 and 11 just to ensure all the weights get reinitialized.
st49732
I just want to confirm whether the weight decay parameter in optimizers is equivalent to applying L2 regularisation. According to fastai's article on this, weight decay and L2 regularisation are only equivalent when used in vanilla SGD. There was also the Decoupling Weight Decay paper that states weight decay is a better alternative to L2 loss. I just wanted to know how this is done in PyTorch.
st49733
Hello everyone, I am currently training my network with mini-batches using the Adam optimizer. However, because I've read some posts saying that, with the right lr and lr_schedule parameters, SGD could perform better than Adam, I am planning to try SGD. From my understanding, SGD updates the parameters per sample, which means a batch size of 1, since a batch size greater than 1 is just normal GD. But I am seeing lots of PyTorch code using SGD with a batch size greater than 1 (e.g. 16). How does PyTorch's SGD then work and update the parameters? Also, can I still use batch normalization when I use SGD? SGD is when batch size is 1, so surely batch normalization will either not work or perform really badly.
st49734
Hi! First of all, a batch size greater than 1 is mini-batch GD instead of normal GD. The batch size determines the type of GD (SGD with batch size = 1, or mini-batch GD with batch size greater than 1). In addition, I personally recommend using Adam with mini-batches, since the two of them give much steadier and more robust training. Finally, you can't use batch normalization with SGD at batch size = 1, because batch normalization computes the average over a batch; since your batch size is 1, there is no average to compute over a single value. I hope that makes this clear. If you have any more questions, please let me know. Have a nice day : )
st49735
Thank you for the reply! However, my question is that I see some code in PyTorch which uses a batch size greater than 1 with SGD, and this means it is not really SGD anymore. So, how does PyTorch calculate SGD when the batch size is greater than 1?
st49736
Sorry, after searching the wiki, it turns out I was wrong about SGD meaning batch size = 1. Really sorry! SGD randomly chooses an example in the batch to compute the gradient, and uses that randomly picked gradient to replace the gradient computed over all examples or the whole batch. So it's reasonable to use a batch size greater than 1. However, from my experience, training with SGD is very unstable, which can cause your loss to change dramatically. If you want to know more, please click here and check it out. Hope I didn't make you confused: )
st49737
Hi! I’m trying to run many models on a single GPU by switching them in and out as needed (they don’t all fit in GPU memory together), but I’m finding that loading each model: model = model.to('cuda') takes 20-80ms (ex/ VGG16: ~80ms). If I want to load two different VGG16 models at a time, is there a way to parallelize loading such that the total load time is < 160ms? Thanks!
st49738
You could call the to() methods multiple times on different models, but note that the memory bandwidth is limited so you cannot push the parameters faster to the device than your system allows.
st49739
Hi Everyone, I'm facing this conversion error after completing the validation step of my UNet regression model; it is throwing this error (I don't know if I'm providing sufficient info for your understanding):

    File "C:\Python\Python 3.8.5\lib\site-packages\torch\utils\tensorboard\_convert_np.py", line 29, in make_np
        raise NotImplementedError(
    NotImplementedError: Got <class 'torch.nn.modules.loss.MSELoss'>, but numpy array, torch tensor, or caffe2 blob name are expected.

The code where I'm facing the error is attached below:

    def validation_end(self, outputs):
        # getting the mean of the stack of validation losses
        avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
        tensorboard_logs = {'val_loss': avg_loss}
        return {'avg_val_loss': avg_loss, 'log': tensorboard_logs}
st49740
Based on the error message it seems you are passing the class nn.MSELoss to the tensorboard visualization instead of a tensor or array containing the loss value. Could you check the type of all objects and make sure they are valid values and not the loss function itself?
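For illustration, the value handed to TensorBoard should be a tensor or number, not the criterion module itself; a minimal sketch (the writer, output and target here are placeholders):

    import torch
    from torch.utils.tensorboard import SummaryWriter

    writer = SummaryWriter()
    criterion = torch.nn.MSELoss()
    output, target = torch.randn(4, 1), torch.randn(4, 1)

    loss = criterion(output, target)              # a tensor -- fine to log
    writer.add_scalar('val_loss', loss.item())    # passing `criterion` itself triggers the error above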
st49741
Set up: You have an input dataset X, and each row has multiple labels, e.g. with 3 possible labels, [1,0,1] etc. Problem: The typical approach is to use BCEWithLogitsLoss or multi-label soft margin loss. But what if the problem is now switched to "all the labels must be correct, or don't predict anything at all"? What loss function do we pick for this? I thought of coding a custom loss function that returns 0 if all the labels match, else 1, but it seems "hacky". Thanks!
st49742
Hi everyone, I'm trying to use a pretrained resnet18 for my project and it fits very well to my train data but not to the validation data. I have tried changing the batch size, convolution layers, and lr_scheduler, but there's still no success. [screenshot] This is my first project with DNNs and I am a beginner. Can someone please give me a suggestion what I can do? Green is the train loss and gray the validation loss. My code looks like:

    class Resnet_Speaker_Recognition(nn.Module):
        def __init__(self):
            super(Resnet_Speaker_Recognition, self).__init__()
            self.resnet_model = models.resnet18(pretrained=True)
            self.resnet_model.conv1 = torch.nn.Conv2d(1, 64, kernel_size=(7,7), stride=(2,2), bias=True)
            self.resnet_model.conv2 = torch.nn.Conv2d(64, 32, kernel_size=(7,7), stride=(2,2), bias=True)
            self.resnet_model.conv3 = torch.nn.Conv2d(32, 64, kernel_size=(7,7), stride=(2,2), bias=True)

        def forward(self, x):
            x = self.resnet_model(x)
            return x
st49743
How large are your current training, validation, and test datasets? Generally more samples help in this scenario. Additionally you could increase the regularization (such as adding or increasing dropout or adding weight decay), adding more data augmentation etc. It can also be useful to check the training and validation datasets manually and make sure the samples come from the same domain and are thus “similar”.
st49744
Hi, thank you for your answer. I am using VoxCeleb1 as my dataset and it contains 138,361 files for training, 8,251 for test and 6,904 files for validation. There are 1,251 speakers to be identified. I was using optim.SGD as my optimizer until now; now I'm trying optim.Adam. It is a bit better but I'm still not getting good results. Could you please give me a simple example of how to increase dropout? I have changed my model again:

    class Resnet_Speaker_Recognition(nn.Module):
        def __init__(self):
            super(Resnet_Speaker_Recognition, self).__init__()
            self.resnet_model = models.resnet18(pretrained=True)
            self.resnet_model.conv1 = torch.nn.Conv2d(1, 64, kernel_size=(7,7), stride=(2,2), padding=(3,3), bias=False)
            self.num_ftrs = self.resnet_model.fc.in_features
            self.resnet_model.fc = nn.Linear(self.num_ftrs, 1251)

        def forward(self, x):
            x = self.resnet_model(x)
            return x
st49745
I think resnets don't use dropout by default, so you could add e.g. an nn.Sequential module as self.resnet_model.fc containing linear and dropout layers and check if it can improve the training.
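A minimal sketch of such a replacement head (the dropout probability is just a guess to be tuned):

    import torch.nn as nn
    from torchvision import models

    resnet = models.resnet18(pretrained=True)
    resnet.fc = nn.Sequential(
        nn.Dropout(p=0.5),                        # regularization before the classifier
        nn.Linear(resnet.fc.in_features, 1251),   # 1251 speakers as in the thread
    )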
st49746
It’s unfortunately still not working. Train Dataset works perfectly but Validation not. Can you please tell me how to check the Datasets manually?
st49747
Kla: Can you please tell me how to check the Datasets manually? Plot random images (or all) and check if the data might come from different domains, might contain different features, has special artifacts etc.
st49748
Hello, I started working with deep learning with Pytorch not very long ago and I have an issue with segmentation. My labels have three values : ‘0: no data’, ‘1: no vegetation’ and ‘2: vegetation’. I don’t want my algorithm to learn and classify ‘0: no data’ labels. After digging on internet I saw that one possible solution consists in putting a weight of 0 in the loss function, but I’m thinking that it might be problematic with respect to the computation of the gradient, I don’t know if I’m right though. Is there an elegant way to ignore the ‘0: no data’ label from training apart from setting a weight to “0” in the loss function? Thank you in advance.
st49749
You could alternatively use ignore_index in nn.CrossEntropyLoss to ignore this class during the loss calculation.
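A small example of that option (shapes are arbitrary); pixels labelled 0 then contribute neither to the loss nor to the gradients:

    import torch
    import torch.nn as nn

    criterion = nn.CrossEntropyLoss(ignore_index=0)
    logits = torch.randn(2, 3, 4, 4)          # [batch, classes, H, W]
    target = torch.randint(0, 3, (2, 4, 4))   # contains 0, 1, 2; the 0 ('no data') pixels are ignored
    loss = criterion(logits, target)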
st49750
How can I get the original image back? [screenshot]
st49751
Solved by ptrblck in post #2 The usage of reshape is wrong as it will interleave the image. Use permute to permute the dimensions of the tensor instead. If you want to get the original pixel values back, you would have to “denormalize” the image such that the values are again in the original range (most likely [0, 255]) inste…
st49752
The usage of reshape is wrong as it will interleave the image. Use permute to permute the dimensions of the tensor instead. If you want to get the original pixel values back, you would have to “denormalize” the image such that the values are again in the original range (most likely [0, 255]) instead of [0, 1], which is done by ToTensor().
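A sketch of both steps, assuming the usual ToTensor + Normalize(mean, std) pipeline; normalized_tensor here is just a stand-in for the transformed CxHxW image:

    import torch

    mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
    std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)
    normalized_tensor = torch.randn(3, 224, 224)   # stand-in for the normalized image

    img = normalized_tensor * std + mean           # undo Normalize -> roughly back to [0, 1]
    img = (img * 255).clamp(0, 255).byte()         # back to [0, 255]
    img = img.permute(1, 2, 0)                     # CxHxW -> HxWxC; use permute, not reshape/view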
st49753
I am using nn.AdaptiveLogSoftmaxWithLoss. The way I am building my model, the loss is outside of my nn.Module. How can I pass the weights included in this loss so that they appear in my model.parameters() and model.modules()? Or at least, how can I join the parameters/modules of my model with the ones in the loss function?
st49754
Solved by MariosOreo in post #2 Hi, model.parameters() and model.modules() are both generator, firstly you could get the list of parameters and modules by list(model.parameters()) and then passing the weights and the loss module in a append to list method. But model.modules() get submodules in a iteration way, so there will be s…
st49755
Hi, model.parameters() and model.modules() are both generators. Firstly, you could get the list of parameters and modules via list(model.parameters()) and then pass the weights and the loss module by appending them to that list. But model.modules() collects submodules by iterating over them, so that part is more difficult.
st49756
This answer is pretty much all you need! In the SGD example of the answer, you would only need to change model.base to your_model_name and model.classifier to your_loss_name. If you wrote your loss module properly (with registered nn.Parameters and not just tensors), it should work.
st49757
Yes, you are both right. Since the losses in PyTorch are nn.Module, it’s no problem, they have parameters and modules methods. But what if I wanted to add an extra tensor to the optimizer (with grad set to True)?
st49758
Firstly, the extra tensor should be trainable (with requires_grad=True) and included in the computational graph. Secondly, you could add the extra tensor to the optimizer's param_groups like this.
st49759
Can you give a code example please? I don't find that optimizer.param_groups in the link that you mentioned.
st49760
I meant you could add an extra tensor to the optimizer via optimizer.add_param_group, or initialize your optimizer as follows:

    optim.SGD([
        {'params': model.base.parameters()},
        {'params': model.classifier.parameters(), 'lr': 1e-3}
    ], lr=1e-2, momentum=0.9)
st49761
I don’t think you actually can do that with Tensors, which is the whole point of torch.nn.Parameter. My understanding is that Parameter was added specifically to avoid computing gradients for normal Tensors. What is your use case for Tensors which are not part of the model?
st49762
Hi, So in my case I have pretrained a model, and after pre-training, in the main training loop, I apply a few functions (tensor operations) on top of the pre-trained model's output. Applying this new function involves some parameters that I need to update at each iteration. Do you recommend making an nn.Module subclass out of these new functions and parameters, where I can define the new parameter using nn.Parameter? The problem is, I will not be using any of PyTorch's nn layers in this new model (the model on top of the pre-trained model). So, is it even necessary or best practice to add additional parameters like this? What do you guys suggest? Please let me know if I am being unclear. Would appreciate your help, thanks!
st49763
I think even if you don’t use PyTorch layers you should use Parameters for your learnable weights/params. This will probably make it easier for you. Using a subclass of nn.Module will ensure that all the backend work for computing gradients will work out of the box, and you can use your functions (or let’s call it a module, then) like another module of pytorch layers…
st49764
Hello everyone, I am currently doing my project on segmentation. The problem with my dataset is that my training and validation data are actually slightly different from the actual raw dataset or test dataset. So I came up with a solution: since data augmentation tends to help regularization and prevent overfitting, I have added random noise and rotation to my training dataset. Hence, my training set is composed of 1000 original training images + 1000 noisy training images, while the validation dataset is kept unchanged. However, my current model reaches around 94% with 0.87 IoU at epoch 50, whereas my original model's performance was 95% with 0.89 IoU at epoch 50. Can someone explain to me whether my data augmentation technique has actually worsened my model's performance, and if not, should I wait for more epochs to see the results?
st49765
edshkim98: Can someone explain to me that whether my data augmentation technique has actually worsen my model’s performance and if not, should I wait for longer epochs to see the results? Data augmentation might increase the model performance on the validation dataset, if the data distribution of the test data would be made more similar to the validation distribution. E.g. if your validation data contains more gaussian noise, adding this preprocessing step to the training might help.
st49766
@ptrblck, Thank you for your reply! Unfortunately, my main goal is to test on the raw dataset, so it is not possible to change the distribution akin to the validation dataset. My validation data also has similar distribution as the training dataset. Then my question is, should I also add gaussian noise to my validation dataset or do you still recommend adding noise to generate more data in this case? I know it would be difficult, but I want to maximize my validation data accuracy as well as accuracy on the raw dataset. Furthermore, would it be better to train with 1000 original training data + 1000 fixed noised data or 1000 noised data that changes the amount of noise in every epoch? e.g. transforms in torch.vision
st49767
edshkim98: My validation data also has similar distribution as the training dataset. I think I misunderstood the use case and thought you might have a slight difference between the training and validation datasets, but it seems the difference is between training+validation vs. test? I don’t think adding data augmentation to the validation set is a good idea, but also the difference of the test dataset is concerning. A common way would be to decrease the test set and use some of it in the training and validation set. However, if your test set is small or if you don’t have the targets, this won’t be easily possible. In that case unfortunately I don’t know what the best approach would be to generalize to a “new” data domain.