st81768
Yes, that was my problem. It worked now. I can’t believe that was just it! Thank you very much for the help @Nikronic.
st81769
No problem, mate. I'd say roughly 90% of my problems are related to Python's type system. As a computer engineering student, I really don't know why this language is even popular! If you leave out the great frameworks, Python is a shame for computer engineering. I benefited a lot from using PyCharm (a smarter IDE). Recently I found a framework on top of Python that lets you do type checking, even hierarchically, but unfortunately I can't remember its name. Bests
st81770
Hi. I am a university student studying in South Korea. I'm confused about the video dataloader, so I'm leaving a question. I want to make a 3 x 3 x 3 cube by stacking 64 frames from a video, but if the whole video has 200 frames, what is the common and best way to select 64 of them? And if the whole video only has 40 frames, what is the best way to increase it to 64 frames? If there's any problem with the question, I'll delete it. Sorry for my short English.
st81771
Guys, help me solve a problem. I took the nn.Transformer module. The input is a sentence, and at the output I need a sentiment analysis, positive or negative, so I want to use a softmax over two classes (0, 1). How can I do it? For nn.Transformer the decoder input is a required parameter (tgt - the sequence to the decoder (required)). What should I feed into this input?
st81772
Say that I want a model that looks like y = A * x, where x is a matrix and * is the element-wise product. How can I construct a trainable A? Thank you very much
st81773
Revised: I want y = (A * B) * x, where x is a vector given as one input, A is a matrix given as another input, and B is the trainable element-wise product matrix I want to learn. So far I did: self.net = nn.Sequential(A, ) and I am confused about how to go forward.
st81774
Hi @WeiHao97, If I understand correctly, you want to train your model w.r.t. B, the learnable parameter matrix.
class Network(nn.Module):
    def __init__(self, shape):
        super().__init__()
        self.B = nn.Parameter(torch.zeros(shape))
        torch.nn.init.xavier_uniform_(self.B)  # or any other init method

    def forward(self, x, A):
        M = A * self.B
        return x @ M.t()
shape has to be (output_dim, input_dim), input_dim being the dim of x. Hope this is what you wanted to do.
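For example (a minimal sketch; the shapes, data and optimizer are just placeholders):
net = Network(shape=(5, 3))              # output_dim=5, input_dim=3
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)

x = torch.randn(8, 3)                    # batch of 8 input vectors
A = torch.randn(5, 3)                    # the fixed matrix input
target = torch.randn(8, 5)

y = net(x, A)                            # y = x @ (A * B).t()
loss = torch.nn.functional.mse_loss(y, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()                         # only B receives an update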
st81775
I applaud you for providing builds, but… I think it is bad practice to give the lib and dlls the same names for debug and release builds; it makes using the lib difficult. You seem to like CMake, and it expects to create both release and debug from the same generated instance. (You could provide a single zip for download if you just packaged them in debug and release subfolders, but that depends on the FindPackage…
st81776
Context: I have a main model netG, which includes a bunch of modules (e.g. nn.Conv2d, nn.Sequential and some other modules defined by myself). Due to some issues in the code, some submodules of netG don't support data parallelism. So, instead of wrapping the whole netG like nn.DataParallel(netG), I do nn.DataParallel(nn.Conv2d), nn.DataParallel(nn.Sequential), … for the modules that support nn.DataParallel, and leave the other modules as they are (no data parallel).
Problem: This works fine, but when I save netG, I want to remove the DataParallel wrapper for the whole model. Clearly I can't do netG.module.state_dict(), because netG itself is not wrapped, so it doesn't have the attribute module.
Candidate solutions: I am thinking of recursively removing the DataParallel wrapper from netG's children, but I am not sure if I can do that in-place, or whether I have to construct a new netG object from scratch. What is the solution? Thanks!
st81777
Maybe it's not good practice to separate your modules. But have you tried saving your separate modules this way?
torch.save({
    'nn.conv2D_state_dict': nn.conv2D.module.state_dict(),
    'nn.Sequential_state_dict': nn.Sequential.module.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
    ...
}, PATH)
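Alternatively, if you would rather strip the wrappers from netG itself before saving, a rough (untested) sketch could be to recursively replace every DataParallel child with the module it wraps:
def unwrap_data_parallel(module):
    # recursively replace nn.DataParallel children with their wrapped module, in-place
    for name, child in module.named_children():
        if isinstance(child, nn.DataParallel):
            setattr(module, name, child.module)
        else:
            unwrap_data_parallel(child)

unwrap_data_parallel(netG)
torch.save(netG.state_dict(), PATH)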
st81778
Thank you! I haven't tried that, but it makes perfect sense. Also, if some sub-modules can't be parallelized (as in my case), what would be good practice? Should we not parallelize the whole module at all?
st81779
I think DataParallel would work well if you can keep the batch dimension of your sub-modules' inputs in the same position, so that DataParallel can scatter them.
st81780
Hi everyone, I'm working on an LSTM which gets the vibration signal of a mechanical part to predict the remaining useful lifetime. Since the signal is very noisy, I'm not sure if the LSTM is able to learn the data properly. Is it necessary to filter noisy inputs for LSTMs? What is the approach in other fields where the signal is noisy as well, like voice signal processing? I wanted to use scipy's savgol_filter for this. Thanks in advance. In the plot, blue is the original signal and orange is the filtered signal. The signal is normalized for the training.
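For reference, I was planning to apply the filter roughly like this (window length and polynomial order are just placeholder values):
from scipy.signal import savgol_filter

filtered = savgol_filter(signal, window_length=51, polyorder=3)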
st81781
I currently have:
def __init__hidden(self, batch):
    hidden = torch.randn(1, batch, 512).to(device)
    return hidden
Can I instead use the hidden state obtained from the previous sequence? That is, not create a new hidden each time, but reuse the previous one? The thing is, I am working with a continuous sequence, so it seems to me that the hidden state should be continuous as well.
st81782
And another question: in my version, is it fine to initialize with randn, or do I need to initialize with zeros?
st81783
I'm not sure I understand the question clearly, but it seems you would like to initialize the hidden state once and then just use it for the whole training?
st81784
Would it be similar to this example 2? What do you mean by “do not show once per batch”? What hidden state should be passed instead for the next data batch?
st81785
Take a look:
def forward(self, out):
    out = self.fc1(out)
    out = torch.transpose(out, 0, 1)
    hidden = self.__init__hidden(batch)
    out, hidden = self.gru(out, hidden)
    out = self.fc3(hidden)
    out = out.reshape(batch, 3)
    return out

def __init__hidden(self, batch):
    hidden = torch.randn(1, batch, 512).to(device)
    return hidden
When I call outputs = net(wn), forward is triggered. This happens for every batch of sequences, and each time we initialize the hidden variable by calling __init__hidden. I want __init__hidden to run only the first time; for the second and subsequent passes, hidden should be remembered from the previous sequence (out, hidden = self.gru(out, hidden)).
st81786
You can use a member variable to remember the hidden state for the next sequence; this stored state itself is not a parameter and will not be updated.
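A rough sketch of what that could look like (layer sizes are placeholders; the stored state is detached so the graph of previous sequences is not kept around):
class StatefulGRU(nn.Module):
    def __init__(self, input_size=64, hidden_size=512):
        super().__init__()
        self.hidden_size = hidden_size
        self.gru = nn.GRU(input_size, hidden_size)
        self.hidden = None                       # persists between forward calls

    def forward(self, x):                        # x: (seq_len, batch, input_size)
        if self.hidden is None or self.hidden.size(1) != x.size(1):
            self.hidden = torch.zeros(1, x.size(1), self.hidden_size, device=x.device)
        out, hidden = self.gru(x, self.hidden)
        self.hidden = hidden.detach()            # keep the values, drop the graph
        return out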
st81787
I am getting the following error while trying to create a simple RNN for implementing a CTC-based neural net:
matrices expected, got 3D, 2D tensors

from __future__ import print_function
import torch
import torch.nn as nn
import torchvision.datasets as dsets
import torchvision.transforms as transforms
from torch.autograd import Variable
import torch.optim as optim
import numpy as np
import matplotlib

# Hyper Parameters
sequence_length = 2
input_size = 13
hidden_size = 4
num_layers = 2
num_classes = 3
batch_size = 1
num_epochs = 2
learning_rate = 0.01

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, num_classes):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.rnn = nn.RNN(input_size, hidden_size, num_layers)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # Set initial states
        h0 = Variable(torch.zeros(self.num_layers, 1, 1, self.hidden_size))
        # Forward propagate RNN
        out, out2 = self.rnn(x, h0)
        # Decode hidden state of last time step
        # out = self.fc(out[:, -1, :])
        return out

rnn = RNN(input_size, hidden_size, num_layers, num_classes)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)

input = Variable(torch.randn(sequence_length, 1, 13))
h0 = Variable(torch.randn(2, 1, num_classes))
print(input)
output = rnn.forward(input)
st81788
Your variable out has 3 dimensions (sequence_length, batch, hidden_size), and you must feed your linear layer a 2-dimensional tensor. You could, for example, reshape your output like this: out = out.view(-1, hidden_size)
st81789
Thanks for your reply, but I don't think that's the problem: that line is commented out, so it should at least produce the intermediate out variable. When I try printing it, I get this error.
st81790
I think you created your hidden state Variable with the wrong dimensions. I haven't executed your code, but I think it should be: h0 = Variable(torch.zeros(self.num_layers, 1, self.hidden_size)) as it stands for (num_layers, batch_size, hidden_size). Regards
st81791
This is the error message stack ` RuntimeError Traceback (most recent call last) in () 3 h0 = Variable(torch.randn(2, 1, num_classes)) 4 ----> 5 print( rnn.forward(input)) in forward(self, x) 13 14 # Forward propagate RNN —> 15 out, out2 = self.rnn(x, h0) 16 17 # Decode hidden state of last time step /usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py in call(self, *input, **kwargs) 204 205 def call(self, *input, **kwargs): –> 206 result = self.forward(*input, **kwargs) 207 for hook in self._forward_hooks.values(): 208 hook_result = hook(self, input, result) /usr/local/lib/python3.5/dist-packages/torch/nn/modules/rnn.py in forward(self, input, hx) 89 dropout_state=self.dropout_state 90 ) —> 91 output, hidden = func(input, self.all_weights, hx) 92 if is_packed: 93 output = PackedSequence(output, batch_sizes) /usr/local/lib/python3.5/dist-packages/torch/nn/_functions/rnn.py in forward(input, *fargs, **fkwargs) 341 else: 342 func = AutogradRNN(*args, **kwargs) –> 343 return func(input, *fargs, **fkwargs) 344 345 return forward /usr/local/lib/python3.5/dist-packages/torch/nn/_functions/rnn.py in forward(input, weight, hidden) 241 input = input.transpose(0, 1) 242 –> 243 nexth, output = func(input, hidden, weight) 244 245 if batch_first and batch_sizes is None: /usr/local/lib/python3.5/dist-packages/torch/nn/_functions/rnn.py in forward(input, hidden, weight) 81 l = i * num_directions + j 82 —> 83 hy, output = inner(input, hidden[l], weight[l]) 84 next_hidden.append(hy) 85 all_output.append(output) /usr/local/lib/python3.5/dist-packages/torch/nn/_functions/rnn.py in forward(input, hidden, weight) 110 steps = range(input.size(0) - 1, -1, -1) if reverse else range(input.size(0)) 111 for i in steps: –> 112 hidden = inner(input[i], hidden, *weight) 113 # hack to handle LSTM 114 output.append(isinstance(hidden, tuple) and hidden[0] or hidden) /usr/local/lib/python3.5/dist-packages/torch/nn/_functions/rnn.py in RNNTanhCell(input, hidden, w_ih, w_hh, b_ih, b_hh) 16 17 def RNNTanhCell(input, hidden, w_ih, w_hh, b_ih=None, b_hh=None): —> 18 hy = F.tanh(F.linear(input, w_ih, b_ih) + F.linear(hidden, w_hh, b_hh)) 19 return hy 20 /usr/local/lib/python3.5/dist-packages/torch/nn/functional.py in linear(input, weight, bias) 447 def linear(input, weight, bias=None): 448 state = _functions.linear.Linear() –> 449 return state(input, weight) if bias is None else state(input, weight, bias) 450 451 /usr/local/lib/python3.5/dist-packages/torch/nn/functions/linear.py in forward(self, input, weight, bias) 8 self.save_for_backward(input, weight, bias) 9 output = input.new(input.size(0), weight.size(0)) —> 10 output.addmm(0, 1, input, weight.t()) 11 if bias is not None: 12 # cuBLAS doesn’t support 0 strides in sger, so we can’t use expand RuntimeError: matrices expected, got 3D, 2D tensors at /b/wheel/pytorch-src/torch/lib/TH/generic/THTensorMath.c:1232 `
st81792
@santi-pdp Thanks, it worked. I am new to PyTorch and deep learning frameworks and trying hard to learn. I found out what I did wrong from the docs: "h_0 (num_layers * num_directions, batch, hidden_size): tensor containing the initial hidden state for each element in the batch." I got confused by num_layers * num_directions and treated them as two params instead of one. Thanks for the help again.
st81793
@saurabhvyas I’m having similar issues, can you point me to where in the docs you saw num_layers * num_directions? Specifically, what does num_directions refer to here? thanks.
st81794
I'm trying to reproduce RetinaNet in PyTorch by directly porting the original Caffe implementation. One issue I've run into is that Detectron uses a normalization operation called AffineChannel instead of batch normalization, due to the small batch sizes one encounters when training object detection models. AffineChannelOp: https://github.com/pytorch/pytorch/blob/54c4b5a4db6e939fd441a00c42d185547e77aaa2/caffe2/operators/affine_channel_op.h#L32-L57 CUDA kernel for AffineChannel: https://github.com/pytorch/pytorch/blob/2db847b3a7edc48652e144e7c9d7aa0bbed66aaa/caffe2/utils/math/broadcast.cu#L22-L41 Am I correct in my understanding that AffineChannel simply multiplies each channel by its own learnable scale parameter and adds a learnable bias parameter? Is there any evidence that this helps learning? I haven't encountered this approach before.
st81795
Based on your description and skimming through the code, it seems you are right in your assumption. Wouldn’t the operation thus correspond to a nn.BatchNorm2d layer with track_running_stats=False? This might make sense, if the batch size is small, as the running estimates will most likely be quite noisy.
st81796
Yes I think you might be right. I’ve never used track_running_stats before and I see it’s described as: track_running_stats: a boolean value that when set to `True`, this module tracks the running mean and variance, and when set to `False`, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default: `True` It sounds like while it doesn’t track a running mean/variance, it is still using the batch mean and variance? In my case I think I would want a fixed mean of 0 and a variance of 1, but still allow for gamma and beta to be learnable parameters, correct?
st81797
Yeah, you are right, sorry for the mistake. It would rather correspond to track_running_stats=True and set to evaluation mode (bn.eval()), which would then use the initial running stats.
st81798
It would rather correspond to track_running_stats=True and set to evaluation mode (bn.eval()), which would then use the initial running stats. That sounds like it will freeze the statistics (e.g. mean and variance), but it also sounds like it will freeze gamma and beta, which appear to be learnable in AffineChannel. I think I might be able to mimic the AffineChannel operation by creating a vector of torch.ones([num_channels]) and torch.zeros([num_channels]) and using these for gamma and beta respectively. I'll give it a shot and see how it goes.
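For anyone following along, a minimal custom module along these lines (just a sketch, not the actual Detectron op) could be:
class AffineChannel2d(nn.Module):
    # per-channel learnable scale and bias, no normalization
    def __init__(self, num_channels):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(num_channels))
        self.bias = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x):                        # x: (N, C, H, W)
        return x * self.weight.view(1, -1, 1, 1) + self.bias.view(1, -1, 1, 1)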
st81799
bn.eval() will not freeze the trainable parameters, just use the running statistics, which would have their initial values.
st81800
Hey guys, I'm a beginner in machine learning and PyTorch. I'm following a Chinese tutorial video, but I've hit a problem I can't solve. I followed the teacher and created a simple NN in a naive way, but when I train it, my grad becomes NoneType and I don't know what to do. Here is my code:
n, input, h, output = 64, 1000, 100, 10
x = torch.randn(n, input)
y = torch.randn(n, output)
w1 = torch.randn(input, h, requires_grad=True)
w2 = torch.randn(h, output, requires_grad=True)
learningrate = 0.000001
for t in range(500):
    # forward
    h = x.mm(w1)
    h_relu = h.clamp(min=0)
    y_pred = h_relu.mm(w2)
    # loss
    loss = (y_pred - y).pow(2).sum()
    print(t, loss.item())
    # backward
    loss.backward()
    with torch.no_grad():
        w1 = w1 - learningrate * w1.grad.data
        w2 = w2 - learningrate * w2.grad.data
        w1.grad.zero_()
        w2.grad.zero_()

AttributeError Traceback (most recent call last)
     35 w1 = w1 - 0.00001 * w1.grad.data
     36 w2 = w2 - 0.00001 * w2.grad.data
---> 37 w1.grad.zero_()
     38 w2.grad.zero_()
     39
AttributeError: 'NoneType' object has no attribute 'zero_'
I would be very grateful if someone could give me a hand and show me how to deal with this problem.
st81801
You are overwriting your parameters in the update block: w1 = w1 - ... creates a new tensor whose .grad is None, so the following w1.grad.zero_() fails. To avoid this, you could use inplace operations:
with torch.no_grad():
    w1.sub_(learningrate * w1.grad.data)
    w2.sub_(learningrate * w2.grad.data)
    w1.grad.zero_()
    w2.grad.zero_()
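Alternatively, an optimizer would take care of the update and the gradient zeroing for you; a rough equivalent using plain SGD with the tensors from your snippet:
optimizer = torch.optim.SGD([w1, w2], lr=learningrate)

for t in range(500):
    y_pred = x.mm(w1).clamp(min=0).mm(w2)
    loss = (y_pred - y).pow(2).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()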
st81802
I'm working with dilated temporal 2D convolutions, and for the output to have the same shape as the input I need to add left padding (equal to the dilation). I looked at nn.Conv2d and it only accepts symmetric padding, which is not ideal in my case. I'm also trying to avoid manual padding, since the code becomes hard to read. Are there any other solutions that I am not aware of?
st81803
Solved by spanev in post #2.
st81804
Hi @razvanc92, I don't know a better way than manual padding. This wrapper should keep the code readable:
class LeftPaddedConv2D(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride, left_padding, dilation, bias):
        super().__init__()
        self.pad = nn.ZeroPad2d((left_padding, 0, 0, 0))
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, 0, dilation, bias=bias)

    def forward(self, x):
        x = self.pad(x)
        return self.conv(x)
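Quick usage check of the wrapper above (the shapes are just an example):
conv = LeftPaddedConv2D(in_channels=16, out_channels=16, kernel_size=(1, 3),
                        stride=1, left_padding=4, dilation=(1, 2), bias=True)
x = torch.randn(8, 16, 10, 100)   # (N, C, H, W), time along the last dim
# left_padding = (kernel_width - 1) * dilation keeps the temporal length: (8, 16, 10, 100)
print(conv(x).shape)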
st81805
Today I suddenly found that my torch version is 1.3 and I don't know why. My PyTorch is built from source; can it update automatically? Hope to get your reply.
st81806
The code after a release gets bumped to the next version counter. I.e. if you build PyTorch from source after the 1.2.0 release, you’ll see a 1.3.0a0+commit version. The same applies to the nightly builds, which will be named 1.3.0.devBUILDDATE.
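You can check which build you are currently running with:
import torch
print(torch.__version__)  # e.g. '1.2.0' for the stable release, '1.3.0a0+<commit>' for a source build after 1.2.0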
st81807
Thank you very much for the reply! From your answer I understand it now, and I've learned for the first time what nightly builds are. Finally, let me confirm one thing with you: because I build torch from source (it isn't the stable release), torch can end up on a nightly-style version? If I instead conda install the stable release from https://pytorch.org/, torch will not update. Is that the correct understanding? Thank you again!
st81808
Hey guys, I see Flickr8k and Flickr30k under torchvision.datasets, but I cannot use them. Does anybody have any idea how to use them? I want to use them for image captioning. Thank you
st81809
Hi everyone, I designed my NN but I got an error about different sizes. My training set size is [77, 768] but my validation set size is [77, 1, 3]. How can I fix this problem? My model and loops are:
class Module(nn.Module):
    def __init__(self, D_in, H1, H2, D_out):
        super().__init__()
        self.linear1 = nn.Linear(D_in, H1)
        self.linear2 = nn.Linear(H1, H2)
        self.linear3 = nn.Linear(H2, D_out)

    def forward(self, x):
        x = F.relu(self.linear1(x))
        x = F.relu(self.linear2(x))
        x = self.linear3(x)
        return x

model = Module(768, 600, 360, 256)

for e in range(epochs):
    running_loss = 0.0
    running_corrects = 0.0
    val_running_loss = 0.0
    val_running_corrects = 0.0
    for inputs, out in train_generator:
        # inputs = torch.squeeze(inputs)
        inputs = inputs.view(inputs.shape[0], -1)
        output = model(inputs)
        print(inputs.size())
        # output = torch.squeeze(output)
        out = out.view(77, 256)
        loss = criterion(output, out)
        preds, _ = torch.max(output, 1)
        # outputss.append(preds.max().detach().numpy())
        losses.append(loss)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # outputss.append(output.detach().numpy())
        # print(loss.item())
    '''
    else:
        with torch.no_grad():
            for val_inputs, val_labels in valid_generator:
                val_inputs = val_inputs.view(val_inputs.shape[0], -1)
                val_outputs = model(val_inputs)
                val_loss = criterion(val_outputs, val_labels)
                _, val_preds = torch.max(val_outputs, 1)
                val_running_loss += val_loss.item()
                val_running_corrects += torch.sum(val_preds == val_labels.data)
    '''
st81810
Solved by Nikronic in post #7.
st81811
Hi, Usually a validation set is a subset of the whole dataset, which means the structure of the training and validation sets should be consistent, as you know. Could you please show a sample of your training set and validation set so we can track down the issue? As you commented in your validation code, we can play with view, etc., but first you need to make sure the data is transformed reliably.
st81812
First I read the data from a .txt file and store it in a 'DATA' array, and then my manipulations are:
val_range = int(data_x.shape[0] / 100) * 15
val_x = data_x[0:val_range, :, :]
train_x = data_x[val_range:None, :, :]
val_y = data_y[0:val_range, :]
train_y = data_y[val_range:None, :]

train_x = torch.Tensor(train_x)
# train_x = train_x.view(-1, 1, 3)
train_x = train_x.view(77, 1, 16, 16, 3)
train_y = torch.Tensor(train_y)
train_y = train_y.view(77, 1, 16, 16)
val_x = torch.Tensor(val_x)
val_y = torch.Tensor(val_y)
If you're asking about the original size of the data: train_x is [19172, 1, 3] and train_y is [19172, 1].
st81813
I cannot understand why the training set size is [77, 768] but the validation set size is [77, 1, 3], because the following code gives the train and validation sets the same structure:
val_range = int(data_x.shape[0] / 100) * 15
val_x = data_x[0:val_range, :, :]
train_x = data_x[val_range:None, :, :]
st81814
Hi, they cannot be the same, because when I check the shapes the outputs are:
train x size: torch.Size([77, 1, 16, 16, 3])
train y size: torch.Size([77, 1, 16, 16])
val x size: torch.Size([3465, 1, 3])
val y size: torch.Size([3465, 1])
st81815
Generally, we run the entire (partly) trained model on the validation set. If the input shapes of the train and validation sets differ, you can change them to be consistent. This is my suggestion; hopefully others can give a better solution for this problem.
st81816
dreamer:
train_x = torch.Tensor(train_x)
# train_x = train_x.view(-1, 1, 3)
train_x = train_x.view(77, 1, 16, 16, 3)
train_y = torch.Tensor(train_y)
train_y = train_y.view(77, 1, 16, 16)
Based on the above code, you are changing the sizes; before this, val and train are consistent. If you want to extract the val dataset, first do the size changing, then extract val.
dreamer:
train x size: torch.Size([77, 1, 16, 16, 3])
train y size: torch.Size([77, 1, 16, 16])
val x size: torch.Size([3465, 1, 3])
val y size: torch.Size([3465, 1])
First, just separate your x and y, then do the dimension changes as you want, then extract val using the slicing you already have. In this situation the sizes should be consistent. Note that we always extract the train, test and val sets from the same source, so they all have to have the same structure.
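Something along these lines might work (just a sketch; it assumes the element counts divide evenly into the target shapes and reuses the 768/256 feature sizes from your model):
data_x = torch.Tensor(data_x).view(-1, 768)   # reshape the full dataset first
data_y = torch.Tensor(data_y).view(-1, 256)

val_range = int(data_x.shape[0] / 100) * 15   # then split into val/train
val_x, train_x = data_x[:val_range], data_x[val_range:]
val_y, train_y = data_y[:val_range], data_y[val_range:]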
st81817
I'm doing a project where I deploy a model trained under Linux to Windows. About the project migration, I want to ask: when calling libtorch under Windows, what are the requirements on the operating system and the libtorch version? For example: Can I load a model trained under Linux on Windows? Does it work across different PyTorch and libtorch versions?
st81818
Hey guys, recently I was quite puzzled about the learning rate for the Adam optimizer in PyTorch. In many demos I have read, the typical value of lr for Adam is 0.01 or 0.001. However, when I set lr=0.001 in my code, the training loss and accuracy oscillate wildly all the time and never converge. Only when I set lr to around 0.00001 does the training become normal, and finally my model works well! Though the problem has been solved, I think the current value of lr is still much too small =_=. Is this phenomenon normal? If not, what caused it? I am sincerely waiting for you guys to solve my puzzle!!!
st81819
Hi all, I’m working on a project and I’d like to use hook to achieve the goal which is only update some weight tensor in the training. However, I encountered a problem and tried my best with no result. Below is my codes: def hook(self, model, inputs): with torch.no_grad(): print(model.weight.cuda().data.dtype) print(self.sparse_mask[self.type[model]].dtype) #model.weight.data = model.weight.data * self.sparse_mask[self.type[model]] model.weight.cuda().data = model.weight.cuda().data * self.sparse_mask[self.type[model]] def register_hook(self,module): self.handle = module.register_forward_pre_hook(self.hook) And I got this output torch.float32 torch.float32 Traceback (most recent call last): File "<input>", line 1, in <module> File ".pycharm_helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script File ".pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "/sparse_processing.py", line 352, in <module> jsegnet(input_data) File "/.conda/envs/mengdietao/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "jsegnet.py", line 199, in forward h = self.block1(x) #[1, 32, 160, 160] File "/.conda/envs/mengdietao/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/jsegnet.py", line 75, in forward h = self.conv_a(x) File "/.conda/envs/mengdietao/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/tangshan/tangshan_data/image_segmentation/libs/models/jsegnet.py", line 51, in forward return super(_ConvBatchNormReLU, self).forward(x) File "/.conda/envs/mengdietao/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward input = module(input) File "/.conda/envs/mengdietao/lib/python3.7/site-packages/torch/nn/modules/module.py", line 539, in __call__ result = hook(self, input) File "/sparse_processing.py", line 229, in hook model.weight.cuda().data = model.weight.cuda().data * self.sparse_mask[self.type[model]] RuntimeError: expected device cuda:0 and dtype Float but got device cpu and dtype Float Both dtype of model.weight.cuda().data and self.sparse_mask[self.type[model]] are torch.float32, my input_data is data = data.cuda().float() data = data.cuda().float() I’d like to keep it calculate on GPU but I can’t figure out how to fix that problem. Any response will be really appreciated!!
st81820
I GOT THE SOLUTION!
def hook(self, model, inputs):
    with torch.no_grad():
        model.weight.cpu().data = model.weight.cpu().data * self.sparse_mask[self.type[model]]
or
def hook(self, model, inputs):
    with torch.no_grad():
        model.weight.data = model.weight.data * self.sparse_mask[self.type[model]].cuda()
st81821
I have a conditional tensor operation, such as:
for i:
    if check(rate[i]):
        rate[i] = reset(rate[i])
Right now I implemented it as follows:
rate = torch.where(check(rate), reset(rate), rate)
It turns out that even for elements where check(rate) is false, reset(rate) is still computed, as if it were:
new_rate = reset(rate)
rate = torch.where(check(rate), new_rate, rate)
I am wondering if there is any way to improve the performance here, given that reset(rate) is really expensive.
st81822
It seems a mask should help in this case, like:
mask = check(rate)
rate[mask] = reset(rate[mask])
Now only the masked elements are operated on. However, it is even slower.
st81823
Here is an example:
import torch
import time
import copy

def reset(t):
    return 1.0 / torch.log(1.0 + t)

source1 = torch.FloatTensor(10000, 10000)
source1.uniform_()
source2 = copy.deepcopy(source1)

start = time.time()
mask = source1 < 0.5
source1[mask] = reset(source1[mask])
print('test1 takes: ', time.time() - start)

start = time.time()
mask = source2 < 0.5
source2 = torch.where(mask, reset(source2), source2)
print('test2 takes: ', time.time() - start)
On my machine:
test1 takes: 3.4766557216644287
test2 takes: 1.7760250568389893
st81824
I have two networks, net1 and net2, and an input x. I feed the input x to net1 to generate the prediction pred_x1. Then pred_x1 is fed to net2 to generate pred_x2. I want to freeze net1 while training net2. The loss is computed as the MSE between pred_x1 and pred_x2. Below is my implementation, but I found that the weights of net1 are still updated during training. How can I fix it?
net1 = Model_net1()
net1.eval()
net2 = Model_net2()
net2.train()
for param in net1.parameters():
    param.requires_grad = False

with torch.no_grad():
    pred_x1 = net1(x)
pred_x2 = net2(pred_x1)
loss = mse(pred_x1, pred_x2)
loss.backward()
optimizer.step()
st81825
Could you try to detach pred_x1 before passing it to net2? Also, did you pass the parameters of net1 to the optimizer and are you using weight decay?
st81826
Sorry, I was mistaken. I checked it again and the parameters are not changed; there was a bug in my code. Sorry, bro.
st81827
Hi – So, I'm new to PyTorch, and I'm spending a lot of time in the docs. Recently, I was digging around trying to find out how log_softmax is implemented. I started out looking at the source for torch.nn.LogSoftmax, which is implemented with torch.nn.functional.log_softmax. OK, so I went to the docs for that and clicked the source link, and found that this function is implemented by calling the .log_softmax method of its input. I figured that this meant that .log_softmax would probably be a method of torch.Tensor, so I went to the docs for Tensor, but was unable to find any such method. I'm wondering, where is this method implemented? I'd also appreciate any general insight into how torch's organization influences the location of this code/other code like it. Thank you!
st81828
You'll find the CPU implementation here. PS: rgrep is often a useful tool to find the corresponding function. The GitHub search function is often also quite helpful.
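As a quick sanity check that the tensor method exists and matches the functional version:
import torch

x = torch.randn(3, 5)
print(torch.allclose(x.log_softmax(dim=1),
                     torch.nn.functional.log_softmax(x, dim=1)))  # True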
st81829
Hi, I have a question about the use of MaxUnpool in PyTorch. I made a symmetric autoencoder (AE) with 5/5 layers: Encoder: [ConvLayer + MaxPool] * 5, Decoder: [ConvLayer + MaxUnpool] * 5. At the end of the encoder I achieved a reduction of 3%. Astonishingly, my AE performs perfectly; the inputs and outputs almost overlap. It seems like MaxUnpool is a way of cheating during learning: keeping the indices sounds too powerful. I suspect the network is not really learning and that the MaxUnpool indices alone are enough to reconstruct the data. The final goal of my project is to perform transfer learning using the encoder, which should learn and understand the "main features" of my dataset. Do you think the MaxUnpool layers are too powerful and not a good fit for what I am aiming for?
st81830
I’m not sure if MaxUnpool is too powerful, as the majority of the activation output should be zero, so that your model still would have to learn the other values. Did you try to use the encoder for your other use case? If so, did it perform poorly? Just as a test you could try to use transposed convolutions and see, if that helps your model (especially the encoder as it seems to be the main use case).
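E.g. a single 2x upsampling step in the decoder could be swapped from a MaxUnpool2d step to something like this (the channel count is just a placeholder):
# learnable 2x upsampling instead of unpooling with stored indices
up = nn.ConvTranspose2d(in_channels=64, out_channels=64, kernel_size=2, stride=2)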
st81831
Thank you so much for your answer. We haven’t tried yet on the other models, but we think we are going to move forward and check the performance !
st81832
@ptrblck We finally did a test confirming that MaxUnpool is too powerful. We fed the encoder with an image to get the indices and then put a random vector before the decoder:
def forward(self, input):
    latent_vector, indices = self.encoder(input)
    # Here we overwrite the latent vector with a random one
    latent_vector = torch.randn(self.dim_latent_vector)
    output = self.decoder(latent_vector, indices)
    return output
The decoder still performed a successful reconstruction of the expected input anyway. The following paper details the phenomenon we encountered.
st81833
Thanks for the follow-up! That’s an interesting observation. What did you end up using?
st81834
For now we are trying a fully convolutional model, but it's learning slowly, so we don't have any satisfactory results yet.
st81835
The nn.Transformer module has an example:
>>> transformer_model = nn.Transformer(src_vocab, tgt_vocab)
But there is no embedding module in the Transformer class implementation. Why use (src_vocab, tgt_vocab)?
st81836
Hi experts, I got an import error while importing pytorch_geometric. My environment: //====================================== CentOS Linux release 7.6.1810 (Core) Python 3.6.8 :: Anaconda custom (64-bit) torch==1.2.0+cpu torch-cluster==1.4.4 torch-geometric==1.3.1 torch-scatter==1.3.1 torch-sparse==0.4.0 torch-spline-conv==1.1.0 torchvision==0.4.0+cpu //====================================== Error messages: //====================================== import torch import torch_geometric Traceback (most recent call last): File “”, line 1, in File “/home/hliu/.conda/envs/hliuPython/lib/python3.6/site-packages/torch_geometric/init.py”, line 2, in import torch_geometric.nn File “/home/hliu/.conda/envs/hliuPython/lib/python3.6/site-packages/torch_geometric/nn/init.py”, line 2, in from .data_parallel import DataParallel File “/home/hliu/.conda/envs/hliuPython/lib/python3.6/site-packages/torch_geometric/nn/data_parallel.py”, line 5, in from torch_geometric.data import Batch File “/home/hliu/.conda/envs/hliuPython/lib/python3.6/site-packages/torch_geometric/data/init.py”, line 1, in from .data import Data File “/home/hliu/.conda/envs/hliuPython/lib/python3.6/site-packages/torch_geometric/data/data.py”, line 7, in from torch_sparse import coalesce File “/home/hliu/.conda/envs/hliuPython/lib/python3.6/site-packages/torch_sparse-0.4.0-py3.6-linux-x86_64.egg/torch_sparse/init.py”, line 6, in from .spspmm import spspmm File “/home/hliu/.conda/envs/hliuPython/lib/python3.6/site-packages/torch_sparse-0.4.0-py3.6-linux-x86_64.egg/torch_sparse/spspmm.py”, line 4, in import torch_sparse.spspmm_cpu ImportError: /home/hliu/.conda/envs/hliuPython/lib/python3.6/site-packages/torch_sparse-0.4.0-py3.6-linux-x86_64.egg/torch_sparse/spspmm_cpu.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC1ENS_14SourceLocationERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE // ===========================================
st81837
Did you get any errors/warnings during the installation of pytorch_geometric? PS: I think you might get a better answer if you create an issue in the repo.
st81838
Hi ptrblck, Thanks for the reply and advice I will post it in the repo, there is no warning and/or error installing pytorch_geometric //========================================================== pip install torch-geometric Collecting torch-geometric Requirement already satisfied: plyfile in /home/hliu/.conda/envs/hliuPython/lib/python3.6/site-packages (from torch-geometric) (0.7) Requirement already satisfied: scipy in /home/hliu/.conda/envs/hliuPython/lib/python3.6/site-packages (from torch-geometric) (1.3.1) Requirement already satisfied: pandas in /home/hliu/.conda/envs/hliuPython/lib/python3.6/site-packages (from torch-geometric) (0.24.2) Requirement already satisfied: scikit-learn in /home/hliu/.conda/envs/hliuPython/lib/python3.6/site-packages (from torch-geometric) (0.21.3) Requirement already satisfied: h5py in /home/hliu/.conda/envs/hliuPython/lib/python3.6/site-packages (from torch-geometric) (2.9.0) Requirement already satisfied: networkx in /home/hliu/.conda/envs/hliuPython/lib/python3.6/site-packages (from torch-geometric) (2.2) Requirement already satisfied: numpy in /home/hliu/.conda/envs/hliuPython/lib/python3.6/site-packages (from torch-geometric) (1.16.4) Requirement already satisfied: googledrivedownloader in /home/hliu/.conda/envs/hliuPython/lib/python3.6/site-packages (from torch-geometric) (0.4) Requirement already satisfied: rdflib in /home/hliu/.conda/envs/hliuPython/lib/python3.6/site-packages (from torch-geometric) (4.2.2) Requirement already satisfied: pytz>=2011k in /home/hliu/.conda/envs/hliuPython/lib/python3.6/site-packages (from pandas->torch-geometric) (2018.9) Requirement already satisfied: python-dateutil>=2.5.0 in /home/hliu/.conda/envs/hliuPython/lib/python3.6/site-packages (from pandas->torch-geometric) (2.8.0) Requirement already satisfied: joblib>=0.11 in /home/hliu/.conda/envs/hliuPython/lib/python3.6/site-packages (from scikit-learn->torch-geometric) (0.13.2) Requirement already satisfied: six in /home/hliu/.conda/envs/hliuPython/lib/python3.6/site-packages (from h5py->torch-geometric) (1.12.0) Requirement already satisfied: decorator>=4.3.0 in /home/hliu/.conda/envs/hliuPython/lib/python3.6/site-packages (from networkx->torch-geometric) (4.4.0) Requirement already satisfied: pyparsing in /home/hliu/.conda/envs/hliuPython/lib/python3.6/site-packages (from rdflib->torch-geometric) (2.3.1) Requirement already satisfied: isodate in /home/hliu/.conda/envs/hliuPython/lib/python3.6/site-packages (from rdflib->torch-geometric) (0.6.0) Installing collected packages: torch-geometric Successfully installed torch-geometric-1.3.1
st81839
When should one subclass nn.ModuleDict over nn.Module? For example, here: https://github.com/civodlu/trw/blob/42bb09e4bee07c85c3d9585b07e0e23f7a60906f/tutorials/dcgan_mnist.py Is it better to use nn.ModuleDict whenever merging multiple neural networks? So,
class Y(nn.ModuleDict):
    def __init__(self):
        super().__init__()
        self['NetA'] = NetA()
        self['NetB'] = NetB()

class X(nn.Module):
    def __init__(self):
        super().__init__()
        self.modelone = NetA()
        self.modeltwo = NetB()
Which is the preferred way?
st81840
It might depend on your use case, but nn.ModuleDict is just a container (dict), which stores modules and is used to register these modules properly inside a parent nn.Module. Based on your code snippet, I would derive from nn.Module (your X class).
st81841
Does anyone know of an existing evaluator? I know that the CocoEvaluator was used in the PyTorch tutorial: https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html, but does anyone know of any others? I would like to evaluate a model for pedestrian detection, and I have seen in many papers that miss rate versus false positives per image is often used. Due to lack of time, it would be better if I could find an existing evaluator rather than write one from scratch.
st81842
Here’s a look at the LSTM model # Create LSTM Model class LSTMModel(nn.Module): def __init__(self, input_dim, hidden_dim, layer_dim, output_dim): super(LSTMModel, self).__init__() # Number of hidden dimensions self.hidden_dim = hidden_dim # Number of hidden layers self.layer_dim = layer_dim # LSTM self.lstm = nn.LSTM(input_dim, hidden_dim, layer_dim, batch_first=True, dropout=0.1) # Readout layer self.f1 = nn.Linear(hidden_dim, output_dim) self.dropout_layer = nn.Dropout(p=0.2) self.softmax = nn.Softmax() def forward(self, x): # Initialize hidden state with zeros h0 = Variable(torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).type(torch.FloatTensor)) # Initialize cell state c0 = Variable(torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).type(torch.FloatTensor)) # One time step out, (hn, cn) = self.lstm(x, (h0,c0)) out = self.dropout_layer(hn[-1]) out = self.f1(out) out = self.softmax(out) return out #LSTM Configuration batch_size = 3000 num_epochs = 20 learning_rate = 0.001#Check this learning rate # Create LSTM input_dim = 1 # input dimension hidden_dim = 30 # hidden layer dimension layer_dim = 15 # number of hidden layers output_dim = 1 # output dimension num_layers = 10 #num_layers print("input_dim = ", input_dim,"\nhidden_dim = ", hidden_dim,"\nlayer_dim = ", layer_dim,"\noutput_dim = ", output_dim) model = LSTMModel(input_dim, hidden_dim, layer_dim, output_dim) model.cuda() error = nn.BCELoss() optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) graph_index = 0 test_loss = [] train_loss = [] plt_test_index = [] plt_train_index = [] tmp_index = [] tmp_train = [] tmp_test = [] # model.init_hidden() for epoch in range(num_epochs): # Train model.train() loss_list_train = [] loss_list_test = [] total_train = 0 equals_train = 0 total_test = 0 num0_train = 0 num1_train = 0 num0_test = 0 num1_test = 0 equals_test = 0 TP_train = 0 FP_train = 0 TN_train = 0 FN_train = 0 TP_test = 0 FP_test = 0 TN_test = 0 FN_test = 0 # for i, (inputs, targets) in enumerate(train_loader): for i, (inputs, targets) in enumerate(train_loader): train = Variable(inputs.type(torch.FloatTensor).cuda()) targets = Variable(targets.type(torch.FloatTensor).cuda()) optimizer.zero_grad() outputs = model(train) loss = error(outputs, targets) loss_list_train.append(loss.item()) loss.backward() # loss.backward(retain_graph=True) optimizer.step() t = np.where(targets.cpu().detach().numpy() > 0.5, 1, 0) o = np.where(outputs.cpu().detach().numpy() > 0.5, 1, 0) total_train += t.shape[0] equals_train += np.sum(t == o) num0_train += np.sum(t == 0) num1_train += np.sum(t == 1) TP_train += np.sum(np.logical_and(t == 1, o==1)) FP_train += np.sum(np.logical_and(t == 1, o==0)) TN_train += np.sum(np.logical_and(t == 0, o==0)) FN_train += np.sum(np.logical_and(t == 0, o==1)) tb.save_value('Train Loss', 'train_loss', globaliter, loss.item()) globaliter += 1 tb.flush_line('train_loss') print(i) # Test model.eval() targets_plot = np.array([]) outputs_plot = np.array([]) inputs_plot = np.array([]) for inputs, targets in test_loader: inputs = Variable(inputs.type(torch.FloatTensor).cuda()) targets = Variable(targets.type(torch.FloatTensor).cuda()) outputs = model(inputs) loss = error(outputs, targets) loss_list_test.append(loss.item()) #print(outputs.cpu().detach().numpy()) t = np.where(targets.cpu().detach().numpy() > 0.5, 1, 0) o = np.where(outputs.cpu().detach().numpy() > 0.5, 1, 0) total_test += t.shape[0] equals_test += np.sum(t == o) num0_test += np.sum(t == 0) num1_test += np.sum(t == 1) TP_test += 
np.sum(np.logical_and(t == 1, o==1)) FP_test += np.sum(np.logical_and(t == 0, o==1)) TN_test += np.sum(np.logical_and(t == 0, o==0)) FN_test += np.sum(np.logical_and(t == 1, o==0)) tb.save_value('Test Loss', 'test_loss', globaliter2, loss.item()) globaliter2 += 1 tb.flush_line('test_loss') # Save value in array graph_index += 1 plt_train_index.append(graph_index) plt_test_index.append(graph_index) train_loss.append(np.mean(np.array(loss_list_train))) test_loss.append(np.mean(np.array(loss_list_test))) print("------------------------------") print("Epoch : ", epoch) print("----- Train -----") print("Total =", total_train, " | Num 0 =", num0_train, " | Num 1 =", num1_train) print("Equals =", equals_train) print("Accuracy =", (equals_train / total_train)*100, "%") # print("TP =", TP_train / total_train, "% | TN =", TN_train / total_train, "% | FP =", FP_train / total_train, "% | FN =", FN_train / total_train, "%") print("----- Test -----") print("Total =", total_test, " | Num 0 =", num0_test, " | Num 1 =", num1_test) print("Equals =", equals_test) print("Accuracy =", (equals_test / total_test)*100, "%") Capture2.PNG1007×859 44.1 KB I am using the model to do binary classification on the sequence length of 300. The accuracy and the loss are not changing over several epochs.I tried changing the no. of layers,no of hidden states, activation function, but all to no avail. I don’t know what i am doing wrong, I am probably missing something fundamental. Any help is appreciated
st81843
The simple code below yields different results on different GPUs (e.g., 1070 got 998874 but 1080Ti got 998836). I wonder if I did something wrong or it is just impossible to get the same result on different GPUs?
import torch
import numpy as np
import random

seed = 0
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)  # if you are using multi-GPU
np.random.seed(seed)              # Numpy module
random.seed(seed)                 # Python random module

a = torch.ones(1000, 1000).to('cuda:0')
dropout = torch.nn.Dropout(0.5).cuda()
b = dropout(a)
print(torch.sum(torch.abs(b)))
st81844
Are you using the same PyTorch version (CUDA, cudnn)? Getting the same “random” numbers on different hardware is sometimes quite hard. However, using your code, I get the same result (tensor(1000260., device='cuda:0')) for: PyTorch 1.2.0, CUDA10.0.130, cudnn7602, TitanV PyTorch master ~few weeks old, CUDA10.1.168, cudnn7601, V100
st81845
Yes, the same environment. I don't have a TitanV to try, but I guess it is quite similar to the V100, so they could yield the same result. More tests: My local server (PyTorch 1.2.0, CUDA 10.0.130, cuDNN 7602, 2080Ti) got 998908. Instances on Google Cloud using the official PyTorch 1.2.0 image (exactly the same versions as above) got 1000260 on a V100 but 999100 on a K80.
st81846
Yeah, that might be the case. However, I would assume the 1070 and 1080Ti are also similar to each other (same architecture).
st81847
My K80 also got 999100. My two 1080s on quite different machines both got 998662.
st81848
I checked again, my 1070 and 1080ti machines have the same CUDA CUDNN PyTorch versions, but their results are different …
st81849
In a sparse autoencoder, you just have an L1 sparsity penalty on the intermediate activations. You can create an L1Penalty autograd function that achieves this.
import torch
from torch.autograd import Function

class L1Penalty(Function):

    @staticmethod
    def forward(ctx, input, l1weight):
        ctx.save_for_backward(input)
        ctx.l1weight = l1weight
        return input

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_variables
        grad_input = input.clone().sign().mul(ctx.l1weight)
        grad_input += grad_output
        return grad_input
Then you can use it like this:
import torch.nn.functional as F

class SparseAutoEncoder(torch.nn.Module):
    def __init__(self, feature_size, hidden_size, l1weight):
        super(SparseAutoEncoder, self).__init__()
        self.lin_encoder = nn.Linear(feature_size, hidden_size)
        self.lin_decoder = nn.Linear(hidden_size, feature_size)
        self.feature_size = feature_size
        self.hidden_size = hidden_size
        self.l1weight = l1weight

    def forward(self, input):
        # encoder
        x = input.view(-1, self.feature_size)
        x = self.lin_encoder(x)
        x = F.relu(x)
        # sparsity penalty
        x = L1Penalty.apply(x, self.l1weight)
        # decoder
        x = self.lin_decoder(x)
        x = F.sigmoid(x)
        return x.view_as(input)
I didn't test the code for exact correctness, but hopefully you get the idea. References: https://github.com/Kaixhin/Autoencoders/blob/master/models/SparseAE.lua https://github.com/torch/nn/blob/master/L1Penalty.lua
st81850
@smth I just can't connect the code with the document: http://deeplearning.stanford.edu/wiki/index.php/Autoencoders_and_Sparsity Can you show some more details? Where is the sparsity parameter? What is l1weight — is it the sparsity parameter, e.g. 5%? What is the loss function? Is there any complete code? Thanks in advance!
st81851
Why put L1Penalty into a layer? Why not add it to the loss function? In other words, will an L1 penalty placed in just one activation layer be automatically added to the final loss by PyTorch itself?
st81852
This code doesn't run in PyTorch 1.1.0! I keep getting "backward() needs to return two values, not 1"!
Edit: You need to return None for any arguments for which you do not need gradients, so the L1Penalty would be:
import torch
from torch.autograd import Function

class L1Penalty(Function):

    @staticmethod
    def forward(ctx, input, l1weight):
        ctx.save_for_backward(input)
        ctx.l1weight = l1weight
        return input

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_variables
        grad_input = input.clone().sign().mul(ctx.l1weight)  # note: ctx.l1weight, not self.l1weight
        grad_input += grad_output
        return grad_input, None
st81853
I am using the Faster R-CNN based on the tutorial https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html 2. I am trying to extract some of the activation functions from 'backbone' layer. This is the full model: FasterRCNN( (transform): GeneralizedRCNNTransform() (backbone): BackboneWithFPN( (body): IntermediateLayerGetter( (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) (bn1): FrozenBatchNorm2d() (relu): ReLU(inplace) (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) (layer1): Sequential( (0): Bottleneck( (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn1): FrozenBatchNorm2d() (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): FrozenBatchNorm2d() (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn3): FrozenBatchNorm2d() (relu): ReLU(inplace) (downsample): Sequential( (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): FrozenBatchNorm2d() ) ) (1): Bottleneck( (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn1): FrozenBatchNorm2d() (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): FrozenBatchNorm2d() (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn3): FrozenBatchNorm2d() (relu): ReLU(inplace) ) (2): Bottleneck( (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn1): FrozenBatchNorm2d() (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): FrozenBatchNorm2d() (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn3): FrozenBatchNorm2d() (relu): ReLU(inplace) ) ) (layer2): Sequential( (0): Bottleneck( (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn1): FrozenBatchNorm2d() (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn2): FrozenBatchNorm2d() (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn3): FrozenBatchNorm2d() (relu): ReLU(inplace) (downsample): Sequential( (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): FrozenBatchNorm2d() ) ) (1): Bottleneck( (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn1): FrozenBatchNorm2d() (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): FrozenBatchNorm2d() (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn3): FrozenBatchNorm2d() (relu): ReLU(inplace) ) (2): Bottleneck( (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn1): FrozenBatchNorm2d() (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): FrozenBatchNorm2d() (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn3): FrozenBatchNorm2d() (relu): ReLU(inplace) ) (3): Bottleneck( (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn1): FrozenBatchNorm2d() (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): FrozenBatchNorm2d() (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn3): FrozenBatchNorm2d() (relu): ReLU(inplace) ) ) (layer3): Sequential( (0): Bottleneck( (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn1): FrozenBatchNorm2d() (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), 
bias=False) (bn2): FrozenBatchNorm2d() (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn3): FrozenBatchNorm2d() (relu): ReLU(inplace) (downsample): Sequential( (0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): FrozenBatchNorm2d() ) ) (1): Bottleneck( (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn1): FrozenBatchNorm2d() (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): FrozenBatchNorm2d() (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn3): FrozenBatchNorm2d() (relu): ReLU(inplace) ) (2): Bottleneck( (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn1): FrozenBatchNorm2d() (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): FrozenBatchNorm2d() (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn3): FrozenBatchNorm2d() (relu): ReLU(inplace) ) (3): Bottleneck( (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn1): FrozenBatchNorm2d() (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): FrozenBatchNorm2d() (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn3): FrozenBatchNorm2d() (relu): ReLU(inplace) ) (4): Bottleneck( (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn1): FrozenBatchNorm2d() (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): FrozenBatchNorm2d() (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn3): FrozenBatchNorm2d() (relu): ReLU(inplace) ) (5): Bottleneck( (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn1): FrozenBatchNorm2d() (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): FrozenBatchNorm2d() (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn3): FrozenBatchNorm2d() (relu): ReLU(inplace) ) ) (layer4): Sequential( (0): Bottleneck( (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn1): FrozenBatchNorm2d() (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn2): FrozenBatchNorm2d() (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn3): FrozenBatchNorm2d() (relu): ReLU(inplace) (downsample): Sequential( (0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): FrozenBatchNorm2d() ) ) (1): Bottleneck( (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn1): FrozenBatchNorm2d() (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): FrozenBatchNorm2d() (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn3): FrozenBatchNorm2d() (relu): ReLU(inplace) ) (2): Bottleneck( (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn1): FrozenBatchNorm2d() (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): FrozenBatchNorm2d() (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn3): FrozenBatchNorm2d() (relu): ReLU(inplace) ) ) ) (fpn): FeaturePyramidNetwork( (inner_blocks): ModuleList( (0): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1)) (1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1)) (2): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1)) (3): 
Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1)) ) (layer_blocks): ModuleList( (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) (extra_blocks): LastLevelMaxPool() ) ) (rpn): RegionProposalNetwork( (anchor_generator): AnchorGenerator() (head): RPNHead( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (cls_logits): Conv2d(256, 3, kernel_size=(1, 1), stride=(1, 1)) (bbox_pred): Conv2d(256, 12, kernel_size=(1, 1), stride=(1, 1)) ) ) (roi_heads): RoIHeads( (box_roi_pool): MultiScaleRoIAlign() (box_head): TwoMLPHead( (fc6): Linear(in_features=12544, out_features=1024, bias=True) (fc7): Linear(in_features=1024, out_features=1024, bias=True) ) (box_predictor): FastRCNNPredictor( (cls_score): Linear(in_features=1024, out_features=5, bias=True) (bbox_pred): Linear(in_features=1024, out_features=20, bias=True) ) ) ) This is a model class I created for extracting the activation layers: class BackboneFusionLayer(nn.ModuleDict): def __init__(self, model): super(BackboneFusionLayer, self).__init__() self.model = model self.modules = OrderedDict() for name, module in model.named_children(): self.modules[name] = module self.body = list(self.modules['body'].children()) self.fpn = list(self.modules['fpn'].children()) self.features1 = nn.Sequential(*self.body)[:6] self.features2 = nn.Sequential(*self.body)[6:] self.featpynet = nn.Sequential(*self.body) def forward(self, x): x = self.features1(x) x = self.features2(x) x = self.featpynet(x) print(x.shape) return x Eventually I want to use this to extract certain features and use them for fusion of two inputs. Just to test if I have created the class properly, I set features1 to be the first half of backbone.body and features2 as the second half of backbone.body. self.featpynet is the fpn (i.e. the FeaturePyramidNetwork). But when I start the training, I get the following error: RuntimeError: Given groups=1, weight of size 64 3 7 7, expected input[4, 2048, 25, 25] to have 3 channels, but got 2048 channels instead When I print the outputs for features1 and features2, the outputs are: torch.Size([4, 512, 100, 100]) torch.Size([4, 2048, 25, 25]) I understand that the issue is to due with the output of features2. Am I missing something? I can’t seem to find anything that I may have missed based on ‘backbone’. It seems to work fine when I use: class BackboneFusionLayer(nn.ModuleDict): def __init__(self, model): super(BackboneFusionLayer, self).__init__() self.model = model self.modules = OrderedDict() for name, module in model.named_children(): self.modules[name] = module self.body = list(self.modules['body'].children()) self.fpn = list(self.modules['fpn'].children()) self.features1 = nn.Sequential(*self.body)[:6] self.features2 = nn.Sequential(*self.body)[6:] self.featpynet = nn.Sequential(*self.body) def forward(self, x): x = self.model(x) return x Sorry for the long post. Any advice would be greatly appreciated.
st81854
I just noticed a typo in BackboneFusionLayer, where I set self.featpynet = nn.Sequential(*self.body) when it should be self.featpynet = nn.Sequential(*self.fpn). Having changed that, I now get the following error: NotImplementedError
st81855
Which line of code throws this error? The format of your code looks alright here, but make sure the indentation levels are right for forward and __init__ in your script.
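For reference, a minimal sketch of the indentation mistake described above (the BrokenNet/FixedNet names are just placeholders, not from the original post): if forward accidentally ends up nested inside __init__, nn.Module falls back to its base forward, which raises NotImplementedError when the module is called.

import torch
import torch.nn as nn

class BrokenNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

        # Indented one level too deep: this defines a local function inside
        # __init__ instead of overriding nn.Module.forward.
        def forward(self, x):
            return self.fc(x)

net = BrokenNet()
# net(torch.randn(1, 4))  # raises NotImplementedError

class FixedNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):  # correctly defined at class level
        return self.fc(x)

out = FixedNet()(torch.randn(1, 4))  # works, output shape (1, 2)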
st81856
Thank you for your reply, but I realised my mistake soon after posting. The input to the fpn is supposed to be an OrderedDict(). I fixed that and now the model is working.
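For anyone hitting the same wall, here is a rough sketch of what that fix can look like. It assumes backbone is the BackboneWithFPN object from a torchvision detection model (so backbone.body is an IntermediateLayerGetter and the '0'-'3' keys follow torchvision's convention); treat the exact attribute names and keys as assumptions for your own model. The ResNet body is run stage by stage, the four stage outputs are collected into an OrderedDict, and that dict is what gets passed to the FeaturePyramidNetwork:

from collections import OrderedDict
import torch.nn as nn

class SplitBackboneWithFPN(nn.Module):
    def __init__(self, backbone):
        super().__init__()
        body = backbone.body  # IntermediateLayerGetter wrapping the ResNet
        self.stem = nn.Sequential(body.conv1, body.bn1, body.relu, body.maxpool)
        self.layer1 = body.layer1
        self.layer2 = body.layer2
        self.layer3 = body.layer3
        self.layer4 = body.layer4
        self.fpn = backbone.fpn

    def forward(self, x):
        x = self.stem(x)
        feats = OrderedDict()
        x = self.layer1(x); feats['0'] = x   # 256 channels
        x = self.layer2(x); feats['1'] = x   # 512 channels
        x = self.layer3(x); feats['2'] = x   # 1024 channels
        x = self.layer4(x); feats['3'] = x   # 2048 channels
        return self.fpn(feats)               # OrderedDict of 256-channel maps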
st81857
Hi, I have a network in which I want to train one set of params at odd numbered epochs and another set of params at even numbered epochs. What is the best way to organize this in PyTorch? Thanks!
st81859
Hi @Hovnatan_Karapetyan,

Given P1 and P2, the two param sets, I think the best way is to create two optimizers with the same hyperparameters:

optim_even = torch.optim.SGD(P1, lr=0.01)
optim_odd = torch.optim.SGD(P2, lr=0.01)

for e in range(epochs):
    optim = optim_even if e % 2 == 0 else optim_odd
    for x in dataloader:
        # use optim normally here

There might be some further optimizations to do depending on the optimizer you use (SGD should be fine with this approach) and on your model topology, but this should be sufficient in simple use cases.
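To make the loop above concrete, a hedged sketch of the full inner step could look like this (model, loss_fn and the (x, y) unpacking are placeholders, not from the original question); only the parameters owned by the active optimizer are updated on each step:

# hypothetical names: model, loss_fn, dataloader are stand-ins
for e in range(epochs):
    optim = optim_even if e % 2 == 0 else optim_odd
    for x, y in dataloader:
        optim.zero_grad()        # clears grads of the active param set
        loss = loss_fn(model(x), y)
        loss.backward()          # grads are still computed for both sets
        optim.step()             # but only the active set is updated

If you want to avoid computing gradients for the inactive set altogether, you could additionally toggle requires_grad on those parameters at the start of each epoch.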
st81860
Hey mates. Hope you are in good health. I need some guidance. I'm rebuilding an embedding model from TensorFlow to PyTorch. Here is the link: github.com Sujit-O/pykg2vec/blob/702e0d4012195dd8b807ec9f412887a7f3c02c9e/pykg2vec/core/TransM.py#L124

score_pos = pos_r_theta*self.distance(pos_h_e, pos_r_e, pos_t_e)
score_neg = neg_r_theta*self.distance(neg_h_e, neg_r_e, neg_t_e)

self.loss = tf.reduce_sum(tf.maximum(score_pos + self.config.margin - score_neg, 0))

def test_batch(self):
    """Function that performs batch testing for the algorithm.

       Returns:
           Tensors: Returns ranks of head and tail.
    """
    head_vec, rel_vec, tail_vec = self.embed(self.test_h_batch, self.test_r_batch, self.test_t_batch)
    norm_ent_embeddings = tf.nn.l2_normalize(self.ent_embeddings, axis=1)

    score_head = self.distance(norm_ent_embeddings, tf.expand_dims(rel_vec, 1), tf.expand_dims(tail_vec, 1))
    score_tail = self.distance(tf.expand_dims(head_vec, 1), tf.expand_dims(rel_vec, 1), norm_ent_embeddings)

    _, head_rank = tf.nn.top_k(score_head, k=self.data_stats.tot_entity)
    _, tail_rank = tf.nn.top_k(score_tail, k=self.data_stats.tot_entity)

Regarding lines 112 and 113: when I build and pass these in my PyTorch code:

pos_r_theta = nn.Embedding(self.theta, pos_r)
neg_r_theta = nn.Embedding(self.theta, neg_r)

it shows me an error (attached image: error.png).

Q1: Why is this embedding not accepting the tensors as parameters?
Q2: What is the difference between tf.nn.embedding_lookup() in TF and nn.Embedding() in PyTorch?
Q3: Does PyTorch support any function similar to np.asarray (code line 103)? (I now want to use torch.from_numpy and proceed only in PyTorch.)
st81861
Hey @muhibkhan,

A1: When you do this:

pos_r_theta = nn.Embedding(self.theta, pos_r)

you are creating an embedding layer, where the first two args (self.theta, pos_r) should be integers indicating first the number of embeddings, then the embedding dim. Here's an example of how to use it:

# in __init__
num_embs = 3
emb_dim = 10
emb = nn.Embedding(num_embs, emb_dim)
...
# later in forward
x = torch.LongTensor([2, 1, 0])
out = emb(x)  # shape: (3, emb_dim)

See also the documentation.

A2: From what I see of tf.nn.embedding_lookup, they are basically the same thing. The difference is that the lookup table initialization is automatic in PyTorch (based on the arguments, the num_embeddings x embedding_dim parameter will be initialized from a standard normal distribution), while you have to pass the already initialized table in TF (the params argument).

A3: If I understand correctly, you want to create a torch.Tensor from a list, similarly to what is done on line 103. You can simply create the Tensor by passing your list to the constructor:

list = [1, 2, 3]
tensor = torch.FloatTensor(list)
# or
tensor = torch.LongTensor(list)

Hope that answers some of your questions.
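To make the A2 comparison concrete, here is a small hedged sketch (the table size and id values are made up) showing that PyTorch's functional form also accepts an already initialized table, which is the closest one-to-one match to tf.nn.embedding_lookup(params, ids):

import torch
import torch.nn.functional as F

weight = torch.randn(5, 10)          # an existing 5 x 10 lookup table
ids = torch.LongTensor([2, 1, 0])

rows = F.embedding(ids, weight)      # ~ tf.nn.embedding_lookup(weight, ids)
print(rows.shape)                    # torch.Size([3, 10])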
st81862
Thanks mate. In the attached link, kindly see lines (58, 59, 103), which are used by lines 112 & 113. He passed the tensor as parameters. When I passed the tensor, it showed me an error of - got (Tensor, Tensor)…?
st81863
ValueError                                Traceback (most recent call last)
<ipython-input-30-33821ccddf5f> in <module>
     23         output = model(data)
     24         # calculate the batch loss
---> 25         loss = criterion(output, target)
     26         # backward pass: compute gradient of the loss with respect to model parameters
     27         loss.backward()

C:\Users\mnauf\Anaconda3\envs\federated_learning\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    487             result = self._slow_forward(*input, **kwargs)
    488         else:
--> 489             result = self.forward(*input, **kwargs)
    490         for hook in self._forward_hooks.values():
    491             hook_result = hook(self, input, result)

C:\Users\mnauf\Anaconda3\envs\federated_learning\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target)
    593                                                   self.weight,
    594                                                   pos_weight=self.pos_weight,
--> 595                                                   reduction=self.reduction)
    596
    597

C:\Users\mnauf\Anaconda3\envs\federated_learning\lib\site-packages\torch\nn\functional.py in binary_cross_entropy_with_logits(input, target, weight, size_average, reduce, reduction, pos_weight)
   2073
   2074     if not (target.size() == input.size()):
-> 2075         raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size()))
   2076
   2077     return torch.binary_cross_entropy_with_logits(input, target, weight, pos_weight, reduction_enum)

ValueError: Target size (torch.Size([16])) must be the same as input size (torch.Size([16, 1]))

I am working on the Horses vs Humans dataset. This is my code. I am using criterion = nn.BCEWithLogitsLoss() and optimizer = optim.RMSprop(model.parameters(), lr=0.01). My final layer is self.fc2 = nn.Linear(512, 1) with a softmax activation function applied to it. 16 is the batch size. The error says ValueError: Target size (torch.Size([16])) must be the same as input size (torch.Size([16, 1])). I don't understand where I need to make a change to rectify the error.
st81864
Calling target = target.unsqueeze(1) before passing target to criterion changed the target tensor size from [16] to [16, 1], which solved the issue. Furthermore, I also needed to do target = target.float() before passing it to criterion, because the model outputs are floats. Besides that, there was another error in the code: I was using a sigmoid activation function in the last layer, but I shouldn't, because the criterion I am using already has the sigmoid built in.
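Putting those three fixes together, a minimal sketch of the corrected training step could look like this (model, data and train_loader are placeholders for the variables in the linked notebook):

criterion = nn.BCEWithLogitsLoss()   # applies the sigmoid internally

for data, target in train_loader:
    optimizer.zero_grad()
    output = model(data)                  # last layer: nn.Linear(512, 1), no sigmoid
    target = target.float().unsqueeze(1)  # [16] -> [16, 1], as floats
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()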
st81865
Hi, I have a weird question about some crazy idea I want to materialize using PyTorch.

The problem: I want to train a re-id model using multiple datasets. Such a solution already exists in https://github.com/KaiyangZhou/deep-person-reid and it works. However, I believe it is not optimal, as in this repo they combine the datasets, sum all unique entities and relabel all of them.

Example: I have 3 datasets with 50,000 unique entities each. When using deep-person-reid I will get 150,000 unique entities at the end, and the last softmax layer will have a size of 150,000 neurons. The problem is that such an approach causes an explosion of the dimension of the softmax layer. In my case I have almost 1 million unique entities, which causes huge GPU RAM overhead just to store the model! (No worries, my entities are not human people.)

To alleviate the problem I came up with a crazy idea (I think it is a little bit crazy).

My idea: (For all examples, let's assume I have 3 different datasets with 50,000 entities each, and that image sizes etc. are consistent across all datasets.)

1. Create separate dataloaders, which will be used in turns during training. (3 separate dataloaders)
2. Create removable softmax layers, each used in conjunction with a specific dataset/dataloader. (3 separate softmax layers)
3. Create separate (?) optimizers that would be used in conjunction with a specific softmax layer. (3 separate optimizers)

In such a setup the training would look like this:

0. Move the model (without softmax layer), criterion etc. to the GPU.
1. Get the 1st dataloader & optimizer.
2. Append the 1st softmax layer to the model.
3. Run a batch.
4. Update parameters.
5. Remove the 1st softmax layer from the model and from the GPU.
6. Get the 2nd dataloader & optimizer.
7. Append the 2nd softmax layer to the model.
8. Run a batch.
9. Update parameters.
10. Remove the 2nd softmax … and so on.

I see two main challenges here:

1. I could not find a clear explanation of how optimizers work under the hood, but I heard that they are somehow bound to all modules in the model. I assume there should also be 3 separate optimizers, each dedicated to one softmax layer. Moreover, each optimizer should be disabled outside its own training cycle (so that it does not accumulate gradients when its dedicated softmax layer is not used in the current training cycle). Maybe write a custom optimizer that allows freezing its parameters when needed? What would be a sensible way to approach this challenge?
2. I am not sure how to deal with the problem of replacing the softmax layer during training. My first idea was to just choose the desired softmax layer in the forward function, but then the other softmax layers would still be sitting in GPU RAM, which would effectively be the same as what is implemented in deep-person-reid, and thus would not solve the main problem of consuming too much GPU RAM. My second thought was to modify the model architecture instead - replace only the softmax layer (with the corresponding weights, of course) - however, I am not sure how doable that is on-the-fly. Moreover, there will probably be some performance overhead with this approach.

Hope I explained the idea clearly enough. I am more than happy to explain it further if needed. Most of all, I hope someone can direct me to docs/tutorials/something else that would give me more insight into the inner workings of PyTorch and help me solve my problem.
st81866
You pass (some) parameters to the optimizer, which are updated using their gradients (stored in the .grad attribute of each parameter). If you don't pass certain parameters to an optimizer, they won't be updated in any way. Since softmax does not contain any parameters, your current idea won't work, unfortunately.

If a layer is not used during the forward pass, it won't get any gradients in the backward pass. You could therefore use an if condition and use different layers in the forward pass. Only the selected (and used) layer will be included in the backward pass. This will add some overhead, but you could transfer all unused layers back to the CPU to free the GPU memory for the current one.

I would generally say "Go for it!", especially after reading your first sentence:

bonzogondo: I have a weird question about some crazy idea I want to materialize using PyTorch.

However, even if you skip the idea of using different softmax layers (due to the missing parameters) and instead use e.g. different linear layers, I'm not sure how you would choose between them at test time, i.e. if you are dealing with new samples which are not coming from the three predefined Datasets.
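As an illustration of the if-condition approach, here is a hedged sketch (the backbone, feature dimension and class counts are made up, and swapping heads between devices is the optional memory optimization mentioned above): the per-dataset classification heads live in an nn.ModuleList, the active one is selected in forward, and only that head is moved to the GPU while it is needed.

import torch
import torch.nn as nn

class MultiHeadReID(nn.Module):
    def __init__(self, backbone, feat_dim, num_classes_per_dataset):
        super().__init__()
        self.backbone = backbone                     # shared feature extractor
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, n) for n in num_classes_per_dataset]
        )

    def forward(self, x, dataset_id):
        feats = self.backbone(x)
        return self.heads[dataset_id](feats)         # only this head gets gradients

model = MultiHeadReID(nn.Flatten(), feat_dim=3 * 64 * 64,
                      num_classes_per_dataset=[50_000, 50_000, 50_000])
model.backbone.cuda()

for dataset_id in range(3):
    model.heads[dataset_id].cuda()                   # bring the active head in
    # ... run the batches of this dataset's dataloader here ...
    model.heads[dataset_id].cpu()                    # push it back to free GPU RAM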
st81867
Thanks for the quick reply. I was really counting on a reply from you, ptrblck.

ptrblck: You pass (some) parameters to the optimizer, which are updated using their gradients (stored in the .grad attribute of each parameter). If you don't pass certain parameters to an optimizer, they won't be updated in any way. Since softmax does not contain any parameters, your current idea won't work, unfortunately.

Maybe I did not explain this part well enough, or I do not understand your reply properly. Also, maybe having 3 datasets with 50,000 entities each in my example was a bad choice. The datasets may have any number of entities, so the sizes of the softmax layers can vary a lot - that's why using multiple softmax layers seems a must. Of course softmax has no parameters, so I am not trying to update the softmax layer with backprop. Since I assumed that optimizers are 'bound' to all modules in the model, I reckoned that each model version (they differ only in the softmax layer) should have a separate optimizer, as the models would be different due to the different softmax layers. However, after reading your reply, it occurred to me that maybe there is an easier approach. As softmax has no parameters, can I have one optimizer, bound to all layers of the model except the softmax layer, that would optimize the parameters of all model versions? So one optimizer for all models with different softmax layers?

ptrblck: If a layer is not used during the forward pass, it won't get any gradients in the backward pass. You could therefore use an if condition and use different layers in the forward pass. Only the selected (and used) layer will be included in the backward pass. This will add some overhead, but you could transfer all unused layers back to the CPU to free the GPU memory for the current one.

This gives me a bit of hope, but I am confused about how the GPU memory can be freed with this 'forward approach'. If the model is loaded onto the GPU, all softmax layers will be sitting there all the time as well, even if they are not used in the current forward pass, am I right? The problem is that during the forward pass only the softmax layer would be changing, and it does not have any parameters anyway. I can't see any savings here.

ptrblck: I would generally "Go for it!", especially after reading your first sentence

Thanks! I will try, maybe asking a couple more questions.