st45268
Hi, in a specific application I need to freeze the running-statistics calculation of a BatchNorm layer in part of my code, but I still need to use the gamma (weight) and beta (bias) of this layer in training, with gradients in the forward/backward pass. I have implemented this by building an extra BatchNorm layer with affine=False and doing the forward pass as:

base_BatchNorm = nn.BatchNorm1d(1200)
extra_BatchNorm = nn.BatchNorm1d(1200, affine=False)
x = extra_BatchNorm(x) * base_BatchNorm.weight + base_BatchNorm.bias

I am curious whether there is a cleaner way to implement this. I think it can be done by setting momentum=0, or by a combination of momentum=0 and affine=False for base_BatchNorm as:

base_BatchNorm.momentum = 0
base_BatchNorm.affine = False
x = base_BatchNorm(x)
base_BatchNorm.momentum = 0.1
base_BatchNorm.affine = True

Is that right?
st45269
You can call bn.eval() which will use the running stats and will not update them. The affine parameters will still be trained.
st45270
I don’t want to use the running stats in this part of my code. I want to apply the zero-mean, unit-variance operation, updating the affine parameters but not updating the running stats. I think calling bn.eval() will use the running stats in the forward pass of the BatchNorm layer. Is that right?
st45271
In that case you could use F.linear with the bn.weight and bn.bias during the forward pass.
st45272
You mean something like:

Mirsadeghi: x = extra_BatchNorm(x)*base_BatchNorm.weight+base_BatchNorm.bias

My issue is that I want to apply this operation to other deep nets (ResNets) whose forward methods I cannot change directly, as they use BatchNorm in multiple places. Besides, I would have to do this for every net, which is not practical. Can I apply this call:

Mirsadeghi: x = extra_BatchNorm(x)*base_BatchNorm.weight+base_BatchNorm.bias

to BatchNorm layers without changing their code internally, something like model.apply?
st45273
No, model.apply will recursively apply the passed method to all layers. I think the cleanest way would be to write a custom module using your suggested behavior and replace all batchnorm layers in the model with your custom ones.
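For example, a minimal sketch of such a replacement module and of a helper that swaps it into an existing model; the class and helper names are made up for illustration, and the idea is simply that F.batch_norm with no running buffers and training=True normalizes with batch statistics while the reused gamma/beta keep training:

import torch
import torch.nn as nn
import torch.nn.functional as F

class FrozenStatsBatchNorm1d(nn.Module):
    # Normalizes with the current batch statistics, keeps training gamma/beta,
    # and has no running buffers to update.
    def __init__(self, bn: nn.BatchNorm1d):
        super().__init__()
        # reuse the affine parameters of the original layer so they keep training
        self.weight = bn.weight
        self.bias = bn.bias
        self.eps = bn.eps

    def forward(self, x):
        # running_mean/var set to None + training=True -> batch stats are used
        return F.batch_norm(x, None, None, self.weight, self.bias,
                            training=True, eps=self.eps)

def replace_batchnorms(model):
    # Recursively swap every nn.BatchNorm1d for the frozen-stats version.
    for name, child in model.named_children():
        if isinstance(child, nn.BatchNorm1d):
            setattr(model, name, FrozenStatsBatchNorm1d(child))
        else:
            replace_batchnorms(child)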
st45274
I have written the following function to normalize images to the range [0, 1]:

def normalize(image):
    bn, kn, h, w = image.shape
    image = image.view(bn, kn, -1)
    image -= image.min(2, keepdim=True)[0]
    image /= image.max(2, keepdim=True)[0]
    image = image.view(bn, kn, h, w)
    return image

Unfortunately this makes images darker, as can be seen here: the first one is before and the second one after normalization. How can that be?
st45275
The posted images don’t seem to be only normalized but also rotated or cropped, so I’m unsure which transformations are applied on the image. Could you upload the original image so that we can check it?
st45276
Hi there, I am using Google Colab. When executing the code on my local machine I get no error, but when running training.py in Colab I get this error:

1534 mel_basis = filters.mel(sr, n_fft, **kwargs)
1535
1536 return np.dot(mel_basis, S)
TypeError: mel() got an unexpected keyword argument 'win_length'

But in the docs of this librosa feature I do find the win_length argument. Please guide me.
st45277
Solved by ptrblck in post #2.
st45278
I guess the used librosa versions might differ and you might need to update/downgrade the Colab version if your local code is working fine.
st45279
In the current pytorch docs for torch.Adam, the following is written: "Implements Adam algorithm. It has been proposed in Adam: A Method for Stochastic Optimization. The implementation of the L2 penalty follows changes proposed in Decoupled Weight Decay Regularization." This would lead me to believe that the current implementation of Adam is essentially equivalent to AdamW. The fact that torch.AdamW exists as a separate optimizer leads me to believe that this isn’t true. Also, after looking at the source code of torch.Adam, I don’t see any difference from a standard L2 penalty implementation. So is the documentation incorrect, and are the “changes proposed in [Decoupled Weight Decay Regularization]” actually absent from torch.Adam?
st45280
This might be indeed a documentation issue, as I cannot see any point of using AdamW in this case. Would you mind creating a GitHub issue so that we can track it and would you be interested in fixing the doc?
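For context, a simplified sketch of where the decay term enters in each case (ignoring Adam's moment estimates; param, grad, lr and weight_decay are stand-in tensors/values, not the optimizer's internals):

import torch

param = torch.randn(5)
grad = torch.randn(5)
lr, weight_decay = 1e-3, 1e-2

# L2 penalty as implemented in torch.optim.Adam: the decay is folded into the
# gradient and therefore later rescaled by the adaptive per-parameter step size
grad_with_l2 = grad + weight_decay * param

# decoupled weight decay as in torch.optim.AdamW: the decay is applied to the
# parameter directly, outside the adaptive update
param_decoupled = param - lr * weight_decay * param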
st45281
I’ve made a new issue here: "Weight_decay in torch.Adam" (github.com/pytorch/pytorch, opened Dec 3, 2020). I’m not sure what is involved in fixing the doc myself as I’ve never opened a pull request before.
st45282
In my case:

class NN(nn.Module):
    @torch.jit.unused
    def forward(self):
        pass

    @torch.jit.export
    def another(self):
        pass

nn = NN()
scripted = torch.jit.script(nn)
scripted.another()

This works fine in Python (the real structure is much more complex). But in C++ (libtorch) I get:

error: ‘using Module = struct torch::jit::Module {aka struct torch::jit::Module}’ has no member named ‘another’

How can I call another at the C++ level?
st45283
Hello everyone. How can I do multiclass multi label classification in Pytorch? Is there a tutorial or example somewhere that I can use? I’d be grateful if anyone can help in this regard Thank you all in advance
st45284
Solved by ptrblck in post #7.
st45285
I know everything that’s there, and also there was not a single word on multi class multi label classification! Have you yourself even looked at it before suggesting it?
st45286
Thanks, I didn’t mean to be rude, I just genuinely wanted to know whether you mistook this for some other link. Anyway, I appreciate your help.
st45287
Does creating different heads (e.g. 3 classifiers) count as multi-task? They all have their respective losses, and for backpropagation the losses are summed and the result is backpropagated. If this is multi-task, what is a multi-label scenario? Is that just another name for multi-label classification? If so, what is multi-class multi-label classification?
st45288
Have a look at this post for a small example on multi-label classification. You could use multi-hot encoded targets, nn.BCE(WithLogits)Loss, and an output layer returning [batch_size, nb_classes] (same as in multi-class classification).
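A minimal sketch of that setup (the sizes and the single linear layer are made up for illustration):

import torch
import torch.nn as nn

batch_size, nb_classes, in_features = 4, 5, 10
model = nn.Linear(in_features, nb_classes)      # output: [batch_size, nb_classes]
criterion = nn.BCEWithLogitsLoss()

x = torch.randn(batch_size, in_features)
# multi-hot targets: each sample can have several active classes at once
target = torch.randint(0, 2, (batch_size, nb_classes)).float()

logits = model(x)
loss = criterion(logits, target)
loss.backward()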
st45289
Thanks, why did you use nn.BCEWithLogitsLoss() and not cross entropy? Can’t we use a sigmoid and a normal cross entropy to have probabilities for all classes?
st45290
nn.CrossEntropyLoss uses the target to index the logits in your model’s output. Thus it is suitable for multi-class classification use cases (only one valid class in the target). nn.BCEWithLogitsLoss on the other hand treats each output independently and is suitable for multi-label classification use cases.
st45291
Thanks a lot as always. Then what about nn.MultiLabelSoftMarginLoss? Shouldn’t we use that? (I know it’s simply sigmoid + BCE.) I guess back in 2017 there was an issue about its numerical instability; is that why you chose BCEWithLogitsLoss?
st45292
The link points to a legacy version of the loss. This is the current implementation in the master branch. The main difference is that the loss will be averaged over the feature dimension:

loss = loss.sum(dim=1) / input.size(1) # only return N loss values

Here is an older post which compared both losses; it won’t work anymore due to the shape mismatch, so here is the updated version:

x = torch.randn(10, 3)
y = torch.FloatTensor(10, 3).random_(2)

# double the loss for class 1
class_weight = torch.FloatTensor([1.0, 2.0, 1.0])
# double the loss for the last sample
element_weight = torch.FloatTensor([1.0]*9 + [2.0]).view(-1, 1)
element_weight = element_weight.repeat(1, 3)

bce_criterion = nn.BCEWithLogitsLoss(weight=None, reduction='none')
multi_criterion = nn.MultiLabelSoftMarginLoss(weight=None, reduction='none')

bce_criterion_class = nn.BCEWithLogitsLoss(weight=class_weight, reduction='none')
multi_criterion_class = nn.MultiLabelSoftMarginLoss(weight=class_weight, reduction='none')

bce_criterion_element = nn.BCEWithLogitsLoss(weight=element_weight, reduction='none')
multi_criterion_element = nn.MultiLabelSoftMarginLoss(weight=element_weight, reduction='none')

bce_loss = bce_criterion(x, y)
multi_loss = multi_criterion(x, y)

bce_loss_class = bce_criterion_class(x, y)
multi_loss_class = multi_criterion_class(x, y)

bce_loss_element = bce_criterion_element(x, y)
multi_loss_element = multi_criterion_element(x, y)

print(torch.allclose(bce_loss.mean(1), multi_loss))
> True
print(torch.allclose(bce_loss_class.mean(1), multi_loss_class))
> True
print(torch.allclose(bce_loss_element.mean(1), multi_loss_element))
> True

Shisho_Sama: I guess back in 2017 there was an issue about its numerical instability, is it why you chose BCEWithLogitsLoss?

Yes, and I think it could still be an issue, as logsigmoid is mathematically more stable than log + sigmoid, since internally the LogSumExp trick will be applied, as seen here.
st45293
When I try this, I get the following error: RuntimeError: Boolean value of Tensor with more than one value is ambiguous Here are my logits tensor([[-2.8443, -3.3110, -2.5216, ..., -2.7601, -3.0928, -2.9031], [-2.8533, -2.9637, -2.5839, ..., -2.3841, -2.8846, -3.0366], [-2.8923, -3.2757, -2.6118, ..., -2.4875, -2.7701, -3.1466], ..., [-2.9981, -3.2178, -2.5539, ..., -2.7732, -3.0216, -2.8305], [-2.7969, -3.0189, -2.4602, ..., -2.2811, -2.9239, -3.1404], [-2.8644, -2.9294, -2.5960, ..., -2.4510, -2.8790, -2.9344]], grad_fn=<IndexBackward>) and labels tensor([[0, 0, 0, ..., 0, 1, 0], [0, 0, 0, ..., 0, 1, 0], [0, 0, 0, ..., 1, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 1]]) tensors of shape batch by classes. What am I doing wrong?
st45294
Nvm. I was treating nn.BCEWithLogitsLoss as a function from torch.nn.functional and was doing nn.BCEWithLogitsLoss(logits, label). Fixed by changing it to nn.BCEWithLogitsLoss()(logits, label), in case anyone runs into that.
st45295
Normally we use PyTorch like the following code:

class MyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Conv2d(3, 32, 3)
        self.layer2 = nn.Conv2d(32, 64, 3)
        self.layer3 = nn.Conv2d(64, 2, 3)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = self.layer3(out)
        return out

My question is: do I have to instantiate layers in the __init__() function? Are the following 2 code segments also okay?

Code_1

class MyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Conv2d(3, 32, 3)

    def forward(self, x):
        out = self.layer1(x)
        self.layer2 = nn.Conv2d(32, 64, 3)
        self.layer3 = nn.Conv2d(64, 2, 3)
        out = self.layer2(out)
        out = self.layer3(out)
        return out

Code_2

layer2 = nn.Conv2d(32, 64, 3)
layer3 = nn.Conv2d(64, 2, 3)

class MyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Conv2d(3, 32, 3)

    def forward(self, x):
        out = self.layer1(x)
        out = layer2(out)
        out = layer3(out)
        return out
st45296
Solved by Majid_Al in post #2.
st45297
PyTorch uses that template to instantiate layers so it can keep track of functions and their derivatives in the forward and backward pass. You can check out how those inner workings come together in the source code. Also look into TensorBoard to visualize the graph that you build and how it is updated in each forward and backward pass. For example, in your Code_1 you have instantiated 2 of the convolution layers in the forward method. This results in generating 2 new convolution layers in each forward pass, therefore losing track of the parameters you intend to update. These update rules and derivations are defined in the nn.Module class that we inherit from each time we create a new class. In the second case, the layers are not defined within the class, thus they don’t inherit from the nn.Module class. So when you create your optimizer and pass the network parameters to be updated, optimizer = Adam(MyNet.parameters(), lr=0.001), the weights of the convolutions outside the network don’t get passed to the optimizer.
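A small sketch of the registration issue described above (the class names are made up): only layers assigned to self in __init__ show up in model.parameters(), so anything created in forward or outside the module is invisible to the optimizer.

import torch.nn as nn

class GoodNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Conv2d(3, 32, 3)
        self.layer2 = nn.Conv2d(32, 64, 3)

class BadNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Conv2d(3, 32, 3)   # only this one is registered

    def forward(self, x):
        layer2 = nn.Conv2d(32, 64, 3)       # re-created (and re-initialized) every call
        return layer2(self.layer1(x))

print(len(list(GoodNet().parameters())))  # 4 (weight + bias per conv)
print(len(list(BadNet().parameters())))   # 2 - layer2 is invisible to the optimizer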
st45298
Hi, I have seen many versions of the training/validation part of a CNN, and here is mine:

model.train()
for e in range(epoch):
    train_sum_loss = 0.0
    validation_sum_loss = 0.0
    for inputs, labels in train_loader:
        inputs = inputs.to(device)
        labels = labels.to(device)
        optimizer.zero_grad()
        outputs = model.forward(inputs)
        batch_loss = loss(outputs.squeeze(), labels)
        batch_loss.backward()
        optimizer.step()
        train_sum_loss += batch_loss.item()

    model.eval()
    for inputs, labels in validation_loader:
        inputs = inputs.to(device)
        labels = labels.to(device)
        outputs = model.forward(inputs)
        batch_loss = loss(outputs.squeeze(), labels)
        validation_sum_loss += batch_loss.item()

Results seem good: the network converges and there is no overfitting. But if I put the model.train() just before the training loop (like the validation loop), the NN overfits really quickly. Which one of these methods is the right one? Which result can I trust? Thanks for helping.
st45299
Solved by pchandrasekaran in post #2.
st45300
model.train()
for inputs, labels in train_loader:
    ...

The model.train() needs to go there, inside the epoch loop. If you put it outside as in your snippet, the model will only be in training mode for the first epoch; all subsequent epochs will run in evaluation mode.
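Applied to the loop from the question above, the structure would roughly be (same variable names as in that snippet):

for e in range(epoch):
    model.train()                       # re-enable training mode every epoch
    for inputs, labels in train_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        batch_loss = loss(model(inputs).squeeze(), labels)
        batch_loss.backward()
        optimizer.step()

    model.eval()                        # evaluation mode only for validation
    for inputs, labels in validation_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        batch_loss = loss(model(inputs).squeeze(), labels)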
st45301
Ok, thanks. What about this instruction: torch.no_grad()? Some people use it, but not everyone. When I read the documentation, it seems to me that this command does the same thing as model.eval(), doesn’t it?
st45302
No. They aren’t the same. I’ve copied a reply from another user that explains the difference. These two have different goals: model.eval() will notify all your layers that you are in eval mode, that way, batchnorm or dropout layers will work in eval mode instead of training mode. torch.no_grad() impacts the autograd engine and deactivates it. It will reduce memory usage and speed up computations but you won’t be able to backprop (which you don’t want in an eval script). In essence, simply using torch.no_grad() will not impact layers such as dropout or batchnorm, they will still be used during inference.
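In practice the two are typically combined for validation; a short sketch reusing the variable names from the loop above:

model.eval()                 # dropout/batchnorm switch to inference behavior
with torch.no_grad():        # no autograd graph is built, saving memory and time
    for inputs, labels in validation_loader:
        outputs = model(inputs)
        batch_loss = loss(outputs.squeeze(), labels)
model.train()                # switch back before the next training epoch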
st45303
Ok, thanks for the details. So I can use both together during the validation loop. But I haven’t found a way to do the opposite of no_grad() (to get back into training mode). Edit: maybe I’ve just found it: torch.set_grad_enabled(True). Is it this instruction?
st45304
@DSX I apologize. I just double-checked the documentation. You do not need to use set_grad_enabled(); torch.no_grad() only disables gradient tracking temporarily:

x = torch.randn(3, requires_grad=True)
print(x.requires_grad)
print((x ** 2).requires_grad)

with torch.no_grad():
    print((x ** 2).requires_grad)

print((x ** 2).requires_grad)

True
True
False
True
st45305
My code for multi-task training (segmentation task and reconstruction task):

class MyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Conv2d(3, 32, 3)
        self.layer2 = nn.Conv2d(32, 64, 3)
        self.layer3 = nn.Conv2d(64, 2, 3)
        self.layer4 = nn.Conv2d(64, 1, 3)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out1 = self.layer3(out)
        out2 = self.layer4(out)
        return out1, out2

During training I get 2 outputs and compute their losses respectively, so that I can train the net alternately:

x, y = dataloader() # generate a batch of data
net = MyNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
pred1, pred2 = net(x)
loss1 = seg_loss(pred1, y)
loss2 = rec_loss(pred2, x)
loss1.backward()
opt.step()
loss2.backward()
opt.step()

Correct, right? One day I want to train just the reconstruction branch. Then I would use the following code:

x, y = dataloader() # generate a batch of data
net = MyNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
pred1, pred2 = net(x)
loss2 = rec_loss(pred2, x)
loss2.backward()
opt.step()

It works well; pred1 does not participate in the backward pass, as I expected. But I found that the forward of pred1 also occupies some GPU memory. In order to save GPU memory, how can I prevent the forward of pred1?
st45306
If you want to keep this output during reconstruction-only training, and prevent autograd from storing its intermediate tensors, you can wrap the self.layer3(out) execution in the torch.set_grad_enabled(False) context manager. In order to swap between your training regimes, you can use the following code example:

class MyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.seg_train = True
        self.layer1 = nn.Conv2d(3, 32, 3)
        self.layer2 = nn.Conv2d(32, 64, 3)
        self.layer3 = nn.Conv2d(64, 2, 3)
        self.layer4 = nn.Conv2d(64, 1, 3)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        # if grad is not enabled, we should not enable it
        grad_enabled = torch.is_grad_enabled()
        with torch.set_grad_enabled(self.seg_train and grad_enabled):
            out1 = self.layer3(out)
        out2 = self.layer4(out)
        return out1, out2

Then you will be able to swap training regimes by changing the model.seg_train attribute.
st45307
Thanks. Another question: if I finish training the "Seg+Recon" net and save the whole net to disk, and in the future I want to load the trained net and continue to fine-tune only the "Recon" part, will your code work in this case? Or, if I finish training the "Recon" net and save it to disk, and in the future I want to load the trained "Recon" net and continue to fine-tune the "Seg+Recon" net, will your code work in that case? I am just curious whether the saved model will include self.layer3 even if I did not use it in the forward function.
st45308
Hello, I’m having a hard time doing a simple time series classification using PyTorch:

x = torch.randn(100, 5, requires_grad=true)
y = torch.empty(100, dtype=torch.long).random_(2)
trainBa = torch.utils.data.DataLoader(dataset=[x,y], batch_size=10, shuffle=true)
model = nn.Sequential(nn.Linear(100, 400), nn.ReLU(), nn.Linear(400, 1), nn.LogSoftmax(dim=0)).to(device)
optimizer = optim.Adam(model.parameters(), lr=0.001)

for epoch in 1:10
    for (i_batch, (z,y)) in enumerate(trainBa)
        (z, y) = z.to(device), y.to(device)
        o = model(z)
        loss = nn.CrossEntropyLoss((o,2), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    end
end

I get an error at trainBa = torch.utils.data.DataLoader(dataset=[x,y], batch_size=10, shuffle=true):

RuntimeError('stack expects each tensor to be equal size, but got [100] at entry 0 and [100, 5] at entry 1',)
File "/home/klop/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
    data = self._next_data()
File "/home/klop/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
    data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/klop/.local/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
    return self.collate_fn(data)
File "/home/klop/.local/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 55, in default_collate
    return torch.stack(batch, 0, out=out)
st45309
The error looks to be coming from the DataLoader due to the dataset setup. You could use it as below:

from torch.utils.data import TensorDataset, DataLoader

tr_dataset = TensorDataset(x, y)
trainBa = DataLoader(dataset=tr_dataset, batch_size=10, shuffle=True)
st45310
Thank you. I’m still getting an error though (on loss.backward()):

x = torch.randn(100, 5, requires_grad=true)
y = torch.empty(100, dtype=torch.long).random_(2)
tr_dataset = torch.utils.data.TensorDataset(x, y)
trainBa = torch.utils.data.DataLoader(dataset=tr_dataset, batch_size=10, shuffle=true)
device = torch.device(ifelse(torch.cuda.is_available(), "cuda", "cpu"))
model = nn.Sequential(nn.Linear(5, 400), nn.ReLU(), nn.Linear(400, 1), nn.LogSoftmax(dim=1)).to(device)
optimizer = optim.Adam(model.parameters(), lr=0.001)
losss = nn.NLLLoss()

for epoch in 1:10
    for (i_batch, (z,y)) in enumerate(trainBa)
        (z, y) = z.to(device), y.to(device)
        o = model(z)
        loss = losss(o, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

RuntimeError('cuda runtime error (710) : device-side assert triggered at /pytorch/aten/src/THC/generic/THCTensorMath.cu:29',)
File "/home/klop/.local/lib/python3.6/site-packages/torch/tensor.py", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/klop/.local/lib/python3.6/site-packages/torch/autograd/__init__.py", line 132, in backward
    allow_unreachable=True) # allow_unreachable flag

I have checked the dimensions of o and the target y: (10,) and (10,). Without CUDA I get:

in loss = losss(o, y)
IndexError('Target 1 is out of bounds.',)
File "/home/klop/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
File "/home/klop/.local/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 213, in forward
    return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction)
File "/home/klop/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 2264, in nll_loss
    ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)

EDIT: fixed the error by changing nn.Linear(400, 1) to nn.Linear(400, 2).
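For reference, a cleaned-up plain-Python version of the fixed setup (two output units, LogSoftmax over dim=1, NLLLoss); it is only a sketch of the same toy example, not the poster's actual script:

import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader

x = torch.randn(100, 5)
y = torch.randint(0, 2, (100,))                       # binary class labels
loader = DataLoader(TensorDataset(x, y), batch_size=10, shuffle=True)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Sequential(nn.Linear(5, 400), nn.ReLU(),
                      nn.Linear(400, 2), nn.LogSoftmax(dim=1)).to(device)
optimizer = optim.Adam(model.parameters(), lr=0.001)
criterion = nn.NLLLoss()

for epoch in range(10):
    for z, target in loader:
        z, target = z.to(device), target.to(device)
        loss = criterion(model(z), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()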
st45311
Every time I manually interrupt training, some memory remains stuck. For example, interrupting this tutorial (http://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html) during the training phase consumes about 500 MB. The memory is not connected to any objects; deleting everything in the notebook’s scope doesn’t release it. Interrupting training a second time adds the same amount of leaked memory to the “pool”. Restarting the kernel helps. Is there any way to release the memory or reset the graph, similar to tf.reset_default_graph()?
st45312
Yes, I face the same problem with Jupyter. Restarting the kernel also does not do the trick. In fact, Jupyter starts a bunch of processes with IDs in a sequence. They won’t even show up in your nvidia-smi output. The only way I’ve found to find them is to use sudo fuser -v /dev/nvidia*. Then kill the processes by PID, or just kill all IPython kernel processes using pkill -f ipykernel.
st45313
Also, repeatedly deep copying the model to the same variable leads to growing memory consumption.
st45314
potentially deleting that might resolve the issue! Any way that you know to delete it?
st45315
I think Jupyter caches outputs in a way that adds references to the results. That said, I have been using Jupyter with pytorch for half a year now and have not run into the things you describe… Best regards Thomas
st45316
Thomas, please, could you check it on your machine? It will take only a few minutes :) Run this tutorial: http://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html and interrupt training several times after maybe 10 seconds. Does your memory consumption grow?
st45317
Tried a few other things. The problem appears to be connected with how Jupyter works. Would appreciate any ideas :)
st45318
I tried this. This doesn’t release the GPU memory. But this works - pkill -f ipykernel This will kill your kernel too, so it will be like restarting the notebook.
st45319
In the main page of the jupyter notebook you open, you can choose the ipynb you’ve just run, and on the top click shutdown, then I think the memory should be released.
st45320
I know that I can shut down or restart the notebook to release memory :) The question is: how can it be done without restarting, and why does it happen?
st45321
Have you tested whether the same happens if you run one full pass before the partial ones? By design, PyTorch caches allocations. Best regards Thomas
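If the memory is only held by the caching allocator (and not by live tensors still referenced from the notebook), it can be inspected and returned to the driver like this; memory held by live Python references will not be freed this way, and torch.cuda.memory_reserved() is the newer name of what older versions call torch.cuda.memory_cached():

import gc
import torch

gc.collect()                          # drop unreachable Python references first
torch.cuda.empty_cache()              # release cached blocks back to the driver

print(torch.cuda.memory_allocated())  # memory held by live tensors
print(torch.cuda.memory_reserved())   # memory held by the caching allocator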
st45322
I now have the exact same issue: cuda runtime error (2) : out of memory at /home/gpu/dev/rt/pytorch/torch/lib/THC/generic/THCStorage.cu:66, thrown after repeated training during which I stopped the kernel and started training again. Of course PyTorch then returns False for torch.cuda.is_available(). Neither torch nor Jupyter is able to recover from this. The only solution for me was to restart the computer, since the memory is not released otherwise. BTW, I am using this command line for monitoring: watch -n 0.1 'ps f -o user,pgrp,pid,pcpu,pmem,start,time,command -p `sudo lsof -n -w -t /dev/nvidia*`' (source: https://stackoverflow.com/questions/8223811/top-command-for-gpus-using-cuda)
st45323
I’m having the same issue. I’m developing a new model, and naturally I get a lot of crashes while debugging. Every time I have to restart the kernel and rerun the preprocessing steps... It’s a pain in the neck.
st45324
I’m training a GAN, but getting the following error. Can someone explain why this is happening, and how to resolve it?

############################
# (1) Update D network: maximize D(x)-1-D(G(z))
###########################
real_img = torch.Tensor(target)
if torch.cuda.is_available():
    real_img = real_img.cuda()
z = torch.Tensor(data)
if torch.cuda.is_available():
    z = z.cuda()
fake_img = netG(z)

netD.zero_grad()
real_out_1 = netD(real_img)
real_out = torch.mean(real_out_1)
fake_out_1 = netD(fake_img)
fake_out = torch.mean(fake_out_1)
d_loss = -torch.log(real_out_1) - torch.log(1 - fake_out_1)
d_loss = torch.mean(d_loss)
d_loss.backward(retain_graph=True)
optimizerD.step()

############################
# (2) Update G network: minimize 1-D(G(z)) + Perception Loss + Image Loss
###########################
netG.zero_grad()
g_loss = generator_criterion(fake_out, fake_img, real_img)
g_loss.backward()

fake_img = netG(z)
fake_out = netD(fake_img).mean()
optimizerG.step()

RuntimeError Traceback (most recent call last)
in ()
119 netG.zero_grad()
120 g_loss = generator_criterion(fake_out, fake_img, real_img)
--> 121 g_loss.backward()
122
123 fake_img = netG(z)

/usr/local/lib/python3.6/dist-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
219 retain_graph=retain_graph,
220 create_graph=create_graph)
--> 221 torch.autograd.backward(self, gradient, retain_graph, create_graph)

/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
130 Variable.execution_engine.run_backward(
131 tensors, grad_tensors, retain_graph, create_graph,
--> 132 allow_unreachable=True) # allow_unreachable flag

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 1024, 1, 1]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
st45325
Solved by ruchit_1 in post #2.
st45326
Found the solution. All I needed to do was: fake_out_1 = netD(fake_img.detach()). Correct me if I was wrong!
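That matches the usual GAN pattern: detach the generator output for the discriminator step, so that optimizerD.step() does not invalidate activations the generator's backward still needs, and recompute the discriminator output in the generator step. A rough sketch reusing the variable names from the snippet above (not the exact original code):

# --- discriminator step ---
netD.zero_grad()
fake_img = netG(z)
d_loss = (-torch.log(netD(real_img)) - torch.log(1 - netD(fake_img.detach()))).mean()
d_loss.backward()
optimizerD.step()

# --- generator step ---
netG.zero_grad()
fake_out = netD(fake_img).mean()              # fresh forward through the updated D
g_loss = generator_criterion(fake_out, fake_img, real_img)
g_loss.backward()
optimizerG.step()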
st45327
Hello everyone, I am trying to build a system which should optimize GPU memory utilization during the training process. In order to deal with custom user models, I need an estimator of GPU memory usage for a custom object. A simple example of such an object would be:

import torch
from argparse import Namespace

x = torch.zeros(1024).cuda()
y = torch.ones([2, 40]).cuda()
tmp_namespace = Namespace()
tmp_namespace.find_me = x
object_to_check = {'a': [1, 2, {'y': y}], 'b': (6, tmp_namespace, (y, y))}

I want to write a function which will tell me what the GPU memory usage of this object is. Some observations I already made: torch.cuda.memory_allocated() might help here. I tried this way:

def object_gpu_memory(obj):
    start_memory = torch.cuda.memory_allocated()
    with torch.no_grad():
        tmp = copy.deepcopy(obj)
    end_memory = torch.cuda.memory_allocated()
    del tmp
    return end_memory - start_memory

but, sadly, it gives me "RuntimeError: Only Tensors created explicitly by the user (graph leaves) support the deepcopy protocol at the moment" for my use case; I want to support intermediate tensors as well. The Python garbage collector may help to get all tensor objects, but I did not figure out how to apply it here. Recursive traversal of a typical object with the dir() method falls into infinite recursion. Is there a way to create such a function? If so, could you give some hints on how to create it? Thanks in advance.
st45328
I have 2 networks with some common layers. So is the following code correct? class CommonNet(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(3, 32, 3) self.conv2 = nn.Conv2d(32, 64, 3) class Net1(CommonNet): def __init__(self): super().__init__() self.final_conv = nn.Conv2d(64, 4, 3) def forward(self, x): out = self.conv1(x) out = self.covn2(out) out = self.final_conv(out) return out class Net2(CommonNet): def __init__(self): super().__init__() self.final_conv = nn.Conv2d(64, 2, 3) def forward(self, x): out = self.conv1(x) out = self.covn2(out) out = self.final_conv(out) return out
st45329
Solved by JuanFMontesinos in post #2.
st45330
You need to call super in CommonNet, but the code is correct (there are typos).
st45331
Normally we use PyTorch like the following code:

class MyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Conv2d(3, 32, 3)
        self.layer2 = nn.Conv2d(32, 64, 3)
        self.layer3 = nn.Conv2d(64, 2, 3)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = self.layer3(out)
        return out

My question is: does the order of the layers in the __init__() function matter? Is the following code right?

class MyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer3 = nn.Conv2d(64, 2, 3)
        self.layer2 = nn.Conv2d(32, 64, 3)
        self.layer1 = nn.Conv2d(3, 32, 3)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = self.layer3(out)
        return out
st45332
Solved by JuanFMontesinos in post #2.
st45333
It doesn’t matter; forward is what defines the real order. The only difference is that the layers will be displayed in a different order when you print the model or save it.
st45334
Hi all. I’m currently a PhD student at Duke University and I’m using PyTorch to conduct my research. It’s nice to find such a great forum! I define the following network architecture, where model_resnet50_bn is a pre-trained ResNet50 with batch normalization layers in between:

class MyPipeline(nn.Module):
    def __init__(self, image_size, transformed_meteo_size, num_classes=1000):
        super(MyPipeline, self).__init__()
        self.resnet_pretrained = model_resnet50_bn
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        in_features = 2048
        self.fc = nn.Linear(in_features, num_classes)
        self.dropout = nn.Dropout(p=0.6)
        self.elu = nn.ELU()
        self.fc1 = nn.Linear(self.resnet_pretrained.fc.out_features + transformed_meteo_size, 300)
        self.fc2 = nn.Linear(300, 1)

    def forward(self, image, transformed_meteo_features):
        img_features = self.resnet_pretrained(image)
        img_features = self.avgpool(img_features[0])
        img_features = torch.flatten(img_features, 1)
        img_features = self.fc(img_features)
        # Concatenate image representations with transformed meteo features
        x = torch.cat((img_features, transformed_meteo_features), dim=-1)
        x = self.dropout(x)
        x = self.fc1(x.float())
        x = self.elu(x)
        x = self.fc2(x)
        return x

The last FC layer fc2 outputs a float, and I have another target float number, so this is a regression task. I use nn.MSELoss() and the Adam optimizer with a learning rate of 0.0001 to train this network, but I find that the MSE loss does not converge after 500 training epochs. I adjusted the learning rate several times but the problem persists. Does this have something to do with the network architecture or some unexpected behavior of the autograd package? Any help will be appreciated!
st45335
Solved by albanD in post #2.
st45336
Hi, I don’t think there is any reason for the autograd to fail here. You might want to make sure that you preprocess your data properly, though, to avoid any large values, as they could hinder training. The architecture is very task-dependent, I’m afraid, but for vision-related tasks a pre-trained resnet sounds like a good idea to me.
st45337
Hi @albanD, Thank you so much for your suggestion! For image preprocessing, I currently have only 2 steps: 1. Center-crop the image to be 110x110, 2. Normalize the image with transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)). Does this look ok or do you think I may need to adjust the normalization parameters? I do not use Pytorch very often so I’m not quite clear what parameters I should use for normalization of different types of images.
st45338
Hi @albanD Do we have any built-in preprocessing function in Pytorch corresponding to tf.keras.applications.resnet.preprocess_input? Thanks for helping in advance!
st45339
I think the parameters for that transform should be taken from your dataset statistics, not random values, so the values you posted don’t sound right. This is not specific to PyTorch; you would need similar preprocessing in many ML frameworks.

Do we have any built-in preprocessing function in Pytorch corresponding to tf.keras.applications.resnet.preprocess_input?

I guess that would be the preprocessing used for ImageNet images, no? Do your images have the same distribution? I think you need to check that and make sure they do by applying the right normalization.
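A rough sketch of computing per-channel mean/std over your own training set and plugging them into Normalize; it assumes a hypothetical train_dataset that yields 3-channel float [C, H, W] tensors of equal size (e.g. after ToTensor and the center crop):

import torch
from torchvision import transforms

loader = torch.utils.data.DataLoader(train_dataset, batch_size=64)

n, mean, sq_mean = 0, torch.zeros(3), torch.zeros(3)
for images, _ in loader:
    b = images.size(0)
    mean += images.mean(dim=(0, 2, 3)) * b          # per-channel mean of this batch
    sq_mean += (images ** 2).mean(dim=(0, 2, 3)) * b
    n += b
mean /= n
std = (sq_mean / n - mean ** 2).sqrt()              # std = sqrt(E[x^2] - E[x]^2)

normalize = transforms.Normalize(mean.tolist(), std.tolist())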
st45340
Hi @albanD, I think my images are not very similar to the ImageNet dataset because they are 334x334 satellite images, so there is a strong spatial relationship. Also, my ResNet50 is not pretrained on ImageNet but with a self-supervised learning framework.
st45341
Hello, I’m working with PyTorch 1.7 within Docker (based on the image nvcr.io/nvidia/pytorch:20.10-py3, in case it matters). I’m using Ubuntu LTS 18.04 with CUDA 11.1.

>>> torch.__version__
'1.7.0a0+7036e91'

I can use the fft functions of PyTorch, but I want to use the fft module as advised in the documentation. The problem is I can’t reproduce the examples given in the docs.

Using torch.fft.fft according to the docs:

>>> import torch.fft
>>> t = torch.arange(4)
>>> t
tensor([0, 1, 2, 3])
>>> torch.fft.fft(t)
tensor([ 6.+0.j, -2.+2.j, -2.+0.j, -2.-2.j])

What I get when I try to reproduce it:

>>> import torch.fft
>>> t = torch.arange(4)
>>> t
tensor([0, 1, 2, 3])
>>> torch.fft.fft(t)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: Expected a complex tensor.

Using torch.fft.rfft according to the docs:

>>> import torch.fft
>>> t = torch.arange(4)
>>> t
tensor([0, 1, 2, 3])
>>> torch.fft.rfft(t)
tensor([ 6.+0.j, -2.+2.j, -2.+0.j])

What I get when I try to reproduce it:

>>> import torch.fft
>>> t = torch.arange(4)
>>> t
tensor([0, 1, 2, 3])
>>> torch.fft.rfft(t)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'torch.fft' has no attribute 'rfft'

So I’ve got 2 questions:
1) Am I doing something wrong, or is there a problem with my PyTorch installation (I had no problems so far)? How can I successfully reproduce the doc examples?
2) Is it possible to use the half datatype with the torch.fft module? (It is discouraged for the torch.fft functions.)

Thanks in advance for any help. Thomas
st45342
Solved by ptrblck in post #5.
st45343
I still have the same issue. It would be really great if someone with PyTorch 1.7 could try the above examples, at least to know whether the torch.fft module is actually working or not. I’ve searched on Google and couldn’t find any example of the use of this module yet; as it is quite recent, maybe the documentation is wrong? Best regards, Thomas
st45344
Hi, on my machine with the latest stable build of PyTorch (1.7.0) I can reproduce all the examples you have mentioned. Maybe you should make sure you are using the correct build of PyTorch. Unfortunately I cannot test the Docker edition. Bests
st45345
Hi, thanks for testing. I will investigate, maybe test another Docker image or simply go for a workaround. In case I find something interesting I’ll leave a message here.
st45346
As @Nikronic said, update to the latest container, as 1.7.0a0+7036e91 is a pre-1.7.0 release and might thus be missing the methods.
st45347
Ok, thanks for the confirmation. I started using Docker quite recently and I find it interesting how it can create and manage separate environments; it is also a good tool to share an app together with its whole environment. So I searched for Docker images of PyTorch and found the NVIDIA deep learning framework containers (free of access, you only need to register your email). This catalog seems quite useful; all the releases are described there. The only sad thing is that all the images described there seem to be built with pre-release versions of PyTorch (for example, the PyTorch v1.7 images you can find are based on 1.7.0a0+7036e91, 1.7.0a0+8deb4fe, or 1.7.0a0+6392713). They did the same with the pre-release of v1.8 (1.8.0a0+17f8c32). Maybe the simplest solution would be to switch to Anaconda? Anyway, thanks @ptrblck and @Nikronic for your answers. I will mark this thread as solved as you pointed to the root cause of the problem, but don’t hesitate to leave a message if you have any further remarks.
st45348
You can always install the latest stable or nightly binaries from conda or pip. We are building the NGC container with the latest PyTorch master for this release. Since the containers go through our QA, the commit is delayed by approx. a month.
st45349
Hello, I am trying to perform a complex operation for which my implementation is very inefficient right now. I have a list of tuples of the same number of tensors of shape (batch, …) on the one hand, and a tensor of shape (batch, 1) representing indexes in the list. for instance with a batch size of 4: l = [(t11, t12),(t21, t22),(t31, t32)] # list of tuples of tensors and idx = [0,2,1,1] # index along batch dimension, pointing to positions in the list I want to create a tuple of tensors like (ti1, ti2), but where the components of ti1 and ti2 along the batch dimension are picked from corresponding tensors according to the index. The solution I came up with is this one: mod_tuple = tuple((torch.stack([l[i][itup][ibatch] for ibatch, i in enumerate(idx)]) for itup in range(tuple_size))) However with a batch size of 128 it takes a veeeeery long time. Could you help me find an efficient implementation please? Thanks in advance !
st45350
Would it be possible to create a tensor of l and directly index it e.g. via gather or are all t** tensors differently shaped?
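If the per-step tensors do share a shape, one way (a sketch with made-up sizes, not the poster's real data) is to stack the whole list into a single tensor and select one trajectory step per batch element with advanced indexing:

import torch

batch, feat, steps, tuple_size = 4, 7, 3, 2
# l[i][j] has shape [batch, feat]
l = [tuple(torch.randn(batch, feat) for _ in range(tuple_size)) for _ in range(steps)]
idx = torch.tensor([0, 2, 1, 1])                     # one list index per batch element

stacked = torch.stack([torch.stack(t) for t in l])   # [steps, tuple_size, batch, feat]
arange = torch.arange(batch)
mod_tuple = tuple(stacked[idx, j, arange] for j in range(tuple_size))
# each element of mod_tuple has shape [batch, feat],
# and element b comes from l[idx[b]][j][b]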
st45351
Thanks for the answer. The t1* tensors are possibly differently shaped. Actually (t11, t12) is an observation, the list is a trajectory of observations, and what I try to do is to build a new observation by picking a different point in the trajectory for each element of the batch because only one observation is interesting in each trajectory. Then I can do a single forward pass in my model. I guess it will be more efficient to do one forward pass per element of the trajectory, and then pick the relevant results instead of what I am trying to do here, I will try that.
st45352
Hey! I am loading the same network (before even training, right after the initialization) from a pickle file, but it has different parameters every time. I tried 2 different ways of saving and loading:

net = ...
with open('train.pickle', 'wb') as f:
    pickle.dump([net], f)

then:

with open('train.pickle', 'rb') as f:
    net1 = pickle.load(f)
with open('train.pickle', 'rb') as f:
    net2 = pickle.load(f)
net1 = net1[0]
net2 = net2[0]
print(net1.eval() == net2.eval())  # gives False
if net1.parameters() != net2.parameters():
    print(True)  # gives True

I also tried:

torch.save(model, PATH)
model = torch.load(PATH)

How can I fix this?
st45353
Hello. The best practice in PyTorch is to save a model's state_dict. Consider this minimal example:

import torch
import torch.nn as nn

m = nn.Linear(10, 2)
state = m.state_dict()
torch.save(state, 'm.pt')

m1 = nn.Linear(10, 2)
m2 = nn.Linear(10, 2)
for p1, p2 in zip(m1.parameters(), m2.parameters()):
    print(torch.all(p1 == p2))

prints:

tensor(False)
tensor(False)

and

state = torch.load('m.pt')
m1.load_state_dict(state)
m2.load_state_dict(state)
for p1, p2 in zip(m1.parameters(), m2.parameters()):
    print(torch.all(p1 == p2))

prints:

tensor(True)
tensor(True)
st45354
Thank you for your answer. But, it didn’t work. It gives true for a very few number of parameters, but not all of them. I am using google colab btw (if it affects anything).
st45355
I run this example in colab. Gives me all parameters of m1 and m2 are equal after loading the same state dict into them. Is running the exact code from my previous post gives you different results?
st45356
Yes, it works, except that if you add s = sum([np.prod(list(p.size())) for p in m1.parameters()]); print('Number of params: %d' % s), it prints 22. Which means that you have 22 parameters for each of m1 and m2, but only 2 of them are alike. And if m1.parameters() != m2.parameters(): print(True) prints True.
st45357
rony2: s = sum([np.prod(list(p.size())) for p in m1.parameters()]); print('Number of params: %d' % s). It prints 22. Which means that you have 22 parameters for each of m1 and m2, but only 2 of them are alike.

I don’t see how you can come to this conclusion from this code. It is just calculating the number of parameters in the m1 model; nothing in it compares m1 with m2.

rony2: m1.parameters() != m2.parameters()

Here you are comparing two generators (and as they are 2 different Python objects they are obviously not the same). You can see the type with type(m1.parameters()). That is why you need to iterate and compare the parameters themselves. Also, in this simple model you can just print the parameters of both models and make sure they are the same:

for p1, p2 in zip(m1.parameters(), m2.parameters()):
    print("m1")
    print(p1)
    print("m2")
    print(p2)
st45358
...has been modified by an inplace operation: [torch.FloatTensor [1]] is at version 2, expected version 1.

I know that this error implies that the computational graph has been used previously, and one should not use it twice. However, what does “[torch.FloatTensor [1]]” specify? And what do “version 2” and “version 1” mean? My code is quite long, but in short:

ref_grad = []
for i in range(2):
    layer_grad = utils.AverageMeter()
    k = 0
    for param in neuralnet.decoder.de_dense.parameters():
        if(k == 0 or k == 2):
            wrtt = param
            print(param.shape)
        k = k + 1
    layer_grad.avg = torch.zeros(wrtt.shape).to(device)
    ref_grad.append(layer_grad)

for j in range(2):
    k = 0
    for param in neuralnet.encoder.en_dense.parameters():
        if(k == 0 or k == 2):
            wrt = param
            print(param.shape)
        k = k + 1
    target_grad = torch.autograd.grad(recon_loss, wrt, create_graph=True, retain_graph=True)[0]
    print(j)
    grad_loss += -1*func.cosine_similarity(target_grad.view(-1,1), ref_grad[j].avg.view(-1,1), dim=0)

grad_loss = grad_loss/l_2
neuralnet.optimizer.zero_grad()
l_tot.backward(retain_graph=True)

If I remove target_grad and grad_loss, the code is functional (but useless). So obviously target_grad is the problem here. How do I fix this?
st45359
Hi, I know that this error implies that the computational graph has been used previously, and one should not use it twice. No it is not what this error means. This error means that a Tensor that is needed to compute the backward has been modified inplace. And so the backward cannot be computed anymore. However - what does “[torch.FloatTensor [1]]” specifiy? And what does “version 2” and “version 1” mean? This means that the faulty Tensor (that was modified inplace) is a float Tensor of shape [1] and is at version 2 (second time it has been modified inplace) while the autograd expect version 1 (the result after it was modified inplace once).
st45360
Hello. Thank you for answering. The code is functional if I remove target_grad = torch.autograd.grad(recon_loss, wrt, create_graph=True, retain_graph=True)[0]. What I understand is that this computes the gradient of recon_loss with respect to the tensor wrt and also retains the graph. Why does this interfere later in the code when I create my loss function l_tot and call l_tot.backward()?
st45361
I think the grad_loss might be the problematic Tensor as it is of size 1 and is modified inplace when you do +=. Could you change that to be out of place with grad_loss = grad_loss + ... to see if it fixes the problem?
st45362
Looks like an issue with a values saved by batchnorm? You might want to enable anomaly during the forward as proposed in the warning to know which one is faulty.
st45363
It is an error that happens quite often but there is no canonical solution as it depends on what your code does. Basically your code modifies inplace a Tensor that this batchnorm needs to compute its backward. You need to remove that inplace operation. But where that inplace is depends a lot on your code and what you do
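To locate that operation, anomaly detection can be enabled around the forward/backward pass, which makes the backward error point at the forward-side line that produced the offending tensor; model, inputs, criterion and target below are placeholders for your own code:

import torch

with torch.autograd.set_detect_anomaly(True):
    output = model(inputs)            # hypothetical forward pass
    loss = criterion(output, target)
    loss.backward()                   # the error now reports the forward stack trace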
st45364
Hi everyone, I am trying to run my code on multiple GPUs. According to this tutorial, it is as easy as passing my model into a function with the corresponding GPU IDs I would like to use. However, if I have a model that uses a custom forward method with a for loop, will that be handled correctly by the multiple GPUs? Also: where do I send the images and labels from the batches to? When using a single GPU I do something like:

for batch in loader:
    # unpack a batch and throw the images and labels on a specific device
    images = batch[0].to(device)
    labels = batch[1].to(device)

How does that look for multiple GPUs? Any help is very much appreciated! All the best, snowe
st45365
You can pass anything you want. The only constraint is that any single operation must run with all its operands on the same device. You can have a hybrid forward and mix devices as you want.
st45366
Thank you @JuanFMontesinos! But how do I specify certain (multiple) GPUs as my device? Say I have access to 8 GPUs but can only run my code on GPU 7 and 8 because the other ones are occupied. How do I tell PyTorch to just use GPU 7 and 8 and then also grab the images and labels from the corresponding GPU?
st45367
So, there are two options: you want to run an independent process on each GPU (like two training pipelines in parallel), or you want to use several GPUs to train a single experiment. I think you are asking about case 2, for which there are again two options.

The easiest one is using the CUDA environment variable CUDA_VISIBLE_DEVICES. This environment variable can be used for any process in your OS. What it does is mask the devices so that the called process can only see the GPUs that you allow. Example:

CUDA_VISIBLE_DEVICES=7,8 python3 run_exp.py

Another option is letting the process see all 8 GPUs and choosing which ones you want to parallelize over. The syntax of DataParallel is:

torch.nn.DataParallel(module, device_ids=None, output_device=None, dim=0)

You can pass device_ids=[7,8]. The former approach is preferred since there is less chance you mess it up. Note that when you set CUDA_VISIBLE_DEVICES=7,8, PyTorch will only see two GPUs, so their indices inside Python will be 0,1 instead of 7,8.

In short, this module automatically allocates your inputs to the model, splitting the batch into as many chunks as GPUs you have chosen, so you don’t really need to take care of allocating them. You just need to ensure that everything inside your nn.Module has been written without hard-coded devices. That is, if you need to create a tensor of zeros, do something like var = torch.zeros(10).to(input.device) instead of var = torch.zeros(10).cuda().
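Putting it together, a minimal sketch of the loop; the device ids assume the CUDA_VISIBLE_DEVICES masking described above (so the two visible GPUs are 0 and 1), and model, loader and criterion are placeholders for your own objects:

import torch
import torch.nn as nn

device = torch.device("cuda:0")
model = nn.DataParallel(model, device_ids=[0, 1]).to(device)  # replicas on both GPUs

for images, labels in loader:
    images = images.to(device)     # send the full batch to the primary device;
    labels = labels.to(device)     # DataParallel scatters it across the replicas
    outputs = model(images)
    loss = criterion(outputs, labels)
    loss.backward()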