st47968
Since you write in-place into it a Tensor that requires gradient, it will work fine. You only need to set requires_grad=True explicitly for a Tensor if you plan to access its .grad field or ask for gradients for it in autograd.grad.
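A minimal sketch of both points (not from the thread): an in-place write of a tracked result into a plain buffer still lets gradients flow, and requires_grad=True is only needed explicitly when you want gradients for that tensor itself.

import torch

w = torch.randn(3, requires_grad=True)
buf = torch.zeros(3)          # plain buffer, requires_grad=False
buf[:] = w * 2                # in-place write of a result that requires grad
buf.sum().backward()          # gradients still flow back to w
print(w.grad)                 # tensor([2., 2., 2.])

# requires_grad=True is needed explicitly only to get gradients for that tensor itself
x = torch.randn(3, requires_grad=True)
y = (x ** 2).sum()
print(torch.autograd.grad(y, x))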
st47969
How do you construct a tensor from data in C++ API? Python: x = torch.Tensor([[1,2],[2,3],[3,4],[4,-1],[5,-5]]) C++: std::vector<torch::jit::IValue> inputs; auto input = torch::zeros({5,5}); input[0][0] = 1; // manually copy values one by one input[0][1] = 2; inputs.push_back(input);
st47971
The problem is in the Python. The torch.Tensor constructor has been deprecated in PyTorch 0.4 or so. The new way is using torch.tensor instead, and that has an equivalent in C++.
st47972
Hi! I’m trying to create a network with multiple parallel architectures, that in the end are joined by common layers. However, the parallel architectures’ parameters are not registered with the model. What’s the best way to do that so they are registered automatically, or register them manually? The number of the parallel architectures may vary, so I can’t just give each Sequential its own field.

class Dummy(nn.Module):
    def __init__(self):
        super(Dummy, self).__init__()
        self.model = []
        for _ in range(3):
            layers = []
            for i in range(5):
                layers.append(nn.Linear(28**2, 28**2))
                layers.append(nn.Tanh())
            self.model.append(nn.Sequential(*layers))
        self.linear = nn.Linear(28**2 * 5, 10)

    def forward(self, x):
        channels_out = []
        for c in self.model:
            channels_out.append(c(x))
        out = self.linear(torch.cat(channels_out, dim=-1))
        return out
st47973
Any nn.Module assigned directly as a field on your Dummy class will be registered, but a plain Python list of modules will not be, which is exactly why your parameters are missing. So in your case you could wrap the list in an nn.ModuleList (keeping the same structure otherwise), or give each Sequential its own attribute; either way the submodules and their parameters get registered automatically.
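A minimal sketch of that change, keeping the structure from the question (the number of branches stays a constructor argument since it may vary):

import torch
import torch.nn as nn

class Dummy(nn.Module):
    def __init__(self, n_parallel=3):
        super().__init__()
        branches = []
        for _ in range(n_parallel):
            layers = []
            for _ in range(5):
                layers += [nn.Linear(28**2, 28**2), nn.Tanh()]
            branches.append(nn.Sequential(*layers))
        # nn.ModuleList registers every branch (and its parameters) with the model
        self.model = nn.ModuleList(branches)
        self.linear = nn.Linear(28**2 * n_parallel, 10)

    def forward(self, x):
        channels_out = [branch(x) for branch in self.model]
        return self.linear(torch.cat(channels_out, dim=-1))

# len(list(Dummy().parameters())) now includes all branch parameters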
st47974
I was looking to do some audio classification with PyTorch. I’ve found this tutorial but its code is removed. Why’s that?
st47975
The tutorial was removed in this PR over a year ago and I guess it might have been outdated by then. CC @vincentqb who might know more.
st47976
I’ve noticed the tutorial uses some deprecated methods from torchaudio. This could be the reason. However, it’s almost trivial to convert the old code (which can be found here) to a compatible version (which I did).
st47977
I’m trying to use MSE loss on a batch the following way: my CNN’s output is a vector of 32 samples. So, for example, if my batch size is 4, I’ll have an output of 4x32 samples. Each output vector needs its loss calculated against another vector. Then, I want to take each vector, apply the backward function on it, and so on. The code as it is right now:

loss_criterion = nn.MSELoss(reduction='none')
...
for batch_idx, data in enumerate(training_loader, 0):
    optimizerS.zero_grad()
    simulated_spec = netS(data['image_tensor'], batch_size)
    S_loss = loss_criterion(data['metadata_tensor'], simulated_spec)
    S_loss.backward()
    optimizerS.step()

S_loss is now a tensor of shape (batch_size, 32), and I get a runtime error: raise RuntimeError("grad can be implicitly created only for scalar outputs"). How can this be solved, and how do I get the MSE loss for every group of 32 samples?
st47978
You would just take the sum or the mean over the batch to get the total loss. i.e. S_loss.mean() or S_loss.sum(). I’m guessing if your total loss is supposed to be MSE you would be taking the mean.
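A minimal runnable sketch of that fix; the tensors here are stand-ins for the network output and target from the question:

import torch
import torch.nn as nn

loss_criterion = nn.MSELoss(reduction='none')
simulated_spec = torch.randn(4, 32, requires_grad=True)  # stand-in for netS(...) output
target = torch.randn(4, 32)                              # stand-in for data['metadata_tensor']

S_loss = loss_criterion(simulated_spec, target)   # shape [4, 32]
per_vector_loss = S_loss.mean(dim=1)              # MSE for each 32-sample vector
per_vector_loss.mean().backward()                 # reduce to a scalar before backward()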
st47979
Hi, I am trying to train on a dataset taken from TACO (trash images in COCO format). The network I used is called ESNet (from GitHub). I am new to using datasets in COCO format, so I don’t understand why I get an error when attempting to train.
st47980
my train code is:

params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=0.0005)
len_dataloader = len(data_loader)

for epoch in range(num_epochs):
    model.train()
    i = 0
    for imgs, annotations in data_loader:
        i += 1
        imgs = list(img.to(device) for img in imgs)
        annotations = [{k: v.to(device) for k, v in t.items()} for t in annotations]
        loss_dict = model(imgs, annotations)
        losses = sum(loss for loss in loss_dict.values())
        optimizer.zero_grad()
        losses.backward()
        optimizer.step()
        print(f'Iteration: {i}/{len_dataloader}, Loss: {losses}')
st47981
and this is the error I got: TypeError Traceback (most recent call last) <ipython-input-9-2a363c197ef5> in <module>() 12 imgs = list(img.to(device) for img in imgs) 13 annotations = [{k: v.to(device) for k, v in t.items()} for t in annotations] ---> 14 loss_dict = model(imgs, annotations) 15 losses = sum(loss for loss in loss_dict.values()) 16 /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), TypeError: forward() takes 2 positional arguments but 3 were given
st47982
Hi, From the error I guess that the model that you have only takes imgs as input and does not expect the annotations?
st47983
I was thinking the same, but I don’t have any idea how to modify my network so that it can handle annotations too. Can you help me out? I don’t know if I should add layers or what.
st47984
by the way this is the network that I used: ################################################################################################### #ESNet: An Efficient Symmetric Network for Real-time Semantic Segmentation #Paper-Link: https://arxiv.org/pdf/1906.09826.pdf ################################################################################################### import torch import torch.nn as nn import torch.nn.init as init import torch.nn.functional as F from torchsummary import summary class DownsamplerBlock(nn.Module): def __init__(self, ninput, noutput): super().__init__() self.conv = nn.Conv2d(ninput, noutput-ninput, (3, 3), stride=2, padding=1, bias=True) self.pool = nn.MaxPool2d(2, stride=2) self.bn = nn.BatchNorm2d(noutput, eps=1e-3) self.relu = nn.ReLU() def forward(self, input): x1 = self.pool(input) x2 = self.conv(input) diffY = x2.size()[2] - x1.size()[2] diffX = x2.size()[3] - x1.size()[3] x1 = F.pad(x1, [diffX // 2, diffX - diffX // 2, diffY // 2, diffY - diffY // 2]) output = torch.cat([x2, x1], 1) #torch.Size([2, 16, 50, 50]), torch.Size([2, 64, 25, 25]), torch.Size([2, 128, 13, 13]) output = self.bn(output) #torch.Size([2, 16, 50, 50]), torch.Size([2, 64, 25, 25]), torch.Size([2, 128, 13, 13]) output = self.relu(output) return output class UpsamplerBlock (nn.Module): def __init__(self, ninput, noutput): super().__init__() self.conv = nn.ConvTranspose2d(ninput, noutput, 3, stride=2, padding=1, output_padding=1, bias=True) self.bn = nn.BatchNorm2d(noutput, eps=1e-3) def forward(self, input): output = self.conv(input) output = self.bn(output) return F.relu(output) class FCU(nn.Module): def __init__(self, chann, kernel_size,dropprob, dilated): """ Factorized Convolution Unit """ super(FCU,self).__init__() padding = int((kernel_size-1)//2) * dilated self.conv3x1_1 = nn.Conv2d(chann, chann, (kernel_size,1), stride=1, padding=(int((kernel_size-1)//2)*1,0), bias=True) self.conv1x3_1 = nn.Conv2d(chann, chann, (1,kernel_size), stride=1, padding=(0,int((kernel_size-1)//2)*1), bias=True) self.bn1 = nn.BatchNorm2d(chann, eps=1e-03) self.conv3x1_2 = nn.Conv2d(chann, chann, (kernel_size,1), stride=1, padding=(padding,0), bias=True, dilation = (dilated,1)) self.conv1x3_2 = nn.Conv2d(chann, chann, (1,kernel_size), stride=1, padding=(0,padding), bias=True, dilation = (1, dilated)) self.bn2 = nn.BatchNorm2d(chann, eps=1e-03) self.relu = nn.ReLU(inplace = True) self.dropout = nn.Dropout2d(dropprob) def forward(self, input): residual = input output = self.conv3x1_1(input) output = self.relu(output) output = self.conv1x3_1(output) output = self.bn1(output) output = self.relu(output) output = self.conv3x1_2(output) output = self.relu(output) output = self.conv1x3_2(output) output = self.bn2(output) if (self.dropout.p != 0): output = self.dropout(output) return F.relu(residual+output,inplace=True) class PFCU(nn.Module): def __init__(self,chann): """ Parallel Factorized Convolution Unit """ super(PFCU,self).__init__() self.conv3x1_1 = nn.Conv2d(chann, chann, (3,1), stride=1, padding=(1,0), bias=True) self.conv1x3_1 = nn.Conv2d(chann, chann, (1,3), stride=1, padding=(0,1), bias=True) self.bn1 = nn.BatchNorm2d(chann, eps=1e-03) self.conv3x1_22 = nn.Conv2d(chann, chann, (3,1), stride=1, padding=(2,0), bias=True, dilation = (2,1)) self.conv1x3_22 = nn.Conv2d(chann, chann, (1,3), stride=1, padding=(0,2), bias=True, dilation = (1,2)) self.conv3x1_25 = nn.Conv2d(chann, chann, (3,1), stride=1, padding=(5,0), bias=True, dilation = (5,1)) self.conv1x3_25 = nn.Conv2d(chann, chann, (1,3), stride=1, 
padding=(0,5), bias=True, dilation = (1,5)) self.conv3x1_29 = nn.Conv2d(chann, chann, (3,1), stride=1, padding=(9,0), bias=True, dilation = (9,1)) self.conv1x3_29 = nn.Conv2d(chann, chann, (1,3), stride=1, padding=(0,9), bias=True, dilation = (1,9)) self.bn2 = nn.BatchNorm2d(chann, eps=1e-03) self.dropout = nn.Dropout2d(0.3) def forward(self, input): residual = input output = self.conv3x1_1(input) output = F.relu(output) output = self.conv1x3_1(output) output = self.bn1(output) output = F.relu(output) output2 = self.conv3x1_22(output) output2 = F.relu(output2) output2 = self.conv1x3_22(output2) output2 = self.bn2(output2) if (self.dropout.p != 0): output2 = self.dropout(output2) output5 = self.conv3x1_25(output) output5 = F.relu(output5) output5 = self.conv1x3_25(output5) output5 = self.bn2(output5) if (self.dropout.p != 0): output5 = self.dropout(output5) output9 = self.conv3x1_29(output) output9 = F.relu(output9) output9 = self.conv1x3_29(output9) output9 = self.bn2(output9) if (self.dropout.p != 0): output9 = self.dropout(output9) return F.relu(residual+output2+output5+output9,inplace=True) class ESNet(nn.Module): def __init__(self, classes): super().__init__() #-----ESNET---------# self.initial_block = DownsamplerBlock(3, 16) self.layers = nn.ModuleList() for x in range(0, 3): self.layers.append(FCU(16, 3, 0.03, 1)) self.layers.append(DownsamplerBlock(16,64)) for x in range(0, 2): self.layers.append(FCU(64, 5, 0.03, 1)) self.layers.append(DownsamplerBlock(64,128)) for x in range(0, 3): self.layers.append(PFCU(chann=128)) self.layers.append(UpsamplerBlock(128,64)) self.layers.append(FCU(64, 5, 0, 1)) self.layers.append(FCU(64, 5, 0, 1)) self.layers.append(UpsamplerBlock(64,16)) self.layers.append(FCU(16, 3, 0, 1)) self.layers.append(FCU(16, 3, 0, 1)) self.output_conv = nn.ConvTranspose2d( 16, 8, 2, stride=2, padding=0, output_padding=0, bias=True) self.hidden = nn.Linear(8*104*104, 208) self.out = nn.Linear(208, classes) self.act = nn.ReLU() def forward(self, input): output = self.initial_block(input) print(input.shape) for layer in self.layers: output = layer(output) output = self.output_conv(output) output = output.view(output.size(0), -1) output = self.act(self.hidden(output)) output = self.out(output) print(output.shape) return output """print layers and params of network""" if __name__ == '__main__': device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = ESNet(classes=60).to(device) summary(model,(3,100,100))
st47985
If you re-use a net that you took online, then you either need to find a different one or modify it directly. Modifying it to take annotations is a broad question. It depends on what the annotations are, how you should read them, what you’re trying to do with them, etc. It’s basically like designing a brand new net.
st47986
Oh, okay. Thank you for answering. I will try to understand the network and then modify it to handle annotations. Thanks!
st47987
I have a similar problem to this issue on GitHub, however no solution is proposed there. The proposed example code:

slize = [1, 2, 3, 4]
x = torch.randn(10, requires_grad=True)
y = x[slize]
y.sum().backward()
# breaks the second time backward is called
y.sum().backward()  # raises RuntimeError

In my case I have a large tensor and I want to iterate gradient descent steps on mini batches. Example:

old_log_probs = ...  # size [8000, 1]
for epoch in range(epoch):
    ...
    for batch_idx in rollout.shuffle_index(batch_size=8):
        loss = ppo_loss(
            new_log_probs,                 # size [8, 1]
            old_log_probs[batch_index],    # size [8, 1]
            advantages[batch_index],       # size [8, 1]
            clip=0.2
        )

Because when we get the subtensor with a list, it is a copy and not a view. Is it possible to get a subtensor with random indexes as a view?
st47988
Hi, There is a solution: set retain_graph=True in the first call to backward if you want to backprop through it again later. This behavior is expected.
st47989
Hi, I tried the solution you proposed but it doesn’t work. Do you have other ideas? Here is my error log when retain_graph=True:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [128, 2]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

And when calling loss.backward() without retain_graph:

RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling backward the first time.
st47990
AdilZouitine: Hi, I tried the solution you proposed but it doesn’t work. It does work; it is just that the code has another, unrelated problem. Running the updated sample works fine:

import torch
slize = [1, 2, 3, 4]
x = torch.randn(10, requires_grad=True)
y = x[slize]
y.sum().backward(retain_graph=True)
y.sum().backward()  # runs fine

As the error mentions, you modify in-place a Tensor whose value is required for the backward computation. You can follow the instructions in the error message to get a pointer to the faulty op and replace the in-place op there with an out-of-place one.
st47991
I want to draw the loss per epoch from the example https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py The log file is:

Out:
[1, 2000] loss: 2.173
[1, 4000] loss: 1.839
[1, 6000] loss: 1.659
[1, 8000] loss: 1.600
[1, 10000] loss: 1.533
[1, 12000] loss: 1.468
[2, 2000] loss: 1.395
[2, 4000] loss: 1.378
[2, 6000] loss: 1.368
[2, 8000] loss: 1.340
[2, 10000] loss: 1.316
[2, 12000] loss: 1.307
Finished Training

where the first column is the epoch number. So if I want to draw the loss per epoch, do I need to average the losses that share the same epoch number? That would be:

Epoch  Loss
1      (2.173+1.839+1.659+1.600+1.533+1.468)/6
2      ...

Is there a simpler way to do this in PyTorch?
st47992
for epoch in range(2):  # loop over the dataset multiple times

    epoch_loss = 0.0
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        epoch_loss += outputs.shape[0] * loss.item()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

    # print epoch loss
    print(epoch + 1, epoch_loss / len(trainset))

print('Finished Training')
st47993
@klory Why do you multiply loss.item() by the first dimension of the outputs tensor? This seems odd to me. epoch_loss += outputs.shape[0] * loss.item()
st47994
Inside the definition of criterion: https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss
st47995
I really couldn’t understand this for a long time. I think what Klory is trying to say is this: if you look at most loss functions (e.g. CrossEntropyLoss) you will see that reduction="mean" is the default. This means that the loss is calculated for each item in the batch, summed, and then divided by the size of the batch. If you want to compute the total loss (without the averaging) you need to multiply the mean loss returned by criterion() by the batch size, which is outputs.shape[0].
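A small sketch of that arithmetic with toy numbers (not from the thread): weighting each per-batch mean by its batch size recovers the exact per-sample epoch mean even when the last batch is smaller.

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()              # reduction='mean' by default
epoch_loss, n_samples = 0.0, 0
for batch_size in (32, 32, 17):                # last batch is smaller
    outputs = torch.randn(batch_size, 10)
    labels = torch.randint(0, 10, (batch_size,))
    loss = criterion(outputs, labels)          # mean over this batch only
    epoch_loss += outputs.shape[0] * loss.item()   # undo the per-batch averaging
    n_samples += batch_size
print(epoch_loss / n_samples)                  # true per-sample mean over the epoch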
st47996
I am trying to execute the retinanet model included in torchvision on an android mobile with Pytorch Mobile. When using the following snippet : import torch import torchvision from torch.utils.mobile_optimizer import optimize_for_mobile model = torchvision.models.detection.retinanet_resnet50_fpn(pretrained=True) model.eval() traced_script_module = torch.jit.script(model) traced_script_module_optimized = optimize_for_mobile(traced_script_module) I get the following error : File "xxx.py", line 28, in <module> traced_script_module_optimized = optimize_for_mobile(traced_script_module) File "yyy/torch/utils/mobile_optimizer.py", line 43, in optimize_for_mobile optimized_cpp_module = torch._C._jit_pass_optimize_for_mobile(script_module._c, optimization_blocklist, preserved_methods) RuntimeError: node->kind() == prim::GetAttr INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1603729096996/work/torch/csrc/jit/passes/freeze_module.cpp":22, please report a bug to PyTorch. Expected prim::GetAttr nodes Is retinanet incompatible with optimize_for_mobile() or is it a bug ? Is there any workaround ?
st47997
Hi, I pretrained a custom model with 402 targets. Now I need to transfer those weights to a model with 206 targets. How can I do that? This is what I am doing right now: model.load_state_dict(torch.load(f"FOLD{fold}_.pth"), strict=False) But it is not working; it shows a size mismatch error.
st47998
If you mean a subset of targets, something like this:

d = torch.load(f"FOLD{fold}_.pth")
for suffix in (".weight", ".bias"):
    key = PREFIX + suffix
    d[key] = map_402_to_206(d[key])
model.load_state_dict(d)

PREFIX is the output linear layer’s key. map_402_to_206 should reflect how you select the subset; the simplest case would be a subrange: d[key] = d[key][:206]. If you instead want to reuse the hidden layers on a different set of targets, delete the above keys instead and use strict=False.
st47999
Thanks for your response. I actually want to use it for a different set of targets, and I am using strict=False, but it is throwing a size mismatch error.
st48000
for i in range(len(model_source.layers[:-1])):
    model_dest.layers[i].set_weights(model_source.layers[i].get_weights())
return model_dest

This is the Keras equivalent of what I want to do.
st48001
Did you follow @googlebot’s suggestion and delete the key(s) before using strict=False? The strict=False argument ignores unexpected or missing keys, not shape mismatch errors.
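A sketch of how that could look, continuing the snippet from the question (model and fold as defined there). The "out.*" key names are hypothetical, so inspect list(state_dict.keys()) to find the real name of your output layer:

state_dict = torch.load(f"FOLD{fold}_.pth")     # pretrained on 402 targets
for key in ("out.weight", "out.bias"):          # hypothetical output-layer key names
    state_dict.pop(key, None)                   # drop the shape-mismatched tensors
result = model.load_state_dict(state_dict, strict=False)
print(result.missing_keys)                      # only the new output layer should be listed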
st48002
You can delete key-value pairs from an OrderedDict using del as seen here:

d = OrderedDict()
d['a'] = 1
d['b'] = 2
print(d)
del d['a']
print(d)

However, @googlebot already posted a solution where you would copy some of the pretrained parameters into the state_dict.
st48003
You can just explore the Python dictionary, e.g. list(d.keys()); it will contain [sub]module names as assigned in __init__, and deleting is something like: del d["output_layer.weight"]
st48004
Hi, I am reading an image using both matplotlib imread and an ImageFolder dataloader. When plotting, the output looks okay for imread, but the ImageFolder output shows the image tiled 9 times for a single image, even though the shape looks okay (9 copies of the same image with different channels, I suppose, as it is not in color anymore).

TEST_DATA_PATH = "./data"
TRANSFORM_IMG = transforms.Compose([
    transforms.ToTensor(),
])
test_data = torchvision.datasets.ImageFolder(root=TEST_DATA_PATH, transform=TRANSFORM_IMG)
test_data_loader = data.DataLoader(test_data, batch_size=2, shuffle=False)
for i, data in enumerate(test_data_loader):
    images, labels = data
    plt.imshow(images[0])
    plt.show()
st48005
Are you sure the shape is OK?

szahan:
TEST_DATA_PATH = "./data"
TRANSFORM_IMG = transforms.Compose([
    transforms.ToTensor(),
])
test_data = torchvision.datasets.ImageFolder(root=TEST_DATA_PATH, transform=TRANSFORM_IMG)
test_data_loader = data.DataLoader(test_data, batch_size=2, shuffle=False)
for i, data in enumerate(test_data_loader):
    images, labels = data

This will give you an image of shape [C, H, W], while plt.imread(img_path) will give you an image of shape [H, W, C]. If you want the image tensor you get from your dataloader to be of the same shape, then you have to use .permute(1, 2, 0) on your tensor. In your example this could be done here: plt.imshow(images[0].permute(1, 2, 0)). Please also keep in mind that .ToTensor() puts your image data into the range [0, 1] by dividing by 255.
st48006
RaLo4: .permute(1, 2, 0) Thanks a lot. I was trying to change the shape using view/reshape but permute did the trick.
st48007
I trained the efficientDet with DistributedDataParallel model = EfficientDet(num_classes=args.num_class, network=args.network, W_bifpn=EFFICIENTDET[args.network]['W_bifpn'], D_bifpn=EFFICIENTDET[args.network]['D_bifpn'], D_class=EFFICIENTDET[args.network]['D_class'] ) if(args.resume is not None): model.load_state_dict(checkpoint['state_dict']) del checkpoint if args.distributed: # For multiprocessing distributed, DistributedDataParallel constructor # should always set the single device scope, otherwise, # DistributedDataParallel will use all available devices. if args.gpu is not None: print('Gpu setting...',args.gpu) torch.cuda.set_device(args.gpu) model.cuda(args.gpu) # When using a single GPU per process and per # DistributedDataParallel, we need to divide the batch size # ourselves based on the total number of GPUs we have args.batch_size = int(args.batch_size / ngpus_per_node) args.workers = int((args.workers + ngpus_per_node - 1) / ngpus_per_node) model = torch.nn.parallel.DistributedDataParallel( model, device_ids=[args.gpu] #,output_device=[args.gpu] ,find_unused_parameters=True) #model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu]) print('Run with DistributedDataParallel with divice_ids....A') #modify #model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu]) then for the evaluation I load the weight with if(args.weight is not None): resume_path = str(args.weight) print("Loading checkpoint: {} ...".format(resume_path)) checkpoint = torch.load( args.weight, map_location=lambda storage, loc: storage) params = checkpoint['parser'] args.num_class = params.num_class args.network = params.network model = EfficientDet( num_classes=args.num_class, network=args.network, W_bifpn=EFFICIENTDET[args.network]['W_bifpn'], D_bifpn=EFFICIENTDET[args.network]['D_bifpn'], D_class=EFFICIENTDET[args.network]['D_class'], is_training=False, threshold=args.threshold, iou_threshold=args.iou_threshold) model.load_state_dict(checkpoint['state_dict']) model = model.cuda() then got the error Traceback (most recent call last): File "demokogas.py", line 150, in <module> detect = Detect(weights=args.weight) File "demokogas.py", line 74, in __init__ self.model.load_state_dict(state_dict) File "/home/jake/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1045, in load_state_dict self.__class__.__name__, "\n\t".join(error_msgs))) RuntimeError: Error(s) in loading state_dict for EfficientDet: Missing key(s) in state_dict: "backbone._conv_stem.weight", "backbone._bn0.weight", "backbone._bn0.bias", "backbone._bn0.running_mean", "backbone._bn0.running_var", "backbone._blocks.0._depthwise_conv.weight", "backbone._blocks.0._bn1.weight", "backbone._blocks.0._bn1.bias", "backbone._blocks.0._bn1.running_mean", "backbone._blocks.0._bn1.running_var", "backbone._blocks.0._se_reduce.weight", "backbone._blocks.0._se_reduce.bias", "backbone._blocks.0._se_expand.weight", "backbone._blocks.0._se_expand.bias", "backbone._blocks.0._project_conv.weight", "backbone._blocks.0._bn2.weight", "backbone._blocks.0._bn2.bias", "backbone._blocks.0._bn2.running_mean", "backbone._blocks.0._bn2.running_var", "backbone._blocks.1._expand_conv.weight", "backbone._blocks.1._bn0.weight", "backbone._blocks.1._bn0.bias", "backbone._blocks.1._bn0.running_mean", "backbone._blocks.1._bn0.running_var", "backbone._blocks.1._depthwise_conv.weight", "backbone._blocks.1._bn1.weight", "backbone._blocks.1._bn1.bias", "backbone._blocks.1._bn1.running_mean", "backbone._blocks.1._bn1.running_var", 
"backbone._blocks.1._se_reduce.weight", "backbone._blocks.1._se_reduce.bias", "backbone._blocks.1._se_expand.weight", "backbone._blocks.1._se_expand.bias", "backbone._blocks.1._project_conv.weight", "backbone._blocks.1._bn2.weight", "backbone._blocks.1._bn2.bias", "backbone._blocks.1._bn2.running_mean", "backbone._blocks.1._bn2.running_var", "backbone._blocks.2._expand_conv.weight", "backbone._blocks.2._bn0.weight", "backbone._blocks.2._bn0.bias", "backbone._blocks.2._bn0.running_mean", "backbone._blocks.2._bn0.running_var", "backbone._blocks.2._depthwise_conv.weight", "backbone._blocks.2._bn1.weight", "backbone._blocks.2._bn1.bias", "backbone._blocks.2._bn1.running_mean", "backbone._blocks.2._bn1.running_var", "backbone._blocks.2._se_reduce.weight", "backbone._blocks.2._se_reduce.bias", "backbone._blocks.2._se_expand.weight", "backbone._blocks.2._se_expand.bias", "backbone._blocks.2._project_conv.weight", "backbone._blocks.2._bn2.weight", "backbone._blocks.2._bn2.bias", "backbone._blocks.2._bn2.running_mean", "backbone._blocks.2._bn2.running_var", "backbone._blocks.3._expand_conv.weight", "backbone._blocks.3._bn0.weight", "backbone._blocks.3._bn0.bias", "backbone._blocks.3._bn0.running_mean", "backbone._blocks.3._bn0.running_var", "backbone._blocks.3._depthwise_conv.weight", "backbone._blocks.3._bn1.weight", "backbone._blocks.3._bn1.bias", "backbone._blocks.3._bn1.running_mean", "backbone._blocks.3._bn1.running_var", "backbone._blocks.3._se_reduce.weight", "backbone._blocks.3._se_reduce.bias", "backbone._blocks.3._se_expand.weight", "backbone._blocks.3._se_expand.bias", "backbone._blocks.3._project_conv.weight", "backbone._blocks.3._bn2.weight", "backbone._blocks.3._bn2.bias", "backbone._blocks.3._bn2.running_mean", "backbone._blocks.3._bn2.running_var", "backbone._blocks.4._expand_conv.weight", "backbone._blocks.4._bn0.weight", "backbone._blocks.4._bn0.bias", "backbone._blocks.4._bn0.running_mean", "backbone._blocks.4._bn0.running_var", "backbone._blocks.4._depthwise_conv.weight", "backbone._blocks.4._bn1.weight", "backbone._blocks.4._bn1.bias", "backbone._blocks.4._bn1.running_mean", "backbone._blocks.4._bn1.running_var", "backbone._blocks.4._se_reduce.weight", "backbone._blocks.4._se_reduce.bias", "backbone._blocks.4._se_expand.weight", "backbone._blocks.4._se_expand.bias", "backbone._blocks.4._project_conv.weight", "backbone._blocks.4._bn2.weight", "backbone._blocks.4._bn2.bias", "backbone._blocks.4._bn2.running_mean", "backbone._blocks.4._bn2.running_var", "backbone._blocks.5._expand_conv.weight", "backbone._blocks.5._bn0.weight", "backbone._blocks.5._bn0.bias", "backbone._blocks.5._bn0.running_mean", "backbone._blocks.5._bn0.running_var", "backbone._blocks.5._depthwise_conv.weight", "backbone._blocks.5._bn1.weight", "backbone._blocks.5._bn1.bias", "backbone._blocks.5._bn1.running_mean", "backbone._blocks.5._bn1.running_var", "backbone._blocks.5._se_reduce.weight", "backbone._blocks.5._se_reduce.bias", "backbone._blocks.5._se_expand.weight", "backbone._blocks.5._se_expand.bias", "backbone._blocks.5._project_conv.weight", "backbone._blocks.5._bn2.weight", "backbone._blocks.5._bn2.bias", "backbone._blocks.5._bn2.running_mean", "backbone._blocks.5._bn2.running_var", "backbone._blocks.6._expand_conv.weight", "backbone._blocks.6._bn0.weight", "backbone._blocks.6._bn0.bias", "backbone._blocks.6._bn0.running_mean", "backbone._blocks.6._bn0.running_var", "backbone._blocks.6._depthwise_conv.weight", "backbone._blocks.6._bn1.weight", "backbone._blocks.6._bn1.bias", 
"backbone._blocks.6._bn1.running_mean", "backbone._blocks.6._bn1.running_var", "backbone._blocks.6._se_reduce.weight", "backbone._blocks.6._se_reduce.bias", "backbone._blocks.6._se_expand.weight", "backbone._blocks.6._se_expand.bias", "backbone._blocks.6._project_conv.weight", "backbone._blocks.6._bn2.weight", "backbone._blocks.6._bn2.bias", "backbone._blocks.6._bn2.running_mean", "backbone._blocks.6._bn2.running_var", "backbone._blocks.7._expand_conv.weight", "backbone._blocks.7._bn0.weight", "backbone._blocks.7._bn0.bias", "backbone._blocks.7._bn0.running_mean", "backbone._blocks.7._bn0.running_var", "backbone._blocks.7._depthwise_conv.weight", "backbone._blocks.7._bn1.weight", "backbone._blocks.7._bn1.bias", "backbone._blocks.7._bn1.running_mean", "backbone._blocks.7._bn1.running_var", "backbone._blocks.7._se_reduce.weight", "backbone._blocks.7._se_reduce.bias", "backbone._blocks.7._se_expand.weight", "backbone._blocks.7._se_expand.bias", "backbone._blocks.7._project_conv.weight", "backbone._blocks.7._bn2.weight", "backbone._blocks.7._bn2.bias", "backbone._blocks.7._bn2.running_mean", "backbone._blocks.7._bn2.running_var", "backbone._blocks.8._expand_conv.weight", "backbone._blocks.8._bn0.weight", "backbone._blocks.8._bn0.bias", "backbone._blocks.8._bn0.running_mean", "backbone._blocks.8._bn0.running_var", "backbone._blocks.8._depthwise_conv.weight", "backbone._blocks.8._bn1.weight", "backbone._blocks.8._bn1.bias", "backbone._blocks.8._bn1.running_mean", "backbone._blocks.8._bn1.running_var", "backbone._blocks.8._se_reduce.weight", "backbone._blocks.8._se_reduce.bias", "backbone._blocks.8._se_expand.weight", "backbone._blocks.8._se_expand.bias", "backbone._blocks.8._project_conv.weight", "backbone._blocks.8._bn2.weight", "backbone._blocks.8._bn2.bias", "backbone._blocks.8._bn2.running_mean", "backbone._blocks.8._bn2.running_var", "backbone._blocks.9._expand_conv.weight", "backbone._blocks.9._bn0.weight", "backbone._blocks.9._bn0.bias", "backbone._blocks.9._bn0.running_mean", "backbone._blocks.9._bn0.running_var", "backbone._blocks.9._depthwise_conv.weight", "backbone._blocks.9._bn1.weight", "backbone._blocks.9._bn1.bias", "backbone._blocks.9._bn1.running_mean", "backbone._blocks.9._bn1.running_var", "backbone._blocks.9._se_reduce.weight", "backbone._blocks.9._se_reduce.bias", "backbone._blocks.9._se_expand.weight", "backbone._blocks.9._se_expand.bias", "backbone._blocks.9._project_conv.weight", "backbone._blocks.9._bn2.weight", "backbone._blocks.9._bn2.bias", "backbone._blocks.9._bn2.running_mean", "backbone._blocks.9._bn2.running_var", "backbone._blocks.10._expand_conv.weight", "backbone._blocks.10._bn0.weight", "backbone._blocks.10._bn0.bias", "backbone._blocks.10._bn0.running_mean", "backbone._blocks.10._bn0.running_var", "backbone._blocks.10._depthwise_conv.weight", "backbone._blocks.10._bn1.weight", "backbone._blocks.10._bn1.bias", "backbone._blocks.10._bn1.running_mean", "backbone._blocks.10._bn1.running_var", "backbone._blocks.10._se_reduce.weight", "backbone._blocks.10._se_reduce.bias", "backbone._blocks.10._se_expand.weight", "backbone._blocks.10._se_expand.bias", "backbone._blocks.10._project_conv.weight", "backbone._blocks.10._bn2.weight", "backbone._blocks.10._bn2.bias", "backbone._blocks.10._bn2.running_mean", "backbone._blocks.10._bn2.running_var", "backbone._blocks.11._expand_conv.weight", "backbone._blocks.11._bn0.weight", "backbone._blocks.11._bn0.bias", "backbone._blocks.11._bn0.running_mean", "backbone._blocks.11._bn0.running_var", 
"backbone._blocks.11._depthwise_conv.weight", "backbone._blocks.11._bn1.weight", "backbone._blocks.11._bn1.bias", "backbone._blocks.11._bn1.running_mean", "backbone._blocks.11._bn1.running_var", "backbone._blocks.11._se_reduce.weight", "backbone._blocks.11._se_reduce.bias", "backbone._blocks.11._se_expand.weight", "backbone._blocks.11._se_expand.bias", "backbone._blocks.11._project_conv.weight", "backbone._blocks.11._bn2.weight", "backbone._blocks.11._bn2.bias", "backbone._blocks.11._bn2.running_mean", "backbone._blocks.11._bn2.running_var", "backbone._blocks.12._expand_conv.weight", "backbone._blocks.12._bn0.weight", "backbone._blocks.12._bn0.bias", "backbone._blocks.12._bn0.running_mean", "backbone._blocks.12._bn0.running_var", "backbone._blocks.12._depthwise_conv.weight", "backbone._blocks.12._bn1.weight", "backbone._blocks.12._bn1.bias", "backbone._blocks.12._bn1.running_mean", "backbone._blocks.12._bn1.running_var", "backbone._blocks.12._se_reduce.weight", "backbone._blocks.12._se_reduce.bias", "backbone._blocks.12._se_expand.weight", "backbone._blocks.12._se_expand.bias", "backbone._blocks.12._project_conv.weight", "backbone._blocks.12._bn2.weight", "backbone._blocks.12._bn2.bias", "backbone._blocks.12._bn2.running_mean", "backbone._blocks.12._bn2.running_var", "backbone._blocks.13._expand_conv.weight", "backbone._blocks.13._bn0.weight", "backbone._blocks.13._bn0.bias", "backbone._blocks.13._bn0.running_mean", "backbone._blocks.13._bn0.running_var", "backbone._blocks.13._depthwise_conv.weight", "backbone._blocks.13._bn1.weight", "backbone._blocks.13._bn1.bias", "backbone._blocks.13._bn1.running_mean", "backbone._blocks.13._bn1.running_var", "backbone._blocks.13._se_reduce.weight", "backbone._blocks.13._se_reduce.bias", "backbone._blocks.13._se_expand.weight", "backbone._blocks.13._se_expand.bias", "backbone._blocks.13._project_conv.weight", "backbone._blocks.13._bn2.weight", "backbone._blocks.13._bn2.bias", "backbone._blocks.13._bn2.running_mean", "backbone._blocks.13._bn2.running_var", "backbone._blocks.14._expand_conv.weight", "backbone._blocks.14._bn0.weight", "backbone._blocks.14._bn0.bias", "backbone._blocks.14._bn0.running_mean", "backbone._blocks.14._bn0.running_var", "backbone._blocks.14._depthwise_conv.weight", "backbone._blocks.14._bn1.weight", "backbone._blocks.14._bn1.bias", "backbone._blocks.14._bn1.running_mean", "backbone._blocks.14._bn1.running_var", "backbone._blocks.14._se_reduce.weight", "backbone._blocks.14._se_reduce.bias", "backbone._blocks.14._se_expand.weight", "backbone._blocks.14._se_expand.bias", "backbone._blocks.14._project_conv.weight", "backbone._blocks.14._bn2.weight", "backbone._blocks.14._bn2.bias", "backbone._blocks.14._bn2.running_mean", "backbone._blocks.14._bn2.running_var", "backbone._blocks.15._expand_conv.weight", "backbone._blocks.15._bn0.weight", "backbone._blocks.15._bn0.bias", "backbone._blocks.15._bn0.running_mean", "backbone._blocks.15._bn0.running_var", "backbone._blocks.15._depthwise_conv.weight", "backbone._blocks.15._bn1.weight", "backbone._blocks.15._bn1.bias", "backbone._blocks.15._bn1.running_mean", "backbone._blocks.15._bn1.running_var", "backbone._blocks.15._se_reduce.weight", "backbone._blocks.15._se_reduce.bias", "backbone._blocks.15._se_expand.weight", "backbone._blocks.15._se_expand.bias", "backbone._blocks.15._project_conv.weight", "backbone._blocks.15._bn2.weight", "backbone._blocks.15._bn2.bias", "backbone._blocks.15._bn2.running_mean", "backbone._blocks.15._bn2.running_var", "backbone._conv_head.weight", 
"backbone._bn1.weight", "backbone._bn1.bias", "backbone._bn1.running_mean", "backbone._bn1.running_var", "backbone._fc.weight", "backbone._fc.bias", "neck.lateral_convs.0.conv.weight", "neck.lateral_convs.0.conv.bias", "neck.lateral_convs.1.conv.weight", "neck.lateral_convs.1.conv.bias", "neck.lateral_convs.2.conv.weight", "neck.lateral_convs.2.conv.bias", "neck.lateral_convs.3.conv.weight", "neck.lateral_convs.3.conv.bias", "neck.lateral_convs.4.conv.weight", "neck.lateral_convs.4.conv.bias", "neck.stack_bifpn_convs.0.w1", "neck.stack_bifpn_convs.0.w2", "neck.stack_bifpn_convs.0.bifpn_convs.0.0.conv.weight", "neck.stack_bifpn_convs.0.bifpn_convs.0.0.conv.bias", "neck.stack_bifpn_convs.0.bifpn_convs.1.0.conv.weight", "neck.stack_bifpn_convs.0.bifpn_convs.1.0.conv.bias", "neck.stack_bifpn_convs.0.bifpn_convs.2.0.conv.weight", "neck.stack_bifpn_convs.0.bifpn_convs.2.0.conv.bias", "neck.stack_bifpn_convs.0.bifpn_convs.3.0.conv.weight", "neck.stack_bifpn_convs.0.bifpn_convs.3.0.conv.bias", "neck.stack_bifpn_convs.0.bifpn_convs.4.0.conv.weight", "neck.stack_bifpn_convs.0.bifpn_convs.4.0.conv.bias", "neck.stack_bifpn_convs.0.bifpn_convs.5.0.conv.weight", "neck.stack_bifpn_c
st48008
It seems that you are saving the state_dict from a single-GPU model and loading it into your DDP model. DDP models have their elements under .module, e.g. self.model.module.backbone._conv_stem. I’d recommend trying to load the state_dict with self.model.module.load_state_dict(state_dict). You can find more details in this thread.
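For the opposite direction (a checkpoint saved from a DDP-wrapped model that you want to load into a plain, unwrapped model), a common workaround, sketched here with a placeholder path, is to strip the "module." prefix from the keys:

from collections import OrderedDict
import torch

checkpoint = torch.load("checkpoint.pth", map_location="cpu")   # placeholder path
state_dict = checkpoint["state_dict"]
cleaned = OrderedDict(
    (k[len("module."):] if k.startswith("module.") else k, v)
    for k, v in state_dict.items()
)
model.load_state_dict(cleaned)   # model: the plain (non-DDP) EfficientDet instance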
st48009
This is my efficient Det then when I used the self.model.module.load_state_dict(state_dict). it said torch.nn.modules.module.ModuleAttributeError: ‘EfficientDet’ object has no attribute ‘module’ import torch import torch.nn as nn import math from models.efficientnet import EfficientNet from models.bifpn import BIFPN from .retinahead import RetinaHead from models.module import RegressionModel, ClassificationModel, Anchors, ClipBoxes, BBoxTransform from torchvision.ops import nms from .losses import FocalLoss MODEL_MAP = { 'efficientdet-d0': 'efficientnet-b0', 'efficientdet-d1': 'efficientnet-b1', 'efficientdet-d2': 'efficientnet-b2', 'efficientdet-d3': 'efficientnet-b3', 'efficientdet-d4': 'efficientnet-b4', 'efficientdet-d5': 'efficientnet-b5', 'efficientdet-d6': 'efficientnet-b6', 'efficientdet-d7': 'efficientnet-b6', } class EfficientDet(nn.Module): def __init__(self, num_classes, network='efficientdet-d0', D_bifpn=3, W_bifpn=88, D_class=3, is_training=True, threshold=0.01, iou_threshold=0.5): super(EfficientDet, self).__init__() self.backbone = EfficientNet.from_pretrained(MODEL_MAP[network]) self.is_training = is_training self.neck = BIFPN(in_channels=self.backbone.get_list_features()[-5:], out_channels=W_bifpn, stack=D_bifpn, num_outs=5) self.bbox_head = RetinaHead(num_classes=num_classes, in_channels=W_bifpn) self.anchors = Anchors() self.regressBoxes = BBoxTransform() self.clipBoxes = ClipBoxes() self.threshold = threshold self.iou_threshold = iou_threshold for m in self.modules(): if isinstance(m, nn.Conv2d): n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels m.weight.data.normal_(0, math.sqrt(2. / n)) elif isinstance(m, nn.BatchNorm2d): m.weight.data.fill_(1) m.bias.data.zero_() self.freeze_bn() self.criterion = FocalLoss() def forward(self, inputs): #print('EfficientDet forwarding...') if self.is_training: #try: inputs, annotations = inputs #except: # inputs = inputs # self.is_training = False else: #try: inputs = inputs #except: #inputs, annotations = inputs #self.is_training = True x = self.extract_feat(inputs) outs = self.bbox_head(x) classification = torch.cat([out for out in outs[0]], dim=1) regression = torch.cat([out for out in outs[1]], dim=1) anchors = self.anchors(inputs) if self.is_training: return self.criterion(classification, regression, anchors, annotations) else: transformed_anchors = self.regressBoxes(anchors, regression) transformed_anchors = self.clipBoxes(transformed_anchors, inputs) scores = torch.max(classification, dim=2, keepdim=True)[0] scores_over_thresh = (scores > self.threshold)[0, :, 0] if scores_over_thresh.sum() == 0: #print('No boxes to NMS 222') # no boxes to NMS, just return return [torch.zeros(0), torch.zeros(0), torch.zeros(0, 4)] classification = classification[:, scores_over_thresh, :] transformed_anchors = transformed_anchors[:, scores_over_thresh, :] scores = scores[:, scores_over_thresh, :] anchors_nms_idx = nms( transformed_anchors[0, :, :], scores[0, :, 0], iou_threshold=self.iou_threshold) nms_scores, nms_class = classification[0, anchors_nms_idx, :].max( dim=1) return [nms_scores, nms_class, transformed_anchors[0, anchors_nms_idx, :]] def freeze_bn(self): '''Freeze BatchNorm layers.''' for layer in self.modules(): if isinstance(layer, nn.BatchNorm2d): layer.eval() def extract_feat(self, img): """ Directly extract features from the backbone+neck """ x = self.backbone(img) x = self.neck(x[-5:]) return x
st48010
How do you set up DistributedDataParallel for evaluation?

self.model = EfficientDet(num_classes=args.num_class,
                          network=args.network,
                          W_bifpn=EFFICIENTDET[args.network]['W_bifpn'],
                          D_bifpn=EFFICIENTDET[args.network]['D_bifpn'],
                          D_class=EFFICIENTDET[args.network]['D_class'])
#self.model = torch.nn.parallel.DistributedDataParallel(self.model, device_ids=[args.gpu], find_unused_parameters=True)
#self.model = torch.nn.parallel.DistributedDataParallel(self.model)
self.model = torch.nn.parallel.DistributedDataParallel(self.model, output_device=[1])
st48011
Oh, I was mistaken. You are already loading the parameters before setting up DDP, so it is not necessary to use self.model.module.load_state_dict. Would you mind checking if this is the opposite case? How did you save your state_dict? Did you save the parameters of the DDP model as self.model.state_dict() where isinstance(self.model, torch.nn.parallel.DistributedDataParallel) == True? If so, you may need to wrap the model in DDP first and then load the parameters:

model = ...
model = torch.nn.parallel.DistributedDataParallel(
    model, device_ids=[args.gpu], find_unused_parameters=True)
checkpoint = torch.load(...)
model.load_state_dict(checkpoint['state_dict'])
st48012
I saved the file like this torch.save( state, os.path.join( args.save_folder, args.dataset, args.network, "checkpoint_{}.pth".format(epoch))) and the key missing errror is happen with this code self.model = EfficientDet(num_classes=args.num_class, network=args.network, W_bifpn=EFFICIENTDET[args.network]['W_bifpn'], D_bifpn=EFFICIENTDET[args.network]['D_bifpn'], D_class=EFFICIENTDET[args.network]['D_class'] ) #self.model = torch.nn.parallel.DistributedDataParallel(self.model, device_ids=[args.gpu],find_unused_parameters=True) #self.model = torch.nn.parallel.DistributedDataParallel(self.model) #self.model = torch.nn.parallel.DistributedDataParallel( # self.model, output_device=[1]) self.model = self.model.cuda() if(self.weights is not None): print('load state dic...',self.weights) checkpoint = torch.load( self.weights, map_location=lambda storage, loc: storage) #self.model = torch.nn.parallel.DistributedDataParallel(self.model) state_dict = checkpoint['state_dict'] self.model.load_state_dict(state_dict) #self.model.module.load_state_dict(state_dict) if torch.cuda.is_available(): self.model = self.model.cuda() self.model.eval() Im not sure how to declare #self.model = torch.nn.parallel.DistributedDataParallel(self.model, device_ids=[args.gpu],find_unused_parameters=True) #self.model = torch.nn.parallel.DistributedDataParallel(self.model) #self.model = torch.nn.parallel.DistributedDataParallel( # self.model, output_device=[1]) to use the model.module do I need to declare the model has #self.model = torch.nn.parallel.DistributedDataParallel(self.model, device_ids=[args.gpu],find_unused_parameters=True) #self.model = torch.nn.parallel.DistributedDataParallel(self.model) #self.model = torch.nn.parallel.DistributedDataParallel( # self.model, output_device=[1]) to load the trained DistributedDataParallel model?
st48013
No, what I meant is simply moving the line model.load_state_dict(checkpoint['state_dict']) to after you apply the DDP wrapping, not before it:

model = torch.nn.parallel.DistributedDataParallel(
    model, device_ids=[args.gpu]  # ,output_device=[args.gpu]
    , find_unused_parameters=True)

In short, you can try:

model = torch.nn.parallel.DistributedDataParallel(
    model, device_ids=[args.gpu]  # ,output_device=[args.gpu]
    , find_unused_parameters=True)
model.load_state_dict(checkpoint['state_dict'])

When you wrap your model as DataParallel or DistributedDataParallel, they move your layers under .module. For example, self.model.backbone moves to self.model.module.backbone. When you call self.model.state_dict() at saving time, the changed hierarchy is also applied to the state_dict. Due to the difference, you will only be able to load that state_dict when your model is the same DistributedDataParallel class, not the basic nn.Module class.
st48014
ok now i understand to using .module this is my calling model part # multi gpu load self.model = EfficientDet(num_classes=args.num_class, network=args.network, W_bifpn=EFFICIENTDET[args.network]['W_bifpn'], D_bifpn=EFFICIENTDET[args.network]['D_bifpn'], D_class=EFFICIENTDET[args.network]['D_class'] ) if torch.cuda.is_available(): self.model = self.model.cuda() if args.distributed: self.model = self.model.to(args.rank) self.model = torch.nn.parallel.DistributedDataParallel(self.model ,device_ids=[args.rank] ,output_device=[args.rank] ,find_unused_parameters=True) self.model = self.model.module #self.model = self.model.cuda() if(self.weights is not None): print('load state dic...',self.weights) checkpoint = torch.load( self.weights, map_location=lambda storage, loc: storage) state_dict = checkpoint['state_dict'] self.model.load_state_dict(state_dict) if torch.cuda.is_available(): self.model = self.model.cuda() self.model.eval() then now it gives following error File "/home/jake/venv/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 305, in __init__ self.process_group = _get_default_group() File "/home/jake/venv/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 285, in _get_default_group raise RuntimeError("Default process group has not been initialized, " RuntimeError: Default process group has not been initialized, please make sure to call init_process_group. args.rank is 0
st48015
The error is related to DDP initialization, not model loading. You should initialize a distributed process group before creating a DDP module. As you said you already trained your model with DDP, maybe the dist.init_process_group call got moved somewhere it shouldn't be. When loading model parameters after you create the DDP model, it is safe to load the parameters on all processes. If you want to get a general feeling for DDP code, you can refer to this DDP setup and model definition example. See how the state_dict, save, load, and parallelize functions are defined.
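For reference, a minimal single-process-per-GPU sketch of that ordering (the init_method address, the nn.Linear stand-in for EfficientDet, and the checkpoint path are all placeholders):

import torch
import torch.distributed as dist
import torch.nn as nn

def setup(rank, world_size, weights_path=None):
    # 1) the process group must exist before any DDP module is constructed
    dist.init_process_group("nccl", init_method="tcp://127.0.0.1:23456",
                            rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)
    model = nn.Linear(10, 10).cuda(rank)   # stand-in for EfficientDet(...)
    # 2) wrap in DDP
    model = nn.parallel.DistributedDataParallel(model, device_ids=[rank])
    # 3) load the checkpoint on every rank, after wrapping
    if weights_path is not None:
        checkpoint = torch.load(weights_path, map_location=f"cuda:{rank}")
        model.load_state_dict(checkpoint["state_dict"])
    return model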
st48016
I saw these kind of topics a lot in forum but generally batch dimension is missing in their tensors. But in my case channel dimension is missing and I don’t know how to just change my model to meet its requirements. Here is my code class DAE(nn.Module): def __init__(self): super(DAE, self).__init__() self.conv1 = nn.Sequential( nn.Conv2d(1,8,3,1,1), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2) ) self.conv2 = nn.Sequential( nn.Conv2d(8, 64, 3, 1, 1), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2) ) self.up = nn.UpsamplingBilinear2d(scale_factor=2) self.down = nn.MaxPool2d(kernel_size=2 , stride=2) self.deconv1 = nn.Sequential( nn.Conv2d(64, 8, 3, 1, 1), nn.ReLU(), nn.UpsamplingBilinear2d(scale_factor=2) ) self.deconv2 = nn.Sequential( nn.Conv2d(8, 1, 3, 1, 1), nn.ReLU(), nn.UpsamplingBilinear2d(scale_factor=2) ) self.linear1 = nn.Linear(64*7*7 , 64 ) self.linear2 = nn.Linear( 64 ,64*7*7 ) def forward(self , x ): x = self.conv2(self.conv1(x)) # x = self.res5(self.res4(self.res3(self.res2(self.res1(x))))) x = x.view(x.shape[0] ,64*7*7 ) x = self.linear2(self.linear1(x)) x = x.view(x.shape[0] , 64 ,7 ,7) x = self.deconv2(self.deconv1(x)) return x f, axes= plt.subplots(6, 3, figsize = (5, 10)) axes[0,0].set_title("Original Image") axes[0,1].set_title("Noisy Image") axes[0,2].set_title("Cleaned Image") for idx, (noisy, clean, label) in enumerate(test_dataloader): if idx > 5: break # denoising with DAE noisy = noisy.view(noisy.size(0),-1).type(torch.FloatTensor) noisy = noisy.to(device) output = model(noisy) # fix size output = output.view(1, 28, 28) output = output.permute(1, 2, 0).squeeze(2) output = output.detach().cpu().numpy() noisy = noisy.view(1, 28, 28) noisy = noisy.permute(1, 2, 0).squeeze(2) noisy = noisy.detach().cpu().numpy() clean = clean.view(1, 28, 28) clean = clean.permute(1, 2, 0).squeeze(2) clean = clean.detach().cpu().numpy() # plot axes[idx, 0].imshow(clean, cmap="gray") axes[idx, 1].imshow(noisy, cmap="gray") axes[idx, 2].imshow(output, cmap="gray") axes[idx, 0].set(xticks=[], yticks=[]) axes[idx, 1].set(xticks=[], yticks=[]) axes[idx, 2].set(xticks=[], yticks=[]) RuntimeError Traceback (most recent call last) in () 13 noisy = noisy.view(noisy.size(0),-1).type(torch.FloatTensor) 14 noisy = noisy.to(device) —> 15 output = model(noisy) 16 17 # fix size 6 frames /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: –> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), in forward(self, x) 32 33 def forward(self , x ): —> 34 x = self.conv2(self.conv1(x)) 35 # x = self.res5(self.res4(self.res3(self.res2(self.res1(x))))) 36 x = x.view(x.shape[0] ,6477 ) /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: –> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), /usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py in forward(self, input) 115 def forward(self, input): 116 for module in self: –> 117 input = module(input) 118 return input 119 /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: –> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), 
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in forward(self, input) 417 418 def forward(self, input: Tensor) -> Tensor: –> 419 return self._conv_forward(input, self.weight) 420 421 class Conv3d(_ConvNd): /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight) 414 _pair(0), self.dilation, self.groups) 415 return F.conv2d(input, weight, self.bias, self.stride, –> 416 self.padding, self.dilation, self.groups) 417 418 def forward(self, input: Tensor) -> Tensor: RuntimeError: Expected 4-dimensional input for 4-dimensional weight [8, 1, 3, 3], but got 2-dimensional input of size [1, 784] instead
st48017
You are passing a 2-dimensional input tensor to the model, while the first conv layer expects a 4-dimensional tensor in the shape [batch_size, channels, height, width]. This line of code creates the flattened tensor: noisy = noisy.view(noisy.size(0),-1).type(torch.FloatTensor) noisy = noisy.to(device) output = model(noisy) After the view operation noisy will have the shape [batch_size, nb_features] and you would need to reshape it to the expected 4 dimensions.
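A small sketch of that reshape for the 28x28 single-channel images in the question:

import torch

noisy = torch.rand(1, 784)           # flattened, as in the error message
noisy = noisy.view(-1, 1, 28, 28)    # [batch_size, channels, height, width]
print(noisy.shape)                   # torch.Size([1, 1, 28, 28])
# output = model(noisy)              # now matches what the first Conv2d expects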
st48018
I have a loss function called SI-SNR loss function implemented as follows: def si_snr_loss(ests, egs): refs = egs["ref"] num_spks = len(refs) def sisnr_loss(permute): return sum([sisnr(ests[s], refs[t]) for s, t in enumerate(permute)]) / len(permute) N = egs["mix"].size(0) sisnr_mat = torch.stack([sisnr_loss(p) for p in permutations(range(num_spks))]) max_perutt,_ = torch.max(sisnr_mat, dim=0) return -torch.sum(max_perutt) / N I want to replace this loss function with an MSE loss function I write a simple equation as follows: for egs in val_dataloader: current_step += 1 egs = to_device(egs, self.device) ests = data_parallel(self.net, egs['mix'], device_ids=self.gpuid) #loss = si_snr_loss(ests, egs) loss = (ests - torch.Tensor(np.array(egs.values())))**2 losses.append(loss.item()) Unfortunately, this gives me an error: TypeError: can’t convert np.ndarray of type numpy.object_. The only supported types are: float64, float32, float16, int64, int32, int16, int8, uint8, and bool.
st48019
The error seems to be raised by numpy in: np.array(egs.values()) Based on the error message it seems that egs.values() might return objects with variable shapes or any other objects which numpy cannot transform to an array. Could you print this object and check what is stored in it?
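Without knowing what egs contains, one option, sketched here under the assumption that the reference signals live in egs['ref'] as tensors with the same shape as ests, is to stay in torch and skip the numpy conversion entirely; the shapes below are made up:

import torch
import torch.nn.functional as F

ests = torch.randn(4, 16000, requires_grad=True)   # stand-in for the network output
refs = [torch.randn(4, 16000)]                      # stand-in for egs['ref']

loss = sum(F.mse_loss(ests, ref) for ref in refs) / len(refs)
loss.backward()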
st48020
Hi everyone I was wondering whether there is an easy option to access the first layer in a custom non-Sequential CNN. When the network is constructed the ‘sequential way’, you can just use: network[0]. Is there anything similar to that? Any help is very much appreciated! All the best snowe
st48022
Yes, you can directly access the layer by its name as seen here: class MyModel(nn.Module): def __init__(self): super(MyModel, self).__init__() self.conv = nn.Conv2d(3, 3, 3, 1, 1) self.fc = nn.Linear(10, 10) def forward(self, x): x = F.relu(self.conv(x)) x = self.fc(x) return x model = MyModel() print(model.conv) > Conv2d(3, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
st48023
Thank you @ptrblck, but I can only do that when I know the name of the first layer. How can I access it if I don’t know the name? For example: I want to create a function that takes a model as parameter and returns the first layer of this model…
st48024
In custom modules the “first layer” annotation might be misleading as seen in this example: class MyModel(nn.Module): def __init__(self): super(MyModel, self).__init__() self.conv2 = nn.Conv2d(3, 3, 3, 1, 1) self.conv1 = nn.Conv2d(3, 3, 3, 1, 1) def forward(self, x): x = F.relu(self.conv1(x)) x = self.conv2(x) return x model = MyModel() print(model) print(next(model.named_children())) model.named_children() as well as the print(model) statement will return conv2 as the “first” layer, but in the end the usage of the modules in the forward defines the order of execution. Anyway, next(model.named_children()) might work for you. Note that finding the “first layer” also depends what you mean by layer, as nn.Modules might themselves contain more modules as seen here: class MyModel(nn.Module): def __init__(self): super(MyModel, self).__init__() self.conv2 = models.resnet18() self.conv1 = nn.Conv2d(3, 3, 3, 1, 1) def forward(self, x): x = F.relu(self.conv1(x)) x = self.conv2(x) return x model = MyModel() print(model) print(next(model.named_children()))
st48025
Oh I see, I did not take this into consideration. In my case, the term first means the first convolutional layer. And since most of the times (at least with the models I am working with) the first layer is a convolutional layer I mismatched the terms. Is there a similar way to access the order of execution in the forward method then?
st48026
snowe: Is there a similar way to access the order of execution in the forward method then? Not a beautiful approach, but you could register forward hooks for each layer and print its name when it’s called. Based on the output you would see which layer was called first in this execution. Note that this is also not a bulletproof approach, as your forward might have conditions, loops, etc. which can change during the runtime. I guess for the majority of models you would be fine iterating model.named_modules() or .named_children(), filtering with if isinstance(module, nn.Conv2d), and printing the first occurrence.
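A sketch of that last suggestion; note it returns the first Conv2d in registration order, which is usually but not necessarily the first one executed in forward:

import torch.nn as nn
import torchvision.models as models

def first_conv(model):
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            return name, module      # first Conv2d in registration order
    return None, None

name, conv = first_conv(models.resnet18())
print(name, conv)                    # conv1 Conv2d(3, 64, kernel_size=(7, 7), ...)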
st48027
Hi, I have trouble. When I run my graph convolutional model, the pytorch will respond the error message like: AttributeError: Can’t get attribute ‘NusDatasetGCN’ on <module ‘main’ (built-in)> ##The NusDatasetGCN code: class NusDatasetGCN(Dataset): def init(self, data_path, anno_path, transforms, w2v_path): self.transforms = transforms with open(anno_path) as fp: json_data = json.load(fp) # with open('small_train.json',encoding="utf-8") as fp: #json_data = json.loads(fp.read()) samples = json_data['samples'] self.classes = json_data['labels'] self.imgs = [] self.annos = [] self.data_path = data_path print('loading', anno_path) for sample in samples: self.imgs.append(sample['image_name']) self.annos.append(sample['image_labels']) for item_id in range(len(self.annos)): item = self.annos[item_id] vector = [cls in item for cls in self.classes] self.annos[item_id] = np.array(vector, dtype=float) # Load vectorized labels for GCN from json. with open(w2v_path) as fp: self.gcn_inp = np.array(json.load(fp)['vect_labels'], dtype=float) def __getitem__(self, item): anno = self.annos[item] img_path = os.path.join(self.data_path, self.imgs[item]) img = Image.open(img_path) if self.transforms is not None: img = self.transforms(img) return img, anno, self.gcn_inp def __len__(self): return len(self.imgs) ################################### Run training. epoch = 0 iteration = 0 while True: batch_losses = [] for batch_number, (imgs, targets, gcn_input) in enumerate(train_dataloader): imgs, targets, gcn_input = imgs.to(device), targets.to(device), gcn_input.to(device) optimizer.zero_grad() model_result = model(imgs, gcn_input) loss = criterion(model_result, targets.type(torch.float)) batch_loss_value = loss.item() loss.backward() torch.nn.utils.clip_grad_norm(model.parameters(), 10.0) optimizer.step() logger.add_scalar('train_loss', batch_loss_value, iteration) batch_losses.append(batch_loss_value) with torch.no_grad(): result = calculate_metrics(model_result.cpu().numpy(), targets.cpu().numpy()) for metric in result: logger.add_scalar('train/' + metric, result[metric], iteration) if iteration % test_freq == 0: model.eval() with torch.no_grad(): model_result = [] targets = [] for imgs, batch_targets, gcn_input in test_dataloader: gcn_input = gcn_input.to(device) imgs = imgs.to(device) model_batch_result = model(imgs, gcn_input) model_result.extend(model_batch_result.cpu().numpy()) targets.extend(batch_targets.cpu().numpy()) result = calculate_metrics(np.array(model_result), np.array(targets)) for metric in result: logger.add_scalar('test/' + metric, result[metric], iteration) print("epoch:{:2d} iter:{:3d} test: " "micro f1: {:.3f} " "macro f1: {:.3f} " "samples f1: {:.3f}".format(epoch, iteration, result['micro/f1'], result['macro/f1'], result['samples/f1'])) model.train() iteration += 1 loss_value = np.mean(batch_losses) print("epoch:{:2d} iter:{:3d} train: loss:{:.3f}".format(epoch, iteration, loss_value)) if epoch % save_freq == 0: checkpoint_save(model, save_path, epoch) epoch += 1 if max_epoch_number < epoch: break when I run training, I will get an error message. How can solve this problem? Very thanks!!
st48028
This error seems to be raised by Python's import system, and apparently Python isn't able to find the definition of NusDatasetGCN. Are you importing it from another file, or do you define everything in a single script?
st48029
Thanks for your reply. Yes, I define the class directly in the script; maybe I need to move NusDatasetGCN into another file and import it, e.g. something like the sketch below.
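(Just a rough sketch of what I mean; the file name datasets_gcn.py is only my guess:)

# datasets_gcn.py  (hypothetical separate module)
import json
import os

import numpy as np
from PIL import Image
from torch.utils.data import Dataset


class NusDatasetGCN(Dataset):
    # ... same implementation as in my first post ...
    pass


# training script / notebook
from datasets_gcn import NusDatasetGCN  # now importable by the DataLoader worker processes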
st48030
The code is from here: https://github.com/spmallick/learnopencv/blob/master/Graph-Convolutional-Networks-Model-Relations-In-Data/graph_convolutional_networks_model_relations_in_data.ipynb
st48031
I am trying to train my first CNN after trying an existing one, but it didn't work. This error appeared and I can't understand it (screenshot of the error attached). I work on Ubuntu 18.04 with a GTX 1660 Ti 6GB. This is the code sample that I think causes the error:

for epoch in range(num_epochs):
    model.train()
    for batch_idx, (features, targets, levels, x) in enumerate(train_loader):
        features = features.to(DEVICE)
        targets = targets
        targets = targets.to(DEVICE)
        levels = levels.to(DEVICE)
        logits, probas = model(features)
        if epoch >= 190:
            print('\n i=', batch_idx, 'logits =', logits)
            print('\n i=', batch_idx, ' probas =', probas)
        impf = torch.ones([logits.shape[0], NUM_CLASSES])
        for i in range(len(x)):
            impf[i] = impFactor(x[i])
        impf = impf.to(DEVICE)
        logits = (logits * impf).to(DEVICE)
        cost = cost_fn(logits, levels)
        optimizer.zero_grad()
        cost.backward()
        optimizer.step()
st48032
Which PyTorch, CUDA and cudnn versions are you using? Also, could you post the model definition as well as the shapes of all tensors, so that we could reproduce and debug this issue, please?
st48033
I am using PyTorch 1.5.0 and my CUDA version is 10.2 on Ubuntu 18.04 (screenshot attached). I used resnet34:

def resnet34(num_classes, grayscale):
    """Constructs a ResNet-34 model."""
    model = ResNet(block=BasicBlock,
                   layers=[3, 4, 6, 3],
                   num_classes=num_classes,
                   grayscale=grayscale)
    return model

Now, when I run my script on a small dataset it works perfectly, but the same script on the large dataset caused a different error in the line impf = impf.to(DEVICE): RuntimeError: CUDA error: unspecified launch failure.
st48034
It works fine until epoch 31, then the error appears: RuntimeError: CUDA error: unspecified launch failure. Sometimes it works fine until epoch 39, then the same error appears.
st48035
Another time, the run stopped at epoch 107 of 200 epochs (screenshot of the error attached).
st48036
Do you mean memcheck? Yes, I do, and this is the output:

========= CUDA-MEMCHECK
========= ERROR SUMMARY: 0 errors
st48037
And if this may be the cause, what is the solution? Can reducing the batch size fix it?
st48038
No, I meant if your GPU memory is filling up and you thus cannot allocate any more data on the device. You can check the memory usage via nvidia-smi or in your script via e.g. torch.cuda.memory_allocated(). Are you using custom CUDA code or did you execute cuda-memcheck just on the complete PyTorch model?
st48039
ptrblck: torch.cuda.memory_allocated()

Please explain more. Where should I add this in my script?
st48040
marwa1: logits = (logits * impf).to(DEVICE)

Is this correct, given that the tensors logits and impf have the same size?
st48041
You could add it e.g. at the beginning and at the end of each iteration to check the allocated memory, which would show if you are close to the device limit. Note that this call does not return the memory usage of the CUDA context or from other applications.
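E.g. a minimal sketch using the loop from your code above:

for batch_idx, (features, targets, levels, x) in enumerate(train_loader):
    print('memory allocated before', torch.cuda.memory_allocated())
    # ... your forward pass, loss computation, backward(), and optimizer.step() ...
    print('memory allocated after', torch.cuda.memory_allocated())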
st48042
The output is:

Epoch: 001/200 | Batch 0000/0343 | Cost: 38.0783
memory allocated after 369382912
memory allocated before 346285056
memory allocated after 369382912
memory allocated before 346285056
memory allocated after 369382912
memory allocated before 346285056
memory allocated after 369382912
memory allocated before 346285056
memory allocated after 369382912
memory allocated before 346285056
memory allocated after 369382912

and this is repeated for all epochs, so I found that the used memory is almost constant during each iteration. On the other hand, the run stopped at epoch 17 with a new error (screenshot of the error attached).
st48043
In that case could you rerun the code with CUDA_LAUNCH_BLOCKING=1 python script.py args and post the stack trace here?
st48044
I am already running it in the format you suggested: CUDA_LAUNCH_BLOCKING=1 python script.py args
st48045
Could you update to PyTorch 1.6 or the nightly/master, since 1.5 had an issue where device assert statements were ignored. This could mean that you are in fact hitting a valid assert.
st48046
I noticed that the code stopped at the same line many times: impf = impf.to(device), whereas it worked well with the small dataset.
st48047
marwa1: Do you mean that PyTorch 1.5 is the reason, not an error in my script?

Yes, since (some) assert statements were broken in PyTorch 1.5, so you would have to update to 1.6 or the nightly binary.

marwa1: I noticed that the code stopped at the same line many times: impf = impf.to(device)

Is this a new issue, or why is the code not crashing anymore?
st48048
Dear all, I have trained a pre-trained ResNet50 classifier on the Kaggle images dataset and label the images as 0 for No-Covid, 1 for thorax diseases, and 2 for Covid. Now, for the competition, I need to save these labels in the CSV files. When I run the following code for evaluation of the test images,

total_imgs = 0
covid_positive = 0
for root, dirs, files in os.walk("/content/gdrive/My Drive/dlai3-phase3/VALIDATE/VALIDATE", topdown=False):
    for name in files:
        total_imgs += 1
        # im_covid = Image.open("data/test/covid/03BF7561-A9BA-4C3C-B8A0-D3E585F73F3C.jpeg")
        img = Image.open(os.path.join(root, name))
        img = img.convert('RGB')
        pred, probs = predict_covid(img)
        print(pred)

it prints the string labels, not the numbers. Could anyone please tell me how I can get the numeric labels? Thanks.
st48049
Your predict_covid method seems to return a string, so you would have to look into this method and check what it is doing. The standard PyTorch models return e.g. class logits, and you could get the predicted class index via preds = torch.argmax(outputs, dim=1) for a multi-class classification.
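E.g. a quick sketch (the mapping dict is just an assumption based on your label description, so adapt it to whatever predict_covid actually uses):

import torch

idx_to_class = {0: 'No-Covid', 1: 'Thorax diseases', 2: 'Covid'}  # assumed mapping
class_to_idx = {name: idx for idx, name in idx_to_class.items()}

outputs = model(batch_of_images)        # [batch_size, nb_classes] logits
preds = torch.argmax(outputs, dim=1)    # numeric labels: 0, 1, or 2

# if you only have access to the string prediction, map it back manually
numeric_pred = class_to_idx[pred]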
st48050
Hi everybody, unfortunately I am not able to reproduce the results between different runs of the same scripts on the same machine. The results are the same within the for-loop, but if I start the script new, the results are different. As all seeds are set (multiple times), I am not shuffling in the data loader and I spent a lot of time in debugging and looking at the inputs, weights etc, it would be great if you could help me. Here you can find my code. Unfortunately I am not able to share the data, so I am also not able to provide a complete working example. import configparser import os import sys import numpy as np import random import torch import torch.nn as nn import torch.nn.functional as F import sklearn.preprocessing import datetime import pandas as pd from training import TrainHelper, ModelsANN, ModelsBaseClass class ANN(torch.nn.Module): def __init__(self, n_feature: int, n_hidden: int, num_hidden_layer: int, n_output: int = 1, dropout_rate: float = 0.0): super(ANN, self).__init__() TrainHelper.init_pytorch_seeds() self.hidden_layer = nn.ModuleList() hidden_in = n_feature hidden_out = n_hidden for layer_num in range(num_hidden_layer): self.hidden_layer.append(nn.Linear(in_features=hidden_in, out_features=hidden_out)) hidden_in = hidden_out hidden_out = int(hidden_in / 2) self.output_layer = nn.Linear(in_features=hidden_in, out_features=n_output) self.dropout = nn.Dropout(p=dropout_rate) def forward(self, x): TrainHelper.init_pytorch_seeds() for layer in self.hidden_layer: x = F.relu(layer(x)) x = self.dropout(x) out = self.output_layer(x) return out # get optim parameters base_dir, seasonal_periods, split_perc, init_train_len, test_len, resample_weekly = \ TrainHelper.get_optimization_run_parameters(config=config, company=company, target_column=target_column, split_perc=split_perc, period=period) # load datasets datasets = TrainHelper.load_datasets(config=config, company=company, target_column=target_column, period=period) dataset = datasets[0] train_test_list = TrainHelper.get_ready_train_test_lst(dataset=dataset, config=config, init_train_len=init_train_len, test_len=test_len, split_perc=split_perc, imputation='mean', target_column='CutFlowers', dimensionality_reduction=None, featureset='full') pred_list = [] inst_list = [] models_list = [] for diff_run in range(0, 3): TrainHelper.init_pytorch_seeds() # noinspection PyUnboundLocalVariable for train, test in train_test_list: # noinspection PyTypeChecker,PyTypeChecker,PyTypeChecker,PyTypeChecker,PyTypeChecker model = ANN(n_feature=train.shape[1] - 1, n_hidden=10, num_hidden_layer=1, dropout_rate=0) batch_size = 4 learning_rate = 1e-1 epochs = 10000 min_val_loss_improvement = 100 max_epochs_wo_improvement = 20 x_scaler = sklearn.preprocessing.StandardScaler() y_scaler = sklearn.preprocessing.StandardScaler() valid_size = 0.2 split_ind = int(train.shape[0] * (1 - valid_size)) train_data = train.iloc[:split_ind] valid_data = train.iloc[split_ind:] # scale input data x_train = x_scaler.fit_transform(train_data.drop(target_column, axis=1)) x_valid = x_scaler.transform(valid_data.drop(target_column, axis=1)) # create train ready data x_train = torch.tensor(x_train.astype(np.float32)) x_valid = torch.tensor(x_valid.astype(np.float32)) y_train = torch.tensor(data=train_data[target_column].values.reshape(-1, 1).astype(np.float32)) y_valid = torch.tensor(data=valid_data[target_column].values.reshape(-1, 1).astype(np.float32)) # noinspection PyUnresolvedReferences,PyUnresolvedReferences train_loader = 
torch.utils.data.DataLoader(dataset=torch.utils.data.TensorDataset(x_train, y_train), batch_size=batch_size, shuffle=False, drop_last=False, worker_init_fn=np.random.seed(0)) loss = nn.MSELoss() # more identical checkpoint name to prevent loading of checkpoints of parallel runs checkpoint_name = '_' + datetime.datetime.now().strftime("%d-%b-%Y_%H-%M-%S-%f") min_valid_loss = 99999999 epochs_wo_improvement_threshold = 0 epochs_wo_improvement_total = 0 # instantiate new optimizer to ensure independence of previous runs optimizer = torch.optim.Adam(params=model.parameters(), lr=learning_rate) # Adam as standard optimizer, change with if-elif loop if another optimizer should be used # get device and shift model and data to it device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") model.to(device) x_valid, y_valid = x_valid.to(device), y_valid.to(device) for e in range(200): model.train() for (batch_x, batch_y) in train_loader: # copy data to device batch_x, batch_y = batch_x.to(device), batch_y.to(device) # gradients are summed up so they need to be zeroed for new run optimizer.zero_grad() y_pred = model(batch_x) loss_train = loss(y_pred, batch_y) loss_train.backward() optimizer.step() model.eval() y_pred_valid = model(x_valid) loss_valid = loss(y_pred_valid, y_valid).item() if loss_valid < min_valid_loss: min_valid_loss = loss_valid epochs_wo_improvement_threshold = 0 epochs_wo_improvement_total = 0 torch.save(model.state_dict(), 'Checkpoints/checkpoint_' + checkpoint_name + '.pt') if e % 100 == 0: print('Epoch ' + str(e) + ': valid loss = ' + str(loss_valid) + ', min_valid_loss = ' + str(min_valid_loss)) model.load_state_dict(state_dict=torch.load('Checkpoints/checkpoint_' + checkpoint_name + '.pt')) os.remove('Checkpoints/checkpoint_' + checkpoint_name + '.pt') model.eval() # predict on cpu model.to(torch.device("cpu")) x_train = torch.tensor(data=x_scaler.transform(train.drop(target_column, axis=1)).astype(np.float32)) insample = pd.DataFrame(data=model(x=x_train).data.numpy(), index=train.index, columns=['Insample']) x_test = torch.tensor(data=x_scaler.transform(test.drop(target_column, axis=1)).astype(np.float32)) model.eval() # predict on cpu model.to(torch.device("cpu")) predict = model(x=x_test).data.numpy().flatten() predictions = pd.DataFrame({'Prediction': predict}, index=test.index) pred_list.append(predictions) inst_list.append(insample) models_list.append(model) And here you can find the helper function setting all the seeds: def init_pytorch_seeds(): seed = 0 np.random.seed(seed) random.seed(seed) torch.manual_seed(seed) torch.backends.cudnn.enabled = False torch.backends.cudnn.benchmark = False torch.backends.cudnn.deterministic = True
st48051
Not all operations are deterministic even if all seeds are properly set, as described in the reproducibility docs. The new torch.set_deterministic() call is currently in its beta stage, but would raise an error if such a non-deterministic operation is used in your script.
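A minimal sketch of how it could be used (the API is in beta, so the name and behavior might still change):

import torch

torch.manual_seed(0)
torch.set_deterministic(True)  # later releases rename this to torch.use_deterministic_algorithms

# On CUDA >= 10.2 you might additionally have to set
# CUBLAS_WORKSPACE_CONFIG=:4096:8 (or :16:8) in the environment for matmuls.

# From here on, any op without a deterministic implementation raises a RuntimeError
# instead of silently producing run-to-run differences.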
st48052
Hi, I try to infer a person's gender using all of her/his tweets. I want to map each tweet into an embedding space and then feed them into an LSTM tweet by tweet. But how do I do it with an LSTM? I mean, the input for an LSTM is like seq_len (number of words in a sentence), batch (batch size), input_size (embedding dimension). I need to get the LSTM output for each tweet of each user and process from there. Is there a way to do this for all users at the same time? A for loop? My original thought was to use an LSTM to encode all these tweets with dimension seq_len * (batch_size*T) * embedding_dimension, so the batch size can temporarily be set to batch_size*T. After I get the hidden representations with dimension (batch_size*T) * hidden_dimension (a 2D matrix), I can reshape it to dimension batch_size * T * hidden_dimension (a 3D tensor). But the thing is, different users have different numbers of tweets, so the solution above doesn't sound right.
st48053
I have a tensor X = (B, 2) and another tensor Y = (b, 2), where Y is a subset of X (B >= b). How can I return the indices of the rows of Y in X? For example,

X = ([1, 2],
     [4, 5],
     [3, 1],
     [7, 8])
Y = ([1, 2],
     [7, 8])

I want to return [0, 3].
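The closest I've come up with so far is a broadcasted comparison, but I'm not sure it's the most efficient way:

import torch

X = torch.tensor([[1, 2], [4, 5], [3, 1], [7, 8]])
Y = torch.tensor([[1, 2], [7, 8]])

# compare every row of X against every row of Y -> (B, b) boolean matrix
matches = (X.unsqueeze(1) == Y.unsqueeze(0)).all(dim=-1)

# indices of the rows of X that also appear in Y
idx = matches.any(dim=1).nonzero(as_tuple=True)[0]
print(idx)  # tensor([0, 3])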
st48054
h = tuple([each.data for each in h]). Here h stands for the hidden state. My question is: is the .data attribute a built-in function of the LSTM?
st48055
Does anyone know how to add DistributedDataParallel() for the transformer model? Transformer model: https://pytorch.org/tutorials/beginner/transformer_tutorial.html
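My best guess so far is just wrapping the tutorial model in DDP, roughly like this (the process-group setup is my own assumption, and TransformerModel plus its hyperparameters come from the linked tutorial):

import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def run(rank, world_size):
    # assumes MASTER_ADDR / MASTER_PORT are set in the environment
    dist.init_process_group("nccl", rank=rank, world_size=world_size)

    model = TransformerModel(ntokens, emsize, nhead, nhid, nlayers, dropout).to(rank)
    ddp_model = DDP(model, device_ids=[rank])

    # ... build a DistributedSampler-based dataloader and train ddp_model as usual ...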
st48056
From the examples at https://github.com/rusty1s/pytorch_sparse:

import torch
from torch_sparse import coalesce

index = torch.tensor([[1, 0, 1, 0, 2, 1],
                      [0, 1, 1, 1, 0, 0]])
value = torch.Tensor([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7]])

index, value = coalesce(index, value, m=3, n=2)

What exactly is the purpose of m and n? They don't seem to match the dimensions of the output, and if I change them arbitrarily it doesn't seem to make a difference, unless I make them small.
st48057
Assume we have a tensor like this:

7927, 4, 7984, 45, 185, 55, 7876, 6, 172, 7898

and in your collate_fn you want to convert it to:

7927
7927, 4
7927, 4, 7984
7927, 4, 7984, 45
7927, 4, 7984, 45, 185
7927, 4, 7984, 45, 185, 55
...

Is there something inbuilt which can do this? I don't actually want to loop over it to create this, as I have around 3-4 tensors like that which have to undergo a similar transformation. Thanks a lot for your time.
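The only vectorized idea I've had so far is expanding the tensor and masking it with a lower-triangular matrix, which gives all the prefixes as rows of a zero-padded tensor, but I'm not sure that's the right direction:

import torch

x = torch.tensor([7927, 4, 7984, 45, 185, 55, 7876, 6, 172, 7898])
n = x.size(0)

# row i keeps the first i+1 elements of x; the rest is zero-padded
mask = torch.tril(torch.ones(n, n, dtype=x.dtype))
prefixes = x.unsqueeze(0).expand(n, n) * mask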
st48058
Is there a function/layer that can take in an input and set elements below some learned parameter to 0, leaving those above that learned parameter the same?
st48059
vymao: set elements below some learned parameter to 0

What do you mean by this? Are you looking for torch.clamp?
st48060
Not exactly; clamp takes values lower than the minimum and brings them to the minimum. I am looking for the opposite in some sense - taking values lower than some threshold (but above 0) and bringing them to 0. In some sense I am looking for a right-shifted ReLU by some parameter, but that parameter should be learned and tunable.
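The closest I've managed to sketch myself is a soft approximation with a sigmoid gate, so that the threshold actually receives a gradient, but I'm not sure this is the usual way to make it learnable:

import torch
import torch.nn as nn

class LearnedThreshold(nn.Module):
    """Keeps values above a learnable threshold and pushes values below it towards 0."""
    def __init__(self, init_threshold=0.0, sharpness=10.0):
        super().__init__()
        self.threshold = nn.Parameter(torch.tensor(float(init_threshold)))
        self.sharpness = sharpness  # larger -> closer to a hard cutoff

    def forward(self, x):
        # the hard version x * (x > t) has zero gradient w.r.t. t, hence the soft gate
        gate = torch.sigmoid(self.sharpness * (x - self.threshold))
        return x * gate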
st48061
I am trying to write a c++ extension using CUDA libraries in windows 10 following the tutorial here 4 I have python 3.6.11, pytorch 1.8.0.dev20201021, rtx 3080 gpu, cuda 11.1. I ended up getting pytorch c++ 1.7 (stable/debug) with cuda 11.0 working in Microsoft Visual Studio 2019 v 16.6.5 with the cl.exe compiler, and I am wondering what is the best way to write code with syntax completion, debugging abilities so that when I run python setup.py install, that I can be sure it will work. Questions are what is the suggested environment/debugging tools for windows 10 ? Do I need to use Nvidia Nsight Compute to debug the Cuda Code? What flags will I need to pass in to the nvcc compiler (c++11 and maybe --gpu-architecture=compute_86 --gpu-code=sm_86) Should I use debug/release pytorch c++ ? My Cmake file at the moment looks like: cmake_minimum_required (VERSION 3.8) project ("DAConvolution") find_package(Torch REQUIRED) add_library (DAConvolution "DAConvolution.cpp" ) # directory for python includes include_directories("C:/Users/James/Anaconda3/envs/masters/include") target_link_libraries(DAConvolution "${TORCH_LIBRARIES}" "C:/Users/James/Anaconda3/envs/masters/libs/python36.lib") if (MSVC) file(GLOB TORCH_DLLS "${TORCH_INSTALL_PREFIX}/lib/*.dll") add_custom_command(TARGET DAConvolution POST_BUILD COMMAND ${CMAKE_COMMAND} -E copy_if_different ${TORCH_DLLS} $<TARGET_FILE_DIR:DAConvolution>) endif (MSVC) And my code looks like (so far with no compilation errors, as a library) // DAConvolution.cpp #ifdef _DEBUG #undef _DEBUG #include <python.h> #define _DEBUG #else #include <python.h> #endif #include <torch/extension.h> using namespace std; #define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") #define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") #define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) // CUDA Forward declarations std::vector<torch::Tensor> DA_Conv_forward( torch::Tensor input_image, torch::Tensor depth_image, torch::Tensor bias, torch::Tensor weight, int stride, int dilation, int padding, float alpha ); std::vector<torch::Tensor> DA_Conv_backward( torch::Tensor grad_output, torch::Tensor input_image, torch::Tensor depth_image, torch::Tensor bias, torch::Tensor weight, int stride, int dilation, int padding, float alpha ); PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { m.def("forward", &DA_Conv_forward, "DAConvolution forward"); m.def("backward", &DA_Conv_backward, "DAConvolution backward"); } Any tips or insights very much appreciated, would be happy to provide more information.
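For completeness, my current setup.py attempt looks roughly like this (the DAConvolution_kernel.cu file name and the sm_86 gencode flag are my own guesses, not something I have verified yet):

from setuptools import setup
from torch.utils.cpp_extension import CUDAExtension, BuildExtension

setup(
    name='DAConvolution',
    ext_modules=[
        CUDAExtension(
            name='DAConvolution',
            sources=['DAConvolution.cpp', 'DAConvolution_kernel.cu'],
            extra_compile_args={
                'cxx': [],
                'nvcc': ['-gencode=arch=compute_86,code=sm_86'],
            },
        ),
    ],
    cmdclass={'build_ext': BuildExtension},
)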
st48062
Hi. I’ve been looking for a way to upsample time series arrays using pytorch. Currently I use scipy.signal.resample() which works great but there is a cost to transferring my data from device -> cpu to run this function. Is there a native pytorch function that does this? (note nn.functional.interpolate is linear as far as I can tell after plotting outputs). If not, is there a way to use scipy without transferring my arrays from device->cpu? Thanks so much! Catubc
st48063
I'm not sure how scipy implements resample, but maybe you can use the PyTorch spectral ops to mimic the function. Unfortunately not, since scipy uses numpy internally, which only operates on the CPU.
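E.g. something along these lines might work as a starting point, assuming a version that ships the torch.fft module (untested sketch; the handling of the Nyquist bin differs slightly from scipy's implementation):

import torch

def fft_resample(x, num):
    # x: real signal with time along the last dimension
    spec = torch.fft.rfft(x, dim=-1)
    # irfft pads/trims the spectrum so that the output has `num` samples
    out = torch.fft.irfft(spec, n=num, dim=-1)
    return out * (num / x.shape[-1])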
st48064
Ok, thanks. I ended up calling on the expertise of one of our colleagues and he wrote a cuda function that does Bspline interpolation. If there is time/interest, perhaps that could eventually be developed to be shared with the community.
st48065
Hey @catubc , I’m having the exact same problem, any chance you could share the interpolation code? Thanks!
st48066
Hi Kiran. I'm not sure you will find the interpolation code that useful, as it has become quite complex and has been over-adapted to our specific problem, which now requires precomputation of bspline coefficients using scipy. In other words, it will be very slow for most problems. It's also proprietary to the code developer, but I can ask him to see if he'll share it.
st48067
Maybe it's too late, but I wrote code for the same. The code can be found at My GitHub Code, and I also wrote a small article about it on PyTorch Discuss.