st99868
One can run all PyTorch tests by executing run_test.py. Do we have a similar script for running the Caffe2 tests as well?
st99869
I use Windows 10 and PyCharm, but I can't import PyTorch when running the code. I used Anaconda to install PyTorch with these commands: conda install pytorch-cpu -c pytorch and pip install torchvision. Then I configured the interpreter of my project, and the interpreter can see the torch package. However, when I import torch in code, it shows the following error: ImportError: DLL load failed: The specified module could not be found. I can only import torch and run the code in the Anaconda Prompt; it fails in PyCharm. Please help me. Thanks.
st99870
Solved by mahi80 in post #6 Please try this
st99871
It's an admin rights issue: you don't have admin rights, hence the DLL load is failing.
st99872
https://gist.github.com/peterjc123/6b804651288e76db7b5fabe5348e1f03 (solution.md): It was caused by some missing dependencies: mkl, mkl-fft, intel-openmp and the VC 2017 Redist. For conda packages:
```powershell
conda install mkl mkl-fft intel-openmp numpy
# If the package is not found, do this
conda update conda
```
For wheel packages: (truncated in the original preview; see the gist) Please try this
st99873
Is there any way to do that? The L-BFGS optimizer does not have a weight decay option, and adding an explicit weight decay throws an error inside the closure() claiming that gradients have been released.
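One workaround that is often suggested (not from this thread, and the model/data below are made up): add the L2 penalty to the loss inside the closure itself, so it is recomputed on every L-BFGS evaluation:
```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(10, 1)                 # hypothetical model and data
x, y = torch.randn(32, 10), torch.randn(32, 1)
optimizer = torch.optim.LBFGS(model.parameters(), lr=0.1)
weight_decay = 1e-4

def closure():
    optimizer.zero_grad()
    loss = F.mse_loss(model(x), y)
    # add the L2 penalty by hand instead of relying on an optimizer option
    l2 = sum(p.pow(2).sum() for p in model.parameters())
    loss = loss + weight_decay * l2
    loss.backward()
    return loss

optimizer.step(closure)
```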
st99874
Hi all, I have two weights and want to optimize them separately: Weight1 = torch.tensor(torch.FloatTensor([1]), requires_grad=True) Weight2 = torch.tensor(torch.FloatTensor([1]), requires_grad=True) params = [Weight1, Weight2] opt = torch.optim.Adam(params, lr=LR) After each update step(), I want to normalize these weights to force them to satisfy Weight1 + Weight2 == 2. To do that, I am using: coef = 2/torch.add(Weight1, Weight2) params = [coef*Weight1, coef*Weight2] My problem is that after training, the values of Weight1/Weight2 and params are different. For example, the weights are: tensor([ 0.7168]) tensor([ 0.7028]), and params is: [tensor([ 1.0099]), tensor([ 0.9901])]. Any idea? Thanks, Hossein
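A possible explanation and a sketch of a workaround (not from this thread): coef*Weight1 creates new tensors, while the optimizer keeps updating the original Weight1/Weight2, so the normalization never feeds back into the optimized parameters. One way to keep them in sync is to renormalize the very tensors the optimizer holds, in place and outside the graph:
```python
import torch

weight1 = torch.ones(1, requires_grad=True)
weight2 = torch.ones(1, requires_grad=True)
opt = torch.optim.Adam([weight1, weight2], lr=0.01)

for _ in range(100):
    loss = (weight1 * 3.0 + weight2 * 0.5).pow(2).sum()   # dummy loss, just for illustration
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():              # keep the renormalization out of the autograd graph
        coef = 2.0 / (weight1 + weight2)
        weight1.mul_(coef)             # modify the same tensors the optimizer is tracking
        weight2.mul_(coef)

print(weight1 + weight2)               # stays at 2 (up to floating point error)
```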
st99875
Hello, I have a problem when my model is running. I have searched for the solution but I didn’t find it anywhere. So hopefully I would get the solution here. I have a piece of code generating a ModuleList in the init() method. if pooling_type == 1: maxpool = nn.MaxPool2d(kernel, stride=stride, return_indices=True) module_block.add_module('maxpool2d_{}'.format(index), maxpool) else: maxunpool = nn.MaxUnpool2d(kernel, stride=stride) module_block.add_module('maxunpool2d_{}'.format(index), maxunpool) Then in the forward(), I have this code. if pooling_type == 1: x, return_ids[index] = self.module_list[index](x) else: # unpooling layer must be supplied with ids. ids_from = int(module['ids_from']) ids = return_ids[ids_from] x = self.module_list[index](x, ids) self.module_list is a ModuleList generated in the init method. I keep the return_indices in a dictionary called return_ids. However when it came to unpooling layer, it threw an error that TypeError: forward() takes 2 positional arguments but 3 were given PS. module_block is nn.Sequential(). I am not sure what is wrong with the code. Has anyone experienced this error?
st99876
I solved it myself! I realized that nn.Sequential's forward accepts only one argument. The workaround is to access the max-unpooling layer inside the Sequential by index, for example x = self.module_list[index][0](x, ids)
st99877
Hi baboonga, You can replace nn.Sequential with a custom subclass like below. class MultiPrmSequential(nn.Sequential): def __init__(self, *args): super(MultiPrmSequential, self).__init__(*args) def forward(self, *input): for module in self._modules.values(): input = module(*input) return input
st99878
Say I want to train a model f_W indirectly by learning a displacement D to a (constant) initialization W0. In other words, I want to re-parametrize f_W = f_(W0+D) and then optimize D via SGD while W0 remains unchanged. Is there a simple way to modify an existing PyTorch model to use a different parametrization (preferably without having to re-write the entire model class by hand)?
st99879
Hi! I am currently writing a UNet implementation for medical image segmentation. However, I am struggling with memory problems, both when running with GPU and CPU. I’ve already read some posts and managed to solve the leaks by deleting intermediate tensors when doing the forward pass, but still my model requires around 16GB of RAM at some moments. When only loading the data, this already requires 2.5GB of RAM. Then performing a forward pass with a batch-size of 5 pushes this to 16GB at most. You can find most of my code below, all tips on how to write memory efficient code or inspecting what parts require a lot of memory are really appreciated. Dataset class MedicalData(Dataset): patches = [] labels = [] def __init__(self, dataset_location, transform=None): self.transform = transform for file in os.listdir(dataset_location): filename = os.fsdecode(file) #Load the training data from pickle files if filename.startswith('patches'): with open(dataset_location + filename, 'rb') as handle: patches = pickle.load(handle) self.patches.append(patches) masks_filename = 'masks' + filename.split('patches')[1] print(filename, masks_filename) with open(dataset_location + masks_filename, 'rb') as handle: labels = pickle.load(handle) self.labels.append(labels) #Flatten the lists into one list self.patches = [item for sublist in self.patches for item in sublist] self.labels = [item for sublist in self.labels for item in sublist] self.patches = self.patches[:10] self.labels = self.labels[:10] assert (len(self.patches) == len(self.labels)) def __getitem__(self, index): patch = self.patches[index] patch = np.expand_dims(patch, axis=0) if self.transform is not None: patch = self.transform(patch) # Convert patch and label to torch tensors patch = torch.from_numpy(np.asarray(patch)) label = torch.from_numpy(np.asarray(self.labels[index])) #Convert uint8 to float tensors patch = patch.type(torch.FloatTensor) label = label.type(torch.FloatTensor) return patch, label UNet class Unet(nn.Module): #input_channels is number of channels in input image #num_filters is the amount of filters in the first conv layer def __init__(self, input_channels, num_classes, num_filters, depth, padding=False): super(Unet, self).__init__() self.input_channels = input_channels self.num_classes = num_classes self.num_filters = num_filters self.depth = depth self.padding = padding print("No of classes: %d \nNo of input channels: %d \nNo of filters first layer: %d \nDepth of the network: %d \nPadding: %d" % (self.num_classes, self.input_channels, self.num_filters, self.depth, self.padding)) self.contracting_path = nn.ModuleList() for i in range(depth): input = self.input_channels if i == 0 else output output = self.num_filters*(2**i) self.contracting_path.append(DownConvBlock(input, output, padding)) self.upsampling_path = nn.ModuleList() for i in range(depth-1): input = output output = input // 2 self.upsampling_path.append(UpConvBlock(input, output, padding)) self.last_layer = nn.Conv2d(output, num_classes, kernel_size=1) def forward(self, x): blocks = [] for i, down in enumerate(self.contracting_path): x = down(x) if i != len(self.contracting_path)-1: blocks.append(x) #x = F.avg_pool2d(x, 2) x = F.max_pool2d(x, 2) for i, up in enumerate(self.upsampling_path): x = up(x, blocks[-i-1]) del blocks #Delete to fix memory leak return self.last_layer(x) class DownConvBlock(nn.Module): def __init__(self, input_dim, output_dim, padding): super(DownConvBlock, self).__init__() layers = [] layers.append(nn.Conv2d(input_dim, output_dim, kernel_size=3, 
padding=int(padding))) layers.append(nn.BatchNorm2d(output_dim)) layers.append(nn.ReLU(inplace=True)) #Inplace is true is used to save memory layers.append(nn.Conv2d(output_dim, output_dim, kernel_size=3, padding=int(padding))) layers.append(nn.BatchNorm2d(output_dim)) layers.append(nn.ReLU(inplace=True)) self.layers = nn.Sequential(*layers) def forward(self, patch): return self.layers(patch) class UpConvBlock(nn.Module): def __init__(self, input_dim, output_dim, padding, bilinear=False): super(UpConvBlock, self).__init__() if bilinear: #self.upconv_layer = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True) self.upconv_layer = nn.Sequential(nn.Upsample(mode='bilinear', scale_factor=2, align_corners=True), nn.Conv2d(input_dim, output_dim, kernel_size=1)) else: self.upconv_layer = nn.ConvTranspose2d(input_dim, output_dim, kernel_size=2, stride=2) self.conv_block = DownConvBlock(input_dim, output_dim, padding) def forward(self, x, bridge): up = self.upconv_layer(x) crop1 = self.center_crop(bridge, up.shape[2:]) out = torch.cat([up, crop1], 1) del up del crop1 return self.conv_block(out)
st99880
Hi, This looks like a fairly standard model. Just a few questions What is the value of depth, num_filters? What is the size of your inputs in width and height? This output = self.num_filters*(2**i) looks like it’s going to grow very very fast no? Is that expected to be exponential in the depth?
st99881
Hi, thanks for your reply! The num_filters is 64 and the depth is 4. The input images are 512x512x1. This is an example of the network’s structure: Unet( (contracting_path): ModuleList( (0): DownConvBlock( (layers): Sequential( (0): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace) (3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace) ) ) (1): DownConvBlock( (layers): Sequential( (0): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace) (3): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace) ) ) (2): DownConvBlock( (layers): Sequential( (0): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace) (3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace) ) ) (3): DownConvBlock( (layers): Sequential( (0): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace) (3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace) ) ) ) (upsampling_path): ModuleList( (0): UpConvBlock( (upconv_layer): ConvTranspose2d(512, 256, kernel_size=(2, 2), stride=(2, 2)) (conv_block): DownConvBlock( (layers): Sequential( (0): Conv2d(512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace) (3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace) ) ) ) (1): UpConvBlock( (upconv_layer): ConvTranspose2d(256, 128, kernel_size=(2, 2), stride=(2, 2)) (conv_block): DownConvBlock( (layers): Sequential( (0): Conv2d(256, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace) (3): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace) ) ) ) (2): UpConvBlock( (upconv_layer): ConvTranspose2d(128, 64, kernel_size=(2, 2), stride=(2, 2)) (conv_block): DownConvBlock( (layers): Sequential( (0): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace) (3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace) ) ) ) ) (last_layer): Conv2d(64, 1, kernel_size=(1, 1), stride=(1, 1)) )
st99882
Hi, Maybe you have a few too many channels? Convs like Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) on a 512x512x512 input are going to be quite memory heavy! They will have ((512+2*1-3)/1+1)*((512+2*1-3)/1+1) = 262144 patches, each of size 512 x 3x3 = 4608, which means that the unfolding for the convolution will contain batch x 1 207 959 552 elements.
st99883
Thanks for having a look! I am running this model on my 1080Ti and I get CUDA out of memory immediately. What is the best way to spread the load between my RAM and my GPU, so that I can run the full model? Thanks!
st99884
Spreading between RAM and GPU might be a bit tricky to do. There is no tool to do it at the moment. You can try to put your large DownConvBlock and UpConvBlock into a checkpoint. This is built just for that purpose.
st99885
Clear! I will have a look at checkpoints. If you have any other tips for memory efficiency, please tell me!
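For reference, a minimal sketch of what checkpointing looks like (not from this thread; the block and sizes are made-up examples, and torch.utils.checkpoint is assumed to be available in your version):
```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

block = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
x = torch.randn(1, 64, 128, 128, requires_grad=True)

y = checkpoint(block, x)   # forward runs normally, but intermediate activations are not stored
y.sum().backward()         # they are recomputed here during the backward pass
```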
st99886
Hi, it seems torch.save and torch.load support many types of objects, e.g. list, dict, etc. One question: what is the difference between saving/loading a model/optimizer directly and saving/loading its state_dict()?
st99887
Solved by ptrblck in post #2 You could use scipy.stats.signaltonoise.
st99888
Hi, I am sharing GPUs with other people. The thing is, during training the batch size is small, which takes 1G of GPU memory, while during validation the batch size is large, which takes 5G. The problem is that during training other people may use the remaining memory, and my task runs out of memory when it gets to validation. So, is it possible to specify the GPU memory usage for my task for the whole process and keep it at 5G all the time? Thank you.
st99889
When I use torch.load(), it gives an error: No module named 'model'. import torch path = "D:\python\my_ML\model\resume.49.pkl" LM_model = torch.load(path)
st99890
Hi, first, you should not serialize models but just their state_dict() to avoid such problems. Then you can recreate the model and load_state_dict() into it to get all the weights back. This is a problem of Python serialization: you should have exactly the same imports when loading as when you saved the model, and you should import the model module the same way as it was done when you saved.
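A minimal sketch of that workflow (not from this thread; nn.Linear stands in for the real model):
```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                          # stand-in for your real model
torch.save(model.state_dict(), "checkpoint.pth")  # save only the weights

# later / in another script: recreate the model with the same code and imports,
# then load the weights back into it
model = nn.Linear(10, 2)
model.load_state_dict(torch.load("checkpoint.pth"))
```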
st99891
Thank you very much. I got it working the way you said. Since the model needs to be initialized by passing in a 'data' object that contains the parameters to build the model, I saved the model directly. If I only save the parameters, I need to create a 'data' object when rebuilding the model, which is troublesome. Is there any good way around this?
st99892
I am not sure I understand in detail what you do here. But maybe you can make this data object a state dict? That way you just need to save that, then load it and build your new model with it?
st99893
Hi, I am currently experimenting with an idea that would require me to have a “dynamic” kernel (a kernel that changes with the input). So for each input “patch”, I would have a function f (a simple MLP) that produces the desired filter for this specific part of the image, but it seems the convolution-operator only takes a static filter. How can I achieve this? Thanks, Leander
st99894
Hi, there is no builtin way to do this, but you can do it by using multiple convs. If you want your filter to be an MLP that takes a chan_in x in x in patch, has a hidden layer of size hidden and then outputs chan_out values, the whole thing transforms an image of size Batch x chan_in x in x in -> Batch x chan_out x 1 x 1 (increasing the size in will increase the output size from 1 to something else depending on stride/padding). You can use two convs with the following parameters: the first convolution changes the channels from chan_in to hidden, with kernel size in x in and the stride and padding of the original conv; add a ReLU or whatever non-linearity you want here; the second conv changes the channels from hidden to chan_out, with kernel size 1 x 1, stride 1 and no padding. I haven't tested this, so some numbers might be off, but it should work. A sketch is below.
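A rough sketch of that two-conv construction (not from the thread; all sizes are made-up examples):
```python
import torch
import torch.nn as nn

chan_in, hidden, chan_out, k = 3, 64, 16, 5      # example sizes
per_patch_mlp = nn.Sequential(
    nn.Conv2d(chan_in, hidden, kernel_size=k),   # first MLP layer, applied to every k x k patch
    nn.ReLU(),
    nn.Conv2d(hidden, chan_out, kernel_size=1),  # second MLP layer, applied per spatial location
)

x = torch.randn(2, chan_in, 32, 32)
out = per_patch_mlp(x)   # shape (2, chan_out, 28, 28): one MLP output per patch
```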
st99895
thanks, interesting idea. But, if I understood you right, this will only evaluate f and not apply the filter. My function f doesn’t produce the result, but the filter for the convolution for the specific input. I don’t think I can use a convolution to apply the f’s produced filter, because with a custom kernel I can only add stuff together.
st99896
Hooo sorry I misread your question, I though you wanted the weights to be an MLP. You want for every patch to have an MLP to generates the weights, and then apply these weights to this patch? In that case you will need to use unfold 17. From the example in the doc, you will need to generate w from inp_unf which contains every patch (L such patches). And since you want one weight per patch, your weights will be (N, patch_size, L, chan_out). Then replace the matmul that does the conv by an element wise multiplication after expanding inp_unf and accumulate for each batch. This might not make sense so here is a small sample based on the unfold example (same notations as the ones introduced in the doc for unfold): import torch inp = torch.randn(1, 3, 10, 12) w = torch.randn(2, 3, 4, 5) # Original conv print("Original Conv") inp_unf = torch.nn.functional.unfold(inp, (4, 5)) out_unf = inp_unf.transpose(1, 2).matmul(w.view(w.size(0), -1).t()).transpose(1, 2) out = torch.nn.functional.fold(out_unf, (7, 8), (1, 1)) # or equivalently (and avoiding a copy), # out = out_unf.view(1, 2, 7, 8) print((torch.nn.functional.conv2d(inp, w) - out).abs().max()) # Custom conv print("Custom Conv") def f(inp_unf, chan_out): # Input: (N, L, patch_size) that contains every single input patch # Output: (N, L, chan_out * patch_size) that contains the weights that will be used for every patch # Here you can have an MLP that has patch_size input features and chan_out * patch_size output features. # For simplicity (and check) we just expand the original weights here: output = w.view(-1).unsqueeze(0).unsqueeze(0) out_size = list(inp_unf.size()) out_size[-1] *= chan_out return output.expand(*out_size) inp_unf = torch.nn.functional.unfold(inp, (4, 5)) full_weights = f(inp_unf.transpose(1, 2), w.size(0)) # Reshape full_weights to the expected shape full_weights = full_weights.view(inp_unf.size(0), inp_unf.size(2), w.size(0), inp_unf.size(1)).permute(0, 3, 1, 2) # Compute the product weight*entry in each patch full_out = inp_unf.unsqueeze(-1).expand_as(full_weights) * full_weights # Sum over patches out_unf = full_out.sum(1) # Put chan dim at the right place out_unf = out_unf.transpose(1, 2) out = torch.nn.functional.fold(out_unf, (7, 8), (1, 1)) # or equivalently (and avoiding a copy), # out = out_unf.view(1, 2, 7, 8) print((torch.nn.functional.conv2d(inp, w) - out).abs().max()) Disclaimer: The original conv ops are much more optimized than that so even the unfold/matmul/fold version will be slower than conv2d There are a lot of LARGE intermediary matrices here, the memory requirement for the autograd is going to be quite large. checkpointing 4 might help if you really need to do this. The MLP within f will do a mapping from patch_size features to chan_out*patch_size features which should be fairly small. This same MLP will work with a batch size of N*L. Where L is given by the formula in the link for the unfold. This will be HUGE and so be careful what you do here as this can because very expensive (both in terms of runtime and memory) very quickly. Hope this helps
st99897
Hmm, thanks for the idea, I will try it and see whether it works well enough. Is there a way to extend PyTorch to provide this functionality (with reasonable effort)?
st99898
Not really. The convolutions are actually done (with some minor optimization) exactly as in the first part of the sample above. This means that there is never a point in the code where you look at a single patch at a time.
st99899
Assuming the speed of the naive solution would not suffice, how difficult would you think implementing/extending pytorch would be?
st99900
Hi, I am not sure if it would be possible to implement it much better. If you use convolutions as MM, then it will be just saving a bit of memory by wrapping the above code in a Function. If you want to use other algorithms, I am not even sure it is possible to do it.
st99901
My loss function is NLL loss,which takes in as inputs [108416, 3] and targets as [108416] and i get a resulting loss value of 2.2623 , but after the loss computation when i do the optimizer.step() call. I get AND THIS IS LOSS Variable containing: 2.2623 [torch.cuda.FloatTensor of size 1 (GPU 0)] Traceback (most recent call last): File "/mnt/sdc1/project/training/fpr4x_liver_1x_2channel.py", line 336, in <module> train_fpr4x_liver_1x_2channel_model() File "/mnt/sdc1/project/training/fpr4x_liver_1x_2channel.py", line 245, in train_fpr4x_liver_1x_2channel_model est.run_experiment(opts.num_epochs, 5000,50) File "/media/redible/sdc/project/training/expt_utils.py", line 236, in run_experiment self.trainer.train() File "/media/redible/sdc/project/training/expt_utils.py", line 75, in train loss, outputs = self.net_mgr._forward_backward(network_inputs, loss_inputs) File "/media/redible/sdc/project/training/network_manager.py", line 19, in _forward_backward self.optimizer.step() File "/usr/local/lib/python3.5/dist-packages/torch/optim/adam.py", line 69, in step exp_avg.mul_(beta1).add_(1 - beta1, grad) RuntimeError: invalid argument 3: sizes do not match at /pytorch/torch/lib/THC/generated/../generic/THCTensorMathPointwise.cu:271 Not sure whats the cause of the error any help would be appreciated,Thanks in advance.
st99902
Sorry for the late reply, I was a bit caught up. I have fixed this issue: the problem turned out to be the resume-training part of the code, which was pointing to a different model path, and that's why I was getting the sizes-do-not-match error. Anyway, thank you for your time.
st99903
I have a dataset in which the class labels of the training set are not from 0 to C-1. It is a subset of a bigger range, but in no particular order. Hence, if I take the max class id as C-1, there will be many ids for which there are no training examples. It would be great if I could pass in the one-hot vectors. What could be a good way to do it that way ? Thanks!
st99904
Why would you like to pass the target in a one-hot format? This would not change the fact that there are many class ids without training examples or do I misunderstand it? I would recommend to remap the class indices to a valid range, i.e. [0, maxClassID]. Otherwise your model will output a large output tensor where a lot of logits are meaningless. E.g. let’s assume your original target had 1000 classes. Your model would therefore output a tensor of shape [batch_size, 1000]. If your subset has only two classes, e.g. class0 and class999, your model might predict unknown classes (class1-class998). How would you deal with it, if the maximal prediction is class789?
st99905
Yes, you are right, it makes sense to map it to a [0, C-1] range and use it as such. Thanks a lot!
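For reference, a minimal sketch of such a remapping (not from this thread; the label values below are made up):
```python
import torch

original_labels = torch.tensor([3, 17, 3, 999, 17])          # made-up class ids
classes = sorted(set(original_labels.tolist()))               # [3, 17, 999]
id_to_idx = {c: i for i, c in enumerate(classes)}             # {3: 0, 17: 1, 999: 2}
remapped = torch.tensor([id_to_idx[int(c)] for c in original_labels])
print(remapped)                                               # tensor([0, 1, 0, 2, 1])
```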
st99906
I am installing the nightly build of PyTorch using conda install pytorch-nightly -c pytorch. However, if I try to install torchvision after this via conda install torchvision -c pytorch, it tries to install pytorch 0.4.1.post2. How can I have torchvision use the nightly build instead?
st99907
Solved by Radulescu_Petru in post #2 First install torchvision, then uninstall torch 0.4 that comes with it and then finally install torch-nightly. That’s what I did to get it working.
st99908
First install torchvision, then uninstall torch 0.4 that comes with it and then finally install torch-nightly. That’s what I did to get it working.
st99909
I am training some models on GPU, however, it seems like the current (runtime) performance is limited by CPUs. I got something like %Cpu(s): 29.3 us, 54.2 sy, ... in top. Is there any common reason for this?
st99910
Do you mind providing some more information about your model and task? I think it may be hard to diagnose the issue without it. My initial thought is that some processing task each time is maxing out one of the CPU threads and is bottlenecking the model but I can’t say without knowing the model and task and how it’s implemented. I’m not sure if someone else has any other thoughts?
st99911
I am training an RNN model. I have a customized DataLoader which takes a string and returns the padded sequence and its original length (i.e., "abc" => [1,2,3,0,0], 3 when padding to length 5). These data are fed into an LSTM model with sequence packing. Also, if it is a CPU-bound job, shouldn't the CPU util be around 100% with a small sys?
st99912
Hi, I'm trying to use only a linear layer (fc = nn.Linear(1000, 1000)), and I think I can treat the weight as a 1000 x 1000 matrix. I plan to use only part of the layer; for example, the input has dimension 400 and the output has dimension 600, so I only use a 400 x 600 submatrix of the 1000 x 1000 matrix. How could I achieve this? Do I need to construct another nn.Linear(400, 600) and copy the updated weights back to the original 1000 x 1000 matrix on each iteration? I want to do this because every time I update the weight, it is possible that I update a different 400 x 600 submatrix of the 1000 x 1000 matrix.
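One possible approach (not from this thread): slice the big weight and use the functional interface, so gradients flow into the corresponding block of the full matrix; a different slice can be chosen on each iteration. A minimal sketch:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

fc = nn.Linear(1000, 1000)
x = torch.randn(8, 400)                                  # batch with 400 input features

# nn.Linear stores its weight as (out_features, in_features),
# so a 600-output / 400-input sub-layer is a 600 x 400 block of the weight
y = F.linear(x, fc.weight[:600, :400], fc.bias[:600])    # output shape (8, 600)
y.sum().backward()                                       # grads land only in that block of fc.weight.grad
```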
st99913
Can somebody give me a link to the pytorch c++ api? I’ve looked for it on github but there are a lot of versions and none of them seem to be from the official account. Thanks!
st99914
As far as I understand, it's a work in progress, but you might consult https://pytorch.org/cppdocs/ Best regards Thomas
st99915
To simplify the question, we use the sample code from the documentation: >>> rnn = nn.LSTMCell(10, 20) >>> input = torch.randn(6, 3, 10) >>> hx = torch.randn(3, 20) >>> cx = torch.randn(3, 20) >>> output = [] >>> for i in range(6): hx, cx = rnn(input[i], (hx, cx)) output.append(hx) Let's say that at time step 4 (out of 6) in the sequence, we want to reset the hidden states for the second example (with batch size 3), i.e. one sequence terminates and a new episode comes in at the next time step. In this case, what is the right way to do it? Should I use hx[1].fill_(0.0)? To my understanding, it might break the backward pass, as it is not allowed to modify tensor values in this way. Another idea I have in mind is to use a mask tensor created on the fly, which does not require gradients, e.g. hx = hx * torch.tensor([1.0, 0.0, 1.0]).unsqueeze(1).expand_as(hx)
st99916
fc = nn.Linear(128, 128) lin = fc1(torch.tensor([16,128])) Can anyone please tell me why this error appears?.. it works when editing the last line to be lin = fc1(torch.Tensor(16,128)) … what is the difference between torch.tensor() and torch.Tensor() Thanks a lot
st99917
Solved by ptrblck in post #6 The types of output and target should be the same. Try to cast your target to float32 using target = target.float().
st99918
In the first example you are creating a tensor containing two values: 16 and 128. As the type is inferred from the input, it’s a torch.LongTensor, which yields the error message (besides having a wrong shape, too). In your second example you are creating an un-initialized tensor of shape [16, 128]. Your code won’t throw an error, but since the tensor is uninitialized, you might get strange output values. I assume you would like to create a random tensor of shape [16, 128]. You can do it with x = torch.randn(16, 128).
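To make the distinction concrete, a small sketch (not from the thread):
```python
import torch

a = torch.tensor([16, 128])   # a LongTensor holding the two values 16 and 128
b = torch.Tensor(16, 128)     # an uninitialized FloatTensor of shape [16, 128]
c = torch.randn(16, 128)      # usually what you want: a random FloatTensor of shape [16, 128]
print(a.shape, b.shape, c.shape)
```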
st99919
@ptrblck Thanks for the clarification, I had missed that. But can you tell me why the error still exists here: loss = criterion(output, target), although the output shape and target shape are the same (torch.Size([16, 1]))?
st99920
This is the error: RuntimeError: Expected object of type torch.cuda.FloatTensor but found type torch.cuda.DoubleTensor for argument #2 ‘target’ and the criterion is: criterion = nn.MSELoss(size_average=True).cuda() and those are my output shapes and values: output tensor([[ 0.4398], [ 0.4516], [ 0.4457], [ 0.4424], [ 0.4521], [ 0.4275], [ 0.4483], [ 0.4403], [ 0.4385], [ 0.4634], [ 0.4566], [ 0.4427], [ 0.4404], [ 0.4565], [ 0.4455], [ 0.4476]], device=‘cuda:0’) target tensor(1.00000e-02 * [[ 1.7727], [ 4.0564], [ 4.9746], [ 4.9140], [ 5.3685], [ 5.9247], [ 5.2389], [ 5.3715], [ 5.4332], [ 5.2338], [ 5.4343], [ 3.9557], [ 4.1932], [ 4.2754], [ 3.8148], [ 3.9546]], dtype=torch.float64, device=‘cuda:0’)
st99921
The types of output and target should be the same. Try to cast your target to float32 using target = target.float().
st99922
That really helped! Thank you a lot. But can you tell me how output and target did not match in type although both of them are float, and how I can avoid this again? I believe part of this issue happened while importing my dataset using the DataLoader. This is the class for importing my dataframe: class My_dataset(Dataset): def __init__(self, csv_path): self.data_info = pd.read_csv(csv_path,header=None) self.data = np.asarray(self.data_info.iloc[:,1:7], dtype=np.float64) self.label = np.asarray(self.data_info.iloc[:,7]) self.data_len = len(self.data_info.index) def __getitem__(self, index): single_row = self.data[index] row = torch.FloatTensor(single_row) target = torch.from_numpy(np.array(self.label[index])) return (row, target) def __len__(self): return self.data_len and here's a snippet of the main function while iterating over my_dataset: for data, target in train_loader: target = target.float() target = target.unsqueeze_(0) data = Variable(data.permute(1, 0)).contiguous() target = Variable(target.permute(1,0)) Where do you think this error arose from?
st99923
If I’m not mistaken np.asarray casts the data usually to np.float64. Also pd.read_csv might already return float64. You also cast your self.data to float64 which is unnecessary as you create torch.FloatTensors in __getitem__. Try to load your data as: self.data = np.asarray(self.data_info.iloc[:, 1:7], dtype=np.float32) self.label = np.asarray(self.data_info.iloc[:,7], dtype=np.float32)
st99924
RuntimeError: Expected object of type torch.FloatTensor but found type torch.LongTensor def TernarizeWeights(self): for index in range(self.num_of_params): self.target_modules[index].data = self.Ternarize(self.target_modules[index].data) def Ternarize(self,tensor): tensor = tensor.cpu() output = torch.zeros(tensor.size()) delta = self.Delta(tensor) alpha = self.Alpha(tensor,delta) for i in range(tensor.size()[0]): for w in tensor[i].view(1,-1): pos_one = (w > delta[i]).type(torch.FloatTensor) neg_one = torch.mul((w < -delta[i]).type(torch.FloatTensor),-1) out = torch.add(pos_one,neg_one).view(tensor.size()[1:]) output[i] = torch.add(output[i],torch.mul(out,alpha[i])) return output.cuda() can anyone help me , how to solve this error? thanks in adavance
st99925
Hi, I want to use the functions in torch._C._thcunn. Are there any docs for that? What's the meaning of the parameters? I found the C code for CudaSpatialFullDilatedConvolution_updateGradInput: void THNN_(SpatialFullDilatedConvolution_updateOutput)( THCState *state, THCTensor *input, THCTensor *output, THCTensor *weight, THCTensor *bias, THCTensor *columns, THCTensor *ones, int kW, int kH, int dW, int dH, int padW, int padH, int dilationW, int dilationH, int adjW, int adjH) but I don't know the meaning of the ones argument.
st99926
a) Don’t do that. The torch.nn.functional interface is the supported way to use that. b) Ones and columns are buffers (see aten/src/ATen/nn.yaml), likely for storing information for the backward, they are used to store the columns after im2col and have a buffer of ones to pass into multiplication functions at the right time. Best regards Thomas
st99927
Then what's the meaning of the state? It seems there is no state in the Python interface.
st99928
That's where legacy code (THC/THCUNN) keeps the information it needs on the GPUs, the RNG state, allocators etc. This used to be passed to every function. Nowadays ATen usually handles most of it in the background and there are some GlobalContext functions to get the current device, stream etc. Best regards Thomas
st99929
I've implemented a specific dataset class for my purpose by inheriting from Dataset, and it works properly. I'd like to take a very small subset of the dataset, say 50 samples, to see if my model can overfit it successfully. Yet the data consist of many h5 files and json files, so changing this inside my dataset class seems very hard and infeasible. I tried manipulating the training file by using indexing, but that was not possible since the Dataset object or enumerate object does not support indexing. I can provide additional info or code if requested. The way I use the DataLoader is: for idx, batch in enumerate(dataloader_train): ...
st99930
This may not be applicable to your case, but for small sanity checks like these I often just insert a break statement after a few batches: for idx, batch in enumerate(dataloader_train): if idx > 10: break ... You should turn shuffling off in the dataloader, to get the same batches each epoch. You have to edit the lines out afterwards instead of getting a proper regression test, but for quick-and-dirty model development it’s a simple trick.
st99931
My workaround was like: # indices to draw samples from the dataset. picks = np.random.permutation(20) dataloader_train = DataLoader( dataset, batch_size=batch_size, shuffle=False, # note that sampler and shuffle arguments are mutually exclusive sampler=picks, collate_fn=dataset.collate_fn )
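Another option, if your PyTorch version ships torch.utils.data.Subset (a sketch, not from the thread, reusing the dataset, batch_size and collate_fn from above):
```python
from torch.utils.data import Subset, DataLoader

small_dataset = Subset(dataset, list(range(50)))   # first 50 samples of the existing dataset
dataloader_train = DataLoader(
    small_dataset,
    batch_size=batch_size,
    shuffle=True,
    collate_fn=dataset.collate_fn,
)
```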
st99932
How can I use a trained model as part of a new model in PyTorch, such that the parameters of the trained model cannot be changed during the training process of the new model?
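A common pattern for this (a sketch, not from this thread; the models and sizes are hypothetical) is to freeze the trained part and only pass the new parameters to the optimizer:
```python
import torch
import torch.nn as nn

# stand-in for an already trained model (use your real one instead)
trained_model = nn.Sequential(nn.Linear(784, 512), nn.ReLU())

class NewModel(nn.Module):
    def __init__(self, backbone):
        super(NewModel, self).__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False          # freeze the trained weights
        self.head = nn.Linear(512, 10)       # the new, trainable part

    def forward(self, x):
        with torch.no_grad():                # no graph needed through the frozen part
            feats = self.backbone(x)
        return self.head(feats)

model = NewModel(trained_model)
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)
out = model(torch.randn(4, 784))             # only self.head will receive gradients
```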
st99933
I want to implement my own version of LSTM, but I have a question. I am using this code (a basic LSTM cell) as a starting point to modify the LSTM cell: https://github.com/jihunchoi/recurrent-batch-normalization-pytorch/blob/master/bnlstm.py My question is: if I use my version of LSTM and LSTMCell, will loss.backward() still update the weights? Same for opt.step(). Or do I have to write my own code for the loss to update the weights and for the optimizer functions?
st99934
Solved by tom in post #2 At first glance, the code you link assembles the LSTM from typical NN-functions like linear. Autograd will provide gradients, but they will be slower than a custom-made gradient – in the PyTorch C++ extension tutorial, you get a ~20% performance boost from moving from Autograd components to a custom…
st99935
At first glance, the code you link assembles the LSTM from typical NN-functions like linear. Autograd will provide gradients, but they will be slower than a custom-made gradient – in the PyTorch C++ extension tutorial, you get a ~20% performance boost from moving from Autograd components to a custom backward for the LLTM model (it advertises a 30% speedup for moving from Python + automatic gradients to C++ + custom backward, my experience is that ~10%-pts are for moving to C++ and ~20%-pts are for the backward). You wouldn’t need to change weight update and optimizer steps. Best regards Thomas
st99936
Thank you for the answer. I need to add another weight to the LSTM and change its gate functions, so this is the best code I could find to modify. Maybe it will be slower, but at least it will work the way I want.
st99937
I’m looking for a way to replicate some behavior from Lua/Torch7’s nn.GPU. Basically one could split a single model across multiple GPUs, and then run that model backwards without having to modify the “Closure” function: github.com jcjohnson/neural-style/blob/master/neural_style.lua#L360-L405 1 function setup_multi_gpu(net, params) local DEFAULT_STRATEGIES = { [2] = {3}, } local gpu_splits = nil if params.multigpu_strategy == '' then -- Use a default strategy gpu_splits = DEFAULT_STRATEGIES[#params.gpu] -- Offset the default strategy by one if we are using TV if params.tv_weight > 0 then for i = 1, #gpu_splits do gpu_splits[i] = gpu_splits[i] + 1 end end else -- Use the user-specified multigpu strategy gpu_splits = params.multigpu_strategy:split(',') for i = 1, #gpu_splits do gpu_splits[i] = tonumber(gpu_splits[i]) end end assert(gpu_splits ~= nil, 'Must specify -multigpu_strategy') This file has been truncated. show original github.com jcjohnson/neural-style/blob/master/neural_style.lua#L275-L298 -- Function to evaluate loss and gradient. We run the net forward and -- backward to get the gradient, and sum up losses from the loss modules. -- optim.lbfgs internally handles iteration and calls this function many -- times, so we manually count the number of iterations to handle printing -- and saving intermediate results. local num_calls = 0 local function feval(x) num_calls = num_calls + 1 net:forward(x) local grad = net:updateGradInput(x, dy) local loss = 0 for _, mod in ipairs(content_losses) do loss = loss + mod.loss end for _, mod in ipairs(style_losses) do loss = loss + mod.loss end maybe_print(num_calls, loss) maybe_save(num_calls) This file has been truncated. show original The key to this seems to have been the nn.GPU function in Torch7: github.com torch/nn/blob/master/doc/simple.md#gpu 1 <a name="nn.simplelayers.dok"></a> # Simple layers # Simple Modules are used for various tasks like adapting Tensor methods and providing affine transformations : * Parameterized Modules : * [Linear](#nn.Linear) : a linear transformation ; * [LinearWeightNorm](#nn.LinearWeightNorm) : a weight normalized linear transformation ; * [SparseLinear](#nn.SparseLinear) : a linear transformation with sparse inputs ; * [IndexLinear](#nn.IndexLinear) : an alternative linear transformation with for sparse inputs and max normalization ; * [Bilinear](#nn.Bilinear) : a bilinear transformation with sparse inputs ; * [PartialLinear](#nn.PartialLinear) : a linear transformation with sparse inputs with the option of only computing a subset ; * [Add](#nn.Add) : adds a bias term to the incoming data ; * [CAdd](#nn.CAdd) : a component-wise addition to the incoming data ; * [Mul](#nn.Mul) : multiply a single scalar factor to the incoming data ; * [CMul](#nn.CMul) : a component-wise multiplication to the incoming data ; * [Euclidean](#nn.Euclidean) : the euclidean distance of the input to `k` mean centers ; * [WeightedEuclidean](#nn.WeightedEuclidean) : similar to [Euclidean](#nn.Euclidean), but additionally learns a diagonal covariance matrix ; * [Cosine](#nn.Cosine) : the cosine similarity of the input to `k` mean centers ; * [Kmeans](#nn.Kmeans) : [Kmeans](https://en.wikipedia.org/wiki/K-means_clustering) clustering layer; * Modules that adapt basic Tensor methods : This file has been truncated. show original The intended use-case is not for model-parallelism where the models are executed in parallel on multiple devices, but for sequential models where a single GPU doesn’t have enough memory. 
In trying to replicate this in PyTorch, I started trying to use nn.DataParallel: github.com ProGamerGov/neural-style-pt/blob/multi-gpu/neural_style.py#L292-L315 2 def setup_multi_gpu(net): gpu_splits = params.multigpu_strategy.split(',') gpus = params.gpu #for i, gpu in enumerate(gpus): # gpus[i] = int(gpus[i]) cur_chunk = nn.Sequential() chunks = [] for i, l in enumerate(net): cur_chunk.add_module(str(i), net[i]) if str(i) in gpu_splits and gpu_splits != '': del gpu_splits[0] chunks.append(cur_chunk) cur_chunk = nn.Sequential() chunks.append(cur_chunk) new_net = nn.Sequential() for i, chunk in enumerate(chunks): out_device = gpus[i] if i == len(chunks): This file has been truncated. show original However I still seem to have to manually convert the output of each set of model layers, to a single GPU. This single GPU then ends up having a really high memory usage that negates what I am trying to do. nn.DataParallel as I understand, is meant for batch sizes larger than 1, however I am only using a batch size of one (style transfer). The issue of nn.DataParallel using a lot of memory on a single GPU is documented here, here 1, and in many other posts. Basically I am trying to separate a sequential model into a set of smaller models across multiple GPUs, so that one can use more GPU memory and thus larger inputs/outputs can be used. I am not sure that nn.DataParallel is the best option for what I am trying to do, but I am not aware of any alternatives which would work. Here’s a basic MSPaint diagram of what I am currently doing in my code, with an example of 4 GPUs. The total number of GPUs, and how many layers to give each GPU, is meant to be entirely user controlled. test.jpg1309×859 273 KB
st99938
Hey, I'm curious whether there is a performance difference between these two: mask = ... target = ... pred = ... loss = torch.sum(mask * (target - pred) ** 2) loss.backward() and mask = ... target = ... pred = ... loss = torch.sum((target[mask] - pred[mask]) ** 2) loss.backward() The code would mostly be running on the GPU.
st99939
So, have you tried? mask = torch.randn(100,100,100, device='cuda') mask2 = mask>0 target = torch.randn(100,100,100, requires_grad=True, device='cuda') pred = torch.randn(100,100,100, requires_grad=True, device='cuda') def fn1(): loss = torch.sum(mask * (target - pred) ** 2) loss.backward() torch.cuda.synchronize() def fn2(): loss = torch.sum((target[mask2] - pred[mask2]) ** 2) loss.backward() torch.cuda.synchronize() torch.cuda.synchronize() %timeit fn1() %timeit fn2() for me, the first variant is ~4 times as fast. That isn’t terribly surprising considering that multiplication of dense matrices is much cheaper than assembling the mask-selected items in a new tensor. (An intermediate case would be to only use the mask once and take the difference on all.) The first variant will be even faster when you get the JIT to fuse the two multiplications. Finally a bit of warning: In the presence of NaN or Inf in the “masked away” part, the two will be different, as the first will give NaN. You could avoid that with torch.where, but that isn’t free, either. Best regards Thomas
st99940
Thanks for an excellent answer! My only attempt was without synchronization as I forgot about the async nature of cuda and I ended up moving on. NaN values should not be an issue in this case. Thanks as well for the explanation, super helpful. Once I have more time I will try to test the difference in runtime on the project and give an update in this thread.
st99941
I am having crashes at import time when trying to run pytorch through pytest unit tests in IDE, with no meaningful information in the stack trace: ❯ coredumpctl info 12120 * PID: 12120 (python) UID: 10022 (XXXXXXXXX) GID: 1000 (XXXXXXXXX) Signal: 6 (ABRT) Timestamp: Thu 2018-09-13 14:09:05 CEST (42min ago) Command Line: /home/XXXXXXXXX/.env/bin/python /usr/share/java/pycharm-community/helpers/pycharm/_jb_pytest_runner.py --target test_pytorch.py::test_pool_01 Executable: /usr/bin/python3.6 Control Group: /user.slice/user-10022.slice/session-3.scope Unit: session-3.scope Slice: user-10022.slice Session: 3 Owner UID: 10022 (XXXXXXXXX) Boot ID: c5c915cb3ad04176800e940692d66625 Machine ID: 9ad705d336bc443abb3ac948d7d2987c Hostname: XXXXXX Storage: /var/lib/systemd/coredump/core.python.10022.c5c915cb3ad04176800e940692d66625.12120.1536840545000000.lz4 Message: Process 12120 (python) of user 10022 dumped core. Stack trace of thread 12120: #0 0x00007fcff1be1e5b raise (libpthread.so.0) #1 0x00007fcff1be1fc0 __restore_rt (libpthread.so.0) #2 0x00007fcff10acfeb raise (libc.so.6) #3 0x00007fcff10975c1 abort (libc.so.6) #4 0x00007fcf8ba19a9b _ZN9__gnu_cxx27__verbose_terminate_handlerEv.cold.1 (libstdc++.so.6) #5 0x00007fcf8ba1fefc _ZN10__cxxabiv111__terminateEPFvvE (libstdc++.so.6) #6 0x00007fcf8ba1ff57 _ZSt9terminatev (libstdc++.so.6) #7 0x00007fcf8ba201b8 __cxa_throw (libstdc++.so.6) #8 0x00007fcfb7f2ca61 n/a (/home/XXXXXXXXX/.env/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so) Does the pytorch project or someone else publish debug wheels somewhere ? I can see many build jobs at https://ci.pytorch.org 4 but no published artifacts.
st99942
Why are these lines in-place operations? How should I make them not in-place? perturbed[masks] = perturbed[masks] + (gradient * epsilon)[masks] perturbed[masks] = perturbed[masks].clamp(0, 255) Error: RuntimeError: a leaf Variable that requires grad has been used in an in-place operation. PyTorch is complaining that I am modifying a variable that requires gradient in place. However, I could not figure out how to do it any other way. Can someone kindly help?
st99943
It depends whether you want gradients to backpropagate through the update. It looks like you’re doing a masked gradient step, so these would typically not be backpropagated through. In this case: Just wrap the two statements in with torch.no_grad():. You could try if “+=” and inplace clamp_ is faster. If you want to propagate through to the previous version of perturbed, you would need to do the masking first: update = torch.zeros_like(perturbed) # doesn't require grad update[masks] = (gradient * epsilon)[masks] # because update didn't require grad, this works with autograd perturbed = (perturbed + update).clamp(0, 255) # we replace the name perturbed with a new version, so no inplace business Best regards Thomas
st99944
I sometimes find myself in a situation where I want to detach a variable from the computational graph, but keep requires_grad=True. Currently I'm doing torch.tensor(oldtensor.detach(), requires_grad=True), but this seems clumsy and (I think) copies the data, which is unnecessary. Would it be possible to add a "requires_grad" argument to detach to specify that the detached variable should still require grad (although it is now a leaf)?
st99945
The idiomatic way to do this is oldtensor.detach().requires_grad_(). That’s even a bit shorter than a keyword argument. Best regards Thomas
st99946
I have the following model: model = torch.nn.Sequential( torch.nn.Linear(D_in, H), torch.nn.ReLU(), torch.nn.Linear(H, D_out), ) The inputs are sparse: ids_torch = torch.LongTensor([rows, cols]) values_torch = torch.FloatTensor(vals) shape = [len(x_python), max_input_dim] x_tensor = torch.sparse.FloatTensor(ids_torch, values_torch, torch.Size(shape)) When the code gets to here: print("x_batches[i].size(): ", x_batches[i].size()) print("y_batches[i].size(): ", y_batches[i].size()) y_pred = model(x_batches[i]) it gives the following error: x_batches[i].size(): torch.Size([5, 4476850]) y_batches[i].size(): torch.Size([5, 1]) Traceback (most recent call last): File "mlp_classifier.py", line 186, in <module> experiment() File "mlp_classifier.py", line 183, in experiment train(x_batches, y_batches) File "mlp_classifier.py", line 101, in train y_pred = model(x_batches[i].to_dense()) File "/usr/local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/site-packages/torch/nn/modules/container.py", line 67, in forward input = module(input) File "/usr/local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 55, in forward return F.linear(input, self.weight, self.bias) File "/usr/local/lib/python3.6/site-packages/torch/nn/functional.py", line 835, in linear return torch.addmm(bias, input, weight.t()) RuntimeError: addmm(): argument 'mat1' (position 1) must be Variable, not torch.FloatTensor Any thoughts?
st99947
Which PyTorch version are you using? Variables are deprecated since version 0.4.0. You can check your version using print(torch.__version__). You can find the install instructions here.
st99948
Hello. I'm having trouble indexing a 3D tensor with a 2D tensor. I have a 3D tensor and a 2D LongTensor. For example, I have the following 3D tensor (9 x 3 x 3): X = [ (111, 112, 113) , (121, 122, 123), (131, 132, 133) ], [ (211, 212, 213) , (221, 222, 223), (231, 232, 233) ], … [ (911, 912, 913) , (921, 922, 923), (931, 932, 933) ], and a 2D LongTensor (3x3): Y = [ (1 2 3), (4 5 6), (7 8 9)] and I want to index X with Y to get the following value: Z = [ (111, 212, 313) , (421, 522, 623), (731, 832, 933) ] How can I do this?
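One way to do this kind of indexing (a sketch, not from the thread) is torch.gather along dim 0, after converting the 1-based ids in Y to 0-based indices:
```python
import torch

# build X so that X[k, i, j] == (k+1)*100 + (i+1)*10 + (j+1), matching the example above
X = (torch.arange(1, 10).view(9, 1, 1) * 100
     + torch.arange(1, 4).view(1, 3, 1) * 10
     + torch.arange(1, 4).view(1, 1, 3))
Y = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# Z[i, j] = X[Y[i, j] - 1, i, j]
Z = X.gather(0, (Y - 1).unsqueeze(0)).squeeze(0)
print(Z)   # tensor([[111, 212, 313], [421, 522, 623], [731, 832, 933]])
```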
st99949
Video card: gtx1070ti 8Gb, batchsize 64, input image size 128*128 I had such UNET with resnet152 as encoder wich worket pretty fine: class UNetResNet(nn.Module): def __init__(self, encoder_depth, num_classes, num_filters=32, dropout_2d=0.2, pretrained=False, is_deconv=False): super().__init__() self.num_classes = num_classes self.dropout_2d = dropout_2d if encoder_depth == 34: self.encoder = torchvision.models.resnet34(pretrained=pretrained) bottom_channel_nr = 512 elif encoder_depth == 101: self.encoder = torchvision.models.resnet101(pretrained=pretrained) bottom_channel_nr = 2048 elif encoder_depth == 152: self.encoder = torchvision.models.resnet152(pretrained=pretrained) bottom_channel_nr = 2048 else: raise NotImplementedError('only 34, 101, 152 version of Resnet are implemented') self.pool = nn.MaxPool2d(2, 2) self.relu = nn.ReLU(inplace=True) self.conv1 = nn.Sequential(self.encoder.conv1, self.encoder.bn1, self.encoder.relu, self.pool) #from that pool layer I would like to get rid off self.conv2 = self.encoder.layer1 self.conv3 = self.encoder.layer2 self.conv4 = self.encoder.layer3 self.conv5 = self.encoder.layer4 self.center = DecoderCenter(bottom_channel_nr, num_filters * 8 *2, num_filters * 8, False) self.dec5 = DecoderBlockV(bottom_channel_nr + num_filters * 8, num_filters * 8 * 2, num_filters * 8, is_deconv) self.dec4 = DecoderBlockV(bottom_channel_nr // 2 + num_filters * 8, num_filters * 8 * 2, num_filters * 8, is_deconv) self.dec3 = DecoderBlockV(bottom_channel_nr // 4 + num_filters * 8, num_filters * 4 * 2, num_filters * 2, is_deconv) self.dec2 = DecoderBlockV(bottom_channel_nr // 8 + num_filters * 2, num_filters * 2 * 2, num_filters * 2 * 2, is_deconv) self.dec1 = DecoderBlockV(num_filters * 2 * 2, num_filters * 2 * 2, num_filters, is_deconv) self.dec0 = ConvRelu(num_filters, num_filters) self.final = nn.Conv2d(num_filters, num_classes, kernel_size=1) def forward(self, x): conv1 = self.conv1(x) conv2 = self.conv2(conv1) conv3 = self.conv3(conv2) conv4 = self.conv4(conv3) conv5 = self.conv5(conv4) center = self.center(conv5) dec5 = self.dec5(torch.cat([center, conv5], 1)) dec4 = self.dec4(torch.cat([dec5, conv4], 1)) dec3 = self.dec3(torch.cat([dec4, conv3], 1)) dec2 = self.dec2(torch.cat([dec3, conv2], 1)) dec1 = self.dec1(dec2) dec0 = self.dec0(dec1) return self.final(F.dropout2d(dec0, p=self.dropout_2d)) # blocks class DecoderBlockV(nn.Module): def __init__(self, in_channels, middle_channels, out_channels, is_deconv=True): super(DecoderBlockV2, self).__init__() self.in_channels = in_channels if is_deconv: self.block = nn.Sequential( ConvRelu(in_channels, middle_channels), nn.ConvTranspose2d(middle_channels, out_channels, kernel_size=4, stride=2, padding=1), nn.BatchNorm2d(out_channels), nn.ReLU(inplace=True) ) else: self.block = nn.Sequential( nn.Upsample(scale_factor=2, mode='bilinear'), ConvRelu(in_channels, middle_channels), ConvRelu(middle_channels, out_channels), ) def forward(self, x): return self.block(x) class DecoderCenter(nn.Module): def __init__(self, in_channels, middle_channels, out_channels, is_deconv=True): super(DecoderCenter, self).__init__() self.in_channels = in_channels if is_deconv: """ Paramaters for Deconvolution were chosen to avoid artifacts, following link https://distill.pub/2016/deconv-checkerboard/ """ self.block = nn.Sequential( ConvRelu(in_channels, middle_channels), nn.ConvTranspose2d(middle_channels, out_channels, kernel_size=4, stride=2, padding=1), nn.BatchNorm2d(out_channels), nn.ReLU(inplace=True) ) else: self.block = nn.Sequential( 
ConvRelu(in_channels, middle_channels), ConvRelu(middle_channels, out_channels) ) def forward(self, x): return self.block(x) Then I edited my class looks to make it work without pooling layer: class UNetResNet(nn.Module): def __init__(self, encoder_depth, num_classes, num_filters=32, dropout_2d=0.2, pretrained=False, is_deconv=False): super().__init__() self.num_classes = num_classes self.dropout_2d = dropout_2d if encoder_depth == 34: self.encoder = torchvision.models.resnet34(pretrained=pretrained) bottom_channel_nr = 512 elif encoder_depth == 101: self.encoder = torchvision.models.resnet101(pretrained=pretrained) bottom_channel_nr = 2048 elif encoder_depth == 152: self.encoder = torchvision.models.resnet152(pretrained=pretrained) bottom_channel_nr = 2048 else: raise NotImplementedError('only 34, 101, 152 version of Resnet are implemented') self.relu = nn.ReLU(inplace=True) self.input_adjust = nn.Sequential(self.encoder.conv1, self.encoder.bn1, self.encoder.relu) self.conv1 = self.encoder.layer1 self.conv2 = self.encoder.layer2 self.conv3 = self.encoder.layer3 self.conv4 = self.encoder.layer4 self.dec4 = DecoderBlockV(bottom_channel_nr, num_filters * 8 * 2, num_filters * 8, is_deconv) self.dec3 = DecoderBlockV(bottom_channel_nr // 2 + num_filters * 8, num_filters * 8 * 2, num_filters * 8, is_deconv) self.dec2 = DecoderBlockV(bottom_channel_nr // 4 + num_filters * 8, num_filters * 4 * 2, num_filters * 2, is_deconv) self.dec1 = DecoderBlockV(bottom_channel_nr // 8 + num_filters * 2, num_filters * 2 * 2, num_filters * 2 * 2,is_deconv) self.final = nn.Conv2d(num_filters * 2 * 2, num_classes, kernel_size=1) def forward(self, x): input_adjust = self.input_adjust(x) conv1 = self.conv1(input_adjust) conv2 = self.conv2(conv1) conv3 = self.conv3(conv2) center = self.conv4(conv3) dec4 = self.dec4(center) #now without centblock dec3 = self.dec3(torch.cat([dec4, conv3], 1)) dec2 = self.dec2(torch.cat([dec3, conv2], 1)) dec1 = F.dropout2d(self.dec1(torch.cat([dec2, conv1], 1)), p=self.dropout_2d) return self.final(dec1) is_deconv - in both cases True. After changing it stop to work with batchsize 64, only with with size of 16 or with batchsize 64 but with resnet16 only - otherwise out of cuda memory. What am I doing wrong? Full stack of error: ~/Desktop/ml/salt/open-solution-salt-identification-master/common_blocks/unet_models.py in forward(self, x) 418 conv1 = self.conv1(input_adjust) 419 conv2 = self.conv2(conv1) --> 420 conv3 = self.conv3(conv2) 421 center = self.conv4(conv3) 422 dec4 = self.dec4(center) ~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 355 result = self._slow_forward(*input, **kwargs) 356 else: --> 357 result = self.forward(*input, **kwargs) 358 for hook in self._forward_hooks.values(): 359 hook_result = hook(self, input, result) ~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/container.py in forward(self, input) 65 def forward(self, input): 66 for module in self._modules.values(): ---> 67 input = module(input) 68 return input 69 ~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 355 result = self._slow_forward(*input, **kwargs) 356 else: --> 357 result = self.forward(*input, **kwargs) 358 for hook in self._forward_hooks.values(): 359 hook_result = hook(self, input, result) ~/anaconda3/lib/python3.6/site-packages/torchvision-0.2.0-py3.6.egg/torchvision/models/resnet.py in forward(self, x) 79 80 out = self.conv2(out) ---> 81 out = self.bn2(out) 82 out = self.relu(out)
st99950
This is because without the pooling, the inputs to the later layers (i.e. the intermediate activations) are larger. You could try your luck with checkpointing. Best regards Thomas
st99951
EmbeddingBag outputs are filled with NaN on the GPU for empty bags, but zeros on the CPU. They should be zeros according to the PyTorch documentation. Here is the test code: u_embedding = nn.EmbeddingBag(180,100) offset = torch.LongTensor([0,2,2]) word_in = torch.LongTensor([234,234,23,234,53]) out = u_embedding(word_in,offset) #out[1] is zeros u_embedding = nn.EmbeddingBag(180,100).cuda() offset = torch.cuda.LongTensor([0,2,2]) word_in = torch.cuda.LongTensor([234,234,23,234,53]) out = u_embedding(word_in,offset) #out[1] is Nan Is this a bug? What should I do if I want to use the GPU and get zeros by default for empty bags? Is there a setting like 'padding_idx' in nn.EmbeddingBag layers?
st99952
Yes it's a bug. I'm fixing it. See https://github.com/pytorch/pytorch/issues/11739
st99953
Thank you, but what should I do to use the new version as early as possible? Or is there any other way to set 'padding_idx' in nn.EmbeddingBag layers?
st99954
I saw that the code to fix this bug is only a few lines. Can I fix the problem by changing a few lines of the PyTorch package code myself?
st99955
The fix has some issues to be smoothed out, so it hasn't been merged yet. Since it involves a C++ change, you would have to compile from source to make it work.
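Until the fix is in a release, one possible stop-gap on the GPU (just a sketch, not the official fix; the sizes and indices below are made up) is to overwrite the rows that belong to empty bags after the EmbeddingBag call:

    import torch
    import torch.nn as nn

    emb = nn.EmbeddingBag(180, 100).cuda()
    word_in = torch.cuda.LongTensor([12, 34, 23, 56, 53])
    offset = torch.cuda.LongTensor([0, 2, 2])   # the second bag is empty

    out = emb(word_in, offset)                  # the empty bag's row may be NaN on GPU

    # Bag lengths: difference of consecutive offsets; the last bag runs to the end.
    ends = torch.cat([offset[1:], offset.new_tensor([word_in.size(0)])])
    empty = (ends - offset) == 0                # mask of empty bags

    # Replace the affected rows with zeros. Multiplying by a 0/1 mask would not
    # work here because NaN * 0 is still NaN, so use masked_fill instead.
    out = out.masked_fill(empty.unsqueeze(1).expand_as(out), 0)

The filled positions also receive zero gradient, so the rest of the graph is unaffected.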
st99956
Sorry to disturb you again. After updating PyTorch from source, I got

RuntimeError: cuda runtime error (9) : invalid configuration argument at /data/users/sdf/pytorch/aten/src/ATen/native/cuda/EmbeddingBag.cu:257

when running my code, and when I replace all cuda() calls with cpu(), it works perfectly. There may be another bug in the EmbeddingBag GPU code.

Here is the test code:

import torch.optim as optim
import torch
import torch.nn as nn
import numpy as np
from scipy.special import expit
import os
import time


class SkipGramModel(nn.Module):

    def __init__(self, component_size, word_size, dim):
        super(SkipGramModel, self).__init__()
        self.emb_size = dim
        self.component_size = component_size
        self.word_size = word_size
        # atten = torch.zeros([word_size,5])
        # atten[:,0] += torch.log(torch.FloatTensor([4]))
        # self.atten = nn.Parameter(atten,requires_grad=True)
        self.atten_layers = nn.Embedding(word_size,1)
        self.u_embeddings = nn.EmbeddingBag(component_size,dim)
        self.word_embeddings = nn.Embedding(word_size,dim,sparse=True)
        self.v_embeddings = nn.Embedding(word_size,dim,sparse=True)
        # self.attention_matrix = 0.5 * torch.ones(self.word_size, 1).cuda()
        self.m = nn.Sigmoid()
        self.init_emb()

    def init_emb(self):
        initrange = 0.5 / self.emb_size
        self.word_embeddings.weight.data.uniform_(-initrange,initrange)
        self.u_embeddings.weight.data.uniform_(-initrange, initrange)
        self.v_embeddings.weight.data.uniform_(-0, 0)
        atten = torch.zeros([self.word_size, 5])
        atten[:, 0] += torch.log(torch.FloatTensor([4]))
        self.atten_layers.weight.data = atten

    def forward(self, word_in, component_in, word_out, offset):
        char_in = torch.cuda.LongTensor(component_in[0])
        redical_in = torch.cuda.LongTensor(component_in[1])
        com1_in = torch.cuda.LongTensor(component_in[2])
        com2_in = torch.cuda.LongTensor(component_in[3])
        offset1 = torch.cuda.LongTensor(offset[0])
        offset2 = torch.cuda.LongTensor(offset[1])
        offset3 = torch.cuda.LongTensor(offset[2])
        offset4 = torch.cuda.LongTensor(offset[3])
        attention = torch.softmax(self.atten_layers(word_in),dim=-1).unsqueeze(1)
        emb_uword = self.word_embeddings(word_in)
        emb_char = self.u_embeddings(char_in,offset1)
        emb_redical = self.u_embeddings(redical_in,offset2)
        emb_com1 = self.u_embeddings(com1_in,offset3)
        emb_com2 = self.u_embeddings(com2_in,offset4)
        emb_all = torch.stack((emb_uword,emb_char,emb_redical,emb_com1,emb_com2),1)
        emb_vword = self.v_embeddings(word_out)
        emb_mixin = torch.bmm(attention,emb_all).squeeze(1)
        score = torch.mul(emb_mixin, emb_vword)
        score = torch.sum(score, dim=-1)
        score = self.m(score)
        return score


if __name__ == '__main__':
    model = SkipGramModel(364, 180, 100).cuda()
    optimizer = optim.SGD(model.parameters(), lr=0.025)
    Lossfunc = nn.BCELoss(reduction='sum')
    for _ in range(100):
        word_in = torch.cuda.LongTensor([2]*128)
        word_out = torch.cuda.LongTensor([2]*128)
        label = torch.cuda.FloatTensor([1]*128)
        component_in = [[3,5],[2,4,5],[2,3,4],[]]
        offset = [[0]*127+[1],[0]*127+[1],[0]*128,[0]*128]
        outs = model.forward(word_in, component_in, word_out, offset)
        loss = Lossfunc(outs, label)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
st99957
mmmartin: (the test code quoted from the post above)

I tried python setup.py install to reproduce your problem, but I got

No module named 'tools.setup_helpers'

Then I just used my PyTorch (0.4.0 on Win10) and it hit a similar problem:

RuntimeError: cuda runtime error (9) : invalid configuration argument at C:/Users/Administrator/Downloads/new-builder/win-wheel/pytorch/aten/src/ATen/native/cuda/EmbeddingBag.cu:281

Since my version is old, this issue should not be caused by the fix for the previous issue.
st99958
So, I'm looking through the torch source code because I'm curious what exactly torch.tensor() does. It appears to live somewhere in /torch/_C.cpython-36m-x86_64-linux-gnu.so, which I'm assuming means it's somewhere in pytorch/torch/csrc/, but I looked in the obvious places and somehow wasn't able to find it. I would appreciate it if someone could point me to the implementation.
st99959
Solved by tom in post #2.
st99960
That's a great question! As I had written something about how the "regular" functions arrive in PyTorch, I added a bit on torch.tensor() at the bottom. It's in the section "odds and ends" in my selective tour through PyTorch internals. If you have feedback, I'd be most happy to hear about it. Best regards Thomas
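A quick way to confirm this from the Python side (just an illustration):

    import torch

    # torch.tensor has no Python definition; it is a builtin provided by the
    # compiled extension torch._C, which is why grepping the .py files for
    # `def tensor(...)` turns up nothing -- the entry point lives in the C++
    # sources under torch/csrc/.
    print(type(torch.tensor))   # <class 'builtin_function_or_method'>
    print(torch.tensor)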
st99961
I modified the imagenet example for training on my own dataset and it became quite a bit slower than before. I'm not sure what the main reason is.

First, my dataset has a list of labeled [images, labels] and another list of unlabeled images. So I modified __getitem__ in the ImageFolder class as follows,

    def __getitem__(self, index):
        """
        Args:
            index (int): Index
        Returns:
            tuple: (image, target) where target is class_index of the target class.
        """
        pindex = index + self.midx * self.nimgs
        path, target = self.imgs[index]
        pathu, _ = self.imgus[pindex]
        img = self.loader(path)
        imgu = self.loader(pathu)
        if self.transform is not None:
            img = self.transform(img)
            imgu = self.transform(imgu)
        if self.target_transform is not None:
            target = self.target_transform(target)

        return img, target, imgu

self.imgus is the added list of unlabeled images. Then I changed the training code as follows,

def train(train_loader, model, criterion, optimizer, epoch):
    batch_time = AverageMeter()
    data_time = AverageMeter()
    var_time = AverageMeter()
    model_time = AverageMeter()
    ...
    top1 = AverageMeter()
    top5 = AverageMeter()

    # switch to train mode
    model.train()

    # set midx
    train_loader.dataset.midx = epoch % train_loader.dataset.max_midx
    print(epoch, train_loader.dataset.midx)

    end = time.time()
    for i, (input, target, inputu) in enumerate(train_loader):
        # measure data loading time
        dtime = time.time()
        data_time.update(dtime - end)

        target = target.cuda(async=True)
        input_var = torch.autograd.Variable(input)
        target_var = torch.autograd.Variable(target)
        inputu_var = torch.autograd.Variable(inputu)
        input_concat_var = torch.cat([input_var, inputu_var])
        vtime = time.time()
        var_time.update(vtime - dtime)

        # compute output
        output = model(input_concat_var)
        mtime = time.time()
        model_time.update(mtime - vtime)
        ...

Now, in the for loop, I get a batch of input, target and inputu (unlabeled images), turn each of them into a Variable, and concatenate labeled and unlabeled images before feeding them into the model. In order to check where the code gets slower, I added var_time and model_time as in the code. The following is one part of the log on the terminal (values shown as current (running average)):

Epoch: [2][0/4180]    Time 11.312 (11.312)   Data 8.702 (8.702)  Var 1.706 (1.706)     Model 0.481 (0.481)
Epoch: [2][100/4180]  Time 47.901 (80.702)   Data 0.001 (0.087)  Var 46.429 (79.423)   Model 1.021 (0.765)
Epoch: [2][200/4180]  Time 11.375 (69.206)   Data 0.001 (0.044)  Var 10.028 (67.958)   Model 0.93 (0.779)
Epoch: [2][300/4180]  Time 9.444 (64.922)    Data 0.001 (0.03)   Var 8.087 (63.683)    Model 0.934 (0.783)
Epoch: [2][400/4180]  Time 10.702 (62.866)   Data 0.001 (0.023)  Var 9.8 (61.639)      Model 0.488 (0.777)
Epoch: [2][500/4180]  Time 93.547 (63.055)   Data 0.001 (0.019)  Var 92.354 (61.813)   Model 0.78 (0.796)
Epoch: [2][600/4180]  Time 104.527 (60.569)  Data 0.001 (0.016)  Var 103.357 (59.318)  Model 0.761 (0.808)
Epoch: [2][700/4180]  Time 1.772 (57.497)    Data 0.001 (0.014)  Var 0.726 (56.248)    Model 0.639 (0.809)
Epoch: [2][800/4180]  Time 1.706 (50.549)    Data 0.001 (0.012)  Var 0.865 (49.337)    Model 0.39 (0.776)
Epoch: [2][900/4180]  Time 1.741 (45.143)    Data 0.001 (0.011)  Var 0.945 (43.96)     Model 0.392 (0.75)
Epoch: [2][1000/4180] Time 1.879 (40.818)    Data 0.001 (0.01)   Var 0.918 (39.658)    Model 0.564 (0.729)
Epoch: [2][1100/4180] Time 1.879 (37.277)    Data 0.002 (0.009)  Var 0.881 (36.136)    Model 0.588 (0.712)

You can see that batch_time fluctuates a lot, and the major part of the increase seems to come from var_time, which becomes very large and ranges from 1.x to 100.x. I understand that the concat operation adds some time (1.x), but it's weird that it goes up to hundreds.
I don't know what makes it so slow. When I check htop or nvidia-smi during that period, both the CPUs and the GPUs are barely used (almost idle). Is there any problem in my modified code? Or could it be a hardware problem? I'm running on 8 GPUs with 16 workers, and the batch size is 384 (192 each for labeled and unlabeled images).
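One thing worth ruling out here (only a suggestion, not a diagnosis): CUDA kernels are launched asynchronously, so timing individual steps with time.time() can attribute GPU work to whichever later line happens to synchronize. A sketch of a more reliable measurement:

    import time
    import torch

    def timed(label, fn, *args):
        # Flush all pending GPU work before starting the clock, and again
        # before stopping it, so the interval reflects only fn itself.
        torch.cuda.synchronize()
        start = time.time()
        out = fn(*args)
        torch.cuda.synchronize()
        print('{}: {:.3f}s'.format(label, time.time() - start))
        return out

    # usage inside the training loop above, e.g.:
    # output = timed('forward', model, input_concat_var)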
st99962
Hi, I am using PyTorch to do matrix factorization with a huge sparse matrix. Converting it to a dense matrix is the last resort, as it would consume too much storage. Is there any existing solution for loading a sparse matrix for training? Or is there any example of how to design such a customized dataloader? Thanks
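In case it helps as a starting point, one common pattern (only a sketch, assuming the matrix fits in memory as a scipy.sparse CSR matrix; names and sizes are made up) is to keep the data sparse inside the Dataset and densify a single row per __getitem__:

    import numpy as np
    import scipy.sparse as sp
    import torch
    from torch.utils.data import Dataset, DataLoader

    class SparseRowDataset(Dataset):
        """Keeps the full matrix in CSR form and densifies one row at a time."""

        def __init__(self, csr_matrix):
            self.mat = csr_matrix.tocsr()

        def __len__(self):
            return self.mat.shape[0]

        def __getitem__(self, idx):
            row = np.asarray(self.mat[idx].todense()).squeeze(0)
            return idx, torch.from_numpy(row).float()

    # Toy example with a random sparse matrix:
    mat = sp.random(1000, 500, density=0.01, format='csr')
    loader = DataLoader(SparseRowDataset(mat), batch_size=32, shuffle=True, num_workers=0)
    for row_idx, rows in loader:
        pass  # feed the dense mini-batch `rows` to the factorization model

If even the sparse matrix is too large for memory, the same idea works with the data sharded on disk and loaded lazily in __getitem__.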
st99963
Hi all, I want to select the GPU from a C++ program (not through environment variables) in Caffe2. I am trying to run inference in a multithreaded setup where each image should run on a different GPU. I am not sure how I can achieve this using the C++ APIs. I came across the CaffeCudaSetDevice API, but it doesn't seem to work for me. Currently, I am modifying the environment variable (CUDA_VISIBLE_DEVICES) in the code to achieve this, which is a big hack!
st99964
Hi, I am loading custom data and I've read the post here: Balanced Sampling between classes with torchvision DataLoader

I am using the same code, as follows:

class_sample_count = np.array([len(np.where(y_train==t)[0]) for t in np.unique(y_train)])
weight = 1. / class_sample_count
samples_weight = np.array([weight[t] for t in y_train])
samples_weight = torch.from_numpy(samples_weight)
sampler = WeightedRandomSampler(samples_weight.type('torch.DoubleTensor'), len(samples_weight))

mb_size = 13
trainDataset = torch.utils.data.TensorDataset(torch.FloatTensor(X_train), torch.FloatTensor(y_train.astype(int)))
trainLoader = torch.utils.data.DataLoader(dataset = trainDataset, batch_size=mb_size,
                                          num_workers=1, sampler = sampler)

However, when I iterate over a few epochs, there are cases where the target vector contains only samples from the same class (either all 0 or all 1), which causes an error in my triplet loss function. What is the best way to handle this issue? Is there a way to force the trainLoader to always load from both classes? Thanks!
st99965
If you really need samples from both classes in every batch, I would write a custom Dataset and return a tuple containing both samples. The WeightedRandomSampler uses probabilities to sample the classes, so some batches might randomly get more samples of a certain class than the other. Let me know if you get stuck with your Dataset.
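To make that concrete, here is a rough sketch of such a paired Dataset (the class name is made up, and it assumes exactly the two classes 0 and 1 from the question):

    import numpy as np
    import torch
    from torch.utils.data import Dataset

    class PairedClassDataset(Dataset):
        """Each item returns one sample of class 0 and one of class 1,
        so every batch is guaranteed to contain both classes."""

        def __init__(self, X, y):
            self.X = torch.FloatTensor(X)
            self.idx0 = np.where(y == 0)[0]
            self.idx1 = np.where(y == 1)[0]

        def __len__(self):
            # one epoch = one pass over the larger class
            return max(len(self.idx0), len(self.idx1))

        def __getitem__(self, i):
            i0 = self.idx0[i % len(self.idx0)]
            i1 = self.idx1[i % len(self.idx1)]
            return self.X[i0], self.X[i1]

    # train_loader = torch.utils.data.DataLoader(PairedClassDataset(X_train, y_train),
    #                                            batch_size=13, shuffle=True)

The DataLoader then yields (batch_of_class0, batch_of_class1) tuples instead of a single mixed batch.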
st99966
For example, I have a tensor of shape 3x3, and I would like to assign another tensor of size 1x3 to its first 1x3 slice, in place. Is this possible in Caffe2?
st99967
Would anyone like to respond? If it is not possible, I will have to switch to TensorFlow.