st31968
Hi, I’m very new to PyTorch. I’m somewhat experienced with Python, but have been programming in other languages (namely Java) for a few years now. My goal is to implement a machine learning program that will take as input, and learn from, hundreds of arrays of numbers, each with one corresponding value. To put it simply, I have a long list of arrays, and each array of numbers has 1 correct corresponding value based on the values inside the array. I’m still new to neural networks, so you can imagine that all these introductory code examples and basics mainly feature simple neural networks designed to learn images and predict them. How do I go about using arrays as an input? I don’t know where else to go, so if someone could point me in the right direction that would be super helpful.
st31969
You could use your input data in a similar way, but would probably need to adapt the model architecture to accept "arrays" instead of images. This can be done with e.g. nn.Linear layers, and here is a very simple example with some annotations to get you started:

# 10 data samples with 8 features each
data = torch.randn(10, 8)
# one floating point target for each sample
target = torch.randn(10, 1)

# create a simple model using a linear layer
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.fc1 = nn.Linear(in_features=8, out_features=1)

    def forward(self, x):
        x = self.fc1(x)
        return x

model = MyModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

# train the model for 1000 epochs
nb_epochs = 1000
for epoch in range(nb_epochs):
    # zero out gradients, since gradients are accumulated in PyTorch
    optimizer.zero_grad()
    # perform the forward pass
    output = model(data)
    # calculate the loss
    loss = criterion(output, target)
    # calculate the gradients
    loss.backward()
    # optimize the parameters using the gradients
    optimizer.step()
    # print stats
    if epoch % 100 == 0:
        print('epoch {}, loss {}'.format(epoch, loss.item()))

I assume you've already taken a look at our tutorials, which would be the next steps to dig into the framework.
st31970
Hello, Please correct me if I am wrong. Sorry in advance if I have mistaken the information. Your code indicates a simple neural network. What if one wants to implement CNN for numerical data? Can you please provide insights related to the input to convolutional and max-pooling layer for numerical data?
st31971
CNNs would require the input data in another shape (e.g. nn.Conv2d would expect an input in the shape [batch_size, channels, height, width]). I’m unsure what kind of data set you are using, but you could provide the input in the expected shape and use a CNN as an alternative to the proposed model. Note that CNNs apply their filters to the input and convolve (or cross-correlate) them. This is often beneficial, if your input data has a spatial pattern (such as images), but you could of course try this approach on any kind of data and report your findings.
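As an illustration of the reshaping point above, here is a minimal sketch of feeding plain numeric arrays to a convolutional layer by adding a channel dimension. This is not from the thread; the shapes are made up, and nn.Conv1d is used since the samples are 1D arrays:

import torch
import torch.nn as nn

# 10 samples with 8 features each, treated as a 1D "signal" with one channel
data = torch.randn(10, 8).unsqueeze(1)      # shape: [batch_size=10, channels=1, length=8]

conv = nn.Conv1d(in_channels=1, out_channels=4, kernel_size=3, padding=1)
out = conv(data)
print(out.shape)                            # torch.Size([10, 4, 8])

Whether this is beneficial still depends on whether neighbouring features carry a spatial pattern, as noted above.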
st31972
Thank you for the information. I have given details of my dataset in another question. Please go through the attached link. https://discuss.pytorch.org/t/convolutional-neural-network-cnn-on-numerical-data/122225/2 3 Please let me know if further information is required.
st31973
Hi there, Sorry for the stupid and elementary question! I am totally new to Python and PyTorch. My question is that I want to multiply a [128,10,512] tensor by a tensor with the length of 10 such that the final output to be a [128*512] tensor. I know how to do that using for loops. However, I am wondering whether there is a direct PyTorch command in this regard. Thank you in advance for your kind help
st31974
Solved by ptrmcl in post #6:

sum again? I think it's the same idea.

ten = torch.ones((128,10))
big = torch.ones((128,10,512))
result = (big*ten.view(128,10,1)).sum(1)
st31975
First thing that comes to mind:

# length 10 vector
ten = torch.ones((10))
# 128x10x512 tensor
big = torch.ones((128,10,512))
result = (big*ten.view(1,-1,1)).sum(1).flatten()

We reshape the tensor to be 1x10x1 to align with the other one. Then we sum across that dimension and flatten. Makes sense?
st31976
Thank you! That's great. What about if "big" is a tensor shaped [batchsize, num_windows, CNN_features], "ten" is a tensor shaped [batchsize, num_windows], and using "ten" we want to map "big" to the "result" tensor with the shape of [batchsize, CNN_features]? i.e.

big = [128, 10, 512]
ten = [128, 10]
result = [128, 512]
st31977
sum again? I think it's the same idea.

ten = torch.ones((128,10))
big = torch.ones((128,10,512))
result = (big*ten.view(128,10,1)).sum(1)
st31978
ptrmcl:
result = (big*ten.view(128,10,1)).sum(1)

Thank you again! I learned a lot!
st31979
aaaaab:
result = [128,512]

btw, use the </> button to add code, or three backticks ``` to start and finish the block. It just makes the formatting clean and easier for other people to understand.
st31980
I was wondering what happens if you have a parameter P that has requires_grad set to True, and you manually set its associated tensor, P.data, to requires_grad=False. So P is a parameter with requires_grad=True, but P.data.requires_grad is False. What happens if I back-propagate through a neural network that has this setup? I also noticed that when I printed out the .data value of one of the parameters in my nn.Module, its requires_grad property is set to False. Does this mean that requires_grad of .data is not used if it is embedded in a Parameter whose requires_grad property is True?
st31981
The .data attribute is used internally and only its parent would reflect the .requires_grad attribute. Note that the usage of .data is deprecated and could lead to many side effects, as Autograd won't be able to track operations performed on .data.
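A minimal sketch of the recommended alternative, assuming you want to modify a parameter without Autograd tracking the change (the layer and values are made up for illustration):

import torch
import torch.nn as nn

layer = nn.Linear(4, 2)
print(layer.weight.requires_grad)        # True
print(layer.weight.data.requires_grad)   # False: .data never reflects it

# instead of touching .data, wrap the in-place update in no_grad
with torch.no_grad():
    layer.weight.zero_()   # modifies the parameter without recording the op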
st31982
I have a dataset of images in "dataset" and was trying to use "make_grid" like in this link: Training a Classifier — PyTorch Tutorials 1.8.1+cu102 documentation, but there's something wrong and I don't know what.

dataiter = iter(dataset)
images, labels = dataiter.next()
plt.imshow(torchvision.utils.make_grid(images.permute(1,2,0)))
# show images

My dataset contains 3 images of shape (3, 40, 40). The error is:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-92-0a9e2d56a78a> in <module>
      1 dataiter = iter(dataset)
----> 2 images, labels = dataiter.next()
      3 plt.imshow(torchvision.utils.make_grid(images.permute(1,2,0)))
      4 # show images

AttributeError: 'iterator' object has no attribute 'next'
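Not an answer from the thread, just a minimal sketch of how this iteration is usually written in Python 3, where iterators expose __next__ rather than .next(); it assumes `dataset` yields (image, label) pairs and builds the grid before permuting:

import torchvision
import matplotlib.pyplot as plt

dataiter = iter(dataset)             # assumes `dataset` yields (image, label) pairs
image, label = next(dataiter)        # Python 3: use the built-in next(), not .next()

# build the grid first, then permute the *result* to (H, W, C) for matplotlib
grid = torchvision.utils.make_grid([image])
plt.imshow(grid.permute(1, 2, 0).numpy())
plt.show()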
st31983
I am training a set of FF networks on tabular data. The input is a sparse matrix. Here's the relevant code:

#training
self.data_tr = TensorDataset(
    torch.tensor(train_csc.toarray(), dtype=torch.float32, device=self.device),
    torch.tensor(train_pd['is_case'].values, dtype=torch.float32, device=self.device) #labels
)
#validation
self.data_va = TensorDataset(
    torch.tensor(valid_csc.toarray(), dtype=torch.float32, device=self.device),
    torch.tensor(valid_pd['is_case'].values, dtype=torch.float32, device=self.device) #labels
)

and used it in the training:

train_ldr = DataLoader(dataset=self.data_tr, batch_size=param['bs'], shuffle=True)
for X_mb, y_mb in train_ldr:
    yhat_mb = model(X_mb)
    loss = criterion(yhat_mb[:,0], y_mb)
    ...

The dense array is being stored on the GPU and sliced as required. This runs very fast. Unfortunately, a couple of instances are so big that they do not fit in the GPU memory as required in the above approach. For those instances I have the following:

class SparseDataset(Dataset):
    def __init__(self, mat_csc, label, device='cpu'):
        self.dim = mat_csc.shape
        self.device = torch.device(device)

        csr = mat_csc.tocsr(copy=True)
        self.indptr = torch.tensor(csr.indptr, dtype=torch.int64, device=self.device)
        self.indices = torch.tensor(csr.indices, dtype=torch.int64, device=self.device)
        self.data = torch.tensor(csr.data, dtype=torch.float32, device=self.device)

        self.label = torch.tensor(label, dtype=torch.float32, device=self.device)

    def __len__(self):
        return self.dim[0]

    def __getitem__(self, idx):
        obs = torch.zeros((self.dim[1],), dtype=torch.float32, device=self.device)
        ind1, ind2 = self.indptr[idx], self.indptr[idx+1]
        obs[self.indices[ind1:ind2]] = self.data[ind1:ind2]
        return obs, self.label[idx]

instantiated as

self.data_tr = SparseDataset(train_csc, train_pd['is_case'].values, device)
self.data_va = SparseDataset(valid_csc, valid_pd['is_case'].values, device)

and used as

train_ldr = DataLoader(dataset=self.data_tr, batch_size=param['bs'], shuffle=True, collate_fn=my_collate)
for X_mb, y_mb in train_ldr:
    yhat_mb = model(X_mb)
    loss = criterion(yhat_mb[:,0], y_mb)
    ...

While this is VERY memory efficient, even on my smallest instance (which fits in the memory) it is 20 times slower than the first approach. I am looking for ideas to make this faster. Thx.
st31984
I'm using a matrix of size 5k x 90k. When I treat this as a dense matrix and run an AE network, each epoch took me around 60-65 sec. But using your sparse matrix dataloader approach took only about 15-16 secs. Looks like it works?
st31985
I have two code snippets with different PyTorch versions.

In torch 1.0.0:

torch.randn(4, 4).view(-1) > 0.

The result of this is

tensor([0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1], dtype=torch.uint8)

In torch 1.7.1:

torch.randn(4, 4).view(-1) > 0.

The result of this is

tensor([ True,  True, False, False, False,  True, False,  True, False,  True,
         True,  True, False,  True, False, False])

I don't know what is happening. Note: this is not a problem for me, I can solve it by searching for "convert true false to 0 1 python". But I want to know how torch works. Thanks for reading.
st31986
Solved by albanD in post #2: Hi, This is expected, yes. Comparison operations now return the new boolean dtype instead of the old uint8 dtype. This should not be a problem as both can be used for masking and regular ops. But using uint8 as a boolean is deprecated and will be removed in the future.
st31987
Hi, This is expected, yes. Comparison operations now return the new boolean dtype instead of the old uint8 dtype. This should not be a problem as both can be used for masking and regular ops. But using uint8 as a boolean is deprecated and will be removed in the future.
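If you do need the old 0/1 integer representation, here is a minimal sketch of the usual conversions (output values are illustrative, not from the thread):

import torch

mask = torch.randn(4) > 0.          # bool tensor in recent PyTorch versions
as_uint8 = mask.to(torch.uint8)     # e.g. tensor([1, 0, 1, 0], dtype=torch.uint8)
as_long = mask.long()               # e.g. tensor([1, 0, 1, 0])
# bool masks still work directly for indexing: torch.randn(4)[mask]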
st31988
Hi, I'm trying to modify contrast in the transformation pipeline for data augmentation, but I have this error:

TypeError: adjust_contrast() missing 1 required positional argument: 'img'

This is my code:

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.RandomRotation(degrees=(-15, 15)),
    transforms.ToTensor(),
    transforms.functional.adjust_contrast(contrast_factor=0.5)
])
st31989
The issue is that the functional version of contrast expects to take in the input directly rather than returning a callable function. You can work around this by using ColorJitter and only changing the contrast (e.g., torchvision.transforms.ColorJitter(brightness=0, contrast=0.5, saturation=0, hue=0)). If you want a constant adjustment you can just wrap the current function:

def my_adjust_contrast():
    def _func(img):
        return transforms.functional.adjust_contrast(img, contrast_factor=0.5)
    return _func

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.RandomRotation(degrees=(-15, 15)),
    transforms.ToTensor(),
    my_adjust_contrast()
])
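An equivalent sketch using transforms.Lambda, which wraps a fixed functional call into a callable transform; the contrast factor 0.5 is taken from the snippet above, and it assumes a torchvision version where adjust_contrast accepts tensors (since it runs after ToTensor):

import torchvision.transforms as transforms
import torchvision.transforms.functional as TF

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.RandomRotation(degrees=(-15, 15)),
    transforms.ToTensor(),
    # Lambda turns the functional call into a callable transform
    transforms.Lambda(lambda img: TF.adjust_contrast(img, contrast_factor=0.5)),
])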
st31990
I want to use Conv1D depthwise on the text and the filters should be the same with each other. Is there a way we can do this?
st31991
Could you explain the use case a bit more? Repeating a kernel would yield the same outputs, so I’m not sure if I understand the question correctly.
st31992
Hi, @ptrblck! Thanks for interested in this question. I’m doing a multi-label classification task, and the label space is about 8900. The classifier needs to make predictions about what labels the input text corresponds to (generally, an input text might correspond to 5~10 labels). As described in this paper: https://arxiv.org/pdf/1909.11386.pdf 1, I’m gonna do a per-label mask of the input embedding [B, L, D] (batch size, input length, embed dimension), after the mask, the embedding would become a 4-D tensor [B, L, T, D]. Then I will do convolution. The original paper suggests that all embedding share the same convolution layer, which means all label embedding should be convolved by the same weights. For simplicity, we could stack the 4-D tensor at the embedding dimension, then it has the shape [B, L, T*D], which is suitable for depthwise convolution. However, if we directly use 1-D convolution, there will be one unique filter for each label embedding, and there will be 8900 different filters in total, which can be a disaster for GPU memory. I’m wondering if there is a method to make the filters share the same parameters.
st31993
Is your input shape [B, L, T*D] corresponding to a channels-last memory format, i.e. would T*D represent the channels? If so, you would have to permute the data, but each kernel would still use all input channels in the default layout. Could you post the input shape and desired output shapes (with the description what the temporal and channel dimensions would be), please?
st31994
Yes, T*D represents channels, so the call should look like conv1d([B, T*D, L]). While initializing the convolution layer, it would be

layer = nn.Conv1d(T*D, filter_maps*T, kernel_size, groups=T)
st31995
This setup:

T, D, filter_maps = 2, 3, 4
kernel_size = 5
layer = nn.Conv1d(T*D, filter_maps*T, kernel_size, groups=T)
print(layer.weight.shape)
> torch.Size([8, 3, 5])

would use 8 filters (defined by filter_maps*T) where each will use 3 input channels (defined by T*D and the groups). Would you explain a bit more which filters should now be shared? Would you like to use a single output filter only?
st31996
The code you wrote above is exactly what I mean. Sorry for not explaining clearly. Well, as you can see, we set groups=T, so there are T groups of filters in total. What I want is that all the filters are initialized with the same weights (share the weights), and during gradient descent they are updated to the same values.
st31997
Dewei_Hu:
What I want is, all the filters should be initialized with the same weight (share the weights)

You could certainly initialize all 8 filters to the same value, either by directly setting the values:

T, D, filter_maps = 2, 3, 4
kernel_size = 5
layer = nn.Conv1d(T*D, filter_maps*T, kernel_size, groups=T)

with torch.no_grad():
    ref = layer.weight[0:1]
    layer.weight.copy_(ref.repeat(8, 1, 1))
print(layer.weight)

or by using the functional API and stacking the filters.

Dewei_Hu:
and in gradient descent, they could be updated to the same value.

Unfortunately that won't work directly, since you are using 2 groups. Even though all filters are equal, they would use different input channels and would thus also create different outputs and gradients:

x = torch.randn(2, 6, 24)
out = layer(x)
out.mean().backward()
print(layer.weight.grad)
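A minimal sketch of the functional-API variant mentioned above: keep one shared weight tensor as the only parameter and repeat it per group at call time, so all groups stay tied and a single accumulated gradient lands on the shared parameter. Shapes follow the T=2, D=3, filter_maps=4, kernel_size=5 example; this is an illustration, not code from the thread:

import torch
import torch.nn as nn
import torch.nn.functional as F

T, D, filter_maps, kernel_size = 2, 3, 4, 5

# one shared filter bank: (filter_maps, D, kernel_size)
shared_weight = nn.Parameter(torch.randn(filter_maps, D, kernel_size))
shared_bias = nn.Parameter(torch.zeros(filter_maps))

x = torch.randn(2, T * D, 24)            # (batch, T*D channels, length)

# repeat the shared filters for each of the T groups at call time
weight = shared_weight.repeat(T, 1, 1)   # (filter_maps*T, D, kernel_size)
bias = shared_bias.repeat(T)             # (filter_maps*T,)
out = F.conv1d(x, weight, bias, groups=T)

out.mean().backward()
print(shared_weight.grad.shape)          # gradients from all groups accumulate here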
st31998
Hey @ptrblck bro, I came up with an idea where we can initialize the weights and biases in different groups with the same value and update them during training with the same value as well.

class Model(nn.Module):
    def __init__(self):
        # some other code

        # initialize the weights
        np.random.seed(123)
        a = np.random.randn(self.num_filter_maps, self.embed_size, kernel_size)
        a = torch.from_numpy(a).type(torch.FloatTensor)
        self.cnn_weight = nn.Parameter(a)
        b = np.random.randn(64)
        b = torch.from_numpy(b).type(torch.FloatTensor)
        self.cnn_bias = nn.Parameter(b)

        # linear output
        self.fc = nn.Linear(num_filter_maps, label_space)
        xavier_uniform_(self.fc.weight)

    def forward(self, x):
        # pdb.set_trace()
        batch_size = x.shape[0]
        max_len = x.shape[1]
        with torch.no_grad():
            lengths = torch.count_nonzero(x, dim=-1).cpu()

        with torch.no_grad():
            conv = nn.Conv1d(50*self.embed_size, self.num_filter_maps*50,
                             kernel_size=self.kernel_size,
                             padding=int(self.kernel_size//2))
            conv.weight.data = self.cnn_weight.repeat(50, 1, 1).data.clone()
            conv.bias.data = self.cnn_bias.repeat(50).data.clone()

        return something

and then during training we only update self.cnn_weight and self.cnn_bias instead of the whole CNN layer.
st31999
Hi, Is there any comprehensive guideline for indexing tensor? I’m trying to find the pytorch version of this link. Or, is the indexing supported by numpy also supported in pytorch? Thanks,
st32000
Solved by ptrblck in post #2: I'm unsure about the relative parity between numpy's and PyTorch's indexing, but I'm usually also referring to the numpy docs if necessary and haven't run into an unsupported use case so far.
st32001
I'm unsure about the relative parity between numpy's and PyTorch's indexing, but I'm usually also referring to the numpy docs if necessary and haven't run into an unsupported use case so far.
st32002
Hi, given a Conv2d module I'd like to manually perform the bias addition, i.e. given a module M with weights w and biases b, I can compute the output y given an input x as y = x * w + b. What I'd like to do is to evaluate x * w using the module's forward method M(x) and then manually add the biases b. Unfortunately I cannot just override the module's biases since the ones I want to add might be N-D tensors while PyTorch expects only 1-D tensors. The problem I'm facing is that the results I obtain with the manual addition are different from the results obtained using the full module's forward method when working with float32 tensors, but the same when working in float64. Is there any way this manual addition of the biases can give the same results as the forward method when working with float32? Is the underlying convolution code operating differently from a python-side module(x) + bias? Here is how to reproduce the issue:

def test_conv_manual_bias_float32(self):
    module = nn.Conv2d(3, 64, 3, padding=1)
    x = torch.randn((64, 3, 128, 128))

    y_src = module(x)

    bias = module.bias.data.clone()
    module.bias.data.mul_(0)
    y_prop = module(x) + bias[:, None, None]

    print(torch.allclose(y_src, y_prop))

def test_conv_manual_bias_float64(self):
    module = nn.Conv2d(3, 64, 3, padding=1).double()
    x = torch.randn((64, 3, 128, 128)).double()

    y_src = module(x)

    bias = module.bias.data.clone()
    module.bias.data.mul_(0)
    y_prop = module(x) + bias[:, None, None]

    print(torch.equal(y_src, y_prop))

The difference can also be tested using torch.max(y_src.abs() - y_prop.abs()), which returns a value greater than 0 for the first function and 0 for the second.
st32003
Solved by ptrblck in post #4 Yes, the internal conv implementation (using MKL?) could use another approach than the native addition used in PyTorch, which could thus yield these small numerical mismatches.
st32004
The max. absolute error for the FP32 implementation is tensor(8.3447e-07, grad_fn=<MaxBackward1>) on my system, which would be explained by the limited floating point precision. I don’t know which path is taken for FP64 on the CPU, but assume that less or no optimizations are applied internally, which could explain the 0 difference.
st32005
Is it possible that the C implementation of the convolution bias differs from a simple addition in Python? Because I'm simply adding the exact same tensor. This problem doesn't seem to exist when using Linear layers.
st32006
Yes, the internal conv implementation (using MKL?) could use another approach than the native addition used in PyTorch, which could thus yield these small numerical mismatches.
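A minimal sketch of how such float32 mismatches are typically quantified, comparing a float32 result against a float64 reference of the same module; the tolerance value is only an illustrative choice:

import torch
import torch.nn as nn

module = nn.Conv2d(3, 64, 3, padding=1)
x = torch.randn(8, 3, 32, 32)

y32 = module(x)
y64 = module.double()(x.double())   # float64 reference of the same weights

# errors around 1e-6 are expected for float32; compare with a tolerance
print((y32.double() - y64).abs().max())
print(torch.allclose(y32.double(), y64, atol=1e-5))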
st32007
Hello everybody, I am new to PyTorch, and I am looking for some PyTorch Coding Conventions or Best Practices. PyTorch is fantastic to allow you a lot of freedom, but it can sometimes be challenging to find something in someone else code when they have a completely different way of coding with PyTorch. There might also be some best practices to ensure your code can run as fast as possible. I am thinking about something similar to Serialization semantics 40 but describing more straightforward cases such as in which context should one create a separate module to define a part of a model and the way it should be structured. The goal is that anyone who knows the coding conventions can easily find what they are looking for in the code and extend it in a way other people will be able to understand it quickly too. Thanks for your time,
st32008
Solved by Igor_Susmelj in post #10 @LucasVandroux, thanks for referring to my unofficial style guide. @justusschock and @tom, I added most of your recommendation to my style guide. Feel free to add more if you feel like:
st32009
@klory thank you for your answer but it doesn't answer my question. I found something closer to what I am looking for here: GitHub - IgorSusmelj/pytorch-styleguide. But still looking for other documents.
st32010
I don't think there are documents like that (at least not official ones), but if you go by Python, I'd recommend you follow the PEP 8 style, as this mostly applies to PyTorch as well. There are a few PyTorch-specific things I'd append:

- don't use autograd if not necessary (use with torch.no_grad() if possible)
- only push tensors to the GPU if they are actually needed
- try to avoid loops over tensor dimensions (slows things down)
- try to free graphs as soon as possible (use detach or item whenever you can) to avoid memory leaks

If there's anything else coming to my mind, I'll just edit this post.
st32011
PEP8 (PyTorch uses flake8 for coding style itself) is a good idea, as is the general Python Zen. Don't write Python like people who don't like Python. However, a couple of the PyTorch-specific items I disagree with (in the top two below). My list would be something like:

- If you need to use torch.no_grad() somewhere where it isn't because you're evaluating something that's written for training, you should ask yourself if you're doing it wrong.
- Be mindful of loops over tensor dimensions slowing things down. It's conventional wisdom to avoid these, but there are quite a few legitimate cases for them in PyTorch. I have a half-written section "For loop or not for loop" discussing them somewhere.
- Using item and detach for things to keep around longer than the next backward is generally a good idea (e.g. when you record loss history, statistics, ...), but be careful to not ruin your graph. (Targeted detach is good in nn.Module subclass code; with torch.no_grad() should be needed very rarely.)
- If you write for re-use, the functional / Module split of PyTorch has turned out to be a good idea. Use functional for stuff without state (unless you have a quick and dirty Sequential).
- Don't use deprecated stuff: .data, Tensor(...) and friends, .type (might be me), t.new_.... It's bad!
- Use the documented PyTorch interfaces if you can (e.g. when something from torch.nn.functional shows up in torch for internal reasons).
- Benchmark your stuff if you think it's performance critical. Don't forget CUDA needs synchronising for valid results. The JIT will speed up chains of pointwise ops a lot. C++ will be a bit faster than plain Python, but for many cases only ca 10%.
- Keep your modules clean.

Best regards

Thomas
st32012
I agree with you on the first point. Maybe I did not express my intention very well. What I meant is, that you could theoretically also validate/predict without no_grad. Of course you’re right, but I just assumed people to know, where they need gradients and where they aren’t necessary. Regarding the second point on your list: sure there are some legitimate cases (LSTMs and stuff), but for most cases they can be avoided. I’ll look forward to your discussion on that! And I totally agree with the rest of your list.
st32013
@justusschock and @tom thank you for your recommendations. I will make sure to keep them in mind. Browsing the forum a bit more, I also found this link: GitHub - Spandan-Madan/A-Collection-of-important-tasks-in-pytorch
st32014
I meant to delete the first sentence (and did now), sorry. It’s probably hard to be the first. My more controversial / situation specific things: Personally, I tend to copy code into one giant notebook and I think most configuration things (argparse etc) are terrible. When you have all your stuff in def main(): and I change something and get an exception, I cannot use ipython -i foo.py to inspect the variables in main. But these are really me.
st32015
Don't worry, I can take this. For me it's kind of the other way round. I absolutely prefer splitting code into many files and packages, because this way it's easier to avoid confusion (for me). And I usually have a configuration file defining all the hyperparameters. This is because I can heavily parallelize jobs on a cluster (grid search etc.) and don't want to do this manually. Also, the files are copied to the directory containing the weights to keep an overview of trained configurations. I don't prefer ipython or notebooks at all for GPU related stuff, since you always need to restart the notebook server to free the GPU memory. But that's only on me.
st32016
@LucasVandroux, thanks for referring to my unofficial style guide. @justusschock and @tom, I added most of your recommendations to my style guide. Feel free to add more if you feel like: GitHub - IgorSusmelj/pytorch-styleguide
st32017
Sigh! While I appreciate that people want checklists and easy to follow instructions and I’m sure that your checklist is as good as any other, checklist invariably mix obvious good things with suggestions of questionable merit. For example I never provide that 10% speedup estimate without context or the opportunity to ask for context, much like Justus’ advice on avoiding loops is quite right for a lot of cases, but it’s important to know when it’s not applicable. When you provide advice in a checklist form, all that context is lost. Here you took a bunch of bullet points and left out even the few qualifications I put in that overly condensed form of discussion. And that’s the crux: Good style cannot be achieved by following checklists, much like you don’t gain much wit by buying a book of famous quotes. If there is a craft component to writing code, you need to learn - possibly by studying what people who would know - say like Soumith - wrote and trying to understand how it works and why it was written that way. By following a checklist approach you get the code equivalent of how development processes look like when large companies decide to do “agile”, and implements it just by following an “agile checklist” someone gave them. Best regards Thomas
st32018
Well Thomas, you’re right, but I think this is a good way to start. The idea behind PEP8 is just about the same. And instead of asking Soumith, he just follows your advice (which is for sure also pretty good). And this whole discussion board is meant for asking questions like that. So in my opinion, this is a good way to start, but of course it cannot replace the learning process.
st32019
I agree with you. Nothing would ever replace the process of learning by reading and trying to understand what more advanced people have done. However, having some simple snippets to start with the simple tasks and then being able to understand how they work, is also an excellent way to learn too.
st32020
Hey Tom, If you want to debug your code you should try the python debugger pdb (it took me a while to start using it but it’s a game changer). ipython -m pdb is your best friend in these cases. Jupyter notebook (and many IDE’s) also support it pretty well. There are probably some good tutorials around…
st32021
I fully agree with your points. The best way to learn is by doing. During my learning process, I often struggled because of missing documentation or seeing different ways of doing the same thing. I learned a lot about how to use PyTorch more efficiently by spending hours studying repositories made by companies such as Nvidia or Facebook. The goal of this style guide and best practices summary is just to help others and myself to learn from this journey. Being in the area of deep learning since quite a while and starting with tensorflow I saw how many people (as well as myself) struggled with getting started on their custom projects. Tutorials teach you on how to implement a specific model to solve a task but unfortunately, they don’t always tell you why a certain workflow or coding style / pattern can help you avoiding mistakes and keeping the project clean. This is also one of the main reasons I like PyTorch. Being simpler and more intuitive for Python users, it just allows you to learn faster and make fewer mistakes. In tensorflow I was confused by different strategies on how to build a model all over the place and when to pick one in particular.
st32022
A bit late to the party but while I personally enjoy Igor's style guide I also think Branislav Holländer's file organization may be useful to some people. github: GitHub - branislav1991/PyTorchProjectFramework: A basic framework for your PyTorch projects; article: https://towardsdatascience.com/how-to-structure-your-pytorch-project-89310b8b2da9
st32023
I just started with NNs a few months ago and am now playing with data using PyTorch. I learnt how we use embeddings for high-cardinality data and reduce it to low dimensions. There is one rule of thumb I saw: for reducing high-dimensional categorical data in the form of embeddings, you use the following formula:

embedding_sizes = [(n_categories, min(50, (n_categories+1)//2)) for _, n_categories in embedded_cols.items()]
embedding_sizes
[(69, 35), (11, 6)]

After creating this embedding layer, how do we know these embedding layers are appropriate for an MLP? Do we check the score with different sizes of embedding layers, or do we visualise this layer? If we visualise, then what are the ways to visualise? In short, how can I validate that my embedding layers are good in terms of their reduced size from 69 to 35 and 11 to 6?
st32024
Solved by Kushaj in post #4 Use layer.weight to get the embedding matrix. In general whenever you want to extract something from any layer in pytorch just look up at the __init__ function in the source code.
st32025
The embedding layer is just a look up table. So you pass an index and an embedding vector is returned. When you initialize the embedding layer, these are just random values. After training the embeddings, you can try the following to check the quality of the embeddings:

- Check the metric. With everything else kept the same, the metric value of the 69-dim and 35-dim embeddings can give you some idea of the quality of the embeddings.
- You can use PCA to visualize the embeddings in 2D space, although this is not a great approach.
st32026
@Kushaj thanks Kushaj, I think I need to extract those layers after training and then visualize them against the target variable. How can I extract the embedding layers' weights after training?
st32027
Use layer.weight to get the embedding matrix. In general whenever you want to extract something from any layer in pytorch just look up at the __init__ function in the source code.
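A minimal sketch of both steps discussed above: pulling the weights out and projecting them to 2D. A standalone nn.Embedding stands in for your trained layer, the sizes 69 and 35 follow the earlier example, and torch.pca_lowrank is just one convenient way to do the PCA step:

import torch
import torch.nn as nn

# stand-in for a trained model's embedding layer: 69 categories -> 35 dims
emb = nn.Embedding(69, 35)

weights = emb.weight.detach().cpu()      # (69, 35) embedding matrix
print(weights.shape)

# quick 2D projection via PCA for visualization
U, S, V = torch.pca_lowrank(weights, q=2)
coords_2d = weights @ V[:, :2]           # (69, 2), one point per category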
st32028
I want to implement my own version of Tensor.trace() by following along native/README.md. As an initial test, I just duplicated the code corresponding to trace in aten/src/ATen/native/native_functions.yaml, aten/src/ATen/native/ReduceOps.cpp, aten/src/ATen/native/cuda/TriangularOps.cu and tools/autograd/derivatives.yaml, and added the prefix 'my_' to the relevant declarations. This means I now have the following:

native_functions.yaml:

- func: my_trace(Tensor self) -> Tensor
  variants: method, function
  dispatch:
    CPU: my_trace_cpu
    CUDA: my_trace_cuda

- func: my_trace_backward(Tensor grad, int[] sizes) -> Tensor
  variants: function
  device_check: NoCheck
  device_guard: False

derivatives.yaml:

- name: my_trace(Tensor self) -> Tensor
  self: my_trace_backward(grad, self.sizes())

ReduceOps.cpp:

Tensor my_trace_cpu(const Tensor& self) { ... }

TriangularOps.cu:

Tensor my_trace_cuda(const Tensor& self) { ... }

There still seems to be something missing somewhere, because during my build I get the following error:

FAILED: bin/conv_to_nnpack_transform_test
: && /usr/lib/ccache/c++ -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow -DHAVE_AVX_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -rdynamic -Wl,-Bsymbolic-functions caffe2/CMakeFiles/conv_to_nnpack_transform_test.dir/transforms/conv_to_nnpack_transform_test.cc.o -o bin/conv_to_nnpack_transform_test -Wl,-rpath,/home/me/pytorch/build/lib: lib/libgtest_main.a -Wl,--no-as-needed,"/home/me/pytorch/build/lib/libtorch.so" -Wl,--as-needed -Wl,--no-as-needed,"/home/me/pytorch/build/lib/libtorch_cpu.so" -Wl,--as-needed lib/libprotobuf.a lib/libc10.so -lmkl_intel_lp64 -lmkl_gnu_thread -lmkl_core -fopenmp /usr/lib/x86_64-linux-gnu/libpthread.so -lm /usr/lib/x86_64-linux-gnu/libdl.so lib/libdnnl.a -ldl lib/libgtest.a -pthread && :
/usr/bin/ld: /home/me/pytorch/build/lib/libtorch_cpu.so: undefined reference to `at::native::my_trace_backward(at::Tensor const&, c10::ArrayRef<long>)'
collect2: error: ld returned 1 exit status
[1291/1581] Building CXX object test_tensorexpr/CMakeFiles/test_tensorexpr.dir/test_simplify.cpp.o
ninja: build stopped: subcommand failed.

The error message speaks of libtorch, but not the README.md. What did I miss?
st32029
Tensor my_trace_backward(const Tensor& grad, IntArrayRef sizes)

It's just a copy of trace_backward.
st32030
Say I create a placeholder for a batch of 3 images:

batch = torch.zeros(3, 3, 256, 256, dtype=torch.uint8)

I have my dummy image:

image = torch.randint(size=(3, 256, 256), low=0, high=256)

I then do:

batch[0] = image

I am unable to understand the following outputs.

id(batch[0]) == id(image)
out: False

Should this not be true, as both hold references to the same Tensor object 'image'?

id(batch.storage()) == id(image.storage())
out: True

batch[0][0][0][0] = 5
print(image[0][0][0])
out: 171

Since both 'batch' and 'image' share the same underlying storage, why does the change in batch[0][0][0][0] not reflect when I print the corresponding element of 'image'?

Thank you!
st32031
Solved by KFrank in post #2 Hi Rohit! Batch is a single “holistic” tensor of shape [3, 3, 256, 256]. It is not a “collection” (in the sense of, say, a python list or dictionary) of three 3x256x256 tensors. Pytorch is doing some moderately fancy stuff with python here. This is not simply assigning image to the 0 element…
st32032
Hi Rohit!

Rohit_R:
batch = torch.zeros(3, 3, 256, 256, dtype=torch.uint8)

Batch is a single "holistic" tensor of shape [3, 3, 256, 256]. It is not a "collection" (in the sense of, say, a python list or dictionary) of three 3x256x256 tensors.

batch[0] = image

Pytorch is doing some moderately fancy stuff with python here. This is not simply assigning image to the 0 element of the batch "collection." I don't understand the python details, but this line of code, roughly speaking, calls something like:

batch.modify_tensor_slice (0, image)

id(batch[0]) == id(image)
out: False

Should this not be true as both hold references to the same Tensor object 'image'?

No, batch[0] and image are two different tensors. In this case (unlike in the assignment batch[0] = image) you should understand batch[0] as calling something like:

batch.return_slice_as_new_tensor (0)

Pytorch tensors are fancy objects that can do a lot of things. In this case, batch[0] is indeed a new tensor object, but it is a "view," so to speak, into another tensor, batch. But even though the tensor batch[0] and the tensor batch share some of the same underlying data, they are two distinct tensor objects.

id(batch.storage()) == id(image.storage())
out: True

Since both 'batch' and 'image' share the same underlying storage, why does the change in batch[0][0][0][0] not reflect when I print the corresponding element of 'image'?

This is a confusing fake-out. batch and image do not share the same storage. I don't know the actual details of what is going on, but I deduce that some_tensor.storage() returns a new "storage" object that wraps (for whatever reason) the actual underlying storage. Consider this:

>>> import torch
>>> torch.__version__
'1.7.1'
>>> batch = torch.tensor ([[1, 2], [3, 4]])
>>> image = torch.tensor ([10, 20])
>>> id (batch.storage())
139730615547904
>>> id (image.storage())
139730615547328
>>> id (batch.storage()) == id (image.storage())
True
>>> batch_storage = batch.storage()
>>> image_storage = image.storage()
>>> id (batch_storage) == id (image_storage)
False
>>> id (batch_storage)
139730615547904
>>> id (image_storage)
139730615546112
>>> id (batch.storage())
139730615547328
>>> id (image.storage())
139730615547584

You can see that calling .storage() multiple times on the same tensor returns a new, different "storage" object each time. (Why pytorch does things this way I don't know.) But how then can we have:

>>> id (batch.storage()) == id (image.storage())
True

Python / pytorch creates a new storage object when image.storage() is called, gets its id() (address in memory), and then discards the storage object. The same thing happens when batch.storage() is called – a new storage object is created. Due to the vagaries of the python interpreter, the new storage object created by batch.storage() happens to land at the same location in memory previously being used by the since-discarded storage object created by image.storage(), so the two have the same id(). (Different python objects are only guaranteed to have different id()s if they exist at the same time. Memory – and hence id()s – can be reused after an object is disposed of.)
Finally, consider this:

>>> batch = torch.tensor ([[1, 2], [3, 4]])
>>> image = torch.tensor ([10, 20])
>>> b0 = batch[0]
>>> batch
tensor([[1, 2],
        [3, 4]])
>>> image
tensor([10, 20])
>>> b0
tensor([1, 2])
>>> batch[0] = image
>>> batch
tensor([[10, 20],
        [ 3,  4]])
>>> image
tensor([10, 20])
>>> b0
tensor([10, 20])
>>> batch[0, 0] = 55
>>> batch
tensor([[55, 20],
        [ 3,  4]])
>>> image
tensor([10, 20])
>>> b0
tensor([55, 20])
>>> image[0] = 666
>>> batch
tensor([[55, 20],
        [ 3,  4]])
>>> image
tensor([666,  20])
>>> b0
tensor([55, 20])
>>> b0[0] = 9999
>>> batch
tensor([[9999,   20],
        [   3,    4]])
>>> image
tensor([666,  20])
>>> b0
tensor([9999,   20])

To reiterate, batch, image, and b0 are three distinct tensor objects. But batch and b0 share the same underlying storage (in that b0 is a "view" into batch), while image has its own, unrelated storage.

Best.

K. Frank
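A minimal sketch of a more direct way to check whether two tensors share memory, using data_ptr() (the address of the first element) instead of comparing id() of the temporary storage wrappers discussed above; shapes are shrunk for brevity:

import torch

batch = torch.zeros(3, 3, 4, 4)
image = torch.randn(3, 4, 4)
batch[0] = image                             # copies image's values into batch's storage

b0 = batch[0]                                # a view into batch
print(b0.data_ptr() == batch.data_ptr())     # True: same underlying memory
print(image.data_ptr() == batch.data_ptr())  # False: image keeps its own storage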
st32033
Hello Frank! Thank you very much for your detailed reply. Your answer has cleared all my doubts! On another note, could you please elaborate a bit more on your first statement " Batch is a single ‘holistic’ tensor of shape [3, 3, 256, 256]. It is not a ‘collection’. " The mental picture I create of ‘Batch’ is a collection of 3 images each of shape [3,256,256]. I also understand that this is just really a single 4D tensor. Is this what you meant by a ‘holistic’ tensor ? Thank you again!
st32034
Hi Rohit! Rohit_R: The mental picture I create of ‘Batch’ is a collection of 3 images each of shape [3,256,256]. However you point out that this is not true. Well, first, you could consider this a semantic distinction about what “collection” ought to mean. But leaving that aside … Your mental picture is not unreasonable. It does have aspects, however, that could be misleading. From a low-level technical perspective, in the simplest case pytorch stores the data for a tensor of shape [3, 3, 256, 256] as a contiguous array of 3 * 3 * 256 * 256 (in your case) bytes. Ignoring strides and various kinds of views, you can’t build a [3, 3, 256, 256]-tensor from 3 [3, 256, 256]-tensors without copying that data from the three separate tensors into the contiguous data array for the new [3, 3, 256, 256]-tensor. From the perspective of functionality, if the batch tensor were a (general-purpose) collection of three image tensors, you might imagine building a batch that consisted of a [3, 256, 256]-image, a [1, 256, 256]-image, and a [3, 128, 128]-image. (Some languages refer to such constructs as “ragged arrays.”) But you can’t. The three images that make up the batch tensor all have to have the same shapes. They are “slices” of a higher-dimensional tensor (rather than items in a general-purpose collection), and, as such, their shapes are constrained to be the same. Best. K. Frank
st32035
In the middle of my model there are some for loops that I must do; there is no replacement for them. To make them faster I want to use numba.njit, but it works only for numpy arrays, and if I detach the tensor the gradients won't be recorded. Is there a solution for that? Also, I have heard of torch.jit.trace: will it do the same as numba.njit and record the gradients?
st32036
Hi, If you have to break the autograd, you can use a custom Function (see Extending PyTorch — PyTorch 1.8.1 documentation). You will have to specify what the backward pass of this op is, though.
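A minimal sketch of what such a custom Function looks like, here wrapping a numpy-side computation with a hand-written backward; the square function is just a stand-in for whatever the loop/numba code would compute:

import torch

class NumpySquare(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        out_np = x.detach().cpu().numpy() ** 2   # autograd cannot see this part
        return torch.from_numpy(out_np).to(x.device)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * 2 * x               # hand-written gradient of x**2

x = torch.randn(5, requires_grad=True)
y = NumpySquare.apply(x).sum()
y.backward()
print(x.grad)                                    # matches 2 * x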
st32037
Oh I get it. Another thing please: in this code

import torch

depth = torch.randn((22000,3), requires_grad=True)
depth[depth<0].zero_()

rot_sin = torch.sin(torch.FloatTensor([0.4]))
rot_cos = torch.cos(torch.FloatTensor([0.4]))
rot_mat_T = torch.FloatTensor(
    [[rot_cos, 0, -rot_sin],
     [0, 1, 0],
     [rot_sin, 0, rot_cos]],
)
rot_mat_T.requires_grad = True

depth = depth @ rot_mat_T

linear = torch.nn.Linear(3,4)
out = linear(depth)
out.sum().backward(retain_graph=True)
print(depth.grad)

it gives me this warning:

UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.
  print(depth.grad)

But if I replace the line depth = depth @ rot_mat_T with new_depth = depth @ rot_mat_T and out = linear(depth) with out = linear(new_depth), it works and prints the gradient. Why is that?
st32038
Abdelrahman_Akram:
depth = depth @ rot_mat_T

When you do that, you make the "depth" python variable point to the result of depth @ rot_mat_T instead of the Tensor it was pointing to before (the result of torch.randn((22000,3), requires_grad=True)). But the .grad field is only populated for leaf Tensors that require gradients. And you can check that "depth" just after creating it at the top has .is_leaf == True. But after depth = depth @ rot_mat_T, the new Tensor that "depth" points to has .is_leaf == False. So the .grad field won't be populated, hence the warning.
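A minimal sketch of the two ways to get at that gradient, following the warning's suggestion; the names reuse the snippet above, with smaller shapes for brevity:

import torch

depth = torch.randn(5, 3, requires_grad=True)    # leaf tensor
rot_mat_T = torch.randn(3, 3, requires_grad=True)

# option 1: keep a separate name so the leaf stays reachable
new_depth = depth @ rot_mat_T
new_depth.retain_grad()                          # option 2: ask autograd to keep the non-leaf grad

out = torch.nn.Linear(3, 4)(new_depth)
out.sum().backward()

print(depth.grad.shape)        # gradient w.r.t. the original leaf
print(new_depth.grad.shape)    # populated only because of retain_grad()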
st32039
So if I want the grads to flow to the first depth variable, I have to add the new_depth step, right? And if I do so, won't it take more memory?
st32040
Not really, because the computational graph keeps a reference to the original “depth” Tensor and so that Tensor stays alive anyways. So you won’t see any memory difference.
st32041
Is there a way for me to access PyTorch documentation offline? I checked the github repo and there seems to be a doc folder but I am not clear on how to generate the documentation so that I can use it offline. I am looking for documentation for stable 0.4.0.
st32042
Solved by albanD in post #2 Hi, The doc needs to be generated from the source code. To get it you will need to go into the docs folder and then run make (or the bat file if you’re on windows). You might need to install the dependencies in the requirements.txt file before. Once make run, you will have a local html file that …
st32043
Hi, The doc needs to be generated from the source code. To get it you will need to go into the docs folder and then run make (or the bat file if you're on Windows). You might need to install the dependencies in the requirements.txt file first. Once make has run, you will have a local html file that you can browse offline containing all the docs.
st32044
Thanks. I was able to generate the documentation in html format using the steps you mentioned. Any idea how I can generate the docs specific to 0.4.0 stable? The generated docs show unstable 0.3.0.post4. I tried checking out to tag v0.4.0 in repo, but it still generates the unstable docs.
st32045
The version number might not be correct, because the build system for the releases is responsible for setting the version to something else than "unstable". It will generate the doc for the current version of the repo. So if you're at tag 0.4.0, it will generate the doc for this version.
st32046
Hi, but how can I run the Makefile in some specific folder? I want to install the docs only.
st32047
How do I install the dependencies in the requirements.txt? Running

sh ./Makefile

gives:

./Makefile: 5: ./Makefile: SPHINXOPTS: not found
./Makefile: 6: ./Makefile: SPHINXBUILD: not found
./Makefile: 7: ./Makefile: SPHINXPROJ: not found
./Makefile: 8: ./Makefile: SOURCEDIR: not found
./Makefile: 9: ./Makefile: BUILDDIR: not found
./Makefile: 10: ./Makefile: PYCMD: not found
./Makefile: 13: ./Makefile: help:: not found
./Makefile: 14: ./Makefile: SPHINXBUILD: not found
./Makefile: 14: ./Makefile: SOURCEDIR: not found
./Makefile: 14: ./Makefile: BUILDDIR: not found
./Makefile: 14: ./Makefile: SPHINXOPTS: not found
./Makefile: 14: ./Makefile: O: not found
./Makefile: 14: ./Makefile: @: not found
./Makefile: 16: ./Makefile: figures:: not found
./Makefile: 17: ./Makefile: PYCMD: not found
./Makefile: 17: ./Makefile: @: not found
./Makefile: 19: ./Makefile: docset:: not found
./Makefile: 20: ./Makefile: SPHINXPROJ: not found
./Makefile: 20: ./Makefile: SOURCEDIR: not found
./Makefile: 20: ./Makefile: BUILDDIR: not found
./Makefile: 20: ./Makefile: doc2dash: not found
./Makefile: 23: ./Makefile: SPHINXPROJ: not found
./Makefile: 23: ./Makefile: SPHINXPROJ: not found
cp: cannot stat '.docset/icon.png': No such file or directory
./Makefile: 24: ./Makefile: SPHINXPROJ: not found
./Makefile: 24: ./Makefile: SPHINXPROJ: not found
convert: unable to open image `.docset/icon@2x.png': No such file or directory @ error/blob.c/OpenBlob/2712.
convert: no images defined `.docset/icon.png' @ error/convert.c/ConvertImageCommand/3210.
./Makefile: 26: ./Makefile: html-stable:: not found
Traceback (most recent call last):
  File "source/scripts/build_activation_images.py", line 67, in <module>
    function = torch.nn.modules.activation.__dict__[function_name]()
KeyError: 'CELU'
Makefile:17: recipe for target 'figures' failed
make: *** [figures] Error 1
./Makefile: 33: ./Makefile: .PHONY:: not found
./Makefile: 37: ./Makefile: %:: not found
./Makefile: 38: ./Makefile: SPHINXBUILD: not found
./Makefile: 38: ./Makefile: SOURCEDIR: not found
./Makefile: 38: ./Makefile: BUILDDIR: not found
./Makefile: 38: ./Makefile: SPHINXOPTS: not found
./Makefile: 38: ./Makefile: O: not found
./Makefile: 38: ./Makefile: @: not found
./Makefile: 40: ./Makefile: clean:: not found
./Makefile: 41: ./Makefile: @echo: not found
./Makefile: 42: ./Makefile: BUILDDIR: not found
./Makefile: 42: ./Makefile: BUILDDIR: not found
./Makefile: 42: ./Makefile: @rm: not found
st32048
You're not supposed to execute a Makefile with bash. Just run make in the folder where the Makefile is.
st32049
Thank you for your reply. But I still get the issue:

Wendell_Philips:
Traceback (most recent call last):
  File "source/scripts/build_activation_images.py", line 67, in <module>
    function = torch.nn.modules.activation.__dict__[function_name]()
KeyError: 'CELU'
Makefile:17: recipe for target 'figures' failed
make: *** [figures] Error 1
st32050
Not working for me in May 2021:

WARNING: autodoc: failed to import class 'ConvReLU3d' from module 'torch.nn.intrinsic.qat'; the following exception was raised:
Traceback (most recent call last):
  File "c:\python38\lib\site-packages\sphinx\util\inspect.py", line 403, in safe_getattr
    return getattr(obj, name, *defargs)
AttributeError: module 'torch.nn.intrinsic.qat' has no attribute 'ConvReLU3d'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "c:\python38\lib\site-packages\sphinx\ext\autodoc\importer.py", line 111, in import_object
    obj = attrgetter(obj, mangled_name)
  File "c:\python38\lib\site-packages\sphinx\ext\autodoc\__init__.py", line 320, in get_attr
    return autodoc_attrgetter(self.env.app, obj, name, *defargs)
  File "c:\python38\lib\site-packages\sphinx\ext\autodoc\__init__.py", line 2604, in autodoc_attrgetter
    return safe_getattr(obj, name, *defargs)
  File "c:\python38\lib\site-packages\sphinx\util\inspect.py", line 419, in safe_getattr
    raise AttributeError(name) from exc
AttributeError: ConvReLU3d

c:\python38\lib\site-packages\torch\nn\quantized\modules\activation.py:docstring of torch.nn.quantized.modules.activation.ReLU6:14: WARNING: image file not readable: scripts/activation_images/ReLU6.png

looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
writing output... [ 0%] complex_numbers

Exception occurred:
  File "c:\python38\lib\subprocess.py", line 1307, in _execute_child
    hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] The system cannot find the file specified
The full traceback has been saved in C:\Users\paulb\AppData\Local\Temp\sphinx-err-nxayl6xl.log, if you want to report the issue to the developers.
Please also report this if it was a user error, so that a better error message can be provided next time.
A bug report can be filed in the tracker at https://github.com/sphinx-doc/sphinx/issues.

Thanks!
st32051
Hi, We build the doc on Linux machines all the time, I'm afraid. It might be that it is not working on Windows. Do you have a Linux machine you can use to get the doc by any chance? Another convoluted way to do this is to download it directly from our build. On every PR (pick one randomly on the pytorch GitHub), there is a pytorch_python_doc_build job that actually builds the doc. You can check there the artifact page that contains the full doc: https://app.circleci.com/pipelines/github/pytorch/pytorch/326575/workflows/30cd30c4-fea5-4165-ab08-a54b85d0e67e/jobs/13676034/artifacts You can download all of these as a single file by using the CircleCI API directly: Storing Build Artifacts - CircleCI
st32052
Hi, I want to implement a batched linear combination of batches bs of tensors b of arbitrary shape with batches as of coefficients a. This means bs has size [batch_size, n0, n1, ..., nk] and as has size [batch_size, n0]. The output is of size [batch_size, n1, ..., nk]. The following does what I want:

def cust_td(a, b):
    return torch.tensordot(a, b, dims=1)

batch_td = torch.vmap(cust_td)
batch_td(as, bs)

Is this fine (from an efficiency perspective)? On another note, this question maybe motivates allowing torch.vmap to accept keyword args (e.g. dims=1 in my case) and pass them to the function func before vectorizing it.
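For comparison, a minimal sketch of the same batched contraction written with torch.einsum, which avoids the per-sample function entirely; the variables are renamed from the post (since `as` is a Python keyword) and the shapes are illustrative:

import torch

coeffs = torch.randn(8, 4)            # [batch_size, n0]
tensors = torch.randn(8, 4, 5, 6)     # [batch_size, n0, n1, n2]

# contract over n0 for each batch element
out = torch.einsum('bn,bn...->b...', coeffs, tensors)
print(out.shape)                      # torch.Size([8, 5, 6])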
st32053
Hi, I have segmentation maps with size = [5, 64, 64], where the batch size is 5 and the spatial size is 64x64. Given a feature map F with the same spatial size 64x64 and N channels, let's select locations in the segmentation map whose labels are v. Now I want to select features in F based on the selected locations whose labels are v in the segmentation map. Here's how it looks:

seg = torch.randint(0, 5, size=(5, 64, 64))
F = torch.rand(5, 25, 64, 64)
locations = seg == 1
F[locations]??

Examples: seg with batch_size=2, spatial_size=2x2=4

seg = [[[0,1,2,3],[1,2,1,2]

F with 3 channels, the size is (2,3,4)

F = [[2,3,4,3],
     [4,1,2,4],
     [4,0,5,1]],
    [1,2,0,4],
    [0,1,10,3],
    [1,2,0,5]],

location = seg == 1
output = F[location] = [[3,1,0],[1,0,1],[0,10,0]]
st32054
Solved by the-dharma-bum in post #2 Your example isn’t coherent with your explanation above. Given your explanation, seg and F size should be respectively (2, 2, 2) and (2, 3, 2, 2): seg = [[[0, 1], [2, 3]], [[1, 2], [1, 2]]] mask = np.array(seg) print(mask.shape) # (2, 2, 2) F = [[[[2, 3], [4, 3]], …
st32055
Your example isn't coherent with your explanation above. Given your explanation, seg and F size should be respectively (2, 2, 2) and (2, 3, 2, 2):

seg = [[[0, 1], [2, 3]],
       [[1, 2], [1, 2]]]
mask = np.array(seg)
print(mask.shape)  # (2, 2, 2)

F = [[[[2, 3], [4, 3]],
      [[4, 1], [2, 4]],
      [[4, 0], [5, 1]]],
     [[[1, 2], [0, 4]],
      [[0, 1], [10, 3]],
      [[1, 2], [0, 5]]]]
features = np.array(F)
print(features.shape)  # (2, 3, 2, 2)

output = features[mask == 1]  # yields an error because mask and features don't have the same dimensions

But in that case I don't understand what you want to do. If you have a batch of segmentation masks and a batch of feature maps, don't you want to treat each sample of the batch independently? Based on your example, it seems you want the positive indices across the whole batch, which is implicitly a fusion of the batched segmentation masks. Is that really what you want to do? For instance, let's forget this batch issue and assume you have one segmentation mask and one set of features:

mask = torch.randint(0, 5, size=(64, 64))
features = torch.rand(25, 64, 64)

You can then use torch.masked_select:

class_mask = torch.where(mask == 1, True, False)
output = features.masked_select(class_mask)
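A minimal sketch of one way to handle the batched case while keeping the per-location channel vectors together, assuming you do want to pool the selected locations across the whole batch; it moves channels to the last dimension so boolean indexing returns one feature vector per selected pixel:

import torch

B, C, H, W = 5, 25, 64, 64
seg = torch.randint(0, 5, (B, H, W))
features = torch.rand(B, C, H, W)

class_mask = seg == 1                            # (B, H, W) boolean mask
# reorder to (B, H, W, C) so indexing with the mask yields (num_selected, C)
selected = features.permute(0, 2, 3, 1)[class_mask]
print(selected.shape)                            # (num_selected, 25)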
st32056
Hi, I am trying to train an object detection model for my custom data. While I've constructed the model and data loading, I am having some doubts over how to prepare targets and calculate the loss. The details of the training are:

1. The problem is a binary classification task with localization. I am using Yolov3 for the base model with a bit of customization for my project needs.
2. For each scale in Yolov3, I am constructing an adjacent zero tensor that is identical to the output of the model, then calculating the midpoint of any objects in the image, finding out which grid cell the midpoint belongs to, and assigning that grid cell the necessary information about object coordinates and height/width. This tensor will be used as the target tensor for the scale.
3. Find which grid cells are responsible for objects, then save indices for the non-zero positions.
4. Filter target and feature tensors for both objectness score and coordinate regression by using the indices from step 3.
5. Reshape the features and target tensor to (-1, n), meaning no information about batch, grid cell, or anything, just plain old 1:1 comparison.
6. Calculate IOU between the predicted box and target box.
7. Object Loss is being calculated as (IOU * Object Confidence). Add to Total Loss.
8. Repeat for another scale.
9. Backprop Object Loss.

Some training shenanigans:

- For each scale, positional weights are being calculated for object == 0 or object == 1, then negative/positive is used as pos_weight for BCEWithLogitsLoss, which means the criterion is being created for each scale in each batch.
- Adam optimizer with 1e-3 LR and ReduceOnPlateau with a patience of 2 and a learning rate reduction of 0.1 per step on eval_loss. This was done because my loss stops decreasing at some point and starts increasing, and I read on multiple Github issues that something like this could help, but it hasn't in my case. Also, train_loss is oscillating a lot while monitoring in Tensorboard.
- Batch size of 16, can go up to 256.
- Clipping gradients to 10.

My questions are:

1. Is it okay to only assign a single grid cell for the target? I am only considering whether the grid cell "supposed" to detect the object is successful or not. I have gone through the paper and multiple codes from various sources and have never been able to grasp how they are building targets.
2. What if the object is sufficiently large and the scale at which I am predicting is not able to fully detect it? Do I still calculate the loss for that scale?
3. I am using "mean" as the reduction for the BCEWithLogitsLoss, but reading the paper I get the impression that they are summing over the losses, which is the reduction "sum". Does this affect the training procedure much?
4. Before using (IOU * Object Confidence), I was trying to train by using Object Confidence + (1 - IOU) or MSE_Loss(IOU, 1). When using this formulation, the output of the model for Object Confidence became NaN after a few steps to a few epochs based on the learning rate. What might've caused this?
5. Should I use MSE_Loss on coordinates and height/width directly instead of IOUs? I am using the IOUs because using the bounding box attributes directly just gives larger numbers, while using IOUs the same information can be supplied to the criterion while keeping the numbers in check.

I am sorry for the lengthy post.
st32057
Hi all, I wrote this GAN (in the example it uses fake input): https://gist.github.com/pzaffino/7c3714ffe8eb867eb45b721ac4d2d808 (pytorch_error.py)

Basically, the generative accuracy is quantified in terms of image difference. The generator net and the discriminator net get trained (individually), but it looks like they don't "communicate". I mean, the discriminator judgment doesn't affect the generator's training; the two trainings run in parallel without affecting each other. Any idea? Am I missing something? I multiply the discriminator score by a float factor in order to make it comparable to the image difference. Thank you in advance. Best, Paolo
st32058
Hi, I want to use a vector loss, i.e. the output of the loss function should be a vector, as follows:

def my_loss(y_pred, y_val):
    return (y_pred - y_val)**2

But in the training step (loss.backward()) I get this error:

grad can be implicitly created only for scalar outputs

That means the output of the loss function cannot be a vector. Is there a solution to this problem? I want the loss to be a vector.
st32059
Well the issue here is that in order to take a gradient, the derivative must be with respect to a single value. How should the vector loss be interpreted?
st32060
@saeed_i You can use the reduction='none' argument in MSELoss to get what you are desiring. But I agree with @eqy: in order to take a gradient, the derivative must be with respect to a single value. While calculating the gradient you must consider the error over the complete dataset.
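A minimal sketch of what that looks like in practice: keep the per-element losses as a vector for inspection, but reduce to a scalar before calling backward():

import torch
import torch.nn as nn

y_pred = torch.randn(4, requires_grad=True)
y_val = torch.randn(4)

criterion = nn.MSELoss(reduction='none')
loss_vec = criterion(y_pred, y_val)   # per-element losses, shape (4,)

loss_vec.mean().backward()            # backward still needs a scalar
print(loss_vec, y_pred.grad)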
st32061
Why can't we do that? I'm confused about the mathematics of this, although it may be simple! Can you explain with an example? Thanks!
st32062
Hi Saeed! saeed_i: Can you explain with an example ?? When you use a loss function to train a model, the loss function is telling you which set of model parameters is “better” than other sets of model parameters. Let’s say you have a model, and when it has weight_A as its parameters it produces loss_vector_A = [1.1, 4.4, 2.2]. Let’s also say the when the same model has weight_B as its parameters it produces loss_vector_B = [2.2, 1.1, 3.3]. Is the model a better model with weight_A or weight_B? If the loss function produced just a scalar (instead of a vector), we would just say that the smaller scalar value corresponds to the better model. (That’s really what “loss function” means.) (If you say just add up the elements of your loss vectors to see which model is better, then you would really be saying that loss_vector_A.sum() and loss_vector_B.sum() should be your scalar loss-function values.) Best. K. Frank
st32063
Yes, I know these things. I'm confused about the mathematical calculation. Assume we have:

x = torch.tensor([2., 3., 5.], requires_grad=True)
y_1 = x.pow(2)
y_2 = x.pow(2).sum()

Then:

y_1 = tensor([ 4.,  9., 25.], grad_fn=<PowBackward0>)
y_2 = tensor(38., grad_fn=<SumBackward0>)

We know the derivative of x^2 is 2x, so the gradient of y_1 is 2x. Why do we put sum at the end of it (y_1 ==> y_2)?
st32064
Hi Saeed!

saeed_i:
we know Derivation of x^2 is equal to 2x and gradient of y_1 is be 2x

You are making the hidden assumption that y_1[0] depends only on x[0], y_1[1] only on x[1], and y_1[2] only on x[2]. This happens to be true in your particular example of y_1 = x.pow (2). How would you change your reasoning for the case where y_1 = x * x.roll (1)? Please look at the concept of the Jacobian matrix and how it is the generalization of the gradient to a vector-valued function. (And to avoid any misconception, let me reiterate what I said in my previous post: In your current example, y_1 is a vector, rather than a scalar, so it cannot be used as a loss function.) Best. K. Frank
st32065
I think the backward() documentation explains it pretty well: you can call backward() on a vector if you specify grad_tensors, that is, a gradient of some vector-to-scalar function. If this argument consists of ones, that's the same as .sum().backward(), and a weighted sum otherwise. As parameter.grad is for scalar-valued functions, reduction to a scalar is present one way or the other.
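A minimal sketch of backward() on a vector output with an explicit gradient argument, showing it matches summing first (numbers reuse the example above):

import torch

x = torch.tensor([2., 3., 5.], requires_grad=True)
y = x.pow(2)                          # vector-valued, shape (3,)

y.backward(torch.ones_like(y))        # same as y.sum().backward()
print(x.grad)                         # tensor([ 4.,  6., 10.])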
st32066
Is there a way to do in-place operations when you are accessing a tensor by reference? As an example, I'd like to be able to change the original temp by calling sigmoid (and possibly other operations more complex than +=) on temp2.

>>> import torch
>>> temp = {0: torch.zeros(4), 1: torch.zeros(4)}
>>> temp2 = temp[0]
>>> temp2 += 1
>>> temp
{0: tensor([1., 1., 1., 1.]), 1: tensor([0., 0., 0., 0.])}
>>> temp2 = temp2.sigmoid()
>>> temp2
tensor([0.7311, 0.7311, 0.7311, 0.7311])
>>> temp
{0: tensor([1., 1., 1., 1.]), 1: tensor([0., 0., 0., 0.])}
st32067
Hi Pytorcher! pytorcher: Is there a way to do in place operations when you are accessing a tensor by reference? Yes, you can use temp2.copy_ (temp2.sigmoid()). Best. K. Frank