st47668
I have 100 folders of different class images and I am getting a NaN loss value for some folders. I already checked for grayscale, truncated, and missing labels etc., and everything is fine, but I am still getting a NaN loss. What could be the possible reasons?
st47669
Do you get a NaN output from your model when you are using samples from certain folders, or what do you mean by:

monster: "I am getting nan loss value in some folders"

If your model is returning NaNs, you could call torch.autograd.set_detect_anomaly(True) at the beginning of your script to get a stack trace, which would hopefully point to the operation that is creating the NaNs.
st47670
I am getting a NaN loss after the 1st epoch on a large dataset. Please tell me all possible reasons for a NaN loss value. Check this:

dict_values([tensor(5.5172, device='cuda:0', grad_fn=<NllLossBackward>), tensor(nan, device='cuda:0', grad_fn=<DivBackward0>), tensor(3.7665, device='cuda:0', grad_fn=<BinaryCrossEntropyWithLogitsBackward>), tensor(inf, device='cuda:0', grad_fn=<DivBackward0>)])
st47671
NaN values can be created by invalid operations, such as torch.log(torch.tensor(-1.)), by operations executed on Infs (created through over-/underflows) etc. To isolate it, use torch.autograd.set_detect_anomaly(True).
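As a rough illustration of the suggestion above, here is a minimal sketch of enabling anomaly detection (my own example, not from the thread):

import torch

# enable anomaly detection once, before the training loop
torch.autograd.set_detect_anomaly(True)

x = torch.tensor([-1.0], requires_grad=True)
out = torch.sqrt(x)   # the forward pass already creates a NaN
out.backward()        # anomaly mode should raise an error naming the backward function that produced the NaN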
st47672
Thanks for reply. Can I just use gradient clipping? If yes,how can I choose clip value ?
st47673
If larger gradient magnitudes are expected and would thus create invalid values, you might clip the gradients. You could start with a max norm value of 1 or refer to any paper that uses a similar approach. Note, however, that FloatTensors have a maximal value of:

print(torch.finfo().max)
> 3.4028234663852886e+38

so you should make sure that the NaNs are not created by an invalid operation.
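For reference, a minimal sketch of the gradient clipping suggested above (my own toy model; the real model, loss, and optimizer would differ):

import torch
import torch.nn as nn

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x, y = torch.randn(4, 10), torch.randn(4, 1)
loss = nn.functional.mse_loss(model(x), y)

optimizer.zero_grad()
loss.backward()
# clip the global gradient norm to a maximum of 1.0 before the optimizer step
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()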
st47674
By using torch.autograd.set_detect_anomaly(True) I found this error. My dataset seems OK, so to resolve this issue should I use gradient clipping or just ignore NaN values using torch.isnan(x)?

RuntimeError: Function 'SmoothL1LossBackward' returned nan values in its 0th output
st47675
I would recommend trying to figure out what is causing the NaNs instead of ignoring them. Based on the raised error, the loss function might either have created the NaNs or might have received them through its input. To isolate it, you could try to make the script deterministic following the reproducibility docs. Once it's deterministic and you can trigger the NaNs in a single step, you could check the parameters, inputs, gradients etc. for the iteration which causes the NaNs.
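A minimal sketch of the seeding typically recommended by the reproducibility docs (the exact flags depend on the PyTorch version):

import random
import numpy as np
import torch

seed = 0
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
# make cuDNN deterministic (can slow training down)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False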
st47676
I don't know if this applies to this case, and I made sure nothing's wrong with my data, but I see NaNs after some time when I use RMSprop but not with Adam. Try changing your optimizer maybe? A similar experience has been shared for Keras as well.
st47677
I am implementing the sequence-to-sequence model introduced by this paper from Google Magenta. My code is here: GitHub alexis-jacq/Pytorch-Sketch-RNN (a PyTorch implementation of https://arxiv.org/abs/1704.03477). As I don't have Google's computational power, I changed some of the paper's hyperparameters a bit (the number of nodes in the decoder LSTM and the learning rate), and it takes me half a day to run 10,000 epochs. However, according to the default hyperparameters suggested by the author's TF implementation, 512 neurons in the decoder LSTM and lr = 0.0001 should be enough to get good results. But, even after 40,000 epochs, it still generates random straight strokes, and even the reconstruction (giving true inputs from the training set to the decoder) is similar to the drawing but not satisfying. Something must be wrong somewhere… If any of you is interested in state-of-the-art seq2seq drawing generation, I would be glad to receive some help. I really need to make it work; it's for a robotics project (making collaborative child-robot hand-drawing).
st47678
Good news! My implementation was good from the beginning, but I had a terrible and invisible typo in the trajectory generation:

- next_state[q_idx+1] = 1
+ next_state[q_idx+2] = 1

Now it works properly, and I obtain nice samples of generated cats after a short training (this one after 1900 epochs, ~3 epochs/s):
st47679
This is great, I'm happy you were able to port sketch-rnn over to PyTorch. I'd also recommend using LSTM along with Recurrent Dropout without Memory Loss, which involves a one-line change of code in the LSTM, and also Layer Normalization. I found these two tricks helped a lot for LSTMs.
st47680
Thanks a lot! Yes, using recurrent dropout and layer normalization is on my TODO list. They seem not to be implemented yet in PyTorch's RNN modules, so I will probably have to write my own. Today I was busy finding a way to use the network to find and draw cats in clouds (with Canny edge detection):

[image: chat2.svg.png, 844×608]

Finally, we used a Baxter robot to draw a cat inferred from the head of a Nao robot. And this led us to make this creepy video: Baxter improvising from Nao's head
st47681
These cloudy cats are great! It's such a creative use of edge detection. I like the clouds more than Nao's head tbh. Looking forward to seeing any gallery or work you end up producing!
st47682
Hahahaha yes, the result with Nao's head is really ugly, which explains the style of the video! But the project will be more serious, involving children with handwriting difficulties in creative activities with robots. We will probably obtain beautiful pieces of child-robot collaborative art!
st47683
Hi, I was looking at your code and I tried to run it in a Jupyter notebook. Still, it seems to me that it doesn't work, because at this part:

if __name__=="__main__":
    model = Model()
    for epoch in range(50001):
        model.train(epoch)

I have an error saying:

epoch 0 loss 2.6120107173919678 LR 2.6110057830810547 LKL 0.0010049500269815326
<ipython-input-17-a948bd699ee2>:36: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  pi = F.softmax(pi.transpose(0,1).squeeze()).view(len_out,-1,hp.M)
<ipython-input-17-a948bd699ee2>:42: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  q = F.softmax(params_pen).view(len_out,-1,3)
<ipython-input-31-a793444e3f37>:66: UserWarning: torch.nn.utils.clip_grad_norm is now deprecated in favor of torch.nn.utils.clip_grad_norm_.
  nn.utils.clip_grad_norm(self.encoder.parameters(), hp.grad_clip)
<ipython-input-31-a793444e3f37>:67: UserWarning: torch.nn.utils.clip_grad_norm is now deprecated in favor of torch.nn.utils.clip_grad_norm_.
  nn.utils.clip_grad_norm(self.decoder.parameters(), hp.grad_clip)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-33-1aab3c2e5ccc> in <module>
      2 model = Model()
      3 for epoch in range(50001):
----> 4     model.train(epoch)

<ipython-input-31-a793444e3f37> in train(self, epoch)
     78         if epoch%100==0:
     79             #self.save(epoch)
---> 80             self.conditional_generation(epoch)
     81
     82     def bivariate_normal_pdf(self, dx, dy):

<ipython-input-31-a793444e3f37> in conditional_generation(self, epoch)
    142             hidden_cell = (hidden, cell)
    143             # sample from parameters:
--> 144             s, dx, dy, pen_down, eos = self.sample_next_state()
    145             #------
    146             seq_x.append(dx)

<ipython-input-31-a793444e3f37> in sample_next_state(self)
    180         sigma_y = self.sigma_y.data[0,0,pi_idx]
    181         rho_xy = self.rho_xy.data[0,0,pi_idx]
--> 182         x,y = sample_bivariate_normal(mu_x,mu_y,sigma_x,sigma_y,rho_xy,greedy=False)
    183         next_state = torch.zeros(5)
    184         next_state[0] = x

<ipython-input-32-7b287d68c95c> in sample_bivariate_normal(mu_x, mu_y, sigma_x, sigma_y, rho_xy, greedy)
      8     cov = [[sigma_x * sigma_x, rho_xy * sigma_x * sigma_y],\
      9            [rho_xy * sigma_x * sigma_y, sigma_y * sigma_y]]
---> 10     x = np.random.multivariate_normal(mean, cov, 1)
     11     return x[0][0], x[0][1]
     12

mtrand.pyx in numpy.random.mtrand.RandomState.multivariate_normal()

TypeError: ufunc 'add' output (typecode 'O') could not be coerced to provided output parameter (typecode 'd') according to the casting rule ''same_kind''

Would you mind helping me out with this?
st47684
I'm writing a shim to use a C++ function with PyTorch and would like to know how to convert a torch::Tensor to a std::vector<int32_t>?

float f_cpp(std::vector<int32_t>& result);

float f(torch::Tensor result_t) {
    // std::vector<int32_t> <-- torch::Tensor?
    std::vector<int32_t> result = result_t.data_ptr<std::vector<int32_t>>;
    return f_cpp(*result);
}
st47685
Solved by tom in post #2.
st47686
The problem here is that a std::vector can't use "foreign" memory. Some avenues that might work:

- Use a pointer (int32_t*) as an array or an ArrayRef<int32_t> (available in c10). In this case you need to keep the tensor allocated while you are using them. Also note that you need to be a bit careful with strides if your tensor can be non-contiguous.
- Allocate the memory in the vector and then use from_blob to get a tensor. In this case you need to keep the vector around while using the tensor.
- Copy the data.

Best regards
Thomas
st47687
tom: "from_blob"

Hey Thomas, thanks for replying. I'm not a C++ developer, so I need a little bit more clarification. I'm only dealing with 1-d tensors, so if I go with int32_t*, am I on the right track with:

auto r_ptr = result_t.data_ptr<int32_t>();
std::vector<int32_t> result{r_ptr, r_ptr + result_t.?};

If so, how do I get the size of the torch::Tensor result_t? Regards,
st47688
Hi, I am using differential learning rates for different layers, but at the same time I am also using LR decay (I might change to the OneCycle policy). I wanted to know to which layers the LR decay is applied; below is the sample code:

optimizer = optim.SGD([
    {'params': model.base.parameters()},
    {'params': model.classifier.parameters(), 'lr': 1e-3}
], lr=1e-2, momentum=0.9)
MultiStepLR(optimizer, milestones=[5,10], gamma=0.1)

Is the rate decay only applied to the base layers or to both the classifier and base layers?
st47689
Solved by tom in post #4.
st47690
They loop over the parameter groups, i.e. the classifier lr will always be a tenth of the base in your example. Best regards Thomas
st47691
Is there a way to keep the classifier learning rate constant while only the base learning rate decays?
st47692
Yes, stick the parameters in two optimizers. That should not make much of a difference w.r.t. performance, you just need to zero_grad and step twice.
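A minimal sketch of that suggestion; the model with base and classifier submodules is a placeholder, not the original code:

import torch
import torch.nn as nn
from torch.optim.lr_scheduler import MultiStepLR

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.base = nn.Linear(10, 10)
        self.classifier = nn.Linear(10, 2)

    def forward(self, x):
        return self.classifier(self.base(x))

model = Net()
opt_base = torch.optim.SGD(model.base.parameters(), lr=1e-2, momentum=0.9)
opt_clf = torch.optim.SGD(model.classifier.parameters(), lr=1e-3, momentum=0.9)
scheduler = MultiStepLR(opt_base, milestones=[5, 10], gamma=0.1)  # decays only the base lr

for epoch in range(15):
    x, y = torch.randn(4, 10), torch.randint(0, 2, (4,))
    loss = nn.functional.cross_entropy(model(x), y)
    opt_base.zero_grad()
    opt_clf.zero_grad()
    loss.backward()
    opt_base.step()
    opt_clf.step()
    scheduler.step()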
st47693
In PyTorch I can create a random zero-and-one tensor with around a 50% distribution of each:

import torch
torch.randint(low=0, high=2, size=(2, 5))

I am wondering how I can make a tensor where only 25% of the values are 1s, and the rest are zeros?
st47694
I typically use either

(torch.rand((2, 5)) < 0.25).float()

or

torch.full((2, 5), 0.25).bernoulli_()

So technically, the second saves one step (rand + compare + float vs. full + bernoulli), but I haven't really seen this as the bottleneck in anything I do, so it entirely depends on my mood - whether the glass is half full or half rand.

Best regards
Thomas
st47695
Hi, I have the following snippet in a U-Net structure:

class DoubleConv(nn.Module):
    def __init__(self, in_channels, out_channels, mid_channels=None):
        super(DoubleConv, self).__init__()
        if not mid_channels:
            mid_channels = out_channels
        self.d_conv = nn.Sequential(
            nn.Conv2d(in_channels, mid_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(mid_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True))

    def forward(self, x):
        ## The spatial dim is not changed: (out_dim, h, w)
        return(self.d_conv(x))

class Up(nn.Module):
    """Upscaling then double conv"""
    def __init__(self, in_channels, out_channels, bilinear=True):
        super(Up, self).__init__()
        # if bilinear, use the normal convolutions to reduce the number of channels
        if bilinear:
            self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
            # here we divide the number of filters
            self.conv = DoubleConv(in_channels, out_channels, in_channels//2)
        else:
            self.up = nn.ConvTranspose2d(in_channels, in_channels//2, kernel_size=2, stride=2)
            self.conv = DoubleConv(in_channels, out_channels)

    def forward(self, x1, x2):
        x1 = self.up(x1)
        print(x1.size())
        # input is CHW
        diffy = x2.size()[2] - x1.size()[2]
        diffx = x2.size()[3] - x1.size()[3]
        x1 = to_pil_image(x1)
        x1 = F.pad(x1, [diffx // 2, diffx - diffx // 2, diffy // 2, diffy - diffy // 2])
        x1 = to_tensor(x1)
        x = torch.cat([x2, x1], dim=1)
        return(self.conv(x))

But when I run it on two random tensors, I get two errors.

First: "img should be PIL Image. Got <class 'torch.Tensor'>", while in the documentation on pytorch.org it was written that torchvision.transforms.functional.pad can be applied to both PIL and Tensor images.

When I convert the tensors to PIL, I then get a dimension error, while on pytorch.org I read that it does not matter what dimension you have, it is just important that we have [..., h, w]. In the following, when I pass tensors of dimension [batch_num, c, h, w] it gives me the error that size 4 should be 2 or 3, and when I omit the batch_num from the tensors it returns a dimension error again.

x2 = torch.randn(1,3,130,130)
x1 = torch.randn(1,3,126,126)
upp = Up(3,32,True)
result = upp(x1,x2)

Error:
ValueError: pic should be 2/3 dimensional. Got 4 dimensions.
st47696
Hi,
About the first error: make sure you have installed the latest PyTorch and torchvision packages, as tensor support has been added in the latest builds, PyTorch 1.7 and torchvision 0.8.
About the second error: pad works on tensors too, and you don't need to convert to PIL and back to a tensor. The error comes from to_pil_image, to which you should only pass 2d or 3d inputs, but you have also included the batch dimension.

887574002:
x1 = to_pil_image(x1)
x1 = F.pad(x1, [diffx // 2, diffx - diffx // 2, diffy // 2, diffy - diffy // 2])
x1 = to_tensor(x1)

Bests
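A minimal sketch of padding the tensor directly, without the PIL round trip (my own example based on the shapes from the question):

import torch
import torch.nn.functional as F

x1 = torch.randn(1, 3, 126, 126)
x2 = torch.randn(1, 3, 130, 130)

diffy = x2.size(2) - x1.size(2)
diffx = x2.size(3) - x1.size(3)
# for a 4d tensor, F.pad takes (left, right, top, bottom) for the last two dims
x1 = F.pad(x1, [diffx // 2, diffx - diffx // 2, diffy // 2, diffy - diffy // 2])
x = torch.cat([x2, x1], dim=1)
print(x.shape)  # torch.Size([1, 6, 130, 130])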
st47697
Hello, I have a (soft) adjacency tensor adj of size B x N x M with batch dimension B. When I perform adj.max(dim=2) I get a tuple of max values and max value indices, indicating for each row in the N dimension where it is maximal with regard to the M dimension. Now I would like to use the returned max value indices to select entries from another tensor features of size B x N x M x 10, so that I get a tensor of size B x N x 10 which has kept only the entries of its dim=2 where adj was maximal with respect to dim=2. I have been able to do this without the batch dimension:

import torch

adj = torch.tensor([[1,0,0],[0,1,0],[0,1,0],[0,0,1]])  # 4 x 3
features = torch.stack([torch.arange(12).reshape((4,3))]*10).permute((1,2,0))  # 4 x 3 x 10
m = adj.max(dim=1)[1]
result = features[range(4), m]
print(result)
# result is:
# tensor([[ 0,  0,  0,  0,  0,  0,  0,  0,  0,  0],
#         [ 4,  4,  4,  4,  4,  4,  4,  4,  4,  4],
#         [ 7,  7,  7,  7,  7,  7,  7,  7,  7,  7],
#         [11, 11, 11, 11, 11, 11, 11, 11, 11, 11]])

Now, using range(4) there just doesn't seem like the correct way to handle this to me. Also, I have no idea how to do this with an additional batch dimension. Any ideas?
st47698
Solved by albanD in post #2.
st47699
Hi, You can use these indices with the other_tensor.gather(dim, indices) function. Note that you either need to use keepdim=True when you call max or unsqueeze the indices before giving them to gather.
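A minimal sketch of that idea for the batched case (my own example, not from the thread); the indices are expanded over the feature dimension before the gather call:

import torch

B, N, M, F = 2, 4, 3, 10
adj = torch.rand(B, N, M)
features = torch.randn(B, N, M, F)

idx = adj.max(dim=2, keepdim=True)[1]        # B x N x 1
idx = idx.unsqueeze(-1).expand(B, N, 1, F)   # B x N x 1 x F
result = features.gather(2, idx).squeeze(2)  # B x N x F
print(result.shape)  # torch.Size([2, 4, 10])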
st47700
Thanks, it appears tensor.gather(...) is the right tool. Unfortunately, I still couldn't get it to work. When I add an extra dimension at the end of adj and use keepdim=True, features.gather(1, m) only returns a tensor of size 4 x 1 x 1, while I need the whole 4 x 10 x 1 tensor. So gather somewhat ignores the last dimension of size 10 of features; it only picks the first element of that dimension.

Update: I got it to work by using m.repeat(1,1,10) and using that as the index tensor for the gather call. This still doesn't seem like the most elegant solution to me. Now I've got to figure out how to do this with batches.

Update 2: Batching is no problem, it's literally the same with an additional dimension. I'm still interested in whether there is a better way than calling repeat.
st47701
Hello, I want to test the code at https://github.com/gathierry/FashionAI-KeyPointsDetectionOfApparel but the training and test database is no longer available. Has any of you downloaded it before, so you could send it to me, please? Thank you very much. Best regards
st47702
Hi all, I have an inquiry about how to realize os.path.join in C++ (libtorch). In Python, it is:

torch.load(os.path.join(arg.savedir, 'd.pkl'))

Many thanks.
st47703
os is a Python lib and thus not available in C++. You could use e.g. std::filesystem in a similar way to pathlib:

#include <iostream>
#include <filesystem>
namespace fs = std::filesystem;

int main()
{
    fs::path dir ("/tmp");
    fs::path file ("foo.txt");
    fs::path full_path = dir / file;
    std::cout << full_path << std::endl;
    return 0;
}

Code taken from here.
st47704
One more inquiry: if in PyTorch it is model = torch.load(fullpath), in C++ shall I use torch::load(model, full_path), as I cannot use model = torch::load(full_path)? Also, how shall I convert os.makedir(path) to C++? Thanks a lot.
st47705
auto module = torch::jit::load(path); might work. mkdir is also a C++ function, as described here.
st47706
But actually it says "no matching function for 'mkdir'". What shall I include in the header file in order to find the mkdir function, please? Thanks
st47707
The link gives you full examples for Windows and Linux systems. On Linux mkdir should be defined in <sys/stat.h>.
st47708
Thanks. It seems that even if I include <sys/stat.h>, it still says there is no matching function call to "mkdir". I am on Linux, using libtorch.
st47709
Chen0729: "It seems that even if I include <sys/stat.h>, it still says no matching function call to 'mkdir'. I am on Linux, using libtorch."

This is the right header though. See an example here: https://coliru.stacked-crooked.com/a/ace3992a9a0d474e
The related man page: linux.die.net, mkdir(3): make directory - Linux man page
st47710
I’m encountering a strange error during backprop when using GE2E loss (contrastive), which also happens to occur randomly, and the probability seems to increase with number of speakers passed, and decrease with number of utterances passed… I’m 99.99% sure it’s the loss, because: a) the parameters passed to it affect it as stated above b) I’ve trained identical setup for a long time with no such error popping up when using my custom replacement loss instead of GE2E Any help on how to go about figuring this out would be greatly appreciated - this is the most cryptic thing I’ve ever encountered… I’m using: -> pytorch 1.6.0 -> python 3.6.9 Custom loss: def helper_loss(self, data): num_spk, num_utt, num_fea = data.size() labels = torch.unsqueeze(torch.unsqueeze(torch.eye(num_spk).to(self.device), 0), 0) upsample = nn.Upsample(scale_factor=num_utt, mode='nearest') labels = torch.squeeze(upsample(labels)) features = data.reshape((num_spk * num_utt, num_fea)) results = torch.mm(features, features.T) return self.mse(labels, results) GE2E code: import torch import torch.nn as nn import torch.nn.functional as F class GE2ELoss(nn.Module): def __init__(self, init_w=10.0, init_b=-5.0, loss_method='softmax'): ''' Implementation of the Generalized End-to-End loss defined in https://arxiv.org/abs/1710.10467 [1] Accepts an input of size (N, M, D) where N is the number of speakers in the batch, M is the number of utterances per speaker, and D is the dimensionality of the embedding vector (e.g. d-vector) Args: - init_w (float): defines the initial value of w in Equation (5) of [1] - init_b (float): definies the initial value of b in Equation (5) of [1] ''' super(GE2ELoss, self).__init__() self.w = nn.Parameter(torch.tensor(init_w)) self.b = nn.Parameter(torch.tensor(init_b)) self.loss_method = loss_method assert self.loss_method in ['softmax', 'contrast'] if self.loss_method == 'softmax': self.embed_loss = self.embed_loss_softmax if self.loss_method == 'contrast': self.embed_loss = self.embed_loss_contrast def calc_new_centroids(self, dvecs, centroids, spkr, utt): ''' Calculates the new centroids excluding the reference utterance ''' excl = torch.cat((dvecs[spkr,:utt], dvecs[spkr,utt+1:])) excl = torch.mean(excl, 0) new_centroids = [] for i, centroid in enumerate(centroids): if i == spkr: new_centroids.append(excl) else: new_centroids.append(centroid) return torch.stack(new_centroids) def calc_cosine_sim(self, dvecs, centroids): ''' Make the cosine similarity matrix with dims (N,M,N) ''' cos_sim_matrix = [] for spkr_idx, speaker in enumerate(dvecs): cs_row = [] for utt_idx, utterance in enumerate(speaker): new_centroids = self.calc_new_centroids(dvecs, centroids, spkr_idx, utt_idx) # vector based cosine similarity for speed cs_row.append(torch.clamp(torch.mm(utterance.unsqueeze(1).transpose(0,1), new_centroids.transpose(0,1)) / (torch.norm(utterance) * torch.norm(new_centroids, dim=1)), 1e-6)) cs_row = torch.cat(cs_row, dim=0) cos_sim_matrix.append(cs_row) return torch.stack(cos_sim_matrix) def embed_loss_softmax(self, dvecs, cos_sim_matrix): ''' Calculates the loss on each embedding $L(e_{ji})$ by taking softmax ''' N, M, _ = dvecs.shape L = [] for j in range(N): L_row = [] for i in range(M): L_row.append(-F.log_softmax(cos_sim_matrix[j,i], 0)[j]) L_row = torch.stack(L_row) L.append(L_row) return torch.stack(L) def embed_loss_contrast(self, dvecs, cos_sim_matrix): ''' Calculates the loss on each embedding $L(e_{ji})$ by contrast loss with closest centroid ''' N, M, _ = dvecs.shape L = [] for j in range(N): 
L_row = [] for i in range(M): centroids_sigmoids = torch.sigmoid(cos_sim_matrix[j,i]) excl_centroids_sigmoids = torch.cat((centroids_sigmoids[:j], centroids_sigmoids[j+1:])) L_row.append(1. - torch.sigmoid(cos_sim_matrix[j,i,j]) + torch.max(excl_centroids_sigmoids)) L_row = torch.stack(L_row) L.append(L_row) return torch.stack(L) def forward(self, dvecs): ''' Calculates the GE2E loss for an input of dimensions (num_speakers, num_utts_per_speaker, dvec_feats) ''' #Calculate centroids centroids = torch.mean(dvecs, 1) #Calculate the cosine similarity matrix cos_sim_matrix = self.calc_cosine_sim(dvecs, centroids) torch.clamp(self.w, 1e-6) cos_sim_matrix = cos_sim_matrix * self.w + self.b L = self.embed_loss(dvecs, cos_sim_matrix) return L.mean() Stack trace: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-4-d2aadca10c7e> in <module> 33 34 for step in range((120000 * num_epochs) + 1): ---> 35 trainer.train_step(step) 36 37 # output = np.zeros((0, 512)).astype(np.float32) # 512 /jupyter_lab/ge2e_efnet_test/trainers/features_distance.py in train_step(self, step) 103 # print('GE2E loss: {}'.format(self.custom_loss(labels_reshaped).item())) 104 --> 105 self.each_student_end(i, loss) 106 107 self.base_iteration_end(step) /jupyter_lab/ge2e_efnet_test/trainers/base.py in each_student_end(self, index, loss) 35 def each_student_end(self, index, loss): 36 self.losses.append(loss.item()) ---> 37 loss.backward(retain_graph=True) 38 torch.nn.utils.clip_grad_norm_(self.students[index].parameters(), 1.0) 39 self.optimizers[index].step() /usr/local/lib/python3.6/dist-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph) 183 products. Defaults to ``False``. 
184 """ --> 185 torch.autograd.backward(self, gradient, retain_graph, create_graph) 186 187 def register_hook(self, hook): /usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables) 125 Variable._execution_engine.run_backward( 126 tensors, grad_tensors, retain_graph, create_graph, --> 127 allow_unreachable=True) # allow_unreachable flag 128 129 RuntimeError: select(): index 0 out of range for tensor of size [0, 1] at dimension 0 Exception raised from select at /pytorch/aten/src/ATen/native/TensorShape.cpp:889 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7f3c829aa1e2 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so) frame #1: at::native::select(at::Tensor const&, long, long) + 0x347 (0x7f3cbe8b6b97 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so) frame #2: <unknown function> + 0x1288329 (0x7f3cbec9b329 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so) frame #3: <unknown function> + 0x127b623 (0x7f3cbec8e623 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so) frame #4: at::select(at::Tensor const&, long, long) + 0xe0 (0x7f3cbebc0c90 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so) frame #5: <unknown function> + 0x2e06d26 (0x7f3cc0819d26 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so) frame #6: <unknown function> + 0x127b623 (0x7f3cbec8e623 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so) frame #7: at::Tensor::select(long, long) const + 0xe0 (0x7f3cbed4bde0 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so) frame #8: <unknown function> + 0x2d1223d (0x7f3cc072523d in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so) frame #9: torch::autograd::generated::MaxBackward1::apply(std::vector<at::Tensor, std::allocator<at::Tensor> >&&) + 0x188 (0x7f3cc073ec78 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so) frame #10: <unknown function> + 0x3375bb7 (0x7f3cc0d88bb7 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so) frame #11: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptr<torch::autograd::ReadyQueue> const&) + 0x1400 (0x7f3cc0d84400 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so) frame #12: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&) + 0x451 (0x7f3cc0d84fa1 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so) frame #13: torch::autograd::Engine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x89 (0x7f3cc0d7d119 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so) frame #14: torch::autograd::python::PythonEngine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x4a (0x7f3cce51d4ba in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so) frame #15: <unknown function> + 0xbd66f (0x7f3d0e2bc66f in /usr/lib/x86_64-linux-gnu/libstdc++.so.6) frame #16: <unknown function> + 0x76db (0x7f3d13cdc6db in /lib/x86_64-linux-gnu/libpthread.so.0) frame #17: clone + 0x3f (0x7f3d1401588f in /lib/x86_64-linux-gnu/libc.so.6)
st47711
Solved by belzebubukas in post #2.
st47712
I think I sorted it - the model I was using had a dropout layer, which would occasionally make it so that all the utterances of a single speaker would get dropped, causing the error. Sorry I didn’t provide the model, which ended up being the cause (well, when combined with GE2E). Hope this helps someone!
st47713
Hello everyone, sorry for the rookie question; I'm starting to learn PyTorch. This is my simple 1-layer linear classifier:

class Classifier(nn.Module):
    def __init__(self, in_dim):
        super(Classifier, self).__init__()
        self.classify = nn.Linear(in_dim, 1)

    def forward(self, features):
        final = torch.sigmoid(self.classify(features))
        return final

I want the output to be a probability, so ~1 means class 1 and ~0 means class 0, but I don't know which loss function to use and how to calculate the accuracy in each epoch when I'm using batching. This is my current training loop, but the loss is not correct; I feel like I need to change the code because this code is written for multi-class classification, not single-output classification:

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)

net.train()
running_loss = 0
total_iters = len(trainloader)
for pos, (train_samples, labels) in zip(bar, trainloader):
    outputs = net(train_samples)
    loss = criterion(outputs, labels.float())
    running_loss += loss.item()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
running_loss = running_loss / total_iters
return running_loss
st47714
Solved by KFrank in post #3.
st47715
For the loss function, switch out CrossEntropyLoss for BCELoss. I usually like to write a separate function that computes the accuracy (over the whole set) and use that within my training loop. The function takes in as arguments the model and the train dataloader.
st47716
Hi Richard! Richard_S: class Classifier(nn.Module): def __init__(self, in_dim ): super(Classifier, self).__init__() self.classify = nn.Linear(in_dim , 1 ) def forward(self, features ): final = torch.sigmoid ( self.classify(features) ) return final I want the output to be probability, so ~1 means class 1 and ~0 means class 0 criterion = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=0.001) scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5) As Prashanth notes, you could use BCELoss in place of CrossEntropyLoss. However, you’ll be better off removing the torch.sigmoid() and using BCEWithLogitsLoss. Doing so will be mathematically the same, but numerically more stable. Thus: class Classifier(nn.Module): def __init__(self, in_dim ): super(Classifier, self).__init__() self.classify = nn.Linear(in_dim , 1 ) def forward(self, features ): final = self.classify(features) return final and: criterion = nn.BCEWithLogitsLoss() Your Classifier will now output raw-score logits that range from -inf to inf instead of probabilities. Should you need probabilities for subsequent processing, you can always pass the logits through sigmoid(). Note, you don’t need probabilities to make hard 0-1 predictions: prediction = 1 if logit > 0.0 is the same as prediction = 1 if probability > 0.5. Two side comments: As written, you never call scheduler.step() so scheduler doesn’t do anything. For getting started with the code, one Linear layer is fine, but it won’t be much of a classifier for anything but special toy problems. Leaving aside the sigmoid(), your single output is just a linear function of your in_dim inputs. Things already get much more interesting (and useful) if you add a single “hidden” layer: class Classifier (nn.Module): def __init__ (self, in_dim, hidden_dim): super (Classifier, self).__init__() self.fc1 = nn.Linear (in_dim, hidden_dim) self.activation = nn.ReLU() # for example self.fc2 = nn.Linear (hidden_dim, 1) def forward (self, features): x = self.fc1 (features) x = self.activation (x) x = self.fc2 (x) return x For more interesting classification tasks, the non-linear activation (for example, ReLU) between fc1 and fc2 is the “secret sauce.” Best. K. Frank
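Since the original question also asked how to compute accuracy with batching, here is a minimal sketch working on the raw logits (my own example; net and loader are placeholders):

import torch

@torch.no_grad()
def binary_accuracy(net, loader):
    net.eval()
    correct, total = 0, 0
    for samples, labels in loader:
        logits = net(samples)                    # raw scores from the model
        preds = (logits > 0.0).float().view(-1)  # logit > 0 is the same as probability > 0.5
        correct += (preds == labels.float().view(-1)).sum().item()
        total += labels.numel()
    return correct / total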
st47717
If I have two tensors A and B, with A of shape [1,128,4,6,6] and B of shape [1,96,4,6,6], how do I make the dimensions of A the same as B? I don't know if we can use a convolution operation for this.
st47718
Do the A and B you specified in the question represent the shapes of the actual A and B, or are they the actual A and B?
st47719
That's a lot of dimensions you've got there 😑 Can you tell me the context of your data, like is it a 3D CAD model or something else? Also, what does each dimension represent?
st47720
I am running the model (images all 320*480) using part of the VOC dataset (different heights and widths). The resize part seems not to work; how should I edit it to make it work?

[three image attachments showing the code and error]
st47721
Solved by anujd9 in post #3.
st47722
Is it possible to post your entire code? Because I don't think the error is in the block of code you posted; your resize code works just fine for me. What exactly are you feeding into your network? Are you feeding both the mask and the image?
st47723
Hi. I think this error is due to different sized images at input. Make sure to resize the images and the mask to a common size using the Resize() transform before training the model.
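A minimal sketch of resizing the image and the mask to a common size with torchvision (my own example; nearest-neighbor interpolation is used for the mask so class labels are not blended):

from PIL import Image
from torchvision import transforms

resize_img = transforms.Resize((320, 480))
resize_mask = transforms.Resize((320, 480), interpolation=Image.NEAREST)

img = Image.new("RGB", (500, 375))   # placeholder image
mask = Image.new("L", (500, 375))    # placeholder mask

img = resize_img(img)
mask = resize_mask(mask)
print(img.size, mask.size)  # (480, 320) (480, 320)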
st47724
Hi, I've written a NN where the loss function consists of gradients of the output with respect to the inputs. The code works fine, but I want to inspect how the gradients flow when I'm updating the NN weights with respect to the loss function. I can't figure out how the computation graph is laid out here. I hope someone can help me understand how exactly the gradients are computed. Here's my code:

dtype = torch.float
device = torch.device("cuda:0")  # Uncomment this to run on GPU

x = torch.tensor([[1., 1.]], device=device, dtype=dtype, requires_grad=True)
w1 = torch.tensor([[ 0.6561,  0.6202, -1.5620],
                   [ 1.1547, -1.3227,  0.4719]], device=device, dtype=dtype, requires_grad=True)
w2 = torch.tensor([[-0.4917],
                   [-0.3886],
                   [-1.4218]], device=device, dtype=dtype, requires_grad=True)

learning_rate = 0.01

z1 = x.mm(w1)
a1 = torch.sin(z1)
z2 = a1.mm(w2)
p = torch.sin(z2)

grads, = torch.autograd.grad(p, x, grad_outputs=p.data.new(p.shape).fill_(1),
                             create_graph=True, only_inputs=True)
dpdx, dpdt = grads[:,0], grads[:,1]
pde = dpdt + dpdx
loss = pde.pow(2).mean()
loss.backward()

with torch.no_grad():
    w1 -= learning_rate * w1.grad
    w2 -= learning_rate * w2.grad

    # Manually zero the gradients after updating weights
    w1.grad.zero_()
    w2.grad.zero_()

I just can't understand how I get w1.grad and w2.grad (i.e. what are the connections in the computation graph) when I run loss.backward().
st47725
Hi, I'm trying to train a NN to rank objects. The NN takes as input one object at a time and outputs its score. For the loss, I'm trying to implement a function of several outputted scores. However, I'm getting stuck on the loss function, because I can't do a backward pass: the resulting loss doesn't have a gradient because it's composed of several outputs of the NN. How should I go about implementing this?
st47726
I need a generalizable solution to the following problem: a neural network has multiple inputs, for example some sort of image (A) which I want to use some convolution layers on etc., and some numerical values (B). (A) and (B) should at some point feed into the same layer(s) (C) and eventually produce a result. Graphically it might look like this:

A1 -> A2 -> A3 \
                C1 -> C2 -> C3
B1 -> B2 -> B3 /

In Keras this would be solved by creating a model out of the (A) layers, creating a second model out of the (B) layers, and calling keras.layers.concatenate on them. Is something similar possible by stacking torch's Sequential models and if so, how? The examples I've seen use classes derived from nn.Module, which doesn't fit my needs unfortunately.
st47727
Solved by ptrblck in post #2.
st47728
Tornac: "The examples I've seen use classes derived from nn.Module which doesn't fit my needs unfortunately."

I'm unsure why this wouldn't fit your need. Could you explain your concerns a bit?
The general approach for your use case would be:

- define modelA, modelB, and modelC
- register all models in a parentModel
- pass two inputs to parentModel.forward, call each branch using the corresponding data, concatenate the activations, and feed it to modelC

Here is an example:

class ParentModel(nn.Module):
    def __init__(self, modelA, modelB, modelC):
        super(ParentModel, self).__init__()
        self.modelA = modelA
        self.modelB = modelB
        self.modelC = modelC

    def forward(self, x1, x2):
        x1 = self.modelA(x1)
        x2 = self.modelB(x2)
        x = torch.cat((x1, x2), dim=1)
        x = self.modelC(x)
        return x

modelA = nn.Linear(10, 10)
modelB = nn.Linear(10, 10)
modelC = nn.Linear(20, 10)
parent = ParentModel(modelA, modelB, modelC)

x1 = torch.randn(1, 10)
x2 = torch.randn(1, 10)
out = parent(x1, x2)
print(out.shape)
> torch.Size([1, 10])

Instead of the nn.Linear modules for modelX you can also use nn.Sequential if you want.
st47729
Executing some PyTorch code, I get this message on stdout: "Time is running Backwards!". What does it mean? It doesn't seem to affect the program execution at all.
st47730
That’s an interesting message you are seeing. Could you post more information which part of the code is creating this output?
st47731
Hi all, I just want to be sure where I should use the .zero_grad() function. In the official MNIST example, the .zero_grad() function is used at the beginning of the training loop:

def train(args, model, device, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % args.log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))
            if args.dry_run:
                break

from here: https://github.com/pytorch/examples/blob/a74badde33f924c2ce5391141b86c40483150d5a/mnist/main.py#L37

Also, in the official tutorials, the zero_grad() function is used right before the backward() function:

for t in range(500):
    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(x)

    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the
    # loss.
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Tensors with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()

In my understanding, the first way makes sense, because we want to have gradients only for the current batch…
st47732
Solved by ptrblck in post #2.
st47733
Both approaches are valid for the standard use case, i.e. if you do not want to accumulate gradients for multiple iterations. You can thus call optimizer.zero_grad() everywhere in the loop but not between the loss.backward() and optimizer.step() operation.
st47734
I am trying to do a seq2seq prediction. For this, I have an LSTM layer followed by a fully connected layer. I employ teacher training during the training phase and would like to skip it (I may be wrong here) during the testing phase. I have not found a direct way of doing this, so I have taken the approach shown below.

def forward(self, inputs, future=0, teacher_force_ratio=0.2, target=None):
    outputs = []
    for idx in range(future):
        rnn_out, _ = self.rnn(inputs)
        output = self.fc1(rnn_out)
        if self.teacher_training:
            new_input = output if np.random.random() >= teacher_force_ratio else target[idx]
        else:
            new_input = output
        inputs = new_input

I use a bool variable teacher_training to check if teacher training is needed or not. Is this correct? If it is, is there a better way of doing it? Thanks.
st47735
Solved by ptrblck in post #2.
st47736
Your approach is fine, if you want to set the teacher_training argument independently. Alternatively, you could also use the internal self.training flag, which will be changed by calling model.train() or model.eval().
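A minimal sketch of the self.training alternative (my own toy module, not the original model):

import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(8, 16, batch_first=True)
        self.fc1 = nn.Linear(16, 8)

    def forward(self, inputs, future=0, teacher_force_ratio=0.2, target=None):
        outputs = []
        for idx in range(future):
            rnn_out, _ = self.rnn(inputs)
            output = self.fc1(rnn_out)
            # self.training is toggled by model.train() / model.eval()
            if self.training and target is not None and torch.rand(1).item() < teacher_force_ratio:
                inputs = target[idx]
            else:
                inputs = output
            outputs.append(output)
        return outputs

model = Seq2Seq()
model.train()  # teacher forcing may be used
model.eval()   # always feeds back its own predictions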
st47737
Hello, I am wondering if there is already some method to extract whether a neuron is activated or not. Something like this:

if neuron_value >= 0:
    neuron_value = 1
else:
    neuron_value = 0

but for all neurons in the network. The final output of one layer (linear or convolutional) in this case would be e.g. [0 1 0 1 1 1 1 1 1] or [[0 1 0 0 1 0], [0 1 0 1 0 0], [0 1 0 1 0 0], [0 1 1 1 0 0]] … I now know how to get activation values (from this post: How can I load my best model as a feature extractor/evaluator?), but now I just want to track whether each neuron is on or off in the network. It can be calculated manually as well (with if and else in every layer), but in the case of convolutional and dense neural networks this requires a lot of programming and handling of tensor shapes and dimensions (especially in convolutional layers), so I was wondering if there is some other solution that calculates it for all layers at once for a specific input. Thank you.
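The thread does not show an answer, but a common approach is a forward hook on each layer that thresholds its output; a minimal sketch (my own example, not from the thread):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 6), nn.ReLU(), nn.Linear(6, 3))
activation_masks = {}

def make_hook(name):
    def hook(module, inp, out):
        # 1 where the neuron's output is positive, 0 otherwise
        activation_masks[name] = (out > 0).int()
    return hook

for name, module in model.named_modules():
    if isinstance(module, (nn.Linear, nn.Conv2d)):
        module.register_forward_hook(make_hook(name))

model(torch.randn(1, 4))
for name, mask in activation_masks.items():
    print(name, mask)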
st47738
Hello, I just started PyTorch, and I cannot understand the meaning of nn.Linear(input_features, output_features, bias=True). From my point of view, linear regression is supposed to be determined by its weight and bias; what do input_features and output_features mean?
st47739
OK, let me explain this.

The input features simply mean how many features of a single data point are going to be fed into the network, the output features mean the size of the output, and the bias is simply a matrix of values that will be added to the product of your input and weights.

E.g.: say you have training data with shape (N, F), where N is the number of data points and F is the number of features per data point. Let's select a random data point from the training data; its shape will be (1, F). Let's assume that F = 5, so there are 5 features per data point. In this case, if your network is a single dense/linear layer network, the input feature is 5. The output feature in this case will be the number of points per target label (continuous or categorical), simply because it's a single layer network. So now let's assume that the points per target in your training data is 1; therefore your output feature will be 1. Your linear layer will look something like this: nn.Linear(5, 1).

Now that these numbers have been specified, the shape of your weights will be (5, 1) and the size of your bias will be (1, 1), because the number of output neurons/features is 1.

So let's say your weights are denoted by 'w', bias by 'b', input by 'x', output by 'y_pred', and the target per data point by 'y_actual':

y_pred = xw + b ≠ wx + b (because of matrix multiplication rules)

Now the shape of y_pred is always equal to the shape of y_actual so that the loss can be computed fine. Also, from what I initially said, there is 1 point per target, so the target data has shape (N, 1); therefore, the target for a single data point (y_actual) should be of shape (1, 1).

Now, if you remember matrix multiplication, let's check if the shapes are compatible:

y_pred = (1, 5) x (5, 1) + (1, 1)

The shape of y_pred is (1, 1) for a single data point, which is correct because it's also the same as that of y_actual.

Now these weights and biases are initialized randomly and behind the scenes, so you can't really see them unless you print them out. You can also initialize the weights however you please, but just be careful not to make them too large and with the same values in each cell (this can cause exploding gradients). Sometimes you would want to set the bias to False simply because you don't need it, e.g. when you are using batch normalization after a layer.

I know I talked too much, but hopefully after this you'll understand this in full detail. Cheers 🙃
st47740
So, in summary of it all, the input and output feature parameters are used to instantiate the shapes of the weight and bias.
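A tiny sketch illustrating the shapes described above (my own example; note that PyTorch stores the weight as (out_features, in_features), i.e. the transpose of the w used in the explanation):

import torch
import torch.nn as nn

layer = nn.Linear(5, 1)    # 5 input features, 1 output feature
print(layer.weight.shape)  # torch.Size([1, 5])
print(layer.bias.shape)    # torch.Size([1])

x = torch.randn(1, 5)      # one data point with 5 features
y_pred = layer(x)
print(y_pred.shape)        # torch.Size([1, 1])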
st47741
Hi, my test code is like this:

import torch

conv = torch.nn.Conv2d(3, 32, 3, 1, 1)
bn = torch.nn.BatchNorm2d(32)
conv.eval()
bn.eval()
fused = torch.nn.utils.fuse_conv_bn_eval(conv, bn)

inten = torch.randn(2, 3, 224, 224)
out1 = bn(conv(inten))
out2 = fused(inten)
print((out1 - out2).abs().sum())

The output is tensor(0.1754, grad_fn=<SumBackward0>). Why is there such a large difference between the separate computation and the fused computation, and how could I make the fused version work like the separate version?
st47742
If you check the (out1 - out2).abs().max() value you’ll see that it’s in the range ~1e-6, which is most likely created due to the limited floating point precision. Summing these small absolute errors might result in the larger difference.
st47743
Thanks for telling me that!! I tried to replace one conv-bn pair with the fused conv in a trained model and tested the model on a test set. The accuracy drops almost 1%. It seems that the floating point precision error is not negligible. How could I fuse the conv-bn pair without losing accuracy, please?
st47744
If you think these changes are causing a drop in model performance, you could fine-tune the model with the fused layers for some iterations and check if this would recover the performance.
st47745
Thanks!! By the way, are there any suggestions about fine-tuning like this? I plan to fine-tune for 5 epochs, with the lr value and its annealing strategy unchanged (I used a cosine lr to train my model before fusing). Shall I use a smaller lr or change the lr annealing curve?
st47746
Unfortunately I don’t know what the best approach would be. If possible, I would try to add the fused layers from the beginning, but if your model is already trained and you would like to fuse these layers now, you would have to try out different approaches.
st47747
Hi, while tracking down my gradient functions, I came across the UnsafeViewBackward label. As my model fails to learn, I was wondering whether that could be related to it. More generally speaking, I haven't found much information about it; what does this "Unsafe" account for? Thanks
st47748
I also realized I have a variable with grad_fn=UnsafeViewBackward after using torchviz. I think that this is related to the function _unsafe_view, which is defined in https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/TensorShape.cpp. There is a comment just before the definition that might be helpful. In my case this happened to the variable x_hat, which is created using the following line:

x_hat = F.linear(Z.view(batch_size,1,-1), self.D)

I found that _unsafe_view() is called inside matmul(), which is in turn called by F.linear, F being just torch.nn.functional. See here: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/LinearAlgebra.cpp
st47749
From the comment in the code:

// _unsafe_view() differs from view() in that the returned tensor isn't treated
// as a view for the purposes of automatic differentiation. (It's not listed in
// VIEW_FUNCTIONS in gen_autograd.py). It's only safe to use if the `self` tensor
// is temporary. For example, the viewed tensor here (a + b) is discarded immediately
// after viewing:
//
//  res = at::_unsafe_view(a + b, size);
//
// This is a hack because in-place operations on tensors treated like views
// can be much more expensive than the same operations on non-view tensors.

Used e.g. in matmul.
st47750
Hi @ptrblck, I just created a very simple snippet like the one below:

p = torch.ones((3,4), requires_grad=True)
p1 = p.permute(1,0).flatten()
p2 = p.transpose(1,0).flatten()
p3 = p.T.reshape(-1)
p4 = torch.empty(12)
for i in range(4):
    for j in range(3):
        p4[i*3 + j] = p[j,i]
print(p1, p2, p3, p4)

output:
tensor([1., 1., ... 1.], grad_fn=<UnsafeViewBackward>)
tensor([1., 1., ... 1.], grad_fn=<UnsafeViewBackward>)
tensor([1., 1., ... 1.], grad_fn=<UnsafeViewBackward>)
tensor([1., 1., ... 1.], grad_fn=<CopySlices>)

It seems that if we change the shape of the original tensor and then apply flatten, the grad_fn will become UnsafeViewBackward. So if I am going to use p1~p3 as a convolution weight and apply F.conv2d to a tensor x, which p should I use? F.conv2d(x, p1), F.conv2d(x, p2), F.conv2d(x, p3), or F.conv2d(x, p4)?
st47751
You as a user shouldn’t see any difference, so you could use any pX tensor for your operation. Anyway, let us know, if you see any unexpected behavior or errors. Of course the shape doesn’t fit the expected weight shape for F.conv2d, but that is another issue.
st47752
Hi, I receive an error while using this function for reshaping my tensor:

def reshape_fortran(x, shape):
    if len(x.shape) > 0:
        x = x.permute(*reversed(range(len(x.shape))))
    return x.reshape(*reversed(shape)).permute(*reversed(range(len(shape))))

and it says:

RuntimeError: _unsafe_view does not support automatic differentiation for outputs with complex dtype.

It is weird because sometimes I receive it and sometimes it works fine! What is this unsafe_view?
st47753
Could you post the type and shape of x as well as the input argument shape, so that we could debug this issue? This error points towards the complex types so are you using this tensor type or might the error message be misleading?
st47754
Thanks for your response. Yes, I am working with complex data and I use this function to implement Fortran-order reshape for tensors (there is no built-in function for Fortran reshape of tensors). The input is a (4,4,2) complex64-valued matrix and the shape value is (8,4). I was able to fix the problem by using this function instead, but I have no idea why it was not working and why it is working now:

def convert_output84_torch(input):
    shape84 = (8,4)
    T1 = torch.transpose(input,0,2).permute(0,1,2).contiguous()
    T2 = T1.view(shape84[::-1])
    output = torch.transpose(T2,0,1)
    return output

I replaced reshape with view, and it works fine (for now). But as my code was working fine before, I am not sure if it is a long-term solution, as I am not aware of the root cause.
st47755
Thanks for the information. I can reproduce this error in PyTorch 1.7.0 and get a different error message in 1.8.0.dev20201022:

def reshape_fortran(x, shape):
    if len(x.shape) > 0:
        x = x.permute(*reversed(range(len(x.shape))))
    return x.reshape(*reversed(shape)).permute(*reversed(range(len(shape))))

x = torch.randn(4, 4, 2, dtype=torch.complex64, requires_grad=True)
shape = (8, 4)

out = reshape_fortran(x, shape)
out.mean().backward()
> RuntimeError: mean does not support automatic differentiation for outputs with complex dtype.

Note that complex support is not fully implemented yet, so I would recommend to verify the results of your second approach using some reference values.
st47756
I have the following model:

model = torch.nn.Sequential(
    torch.nn.Conv3d(1, 128, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv3d(128, 128, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv3d(128, 128, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv3d(128, 128, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv3d(128, 128, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv3d(128, 1, kernel_size=3, padding=1),
    torch.nn.ReLU()
).to(device)

for which I use batches of size N x 160 x 256 x 256. I was running into the RuntimeError: CUDA out of memory error, so I reduced the batch size to 2, which resulted in this error instead:

RuntimeError: cuDNN error: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input.

A quick search on Google suggested that my batch size was still too large, so I reduced it to 1, which surprisingly gave me the RuntimeError: CUDA out of memory error again. How can I resolve this issue? Thank you.
st47757
Solved by ptrblck in post #2.
st47758
The first two nn.Conv3d layers with an an input of [1, 1, 160, 256, 256] will already take ~20GB during the forward pass (with intermediate activation needed for the backward pass). The complete model will thus have a much higher memory usage and I assume your GPU doesn’t have enough memory.
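A rough back-of-the-envelope check of that estimate (my own arithmetic, assuming float32 activations stored for the backward pass):

# activations of one 128-channel feature map for an input of [1, 1, 160, 256, 256]
elements = 128 * 160 * 256 * 256   # 1,342,177,280 values
bytes_per_map = elements * 4       # float32: ~5 GiB per stored activation
# two convs plus their (non-inplace) ReLU outputs kept for the backward pass
total = 4 * bytes_per_map
print(total / 1024**3)             # ~20 GiB, matching the estimate above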
st47759
Hi, I would like to ask how I can read a list of lists and count the number of True values inside. For example:

tensor([[ True, False,  True, False,  True],
        [ True,  True,  True,  True,  True],
        [False, False, False,  True,  True]])

How do I iterate and count the number of True values inside the whole list?
st47760
Solved by ptrblck in post #2 You could use x.sum() to get the number of True values inside the tensor.
st47761
Thank you! It seems to work when I try it on a random list. Gonna try it on my actual function after it's done training! Thank you!
st47762
Sorry, what if I want to calculate the total number of elements in the entire list? This x.sum() surprisingly sums the total number of True values. Awesome!

Edit: Hi, I tried using len(), but it does not work like sum() does… Is there another way possible?
st47763
Maybe you could use x.numel() to get the number of values inside the tensor: torch.numel()
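A tiny sketch putting both suggestions together (my own example):

import torch

x = torch.tensor([[True, False, True, False, True],
                  [True, True, True, True, True],
                  [False, False, False, True, True]])

num_true = x.sum().item()  # number of True values -> 10
total = x.numel()          # total number of values -> 15
print(num_true, total)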
st47764
Hi everybody, is there a current API that behaves similarly to numpy, where we can repeat each row individually?

x = np.array([[1,2],[3,4]])
np.repeat(x, [1, 2], axis=0)
array([[1, 2],
       [3, 4],
       [3, 4]])

I didn't find any similar function. What would be the most efficient way of doing this while still backpropagating? Thank you in advance
st47765
search for torch.repeat in the following document https://pytorch.org/docs/stable/tensors.html
st47766
justusschock:
x = torch.tensor([[1,2],[3,4]])
x.repeat(1,2)

I don't obtain the same output:

In [9]: import numpy as np
In [10]: import torch
In [11]: x = torch.tensor([[1,2],[3,4]])
    ...: x.repeat(1,2)
Out[11]:
tensor([[1, 2, 1, 2],
        [3, 4, 3, 4]])
In [12]: x = np.array([[1,2],[3,4]])
In [13]: np.repeat(x, [1,2], axis=0)
Out[13]:
array([[1, 2],
       [3, 4],
       [3, 4]])
st47767
Could you provide an example that produces the same output as numpy?

In [9]: import numpy as np
In [10]: import torch
In [11]: x = torch.tensor([[1,2],[3,4]])
    ...: x.repeat(1,2)
Out[11]:
tensor([[1, 2, 1, 2],
        [3, 4, 3, 4]])
In [12]: x = np.array([[1,2],[3,4]])
In [13]: np.repeat(x, [1,2], axis=0)
Out[13]:
array([[1, 2],
       [3, 4],
       [3, 4]])
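The thread stops here, but for reference, torch.repeat_interleave accepts per-row repeat counts and should reproduce the numpy behaviour (a small sketch, not part of the original discussion):

import torch

x = torch.tensor([[1, 2], [3, 4]])
out = torch.repeat_interleave(x, torch.tensor([1, 2]), dim=0)
print(out)
# tensor([[1, 2],
#         [3, 4],
#         [3, 4]])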