st45768
paul_shuvo: but the CPU heats up and then shuts down. This sounds like a root cause and I’m not sure if this is still a PyTorch question. Could you make sure your CPU is properly cooled?
st45769
It only happens with PyTorch (I've tried several PyTorch implementations of human pose estimation techniques). I've been running other ML stuff (TF, scikit-learn); they run just fine. Currently, I'm trying to run AlphaPose. Same issue.
st45770
If you cannot provide proper cooling, you could try to artificially slow down the code e.g. by adding sleeps to the code or reducing the number of used threads (via torch.set_num_threads and torch.set_num_interop_threads).
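For reference, a minimal sketch of both suggestions (the data_loader and model names are placeholders, and the thread count and sleep duration are arbitrary):

import time
import torch

torch.set_num_threads(2)           # limit intra-op CPU parallelism
torch.set_num_interop_threads(2)   # limit inter-op CPU parallelism

for batch in data_loader:          # placeholder training/inference loop
    output = model(batch)
    time.sleep(0.05)               # artificial pause to let the CPU cool down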
st45771
I’m trying to do language modeling on a custom dataset using nn.TransformerEncoder. I’m using https://github.com/pytorch/examples/tree/master/word_language_model as a reference. Previously, I used Google’s Trax and its TransformerLM model to train a transformer with this dataset, based on this example: https://github.com/jalammar/jalammar.github.io/blob/master/notebooks/Trax_TransformerLM_Intro.ipynb 1. There, I reduced the Adam learning rate to 1e-04, replaced the data with my actual dataset, adjusted the hyperparameters for my use-case, and managed to get very good results. I’m now trying to replicate the same result in PyTorch, but without much luck. Here are the things I modified from the word_language_model example above: Changed the batching and dataset load logic. Since my dataset consists of separate sentences, my input data is of shape (num_examples, seq_len), and each element is a token index, with 0 reserved for padding. I generate inputs of shape (max_seq_len, batch_size). Calculated a padding mask that contains True wherever the original data has 0 (the padding token), passing it as src_key_padding_mask to nn.TransformerEncoder. Added an Adam optimizer instead of the previously inline LR annealing on plateau. Removed F.log_softmax, so now returning raw data from the last nn.Linear layer, and changed the loss to nn.CrossEntropyLoss(ignore_index=0). Added accuracy measurements by masking the final logits using the key padding mask, taking an argmax, and comparing to the actual output. My training plateaus within a few dozen batches to an accuracy of ~10% and a constant loss, and simply predicts the most common overall token, no matter the input (this gives an accuracy of ~10%, as that token is around 10% of the training data). I’ve tried using a learning rate scheduler, adding some warmup steps, played around with hyperparameters and the initial learning rate, tried changing the initialization of the embedding and final nn.Linear weights to nn.init.xavier_uniform_, but nothing helps. For comparison, I tried randomly shuffling my data around to make it nonsense, and the model arrives at similar numbers. So I’m pretty sure it’s learning nothing. Here’s an example of how my training data looks (right before it goes into the model): data = [[ 1, 1, 1, 1], [ 2, 2, 2, 2], [ 3, 107, 81, 81], [115, 4, 111, 46], [ 5, 5, 5, 5], [ 80, 41, 63, 61], # some zeroes here, starting at different rows, # depending on the size of the example ...]] target = [ 2, 3, 115, 5, 80, 42, ... 0, 0, 0, 0, 0, 0, ... 2, 107, 4, 5, 41, 96, ... 0, 0, 0, 0, 0, 0, ...]] (Almost every example starts with 1 2, so I imagined it’d be easy to fit on at least this, but not even that happens.) 
Here’s the model: # PositionalEncoding definition from word_language_model omitted # for brevity class Transformer(nn.Module): def __init__(self, n_tokens, d_model, n_heads, d_ff, n_layers, dropout=0.1, max_len=4096, activation='relu'): super(Transformer, self).__init__() self.mask = None self.d_model = d_model self.pos_encoder = PositionalEncoding(d_model, dropout=dropout) self.embedding = nn.Embedding(n_tokens, d_model, padding_idx=0) enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, d_ff, dropout) self.tf_encoder = nn.TransformerEncoder(enc_layer, n_layers) self.decoder = nn.Linear(d_model, n_tokens) self.init_weights() def _from_binary_mask(self, mask): return mask.float() \ .masked_fill(mask == False, float('-inf')) \ .masked_fill(mask == True, float(0.0)) def _generate_mask(self, sz): mask = (torch.triu(torch.ones(sz, sz)) == 1).transpose(0, 1) return self._from_binary_mask(mask) def init_weights(self): initrange = 0.1 nn.init.uniform_(self.embedding.weight, -initrange, initrange) nn.init.zeros_(self.decoder.weight) nn.init.uniform_(self.decoder.weight, -initrange, initrange) # Alternative I tried: # nn.init.xavier_uniform_(self.embedding.weight) # nn.init.xavier_uniform_(self.decoder.weight) # nn.init.normal_(self.decoder.bias, 1e-6) def forward(self, src, use_mask=True, src_key_padding_mask=None): if use_mask: device = src.device if self.mask is None or self.mask.size(0) != len(src): mask = self._generate_mask(len(src)).to(device) self.mask = mask else: self.mask = None # embed and add positional information src = self.embedding(src) * math.sqrt(self.d_model) src = self.pos_encoder(src) output = self.tf_encoder(src, self.mask, src_key_padding_mask=src_key_padding_mask) output = self.decoder(output) return output And here’s my training loop: model = model.Transformer(n_tokens, args.d_model, args.n_heads, args.d_ff, args.n_layers, args.dropout).to(device) optimizer = torch.optim.Adam(model.parameters(), weight_decay=1e-3) # init data, etc. def train(train_data): model.train() bs = args.batch_size total_loss = 0.0 total_correct = 0 total_total = 0 start_time = time.time() for batch_i in range(0, train_data.size(0) // bs): transposed_data, target = get_batch(train_data, batch_i) # (bs, seq_len), (bs*seq_len) data = transposed_data.transpose(0, 1).type(torch.LongTensor).to(device) # (seq_len, bs) target = target.type(torch.LongTensor).to(device) # (bs*seq_len) optimizer.zero_grad() key_padding_mask = calc_key_padding_mask(transposed_data, target) # (bs, seq_len), 1 if not padding output = model(data, src_key_padding_mask=torch.logical_not(key_padding_mask)) output = output.view(-1, n_tokens) # (bs*seq_len, n_tokens) logits = F.log_softmax(output, dim=-1) # (bs*seq_len, n_tokens) masked_logits = apply_mask_to_logits(logits, key_padding_mask) # (bs*seq_len, n_tokens), sets element to -inf if padding confidences, predictions = torch.max(masked_logits.exp(), 1) # (bs*seq_len), confidence will be 0 if padding total_correct += (torch.logical_and(predictions == target, confidences > 0.0)).float().sum() total_total += key_padding_mask.float().sum() e = loss(output, target) e.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), 0.25) optimizer.step() total_loss += e.item() if batch_i % args.log_interval == 0 and batch_i > 0: curr_loss = total_loss / args.log_interval / bs curr_accuracy = total_correct / total_total total_loss = 0. total_correct = 0 total_total = 0 # print the stats... Please help - I’m at a loss (no pun intended). I feel like I’ve tried everything.
st45772
I have a multi-class problem, the classes are all encoded 0-72. I have a preds tensor of [256, 72]. Passing it through probs = torch.nn.functional(input, dim = 1) results in a tensor with the same dimensionality, where probs[0] is a list of probabilities of each class being the correct prediction. I would like to analyse the predictions my model is making; how can I link the probabilities to specific classes?
st45773
Any suggestions? I cannot find anywhere how to link the probabilities to their respective classes.
st45774
Could you expand on the issue you’re having? I’m assuming the functional operation you’re doing is softmax (you don’t have this in your code snippet). Are you trying to get an output to see which class your model thinks is the answer for each sample?
st45775
Precisely! My bad for not inserting it, I can't edit the post anymore. I would like to analyse which classes it mistakes the most and then potentially try finding a solution for it. torch.nn.functional.softmax(preds, dim=1).detach().cpu().numpy() outputs a tensor of size (batch_size, num_classes). But I cannot decipher how these probabilities relate to each of the classes! It isn't as simple as getting argmax(), as often it is making wrong predictions. The classes are not passed explicitly anywhere in the model (pre-trained resnet50 with finetuning of the FC and last conv layers), and I just cannot connect each prob to its respective class.
st45776
Is CrossEntropyLoss what you're looking for? That's the go-to loss function for this type of problem. Or am I misinterpreting what you mean?
st45777
Not completely. I am using CrossEntropyLoss (will be trying out https://github.com/vandit15/Class-balanced-loss-pytorch/blob/master/class_balanced_loss.py 5 tomrrow) however the output of the CrossEntropyLoss is a single scalar. By applying softmax (which you shouldn’t do before CrossEntropyLoss as it applies logmax within) we get a distribution of probabilities of an image being any of the existing classes. Using that I can inspect which class is being predicted, and if it’s not the correct one, then how inaccurate is it (is the correct label second most-likely or is it not even considering it in the slightest). e.g: this is the output of softmax on the first batch of epoch 0 ['0.058431108', '0.039843343', '0.06623752', '0.045099676', '0.05649936', '0.06612508', '0.050207447', '0.08593371', '0.0660381', '0.043260027', '0.06524337', '0.043747015', '0.034064077', '0.06968668', '0.057958122', '0.06703147', '0.042366467', '0.04222748'] They all sum to 1 (as probabilities should) and they represent the probability of each class being the correct answer. But how do I link each of the probabilities to the class?? is prob[0] == class_1 or maybe class_10? It’s impossible to tell, and that is what I am looking for!
st45778
Actually, I just double checked it and they do not always sum to 1… Which is very very weird in my opinion. for i in range(7): print(sum([float(x) for x in data["epoch_0"]["probs"][0][i]])): 1.000000052 0.9999999680000001 1.000000026 1.000000014 1.000000041 0.9999999369999999 1.000000019 They are not far off from 1, nonetheless, they are not ==1. @ptrblck how come? Is this some error on my side or is it down to how python manages small numbers? (they’re not that small considering that python allows for up to 2.1e-308)
st45779
It's a one-to-one mapping. So assuming your output is batch_size x num_classes, prob[0][0] is the probability of your first sample being class 0, and so on. As for the values not summing up to 1.0, I believe this is more of a numerical error. Softmax is not very numerically stable due to the exponential operations. This is why CrossEntropyLoss uses log-softmax internally.
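As an illustration (assuming preds is the [256, 72] tensor from the question), the class index is simply the column index of the softmax output:

import torch
import torch.nn.functional as F

probs = F.softmax(preds, dim=1)            # (batch_size, num_classes)
# probs[i, c] is the probability assigned to the class encoded as c for sample i
top_prob, top_class = probs.max(dim=1)     # most likely class index per sample

# the k most likely classes, e.g. to check whether the true label is the runner-up
topk_prob, topk_class = probs.topk(k=3, dim=1)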
st45780
Would using logmax change that? Thank you so much! It’s kind of obvious that they are, but I just wasn’t certain and didn’t want to make some wrong assumptions! Assuming the labels weren’t encoded and I used strings, how would they be sequenced in that case?
st45781
Log-softmax is more of a mathematical trick, because we are performing an exponential operation followed by a log operation (cross-entropy loss uses negative log likelihood). So if we know a log operation is coming up (which is the inverse of an exponential operation), we can rewrite the math in a way that is more numerically stable and avoids performing those exponential operations. It's not a direct drop-in replacement for softmax; the output of log-softmax won't sum to 1. For your second question, do you mean, instead of having the labels [0, 1, 2], we have something like ['apple', 'orange', 'banana']? We would need to encode these prior to using cross-entropy loss anyway, so I'm not sure how you would even get it working with just pure strings.
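A small example of the numerical difference (illustrative values only):

import torch
import torch.nn.functional as F

logits = torch.tensor([[1000.0, 0.0, -1000.0]])

print(F.softmax(logits, dim=1))              # tensor([[1., 0., 0.]]) - saturated
print(torch.log(F.softmax(logits, dim=1)))   # tensor([[0., -inf, -inf]]) - log of 0 overflows
print(F.log_softmax(logits, dim=1))          # tensor([[0., -1000., -2000.]]) - stable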
st45782
As seen in the above two images, I'm getting extremely different reconstruction errors when using torch.svd() and the svd() function from MATLAB. What might be the reason? Is it solely because the ways in which the SVD is computed are different? Or are there other factors at play here? Is there any method by which I can reduce the error while using torch.svd()? Thanks in advance
st45783
Solved by ptrblck in post #2 MATLAB uses double precision by default, so you could also apply the same in PyTorch: W = torch.randn(1000, 1000, dtype=torch.float64) U, S, V = torch.svd(W) error = torch.norm(W - U @ torch.diag(S) @ V.t()) print(error) > tensor(3.3153e-12, dtype=torch.float64)
st45784
MATLAB uses double precision by default, so you could also apply the same in PyTorch: W = torch.randn(1000, 1000, dtype=torch.float64) U, S, V = torch.svd(W) error = torch.norm(W - U @ torch.diag(S) @ V.t()) print(error) > tensor(3.3153e-12, dtype=torch.float64)
st45785
what is the best way to multiply a transposed matrix D^T with matrix D in pytorch?
st45786
Solved by vaisakh_m in post #2 @John_Price Use any one of the methods given below: D @ D.t() torch.matmul(D,D.t()) torch.mm(D,D.t())
st45787
@John_Price Use any one of the methods given below: D @ D.t() torch.matmul(D,D.t()) torch.mm(D,D.t())
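One note, since the question as written asks for D^T multiplied with D: the snippets above compute D·D^T, so for D^T·D the transpose goes on the left, e.g.:

import torch

D = torch.randn(3, 5)

A = D @ D.t()      # D·D^T, shape (3, 3) - what the lines above compute
B = D.t() @ D      # D^T·D, shape (5, 5) - as stated in the question
# equivalently: torch.matmul(D.t(), D) or torch.mm(D.t(), D)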
st45788
I am trying to recreate the following array in pure torch. How can I achieve it using nothing but PyTorch? t_list= float('inf') * np.ones(np.append( 10, 3+ 1 + np.arange( 2)))
st45789
Solved by vaisakh_m in post #2 @Harry-Garrison I hope this works for you. t_list = float('inf') * torch.ones(torch.cat( [torch.tensor([10]), 3+1+torch.arange( 2)] ).tolist())
st45790
@Harry-Garrison I hope this works for you. t_list = float('inf') * torch.ones(torch.cat( [torch.tensor([10]), 3+1+torch.arange( 2)] ).tolist())
st45791
vaisakh_m: t_list = float('inf') * torch.ones(torch.cat( [torch.tensor([10]), 3+1+torch.arange( 2)] ).tolist()) Excellent, thank you!
st45792
Sorry for the stupid question, but i cannot find a fast way to solve my issue, so i thought maybe the experts here can help me with that or maybe pytorch has a function that already does this in a fast way. I have a Tensor with size of BxRxC: e.g. here T has dimension of 1x3x4 T = torch.round(torch.rand(1,3,4)*10) T = 6 8 10 8 2 4 7 2 5 0 4 1 Now i have another tensor (K) with way larger size, i know that tensor K includes values of each row of Tensor tensor T somewhere in it as well as other values, but i dont know where they are e.g. here K has dimension of 1x9x4 K = torch.cat((torch.round(torch.rand(1,3,4)*10),T, torch.zeros(1,3,4)),1) K = 5 7 8 1 8 2 7 8 0 10 8 8 6 8 10 8 2 4 7 2 5 0 4 1 0 0 0 0 0 0 0 0 0 0 0 0 as we can see K has the values of T in row: 1,4, and 5 in terms of size B and C will always be the same in both T and K. How I can get the row indexes in K that includes the values in T? Also if I have another tensor D and lets say I have the indexes for the rows from last steps, how I can extract only the values in the rows of tensor D based on the indexes that i got, meaning that if D is: D = torch.round(torch.rand(1,9,4)*10) D = 2 6 8 7 3 3 9 9 4 4 4 4 2 7 5 2 3 1 9 7 3 4 4 7 1 5 2 1 3 7 1 7 5 9 8 10 I want the output be O = 2 7 5 2 3 1 9 7 3 4 4 7 my output will be the same size as T, P.s. I just multiplied the number with 10 to make it easier for reading purposes, they are not integer all the time.
st45793
Get the difference between the common dimension of K and T: d = T.unsqueeze(2) - K.unsqueeze(1) This will be of size (1,3,9,4). Where the rows were identical, we would have 0,0,0,0. So sum together the last dimension: dsum = d.sum(-1) Now find out where dsum has zeros: loc = (dsum==0).nonzero() Since all 3 rows were found somewhere in K, this will have size (3,3); if only 2 rows were found this would have a shape (2,3). You are interested in the locations inside K, so you need loc[:,-1]. Assuming D is the same size as K, to take out the relevant rows you'd do: D[:,loc[:,-1],:]
st45794
thanks a lot, it just had one minor proble, we should use dsum = torch.abs(d).sum(-1) instead of d.sum(-1), because the sum of numbers might lead to zero, although they are not all zeros. see the following e.g. T = torch.round(torch.rand(1,3,4)*10) 4 3 6 3 1 0 5 8 1 10 4 8 K = torch.cat((torch.round(torch.rand(1,3,4)*10),T, torch.zeros(1,3,4)),1) 0 9 1 9 3 3 4 7 9 3 1 3 4 3 6 3 1 0 5 8 1 10 4 8 0 0 0 0 0 0 0 0 0 0 0 0 D = torch.round(torch.rand(1,9,4)*10) 3 6 7 2 4 0 5 9 4 2 5 10 4 4 9 2 2 0 2 6 2 1 4 0 1 4 8 3 4 3 8 0 2 3 9 10 d = T.unsqueeze(2) - K.unsqueeze(1) (0 ,0 ,.,.) = 4 -6 5 -6 1 0 2 -4 -5 0 5 0 0 0 0 0 3 3 1 -5 3 -7 2 -5 4 3 6 3 4 3 6 3 4 3 6 3 (0 ,1 ,.,.) = 1 -9 4 -1 -2 -3 1 1 -8 -3 4 5 -3 -3 -1 5 0 0 0 0 0 -10 1 0 1 0 5 8 1 0 5 8 1 0 5 8 (0 ,2 ,.,.) = 1 1 3 -1 -2 7 0 1 -8 7 3 5 -3 7 -2 5 0 10 -1 0 0 0 0 0 1 10 4 8 1 10 4 8 1 10 4 8 dsum = d.sum(-1) (0 ,.,.) = -3 -1 0 0 2 -7 16 16 16 -5 -3 -2 -2 0 -9 14 14 14 4 6 7 7 9 0 23 23 23 [torch.FloatTensor of size 1x3x9] loc = (dsum==0).nonzero() 0 0 2 0 0 3 0 1 4 0 2 5 loc[:,-1] 2 3 4 5 D[:,loc[:,-1],:] (0 ,.,.) = 4 2 5 10 4 4 9 2 2 0 2 6 2 1 4 0 [torch.FloatTensor of size 1x4x4]
st45795
on another note, can you help me understand what loc = (dsum==0).nonzero() does? so if the output of (dsum==0) is 0 0 1 0 0 0 0 0 0 1 0 0 then (dsum==0).nonzero() gives us 0 0 2 0 1 3 I understand that 2 and 3 are the indices, but what is 1 here?
st45796
The result gives you a tensor containing indices for all nonzero occurrences in the shape [num_nonzeros, dims]. The first column (loc[:, 0]) gives the indices in dim0, the second one in dim1, etc. As dsum has three dimensions, the second row stands for dsum[0, 1, 3].
st45797
The above code for determination of tensor index is very useful, lately I came across an anomaly when doing this operation. This is function I am using for determining index of host tensor from target tensor. def get_index(host, target): diff = target.unsqueeze(1) - host.unsqueeze(0) dsum = torch.abs(diff).sum(-1) loc = (dsum == 0).nonzero() return loc[:, -1] for example I wanted to extract the index from 2D Tensor of shape (40,2). Such that Target[:,1] = 0 and 1 . This is the result I got: tensor([[0.0000, 0.0000], [1.0000, 0.0000], [0.0000, 0.1111], [1.0000, 0.1111], [0.0000, 0.2222], [1.0000, 0.2222], [0.0000, 0.3333], [1.0000, 0.3333], [0.0000, 0.4444], [1.0000, 0.4444], [0.0000, 0.5556], [1.0000, 0.5556], [0.0000, 0.7778], [1.0000, 0.7778], [0.0000, 0.8889], [1.0000, 0.8889], [0.0000, 1.0000], [1.0000, 1.0000]]) The value of 0.6667 is missing from this output. Can anyone explain this abnormality or am I doing something wrong. @ptrblck could you please suggest anything.
st45798
You might be running into rounding errors due to the limited precision of floating point operations. Try to compare dsum to a small eps via dsum <= eps instead of dsum == 0.
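For example (a sketch reusing T and K from the earlier posts; the eps value is a rough assumption):

import torch

eps = 1e-6
dsum = torch.abs(T.unsqueeze(2) - K.unsqueeze(1)).sum(-1)
loc = (dsum <= eps).nonzero()   # tolerate floating point rounding
rows_in_K = loc[:, -1]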
st45799
I can't figure out why/where the dataloaders are figuring there is a file called 4738.jpg. I can't even find the source of the mistake: github.com VishakBharadwaj94/Image_Similarity_AutoEncoder/blob/main/Image_Similarity.ipynb
st45800
Solved by ptrblck in post #2 The image path is defined as img_loc = self.data_path/f'{idx}.jpg', so it uses the passed idx to the __getitem__. By default this variable will be in the range [0, len(dataset)-1], so you should make sure that all these files exist.
st45801
The image path is defined as img_loc = self.data_path/f'{idx}.jpg', so it uses the passed idx to the __getitem__. By default this variable will be in the range [0, len(dataset)-1], so you should make sure that all these files exist.
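If the files are not guaranteed to be named 0.jpg … len-1.jpg, a common alternative is to index whatever files actually exist; a hypothetical sketch (class name, glob pattern, and transform handling are assumptions):

from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset

class ImageFolderDataset(Dataset):
    def __init__(self, data_path, transform=None):
        # collect the files that actually exist instead of relying on idx
        self.files = sorted(Path(data_path).glob('*.jpg'))
        self.transform = transform

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        img = Image.open(self.files[idx]).convert('RGB')
        if self.transform is not None:
            img = self.transform(img)
        return img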
st45802
Hi I am trying to make customized LSTM cell but have some problems with figuring out what the really output is. From the source code, it seems like returned value of output and permute_hidden value. And output and hidden values are from result. I am wondering the what result means and where the result is coming from. if batch_sizes is None: result = _VF.lstm(input, hx, self._flat_weights, self.bias, self.num_layers, self.dropout, self.training, self.bidirectional, self.batch_first) else: result = _VF.lstm(input, batch_sizes, hx, self._flat_weights, self.bias, self.num_layers, self.dropout, self.training, self.bidirectional) output = result[0] hidden = result[1:] # xxx: isinstance check needs to be in conditional for TorchScript to compile if isinstance(orig_input, PackedSequence): output_packed = PackedSequence(output, batch_sizes, sorted_indices, unsorted_indices) return output_packed, self.permute_hidden(hidden, unsorted_indices) else: return output, self.permute_hidden(hidden, unsorted_indices) Any help or comments would be appriciated! Thank you!
st45803
Depending on which implementation and device you are using I assume the result is created in one of these 5 methods.
st45804
I am trying to understand what result means and how it is separated into output and hidden values. As far as I know, the output of the LSTM is the hidden and cell states at each step, but I am wondering what output is made of.
st45805
Trying to implement a recurrent Resnet of sorts… Stuck on this error… Please help this lost lamb class Block(nn.Module): def __init__(self, channels, seq): super(Block, self).__init__() self.seq = seq self.channels = channels self.conv = nn.Conv2d(self.channels, self.channels, 3, padding = 1, stride = 1, groups = self.seq) self.bn = nn.BatchNorm2d(self.channels) self.relu = nn.ReLU() def forward(self, tensor): identity = tensor tensor = self.conv(tensor) tensor = self.bn(tensor) tensor = self.relu(tensor) tensor = self.conv(tensor) tensor = self.bn(tensor) image1062×477 13.6 KB
st45806
Solved by ptrblck in post #6 Your Block class is returning None (or rather nothing which will be None), so you probably want to return tensor.
st45807
Your code works fine, if I pass a valid tensor to it: module = Block(1, 1) x = torch.randn(1, 1, 3, 3) out = module(x) so you would have to check the input and make sure it’s not a None object.
st45808
The input is generated in this block and it seems fine before it is actually passed into the block above… class ResNet(nn.Module): def __init__(self, Block, Block_Temporal, layers, img_channels, seq): super(ResNet, self).__init__() self.sequence = seq self.in_channels = 64*self.sequence self.temporal_channels = 64 self.conv_init = nn.Conv2d(img_channels*self.sequence, 64*self.sequence, kernel_size = 4, stride = 2, padding = 1, groups = self.sequence) self.bn = nn.BatchNorm2d(64*self.sequence) self.relu = nn.ReLU() self.maxpool = nn.MaxPool2d(3, stride = 2, padding = 1) self.layer1 = self._make_layer(Block, layers[0], 1) self.layer2 = self._make_layer(Block, layers[1], 2) self.layer3 = self._make_layer(Block, layers[2], 4) self.layer4 = self._make_layer(Block, layers[3], 8) self.temporal1 = self._make_temporal_layer(Block_Temporal, 2) self.temporal2 = self._make_temporal_layer(Block_Temporal, 4) self.temporal3 = self._make_temporal_layer(Block_Temporal, 8) self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) self.sigmoid = nn.Sigmoid() self.gru = nn.GRU(input_size = 1024, hidden_size = 512, batch_first = True, bias = False) def _make_layer(self, Block, num_blocks, step): layers = [] channels = self.in_channels * step for i in range(num_blocks): layers.append(Block(channels, self.sequence)) return nn.Sequential(*layers, nn.Conv2d(channels, channels*2, 3, stride = 2, padding = 1, groups = self.sequence), nn.BatchNorm2d(channels*2)) def _make_temporal_layer(self, BLock_Temporal, step): return nn.Sequential(BLock_Temporal(self.temporal_channels*step, self.sequence)) def forward(self, x): temporal_list = [] x = self.conv_init(x) x = self.bn(x) x = self.relu(x) x = self.maxpool(x) x = self.layer1(x) temporal = self.temporal1(x) temporal_list.append(temporal) x = self.layer2(x) temporal = self.temporal2(x) temporal_list.append(temporal) x = self.layer3(x) temporal = self.temporal3(x) temporal_list.append(temporal) x = self.layer4(x) x = self.avgpool(x) x = x.squeeze() x = torch.split(x, int(x.size(1)/self.sequence), 1) x = torch.cat([x[i].unsqueeze(1) for i in range(len(x))], 1) x = self.gru(x) x = self.Sigmoid(x) temporal_list.append(x) return temporal_list Error occurs in ‘self.layer1(x)’
st45809
I still cannot reproduce this error using: class ResNet(nn.Module): def __init__(self, Block, layers, img_channels, seq): super(ResNet, self).__init__() self.sequence = seq self.in_channels = 64*self.sequence self.temporal_channels = 64 self.conv_init = nn.Conv2d(img_channels*self.sequence, 64*self.sequence, kernel_size = 4, stride = 2, padding = 1, groups = self.sequence) self.bn = nn.BatchNorm2d(64*self.sequence) self.relu = nn.ReLU() self.maxpool = nn.MaxPool2d(3, stride = 2, padding = 1) self.layer1 = self._make_layer(Block, layers[0], 1) self.layer2 = self._make_layer(Block, layers[1], 2) self.layer3 = self._make_layer(Block, layers[2], 4) self.layer4 = self._make_layer(Block, layers[3], 8) def _make_layer(self, Block, num_blocks, step): layers = [] channels = self.in_channels * step for i in range(num_blocks): layers.append(Block(channels, self.sequence)) return nn.Sequential(*layers, nn.Conv2d(channels, channels*2, 3, stride = 2, padding = 1, groups = self.sequence), nn.BatchNorm2d(channels*2)) def forward(self, x): temporal_list = [] x = self.conv_init(x) x = self.bn(x) x = self.relu(x) x = self.maxpool(x) x = self.layer1(x) return x model = ResNet(Block, [2, 2, 2, 2], 3, 1) x = torch.randn(1, 3, 224, 224) out = model(x) Could you post an executable code snippet using random input data and show how you are creating the model so that I could reproduce it?
st45810
I don’t get what you mean by executable code snippet so i’m posting the whole thing here… import torch import torch.nn as nn class Block(nn.Module): def __init__(self, channels, seq): super(Block, self).__init__() self.seq = seq self.channels = channels self.conv = nn.Conv2d(self.channels, self.channels, 3, padding = 1, stride = 1, groups = self.seq) self.bn = nn.BatchNorm2d(self.channels) self.relu = nn.ReLU() def forward(self, tensor): identity = tensor tensor = self.conv(tensor) tensor = self.bn(tensor) tensor = self.relu(tensor) tensor = self.conv(tensor) tensor = self.bn(tensor) tensor += identity tensor = self.relu(tensor) return class Block_Temporal(nn.Module): def __init__(self, channels, seq): super(Block_Temporal, self).__init__() self.sequence = seq self.channels = channels self.conv_std = nn.Conv2d(self.channels*2, self.channels, kernel_size = 3, padding = 1, stride = 1) self.conv_update = nn.Conv2d(self.channels*2, self.channels, kernel_size = 2, stride = 1) self.pad = nn.ZeroPad2d((0, 1, 0, 1)) self.sigmoid = nn.Sigmoid() self.tanh = nn.Tanh() self.bn = nn.BatchNorm2d(self.channels) def forward(self, tensor): tensor_seq = torch.split(tensor, self.channels, 1) hidden_tensor = torch.zeros(tensor_seq[0].size()) for i in range(self.sequence): x = torch.cat([tensor_seq[i], hidden_tensor], 1) reset = self.sigmoid(self.conv_std(x)) x = self.pad(x) update = self.sigmoid(self.conv_update(x)) cnd_memory = update * self.bn( self.conv_std(torch.cat([tensor_seq[i], (reset * hidden_tensor)], 1))) hidden_tensor = self.tanh(cnd_memory) + (hidden_tensor * (1 - update)) return hidden_tensor class ResNet(nn.Module): def __init__(self, Block, Block_Temporal, layers, img_channels, seq): super(ResNet, self).__init__() self.sequence = seq self.in_channels = 64*self.sequence self.temporal_channels = 64 self.conv_init = nn.Conv2d(img_channels*self.sequence, 64*self.sequence, kernel_size = 4, stride = 2, padding = 1, groups = self.sequence) self.bn = nn.BatchNorm2d(64*self.sequence) self.relu = nn.ReLU() self.maxpool = nn.MaxPool2d(3, stride = 2, padding = 1) self.layer1 = self._make_layer(Block, layers[0], 1) self.layer2 = self._make_layer(Block, layers[1], 2) self.layer3 = self._make_layer(Block, layers[2], 4) self.layer4 = self._make_layer(Block, layers[3], 8) self.temporal1 = self._make_temporal_layer(Block_Temporal, 2) self.temporal2 = self._make_temporal_layer(Block_Temporal, 4) self.temporal3 = self._make_temporal_layer(Block_Temporal, 8) self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) self.sigmoid = nn.Sigmoid() self.gru = nn.GRU(input_size = 1024, hidden_size = 512, batch_first = True, bias = False) def _make_layer(self, Block, num_blocks, step): layers = [] channels = self.in_channels * step for i in range(num_blocks): layers.append(Block(channels, self.sequence)) return nn.Sequential(*layers, nn.Conv2d(channels, channels*2, 3, stride = 2, padding = 1, groups = self.sequence), nn.BatchNorm2d(channels*2)) def _make_temporal_layer(self, BLock_Temporal, step): return nn.Sequential(BLock_Temporal(self.temporal_channels*step, self.sequence)) def forward(self, x): temporal_list = [] x = self.conv_init(x) x = self.bn(x) x = self.relu(x) x = self.maxpool(x) x = self.layer1(x) temporal = self.temporal1(x) temporal_list.append(temporal) x = self.layer2(x) temporal = self.temporal2(x) temporal_list.append(temporal) x = self.layer3(x) temporal = self.temporal3(x) temporal_list.append(temporal) x = self.layer4(x) x = self.avgpool(x) x = x.squeeze() x = torch.split(x, int(x.size(1)/self.sequence), 1) x = 
torch.cat([x[i].unsqueeze(1) for i in range(len(x))], 1) x = self.gru(x) x = self.Sigmoid(x) temporal_list.append(x) return temporal_list def ResNet34_Temporal(img_channels, seq): return ResNet(Block, Block_Temporal, [3, 4, 6, 3], img_channels = img_channels, seq = seq) def test(): net = ResNet34_Temporal(3, seq = 5) x = torch.randn(2, 15, 128, 128) y = net(x) print(len(y)) test() Here’s an image of the error itself image1021×746 32.3 KB
st45811
Your Block class is returning None (or rather nothing which will be None), so you probably want to return tensor.
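For completeness, a minimal corrected version of the Block from above (kept as close to the original as possible, including the reuse of the same conv/bn twice; only the missing return is added):

import torch.nn as nn

class Block(nn.Module):
    def __init__(self, channels, seq):
        super(Block, self).__init__()
        self.seq = seq
        self.channels = channels
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, stride=1, groups=seq)
        self.bn = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, tensor):
        identity = tensor
        tensor = self.relu(self.bn(self.conv(tensor)))
        tensor = self.bn(self.conv(tensor))
        tensor += identity
        tensor = self.relu(tensor)
        return tensor   # this return was missing in the original Block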
st45812
I have the following definition in the init of my model self.p = nn.Parameter(torch.ones(1)) My question is how to implement a geometric sequence based on self.p and use it during forward(), which is like [p**i for i in range(5)], I have tried p_geometric = torch.tensor([self.p**i for i in range(x.size(1))]).cuda(), but the weight of self.p did not get updated with that implementation. Thanks!
st45813
Solved by ptrblck in post #2 You shouldn’t recreate a tensor, as it would break the computation graph. Use torch.cat or torch.stack instead: p = nn.Parameter(torch.ones(1) * 2) out = torch.cat([p**i for i in range(10)]) out.mean().backward() print(p.grad) > tensor([409.7000])
st45814
You shouldn’t recreate a tensor, as it would break the computation graph. Use torch.cat or torch.stack instead: p = nn.Parameter(torch.ones(1) * 2) out = torch.cat([p**i for i in range(10)]) out.mean().backward() print(p.grad) > tensor([409.7000])
st45815
Given that I have a model and multiple sets of weights, say weights from epoch 1 to epoch 5 (w_1, w_2, w_3, w_4, w_5). I realise that different results are generated (during inference) when I load my weights into my model in two different ways: My model is a CNN network with Batch Norms and ReLUs included. Method 1: model = myModel() model = nn.DataParallel(model) model = model.cuda() for idx in range(1, 6): state_dict = torch.load(w_idx) model.load_state_dict(state_dict) model.eval() outputs = model(inputs) #inference Method 2: for idx in range(1, 6): state_dict = torch.load(w_idx) ############################### model = myModel() model = nn.DataParallel(model) model = model.cuda() ############################### model.load_state_dict(state_dict) model.eval() outputs = model(inputs) #inference May I ask why is that the case? Thank you
st45816
How large are these differences and are you also seeing them using the same approach? If so, you might be facing non-deterministic results, if you use e.g. cudnn benchmarking etc. due to the limited floating point precision.
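If you want to rule out non-determinism from cudnn, these flags are the usual starting point (the seed value is arbitrary):

import torch

torch.manual_seed(0)
torch.backends.cudnn.benchmark = False      # disable cudnn autotuning
torch.backends.cudnn.deterministic = True   # select deterministic cudnn kernels
# recent versions additionally offer:
# torch.use_deterministic_algorithms(True)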
st45817
Hello, I am a bit stuck with loading a CSV file. I get invalid load key, '\xef'. or KeyError: 44 for the code snippets below, respectively: vrn = torch.load('dataset-housing-price-prediction-service/17-nov-2020-infoline-real-estate.csv') import joblib vrn = joblib.load(open('test.csv', 'rb')) On the web there were different proposals about the cause of the issue, but nothing in detail. As you may notice, I tried to vary the tools to find out whether this is a tool-related issue. Likely it is an issue with the file, but it is plain CSV. I attached a file sample at this link. Could you please take a look?
st45818
You can try to read CSV files with the pandas library: import pandas as pd, then data = pd.read_csv('path/to/file.csv'). Then you can convert it to the data type you want.
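For example (the file path is taken from the post above; selecting only numeric columns is an assumption about the data):

import pandas as pd
import torch

df = pd.read_csv('dataset-housing-price-prediction-service/17-nov-2020-infoline-real-estate.csv')
# keep the numeric columns and build a float tensor from them
data = torch.tensor(df.select_dtypes(include='number').values, dtype=torch.float32)
print(data.shape)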
st45819
Thank you, pandas parses it without an issue. Do you have any ideas why PyTorch didn't?
st45820
One classic network for image classification is as follows: class Mnist_CNN(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1) self.conv2 = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1) self.conv3 = nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1) def forward(self, xb): xb = xb.view(-1, 1, 28, 28) xb = F.relu(self.conv1(xb)) xb = F.relu(self.conv2(xb)) xb = F.relu(self.conv3(xb)) xb = F.avg_pool2d(xb, 4) return xb.view(-1, xb.size(1)) If I want to add a new convolution layer, can I just reuse the self.conv2 ? like the following code: class Mnist_CNN(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1) self.conv2 = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1) self.conv3 = nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1) def forward(self, xb): xb = xb.view(-1, 1, 28, 28) xb = F.relu(self.conv1(xb)) xb = F.relu(self.conv2(xb)) xb = F.relu(self.conv2(xb)) # add a new layer xb = F.relu(self.conv3(xb)) xb = F.avg_pool2d(xb, 4) return xb.view(-1, xb.size(1)) I am afraid the new-added layer would have the same weights as the former layer.
st45821
Solved by ptrblck in post #2 Adding a new layer (with new parameters) is not the same as reusing the layer (same parameters). If you want to add a module, you would have to initialize it and use it in the forward. Depending on your use case, you might want to derive a new class using Mnist_CNN as the base class and add the ne…
st45822
Adding a new layer (with new parameters) is not the same as reusing the layer (same parameters). If you want to add a module, you would have to initialize it and use it in the forward. Depending on your use case, you might want to derive a new class using Mnist_CNN as the base class and add the new layer to it.
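A sketch of that suggestion, using Mnist_CNN from the post above as the base class (the layer name conv2b and its stride are arbitrary choices):

import torch.nn as nn
import torch.nn.functional as F

class Mnist_CNN_Deeper(Mnist_CNN):
    def __init__(self):
        super().__init__()
        # a new layer with its own parameters, not shared with self.conv2
        self.conv2b = nn.Conv2d(16, 16, kernel_size=3, stride=1, padding=1)

    def forward(self, xb):
        xb = xb.view(-1, 1, 28, 28)
        xb = F.relu(self.conv1(xb))
        xb = F.relu(self.conv2(xb))
        xb = F.relu(self.conv2b(xb))   # the added layer
        xb = F.relu(self.conv3(xb))
        xb = F.avg_pool2d(xb, 4)
        return xb.view(-1, xb.size(1))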
st45823
I am trying to create three separate LSTM networks, and then merge them together into one big model. From my understanding I can create three lstm networks and then create a class for merging those networks together. Is that correct? I am kind of new to this.
st45824
Yes, that should be possible as you can freely concatenate the outputs of several modules and pass it to a new module.
st45825
I have tried to code this, but this is as far as I got. I am not sure how to determine the input and output dimension of concat_layer. Also, what goes in the forward function under lstms? Any help is greatly appreciated. class Net(torch.nn.Module): def __init__(self): super(Net, self).__init__() self.lstm1 = torch.nn.LSTM(input_size = 1, hidden_size = 2, num_layers = 1, batch_first = True) self.lstm2 = torch.nn.LSTM(input_size = 1, hidden_size = 2, num_layers = 1, batch_first = True) self.lstm3 = torch.nn.LSTM(input_size = 1, hidden_size = 2, num_layers = 1, batch_first = True) self.concat_layer = torch.nn.Linear(?, ?) self.linear = torch.nn.Linear(,1) def forward(self, x): lstm1 = ? lstm2 = ? lstm3 = ? concat = torch.cat((lstm1, lstm2, lstm3), dim=1) output = self.linear(concat) return output
st45826
The LSTM docs 9 explain the expected input with their shapes. Once your input works, you could add a print statement in the forward method to check the shape of the concatenated tensor and adapt the in_features accordingly.
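A minimal sketch along those lines (the shapes, hidden sizes, and use of the last hidden state per branch are assumptions, not the only way to do it):

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.lstm1 = nn.LSTM(input_size=1, hidden_size=2, num_layers=1, batch_first=True)
        self.lstm2 = nn.LSTM(input_size=1, hidden_size=2, num_layers=1, batch_first=True)
        self.lstm3 = nn.LSTM(input_size=1, hidden_size=2, num_layers=1, batch_first=True)
        # 3 branches * hidden_size 2 = 6 concatenated features
        self.linear = nn.Linear(6, 1)

    def forward(self, x1, x2, x3):
        # each LSTM returns (output, (h_n, c_n)); use the last hidden state per branch
        _, (h1, _) = self.lstm1(x1)
        _, (h2, _) = self.lstm2(x2)
        _, (h3, _) = self.lstm3(x3)
        concat = torch.cat((h1[-1], h2[-1], h3[-1]), dim=1)   # (batch, 6)
        return self.linear(concat)

net = Net()
x = torch.randn(4, 7, 1)   # (batch, seq_len, input_size) since batch_first=True
print(net(x, x, x).shape)  # torch.Size([4, 1])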
st45827
I am using the following code and I got the iou_score, but I want to evaluate it by: Precision & Recall such as this one: FP, FN, TP, TN = numeric_score(prediction, groundtruth) N = FP + FN + TP + TN accuracy = np.divide(TP + TN, N) return accuracy * 100.0` how can I add them in , or just print the Dice? def iou_score(output, target): smooth = 1e-5 if torch.is_tensor(output): output = torch.sigmoid(output).data.cpu().numpy() if torch.is_tensor(target): target = target.data.cpu().numpy() output_ = output > 0.5 target_ = target > 0.5 intersection = (output_ & target_).sum() union = (output_ | target_).sum() return (intersection + smooth) / (union + smooth) def dice_coef(output, target): smooth = 1e-5 output = torch.sigmoid(output).view(-1).data.cpu().numpy() target = target.view(-1).data.cpu().numpy() intersection = (output * target).sum() return (2. * intersection + smooth) / \ (output.sum() + target.sum() + smooth) iou = iou_score(output, target) avg_meter.update(iou, input.size(0)) output = torch.sigmoid(output).cpu().numpy() for i in range(len(output)): for c in range(config['num_classes']): cv2.imwrite(os.path.join('outputs', config['name'], str(c), meta['img_id'][i] + '.jpg'), (output[i, c] * 255).astype('uint8')) print('IoU: %.4f' % avg_meter.avg) torch.cuda.empty_cache() https://github.com/4uiiurz1/pytorch-nested-unet With reagrds, any help would be so grateful!
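In the style of the iou_score above, a sketch of the confusion-matrix based metrics (the 0.5 threshold and the smoothing term are assumptions):

import torch

def precision_recall_accuracy(output, target, threshold=0.5, eps=1e-5):
    pred = (torch.sigmoid(output) > threshold).float()
    target = (target > threshold).float()
    tp = (pred * target).sum()
    fp = (pred * (1 - target)).sum()
    fn = ((1 - pred) * target).sum()
    tn = ((1 - pred) * (1 - target)).sum()
    precision = (tp + eps) / (tp + fp + eps)
    recall = (tp + eps) / (tp + fn + eps)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision * 100.0, recall * 100.0, accuracy * 100.0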
st45828
Hello everyone, let's say I'm training an auto-encoder (U-Net) using images of input size 256x256. For testing, I have images of X.X size, so how do I test the model on these data? Thank you,
st45829
Since you are working on an Auto-encoder I assume you could calculate the loss using the input and output directly. Depending on the model architecture you could pass variable input shapes to the model. E.g. if you are just using conv and pooling operations, this might be possible. I assume that input with a largely different input shape could create worse results as your model wasn’t trained on these, but that’s just a guess.
st45830
torch.tensor([]) Traceback (most recent call last): File “”, line 1, in TypeError: ‘module’ object is not callable torch.tensor([0,2]) Traceback (most recent call last): File “”, line 1, in TypeError: ‘module’ object is not callable Possible reasons why that would happen?
st45831
Hi, I’m also facing the same issue. Can you please tell me, how you updated the package? I tried conda update pytorch but still its showing the same version. Thanks in advance
st45832
Try to uninstall pytorch and torchvision first, update conda and install it again using the instructions from the website 505.
st45833
I had the same problem. Upgrading to 0.4 works, however I have to use 0.3 for other reasons. Is there any way to work around this issue but still using 0.3 version? (The reason to avoid 0.4 is that the custom function seems to be very slow, so…) Can anyone help me on this? Thank you.
st45834
In older version you could use: torch.FloatTensor([]) torch.FloatTensor([1., 2.]) Also any supported type, e.g. torch.LongTensor.
st45835
For the 0.3 version, I want to add a learnable parameter like self.scale = nn.Parameter(torch.tensor([10.0]), requires_grad=True) in an nn.Module. But it throws an error, TypeError: 'module' object is not callable. If I use a tensor, torch.FloatTensor([10.0]), it is not learnable, right? class PreActBottleneck(nn.Module): def __init__(self, inplanes, planes, stride=1, downsample=None): super(PreActBottleneck, self).__init__() self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False) self.scale = nn.Parameter(torch.tensor([10.0]), requires_grad=True)
st45836
You could still wrap the FloatTensor into a Parameter: self.scale = nn.Parameter(torch.FloatTensor([10.]))
st45837
Sorry my apologies, I was making a stupid mistake. I am not facing this issue. Thanks for your help.
st45838
Hello, I have a doubt about using conv2d or conv3d on my problem. I have an array of shape (M, M, N) where each image is formed by M x M pixels and we have N of those. My question is: Should I use 2d conv where the channels are the N value (i.e. input shape (batch size, N, M, M) 3d conv net where we start with one channel on a 3D image (i.e. input shape (batch size, 1, M, M, N) The N images may have shared statistics. Now, I have tried both and the 3D conv net seems to give better results but I am not sure how should I interpret that and maybe it is due to an hyperparameter issue. Thanks.
st45839
The difference between nn.Conv2d and nn.Conv3d would be how the additional N dimension is handled in your use case. The nn.Conv2d layer would interpret N as the channel dimension and each kernel would thus use all channels in the default setup. The sliding windows would be applied in the spatial MxM dimensions. On the other hand, the nn.Conv3d layer would use the “sliding cube” in the NxMxM dimensions and use a single input channel. I’m not familiar with your use case and don’t know which approach would make more sense. Since the 3D layer seems to give better results, you could stick to it.
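To make the two interpretations concrete (the sizes below are arbitrary examples):

import torch
import torch.nn as nn

B, M, N = 8, 64, 10

# Option 1: treat the N images as channels of a 2D input
x2d = torch.randn(B, N, M, M)     # (batch, N, M, M)
conv2d = nn.Conv2d(in_channels=N, out_channels=16, kernel_size=3, padding=1)
print(conv2d(x2d).shape)          # torch.Size([8, 16, 64, 64])

# Option 2: treat the stack as a single-channel 3D volume
x3d = torch.randn(B, 1, N, M, M)  # (batch, 1, N, M, M)
conv3d = nn.Conv3d(in_channels=1, out_channels=16, kernel_size=3, padding=1)
print(conv3d(x3d).shape)          # torch.Size([8, 16, 10, 64, 64])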
st45840
I have a subclass of torch.nn.Module for which I have multiple output heads, differing by one parameter that I pass into them. I want to have all of them accessible in the .children() function call, but with using a loop. In the code below, it seems only testClass3 will show any children at all (testClass1 does not). Is there a way to achieve this with loops? import torch class testClass2(torch.nn.Module): def __init__(self, i): super(testClass2, self).__init__() self.i = i class testClass1(torch.nn.Module): def __init__(self): super(testClass1, self).__init__() self.test_classes = [] for i in range(5): self.test_classes.append(testClass2(i)) self.checkParams() def checkParams(self): for children in self.children(): print(children) class testClass3(torch.nn.Module): def __init__(self): super(testClass3, self).__init__() self.test_classes = [] self.t1 = testClass2(1) self.t2 = testClass2(2) self.checkParams() def checkParams(self): for children in self.children(): print(children) t = testClass1() t = testClass3() I’m not sure how torch.nn.Module determines its “children”, I’m guessing it just checks all of its parameters is they are an instance of torch.nn.Module or something like that. Any insights appreciated.
st45841
PyTorch will register the submodules as children, if you use an nn.ModuleList instead of a Python list. Use self.test_classes = nn.ModuleList() and it should work.
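Applied to the example above (testClass2 as defined in the question):

import torch.nn as nn

class testClass1(nn.Module):
    def __init__(self):
        super(testClass1, self).__init__()
        self.test_classes = nn.ModuleList()   # registers each entry as a child module
        for i in range(5):
            self.test_classes.append(testClass2(i))
        self.checkParams()

    def checkParams(self):
        for child in self.children():
            print(child)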
st45842
for i, data in enumerate(train_dataloader, 0): File “/home/vijay/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py”, line 637, in next return self._process_next_batch(batch) File “/home/vijay/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py”, line 657, in _process_next_batch if isinstance(batch, ExceptionWrapper): raise batch.exc_type(batch.exc_msg) error
st45843
Could you describe your error, what you are trying to do, and where you are stuck? PS: you can post code snippets by wrapping them into three backticks ```, which would make debugging easier.
st45844
Hi, I have an Intel GPU. Can PyTorch support this? What other devices does PyTorch support? Calling torch.device with the wrong value gives the error below: Expected one of cpu, cuda, mkldnn, opengl, opencl, ideep, hip, msnpu, xla device type. Can I use any of the devices from the list above?
st45845
Solved by vaisakh_m in post #2 No, PyTorch only supports CUDA enabled devices(Nvidia GPUs) as GPUs. You can still run PyTorch on your CPU. All the devices mentioned here are compatible with PyTorch. Also, check if this is useful.
st45846
No, PyTorch only supports CUDA enabled devices(Nvidia GPUs) as GPUs. You can still run PyTorch on your CPU. prateekazam: Expected one of cpu, cuda, mkldnn, opengl, opencl, ideep, hip, msnpu, xla device type All the devices mentioned here are compatible with PyTorch. Also, check if this 26 is useful.
st45847
I want to get the last hidden state in a batch (with different length) after feeding through unidirection nn.LSTM (not the padded state). My current approach is: List[Tensor] -> Padded Tensor -> PackPaddedSequence -> LSTM -> PadPackedSequence -> Select hidden state of last step using length a = torch.ones(25, 300) b = torch.ones(22, 300) c = torch.ones(15, 300) padded_seq = pad_sequence([a, b, c]) # torch.Size([25, 3, 300]) lengths = torch.Tensor([25, 22, 15]).int() inp_seq = pack_padded_sequence(padded_seq, lengths=lengths) lstm = torch.nn.LSTM(input_size=300, hidden_size=150, num_layers=2) out, _ = lstm(inp_seq) out_tensor, inp_length = pad_packed_sequence(out) b_size = list(inp_length.size())[0] last_hidden = out_tensor[inp_length - 1, range(b_size)].contiguous() My question is: Am I doing a correct way to get that hidden state ? (I felt it a little clumsy) How can I use it with bidirectional=True ?
st45848
I have a matrix A and a tensor b of size (1,3) - so a vector of size 3. I want to compute C = b1 * A + b2 * A^2 + b3 * A^3 where ^n is the n-th power of A. At the end, C should have the same shape as A. How can I do this efficiently?
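One way to do this without recomputing each power from scratch (a sketch with arbitrary sizes and coefficient values):

import torch

A = torch.randn(4, 4)
b = torch.tensor([[0.5, 0.2, 0.1]])   # shape (1, 3), illustrative values

C = torch.zeros_like(A)
P = torch.eye(A.size(0))
for coeff in b[0]:
    P = P @ A          # successively A, A^2, A^3
    C = C + coeff * P  # C ends up with the same shape as A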
st45849
Hi all! I am working on a dataset of ~300 samples with ~5000 data-points each - ranged between 0 and 1. I am interested in: Group samples for similarity; Find the differences between groups; Would make sense to train an autoencoder to reduce the dimensionality to N points. Take the output of the encoder and use it as the input of an unsupervised algorithm (KNN, DBSCAN)? if so, is it correct to use a sigmoid and relu activation for the encoder and decoder, respectively? The AE architecture: class AE(nn.Module): def __init__(self): super().__init__() self.encoder_hidden_layer_1 = nn.Linear(in_features=4979 , out_features=3000) self.encoder_hidden_layer_2 = nn.Linear(in_features=3000, out_features=1500) self.encoder_output_layer = nn.Linear(in_features=1500, out_features=10) self.decoder_hidden_layer_1 = nn.Linear(in_features=10, out_features=512) self.decoder_hidden_layer_2 = nn.Linear(in_features=512, out_features=2000) self.decoder_output_layer = nn.Linear(in_features=2000, out_features=4979) def forward(self, features): x = self.encoder_hidden_layer_1(features) x = F.relu(x) x = self.encoder_hidden_layer_2(x) x = F.relu(x) x = self.encoder_output_layer(x) encoded = F.sigmoid(x) x = self.decoder_hidden_layer_1(x) x = F.relu(x) x = self.decoder_hidden_layer_2(x) x = F.relu(x) x = self.decoder_output_layer(x) decoded = F.relu(x) return decoded, encoded
st45850
Hi, I was trying to replicate some experiments done in TF and noticed that they use something called virtual batch size. Some papers have shown that the per device batch size and the accuracy of batch norm estimates that comes with it can matter and is often a reason why large batch size training does not perform as well as training with smaller batch sizes. At the same time, training with larger batches, especially on lower dimensional data (eg 32x32 images) often yield better GPU utilization. Is there a way to replicate this ghost batch norm in Pytorch, eg can I have a batch norm layer that automatically subdivided the batch into smaller micro-batches and computes statistics on each individual one? Right now per device batch size is coupled to total batch size and number of GPUs I am using which makes it hard to experiment with it, eg if I want to use a total bs of 1024 and a virtual batch size of 64 I need to use 16 GPUs. I found one repo that does that but they actually split the batch and perform multiple forward passes which is super inefficient. Thank’s for your help,
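One common way to decouple the batch-norm statistics from the per-device batch size is to chunk only inside the norm layer, so the rest of the forward pass still sees the full batch; a rough sketch (it assumes the real batch size is divisible by the virtual batch size, and it simply lets each chunk update the running statistics in turn):

import torch
import torch.nn as nn

class GhostBatchNorm2d(nn.Module):
    def __init__(self, num_features, virtual_batch_size=64, **kwargs):
        super().__init__()
        self.virtual_batch_size = virtual_batch_size
        self.bn = nn.BatchNorm2d(num_features, **kwargs)

    def forward(self, x):
        if self.training:
            chunks = x.split(self.virtual_batch_size, dim=0)
            # each micro-batch is normalized with its own statistics
            return torch.cat([self.bn(c) for c in chunks], dim=0)
        return self.bn(x)   # use running statistics at eval time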
st45851
Hey there, I would like to create an object detector for my own dataset which includes 5 different classes. Therefore I checked out the Object Detection Finetuning tutorial. How can I change the code to train the model on my own pictures and classes? Is there any example? First I imported my own data and of course changed the names where the data is used. But how can I change the number of classes? Right now I only changed the variable num_classes to 6. What else do I have to do? Thanks a lot for your help!
st45852
Are you following this example? https://pytorch.org/docs/stable/torchvision/models.html#mask-r-cnn If yes, I did the same last week and it worked fine for me. Please note the method def get_model_instance_segmentation(num_classes). Are you adding new labels, or would you like to have only your new labels? I did it with only 6 classes (background + 5 classes) and it worked fine.
st45853
No, I did this tutorial: https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html I would like to have just my 5 classes (+ background).
st45854
Yep, that worked for me as well. Is your custom dataset working fine? I can share mine, if you used Labelme to create the annotations. Did you test the data loader? If both of the above are OK, then you just need to run the training.
st45855
Can you tell me what code I have to add? If I start training I get this error: /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2854: UserWarning: The default behavior for interpolate/upsample with float scale_factor will change in 1.6.0 to align with other frameworks/libraries, and use scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. warnings.warn("The default behavior for interpolate/upsample with float scale_factor will change " Loss is nan, stopping training {‘loss_classifier’: tensor(1.8139, device=‘cuda:0’, grad_fn=), ‘loss_box_reg’: tensor(nan, device=‘cuda:0’, grad_fn=), ‘loss_mask’: tensor(2.0006, device=‘cuda:0’, grad_fn=), ‘loss_objectness’: tensor(17.6392, device=‘cuda:0’, grad_fn=), ‘loss_rpn_box_reg’: tensor(19.2071, device=‘cuda:0’, grad_fn=)} An exception has occurred, use %tb to see the full traceback. SystemExit: 1 I created my Label with Labelbox. How can I ttest my data loader? If I print the pictures, they are shown correctly.
st45856
You can test the dataset with this: len(dataset) dataset[0] And the data loader with this: next(iter(dataloader))
st45857
The dataset is shown, but I do not understand all the numbers, arrays and matrices. I don't have a dataloader. I uploaded my .zip file with all the pictures and unzipped it.
st45858
Now I get a prediction, but I do not have different classes. It is just a black and white prediction of whether there is any object or not.
st45859
Probably it's because the dataset from the tutorial is only for one-class detection, and it doesn't raise an error if you have several classes in an image. I'm looking for an example of training data for multiclass detection too. If you solve this issue please help me; I can't understand how to prepare the data for training. I have an image and a mask with 5 classes on it, but what should the boxes look like if there are several objects for one class, and so on… I can't understand. P.S. The PyTorch tutorials are misleading.
st45860
Hey guys, I was wondering if anyone has implemented Elastic Weight Consolidation (EWC) as outlined in this paper 41? This algorithm allows for sequential/continuous learning without the model encountering catastrophic forgetting. The main part of implementing this is calculating the Fisher information matrix. If anyone has any code they can share on this, that’d be great. Otherwise I’m happy to attempt it and share my code here. Found a tensorflow implementation here: https://github.com/stokesj/EWC 300 which we can use for reference.
st45861
These two repos might have what you’re looking for: GitHub moskomule/ewc.pytorch 283 An implementation of EWC with PyTorch. Contribute to moskomule/ewc.pytorch development by creating an account on GitHub. GitHub kuc2477/pytorch-ewc 265 PyTorch implementation of DeepMind's PNAS 2017 paper "Overcoming Catastrophic Forgetting" - kuc2477/pytorch-ewc
st45862
The original EWC requires you to compute the importance for each weight based on an additional pass over the training set. The importance is the squared gradient averaged over each minibatch. Anyway, you can take a look at the implementation available in the ContinualAI notebooks 147. It is an association for Continual Learning Disclaimer: I am part of ContinualAI.
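A rough sketch of that extra pass (the function name, the use of the true labels, and the classification loss are assumptions; some EWC variants sample labels from the model's own predictions instead):

import torch
import torch.nn.functional as F

def fisher_diagonal(model, data_loader, device='cpu'):
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters() if p.requires_grad}
    model.eval()
    n_batches = 0
    for inputs, targets in data_loader:
        inputs, targets = inputs.to(device), targets.to(device)
        model.zero_grad()
        loss = F.nll_loss(F.log_softmax(model(inputs), dim=1), targets)
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2   # squared gradient per parameter
        n_batches += 1
    return {n: f / n_batches for n, f in fisher.items()}

# EWC penalty added to the new task's loss:
# loss += (lambda_ewc / 2) * sum((fisher[n] * (p - old_params[n]) ** 2).sum()
#                                for n, p in model.named_parameters() if n in fisher)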
st45863
I just happened to be going through some of my code and I noticed that my inputs to my model had requires_grad set to false. So I just went and tried out the basic pytorch example and found this was the same for the example as well. Here is the pytorch example: class Net(nn.Module): def __init__(self): super(Net, self).__init__() # 1 input image channel, 6 output channels, 3x3 square convolution # kernel self.conv1 = nn.Conv2d(1, 6, 3) self.conv2 = nn.Conv2d(6, 16, 3) # an affine operation: y = Wx + b self.fc1 = nn.Linear(16 * 6 * 6, 120) # 6*6 from image dimension self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): # Max pooling over a (2, 2) window x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2)) # If the size is a square you can only specify a single number x = F.max_pool2d(F.relu(self.conv2(x)), 2) x = x.view(-1, self.num_flat_features(x)) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x def num_flat_features(self, x): size = x.size()[1:] # all dimensions except the batch dimension num_features = 1 for s in size: num_features *= s return num_features net = Net() input = torch.randn(1, 1, 32, 32) target = torch.randn(10) # a dummy target, for example target = target.view(1, -1) # make it the same shape as output criterion = nn.MSELoss() # create your optimizer optimizer = optim.SGD(net.parameters(), lr=0.01) # in your training loop: optimizer.zero_grad() # zero the gradient buffers output = net(input) loss = criterion(output, target) loss.backward() print(input.grad) optimizer.step() print(input.requires_grad) The results of the print statements are None and False respectively. I’m just sorta confused, I thought gradients were supposed to accumulate in leaf_variables and this could only happen if requires_grad = True.
st45864
Hi, You are thinking correctly! In your example, your input is not a leaf variable so no grad will be accumulated for it which is the goal of your code too. tylerschuessler: I thought gradients were supposed to accumulate in leaf_variables and this could only happen if requires_grad = True. For instance, weights and biases of layers such as conv and linear are leaf variables and require grad and when you do backward, grads will be accumulated for them and optimizer will update those leaf variables. So, if you want to compute gradients with respect to your INPUTS too (which can be used to UPDATE INPUTS), like the weights, you need to enable grads for them and make them leaf. For example, in your code if you add below line after input = torch.randn(1, 1, 32, 32), you can get grads of loss w.r.t. inputs: input = input.clone().detach().requires_grad_(True) Bests
st45865
Hi, I have a 2d tensor W and 2 boolean masks that select the wanted rows and columns of a tensor. For example: W[row_mask][:, column mask] returns the tensor subset of interest. I want to modify these elements (for example, add 1 to each). However, doing something like: W[row_mask][:, column mask] += 1 leaves the W tensor unchanged. Is there a way I can modify only a subset of a tensor, selected by rows and columns?
st45866
It is inefficient to do selective writes with cuda, so this is not well supported, though there are masked_scatter_ and scatter_add_ ops. But sequential operations like W += row_mask*column_mask or W = torch.where(mask, f(W), W) should be faster.
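Concretely, building a full 2D mask from the row and column masks lets you update the selected block in one shot (small example values):

import torch

W = torch.zeros(4, 5)
row_mask = torch.tensor([True, False, True, False])
column_mask = torch.tensor([False, True, True, False, True])

mask = row_mask.unsqueeze(1) & column_mask.unsqueeze(0)   # (4, 5) boolean mask

W[mask] += 1                      # in-place update of the selected elements
W = torch.where(mask, W + 1, W)   # or the out-of-place variant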
st45867
Hi all ! I’m running across an issue for a test scenario I need to run. I have a dozen of models built with the exact same pattern (WideResNet 28-10). Each of those model have slight differences due to training divergences. I however would need to qualify each of those model on the entire test set (4 minibatches, CIFAR 10). Those models may eventually get back into a training iteration following the test. The easy way to go would be to do a repeated iteration in the test set on each of those models, i.e. roughly : for i, (input, target) in enumerate(test_loader): target, input = target.to(device), input.to(device) for model in models: output = model(input) loss = criterion(output, target) acc1 = accuracy(output, target) # metric management Which seems a bit inefficient/slow (2 min per minibatch, 4 minibatches, a thousand models … ). Is there any way to do so in a clean way which would exploit the embarassingly parallel nature of this task ? At first I thought to concat the models (in the end its just sets of layers that could operate in parallel ?) together but I have no clue on how to achieve that in Pytorch/don’t even know if its doable. Multiprocessing also appeared as a possible choice but would it be any efficient ? Any suggestion is welcome