st31568
I’m trying to train a pneumonia classifier using ResNet34. While training the model, the loss is increasing and accuracy is decreasing drastically (both on the training and validation sets). What might be the potential reason behind this?

def train(model, dataloaders, loss, optimizer, epochs=5):
    train = dataloaders['train']
    valid = dataloaders['valid']
    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    metric = Accuracy().to(device)
    for epoch in tqdm(range(epochs), desc="EPOCHS : "):
        model.train()
        cst = 0
        for x, y in tqdm(train, leave=True, desc="Training : "):
            optimizer.zero_grad()
            x = x.to(device)
            y = y.to(device)
            preds = model(x).to(device)
            acc = metric(preds.argmax(dim=1), y)
            cost = loss(preds, y)
            cst += cost.item()
            cost.backward()
            optimizer.step()
        acc = metric.compute()
        cst /= len(train)
        print(f'Train loss : {cst} \t Train acc : {acc}')

        model.eval()
        cst = 0
        for x, y in tqdm(valid, leave=True, desc="Validation : "):
            x = x.to(device)
            y = y.to(device)
            preds = model(x).to(device)
            acc = metric(preds.argmax(dim=1), y)
            cost = loss(preds, y)
            cst += cost.item()
        acc = metric.compute()
        cst /= len(valid)
        print(f'Valid loss : {cst} \t Valid acc : {acc}')
    return model

model = models.resnet34(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

model.fc = nn.Sequential(
    nn.Dropout(p=.7),
    nn.Linear(in_features=model.fc.in_features, out_features=2),
    nn.LogSoftmax(dim=1)
)
model = model.to(device)

LR = 3e-3
WD = 1e-4
loss = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=LR, weight_decay=WD)

md = train(model, dataloaders, loss, optimizer, epochs=5)

[Screenshot of the training output omitted.]
st31569
Well, the obvious answer is that nothing is wrong here: if the model is not suited for your data distribution, it simply won’t produce the desired results. Another thing: I think you should reframe your question, since if the loss increases then certainly the accuracy will decrease. That’s just my opinion, I may not be to the point here.
st31570
I tried different architectures as well, but the result is the same. And I don’t think I should reframe the question, as you can see from the screenshot.
st31571
@Lucky_Magna By reframing I meant that this is obvious: if the loss decreases, the accuracy will increase.
st31572
Can you check the initial loss of your model with random data? It should be around -ln(1/num_classes). If this value is close then it suggests that your model is initialized properly. The next thing to check would be that your data format as input to the model makes sense (e.g., from the perspective of data layout, etc.) From here, if your loss is not even going down initially, you can try simple tricks like decreasing the learning rate until it starts training. If the loss is going down initially but stops improving later, you can try things like more aggressive data augmentation or other regularization techniques.
st31573
@eqy The loss of the model with random data is very close to -ln(1/num_classes), as you mentioned. As for the data, it is in the right format. [Screenshot omitted.]
st31574
@eqy I changed the model from resnet34 to resnet18. The loss is stable, but the model is learning very slowly. The accuracy starts from around 25% and rises eventually, but in a very slow manner. It is taking around 10 to 15 epochs to reach 60% accuracy. I tried increasing the learning rate, but the results don’t differ that much.
st31575
Ok, that sounds normal. At this point I would see if there are any data augmentations that you can apply that make sense for your dataset, as well as other model architectures, etc.
st31576
@eqy Ok, let me explain the project I’m working on. I’m trying to classify pneumonia patients using X-ray images. Below are the transforms I’m currently using.

transform = {
    'train' : T.Compose([
        T.Resize(size=(224, 224)),
        T.RandomAffine(30),
        T.RandomInvert(p=1),
        T.RandomHorizontalFlip(),
        T.ToTensor(),
        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
    'valid' : T.Compose([
        T.Resize(size=(224, 224)),
        T.RandomInvert(p=1),
        T.ToTensor(),
        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])
}

Before you ask why I am using the Invert transform on the validation set: I think this transform is able to capture the pneumonia parts in the X-ray images, so I used it on the validation and test sets as well (if it is a bad idea then correct me). After applying the transforms the images look something like this: [three example transformed X-ray images omitted].
st31577
@eqy Solved it! I forgot to shuffle the dataset. It is overfitting to one class in the whole dataset. Thanks for the help though.
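For anyone landing here, the fix amounts to enabling shuffling on the training DataLoader. A minimal sketch, where the dataset names and batch size are placeholders rather than the original code:

    from torch.utils.data import DataLoader

    # Hypothetical datasets standing in for the train/valid ImageFolder objects
    train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)   # reshuffle the training set each epoch
    valid_loader = DataLoader(valid_dataset, batch_size=32, shuffle=False)  # no need to shuffle validation
    dataloaders = {'train': train_loader, 'valid': valid_loader}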
st31578
Nice. @Lucky_Magna Could you please share the performance of your final model? Like the training and validation losses plots and possibly accuracy plots as well. Thx.
st31579
@Nahil_Sobh I posted the code on my GitHub account; you can see the performance there: github.com/narayana8799/Pneumonia-Detection-using-Pytorch/blob/master/Pneumonia Detection.ipynb
st31580
Hey, I would like to know how I can concatenate two tensors like this:

t1 = torch.rand(2, 10, 512)
t2 = torch.rand(2, 768)

and get a tensor like this:

>>> torch.Size([2, 10, 1280])

Let’s assume that the shapes are:

t1_shape = (batch_size, sequence_len, embedding_dim)
t2_shape = (batch_size, embedding_dim)

I want to concatenate these tensors along embedding_dim, so every tensor along the sequence_len dimension will be concatenated with the same t2 tensor. As a solution I see:

t2 = t2.unsqueeze(1)
t2.size()
>>> torch.Size([2, 1, 768])
t2 = torch.cat((t2,) * 10, dim=1)
t2.size()
>>> torch.Size([2, 10, 768])
torch.cat((t1, t2), dim=2)

But I’m afraid this approach costs memory when duplicating the t2 tensor 10 times (in reality it can be much more). Is there any memory-efficient solution? Thank you!
st31581
Solved by tom in post #2.
st31582
You could use t2.unsqueeze(1).expand(-1, 10, -1). This will take no extra memory (before the cat, that is) as it is a view (print stride() and shape to see what is going on) and the 10-dimension is just stride 0 (i.e. all copies in the same location). Best regards Thomas
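For completeness, a runnable sketch of this suggestion using the shapes from the question:

    import torch

    t1 = torch.rand(2, 10, 512)
    t2 = torch.rand(2, 768)

    # expand returns a view with stride 0 along the new dimension, so no copy is made before the cat
    t2_expanded = t2.unsqueeze(1).expand(-1, t1.size(1), -1)   # shape: (2, 10, 768)
    out = torch.cat((t1, t2_expanded), dim=2)                  # shape: (2, 10, 1280)
    print(out.shape)  # torch.Size([2, 10, 1280])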
st31583
Hi all, can someone please show me how to train the CenterNet model for instance segmentation? I have understood the CenterNet model well, but I don’t understand how to train this model for instance segmentation. Can someone show me an example? Thanks in advance.
st31584
Hi all, can someone explain to me how to code the training part in PyTorch for the CenterNet model for instance segmentation? Thanks in advance.
st31585
Hi, I’m trying to do instance segmentation on small objects, like a baseball (30x30). I’ve tried training Mask R-CNN following the balloon example, but with little success. CenterNet does 28.9 APs, which is close to the state-of-the-art PANet. The idea is to take the detections, batch them, and feed them into a lightweight semantic segmentation model.
st31586
I’m just wondering why my Colab notebook crashes after running this code:

torch.cat([torch.mean(torch.tensor([1.])), torch.mean(torch.tensor([1.]))])

Can someone help with this?
st31587
This is because mean will implicitly reduce the first dimension and produce a tensor (really a scalar) without dimensions and cat expects at least a single dimension. You can work around this with torch.cat( [torch.mean(torch.tensor([1.]), dim=0, keepdim=True), torch.mean(torch.tensor([1.]), dim=0, keepdim=True)])
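Another common option (an alternative, not taken from this thread) is torch.stack, which adds a new dimension and therefore accepts zero-dimensional tensors directly:

    import torch

    out = torch.stack([torch.mean(torch.tensor([1.])), torch.mean(torch.tensor([1.]))])
    print(out)  # tensor([1., 1.])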
st31588
Hi, I have this neural net:

self.network = nn.Sequential(
    nn.Conv2d(in_channels=30, out_channels=1, kernel_size=(1, 1), stride=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(400, 1),
)

The input size is 100 rows x 30 columns x 1 x 1. Before passing it through the Linear layer, I need it to be a completely flattened 100-number array. squeeze() does the trick, but I can’t insert it into nn.Sequential for some reason. nn.Flatten() brings it down to 400 x 1, but I need it to be essentially just 400 with no trailing dimension for it to work. Thanks
st31589
So, I am trying to modify some of my code so that it supports the batch dimension. One of the lines uses torch.diagflat, and I was wondering what would be the batched version of it? I see there are the torch.diag and torch.diagonal functions, but it’s not clear if they replicate torch.diagflat.

import torch
x = torch.randn(2, 3)  # batch of 2
print(torch.diagflat(x).shape)  # size is torch.Size([6, 6]) instead of torch.Size([2, 3, 3])
st31590
Solved by tom in post #2.
st31591
You can use torch.diag_embed with torch.view(batch_size, -1) as the input. If your tensor is not necessarily contiguous, you can use torch.reshape instead of torch.view. Diagonals seem to be one of the bits of the numpy API that isn’t thought out terribly well w.r.t. being flexible/intuitive for use cases like batching (the default dimension behaviour of numpy.diagonal always seemed odd to me, too)… Best regards Thomas
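A short runnable sketch of that suggestion, using the shapes from the question:

    import torch

    x = torch.randn(2, 3)                            # batch of 2
    batched = torch.diag_embed(x)                    # one diagonal matrix per batch element
    print(batched.shape)                             # torch.Size([2, 3, 3])

    # for inputs with more trailing dimensions, flatten them first
    y = torch.randn(2, 3, 4)
    print(torch.diag_embed(y.reshape(2, -1)).shape)  # torch.Size([2, 12, 12])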
st31592
I am trying to get the intersection of 3 tensors using PyTorch. Is there any built-in function that can do tens = intersection(tens1, tens2, tens3), or should I go through NumPy or write my own function?
st31593
I have a little difficulty understanding what happens when we use the PyTorch cosine similarity function. Consider this example:

input1 = torch.abs(torch.randn(1, 2, 20, 20))
input2 = torch.abs(torch.randn(1, 2, 20, 20))
cos = nn.CosineSimilarity(dim=1, eps=1e-6)
output = cos(input1, input2)
print(output.size())
torch.Size([20, 20])

I was expecting to get an output of size 2x20x20. Can someone please explain to me why it is not like that? Moreover, is there a way to compute the cosine similarity for each channel separately and get an output of size 2x20x20? Thanks
st31594
Solved by ptrblck in post #11.
st31595
Based on the docs, the output will have the same shape as the inputs without the dim which is used to compute the similarity. So if you would like to use the batch dimension and get an output of [2, 20, 20], you could use dim=0.
st31596
@ptrblck I see, but why, when I define dim=1, does it give me torch.Size([20, 20])? How does it actually compute it? Also, the results are kinda weird when dim=0; it is just 1. I’m confused.
st31597
It should give you an output of shape [1, 20, 20], so that’s strange. Could you check it again please? The cosine similarity will be calculated between both tensors in the specified dimension. All other dimensions apparently act as additional storage and won’t be used in the calculation. You can also reshape your input tensors to [batch_size, 2] and will get the same result:

res1 = F.cosine_similarity(input1, input2, 1)
res2 = F.cosine_similarity(
    input1.permute(0, 2, 3, 1).view(-1, 2),
    input2.permute(0, 2, 3, 1).view(-1, 2), 1).view(1, 20, 20)
print((res1 == res2).all())

You can find the implementation here.
st31598
So when dim=1 it compute it along dimension 1 and consider all the channels together basically. That part makes sense. Though the output is still the size of [ 20, 20] not [1, 20, 20] maybe it is because im using pytorch 0.3.0? input1 = torch.abs(torch.randn(1,2,10, 10)) input2 = torch.abs(torch.randn(1,2,10, 10)) res1 = F.cosine_similarity(input1, input2, 1) print(res1) print(res1.size()) output: 0.9249 0.9581 0.9964 0.9384 1.0000 0.7050 0.7339 0.6200 0.3887 0.8247 0.8397 0.9821 0.8112 0.9300 0.9955 0.9970 0.9599 0.9871 0.9546 0.2274 0.9980 0.9903 0.9990 0.9977 0.5832 0.7850 0.9049 0.8266 0.9084 0.9682 0.9949 0.9895 0.8929 0.8659 0.7442 0.5848 0.9990 0.8466 0.9778 1.0000 0.9786 0.9972 0.5892 0.2555 0.6968 0.7367 0.9168 0.8906 0.8962 0.0922 0.8235 0.5739 0.5015 0.9879 0.5706 0.9696 0.9995 0.7057 0.9877 0.8018 0.8789 0.9820 0.7538 0.9882 0.9999 0.2345 0.7596 0.9877 0.9749 0.9463 0.9243 0.9671 0.7078 0.3916 1.0000 0.9979 0.9256 1.0000 0.9740 0.7148 0.9987 0.9342 0.2270 0.8224 0.9970 0.9744 0.8185 0.9213 0.8891 0.9911 0.9607 0.9490 0.9766 0.9463 0.7205 0.9997 0.9150 0.7641 0.5461 0.7848 [torch.FloatTensor of size 10x10] torch.Size([10, 10]) but the second part that i put dim=0 gives me all values equal to 1 and it does not make sense yet: i looked at the code, but can you please tell me one more time why it gives output of 1? res1 = F.cosine_similarity(input1, input2, 0) print(res1) print(res1.size()) output: (0 ,.,.) = 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 (1 ,.,.) = 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 [torch.FloatTensor of size 2x10x10] torch.Size([2, 10, 10])
st31599
Well, because this dimension has the shape 1, you are basically measuring the cosine similarity between points instead of vectors. This also means that the L2-norm in this dimension is just the absolute value, which results in all ones as the result. EDIT: Yeah, your older version might yield the different shape.
st31600
I see… Thank you for the clarification. Do you think there is a way to reshape the inputs to compute the similarity of each channel (let’s say our batch size is always 1) and get an output of torch.Size([2, 10, 10])?
st31601
Maybe there is a way, but let’s first clarify your use case. I’m not quite sure what the cosine similarity should calculate in this case. Assume we have two tensors with image dimensions [1, 2, 10, 10]. Now let’s say one tensor stores all ones (call it tensor y). The other consists of two [10, 10] slices, where one channel is also all ones, while the other is a linspace from 0 to 1 (call it tensor x). We can now see the channels as coordinates of vectors, i.e. each of the 100 pixels has two coordinates. While y has all [1, 1] vectors, x’s pixels have different vectors with values between [0, 1] and [1, 1]. We expect the cosine similarity output to be between sqrt(2)/2 = 0.7071 and 1. Let’s see an example:

x = torch.cat(
    (torch.linspace(0, 1, 10)[None, None, :].repeat(1, 10, 1),
     torch.ones(1, 10, 10)), 0)
y = torch.ones(2, 10, 10)
print(F.cosine_similarity(x, y, 0))

It seems to be working in this way. Now let’s talk a bit about your specific use case. If you expect an output of [2, 10, 10], the similarity should be calculated somehow elementwise. I’m not sure how you would like to use the cosine similarity for it. Could you explain what kind of information is stored in your images, pixels, channels, etc.? Maybe I’m just misunderstanding the issue completely.
st31602
Thank you for your reply. I totally agree with the first part of your explanation. Let me explain a little more about why I want to compute it like that. Here is what I am trying to do: let’s say I have feature maps x = torch.ones(1, 2, 10, 10) and y = torch.ones(1, 2, 10, 10). What I’m trying to do is to weight each feature channel in y based on the similarity of that channel with its corresponding channel in x. Of course I can use the cosine similarity for the whole x and y and just multiply each channel of y with that similarity via mul, but I feel like I should compute the similarity between the feature channels separately, meaning that channel 1 should be weighted with the similarity between x[0,0,:,:] and y[0,0,:,:], and channel 2 should be weighted with the similarity between x[0,1,:,:] and y[0,1,:,:]. Please let me know if it is not clear.
st31603
Thanks for the explanation. In this case, would you want to use the 10x10 pixels as the vector to calculate the cosine similarity? Each channel would therefore hold a 100-dimensional vector pointing somewhere and you could calculate the similarity between the channels.

a = torch.randn(1, 2, 10, 10)
b = torch.randn(1, 2, 10, 10)
F.cosine_similarity(a.view(1, 2, -1), b.view(1, 2, -1), 2)
> tensor([[-0.0755, 0.0896]])

Now you could use these two values to weight your channels. Would that make sense?
st31604
ptrblck:
a = torch.randn(1, 2, 10, 10)
b = torch.randn(1, 2, 10, 10)
F.cosine_similarity(a.view(1, 2, -1), b.view(1, 2, -1), 2)
> tensor([[-0.0755, 0.0896]])

@ptrblck: I am using torch.bmm to compute the cosine distance. It returns the full NxN matrix, while nn.CosineSimilarity returns only a vector of entries from that matrix. How can I use nn.CosineSimilarity to get the full cosine matrix, as torch.bmm does? This is my code:

input1 = torch.randn(2, 4, 4)
input2 = torch.randn(2, 4, 4)

# Using bmm
x_norm = input1 / torch.norm(input1, p=2, dim=1, keepdim=True)
y_norm = input2 / torch.norm(input2, p=2, dim=1, keepdim=True)
cosine_sim = torch.bmm(x_norm.transpose(2, 1), y_norm)
print('Using bmm: \n', cosine_sim)

# Pytorch built-in
cos = nn.CosineSimilarity(dim=1, eps=1e-6)
cosine_sim = cos(input1, input2)
print('Using nn: \n', cosine_sim)

The output is:

Using bmm:
tensor([[[-0.0230, 0.2983, 0.0487, 0.3974],
         [-0.5747, 0.5513, -0.6436, -0.1389],
         [-0.3876, -0.2107, 0.7093, -0.4929],
         [-0.3446, -0.5347, 0.6372, -0.6423]],
        [[-0.3842, -0.0349, 0.1621, 0.6400],
         [ 0.6776, -0.4812, -0.3169, -0.7976],
         [-0.5251, -0.1258, 0.9381, -0.2379],
         [-0.1517, 0.7164, 0.8332, 0.1668]]])
Using nn:
tensor([[-0.0230, 0.5513, 0.7093, -0.6423],
        [-0.3842, -0.4812, 0.9381, 0.1668]])

Do you know how to use nn.CosineSimilarity and achieve a similar result as torch.bmm? I cannot use torch.bmm because of a CUDA memory error.
st31605
I am using cosine similarity to check the similarity of sentence embeddings. I have 200 texts in each of two sets, and I am getting the embeddings from a model. The size of each embedding is (200, 52, 784). Now, when I use cosine similarity it returns me a tensor of size (200, 784). But what I want is a single percentage value which represents the total similarity between these two sets. How can I do that?
st31606
How did you do what you wanted to do? I have to do a similar thing, and the paper I am implementing says to just select the argmax (n vectors which are the output of cosine similarity; each of these vectors is an array of tensors).
st31607
class MODEL(nn.Module):
    def __init__(self, attention_heads=4, attention_size=32, out_size=4):
        super(MODEL, self).__init__()
        self.conv1a = nn.Conv2d(kernel_size=(10, 2), in_channels=1, out_channels=16, padding=(4, 0))
        self.conv1b = nn.Conv2d(kernel_size=(2, 8), in_channels=1, out_channels=16, padding=(0, 3))
        self.conv2 = nn.Conv2d(kernel_size=(3, 3), in_channels=32, out_channels=32, padding=(1, 1))
        self.conv3 = nn.Conv2d(kernel_size=(3, 3), in_channels=32, out_channels=48, padding=(1, 1))
        self.conv4 = nn.Conv2d(kernel_size=(3, 3), in_channels=48, out_channels=64, padding=(1, 1))
        self.conv5 = nn.Conv2d(kernel_size=(3, 3), in_channels=64, out_channels=80, padding=(1, 1))
        self.maxp = nn.MaxPool2d((2, 2))
        self.bn1a = nn.BatchNorm2d(3)
        self.bn1b = nn.BatchNorm2d(3)
        self.bn2 = nn.BatchNorm2d(3)
        self.bn3 = nn.BatchNorm2d(3)
        self.bn4 = nn.BatchNorm2d(3)
        self.bn5 = nn.BatchNorm2d(3)
        self.gap = nn.AdaptiveAvgPool2d(1)  # (data_format='channels_last')
        self.flatten = nn.Flatten()
        self.fc = nn.Linear(10000, out_size)
        self.attention_query = []
        self.attention_key = []
        self.attention_value = []
        self.attention_heads = attention_heads
        self.attention_size = attention_size
        for i in range(self.attention_heads):
            self.attention_query.append(nn.Conv2d(in_channels=80, out_channels=self.attention_size, kernel_size=1))
            self.attention_key.append(nn.Conv2d(in_channels=80, out_channels=self.attention_size, kernel_size=1))
            self.attention_value.append(nn.Conv2d(in_channels=80, out_channels=self.attention_size, kernel_size=1))

    def call(self, *input):
        x = input[0]
        xa = self.conv1a(x)
        xa = self.bn1a(xa)
        xa = nn.relu(xa)
        xb = self.conv1b(x)
        xb = self.bn1b(xb)
        xb = nn.relu(xb)
        x = nn.concat([xa, xb], 1)
        x = self.conv2(x)
        x = self.bn2(x)
        x = nn.relu(x)
        x = self.maxp(x)
        x = self.conv3(x)
        x = self.bn3(x)
        x = nn.relu(x)
        x = self.maxp(x)
        x = self.conv4(x)
        x = self.bn4(x)
        x = nn.relu(x)
        x = self.conv5(x)
        x = self.bn5(x)
        x = nn.relu(x)
        attn = None
        for i in range(self.attention_heads):
            # Q = self.attention_query[i](x)
            # Q = tf.transpose(Q, perm=[0, 3, 1, 2])
            # K = self.attention_key[i](x)
            # K = tf.transpose(K, perm=[0, 3, 2, 1])
            # V = self.attention_value[i](x)
            # V = tf.transpose(V, perm=[0, 3, 1, 2])
            # attention = tf.nn.softmax(tf.matmul(Q, K))
            # attention = tf.matmul(attention, V)
            Q = self.attention_query[i](x)
            K = self.attention_key[i](x)
            V = self.attention_value[i](x)
            attention = nn.Softmax(torch.matmul(Q, K), dim=1)
            attention = torch.matmul(attention, V)
            if (attn is None):
                attn = attention
            else:
                attn = nn.concat([attn, attention], 2)
        x = torch.transpose(attn, perm=[0, 2, 3, 1])
        x = nn.relu(x)
        x = self.gap(x)
        x = self.flatten(x)
        x = self.fc(x)
        return x

raise NotImplementedError

I do not understand why I am getting this error; my input is a 2D structure of size (23, 63).
st31608
Solved by ptrblck in post #2.
st31609
I guess this error is raised during the forward pass, since the forward method definition is missing, so you would have to change call to forward.
st31610
@ptrblck Sir, I did that. But for the input

I = torch.randn(32, 1, 23, 63)
model = MODEL(4, 32, 4)
O = model(I)

I am now getting an error at the line

attention = nn.Softmax(torch.matmul(Q, K), dim=1)

The error is: RuntimeError: Expected batch2_sizes[0] == bs && batch2_sizes[1] == contraction_size to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.) Please guide.
st31611
This error is raised in the torch.matmul call, which gets tensors in a wrong shape. Print the shape of both input tensors and make sure they have the expected shapes.
st31612
Sir, the shapes are Q = torch.Size([32, 32, 5, 15]) and K = torch.Size([32, 32, 5, 15]), and yes, this size is inappropriate for matrix multiplication. I don’t know what to do now.
st31613
You are confusing the shapes with the values in a tensor. The first example won’t work, as the shapes are invalid for a matrix multiplication:

A = torch.randn([32, 32, 5, 15])
B = torch.randn([32, 32, 5, 15])
C = torch.matmul(A, B)
> RuntimeError: Expected batch2_sizes[0] == bs && batch2_sizes[1] == contraction_size to be true, but got false.

So you would have to permute one of the tensors depending on the expected output shape:

C = torch.matmul(A.permute(0, 1, 3, 2), B)
print(C.shape)
> torch.Size([32, 32, 15, 15])

# or
C = torch.matmul(A, B.permute(0, 1, 3, 2))
print(C.shape)
> torch.Size([32, 32, 5, 5])

Your second example was using just 4 values and is thus returning a scalar.
st31614
Yes sir, I got it, but now please guide me on how to multiply this softmax output with another matrix; it says it’s not possible. But this is my definition of the attention module, @ptrblck sir.
st31615
ptrblck: torch.matmul(A.permute(0, 1, 3, 2), B)

Now I am getting this error:

TypeError: matmul(): argument 'input' (position 1) must be Tensor, not Softmax

at the line

attention = torch.matmul(attention, V.permute(0, 1, 3, 2))

This is for the second matmul() call.
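For what it’s worth (this is not confirmed in the thread): the TypeError says that attention is an nn.Softmax module object rather than a tensor. nn.Softmax is a module class whose constructor only takes a dim argument, so constructing it around a tensor does not apply softmax to that tensor. A minimal sketch of the functional form, using the shapes mentioned above:

    import torch
    import torch.nn.functional as F

    Q = torch.randn(32, 32, 5, 15)
    K = torch.randn(32, 32, 5, 15)
    V = torch.randn(32, 32, 5, 15)

    scores = torch.matmul(Q, K.permute(0, 1, 3, 2))   # (32, 32, 5, 5)
    attention = F.softmax(scores, dim=-1)             # applies softmax and returns a tensor
    out = torch.matmul(attention, V)                  # (32, 32, 5, 15)
    print(out.shape)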
st31616
I have the DataLoader class for 1 image and labels:

class CoronalDataset(Dataset):
    def __init__(self, pet_type='interim', train=True, transform=None):
        train_valid = 'train' if train else 'valid'
        # Create the Tensor from the numpy array
        self.x = torch.from_numpy(np.load(f'{DATASET_DIR}/{pet_type}/rgb/x_{train_valid}_coronal.npy')).permute(0, 3, 1, 2).type(torch.FloatTensor)
        print("Shape of X: ", self.x.shape)
        print(self.x.dtype)
        self.y = torch.from_numpy(np.load(f'{DATASET_DIR}/{pet_type}/rgb/y_{train_valid}_123_45.npy'))
        print("Shape of Y:", self.y.unique(return_counts=True))
        labellist = self.y.tolist()
        result = sorted([(x, labellist.count(x)) for x in [[1.0, 0.0], [0.0, 1.0]]], key=lambda y: y[1])
        for elem in result:
            print('{} {}'.format(elem[0], elem[1]))
        print(self.y.dtype)
        self.len = len(self.x)
        self.transform = transform

    def __getitem__(self, index):
        sample = self.x[index], self.y[index]
        if self.transform:
            sample = self.transform(sample1)
        return sample

    def __len__(self):
        return self.len
st31617
Solved by ptrblck in post #4.
st31618
Could you describe your use case a bit more and in particular what kind of class you would like to implement? Would it be a model with two inputs or e.g. the transformation using two image tensors?
st31619
In that case you could write a custom model and use these two inputs directly in the forward method. Here is a small example:

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 10)
        self.fc2 = nn.Linear(10, 10)

    def forward(self, x1, x2):
        x1 = self.fc1(x1)
        x2 = self.fc2(x2)
        x = x1 + x2
        return x

model = MyModel()
x1 = torch.randn(1, 10)
x2 = torch.randn(1, 10)
out = model(x1, x2)
st31620
Is this right, sir?

class ResNet(nn.Module):
    def __init__(self, block, num_blocks, num_classes=10):
        super(ResNet, self).__init__()
        self.in_planes = 64
        self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
        self.conv2 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.layer1 = self._make_layer(block, 64, num_blocks[0], stride=1)
        self.layer2 = self._make_layer(block, 128, num_blocks[1], stride=2)
        self.layer3 = self._make_layer(block, 256, num_blocks[2], stride=2)
        self.layer4 = self._make_layer(block, 512, num_blocks[3], stride=2)
        self.linear = nn.Linear(512*block.expansion, num_classes)

    def _make_layer(self, block, planes, num_blocks, stride):
        strides = [stride] + [1]*(num_blocks-1)
        layers = []
        for stride in strides:
            layers.append(block(self.in_planes, planes, stride))
            self.in_planes = planes * block.expansion
        return nn.Sequential(*layers)

    def forward(self, x, y):
        out1 = F.relu(self.bn1(self.conv1(x)))
        out2 = F.relu(self.bn1(self.conv2(y)))
        out = self.layer1(out)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.layer4(out)
        out = F.avg_pool2d(out, 4)
        out = out.view(out.size(0), -1)
        out = self.linear(out)
        return out
st31621
No, since out1 and out2 are not used, while the undefined out is used instead, which would raise an error.
st31622
This could work assuming that both have the same shape or can be broadcasted. To quickly check your model, you can create random tensors and execute the forward as well as backward pass.
st31623
Dear Sir, your suggestion is right. But how can I create the DataLoader class for an input that has 2 images?
st31624
You can write a custom Dataset as described here and return the two images in __getitem__.
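A minimal sketch of such a Dataset; the pair layout and load_image helper are placeholders for whatever loading you actually use:

    from torch.utils.data import Dataset

    class TwoImageDataset(Dataset):
        def __init__(self, pairs, transform=None):
            # pairs: hypothetical list of (path_img1, path_img2, label) tuples
            self.pairs = pairs
            self.transform = transform

        def __len__(self):
            return len(self.pairs)

        def __getitem__(self, index):
            path1, path2, label = self.pairs[index]
            img1 = load_image(path1)   # placeholder loader (PIL, numpy, ...)
            img2 = load_image(path2)
            if self.transform:
                img1 = self.transform(img1)
                img2 = self.transform(img2)
            return img1, img2, label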
st31625
Dear sir, I have a problem: "mat1 dim 1 must match mat2 dim 0". My inputs are 2 images of shape (3, 224, 224) and (3, 224, 224), and the network (I used ResNet18) is:

'''ResNet in PyTorch.'''
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, in_planes, planes, stride=1):
        super(BasicBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.shortcut = nn.Sequential()
        if stride != 1 or in_planes != self.expansion*planes:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, self.expansion*planes, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(self.expansion*planes)
            )

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out += self.shortcut(x)
        out = F.relu(out)
        return out

class Bottleneck(nn.Module):
    expansion = 4

    def __init__(self, in_planes, planes, stride=1):
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv3 = nn.Conv2d(planes, self.expansion * planes, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(self.expansion*planes)
        self.shortcut = nn.Sequential()
        if stride != 1 or in_planes != self.expansion*planes:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, self.expansion*planes, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(self.expansion*planes)
            )

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = F.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        out += self.shortcut(x)
        out = F.relu(out)
        return out

class ResNet(nn.Module):
    def __init__(self, block, num_blocks, num_classes=10):
        super(ResNet, self).__init__()
        self.in_planes = 64
        self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
        self.conv2 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.layer1 = self._make_layer(block, 64, num_blocks[0], stride=1)
        self.layer2 = self._make_layer(block, 128, num_blocks[1], stride=2)
        self.layer3 = self._make_layer(block, 256, num_blocks[2], stride=2)
        self.layer4 = self._make_layer(block, 512, num_blocks[3], stride=2)
        self.linear = nn.Linear(512*block.expansion, num_classes)

    def _make_layer(self, block, planes, num_blocks, stride):
        strides = [stride] + [1]*(num_blocks-1)
        layers = []
        for stride in strides:
            layers.append(block(self.in_planes, planes, stride))
            self.in_planes = planes * block.expansion
        return nn.Sequential(*layers)

    def forward(self, x, y):
        out1 = F.relu(self.bn1(self.conv1(x)))
        out2 = F.relu(self.bn1(self.conv2(y)))
        out = out1 + out2
        out = self.layer1(out)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.layer4(out)
        out = F.avg_pool2d(out, 4)
        out = out.view(out.size(0), -1)
        out = self.linear(out)
        return out

def ResNet18():
    return ResNet(BasicBlock, [2, 2, 2, 2])
st31626
I’m trying to use einsum in my implementation of losses (e.g. Dice loss) for network training. So far I trained on 2D images and it worked nicely, but now I tried to extend the code to also work on 3D data. As I want the same code to work on both 2D and 3D, I went, for example, from using torch.einsum('bkwh,bkwh->bk') to torch.einsum('bk...,bk...->bk'), basically using an ellipsis for the unknown number of dimensions. The desired output is still the same though: elementwise multiplication and then summation over all but the first two dimensions. So with this syntax I want to do the summation over the ellipsis, which according to the documentation should work (torch.einsum — PyTorch 1.8.1 documentation; see the note: "torch.einsum handles ellipsis ('...') differently from NumPy in that it allows dimensions covered by the ellipsis to be summed over, that is, ellipsis are not required to be part of the output."). But for some reason it doesn’t work for me; when I use this particular operation on two tensors, both of size [32, 7, 24, 24, 24] for example, it throws an error: "RuntimeError: shape '[32, 7]' is invalid for input of size 3096576". So it would seem like it doesn’t know how to sum over the ellipsis after all? Or am I somehow using it wrong?
st31627
I get the following result, and it seems to be working fine:

torch.einsum('bk..., bk... -> bk', torch.randn(32, 7, 24, 24, 24), torch.randn(32, 7, 24, 24, 24)).shape
Out[26]: torch.Size([32, 7])

Maybe you need to check your tensors’ shapes.
st31628
Hi Eva (and Lart)!

Eva: "torch.einsum('bk...,bk...->bk') … it throws an error: RuntimeError: shape '[32, 7]' is invalid for input of size 3096576"

Please check your pytorch version – this looks like an einsum() bug that got fixed somewhere along the line. (I’m not sure exactly when.) Lart’s example works for me with a 1.8 nightly build, version '1.8.0.dev20210117', but fails – giving your error – with an older 1.7 stable build, version '1.7.1'. If you’re not using the current stable build (1.8.1), you might try upgrading to that to see if things work. (I haven’t tried it on the 1.8.1 stable build, but the 1.8.0 nightly version suggests that the stable build should work.)

Best. K. Frank
st31629
Is it possible to use both the CPU and GPU during training to avoid the error below?

CUDA error: out of memory

I have a CPU with 32 GB RAM and a GPU with 8 GB RAM. When I train a smaller network with batch size 4, it is OK. But I cannot increase the batch size, because it runs into CUDA out of memory. In this case, it uses just 20% of the CPU and all of the GPU capacity. Is it possible to keep the data on the CPU and the model on the GPU? How? Thanks
st31630
Training on a CPU would be very slow if you’re doing deep learning stuff. Perhaps you might want to try some other approaches, such as mixed-precision training?
st31631
I mean the model trains on the GPU and the loss is also calculated on the GPU, but reading the data and building batches happens on the CPU? I don’t know about mixed-precision training; what is it used for? I just want to solve the CUDA out of memory error so I can train with a batch size larger than 4!
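As an aside on the mixed-precision suggestion above: it runs parts of the forward and backward pass in float16, which usually reduces activation memory and can make room for a larger batch size. A minimal sketch of the usual torch.cuda.amp pattern, where model, optimizer, loss and train_loader stand in for your own objects:

    import torch

    scaler = torch.cuda.amp.GradScaler()

    for x, y in train_loader:                     # hypothetical DataLoader
        x, y = x.cuda(), y.cuda()
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():
            preds = model(x)                      # forward pass runs in mixed precision
            cost = loss(preds, y)
        scaler.scale(cost).backward()             # scale the loss to avoid float16 underflow
        scaler.step(optimizer)
        scaler.update()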
st31632
Hello! I created, trained and saved my model MyModel in Jupyter as 'mymodel.pt'. But then I want to load it in another application. To do this, I created the same class in the file where I load it:

class MyModel(nn.Module):
    def __init__(self):
        ...
    def forward(self, x):
        ...

This file is the file with the predictor class, which is run through uvicorn. And when I run torch.load(path_to_pt), I see an error that there is no class MyModel:

File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 851, in _load
    result = unpickler.load()
AttributeError: Can't get attribute 'MyModel' on <module '__main__' from '/usr/local/bin/uvicorn'>

I understand that the MyModel class is named differently here:

>>> print(MyModel)  # <class 'classification.my_model.MyModel'>

And I understand that this shouldn’t be a problem when working through env, but is there any way I can solve this problem without going to env?
st31633
Solved by ptrblck in post #2.
st31634
It seems you’ve stored the model directly (not its state_dict), so I think you would need to make sure the folder structure etc. on the inference application is the same as was used during training. The recommended way would be to store the state_dict of the model instead, recreate the model object in another script, and load its state_dict later, which would avoid these pickling issues.
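A short sketch of that recommended workflow; the file name is a placeholder:

    import torch

    # training script: save only the parameters
    torch.save(model.state_dict(), 'mymodel_state.pt')

    # inference script: recreate the model class, then load the parameters
    model = MyModel()                                            # the class must be importable here
    state = torch.load('mymodel_state.pt', map_location='cpu')   # or any target device
    model.load_state_dict(state)
    model.eval()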
st31635
Hello, I have an ImageFolder object with data points from 3 unbalanced classes, and I want to randomly choose n points from each class, where n is the minimum class count, and then split the new dataset into a training and a validation set (either keeping the proportions or randomly). I have this code, but I am not sure if I am doing it correctly.

dataset = datasets.ImageFolder(image_dir, transform=transformations)
images_label = {image[0]: image[1] for image in dataset.imgs}
class_counts = {}
for image_id in images_label.keys():
    label = images_label[image_id]
    class_counts[label] = class_counts.get(label, 0) + 1

class_weights = list(class_counts.values())
class_weights /= np.sum(class_weights)
sampler = torch.utils.data.sampler.WeightedRandomSampler(class_weights, sum(class_counts.values()))
data_loader = DataLoader(dataset, sampler=sampler)

train_length = int(0.8 * len(data_loader))
test_length = len(data_loader) - train_length
train_dataset, test_dataset = torch.utils.data.random_split(data_loader.dataset, (train_length, test_length))
dataloader_train = torch.utils.data.DataLoader(train_dataset)
dataloader_test = torch.utils.data.DataLoader(test_dataset)
st31636
In your current approach it seems that you are using the class_counts to create a WeightedRandomSampler, while each sample should get a weight as described in this post. I’m also unsure if each batch should contain at least n samples from each class, or the dataset splits. In the former case, you could write a custom sampler (and remove the WeightedRandomSampler) such that indices are sampled using your condition.
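A sketch of the per-sample weighting, assuming a reasonably recent torchvision where ImageFolder exposes a .targets list; dataset is the ImageFolder from the question:

    import numpy as np
    import torch
    from torch.utils.data import DataLoader, WeightedRandomSampler

    targets = np.array(dataset.targets)              # one class index per sample
    class_counts = np.bincount(targets)
    class_weights = 1.0 / class_counts               # rarer classes get larger weights
    sample_weights = class_weights[targets]          # one weight per sample, not per class

    sampler = WeightedRandomSampler(
        weights=torch.as_tensor(sample_weights, dtype=torch.double),
        num_samples=len(sample_weights),
        replacement=True)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)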
st31637
Hello all, I’ve been using PyTorch for a while and I’m wondering about memory usage. The case is simple: 1 Windows machine with 1 GPU. When I’m training a model I see that the RAM and GPU usage increase, but not to their full potential. GPU usage doesn’t pass 10%, for example. To have a faster training session, would it make sense to push the hardware to the limit? And if so, how? Is it just a matter of using more batches? Are there any settings in PyTorch controlling this? This is my first post, so if there are any formatting issues with the question please let me know. Best
st31638
For example, I use numpy: [screenshot of the numpy code omitted]. Then I would like to do it in PyTorch, but it failed; I do not know how to solve it. [Screenshot of the PyTorch attempt omitted.]
st31639
You can use torch.topk. It returns a tuple with the maximum entries (like sort) and their indices (like argsort). Best regards Thomas
st31640
Hey @tom, torch.topk is significantly slower than torch.sort. I’m running it on a tensor of dimension 300k and taking k=200 out. Sample code:

import torch
x = torch.rand(300000, device='cuda')

%timeit torch.topk(x, 200)
# 1000 loops, best of 5: 2.96 ms per loop

%timeit torch.sort(x, descending=True)[0][:200]
# 1000 loops, best of 5: 812 µs per loop

Can you please help me here? Regards, Nikhil
st31641
So what happens is that torch.sort (for large tensors if memory serves me well) goes to the Thrust library’s sort which is likely better optimized than PyTorch’s own kernels. Note that there are memory implications of using sort: The tensor you get from it is a view of the sorted 300k element tensor. Depending on what you do (in particular when involving autograd), this can be a disadvantage. Best regards Thomas
st31642
Hi! I found that torch.softmax causes a GPU memory leak. My PyTorch version is 1.8.1+cu111. When I run the code below:

import torch
from torch import nn
from torch.nn import functional as F
from torch import cuda

def test(inp):
    w = torch.rand([32, 1, 1, 1], device='cuda')
    y = torch.softmax(F.conv2d(inp, w), 1)
    y = F.conv_transpose2d(y, w)
    return y

imgs = torch.zeros([128, 1, 512, 512], device='cuda')
outp = test(imgs)
# del outp
cuda.empty_cache()
print(cuda.memory_summary())

The output is: [memory summary screenshot omitted]

After the function returns its result, the variables inside the function should be released, but they are not. In addition, if you delete the variable outp, the redundant occupied memory is released. For comparison, I wrote a softmax myself and it does not show this memory-leak behaviour. The code is below:

import torch
from torch import nn
from torch.nn import functional as F
from torch import cuda

def softmax(x, dim):
    ex = torch.exp(x)
    return ex / torch.sum(ex, dim, keepdim=True)

def test(inp):
    w = torch.rand([32, 1, 1, 1], device='cuda')
    y = softmax(F.conv2d(inp, w), 1)
    y = F.conv_transpose2d(y, w)
    return y

imgs = torch.zeros([128, 1, 512, 512], device='cuda')
outp = test(imgs)
cuda.empty_cache()
print(cuda.memory_summary())

The output: [memory summary screenshot omitted]

I don’t know what mechanism is behind the non-releasable memory. If it’s a bug, please fix it as soon as possible. Thanks.

Another weird phenomenon! I found that as long as I multiply by a constant immediately after the first convolution, this kind of memory leak occurs. If you multiply by the constant right before feeding into conv_transpose2d, or do not multiply by the constant at all, the memory leak disappears. The code and results for the three cases are as follows:

import torch
from torch import nn
from torch.nn import functional as F
from torch import cuda

def test(inp):
    w = torch.rand([32, 1, 1, 1], device='cuda')
    a = F.conv2d(inp, w) * 5
    y = F.conv_transpose2d(a, w)
    return y

imgs = torch.zeros([128, 1, 512, 512], device='cuda')
outp = test(imgs)
cuda.empty_cache()
print(cuda.memory_summary())

[memory summary screenshot omitted]

import torch
from torch import nn
from torch.nn import functional as F
from torch import cuda

def test(inp):
    w = torch.rand([32, 1, 1, 1], device='cuda')
    a = F.conv2d(inp, w)              # Only modified here
    y = F.conv_transpose2d(a * 5, w)  # Only modified here
    return y

imgs = torch.zeros([128, 1, 512, 512], device='cuda')
outp = test(imgs)
cuda.empty_cache()
print(cuda.memory_summary())

[memory summary screenshot omitted]

import torch
from torch import nn
from torch.nn import functional as F
from torch import cuda

def test(inp):
    w = torch.rand([32, 1, 1, 1], device='cuda')
    a = F.conv2d(inp, w)
    y = F.conv_transpose2d(a, w)
    return y

imgs = torch.zeros([128, 1, 512, 512], device='cuda')
outp = test(imgs)
cuda.empty_cache()
print(cuda.memory_summary())

[memory summary screenshot omitted]

Please help me!!
st31643
I don’t know if this is a bug because in your example outp is still in scope and should not be deleted. However, it is interesting that the native implementation of softmax causes the computation to tie up more memory.
st31644
del outp was commented out in the code. I just did not post the result when outp was deleted.
st31645
I don’t know why outp takes up so much non-releasable memory when softmax is used, or even just multiply a constant, as I illustrated above.
st31646
This isn’t a memory leak, as the memory is still available within PyTorch. Roughly speaking, PyTorch allocates CUDA memory in chunks and then puts tensors in them. It can only "return" (with empty_cache) chunks that are completely unused (i.e. don’t have any tensors in them). What happens here is that, with the additional intermediate, the data of outp sits inside the 4GB allocation that is mostly free and available to PyTorch but cannot be returned. You can get a glimpse of this by calling torch.cuda.memory_snapshot (its documentation also links to a brief note on memory management, e.g. how to avoid caching allocations for debugging):

print(torch.cuda.memory_snapshot())

Best regards Thomas
st31647
Thank you very much for your answer! But I still don’t quite understand. After the function returns, why are the intermediate variables inside the function not released completely? After all, I only need the tensor returned by the function, and I don’t need anything else. And this doesn’t always happen. For example, the penultimate and the antepenultimate examples I gave, just because the timing of multiplying the constant is different, the function takes 4GB more memory after returning.
st31648
I think the memory summary is confusing here. Are you seeing 4GB more used, or 4GB that is used and not available for allocations? A litmus test here would be to keep allocating tensors after the function returns and see how many succeed before an OOM error is thrown.
st31649
It is true that the non-releasable memory can be reallocated to a new tensor, but when the new tensor is large, the non-releasable memory cannot be combined with the released memory into one whole block of memory to hold the new tensor. With 14GB of available video memory (Colab), the following two examples show this situation:

import torch
from torch import nn
from torch.nn import functional as F
from torch import cuda

def test(inp):
    w = torch.rand([32, 1, 1, 1], device='cuda')
    a = F.conv2d(inp, w) * 5
    # y = F.conv_transpose2d(a, w)
    # return y

outp = test(torch.zeros([128, 1, 512, 512], device='cuda'))
cuda.empty_cache()
print(cuda.memory_summary())
a = torch.zeros(11, 256, 1024, 1024, device='cuda')
print(cuda.memory_summary())

import torch
from torch import nn
from torch.nn import functional as F
from torch import cuda

def test(inp):
    w = torch.rand([32, 1, 1, 1], device='cuda')
    a = F.conv2d(inp, w)              # Only modified here
    y = F.conv_transpose2d(a * 5, w)  # Only modified here
    return y

outp = test(torch.zeros([128, 1, 512, 512], device='cuda'))
cuda.empty_cache()
print(cuda.memory_summary())
a = torch.zeros(11, 256, 1024, 1024, device='cuda')
print(cuda.memory_summary())

How can I solve this problem?
st31650
You are right, the non-releasable memory can indeed be reused by PyTorch, but it is not shared with the released memory. When the newly defined tensor is large, the non-releasable memory cannot be used together with the released memory at the same time.
st31651
So memory fragmentation (which is likely more precise than memory leak to describe the situation) is a thing with the caching allocator. If you wanted, you could set PYTORCH_NO_CUDA_MEMORY_CACHING=1 to get around this, at the expense of doing all allocations/deallocations through CUDA.
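If it helps to see the distinction, torch.cuda.memory_allocated reports memory held by live tensors while torch.cuda.memory_reserved reports what the caching allocator keeps around; a small sketch:

    import torch

    print(torch.cuda.memory_allocated() / 1024**2, 'MB allocated (live tensors)')
    print(torch.cuda.memory_reserved() / 1024**2, 'MB reserved by the caching allocator')

    torch.cuda.empty_cache()   # returns only completely unused blocks to the driver
    print(torch.cuda.memory_reserved() / 1024**2, 'MB reserved after empty_cache')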
st31652
I have a neural network with input size = 1000, hidden = 500, and output = 2 neurons for classification. I am trying to do transfer learning, and my pretrained model has input = 1000, hidden = 500, and output = 200 neurons. However, when I try to initialize the classification model’s parameters with the learnt parameters, there is no error about the different number of output-layer neurons. [Two screenshots of the initialization code omitted.] There is no error above when initializing by directly setting the data. However, when I initialize with load_state_dict(), the correct behavior occurs. Below: [Screenshot omitted.]
st31653
Solved by ptrblck in post #4.
st31654
You are manually overriding the .data attribute, which won’t trigger any shape checks, so it would be your responsibility to make sure the parameters are assigned properly using this approach. With that being said, note that you should generally not use the .data attribute, as it could yield unwanted side effects. Instead, if you want to manually assign the parameters, wrap the assignment in a with torch.no_grad() block and use the .weight and .bias attributes.
st31655
Thank you for your response. When I initialize the weights with the incorrect number of neurons, that does not change the current classification network’s shape. For example, the classification network is shaped 1000-500-2 and the weights that I am initializing the model with is 1000-500-200. When I use .data to initialize, the resulting shape remains 1000-500-2. According to your response, shouldn’t it change the shape to 1000-500-200? Or am I understanding it vice versa?
st31656
The model will change, but since you’ve manipulated the internal parameters manually, the in_features and other attributes won’t be changed, as seen here:

# default setup
model = nn.Linear(10, 10, bias=False)
print(model)
> Linear(in_features=10, out_features=10, bias=False)

x = torch.randn(1, 10)
out = model(x)
print(out.shape)
> torch.Size([1, 10])

# manual manipulation
with torch.no_grad():
    model.weight = nn.Parameter(torch.randn(1, 10))
print(model)
> Linear(in_features=10, out_features=10, bias=False)  # wrong, as you've manually manipulated the parameter

out = model(x)
print(out.shape)
> torch.Size([1, 1])  # new, expected output shape

I would generally not recommend manipulating internals manually unless you are sure that’s the right approach.
st31657
Hi, I am confused about the order of execution of net.cuda() and net.load_state_dict(). Will the following two execution orders produce the same result?

# method 1
net = model(args)
net.cuda()
net.load_state_dict(checkpoints)

# method 2
net = model(args)
net.load_state_dict(checkpoints)
net.cuda()
st31658
I never tried it, but I think they should produce the same results. I generally load the state dict first and then send the model to the CUDA device, as shown here in the documentation.
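If in doubt, you can verify that both orders end up with identical parameters on the GPU; a quick sketch, reusing the placeholder model(args) constructor and checkpoints dict from the question:

    import torch

    net1 = model(args)
    net1.cuda()
    net1.load_state_dict(checkpoints)

    net2 = model(args)
    net2.load_state_dict(checkpoints)
    net2.cuda()

    same = all(torch.equal(p1, p2) for p1, p2 in zip(net1.parameters(), net2.parameters()))
    print(same)  # expected: True, and the parameters end up on the GPU in both cases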
st31659
Hi, I’m using torch 1.8.1+cu101. I train the model on GPU and save it using torch.save(…), then load it back on CPU using torch.load(…, map_location='cpu'). But the prediction result on CPU is totally different from that on GPU. I then checked the loaded model parameters, and the parameters are different on CPU and on GPU. Why is that? I’ve already seen the topic "On a cpu device, how to load checkpoint saved on gpu device" (which covers the error "AssertionError: Torch not compiled with CUDA enabled"), but it still doesn’t work. Thanks
st31660
Could you post the model definition as well as a minimal code snippet to reproduce the issue of the non-matching parameters, please?
st31661
Quick question: I want to use nn.Identity as a placeholder, like:

layer_one = nn.Linear(input_size, hidden_size) if use_linear else nn.Identity()

However, I am worried that during backprop the Identity weight values will be updated. Will they? Thanks, Kevin
st31662
The Identity layer shouldn’t have any weights that could be updated.

import torch.nn
>>> a = torch.nn.Linear(100, 100)
>>> a.weight
tensor([[-0.0092, -0.0083, -0.0101, ..., -0.0416, 0.0169, -0.0232],
        [ 0.0707, 0.0684, -0.0826, ..., -0.0583, -0.0801, -0.0349],
        [ 0.0531, 0.0917, -0.0934, ..., 0.0632, -0.0696, -0.0597],
        ...,
        [-0.0661, 0.0780, 0.0926, ..., 0.0099, -0.0024, -0.0690],
        [ 0.0313, 0.0154, -0.0628, ..., 0.0512, 0.0821, -0.0196],
        [ 0.0760, 0.0127, -0.0037, ..., -0.0742, -0.0545, -0.0989]],
       requires_grad=True)
>>> b = torch.nn.Identity()
>>> b.weight
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/pytorch/torch/nn/modules/module.py", line 1130, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Identity' object has no attribute 'weight'
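Equivalently, its parameter list is empty, so an optimizer has nothing of it to update:

    import torch.nn as nn

    print(list(nn.Identity().parameters()))  # [] -- nothing for the optimizer to touch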
st31663
I ran into this problem when I read the paper "Federated Semi-Supervised Learning with Inter-Client Consistency & Disjoint Learning". I am quite puzzled by the disjoint learning. The author says: "For a given model, it should decompose into two parts, where model = a + b (each model parameter equals the sum of parameter a and parameter b at the same location). When we train the model on dataset A, we only update a and hold b constant. When we train the model on dataset B, we only update b and hold a constant." How can we do this in PyTorch? Thanks so much for your help!!!
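One possible way to sketch this in PyTorch (my own interpretation of the decomposition described above, not the paper's code; the parameter names are arbitrary) is to keep two parameter tensors per layer, sum them in the forward pass, and toggle requires_grad depending on which dataset is being trained on:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DisjointLinear(nn.Module):
        def __init__(self, in_features, out_features):
            super().__init__()
            # the effective weight is part_a + part_b; each part can be frozen independently
            self.part_a = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
            self.part_b = nn.Parameter(torch.zeros(out_features, in_features))

        def forward(self, x):
            return F.linear(x, self.part_a + self.part_b)

    layer = DisjointLinear(10, 5)

    # training on dataset A: update part_a only
    layer.part_a.requires_grad_(True)
    layer.part_b.requires_grad_(False)

    # training on dataset B: update part_b only
    layer.part_a.requires_grad_(False)
    layer.part_b.requires_grad_(True)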
st31664
Hi All, I’m currently submitting a few scripts to a remote server with a few different GPUs, and I’ve noticed that in some cases jobs will fail due to this CUDA unknown error. The GPUs involved are specified to use CUDA 11.0 and the PyTorch installation is 1.7.1+CUDA11. This error has shown up in a few different ways, but only when I called CUDA in some way. So, for example, moving my model from CPU to GPU will result in the same error (sometimes). The driver version is 450.66.

Traceback (most recent call last):
  File "~/run.py", line 28, in <module>
    device_name = torch.cuda.get_device_name(torch.cuda.current_device())
  File "~/.local/lib/python3.6/site-packages/torch/cuda/__init__.py", line 366, in current_device
    _lazy_init()
  File "~/.local/lib/python3.6/site-packages/torch/cuda/__init__.py", line 172, in _lazy_init
    torch._C._cuda_init()
RuntimeError: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero.

Is there any particular way to diagnose the error so that I can resolve it? Thank you!
st31665
This issue sounds more like a setup issue than a PyTorch error, so I would recommend checking the setup with the server admin and also taking a look at dmesg (in particular, search for any Xid entries, which could provide more information on why CUDA is failing).
st31666
Do you have any references for further reading with this? I’ve never used dmesg and xid before to debug a CUDA install! Thank you once again!
st31667
Hello again! I’ve been briefly reading through the document you sent me and I’m not 100% sure how to proceed with solving this error. The document you shared states that the Xid entries are located at /var/log/messages, however that directory does not exist. Is there some other command that needs to be run beforehand? Also, I ran dmesg | grep -e 'NVRM: Xid' to see if any Xid entries appear from dmesg, and it returns nothing. The only error that appears is a series of nfs: RPC call returned error 13 errors, but that is all. Could this be a potential issue? Also, could the type of card be an issue as well? Some cards are GTX 745 cards whereas some are more modern Quadro cards. Thank you for all the help!