st47268 | Jimmy2027:
parallel
You did not show the line that triggers the error. Can you run it successfully on one GPU, or at least on the CPU? |
st47269 | When trying to run this with torch.device('cpu') I get the error:
File "/src/MIMIC/mimic/run_epochs.py", line 94, in basic_routine_epoch
results = model(batch)
File "/miniconda3/envs/mimic/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/miniconda3/envs/mimic/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 147, in forward
raise RuntimeError("module must have its parameters and buffers "
RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu |
st47270 | @klory the code runs without problems on the GPU without nn.DataParallel. The problem seems to be the torch.distributions.Laplace that I call in the forward pass. Is there any reason this should be a problem?
Also, my model consists of different nn.Modules: an encoder and a decoder for each modality. Is this compatible with nn.DataParallel? |
st47271 | I have created a gist that reproduces the error. The problem is indeed the Laplace distribution from torch.distributions |
st47272 | Hello,
I have the following code, which I do not understand completely:
class ResMod(nn.Module):
def __init__(self, cin, cout, reps=5):
super(ResMod, self).__init__()
self.expand = BottleneckSparse(cin, cout, use_norm=False)
self.pool = MASPool(cout, 2, 2)
self.layers = nn.ModuleList([BottleneckSparse(cout, cout, use_norm=False) for _ in range(reps)])
def forward(self, input):
x, m = input
x, m = self.expand((x, m))
x, m = self.pool((x, m))
for L in self.layers:
x, m = L((x, m))
return x, m
Do I understand this correctly, that the custom BottleneckSparse Layer is used 5 times in the ModuleList?
Is the forward path then:
Bottleneck > Pool > 5x Bottleneck?
with the bottlenecks being added in the “for L”-part? |
st47273 | Solved by klory in post #2
yes, but they are five different BottleneckSparse modules.
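A quick way to convince yourself (a minimal sketch, with nn.Linear standing in for BottleneckSparse): a list comprehension inside nn.ModuleList creates independent modules, whereas repeating one instance would share parameters:
import torch.nn as nn

distinct = nn.ModuleList([nn.Linear(4, 4) for _ in range(5)])  # five separate modules
shared = nn.ModuleList([nn.Linear(4, 4)] * 5)                  # five references to one module
print(len({id(m) for m in distinct}))  # 5
print(len({id(m) for m in shared}))    # 1 |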
st47274 | Hello all
I was trying to build a model using an attention layer. My model without attention perfectly overfits a small dataset, but the one with attention doesn't. Could someone help me fix it, or tell me whether I am doing it correctly?
I am using the same encoder architecture for both models; the only difference is the decoder.
import torch
from torch import nn
import torch.nn.functional as F
from torch.autograd import Variable
class AttentionDecoder(nn.Module):
def __init__(self, nh=256, nclass=13, dropout_p=0.1):
super(AttentionDecoder, self).__init__()
self.hidden_size = nh
self.output_size = nclass
self.dropout_p = dropout_p
self.embedding = nn.Embedding(self.output_size, self.hidden_size)
self.attn_combine = nn.Linear(self.hidden_size * 2, self.hidden_size)
self.dropout = nn.Dropout(self.dropout_p)
self.gru = nn.GRU(self.hidden_size, self.hidden_size)
self.out = nn.Linear(self.hidden_size, self.output_size)
self.vat = nn.Linear(self.hidden_size, 1)
def forward(self, input, hidden, encoder_outputs):
embedded = self.embedding(input)
embedded = self.dropout(embedded)
# test
batch_size = encoder_outputs.shape[1]
alpha = hidden + encoder_outputs
alpha = alpha.reshape(-1, alpha.shape[-1])
attn_weights = self.vat( torch.tanh(alpha))
attn_weights = attn_weights.view(-1, 1, batch_size).permute((2,1,0))
attn_weights = F.softmax(attn_weights, dim=2)
attn_applied = torch.matmul(attn_weights,encoder_outputs.permute((1, 0, 2)))
# output = torch.cat((embedded, attn_applied ), -1)
output = torch.cat((embedded, attn_applied.squeeze(1) ), -1)
output = self.attn_combine(output).unsqueeze(0)
output = F.relu(output)
output = output.squeeze(2)
output, hidden = self.gru(output, hidden)
output = output.unsqueeze(0)
output = F.log_softmax(self.out(output[0]), dim=1)
return output, hidden, attn_weights
def initHidden(self, batch_size):
result = Variable(torch.zeros(1, batch_size, self.hidden_size))
return result
class Encoder(nn.Module):
def __init__(self, cnnOutSize, nc, nclass, nh, n_rnn=2, leakyRelu=False):
super(Encoder, self).__init__()
ks = [3, 3, 3, 3, 3, 3, 2]
ps = [1, 1, 1, 1, 1, 1, 0]
ss = [1, 1, 1, 1, 1, 1, 1]
nm = [64, 128, 256, 256, 512, 512, 512]
cnn = nn.Sequential()
def convRelu(i, batchNormalization=False):
nIn = nc if i == 0 else nm[i - 1]
nOut = nm[i]
cnn.add_module('conv{0}'.format(i),
nn.Conv2d(nIn, nOut, ks[i], ss[i], ps[i]))
if batchNormalization:
cnn.add_module('batchnorm{0}'.format(i), nn.BatchNorm2d(nOut))
if leakyRelu:
cnn.add_module('relu{0}'.format(i),
nn.LeakyReLU(0.2, inplace=True))
else:
cnn.add_module('relu{0}'.format(i), nn.ReLU(True))
convRelu(0)
cnn.add_module('pooling{0}'.format(0), nn.MaxPool2d(2, 2)) # 64x16x64
convRelu(1)
cnn.add_module('pooling{0}'.format(1), nn.MaxPool2d(2, 2)) # 128x8x32
convRelu(2, True)
convRelu(3)
cnn.add_module('pooling{0}'.format(2),
nn.MaxPool2d((2, 2), (2, 1), (0, 1))) # 256x4x16
convRelu(4, True)
convRelu(5)
cnn.add_module('pooling{0}'.format(3),
nn.MaxPool2d((2, 2), (2, 1), (0, 1))) # 512x2x16
convRelu(6, True) # 512x1x16
self.cnn = cnn
self.softmax = nn.LogSoftmax()
def forward(self, input):
# print(input.shape)
conv = self.cnn(input)
# print('After Encoder Shape: ', conv.shape)
b, c, h, w = conv.size()
conv = conv.reshape(b, -1, w)
# print(conv.shape)
conv = conv.permute(2, 0, 1) # [w, b, c]
# print(conv.shape)
return conv
class Decoder(nn.Module):
def __init__(self, nIn, nHidden, nOut):
super(Decoder, self).__init__()
self.rnn = nn.LSTM(nIn, nHidden, bidirectional=True)
self.embedding = nn.Linear(nHidden * 2, nOut)
def forward(self, input):
recurrent, _ = self.rnn(input)
T, b, h = recurrent.size()
t_rec = recurrent.view(T * b, h)
output = self.embedding(t_rec) # [T * b, nOut]
output = output.view(T, b, -1)
return output
class Model(nn.Module):
def __init__(self, cnnOutSize, nc, nclass, nh, n_rnn=2, leakyRelu=False):
super(Model, self).__init__()
self.nh = nh
self.encoder = Encoder(cnnOutSize, nc, nclass, nh)
# self.decoder = Decoder(cnnOutSize, nh, nclass)
self.attentionDecoder = AttentionDecoder(nh, nclass, dropout_p=0.1)
def forward(self, x):
conv_output = self.encoder(x)
# print(conv_output.shape)
first_word = torch.tensor([0]*conv_output.shape[1]).type(torch.LongTensor).cuda()
decoder_output, decoder_hidden, decoder_attention = self.attentionDecoder(first_word, self.attentionDecoder.initHidden(x.shape[0]).cuda(), conv_output)
return decoder_output
# rnn_output = self.decoder(conv_output)
# return rnn_output
def create_model(config):
model = Model(config['cnn_out_size'], config['num_of_channels'], config['num_of_outputs'], 1024)
return model |
st47275 | I cannot spot anything obviously wrong, but would start by checking all shapes, as your current code is using some permutations and reshapes.
Try to add comments with the dimensions to all lines and make sure that the layers are getting the tensors in the expected shapes, e.g. as in the sketch below.
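For example (a minimal sketch; the shapes here are placeholders for yours), turning those shape comments into assertions makes a silent mismatch fail loudly:
import torch

encoder_outputs = torch.randn(26, 4, 256)  # [seq_len, batch_size, hidden]
seq_len, batch_size, hidden = encoder_outputs.shape

attn = encoder_outputs.permute(1, 0, 2)    # -> [batch_size, seq_len, hidden]
assert attn.shape == (batch_size, seq_len, hidden), attn.shape |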
st47276 | Hello everyone, I have a random question; if anyone can respond to it, thank you.
Let's say I'm doing a social distancing project; how, for example, would we do it? I don't think that training a model on a dataset would, by itself, be able to detect the distance between two persons.
What I'm really asking is: how do we get a model to detect the distance? If there is any other solution, please provide it. |
st47277 | You can detect a person/face/something to get an (x, y) value for each person of interest. After that you need to transform your points from the image plane to a planar (ground) scale; homography is the term you are looking for. See the OpenCV docs for info on this. Then you can compute the distances between all pairs of points.
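A minimal sketch of that pipeline with OpenCV (the four reference points and the pixels-per-metre scale are calibration assumptions you would replace for your own camera):
import cv2
import numpy as np

# Four image points of a known rectangle on the ground plane, and where they
# should land in a top-down (bird's-eye) view; this defines the homography.
src = np.float32([[400, 600], [880, 600], [1100, 950], [200, 950]])
dst = np.float32([[0, 0], [400, 0], [400, 600], [0, 600]])
H = cv2.getPerspectiveTransform(src, dst)

# (x, y) foot positions of detected people, in image coordinates, shape (N, 1, 2).
people = np.float32([[[500, 800]], [[700, 820]]])
ground = cv2.perspectiveTransform(people, H).reshape(-1, 2)

px_per_metre = 100.0  # calibration assumption
dist = np.linalg.norm(ground[0] - ground[1]) / px_per_metre
print(f"distance: {dist:.2f} m") |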
st47278 | I am deploying a neural network in my Ubuntu machine and when the weights are initialised, I get an error: “RuntimeError: CUDA error: no kernel image is available for execution on the device”.
In nvidia-smi, the CUDA version is 10.2.
I have 2 GPUs. One of them (K40c) is very old and requires a low version of torchvision (0.4.0) but I am not using it: I specify cuda:1 and make sure that 1 points to the newest GPU device (Titan V).
Going to https://pytorch.org/get-started/previous-versions/, I made sure I had installed torch and torchvision so that they were compatible with version 10.2:
torch version: 1.5.0
torchvision: 0.6.0
Still, I have this issue, in particular in line:
init.xavier_normal_(m.weight.data, gain=gain)
File "…/python3.6/site-packages/torch/nn/init.py", line 282, in xavier_normal_
return _no_grad_normal_(tensor, 0., std)
File "…/python3.6/site-packages/torch/nn/init.py", line 19, in _no_grad_normal_
return tensor.normal_(mean, std)
RuntimeError: CUDA error: no kernel image is available for execution on the device.
Is it possible that the old GPU, despite not being used, is still causing this problem?
Thank you. |
st47279 | VFernandez:
Is it possible that the old GPU, despite not being used, is still causing this problem?
Could you rerun the code with
CUDA_VISIBLE_DEVICES=id python script.py args
where id should be 0 or 1, depending on which GPU is mapped to that device id, and make sure that only the Titan V is found?
The error could also point to an NVIDIA driver that is too old.
For CUDA 10.2 you would need >=440.33, as given in this table. |
st47280 | Hello! Thank you for your answer.
Using CUDA_VISIBLE_DEVICES doesn't work; the same error pops up.
The driver version is 440.100, so it should be okay according to the NVIDIA website. |
st47281 | I don’t know why it shouldn’t work, as I’m using a TitanV myself with the binaries, different CUDA versions, and builds from source.
If CUDA_VISIBLE_DEVICES is not working, you could try to disable or remove the old GPU for the sake of debugging and rerun your script. |
st47282 | I'm training a language model using GPT-2. When I used multiple GPUs I got:
RuntimeError: Gather got an input of invalid size: got [2, 4, 6, 300, 128], but expected [2, 5, 6, 300, 128]
With gpus=2 and batch_size=9, it seems the two GPUs get different batch sizes: one gets 4, the other 5.
How can I fix this?
st47283 | from __future__ import absolute_import, division, print_function, unicode_literals
# Import TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
# Import helper libraries
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
print(fashion_mnist)
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
print(class_names)
print(train_images.shape)
print(len(train_labels))
print(train_labels)
print(test_images.shape)
print(len(test_labels))
print(test_labels)
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
train_images = train_labels / 255.0
test_images = test_images / 255.0
plt.figure(figsize=(10, 10))
for i in range(25):
plt.subplot(5, 5, i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary) #here <<<<<<<<<<<<<<<
plt.xlabel(class_names[train_labels[i]])
plt.show()
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10, activation='softmax')
])
Traceback (most recent call last):
File "C:/Users/chabu/PycharmProjects/untitled1/ke.py", line 50, in <module>
plt.imshow(train_images[i], cmap=plt.cm.binary)
File "C:\Users\chabu\Anaconda3\lib\site-packages\matplotlib\pyplot.py", line 2683, in imshow
None else {}), **kwargs)
File "C:\Users\chabu\Anaconda3\lib\site-packages\matplotlib\__init__.py", line 1601, in inner
return func(ax, *map(sanitize_sequence, args), **kwargs)
File "C:\Users\chabu\Anaconda3\lib\site-packages\matplotlib\cbook\deprecation.py", line 369, in wrapper
return func(*args, **kwargs)
File "C:\Users\chabu\Anaconda3\lib\site-packages\matplotlib\cbook\deprecation.py", line 369, in wrapper
return func(*args, **kwargs)
File "C:\Users\chabu\Anaconda3\lib\site-packages\matplotlib\axes\_axes.py", line 5671, in imshow
im.set_data(X)
File "C:\Users\chabu\Anaconda3\lib\site-packages\matplotlib\image.py", line 690, in set_data
.format(self._A.shape))
TypeError: Invalid shape () for image data |
st47284 | Hi @chabeomsoo,
your problem seems to be related to a Keras model and matplotlib, so I think you could get way better help in their discussion boards, since you’ve landed in the PyTorch board.
That being said, plt.imshow expects image data as [height, width] or [height, width, 3], while your train_images[i] has shape (): note that the line train_images = train_labels / 255.0 overwrites the images with the scaled labels and was presumably meant to be train_images = train_images / 255.0. |
st47285 | Hi
Same issue happened to me. Updating your matplotlib to the current version would solve this issue. |
st47286 | Here is the code I’m trying to run on Google Colab:
import torch
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.U = torch.nn.Embedding(12, 8, sparse=True)
self.V = torch.nn.Embedding(18, 8, sparse=True)
def forward(self):
# to be implemented
pass
model = Model()
optimizer = torch.optim.SparseAdam(model.parameters(), lr=2e-3)
and I get a ValueError. However, printing list(model.parameters()) shows that the parameters clearly exist. What could be the issue here? |
st47287 | Solved by Nikronic in post #2
… |
st47288 | Hi,
I am not sure but it may be related to an issue in this thread:
ERROR:optimizer got an empty parameter list
Do:
G_params = list(G.parameters())
D_params = list(D.parameters())
.parameters() returns a generator, and you are probably consuming it somewhere (e.g. for debugging) before the optimizer sees it.
I have no clue why, but apparently it works!
Bests
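Applied to the snippet above, the workaround would presumably be:
optimizer = torch.optim.SparseAdam(list(model.parameters()), lr=2e-3) |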
st47289 | This seems to be a newly introduced bug, so could you please create an issue on GitHub so that we can track and fix it?
I can reproduce it in 1.7.0 and 1.8.0.dev20201110, but not in 1.7.0.dev20200830. |
st47290 | When I use .cuda(), something goes wrong.
>>> import torch
>>> A = torch.tensor([1,2])
>>> B = A.cuda()
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1579040055865/work/aten/src/THC/THCGeneral.cpp line=50 error=71 : operation not supported
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/lzy/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/cuda/__init__.py", line 197, in _lazy_init
torch._C._cuda_init()
RuntimeError: cuda runtime error (71) : operation not supported at /opt/conda/conda-bld/pytorch_1579040055865/work/aten/src/THC/THCGeneral.cpp:50 |
st47291 | This error is sometimes raised by a wrong usage of multiprocessing and CUDA, but this doesn’t seem to be the case here.
What does torch.cuda.is_available() return, which PyTorch and CUDA runtime versions are you using, and which GPU? |
st47292 | >>> torch.cuda.is_available()
False
>>> print(torch.__version__)
1.4.0
cat /usr/local/cuda/version.txt
CUDA Version 10.0.130
nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130
My GPU is a 1080 Ti.
It seems like CUDA cannot be used at all. |
st47293 | Your local CUDA toolkit will not be used if you installed the conda binaries or pip wheels.
Did you properly select the desired CUDA runtime here (it ships with the binaries)? If so, which NVIDIA driver are you using? |
st47294 | Hi everyone
I have a question regarding possible transformations of a CNN input. I know that the input should be in the form batch_size x number_of_channels x height x width. Let’s assume I have a tensor of shape [2, 3, 5, 5]. So two images with three channels each and the image is of size 5 by 5.
Is there a neat way to transform this input into 3 x [2, 1, 5, 5]? Basically, I would like to split each channel of every image in the batch and then have 3 such inputs. All I can think of are a bunch of nested for loops but maybe there is a better / more common way to do this?
Any help is very much appreciated!
All the best
snowe |
st47295 | Solved by klory in post #2
x = torch.randn(2, 3, 5, 5)
x_ = x.transpose(0, 1).unsqueeze(2)  # [3, 2, 1, 5, 5]: channel dim first, plus a new singleton channel
x_ = list(x_)  # 3 tensors, each of shape [2, 1, 5, 5] |
st47296 | Recently I was diving into meta-learning and needed to change the weights of modules during the training process, so I can't use the off-the-shelf torch.nn.Conv2d or torch.nn.LSTM modules, since I can't pass weights into them. Instead, I have to define the weights manually and call the underlying interface.
For convolution or batch-normalization layers, PyTorch provides the torch.nn.functional.conv2d and torch.nn.functional.batch_norm interfaces, which can be called easily. Things are a little different for the LSTM module: there is no interface like torch.nn.functional.lstm.
So I looked into the torch.nn.LSTM module and found the interface torch._VF.lstm. I just call this interface and pass my self-defined weights to it, and the code actually runs normally. However, the training result is worse than the result trained with the torch.nn.LSTM module (I got 80% accuracy on a text recognition task using the LSTM module but only 70% using the _VF.lstm interface). So I think there must be something I didn't notice; can anybody give me some advice? What could be the problem with constructing an LSTM layer like this?
Thanks a lot! |
st47297 | Here is the key part of my code, since it is a demo version, I only considered the single layer case, and the weight has bias. Some variable names are changed for the readability, the original code can be run normally.
The first part is the initialization of weights, this is called when the network was initialized.
def create_lstm_weight(self, device):
    import math
    param_list = [nn.Parameter(torch.ones((4 * hidden_size, input_size)).to(device)),   # W_ih
                  nn.Parameter(torch.ones((4 * hidden_size, hidden_size)).to(device)),  # W_hh: [4*hidden, hidden]
                  nn.Parameter(torch.ones((4 * hidden_size)).to(device)),               # b_ih
                  nn.Parameter(torch.ones((4 * hidden_size)).to(device))]               # b_hh
    if bi_direction:
        param_list.extend([nn.Parameter(torch.ones((4 * hidden_size, input_size)).to(device)),   # W_ih_reverse
                           nn.Parameter(torch.ones((4 * hidden_size, hidden_size)).to(device)),  # W_hh_reverse
                           nn.Parameter(torch.ones((4 * hidden_size)).to(device)),               # b_ih_reverse
                           nn.Parameter(torch.ones((4 * hidden_size)).to(device))])              # b_hh_reverse
    # flatten the weights as described in the docs
    if param_list[0].is_cuda and torch.backends.cudnn.is_acceptable(param_list[0]):
        with torch.cuda.device_of(param_list[0]):
            import torch.backends.cudnn.rnn as rnn
            with torch.no_grad():
                torch._cudnn_rnn_flatten_weight(param_list, (4 if has_bias else 2),
                                                input_size, rnn.get_cudnn_mode('LSTM'), hidden_size,
                                                num_layers=1, batch_first=False, bidirectional=True)
    # initialize the weights
    for p in param_list:
        torch.nn.init.uniform_(p, a=math.sqrt(1 / hidden_size) * -1, b=math.sqrt(1 / hidden_size))
The second part is the forward method; it is called in the forward method of the network.
def lstm_forward(self, x, param):
    '''
    x: [time_step_length, batch_size, feature_dim]
    '''
    # requires: from torch import _VF
    time_step, batch_size, input_size = x.shape
    num_directions = 2 if bidirectional else 1
    h_state = (torch.zeros(num_directions, batch_size, hidden_size, device=self.device, dtype=torch.float32),
               torch.zeros(num_directions, batch_size, hidden_size, device=self.device, dtype=torch.float32))
    weights = param
    # _VF.lstm(input, hx, params, has_biases, num_layers, dropout, train, bidirectional, batch_first)
    result = _VF.lstm(x, h_state, weights, True, 1, 0.0, True, bidirectional, False)
    output, h = result[0], result[1:]
    return output, h |
st47298 | I’m building a LSTM model and I want to use as input the result of nn.Embedding().
I transfer my model to the GPU using to('cuda'), but when I train it, PyTorch complains that the embedding is on the CPU while the input tensor is on the GPU.
To test whether there’s something wrong with my model, I simply tried to embed a tensor using the following code:
emb = nn.Embedding(5, 11)
t = torch.tensor([1,2,3])
If I now do emb(t), it works properly. But if I do t = t.cuda() and try again, it says:
RuntimeError: Expected object of backend CPU but got backend CUDA for argument #3 'index'
The whole thing works if I put the embedding on the GPU as well.
A few questions then:
According to this, I should not put the embedding tensor on the GPU since it might be big. But how do I embed then? To be fair, that refers to PyTorch 0.4, while I'm using PyTorch 1.1: did it change in the meantime?
If my embedding layer is inside the class defining my model, and I put the model on the GPU, shouldn't it work properly? |
st47299 | I’m not seeing the advice of leaving the embedding on the CPU in the linked issue. However, if you would like to do it, you could just call model.embedding.cpu(), where .embedding refers to the attribute name of your nn.Embedding layer. In the forward method you would then have to pass a CPU tensor to the embedding, push the output to the GPU and pass it to the next layer (which should have its parameters on the GPU).
Yes, that should work. Could you post your model definition and a code snippet which reproduces this error, so that we can have a look? |
st47300 | Hi ptrblck,
I pass the variable from the embedding to the other layers in this way, and it did not give me any errors. Does the embedding layer work correctly here? The points in question are "Out1.cpu()" and "Out3.cuda()":
class Discriminator4layer113D(nn.Module):
def __init__(self, ngpu,ndf):
super(Discriminator4layer113D, self).__init__()
## --define embedding for 64 differente labels and map them to dim of 10
self.embedding=nn.Embedding(401, 10)
self.ngpu = ngpu
self.ndf=ndf
self.l1= nn.Sequential(nn.Conv3d(2, self.ndf, 3, 1, 0, bias=False),nn.LeakyReLU(0.2, inplace=True))
self.l2=nn.Sequential(nn.Conv3d(self.ndf, self.ndf * 2, 3, 1, 0, bias=False),nn.BatchNorm3d(ndf * 2),nn.LeakyReLU(0.2, inplace=True))
self.drop_out2 = nn.Dropout(0.5)
self.l3= nn.Sequential(nn.Conv3d(self.ndf * 2, self.ndf * 4, 3, 2, 0, bias=False), nn.BatchNorm3d(ndf * 4), nn.LeakyReLU(0.2, inplace=True))
self.drop_out3 = nn.Dropout(0.5)
self.l4= nn.Sequential(nn.Conv3d(self.ndf * 4, 1, 3, 1, 0, bias=False),nn.Sigmoid())
def forward(self, x,Labels):
Labels=Labels.squeeze(1).squeeze(1).squeeze(1)
Out1=self.embedding(Labels)
## apply a linear layer to convert the embedded size to the input size
Out2= nn.Linear(10, x.shape[2]*x.shape[3]*x.shape[4])(Out1.cpu())
## ---- reshape the label size to the size of input for concatenation
Out3=Out2.view(-1,11,11,11).unsqueeze(1)
## ---- concatenate labels and inputs
Out4=torch.cat((x,Out3.cuda()),1)
out = self.l1(Out4)
out=self.l2(out)
out=self.drop_out2(out)
out=self.l3(out)
out=self.drop_out3(out)
out=self.l4(out)
return out |
st47301 | You are recreating the nn.Linear layer in each forward pass with random parameters, so it won't be trained.
Create the layer in the __init__ method in the same way the other layers were initialized, and use it in the forward method.
I would recommend to check the input and output shape of the embedding layer to make sure it’s working as expected. |
st47302 | I changed it in this way. The output of the embedding is 64x10, which is correct, since my batch size is 64 and the embedding output is a 10-dimensional vector.
class Discriminator(nn.Module):
def __init__(self, ngpu,ndf):
super(Discriminator, self).__init__()
## --define embedding for 64 differente labels and map them to dim of 10
self.embedding=nn.Embedding(401, 10)
self.ngpu = ngpu
self.ndf=ndf
self.l=nn.Linear(10,1331)
self.l1= nn.Sequential(nn.Conv3d(2, self.ndf, 3, 1, 0, bias=False),nn.LeakyReLU(0.2, inplace=True))
self.l2=nn.Sequential(nn.Conv3d(self.ndf, self.ndf * 2, 3, 1, 0, bias=False),nn.BatchNorm3d(ndf * 2),nn.LeakyReLU(0.2, inplace=True))
self.drop_out2 = nn.Dropout(0.5)
self.l3= nn.Sequential(nn.Conv3d(self.ndf * 2, self.ndf * 4, 3, 2, 0, bias=False), nn.BatchNorm3d(ndf * 4), nn.LeakyReLU(0.2, inplace=True))
self.drop_out3 = nn.Dropout(0.5)
self.l4= nn.Sequential(nn.Conv3d(self.ndf * 4, 1, 3, 1, 0, bias=False),nn.Sigmoid())
def forward(self, x,Labels):
Labels=Labels.squeeze(1).squeeze(1).squeeze(1)
Out1=self.embedding(Labels)
Out2= self.l(Out1)
## ---- reshape the label size to the size of input for concatenation
Out3=Out2.view(-1,11,11,11).unsqueeze(1)
## ---- concatenate labels and inputs
Out4=torch.cat((x,Out3),1)
out = self.l1(Out4)
out=self.l2(out)
out=self.drop_out2(out)
out=self.l3(out)
out=self.drop_out3(out)
out=self.l4(out)
return out
class Generator(nn.Module):
def __init__(self,ngpu,nz,ngf):
super(Generator, self).__init__()
self.ngpu=ngpu
self.nz=nz
self.ngf=ngf
self.embedding=nn.Embedding(401, 10)
self.l1= nn.Sequential( nn.ConvTranspose3d(self.nz+10, self.ngf * 8, 3, 1, 0, bias=False),
nn.BatchNorm3d(self.ngf * 8),
nn.ReLU(True))
self.l2= nn.Sequential(nn.ConvTranspose3d(self.ngf * 8, self.ngf * 4, 3, 1, 0, bias=False),
nn.BatchNorm3d(self.ngf * 4),
nn.ReLU(True))
self.l3= nn.Sequential(nn.ConvTranspose3d( self.ngf * 4, self.ngf * 2, 3, 1, 0, bias=False),
nn.BatchNorm3d(self.ngf * 2),
nn.ReLU(True))
self.l4= nn.Sequential(nn.ConvTranspose3d( self.ngf*2, 1, 3, 1, 0, bias=False),nn.Sigmoid())
def forward(self, input,Labels,Sigmad):
Labels=Labels.squeeze(1).squeeze(1).squeeze(1)
Out1=self.embedding(Labels)
## ---- concatenate labels and noise from channels
Out1=Out1.unsqueeze(2).unsqueeze(3).unsqueeze(4)
Out2=torch.cat((Out1,input),1)
out=self.l1(Out2)
out=self.l2(out)
out=self.l3(out)
out=self.l4(out)*Sigmad
return out |
st47303 | Hi,
I want to fill a tensor like this:
gainW = torch.tensor(1.1)
gainH = torch.tensor(2.5)
res = 5
a = torch.zeros(res,res)
a[0,0] =0.1
for i in range(res):
for j in range(res):
if i==0 and j==0:
continue
if j==0:
a[i,j] = torch.cos(a[i-1,0]*gainH)
else:
a[i,j] = torch.sin(a[i,j-1]+gainW)
print(a)
tensor([[ 0.1000, 0.9320, 0.8955, 0.9112, 0.9046],
[ 0.9689, 0.8785, 0.9180, 0.9016, 0.9086],
[-0.7523, 0.3408, 0.9916, 0.8674, 0.9224],
[-0.3049, 0.7139, 0.9706, 0.8777, 0.9184],
[ 0.7233, 0.9683, 0.8788, 0.9179, 0.9017]])
But this is extremely inefficient; how can I improve it? Any ideas would be appreciated!
BTW, I want to calculate the gradient with respect to gainW and gainH later! |
st47304 | import torch
gainW = torch.tensor(1.1)
gainH = torch.tensor(2.5)
res = 5
a = torch.zeros(res,res)
a[0,0] = 0.1
for i in range(1, res):
a[i, 0] = torch.cos(a[i-1, 0] * gainH)
for j in range(1, res):
a[:, j] = torch.sin(a[:, j-1] + gainW)
This might be faster.
As your case requires the previous computation result to get the next, it’s difficult to avoid for loops entirely. |
st47305 | Thanks! Do you think this is the best I can do? Would putting it through the JIT (tracing) help? Does tracing keep the gradient flow? |
st47306 | I have an output x from network 1 and another tensor y (with requires_grad=True). Now I want to concatenate tensors x and y to feed into network 2, but when optimizing it raises an error and the y.grad tensor is full of zeros.
How can I solve this problem? |
st47307 | Based on the error it seems you are trying to pass a non-leaf tensor (e.g. an output activation) to the optimizer, which is not supported. Could this be the case?
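If so, a minimal self-contained sketch of the usual setup (the shapes and the two nets are placeholders) is to make y a leaf tensor and pass the leaves to the optimizer:
import torch
import torch.nn as nn

net1 = nn.Linear(8, 16)
net2 = nn.Linear(32, 4)

inp = torch.randn(5, 8)
x = net1(inp)                               # output of network 1 (non-leaf)
y = torch.zeros(5, 16, requires_grad=True)  # leaf tensor, so the optimizer can update it

opt = torch.optim.Adam(list(net2.parameters()) + [y], lr=1e-3)

out = net2(torch.cat([x, y], dim=1))        # concatenate and feed network 2
out.sum().backward()
print(y.grad.abs().sum())                   # non-zero now
opt.step() |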
st47308 | Basically what the title asks. Is this guaranteed to work and is this documented somewhere? Also, will the pytorch cuda 10.1 version work on a system with cuda 11 installed? |
st47309 | PyTorch conda binaries and pip wheels ship with their own CUDA (, cudnn, NCCL, etc.) runtime and your local CUDA toolkit will not be used unless you build PyTorch from source or any custom CUDA extension.
Yes, CUDA11 should work, if it supports your GPU architecture. If not, please create a topic here or an issue on GitHub and we’ll take a look. |
st47310 | Hello all, I have a tensor of size BxCxHxW. I want to unfold the tensor with a kernel size of K into non-overlapping patches. Is there an equation to compute the stride and padding for the unfold function, such that the patches can be folded back into the original BxCxHxW tensor with the fold function?
For example, a tensor of size 16x32x56x56 is unfolded with a size of k=6; which stride and padding values should I use? |
st47311 | Solved by ptrblck in post #4
The used padding is too naive in my example and you might want to use e.g. divmod to calculate the padding size:
… |
st47312 | For non-overlapping patches the stride should equal the kernel size.
For a small example have a look at this post.
Based on your input shape and kernel size, you would need to pad the input tensor to properly reshape the patches back. |
st47313 | @ptrblck this is my example; however, it does not recover the original tensor when h and w are not divisible by k. This is my code:
import torch
from torch.nn import functional as F
def unfold_tensor (x, step_c, step_h, step_w):
kc, kh, kw = step_c, step_h, step_w # kernel size
dc, dh, dw = step_c, step_h, step_w # stride
pad_c, pad_h, pad_w = x.size(1)%kc // 2, x.size(2)%kh // 2, x.size(3)%kw // 2
x = F.pad(x, ( pad_h, pad_h, pad_w, pad_w, pad_c, pad_c))
patches = x.unfold(1, kc, dc).unfold(2, kh, dh).unfold(3, kw, dw)
unfold_shape = patches.size()
patches = patches.reshape(-1,unfold_shape[1]*unfold_shape[2]*unfold_shape[3], unfold_shape[4]*unfold_shape[5]*unfold_shape[6])
return patches, unfold_shape
def fold_tensor (x, shape_x, shape_orginal):
x = x.reshape(-1,shape_x[1], shape_x[2], shape_x[3], shape_x[4], shape_x[5], shape_x[6])
x = x.permute(0, 1, 4, 2, 5, 3, 6).contiguous()
#Fold
output_c = shape_x[1] * shape_x[4]
output_h = shape_x[2] * shape_x[5]
output_w = shape_x[3] * shape_x[6]
x = x.view(1, output_c, output_h, output_w)
return x
b,c,h,w = 1, 8, 28, 28
x = torch.randint(10, (b,c,h,w))
print (x.shape)
shape_orginal = x.size()
patches, shape_patches = unfold_tensor (x, c, 3, 3)
print (patches.shape)
fold_patches = fold_tensor(patches, shape_patches, shape_orginal)
print (fold_patches.shape)
print((x == fold_patches).all())
Output:
torch.Size([1, 8, 28, 28])
torch.Size([1, 81, 72])
torch.Size([1, 8, 27, 27])
Traceback (most recent call last):
File "test_unfold.py", line 32, in <module>
print((x == fold_patches).all()) |
st47314 | The used padding is too naive in my example and you might want to use e.g. divmod to calculate the padding size:
import numpy as np
import torch
from torch.nn import functional as F
def unfold_tensor (x, step_c, step_h, step_w):
kc, kh, kw = step_c, step_h, step_w # kernel size
dc, dh, dw = step_c, step_h, step_w # stride
nc, remainder = np.divmod(x.size(1), kc)
nc += bool(remainder)
nh, remainder = np.divmod(x.size(2), kh)
nh += bool(remainder)
nw, remainder = np.divmod(x.size(3), kw)
nw += bool(remainder)
pad_c, pad_h, pad_w = nc*kc - x.size(1), nh*kh - x.size(2), nw*kw - x.size(3)
x = F.pad(x, ( 0, pad_h, 0, pad_w, 0, pad_c))
patches = x.unfold(1, kc, dc).unfold(2, kh, dh).unfold(3, kw, dw)
unfold_shape = patches.size()
patches = patches.reshape(-1,unfold_shape[1]*unfold_shape[2]*unfold_shape[3], unfold_shape[4]*unfold_shape[5]*unfold_shape[6])
return patches, unfold_shape
def fold_tensor (x, shape_x, shape_orginal):
x = x.reshape(-1,shape_x[1], shape_x[2], shape_x[3], shape_x[4], shape_x[5], shape_x[6])
x = x.permute(0, 1, 4, 2, 5, 3, 6).contiguous()
#Fold
output_c = shape_x[1] * shape_x[4]
output_h = shape_x[2] * shape_x[5]
output_w = shape_x[3] * shape_x[6]
x = x.view(1, output_c, output_h, output_w)
return x
b,c,h,w = 1, 8, 28, 28
x = torch.randint(10, (b,c,h,w))
print(x.shape)
shape_original = x.size()
patches, shape_patches = unfold_tensor (x, c, 3, 3)
print(patches.shape)
fold_patches = fold_tensor(patches, shape_patches, shape_original)
print(fold_patches.shape)
fold_patches = fold_patches[:, :shape_original[1], :shape_original[2], :shape_original[3]]
print(fold_patches.shape)
print((x == fold_patches).all())
> tensor(True)
Currently the padding is only applied to one side and you can try to split it so that both sides will be padded. |
st47315 | I'm making a module and I expected to get one input (shape (2,2,3,3)) at a time. I just realized nn.Linear expects a batch dimension, so I need to accept batches, not individual inputs. How does nn.Linear process batches, and how can I process batches in my forward()? Would I just put everything in a loop over the batch elements?
This is what my forward() method looks like:
def forward(self, pair_of_graphs):
embeddings = []
for graph in pair_of_graphs:
node_matrix, adjacency_matrix = graph
steps = 5
for step in range(steps):
message_passed_node_matrix = torch.matmul(adjacency_matrix, node_matrix)
alpha = torch.nn.Parameter(torch.zeros(1))
node_matrix = alpha*node_matrix + (1-alpha)*message_passed_node_matrix
new_node_matrix = torch.zeros(len(node_matrix), self.linear_2.in_features)
for node_i in range(len(node_matrix)):
linear_layer = self.linear_1 if step == 0 else self.linear_2
new_node_matrix[node_i] = linear_layer(node_matrix[node_i])
node_matrix = new_node_matrix
weights_for_average = torch.zeros(len(node_matrix))
for node_i in range(len(node_matrix)):
weights_for_average[node_i] = self.linear_3(node_matrix[node_i])
weighted_sum_node = torch.matmul(weights_for_average, node_matrix)
embeddings.append(weighted_sum_node)
concat = torch.cat(embeddings)
out = self.linear_4(concat)
return out
Any other suggestions about how I am handling my input (e.g. looping through each node of each graph) would be appreciated too. |
st47316 | Solved by ptrblck in post #4
… |
st47317 | All PyTorch layers accept and expect batched inputs and don’t need a for loop or any other change.
nn.Linear in particular expects the input to have the shape [batch_size, *, in_features], where * is a variable number of dimensions. The linear layer is applied across these variable dimensions as if you were looping over them.
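A quick demonstration of that shape contract:
import torch
import torch.nn as nn

lin = nn.Linear(3, 7)
x = torch.randn(2, 4, 5, 3)  # any leading dims; only the last must equal in_features
print(lin(x).shape)          # torch.Size([2, 4, 5, 7]) |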
st47318 | But I'm not only passing my input into PyTorch layers. I also have a custom parameter alpha that I use to combine my input with the output of a linear layer: alpha*input + (1-alpha)*output. I have other custom behavior like that too.
In that case, would I have to manually loop over the input batch and build the new tensor after doing my custom computations on each item of the batch? Or is there a better way to do these sorts of things? |
st47319 | agt:
But I’m not only passing my input into pytorch layers.
It seems you are still passing the input to the layer and applying the alpha parameter afterwards; or could you explain how you are passing alpha to e.g. the linear layer?
agt:
In that case, would I have to manually loop over the input batch and build the new tensor after doing my custom computations on each item of the batch?
PyTorch applies broadcasting, so if alpha is a scalar tensor you could directly run the posted line of code.
On the other hand, even if alpha has the shape [batch_size] it should still work (and you might need to unsqueeze() dimensions to enable broadcasting, but it depends on the shapes of the other tensors). |
st47320 | Hi gang, been a while. So I have this simple model:
class ModelSerial(nn.Module):
def __init__(self, in_features, hidden, out_features):
super().__init__()
blocks = [
nn.Sequential(
nn.Linear(in_features, hidden),
nn.ReLU(inplace=True),
nn.Linear(hidden, hidden),
nn.ReLU(inplace=True),
nn.Linear(hidden, 1),
)
for _ in range(out_features)
]
self.blocks = nn.ModuleList(blocks)
def forward(self, x):
out = torch.cat([b(x) for b in self.blocks], dim=-1)
return out
Notice how each sequential block's output is not chained in the forward pass. Rather, each is executed on the input separately, and the results are simply concatenated. As one would imagine, it runs extremely slowly, especially as the number of output features or the depth of the network increases, due to the Python for loop. I had considered trying TorchScript JIT, but that wasn't an option for this particular engagement. So, next best thing: try to parallelize it with grouped convolutions. Here's my attempt:
class ModelParallel(nn.Module):
def __init__(self, in_features, hidden, out_features):
super().__init__()
self.out_features = out_features
self.block = nn.Sequential(
nn.Conv1d(
in_features * self.out_features,
hidden * self.out_features,
kernel_size=1,
groups=self.out_features
),
nn.ReLU(inplace=True),
nn.Conv1d(
hidden * self.out_features,
hidden * self.out_features,
kernel_size=1,
groups=self.out_features
),
nn.ReLU(inplace=True),
nn.Conv1d(
hidden * self.out_features,
out_features,
kernel_size=1,
groups=self.out_features
)
)
def forward(self, x):
# torch.Size([B, 150])
bs = x.shape[0]
# torch.Size([B, 150, 1])
x = x[:, :, None]
# torch.Size([B, 150, 50])
x = x.expand(bs, x.shape[1], self.out_features)
# torch.Size([B, 7500, 1]) because conv1d works on [B C L]
x = x.reshape(bs, -1, 1)
# torch.Size([B, 50, 1])
x = self.block(x)
# torch.Size([B, 50])
x = x.squeeze(dim=-1)
return x
The model compiles, and the input and output sizes are as expected. But it doesn't train at all. Not even close. Validation and train loss are completely out of sync; it's a mess.
I believe my lack of understanding of how groups are computed is what is messing things up. So I even tried re-arranging the setup as follows, in case the groups are 'interlaced' as opposed to 'chunked':
########
# Change:
# torch.Size([B, 150, 1])
x = x[:, :, None]
# torch.Size([B, 150, 50])
x = x.expand(bs, x.shape[1], self.out_features)
########
# Into:
# torch.Size([B, 1, 150])
x = x[:, None, :]
# torch.Size([B, 50, 150])
x = x.expand(bs, self.out_features, x.shape[2])
But unfortunately, that still didn't train. Epoch time was a hell of a lot faster though, as desired. My example tests:
ms = ModelSerial(
    in_features=150,
    hidden=10,
    out_features=50
)
mp = ModelParallel(
    in_features=150,
    hidden=10,
    out_features=50
)
I wouldn't expect the results to be exact; after all, linear and conv layers use different weight-initialization schemes even if deterministic mode is set. But I would at least expect the parallel model to train. PyTorch senpais, can you please provide me with some guidance on how I can architect this problem to execute in parallel? Thank you!! |
st47321 | Solved by googlebot in post #2
… |
st47322 | Your input-to-hidden layer should use 1 group (but bigger output width), otherwise you split inputs.
As I side node, I just switched from this approach to batched matrix multiplications, as there are some performance issues with grouped convolutions in cudnn.
I also found that creating block-diagonal weight matrices can be better that bmm, as the latter creates an expanded copy of the weight tensor. I believe the best approach is shape (or sparsity) dependent. |
st47323 | Thank you for the hints. I tried to do the BMM implementation because the inputs are dense continuous values. I think I got close, but…
class ModelParallel2(nn.Module):
def __init__(self, in_features, hidden, out_features):
super().__init__()
self.out_features = out_features
self.block1_bias = nn.Parameter(torch.ones(hidden))
self.block1 = nn.Parameter(torch.ones(
1, in_features, hidden
))
self.block2_bias = nn.Parameter(torch.ones(hidden))
self.block2 = nn.Parameter(torch.ones(
1, hidden, hidden
))
self.block3_bias = nn.Parameter(torch.ones(1))
self.block3 = nn.Parameter(torch.ones(
1, hidden, 1
))
# nn.init.xavier_uniform_(self.block1, gain=nn.init.calculate_gain('relu'))
# nn.init.xavier_uniform_(self.block2, gain=nn.init.calculate_gain('relu'))
# nn.init.xavier_uniform_(self.block3, gain=nn.init.calculate_gain('relu'))
nn.init.kaiming_uniform_(self.block1, a=np.sqrt(6))
nn.init.kaiming_uniform_(self.block2, a=np.sqrt(6))
nn.init.kaiming_uniform_(self.block3, a=np.sqrt(6))
bound_a = 1 / np.sqrt(in_features)
bound_b = 1 / np.sqrt(hidden)
nn.init.uniform_(self.block1_bias, -bound_a, bound_a)
nn.init.uniform_(self.block2_bias, -bound_b, bound_b)
nn.init.uniform_(self.block3_bias, -bound_b, bound_b)
def forward(self, x):
# torch.Size([32, 130])
bs = x.shape[0]
# torch.Size([32, 1, 130])
x = x[:, None, :]
# torch.Size([32, 45, 130])
x = x.expand(bs, self.out_features, x.shape[2])
# torch.Size([32, 45, 10])
x = x @ self.block1
x = F.relu(x, inplace=True)
# torch.Size([32, 45, 10])
x = x @ self.block2
x = F.relu(x, inplace=True)
# torch.Size([32, 45, 1])
x = x @ self.block3
# torch.Size([32, 45])
x = x.squeeze(dim=-1)
return x
Inputting a random tensor and initializing the model with:
mp = ModelParallel2(
in_features=130,
hidden=10,
out_features=45
)
I get a torch.Size([32, 45]) sized output, which is correct and desirable. However, the output features are all duplicated:
tensor([[-0.0014, -0.0014, -0.0014, ..., -0.0014, -0.0014, -0.0014],
[ 0.0002, 0.0002, 0.0002, ..., 0.0002, 0.0002, 0.0002],
[ 0.0001, 0.0001, 0.0001, ..., 0.0001, 0.0001, 0.0001],
...,
[-0.0023, -0.0023, -0.0023, ..., -0.0023, -0.0023, -0.0023],
[-0.0003, -0.0003, -0.0003, ..., -0.0003, -0.0003, -0.0003],
[-0.0006, -0.0006, -0.0006, ..., -0.0006, -0.0006, -0.0006]],
grad_fn=<SqueezeBackward1>)
From the docs I found that torch.bmm "does not broadcast. For broadcasting matrix products, see torch.matmul()." But even replacing x = x @ self.block1 with x = torch.matmul(x, self.block1) doesn't change the duplicated outputs. |
st47324 | It is a bit tricky, requiring some shape manipulations:
x shape should be like [batch_dims, ngroups, 1, group_size_in]
weight shape at init: [ngroups, group_size_out, group_size_in]
weight shape in forward(): [ngroups, group_size_in, group_size_out]
This trick with the weights allows an init like:
nn.init.kaiming_uniform_(weight.view(-1, group_size_in))
Then matmul internally reshapes this into bmm format, does the bmm, and finally outputs [batch_dims, ngroups, 1, group_size_out].
PS: grouped conv may actually be OK for a small group count, without this mess; not sure.
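Putting the recipe together, here is a minimal sketch of the batched-matmul version (my own reading of the shapes above, so treat the exact layout and init as assumptions):
import torch
import torch.nn as nn
import torch.nn.functional as F

def group_weight(ngroups, gout, gin):
    # weight shape at init: [ngroups, group_size_out, group_size_in]
    w = torch.empty(ngroups, gout, gin)
    nn.init.kaiming_uniform_(w.view(-1, gin))  # init as one stacked [ngroups*gout, gin] matrix
    return nn.Parameter(w)

class ModelBMM(nn.Module):
    def __init__(self, in_features, hidden, out_features):
        super().__init__()
        self.ngroups = out_features
        self.w1 = group_weight(out_features, hidden, in_features)
        self.w2 = group_weight(out_features, hidden, hidden)
        self.w3 = group_weight(out_features, 1, hidden)
        self.b1 = nn.Parameter(torch.zeros(out_features, 1, hidden))
        self.b2 = nn.Parameter(torch.zeros(out_features, 1, hidden))
        self.b3 = nn.Parameter(torch.zeros(out_features, 1, 1))

    def forward(self, x):
        # x: [B, in_features] -> [ngroups, B, in_features]
        x = x.unsqueeze(0).expand(self.ngroups, -1, -1)
        # weights are transposed in forward(): [ngroups, group_size_in, group_size_out]
        x = F.relu(torch.baddbmm(self.b1, x, self.w1.transpose(1, 2)))
        x = F.relu(torch.baddbmm(self.b2, x, self.w2.transpose(1, 2)))
        x = torch.baddbmm(self.b3, x, self.w3.transpose(1, 2))  # [ngroups, B, 1]
        return x.squeeze(-1).t()                                # [B, ngroups]

mp = ModelBMM(in_features=150, hidden=10, out_features=50)
print(mp(torch.randn(32, 150)).shape)  # torch.Size([32, 50])
Note that the leading dimension of every weight is ngroups rather than 1, as in the attempt above, so the groups no longer share parameters and the outputs are no longer duplicated. |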
st47325 | Got it working with the grouped conv after switching the first layer to 1 group! It's blazing fast. Thank you so much; my iteration time just dropped to 4 sec/epoch from 4 min/epoch!! |
st47326 | Suppose there is a tensor
A = tensor([[0.4869, 0.5144, 0.9086, 0.6139],
[0.5103, 0.8270, 0.4832, 0.8980],
[0.5234, 0.1135, 0.1037, 0.7451]])
and I want to replace the elements in each row with zeros depending on another tensor t = tensor([0, 1, 3]), i.e. zero out the first t[i] elements of row i.
The output should be like
out = tensor([[0.4869, 0.5144, 0.9086, 0.6139],
[0, 0.8270, 0.4832, 0.8980],
[0, 0, 0, 0.7451]])
I have already tried an implementation that uses the torch.gather function, but that operation seems to consume a lot of memory and runs into memory overflow when dealing with huge tensors. |
st47327 | Hi,
I think this thread can help you with this:
Set value of torch tensor up to some index nlp
Hi,
You can do this by creating binary mask.
First we create a matrix with same shape as X filled with zeros, then put 1s where index matches with ind tensor and finally by using cumsum, set 1 after previously located points. Finally, we can mask X:
X = torch.tensor(
[[1,0,4,5,6],
[3,6,7,10, 13],
[1,4,2,8,21]])
ind = torch.tensor([2,3, 1])
mask = torch.zeros_like(X)
mask[(torch.arange(X.shape[0]), ind)] = 1
mask = 1 - mask.cumsum(dim=-1)
mask
# mask
# tensor([[1, 1, 0, 0, 0]…
But I have not tested it on large tensors.
Bests
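Adapted to the example in the question (zeroing the first t[i] entries of row i), a sketch along the same lines:
import torch

A = torch.tensor([[0.4869, 0.5144, 0.9086, 0.6139],
                  [0.5103, 0.8270, 0.4832, 0.8980],
                  [0.5234, 0.1135, 0.1037, 0.7451]])
t = torch.tensor([0, 1, 3])
mask = torch.arange(A.size(1)) >= t.unsqueeze(1)  # True where values are kept
out = A * mask |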
st47328 | Hi all. For my project, I want to use the torch.optim.lr_scheduler.ReduceLROnPlateau learning rate scheduler to adjust my learning rate based on my validation loss. Looking at the documentation, however, it is not very clear to me what the threshold_mode parameter does, and which version of it, rel or abs is best to use?
The rel mode is the default one; however, doing some quick calculations with pen and paper (sketched below), it appears that this mode might lead to some very small margins when comparing validation losses between two iterations, much smaller than the threshold I defined for measuring a new optimum.
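# My understanding of the comparison the scheduler makes for mode='min',
# inferred from the scheduler source, so treat the exact expressions as an assumption:
def is_better(current, best, threshold, threshold_mode):
    if threshold_mode == 'rel':
        # relative margin: shrinks as the loss itself gets smaller
        return current < best * (1 - threshold)
    else:  # 'abs'
        # fixed margin, independent of the loss magnitude
        return current < best - threshold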
Can anybody please advise on this? Cheers! |
st47329 | Hi, I am working on segmentation of the CamVid dataset. The masks are of shape [h,w,3]: for each class, instead of a single class label, we have a color code. I found code for segmenting this dataset which, in order to use CrossEntropyLoss, converts each mask from shape [h,w,3] to [32,h,w], where 32 is the number of classes. There is also another snippet that converts a mask of shape [32,h,w] back to a 3-channel mask of shape [h,w,3]. Now, after training, the predicted masks do not look good at all, and I am exploring where I made a mistake. First I checked the two snippets for converting an RGB mask to a binary mask and back. The code is below.
def Color_map(df):
'''
Builds lookup dictionaries between color codes, class ids, and class names.
Parameters:
dataframe: A Dataframe with rgb values with class maps.
Returns:
code2id: A dictionary with color as keys and class id as values.
id2code: A dictionary with class id as keys and color as values.
name2id: A dictionary with class name as keys and class id as values.
id2name: A dictionary with class id as keys and class name as values.
'''
cls = pd.read_csv(df)
# this line collects the color code of each class as a tuple
# len(cls.name) is the number of classes that we have: 32
# output: [(64, 128, 64), (192, 0, 128)]
color_code = [tuple(cls.drop("name",axis=1).loc[idx]) for idx in range(len(cls.name))]
# it gives a number to each code
# assigns color codes to id numbers
code2id = {v: k for k, v in enumerate(list(color_code))}
# it assigns numbers(classes) to codes
id2code = {k: v for k, v in enumerate(list(color_code))}
# it collects name of each class
color_name = [cls['name'][idx] for idx in range(len(cls.name))]
# it gives to each class a number
name2id = {v: k for k, v in enumerate(list(color_name))}
# it gives
id2name = {k: v for k, v in enumerate(list(color_name))}
return(code2id, id2code, name2id, id2name)
def mask_to_rgb(mask, id2code):
'''
Converts a Binary Mask of shape: [batch_size,num_classes,h,w]
to RGB image mask of shape [batch_size, h, w, color_code]
Parameters:
img: A Binary mask
color_map: Dictionary representing color mappings
returns:
out: A RGB mask of shape [batch_size, h, w, color_code]
'''
## Since our mask is one-hot encoding
## the argmax returns the output class for each pixel
## It returns the label of each pixel that is a number in range : 0-31
## dim 0 :batch_size
single_layer = np.argmax(mask, axis=1)
## it converts each mask to [batch_size, h,w, color_code]
output = np.zeros((mask.shape[0],mask.shape[2],mask.shape[3],3))
for k in id2code.keys():
output[single_layer==k] = id2code[k]
return(output.astype(np.float32))
def rgb_to_mask(img, id2code):
'''
Converts a RGB image mask of shape [batch_size,h, w, color_code], to a mask of shape
[batch_size,n_classes,h,w]
Parameters:
img: A RGB img mask
color_map: Dictionary representing color mappings: ecah class assigns to a unique color code
returns:
out: A Binary Mask of shape [batch_size, classes, h, w]
'''
# num_classes is equal to len(mask)
num_classes = len(id2code)
# it makes a tensor of shape h,w,num_classes:(720,960,num_classes)
shape = img.shape[:2]+(num_classes,)
# it makes a tensor with given shape and with type float64
out = np.zeros(shape, dtype=np.float64)
#
for i, cls in enumerate(id2code):
#print(f'i: {i}, cls: {cls}')
# img.reshape((-1,3)) flats mask except in channels
# it reads thecolor code for a multiplication of higght and width and if it is one of the color code of
# the classes that we have then the third dimension takes the label of that class and the first
# two dimsnions return to the hight and width
out[:,:,i] = np.all(np.array(img).reshape((-1,3)) == id2code[i], axis=1).reshape(shape[:2])
# out: hight, width, class
# returns class, hight, width
return(out.transpose(2,0,1))
I expect that for a mask from the training set, converting it to a binary mask using rgb_to_mask and then back to an RGB mask using mask_to_rgb would reproduce the original mask. But they are not the same, as can be seen in the following code; I do not know where the problem is.
print(f'mask_sample_shape: {mask_sample.shape} mask_sample: {mask_sample.dtype}, mask_type: {type(mask_sample)}')
_, id2code,_,_ = Color_map(os.path.join(path,'class_dict.csv'))
mask_cls = rgb_to_mask(mask_sample, id2code)
print(f'mask_cls: {mask_cls.shape} mask_cls:{mask_cls.dtype} mask_cls_type: {type(mask_cls)}')
## Now converting mask_cls to mask_rgb
mask_rgb = mask_to_rgb(mask_cls[np.newaxis,...], id2code)
mask_rgb = mask_rgb.squeeze(0)
print(f'mask_rgb_shape: {mask_rgb.shape} mask_rgb: {mask_rgb.dtype}, mask_rgb_type: {type(mask_rgb)}')
comparison = mask_rgb==mask_sample
print(comparison.all())
out:
mask_sample_shape: (720, 960, 3) mask_sample: float32, mask_type: <class 'numpy.ndarray'>
mask_cls: (32, 720, 960) mask_cls:float32 mask_cls_type: <class 'numpy.ndarray'>
mask_rgb_shape: (720, 960, 3) mask_rgb: float32, mask_rgb_type: <class 'numpy.ndarray'>
False
Does anyone have any idea where the problem is? I also visualized both masks at the end, but the results were not the same.
Thanks |
st47330 | Given a 2D tensor x and 2D tensors of start and end indices, how can I get a tensor with the slices of x? I am looking for a vectorized function that can do this:
>>> x = torch.arange(0, 20).view(4, 5)
tensor([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19]])
>>> starts = torch.tensor([[0,0], [0,3], [2,1]])
tensor([[0,0], [0,3], [2,1]])
>>> ends = torch.tensor([[0,2], [0,5], [2,5]])
tensor([[0,2], [0,5], [2,5]])
>>> slices = some_function(x, starts, ends)
tensor([[0, 1],
[3, 4],
[11, 12, 13, 14]])
starts and ends have the same length, and each slice given by the pair starts[i], ends[i] runs within a single row of x (the same index in dimension 0). Any help would be appreciated |
st47331 | First, your output cannot be a single tensor, because the rows don't have the same length. If they did have the same length, you could refer here; otherwise, I'm afraid it takes a loop to solve it.
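For instance, a loop-based sketch over the example above:
slices = [x[r, c0:c1] for (r, c0), (_, c1) in zip(starts.tolist(), ends.tolist())]
# [tensor([0, 1]), tensor([3, 4]), tensor([11, 12, 13, 14])] |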
st47332 | Hi
I would like to use torch.nn.functional.grid_sample to resample a 3D image volume, so I tested it with identity displacements (see the code below). This should not spatially transform the image; however, it re-oriented the image. The attached figure shows the image after resampling with F.grid_sample in the first row; the second row shows the original image.
Is there something I missed or did wrong? Any suggestions on how to fix this?
import numpy as np
import torch
import torch.nn.functional as F
def coordinates_map(size, start=-1., end=1.):
'''This is a function to create a 3D grid '''
batch_size, channels = size[:2]
d,h,w = size[2:]
w_p = np.linspace(start, end, num=w)
h_p = np.linspace(start, end, num=h)
d_p = np.linspace(start, end, num=d)
coords = np.stack(np.meshgrid(h_p,d_p,w_p, indexing='xy'), axis=-1)[np.newaxis,...]
return torch.from_numpy(coords).float().expand((batch_size,)+coords.shape[1:])
image = torch.rand((1, 1, 48, 65, 64)) # I used a real image, not a random tensor
grid = coordinates_map(size=image.shape)
deformed_image = F.grid_sample(image, grid)
st47333 | Hey Sureerat, not sure if you've already solved this, but I was just working on the exact same problem today. There is just one more step at the end that you could implement: grid_sample changes the orientation of the scan you input, so you use .permute() on your output tensor to reorient it.
I found that to get the proper resampling I had to rearrange just the last three dimensions:
d1 = torch.linspace(-1, 1, self.shape[2])
d2 = torch.linspace(-1, 1, self.shape[1])
d3 = torch.linspace(-1, 1, self.shape[0])
meshx, meshy, meshz = torch.meshgrid((d1, d2, d3))
grid = torch.stack((meshx, meshy, meshz), 3)
grid = grid.unsqueeze(0)  # add batch dim
out = torch.nn.functional.grid_sample(x, grid)
out = out.permute(0, 1, 4, 3, 2) |
st47334 | def max(self, list_in):
    # type: (List[List[int]]) -> List[int]
    maxes = list_in[0]
    for sublist in list_in[1:]:
        for index, item in enumerate(sublist):
            maxes[index] = max(maxes[index], item)
    return maxes
In this function, if I don't follow the rule (input dtype is List[List[int]] and output type is List[int]), nothing happens... so what's the meaning of "# type"? |
st47335 | In plain Python it's just a comment documenting the types of the input and output, which is why nothing happens when you break it. TorchScript (torch.jit.script), however, does read these MyPy-style type comments to determine the declared types. |
st47336 | I found that when I load and evaluate models that were trained and saved on Windows under a different operating system, the results are inconsistent.
Phenomenon:
I trained a model on Windows 10 and tested it on the test set before saving, getting accuracy = 63.91%. Then I saved it to "model_epoN".
When I load and evaluate it on the same Windows 10 machine, the results are consistent, i.e. accuracy = 63.91% again.
But when I load and evaluate it on Ubuntu, the results are inconsistent: accuracy = 0.1%, like random guessing.
I've noticed the difference between CRLF and LF line-break types and made small changes to the datasets accordingly, so I'm sure the problem lies somewhere else.
Version information:
The pytorch version on Windows 10 python3.6 is 0.3.0b0+591e73e (peterjc123).
The pytorch version on Ubuntu 16.4 python3.5 is 0.3.0.post4 (official).
Save function:
def save_checkpoint(parser, epoch):
torch.save({'state_dict': parser.state_dict()},
'model_epo' + str(epoch + 1))
Load function:
def load_checkpoint(filename, parser):
checkpoint = torch.load(filename)
parser.load_state_dict(checkpoint['state_dict'])
return parser
No error or warning was raised. I guess the formats for saving the state_dict differ according to the OS?
Currently, I'm not able to write a minimal working example because the model is too complicated and builds dynamic nets in each iteration. I haven't tried training the model on Linux and testing it on Windows yet, but I will; I still need some time.
I hope the problem can be solved in PyTorch v0.4.0. |
st47337 | Did you find a workaround? I have the same problem and need to load my model on Linux to dockerize it. |
st47338 | Hi, I want to convert the output tensor values I'm getting from a UNet into images. Is there any way to do this? Below is the code chunk where I want to do it:
def test_step(self, batch, batch_nb):
x, y = batch
y_hat = self.forward(x)
loss = torch.nn.MSELoss()
op_loss = loss(y_hat, y)
#saving tensors to images code goes here
print(op_loss)
return {'test_loss': op_loss}
I want to save the tensors as images to some local file path after calculating op_loss |
st47339 | You can convert the tensors to numpy arrays and save them using OpenCV:
tensor = tensor.cpu().numpy()  # make sure the tensor is on the CPU
cv2.imwrite("image.png", tensor)  # note: the filename comes first |
st47340 | Thanks for the response! Can I still do this if my tensor is on the GPU? I'm training my model on the GPU. If not, is there any other way to do it? |
st47341 | No, you have to make sure your data is on the CPU.
Also, even if you train your model on the GPU, all you have to do is move the output tensor to the CPU and store the images. Here is a minimal example:
model.cuda()
output = model(input)
output = output.cpu().numpy()
cv2.imwrite('pic.png', output) |
st47342 | Hi,
while doing the conversion I'm getting a length exception. I posted the issue here:
Getting the error while converting the tensors to Images using cv2
Hi,
I have converted the tensors which I’m getting as an output(test set) from my UNet to images. When I’m testing the model it is throwing out of index exception which I’m not getting when I didn’t convert the tensors to images. I have 74 images in my test set but after getting tested with one image it is throwing out of index exception in my custom dataset class. any idea will there will be any problem if I do a conversion like that?
# this is where im converting
def test_step(self, batch, …
Please give any suggestions about this |
st47343 | Once you have your tensor on the CPU, another possibility is to apply a sigmoid to your output and estimate a threshold (the midpoint, for example) in order to save it as a binary image.
from torchvision.utils import save_image
img1 = torch.sigmoid(output) # output is the output tensor of your UNet; the sigmoid squashes the range into (0, 1)
# Binarize the image
threshold = (img1.min() + img1.max()) * 0.5
ima = torch.where(img1 > threshold, torch.tensor(0.9), torch.tensor(0.1))
save_image(ima, 'BIN_ima.png')
Or you could try to “greyscale” the image…
img1 = torch.sigmoid(output)
lo = img1.min()
hi = img1.max()
img2 = (img1 - lo) / (hi - lo)  # linearly rescale to [0, 1]
save_image(img2, 'GREY_img.png') |
st47344 | Hi,
I am dealing with 3D image data. The bottleneck of network design is both GPU and CPU memory.
I try to estimate the GPU memory needed for a given network architecture. However, it seems that my estimation is always much lower than what the network actually consumes. In the following example,
import torch.nn as nn
import torch
from torch.autograd import Variable
net = nn.Sequential(
nn.Conv3d(1, 16, 5, 1, 2),
)
net.cuda()
input = torch.FloatTensor(1, 1, 64, 128, 128).cuda()
input = Variable(input)
out = net(input)
print(out.size())
The actual GPU memory consumed is 448 MB if I add a breakpoint at the last line and use nvidia-smi to check the GPU memory consumption. However, my manual calculation is based on:
total consumed GPU memory = GPU memory for parameters x 2 (one for values, one for gradients) + GPU memory for storing the forward and backward responses.
So the manual estimate would be 4 MB (for the input) + 64 MB x 2 (for forward and backward) + << 1 MB (for the parameters), roughly 132 MB. There is still a big gap from 132 MB to 448 MB, and I don’t know what I am missing. Any idea how to manually calculate the GPU memory required for a network? |
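For reference, the arithmetic can be checked in code. In more recent PyTorch versions, torch.cuda.memory_allocated reports what PyTorch has actually handed out to tensors, excluding the CUDA context and the allocator's cache, so it is much closer to a manual estimate than nvidia-smi (a sketch):
import torch
import torch.nn as nn
net = nn.Conv3d(1, 16, 5, 1, 2).cuda()
x = torch.randn(1, 1, 64, 128, 128, device='cuda')
before = torch.cuda.memory_allocated()
out = net(x)
after = torch.cuda.memory_allocated()
out_mb = out.numel() * 4 / 1024**2    # 16 * 64 * 128 * 128 floats = 64 MB
print('estimated output: %.1f MB, allocated by the forward pass: %.1f MB'
      % (out_mb, (after - before) / 1024**2))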
st47345 | A couple hundred MB are usually taken just by initializing CUDA. Look at your memory consumption after
a = torch.cuda.FloatTensor(1)
, that would give you the framework overhead. |
st47346 | Thank you for the suggestion.
I found that 273 MB was used for initializing CUDA on my side.
After running this script, the program consumes 277 MB, which matches my calculation well: 273 MB + 4 MB (for the input).
net = nn.Conv3d(1, 16, 3, 1, 1)
net.cuda()
input = torch.FloatTensor(1, 1, 64, 128, 128).cuda()
input = Variable(input)
# out = net(input)
However, if I remove the comment on the last line, the program consumes 436 MB, i.e., 159 MB more. The size of the output (16 x 64 x 128 x 128 floats x 4 bytes x 2) is only 128 MB, so there is still a sizable gap. Does anybody know why? Where is this additional memory consumed? |
st47347 | Most likely (IIRC) this is workspace used by the convolution kernel; the way PyTorch allocates memory it will continue to leave blocks marked as in use from the perspective of nvidia-smi even if it’s no longer using them internally. This is because CUDA’s malloc and free functions are quite slow, and it’s much more efficient to cache allocated blocks in a free list. When the device runs out of memory, PyTorch will call CUDA’s free function on all free blocks and the memory usage seen by nvidia-smi will fall. |
st47348 | Thanks for the detailed answer. So besides running a network and checking nvidia-smi, is there any other principled way to estimate the GPU memory usage without running the network? |
st47349 | If you don’t use CUDNN, then likely yes (most operations won’t use any scratchpad space, and those that do will allocate a deterministic amount that you can find in the code). But CUDNN contains many different algorithms and implementations for each operation (conv, pool, RNN, etc) with different memory requirements, and which algorithm is chosen depends in a complicated way on the sizes of all the inputs and the values of CUDNN flags. The memory usage that you’ve computed is probably accurate if you don’t count the cached free blocks, so if you’re trying to fit a network in a particular device with a given amount of memory that may be all you need to do. |
st47350 | To any future readers:
I implemented a quick tool to automate memory size estimation based on the approach above.
GitHub: jacobkimmel/pytorch_modelsize - Estimates the size of a PyTorch model in memory
As this discussion outlines, note that these size estimations are only theoretical estimates, with implementation details altering the exact model size in practice!
Hope someone finds it useful |
st47351 | ngimel:
A couple hundred MB are usually taken just by initializing CUDA. Look at your memory consumption after
a = torch.cuda.FloatTensor(1)
, that would give you the framework overhead.
Thanks for this suggestion.
Any thoughts as to why this overhead might be a lot more than a couple hundred MB?
I checked this now exactly as you suggested, by allocating a unit-sized tensor, and in my case the overhead seems to be 1229 MB! This is clearly too much.
I checked right before the allocation and it was close to zero; just after allocating this unit tensor, it jumped to 1229 MB.
I’m using PyTorch 1.7, CUDA 10.1 and a Tesla V100 GPU. |
st47352 | Hi, I want to train this network and then add 2 convolutional layers after conv layer 5. I trained this network and coded another network with the same architecture but with 2 more conv layers. First I copied the weights of the shared layers to the new network and then froze those layers, but the new network does not work at all!
I initialized the two new conv layers so that they act as the identity:
model2.conv6[0].weight.data = torch.zeros((1, 1, 7, 7))
model2.conv6[0].weight.data[0, 0, 3, 3] = 1
model2.conv6[0].bias.data = torch.zeros(1)
model2.conv7[0].weight.data = torch.zeros((1, 1, 3, 3))
model2.conv7[0].weight.data[0, 0, 1, 1] = 1
model2.conv7[0].bias.data = torch.zeros(1)
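As a sanity check (a sketch with a standalone conv; it assumes padding=3 for the 7x7 kernel and padding=1 for the 3x3 one, and that no BatchNorm/ReLU follows them), a delta kernel initialized this way should reproduce its input exactly:
import torch
import torch.nn as nn
conv = nn.Conv2d(1, 1, kernel_size=7, stride=1, padding=3)
conv.weight.data.zero_()
conv.weight.data[0, 0, 3, 3] = 1   # delta kernel: each output pixel equals the input pixel
conv.bias.data.zero_()
x = torch.randn(1, 1, 40, 40)
print(torch.allclose(conv(x), x))  # should print True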
the network:
class Net(nn.Module):
def __init__(self, SR, block_size, phi):
super(Net, self).__init__()  # was super(MHCSResNet, self), which would raise a NameError
self.conv1= nn.Sequential(
nn.Conv2d(1,64, kernel_size=(9,9), stride=1,padding=4),
nn.BatchNorm2d(64),
nn.ReLU()
)
self.conv2= nn.Sequential(
nn.Conv2d(64,32, kernel_size=(7,7), stride=1,padding=3),
nn.BatchNorm2d(32),
nn.ReLU()
)
self.conv3= nn.Sequential(
nn.Conv2d(32,16, kernel_size=(5,5), stride=1,padding=2),
nn.BatchNorm2d(16),
nn.ReLU()
)
self.conv4= nn.Sequential(
nn.Conv2d(16, 8, kernel_size=(3, 3), stride=1,padding=1),
nn.BatchNorm2d(8),
nn.ReLU()
)
self.conv5= nn.Sequential(
nn.Conv2d(8,1,kernel_size=(1,1),stride=1,padding=0),
)
###########
# I want to add two conv layers here:
############
self.fc=nn.Linear(1600,64)
def forward(self,kr,y,phi):
out_conv1=self.conv1(kr)
out_conv2=self.conv2(out_conv1)
out_conv3=self.conv3(out_conv2)
out_conv4=self.conv4(out_conv3)
out_conv5=self.conv5(out_conv4)
###########
# I want to add two conv layers here:
############
out_feedback=kr+out_conv5
out_linear=self.fc(out_feedback.flatten(2))
return out_linear |
st47353 | lifeblack:
but the new network does not work at all!
What do you mean by this?
Can you try experimenting with just the default initializations? |
st47354 | Hi, thanks for your reply.
I thought that if I initialize the new layers as the identity, then the first epoch would behave the same as the first network and the output would be OK, but the new network’s output does not match the first network’s output from its last epoch.
On the other hand, the new network takes much more time for each epoch (the first network has roughly 20,000 parameters and the two new layers add only about 60).
And the new network’s output is not good at all: it is a gray picture that changes only a little each epoch. |
st47355 | I have installed Pytorch via conda using the following command
conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch
I have a GTX 1050 GPU and the latest drivers installed on a Windows 10 laptop. All I’m trying to do is train a simple neural network on the GPU. But, no matter what I do, the training is executed on the CPU. The GPU is not utilized at all. Following is the code I’m using,
import numpy as np
import torch
import torchvision
from torchvision import datasets, transforms
from torch import nn, optim
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
dtype = torch.cuda.FloatTensor
transform = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5,), (0.5,)),])
mnist_data = torchvision.datasets.MNIST('D:\python_workspace',train=True, transform=transform, download=True)
train_data = torch.utils.data.DataLoader(mnist_data,batch_size=100,shuffle=True)
input_size = 784
hidden_size = 30
output_size = 10
model = nn.Sequential(nn.Linear(input_size, hidden_size),
nn.Sigmoid(),
nn.Linear(hidden_size, output_size),
nn.Sigmoid())
model = model.to(device)
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=3, momentum=0.9)
epochs = 15
for e in range(epochs):
running_loss = 0
for images, labels in train_data:
images = images.to(device)
labels = labels.to(device)
labels = (torch.nn.functional.one_hot(labels)).float()
images = images.view(images.shape[0], -1)
optimizer.zero_grad()
output = model(images)
loss = criterion(output, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
print("Epoch {} - Training loss: {}".format(e, running_loss/len(train_data)))
Why does this code not run on the GPU? FYI, print(device) gives device(type='cuda', index=0).
Any ideas? |
st47356 | Solved by zeke in post #5 |
st47357 | zeke:
print(device)
Both "cuda" and "cuda:0" are valid arguments; you can simply use
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
and your tensors will be routed to the current CUDA device. |
st47358 | You can run this test to confirm whether you are utilising the GPU.
Start training your model (run the python script), then in a CMD prompt window run the command below. It will list the processes using the GPU every 5 seconds:
nvidia-smi.exe -l 5 |
st47359 | I monitored GPU usage via nvidia-smi and also increased the network’s size. It turns out that the network was too small to fully utilize the GPU; increasing its size increased GPU usage. |
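For anyone hitting the same symptom, placement and allocation can also be confirmed from Python before reaching for nvidia-smi (a sketch, using the model from the snippet above):
import torch
print(next(model.parameters()).device)                # should print cuda:0
print(torch.cuda.memory_allocated() / 1024**2, 'MB')  # memory occupied by tensors
# A 784 -> 30 -> 10 MLP has only ~24k parameters, so each batch keeps the
# GPU busy for microseconds and the reported utilization stays near 0%.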
st47360 | I am a mechanical engineering student and this is my first time venturing into coding. I wish to create a neural network for predicting an experimental output.
I have three inputs,
let’s say power (P), feed (F), and speed (V), and
two outputs,
let’s say X and Y.
I hear that there are different types of neural networks, like ANNs, RNNs, and CNNs; these are some that I know of. I want to create a customisable neural network in which we can change the number of neurons, layers, training functions, and other aspects, to compare different networks and decide on the one with the highest prediction accuracy.
Can anyone please help? |
st47361 | To get familiar with PyTorch I would recommend taking a look at the tutorials.
However, given that you are just starting out in machine learning, a course such as fast.ai might also be a good starting point.
If you want to learn more about deep learning specifically, you could take a look at the Deep Learning Book, or at Bishop’s Pattern Recognition and Machine Learning for a general ML introduction. |
st47362 | I am trying to implement YOLO v3 on a custom dataset with a pretrained ResNeXt101 backbone, connecting layer1, layer2 and layer3 to the YOLO object detection layers.
class MDENet(BaseModel):
# YOLOv3 object detection model
def __init__(self, yolo_props, path=None, features=256, non_negative=True, img_size=(416, 416), verbose=False):
super(MDENet, self).__init__()
use_pretrained = True if path is None else False
self.pretrained, self.scratch = _make_encoder(features, use_pretrained)
for param in self.pretrained.parameters():
param.requires_grad = False
# print(self.pretrained)
self.scratch.refinenet4 = FeatureFusionBlock(features)
self.scratch.refinenet3 = FeatureFusionBlock(features)
self.scratch.refinenet2 = FeatureFusionBlock(features)
self.scratch.refinenet1 = FeatureFusionBlock(features)
self.scratch.output_conv = nn.Sequential(
nn.Conv2d(features, 128, kernel_size=3, stride=1, padding=1),
Interpolate(scale_factor=2, mode="bilinear"),
nn.Conv2d(128, 32, kernel_size=3, stride=1, padding=1),
nn.ReLU(True),
nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0),
nn.ReLU(True) if non_negative else nn.Identity(),
)
if path:
self.load(path)
# YOLO head
conv_output = (int(yolo_props["num_classes"]) + 5) * int((len(yolo_props["anchors"]) / 3))
self.upsample1 = nn.Sequential(
nn.Conv2d(1024, 256, kernel_size=1),
nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
)
self.upsample2 = nn.Sequential(
nn.Conv2d(512, 128, kernel_size=1),
nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
)
self.identity = nn.Identity()
# small objects
self.yolo1_learner = nn.Sequential(
nn.Conv2d(1024, 512, kernel_size=1),
nn.BatchNorm2d(512),
nn.ReLU(inplace=True),
nn.Conv2d(512, 1024, kernel_size=3),
nn.BatchNorm2d(1024),
nn.ReLU(inplace=True)
)
self.yolo1_reduce = nn.Conv2d(1024, conv_output, kernel_size=1, stride=1, padding=1)
self.yolo1 = YOLOLayer(yolo_props["anchors"][:3],
nc=int(yolo_props["num_classes"]),
img_size=img_size,
yolo_index=0,
layers=[],
stride=32)
# medium objects
self.yolo2_learner = nn.Sequential(
nn.Conv2d(768, 256, kernel_size=1, padding=1),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
nn.Conv2d(256, 512, kernel_size=3),
nn.BatchNorm2d(512),
nn.ReLU(inplace=True),
nn.Conv2d(512, 256, kernel_size=1, padding=1),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
nn.Conv2d(256, 512, kernel_size=3),
nn.BatchNorm2d(512),
nn.ReLU(inplace=True),
nn.Conv2d(512, 256, kernel_size=1, padding=1),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
nn.Conv2d(256, 512, kernel_size=3),
nn.BatchNorm2d(512),
nn.ReLU(inplace=True)
)
self.yolo2_reduce = nn.Conv2d(512, conv_output, kernel_size=1, stride=1, padding=1)
self.yolo2 = YOLOLayer(yolo_props["anchors"][3:6],
nc=int(yolo_props["num_classes"]),
img_size=img_size,
yolo_index=1,
layers=[],
stride=16)
# large objects
self.yolo3_learner = nn.Sequential(
nn.Conv2d(384, 128, kernel_size=1, padding=1),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
nn.Conv2d(128, 256, kernel_size=3),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
nn.Conv2d(384, 128, kernel_size=1, padding=1),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
nn.Conv2d(128, 256, kernel_size=3),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
nn.Conv2d(384, 128, kernel_size=1, padding=1),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
nn.Conv2d(128, 256, kernel_size=3),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True)
)
self.yolo3_reduce = nn.Conv2d(256, conv_output, kernel_size=1, stride=1, padding=1)
self.yolo3 = YOLOLayer(yolo_props["anchors"][6:],
nc=int(yolo_props["num_classes"]),
img_size=img_size,
yolo_index=1,
layers=[],
stride=8)
This is my forward() function:
def forward(self, x):
# Pretrained ResNeXt101 backbone
layer_1 = self.pretrained.layer1(x)
layer_2 = self.pretrained.layer2(layer_1)
layer_3 = self.pretrained.layer3(layer_2)
layer_4 = self.pretrained.layer4(layer_3)
# Depth Detection
layer_1_rn = self.scratch.layer1_rn(layer_1)
layer_2_rn = self.scratch.layer2_rn(layer_2)
layer_3_rn = self.scratch.layer3_rn(layer_3)
layer_4_rn = self.scratch.layer4_rn(layer_4)
path_4 = self.scratch.refinenet4(layer_4_rn)
path_3 = self.scratch.refinenet3(path_4, layer_3_rn)
path_2 = self.scratch.refinenet2(path_3, layer_2_rn)
path_1 = self.scratch.refinenet1(path_2, layer_1_rn)
depth_out = self.scratch.output_conv(path_1)
# Object Detection
# small objects
yolo1_out = self.yolo1(self.yolo1_reduce(self.yolo1_learner(layer_3)))
layer_3 = self.upsample1(layer_3)
layer_3 = torch.cat([layer_3, layer_2], dim=1)
print("layer_3.shape", layer_3.shape)
# medium objects
layer_3 = self.yolo2_learner(layer_3)
yolo2_out = self.yolo2(self.yolo2_reduce(layer_3))
layer_2 = self.upsample2(layer_3)
layer_2 = torch.cat([layer_1, layer_2], dim=1)
print("layer_2.shape", layer_2.shape)
# large objects
layer_2 = self.yolo3_learner(layer_2)
yolo3_out = self.yolo3(self.yolo3_reduce(layer_2))
yolo_out = [yolo1_out, yolo2_out, yolo3_out]
return depth_out, yolo_out
While training I get:
Traceback (most recent call last):
File "train.py", line 471, in <module>
train() # train normally
File "train.py", line 296, in train
midas_out, yolo_out = model(imgs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/src/D/Research/EVA5-Vision-Squad/S15/torchutils/model/mde_net/mde_net.py", line 147, in forward
return self.forward_net(x)
File "/src/D/Research/EVA5-Vision-Squad/S15/torchutils/model/mde_net/mde_net.py", line 195, in forward_net
midas_out, yolo_out = self.run_batch(x)
File "/src/D/Research/EVA5-Vision-Squad/S15/torchutils/model/mde_net/mde_net.py", line 268, in run_batch
layer_2 = self.yolo3_learner(layer_2)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py", line 100, in forward
input = module(input)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 353, in forward
return self._conv_forward(input, self.weight)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 350, in _conv_forward
self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size [128, 384, 1, 1], expected input[8, 256, 104, 104] to have 384 channels, but got 256 channels instead
at layer_2 = self.yolo3_learner(layer_2). self.yolo3_learner() expects 384 input channels, and when I print layer_2.shape the tensor does have 384 channels:
layer_2.shape torch.Size([8, 384, 104, 104])
Am I doing anything wrong in the Conv2d or in torch.cat? |
st47363 | Solved by ptrblck in post #2 |
st47364 | yolo3_learner is using a wrong layer config, i.e. the third nn.Conv2d layer expects 384 input channels, while the preceding one outputs 256 channels.
The same issue is in the 5th conv layer. |
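A fixed version might look like this (a sketch: it keeps the alternating 1x1-reduce / 3x3-expand pattern of the other heads and only changes the input channels of the 3rd and 5th 1x1 convs to the 256 channels that the preceding 3x3 convs actually produce):
self.yolo3_learner = nn.Sequential(
    nn.Conv2d(384, 128, kernel_size=1, padding=1),
    nn.BatchNorm2d(128),
    nn.ReLU(inplace=True),
    nn.Conv2d(128, 256, kernel_size=3),
    nn.BatchNorm2d(256),
    nn.ReLU(inplace=True),
    nn.Conv2d(256, 128, kernel_size=1, padding=1),  # was Conv2d(384, 128, ...)
    nn.BatchNorm2d(128),
    nn.ReLU(inplace=True),
    nn.Conv2d(128, 256, kernel_size=3),
    nn.BatchNorm2d(256),
    nn.ReLU(inplace=True),
    nn.Conv2d(256, 128, kernel_size=1, padding=1),  # was Conv2d(384, 128, ...)
    nn.BatchNorm2d(128),
    nn.ReLU(inplace=True),
    nn.Conv2d(128, 256, kernel_size=3),
    nn.BatchNorm2d(256),
    nn.ReLU(inplace=True)
)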
st47365 | I have it like this:
import torch
bla bla bla
mels_self = torch.utils.checkpoint.checkpoint(self.decoder_self_run, mels, encoder_outputs, batch['text_lengths'], batch['mel_lengths'])
With
from torch.utils.checkpoint import checkpoint
the same code works fine. Why is that? |
st47366 | Solved by ptrblck in post #9
I’m not completely sure how the import mechanism works in different Python versions and my understanding is that newer Python versions are more “flexible”. You could thus try it with e.g. Python3.8 and if it’s still not working would need to import the method directly (unsure if it’s a PyTorch limit… |
st47367 | Which PyTorch version are you using? (You can check it via print(torch.__version__).)
torch.utils.checkpoint was introduced some time ago, so I’m wondering if you are working with a really old PyTorch build. If so, could you update to the latest stable release (1.7.0) and retry your code? |
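For completeness: import torch alone is not guaranteed to make the torch.utils.checkpoint attribute path resolvable on every Python/PyTorch combination, so importing the submodule (or the function itself) explicitly is the portable form (a sketch using the call from the question):
import torch.utils.checkpoint  # explicitly import the submodule
mels_self = torch.utils.checkpoint.checkpoint(
    self.decoder_self_run, mels, encoder_outputs,
    batch['text_lengths'], batch['mel_lengths'])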