st101200
I want to use multiple GPUs to train my model, but I get many unexpected errors when implementing the loss function and the backward pass. So I need an example that solves those problems on version 0.4.1.
st101201
model = SoP.SoP_model(*args)
optimizer = torch.optim.Adam([
    {'params': model.audio_s.parameters()},
    {'params': model.drn_model.parameters(), 'lr': args.DRNlr},
], lr=LR, weight_decay=WEIGHT_DECAY)

Here you can see how to set different parameters for different parts of the model. Initialization, pretraining and the whole backward process:

criterion = torch.nn.BCEWithLogitsLoss(size_average=size_average)

def init_weights(m):
    if type(m) == nn.Conv2d:
        nn.init.xavier_uniform_(m.weight, gain=nn.init.calculate_gain('conv2d'))

if Pretrained is not None:
    print('Loading pretrained weights')
    model.load_state_dict(torch.load(Pretrained))
else:
    model.unet_model.apply(init_weights)

if freezeUNET:
    for param in model.unet_model.parameters():
        param.requires_grad = False

model.train()
if CUDA:
    model = torch.nn.DataParallel(model, output_device=1).cuda()

Here you have a little of everything:

for t in range(EPOCHS):
    for j in range(iterations):
        audio, video, gt = loader()
        video = video.float()
        audio = audio.float()
        if CUDA:
            gt = torch.autograd.Variable(gt.cuda(1, async=True))
            video = torch.autograd.Variable(video.cuda(1))
            audio = torch.autograd.Variable(audio.cuda(1))
        else:
            gt = torch.autograd.Variable(gt)
            video = torch.autograd.Variable(video)
            audio = torch.autograd.Variable(audio)
        output = model(video, audio)
        loss = criterion(output, gt.float())
        # compute gradient and do SGD step
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

Notice I'm not using the default dataloader, so that part is different.
st101202
I am creating a 1D tensor like this: t = torch.ones(4), so t is tensor([1., 1., 1., 1.]). Now I want to change the values of this tensor with a step, so I can get a tensor like tensor([1., -1., 1., -1.]). How come this: t[1:-1:2] = -1 produces this: tensor([ 1., -1., 1., 1.]) ?
st101203
Solved by InnovArul in post #2 for your scenario, t[1::2] = -1 because you are excluding the last index by having -1 (t[1:-1:2]).
st101204
for your scenario, t[1::2] = -1 amirhf: How come this: t[1:-1:2] = -1 produces this: tensor([ 1., -1., 1., 1.]) ? because you are excluding the last index by having -1 (t[1:-1:2]).
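A quick way to see the off-by-one is to compare the two slices directly (a minimal check in plain PyTorch):

import torch

t = torch.ones(4)
t2 = t.clone()

t[1:-1:2] = -1   # the -1 endpoint is exclusive, so index 3 is never touched
t2[1::2] = -1    # no endpoint: the slice runs to the end of the tensor

print(t)    # tensor([ 1., -1.,  1.,  1.])
print(t2)   # tensor([ 1., -1.,  1., -1.])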
st101205
Hi, As stated in the title, what memory is released by calling loss.backward()? And why is more memory used for model.eval()? I know torch.no_grad may be able to reduce the memory for evaluation, but I can't update my PyTorch version yet (I am using 0.4.0). From the documents this function should be there, but it cannot be imported. Is there any other way to release memory the way loss.backward() does? I found the following, but it seems to only work for parameters not requiring grad: github.com/pytorch/pytorch: "Improve autograd memory usage" by apaszke. I found the answer in the following thread. It is because of the scoping rules in Python, and two graphs will be generated. Calling loss.backward() reduce memory usage? It's because of the scoping rules in Python. When you do this: while True: loss = model(output) it will always use 2x the memory that is needed to compute the model, because the reference to the loss from the previous iteration won't be overwritten (and thus the graph with all the buffers it holds won't be freed) until this iteration completes. So you'll effectively end up holding two graphs. This is why you should use volatile=True inputs when only doing inference. Once you add .backwa… Thanks.
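As an aside, on 0.4.x the usual pattern for not holding the previous graph alive while accumulating statistics is to call backward() each iteration and keep only a detached Python float. A minimal runnable sketch:

import torch
import torch.nn as nn

model = nn.Linear(4, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

running_loss = 0.0
for _ in range(100):
    x, y = torch.randn(8, 4), torch.randn(8, 1)
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()              # frees the graph's intermediate buffers
    optimizer.step()
    running_loss += loss.item()  # .item() keeps a float, not a graph reference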
st101206
How can a combined loss function like the following be implemented? Loss = loss1 * exp(-w1) + w1 + loss2 * exp(-w2) + w2, where w1 and w2 are also trainable parameters.
st101207
You can implement it just like that: def custom_loss(loss1, loss2, w1, w2): return loss1 * exp(-w1) + w1 + loss2 * exp(-w2) + w2 and make sure you define the tensors w1, w2 with requires_grad=True and pass them to your optimizer so that they can be optimized.
st101208
Hi @richard, thanks for the reply: I made this customized loss. However, the optimizer does not optimize w1 and w2. (I am using PyTorch 0.3.)

class MY_LOSS(nn.Module):
    def __init__(self):
        super(MY_LOSS, self).__init__()
        self.loss1 = nn.CrossEntropyLoss()
        self.loss2 = nn.MSELoss()
        self.w1 = Variable(torch.Tensor(1), requires_grad=True).type(FLOAT)
        self.w2 = Variable(torch.Tensor(1), requires_grad=True).type(FLOAT)

    def forward(self, inp1, tar1, inp2, tar2):
        loss1 = self.loss1(inp1, tar1)
        loss2 = self.loss2(inp2, tar2)
        combined_loss = loss1 * torch.exp(-self.w1) + self.w1 \
                      + loss2 * torch.exp(-self.w2) + self.w2
        return combined_loss, loss1, loss2, self.w1, self.w2
st101209
Did you pass them to the optimizer to be optimized? I think it should be something like this (where my_loss is your MY_LOSS instance):

optimizer = optim.Adam(list(net.parameters()) + [my_loss.w1, my_loss.w2])
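For reference, a minimal self-contained sketch of the same idea on a more recent PyTorch, using nn.Parameter so the loss weights are registered and picked up automatically (the nn.Linear model is just a placeholder):

import torch
import torch.nn as nn
import torch.optim as optim

class CombinedLoss(nn.Module):
    def __init__(self):
        super(CombinedLoss, self).__init__()
        # nn.Parameter registers w1/w2, so criterion.parameters() returns them
        self.w1 = nn.Parameter(torch.zeros(1))
        self.w2 = nn.Parameter(torch.zeros(1))
        self.loss1 = nn.CrossEntropyLoss()
        self.loss2 = nn.MSELoss()

    def forward(self, inp1, tar1, inp2, tar2):
        l1 = self.loss1(inp1, tar1)
        l2 = self.loss2(inp2, tar2)
        return l1 * torch.exp(-self.w1) + self.w1 + l2 * torch.exp(-self.w2) + self.w2

net = nn.Linear(10, 5)  # placeholder model
criterion = CombinedLoss()
# optimize the model weights and the loss weights together
optimizer = optim.Adam(list(net.parameters()) + list(criterion.parameters()))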
st101210
Hi everyone, I am trying to implement asynchronous Q-learning. Each subprocess owns a copy of the deep Q-network, but when performing prediction on a batch input, the forward propagation is blocked for no reason. You can track the original code in the following:

from torch import nn
import torch
import torch.multiprocessing as mp
import numpy as np
import pdb

class OneHotNGramDQN(nn.Module):
    def __init__(self, n, movie_latent_factor_num, layer):
        super(OneHotNGramDQN, self).__init__()
        self._n = n
        self._item_embedding = nn.Embedding(10994, movie_latent_factor_num)
        self._linear_1 = nn.Linear((n + 1) * movie_latent_factor_num, layer[0])
        self._linear_2 = nn.Linear(layer[0], layer[1])
        self._linear_3 = nn.Linear(layer[1], layer[2])
        self._linear_4 = nn.Linear(layer[2], layer[3])
        self._linear_5 = nn.Linear(layer[3], 1)
        self._relu = nn.ReLU()

    def forward(self, state, action):
        state_x = self._item_embedding(state).view(state.shape[0], -1)
        action_x = self._item_embedding(action).view(state.shape[0], -1)
        x = torch.cat([state_x, action_x], dim=-1)
        x = self._relu(self._linear_1(x))
        x = self._relu(self._linear_2(x))
        x = self._relu(self._linear_3(x))
        x = self._relu(self._linear_4(x))
        x = self._linear_5(x)
        return x

def test_1():
    net = OneHotNGramDQN(10, 32, [640, 320, 160, 50])
    feature_input = torch.LongTensor(np.random.randint(0, 10992, (1, 10)))
    action_input = torch.LongTensor([0])
    print(net(feature_input, action_input).shape)

def test_2():
    net = OneHotNGramDQN(10, 32, [640, 320, 160, 50])
    batch_size = 128
    feature_input = torch.LongTensor(np.random.randint(0, 10992, (batch_size, 10)))
    action_input = torch.LongTensor(np.zeros((batch_size, 1)))
    print(net(feature_input, action_input).shape)

if __name__ == '__main__':
    net = OneHotNGramDQN(10, 32, [640, 320, 160, 50])
    feature_input = torch.LongTensor(np.random.randint(0, 10992, (1, 10)))
    action_input = torch.LongTensor([0])
    print(net(feature_input, action_input).shape)

    batch_size = 128
    feature_input = torch.LongTensor(np.random.randint(0, 10992, (batch_size, 10)))
    action_input = torch.LongTensor(np.zeros((batch_size, 1)))
    print(net(feature_input, action_input).shape)

    # Running without problem
    workers = [mp.Process(target=test_1) for i in range(mp.cpu_count())]
    for worker in workers:
        worker.start()
    for worker in workers:
        worker.join()

    # Forward propagation is somehow blocked
    workers = [mp.Process(target=test_2) for i in range(mp.cpu_count())]
    for worker in workers:
        worker.start()
    for worker in workers:
        worker.join()

I do not quite understand why this happens, although single-instance prediction seems to be OK. Could anyone kindly give any clue? Thanks a lot. BR
st101211
I was trying to compile a C++ extension following the tutorial, but failed. Here is the log information:

running install
running bdist_egg
Traceback (most recent call last):
  File "D:/users/v-dalin/workspace/projects/toy/extension-ffi-master/package/cpp_extension/setup.py", line 6, in <module>
    cmdclass={'build_ext': BuildExtension})
  File "D:\users\v-dalin\software\anaconda3\envs\py36-torch04\lib\site-packages\setuptools\__init__.py", line 129, in setup
    return distutils.core.setup(**attrs)
  File "D:\users\v-dalin\software\anaconda3\envs\py36-torch04\lib\distutils\core.py", line 148, in setup
    dist.run_commands()
  File "D:\users\v-dalin\software\anaconda3\envs\py36-torch04\lib\distutils\dist.py", line 955, in run_commands
    self.run_command(cmd)
  File "D:\users\v-dalin\software\anaconda3\envs\py36-torch04\lib\distutils\dist.py", line 974, in run_command
    cmd_obj.run()
  File "D:\users\v-dalin\software\anaconda3\envs\py36-torch04\lib\site-packages\setuptools\command\install.py", line 67, in run
    self.do_egg_install()
  File "D:\users\v-dalin\software\anaconda3\envs\py36-torch04\lib\site-packages\setuptools\command\install.py", line 109, in do_egg_install
    self.run_command('bdist_egg')
  File "D:\users\v-dalin\software\anaconda3\envs\py36-torch04\lib\distutils\cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "D:\users\v-dalin\software\anaconda3\envs\py36-torch04\lib\distutils\dist.py", line 973, in run_command
    cmd_obj.ensure_finalized()
  File "D:\users\v-dalin\software\anaconda3\envs\py36-torch04\lib\distutils\cmd.py", line 107, in ensure_finalized
    self.finalize_options()
  File "D:\users\v-dalin\software\anaconda3\envs\py36-torch04\lib\site-packages\setuptools\command\bdist_egg.py", line 118, in finalize_options
    self.distribution.has_ext_modules() and self.plat_name
  File "D:\users\v-dalin\software\anaconda3\envs\py36-torch04\lib\distutils\dist.py", line 983, in has_ext_modules
    return self.ext_modules and len(self.ext_modules) > 0
TypeError: object of type 'Extension' has no len()

Process finished with exit code 1

OS: Windows Server 2012 R2
PyTorch version: 0.4.1
Python version: Python 3.6.6, Anaconda3
CUDA/cuDNN version: 8.0, 5.1
st101212
Hello! I am having this error; could somebody help me? Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same. Here is the code: https://gist.github.com/evarenieto/be162094adf2f699794bac2a6fd2e8a8
st101213
I guess you need to send the model to the device as well, using model.to(device). I see that you have done that for the accuracy computation, but not for training.
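In 0.4+ a minimal sketch of that pattern, keeping the parameters and the input on the same device, looks like this:

import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = nn.Linear(10, 2).to(device)   # moves the weights
x = torch.randn(1, 10).to(device)     # move the input the same way
out = model(x)                        # both tensors now live on one device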
st101214
Hi, in TensorFlow we have the data_format option in tf.nn.conv2d, which can specify the data format as NHWC or NCHW. Is there an equivalent operation in PyTorch? If not, should we convert the Variable to a numpy.array, use np.swapaxes, and convert it back into a Variable? And under such circumstances, will the gradient be tracked properly?
st101215
@Veril transpose only applies to 2 axes, while permute can be applied to all the axes at the same time. For example:

a = torch.rand(1, 2, 3, 4)
print(a.transpose(0, 3).transpose(1, 2).size())
print(a.permute(3, 2, 1, 0).size())

BTW, permute internally calls transpose a number of times.
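Applied to the NHWC/NCHW question above, permute does the layout change without leaving autograd, so no numpy round-trip is needed (a small sketch):

import torch

nhwc = torch.randn(8, 28, 28, 3, requires_grad=True)   # N, H, W, C
nchw = nhwc.permute(0, 3, 1, 2)                        # N, C, H, W
print(nchw.shape)                                      # torch.Size([8, 3, 28, 28])

nchw.sum().backward()                                  # gradients flow through permute
print(nhwc.grad.shape)                                 # torch.Size([8, 28, 28, 3])

Some downstream ops may expect contiguous memory, in which case a .contiguous() after the permute is needed.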
st101216
Indeed, it can be a shortcut to use tensor.transpose_(0, 1) instead of tensor = tensor.transpose(0, 1) But note that the difference in performance is not significant, as transpose does not copy memory nor allocate new memory, and only swaps the strides.
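That stride-swapping behavior is easy to verify (sketch):

import torch

a = torch.arange(6).view(2, 3)
b = a.transpose(0, 1)

print(a.stride(), b.stride())          # (3, 1) (1, 3): only strides differ
print(b.is_contiguous())               # False
print(a.data_ptr() == b.data_ptr())    # True: same underlying storage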
st101217
Awesome method! Why not combine permute and transpose, or make transpose inaccessible to the user, since it's used internally by permute, as mentioned by fmassa?
st101218
Hi, I implemented a C extension to PyTorch in which I create the output tensor through the TH_API THCudaTensor *THCudaTensor_newWithSize2d function, declared in THTensor.h. This is then used in PyTorch through the cffi. In the end, I am using this function many times, overriding my variable. Imagine something like:

while True:
    result = myfunction(param1, param2)
    # do stuff

It turns out that I have a memory leak: my CUDA memory is soon saturated. Do I need to explicitly call some function to free that tensor? It doesn't seem like it's freed. Is the good practice to create a Python wrapper for that function, in which I define the output tensor to be filled by this external module? Or is there a way to do the allocation there as I do currently, leaving PyTorch to free this memory somehow? Thanks a lot
st101219
Answering to my own question: it seems the correct way to go is indeed to take the result tensor as a parameter to the CUDA extension, and not to allocate this result tensor there. Indeed, the corresponding memory is never released if the tensor has not been created in Python
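In code, the wrapper pattern looks roughly like the sketch below; my_extension.my_kernel stands in for the cffi-wrapped C function (a hypothetical name) and is assumed to fill the output tensor in place:

import torch

def myfunction(param1, param2):
    # Allocate the result in Python so PyTorch's allocator owns it and
    # frees it when the last reference goes away.
    out = torch.cuda.FloatTensor(param1.size())
    my_extension.my_kernel(param1, param2, out)  # hypothetical C call writing into `out`
    return out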
st101220
Hi everybody, I am exploring PyTorch code. I managed to run the code in debug mode.

m = Bernoulli(torch.tensor([0.3]))
s = m.sample()  # 30% chance 1; 70% chance 0
print('a sample: ', s)
# calling directly
print('calling directly torch.bernoulli()', torch.bernoulli(torch.Tensor([0.3])))

I tried to find where torch.bernoulli is implemented but couldn't find it. PyCharm just indicates this is a built-in function. Thank you for your time!
st101221
torch.distributions.Bernoulli calls torch.bernoulli(), which, I think, is implemented in C++.
st101222
Sorry, I clearly misunderstood your question; I thought you were looking for the documentation. I think the source code can be found here: aten/src/ATen/native/Distributions.cpp and aten/src/TH/THRandom.cpp
st101223
Hello, I am writing a small PyTorch example with a simple NN. The program runs fine if I declare

dtype = torch.FloatTensor
#dtype = torch.cuda.FloatTensor # Uncomment this to run on GPU

The code currently runs great with the CPU option. However, as soon as I uncomment and switch to the GPU option, the code crashes when I try to run forward on the model:

y_estimate = NN(x) # Forward pass

with the following error:

File "./test.py", line 44, in <module>
    y_estimate = NN(x) # Forward pass
File "/home/chieh/App/miniconda/lib/python2.7/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
File "/home/chieh/App/miniconda/lib/python2.7/site-packages/torch/nn/modules/container.py", line 67, in forward
    input = module(input)
File "/home/chieh/App/miniconda/lib/python2.7/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
File "/home/chieh/App/miniconda/lib/python2.7/site-packages/torch/nn/modules/linear.py", line 55, in forward
    return F.linear(input, self.weight, self.bias)
File "/home/chieh/App/miniconda/lib/python2.7/site-packages/torch/nn/functional.py", line 835, in linear
    return torch.addmm(bias, input, weight.t())
RuntimeError: Expected object of type Variable[torch.FloatTensor] but found type Variable[torch.cuda.FloatTensor] for argument #1 'mat1'

I guess I just need to change the input type for the model, but I'm not sure how to do it. Any help would be great. The full code is here: docs.google.com: Train Basic Neural Net pytorch. Thank you.
st101224
Solved by SimonW in post #2 Your network is still on cpu. Add NN = NN.cuda().
st101225
Hi guys, I am encountering a similar issue: RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #2 'weight'. I made sure that my model is on the GPU by invoking model = model.cuda(), and I still get the above error message. Any hint as to why this is happening is greatly appreciated. Best
st101226
Sure, here is the function to modify a pretrained ResNet to two classes:

def get_modified_pretrained_model(name):
    describe_model = 'Basic ' + name + ' that outputs 2 rather than 1000 classes!'
    if name == 'resnet18':
        net = models.resnet18(pretrained=True)
    if name == 'resnet34':
        net = models.resnet34(pretrained=True)
    if name == 'resnet50':
        net = models.resnet50(pretrained=True)
    if name == 'resnet101':
        net = models.resnet101(pretrained=True)
    if name == 'resnet152':
        net = models.resnet152(pretrained=True)
    num_ftrs = net.fc.in_features
    net.fc = nn.Sequential(
        nn.Linear(num_ftrs, 2)
    )
    return net, describe_model

here is the training protocol:

def training_protocol(model):
    describe_training_protocol = ('Modified training_protocol with nn.CrossEntropyLoss(), '
                                  'optim.SGD(model_ft.parameters(), lr = 0.0001, momentum=0.9, '
                                  'weight_decay = 0.00001)')
    criterion = nn.CrossEntropyLoss().cuda()
    # Observe that all parameters are being optimized
    optimizer_ft = optim.SGD(model.parameters(), lr=0.0001, momentum=0.9, weight_decay=0.00001)
    return criterion, optimizer_ft, describe_training_protocol

here is the training function:

def train_model(dataloaders, dataset_sizes, model, criterion, optimizer, num_epochs=10, temp_save_name=None):
    since = time.time()
    best_model_wts = model.state_dict()
    best_log_loss = 1
    model = model.cuda()
    for epoch in range(1, num_epochs + 1):
        print('Epoch {}/{}'.format(epoch, num_epochs))
        print('*' * 10)
        # Each epoch has a training and validation phase
        model.train(True)  # Set model to training mode
        # Iterate over data.
        for i, (input, target) in enumerate(dataloaders['train']):
            target = target.cuda(async=True)
            input_var = torch.autograd.Variable(input)
            target_var = torch.autograd.Variable(target)
            # compute output
            output = model(input_var)
            loss = criterion(output, target_var)
            # compute gradient and do SGD step
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        # epoch statistics
        print_training_set_performance(dataset_sizes, dataloaders['train'], model)
        print('----------')
        epoch_sk_log_loss = print_val_set_performance(dataset_sizes, dataloaders['val'], model)
        # deep copy the model if better logloss
        if epoch_sk_log_loss < best_log_loss:
            best_log_loss = epoch_sk_log_loss
            best_model_wts = model.state_dict()
            if temp_save_name is not None:
                print('model is saved after epoch ' + str(epoch))
                name = temp_save_name + '_' + '.pth'
                torch.save(model, name)
        print()
    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
    print('Best val logloss: {:4f}'.format(best_log_loss))
    # load best model weights
    model.load_state_dict(best_model_wts)
    return model

here is the full error message:

Traceback (most recent call last):
  File "finetunacuda.py", line 322, in <module>
    main()
  File "finetunacuda.py", line 320, in main
    fine_tuna_protocol()
  File "finetunacuda.py", line 298, in fine_tuna_protocol
    model_ft = train_model(dataloaders, dataset_sizes, model_ft, criterion, optimizer_ft, num_epochs = nep, temp_save_name = name_of_results_output_txt_file)
  File "finetunacuda.py", line 238, in train_model
    output = model(input_var)
  File "/home/ubuntu/envs/deepL/lib/python3.5/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/envs/deepL/lib/python3.5/site-packages/torchvision/models/resnet.py", line 139, in forward
    x = self.conv1(x)
  File "/home/ubuntu/envs/deepL/lib/python3.5/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/envs/deepL/lib/python3.5/site-packages/torch/nn/modules/conv.py", line 277, in forward
    self.padding, self.dilation, self.groups)
  File "/home/ubuntu/envs/deepL/lib/python3.5/site-packages/torch/nn/functional.py", line 90, in conv2d
    return f(input, weight, bias)
RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #2 'weight'

So, as you can see, the issue occurs during the forward phase of training. Thanks a lot for the assistance
st101227
Hassan_Shallal: target = target.cuda(async=True)
You should make the input CUDA in the same way as you did the target.
st101228
Hi Simon, thanks a lot for the reply. I totally agree with you, but first, I am trying to reproduce the official ImageNet tutorial: https://github.com/pytorch/examples/blob/master/imagenet/main.py Second, I changed the iteration part in the train_model function as follows per your feedback:

# Iterate over data.
for i, (inputs, target) in enumerate(dataloaders['train']):
    target = target.cuda(async=True)
    inputs = inputs.cuda(async=True)
    input_var = torch.autograd.Variable(inputs)
    target_var = torch.autograd.Variable(target)
    # compute output
    output = model(input_var)
    loss = criterion(output, target_var)
    # compute gradient and do SGD step
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Now I get a new error:

Traceback (most recent call last):
  File "finetunacuda.py", line 299, in <module>
    main()
  File "finetunacuda.py", line 297, in main
    fine_tuna_protocol()
  File "finetunacuda.py", line 275, in fine_tuna_protocol
    model_ft = train_model(dataloaders, dataset_sizes, model_ft, criterion, optimizer_ft, num_epochs = nep, temp_save_name = name_of_results_output_txt_file)
  File "finetunacuda.py", line 215, in train_model
    output = model(input_var)
  File "/home/ubuntu/envs/deepL/lib/python3.5/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/envs/deepL/lib/python3.5/site-packages/torchvision/models/resnet.py", line 142, in forward
    x = self.maxpool(x)
  File "/home/ubuntu/envs/deepL/lib/python3.5/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/envs/deepL/lib/python3.5/site-packages/torch/nn/modules/pooling.py", line 143, in forward
    self.return_indices)
  File "/home/ubuntu/envs/deepL/lib/python3.5/site-packages/torch/nn/functional.py", line 334, in max_pool2d
    ret = torch._C._nn.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
RuntimeError: cuda runtime error (2) : out of memory at /pytorch/torch/lib/THC/generic/THCStorage.cu:58

What do you think? I run this on a p2.xlarge EC2 instance on AWS. Thanks a lot for the assistance. Hassan
st101229
And by the way, the above memory issue occurs even with a batch size of 1! So I assure you this has nothing to do with the batch size; the input is standard 3 x 224 x 224 images.
st101230
torch.cuda.is_available() evaluates to True and torch.__version__ evaluates to '0.3.0.post4' on the EC2 instance. Does this imply cuDNN is properly installed? I saw somewhere on the forum that if torch is installed from the PyTorch website, it should come with all necessary CUDA libraries. I used pip3 install http://download.pytorch.org/whl/cu90/torch-0.3.0.post4-cp35-cp35m-linux_x86_64.whl to install the latest version of torch on the instance. I just found out that cuDNN is different from CUDA. I will install cuDNN on the instance and get back with the results. Thanks a lot for the feedback.
st101231
Cool! cuDNN’s conv is faster and more efficient. Please try that and let us know the results
st101232
I have a question: doesn't the PyTorch installation come with CUDA and cuDNN shipped?
st101233
The runtime out-of-memory error still persists even after installing CUDA and cuDNN manually. Is there any tutorial I can follow to make sure I have compatible CUDA/cuDNN/PyTorch/NVIDIA driver versions on an Ubuntu 16 EC2 instance on AWS?
st101234
Hi everybody, After going nowhere trying to set up CUDA and cuDNN on my own, my mentor recommended I use the AWS Deep Learning AMI, which I did. This instance comes with Python, PyTorch, CUDA, and cuDNN preconfigured and installed. I followed the steps in this tutorial to launch the instance: https://aws.amazon.com/blogs/ai/get-started-with-deep-learning-using-the-aws-deep-learning-ami/ After starting the instance I confirmed everything is properly installed as follows:

source activate pytorch_p27
python
>>> import torch
>>> torch.__version__
'0.3.0.post4'
>>> torch.cuda.is_available()
True
>>> torch.backends.cudnn.version()
7003

I am still getting the runtime out-of-memory error trying to finetune resnet18 with a batch size of 1, and I use images with a standard size of 3x224x224:

THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1512378422383/work/torch/lib/THC/generic/THCStorage.cu line=58 error=2 : out of memory
Traceback (most recent call last):
  File "finetunacuda.py", line 299, in <module>
    main()
  File "finetunacuda.py", line 297, in main
    fine_tuna_protocol()
  File "finetunacuda.py", line 275, in fine_tuna_protocol
    model_ft = train_model(dataloaders, dataset_sizes, model_ft, criterion, optimizer_ft, num_epochs = nep, temp_save_name = name_of_results_output_txt_file)
  File "finetunacuda.py", line 215, in train_model
    output = model(input_var)
  File "/home/ubuntu/anaconda3/envs/pytorch_p27/lib/python2.7/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "build/bdist.linux-x86_64/egg/torchvision/models/resnet.py", line 142, in forward
  File "/home/ubuntu/anaconda3/envs/pytorch_p27/lib/python2.7/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/anaconda3/envs/pytorch_p27/lib/python2.7/site-packages/torch/nn/modules/pooling.py", line 143, in forward
    self.return_indices)
  File "/home/ubuntu/anaconda3/envs/pytorch_p27/lib/python2.7/site-packages/torch/nn/functional.py", line 334, in max_pool2d
    ret = torch._C._nn.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch_1512378422383/work/torch/lib/THC/generic/THCStorage.cu:58

I tried everything I could get my hands on; right now I have no idea why I can't get it to run. Here is the output of nvidia-smi:

Wed Jan 3 03:58:23 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.81                 Driver Version: 384.81                     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K80           On   | 00000000:00:1E.0 Off |                    0 |
| N/A   44C    P8    30W / 149W |      1MiB / 11439MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Any feedback is greatly appreciated. I am posting this issue here; in case I can't get any feedback I will start a new post. Thanks
st101235
Hi folks, I was able to run the experiments cited above by using a Jupyter notebook on the same instance! This is something I noticed on both my local MacBook Air and the AWS deep learning instance: running the GPU training from a Jupyter notebook is fine, while running the same experiments as python .py scripts leads to the runtime out-of-memory error! Does this hint at any possibility of memory leaks?
st101236
SimonW: Your network is still on cpu. Add NN = NN.cuda().
Sorry to hijack this post, but is there a way to use a condition to automatically switch between GPU and CPU depending on the hardware available? Sometimes I am working on my MacBook, and I am trying to find a way not to modify this part every time. I found it is possible to use parsing to detect GPU availability, but I don't know how to do that for the network. http://pytorch.org/docs/master/notes/cuda.html

import argparse
import torch

parser = argparse.ArgumentParser(description='PyTorch Example')
parser.add_argument('--disable-cuda', action='store_true', help='Disable CUDA')
args = parser.parse_args()
args.cuda = not args.disable_cuda and torch.cuda.is_available()
if args.cuda:
    torch.set_default_tensor_type('torch.cuda.FloatTensor')
    # something like torch.network.cuda() ??

## Your model below
st101237
Currently, you have to build the network before transferring it to the GPU via .cuda(). So there isn't yet a command that makes subsequent nn.* calls build layers on CUDA. However, if you keep a collection of modules, it is quite easy to transfer them to the GPU altogether after building them.
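A sketch of that pattern, building everything on the CPU first and transferring it in one call (the use_cuda flag could be driven by the argparse arguments from the question above):

import torch
import torch.nn as nn

use_cuda = torch.cuda.is_available()

model = nn.Sequential(nn.Linear(3, 4), nn.ReLU(), nn.Linear(4, 1))
if use_cuda:
    model = model.cuda()   # transfers every registered submodule at once

x = torch.randn(2, 3)
if use_cuda:
    x = x.cuda()
print(model(x).shape)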
st101238
Hi Simon:joy:, I am just encountering a very similar issue while I have already put my network on cuda by model = torch.nn.DataParallel(model).cuda() and my input and target on cuda by inputs, targets = inputs.cuda(), targets.cuda(async=True) while I still have the problem of RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #4 ‘other’ If that means there are still something I forget? Thank you for your time to read my question! The full error message: File “cifar.py”, line 347, in main() File “cifar.py”, line 202, in main train_loss, train_acc = train(trainloader, model, criterion, optimizer, epoch, use_cuda) File “cifar.py”, line 247, in train outputs = model(inputs) File “D:\pycharm\WORKS\venv\lib\site-packages\torch\nn\modules\module.py”, line 491, in call result = self.forward(*input, **kwargs) File “D:\pycharm\WORKS\venv\lib\site-packages\torch\nn\parallel\data_parallel.py”, line 112, in forward return self.module(*inputs[0], **kwargs[0]) File “D:\pycharm\WORKS\venv\lib\site-packages\torch\nn\modules\module.py”, line 491, in call result = self.forward(*input, **kwargs) File “D:\pycharm\WORKS\VSBNet_pytorch\pytorch-classification-master\models\cifar\vgg_bi.py”, line 32, in forward x = self.features(x) File “D:\pycharm\WORKS\venv\lib\site-packages\torch\nn\modules\module.py”, line 491, in call result = self.forward(*input, **kwargs) File “D:\pycharm\WORKS\venv\lib\site-packages\torch\nn\modules\container.py”, line 91, in forward input = module(input) File “D:\pycharm\WORKS\venv\lib\site-packages\torch\nn\modules\module.py”, line 491, in call result = self.forward(*input, **kwargs) File “D:\pycharm\WORKS\venv\lib\site-packages\torch\nn\modules\conv.py”, line 168, in forward torch.sum(tmp_tensor) RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #4 ‘other’ and my code (I am doing some modify in the source code “conv.py” in torch.nn.Module): self.Bi_weight[channal] += torch.sign(tmp_tensor * self.weight[channal]) * torch.sum(tmp_tensor * torch.abs(self.weight[channal])) / torch.sum(tmp_tensor) tmp_tensor = ((torch.abs( self.weight[channal]) >= sep_point(i, self.group_num)) * (torch.abs(self.weight[channal]) < sep_point(i+1, self.group_num))).float().cuda()
st101239
I am wondering if there is a way of calculating the following sum of unequal-sized chunks of a tensor.

import torch
import numpy as np

x = torch.rand(1000, 100)
y = np.unique(np.random.choice(1000, 10))

Here I have a tensor x of size (1000, 100); I want to calculate the sum of chunks along the first axis. These chunks are split along the first axis, and y indicates the end line of each chunk. They are in general of unequal size. For example, I can do this with the following for loop:

cum_pos_lik = torch.FloatTensor(y.size, 100)
y = np.append(0, y)
for i in range(1, y.size):
    cum_pos_lik[i-1, :] = x[y[i-1]:y[i], :].sum(0)

But I need this to be faster for my application. Clearly the sum of each chunk can be parallelized. I am wondering if there is a simple way in PyTorch of doing it. Thanks!
st101240
If you want to do it on the cpu, you might want to look into torch.multiprocessing, but I’m not sure there is a better way using only tensor operations. A naive approach padding each subtensor with 0s so that they have the same size would possibly be slower.
st101241
Thanks for your reply. I am actually doing this on the GPU. I think the only way is to pad it and sum along an axis. I would assume that if I am doing a for loop anyway, doing only the padding within the for loop would be faster? In any case I will try it out. Thanks a lot!
st101242
You can try the following:

sums = x.cumsum(0)
sums = sums[y, :]
sums0 = torch.zeros(sums.size())
nrows = y.size
index = torch.LongTensor(range(1, nrows))
sums0.index_copy_(0, index, sums[range(nrows-1), :])
cum_pos_lik = sums - sums0
st101243
Hi ngimel, Thank you for your reply! However I have tried something similar before, the problem is for my real data size, cumsum is actually the bottleneck.
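For reference, a variant that avoids the full 2-D cumsum: build a per-row segment id with a cheap 1-D integer cumsum and let index_add_ do the parallel reduction. A sketch, assuming y is sorted and its last entry equals x.size(0):

import torch

x = torch.rand(1000, 100)
y = torch.tensor([200, 550, 1000])   # exclusive end row of each chunk (example values)

seg = torch.zeros(x.size(0), dtype=torch.long)
seg[y[:-1]] = 1          # mark where each new chunk starts
seg = seg.cumsum(0)      # row i -> id of the chunk it belongs to

out = torch.zeros(y.size(0), x.size(1))
out.index_add_(0, seg, x)   # per-chunk sums, computed in parallel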
st101244
Hi @atanaka7, did you find a solution to this problem? I am in the same situation. Thanks!
st101245
I would like to use torch.distributions.MultivariateNormal as an nn.Module with self.loc and self.scale_tril as parameters. Is there a clean way to achieve this? I wish to do something like this:

class MultivariateNormal(Module, th.distributions.MultivariateNormal):
    def __init__(self, loc: Tensor, scale_tril: Tensor):
        Module.__init__(self)  # needs to be called before adding Parameter()s
        self._loc = Parameter(loc)
        self._scale_tril = Parameter(scale_tril)
        torch.distributions.MultivariateNormal.__init__(self, self._loc, scale_tril=self._scale_tril)

This is not possible in the current master branch, as self.loc uses expand [i.e., copies] and self._unbroadcasted_scale_tril uses assign, thereby creating an additional parameter.
st101246
I have a pretrained model and want to apply it to a new dataset. Following the AdaBN paper, I want to recalculate the mean and var for each neuron and update the value for each BN layer. How can I reset the parameters?
st101247
Solved by ptrblck in post #2: You can call .reset_parameters() directly:

bn = nn.BatchNorm2d(3)
bn(torch.randn(10, 3, 24, 24))
print(bn.running_mean)
print(bn.running_var)

bn.reset_parameters()
print(bn.running_mean)
print(bn.running_var)
st101248
You can call .reset_parameters() directly:

bn = nn.BatchNorm2d(3)
bn(torch.randn(10, 3, 24, 24))
print(bn.running_mean)
print(bn.running_var)

bn.reset_parameters()
print(bn.running_mean)
print(bn.running_var)
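For the AdaBN use case specifically, one possible follow-up after the reset is to re-estimate the running statistics with a few forward passes over the new domain in train mode, gradients disabled. A sketch (the random data stands in for the new dataset; reset_running_stats is assumed to be available in your version, otherwise reset_parameters works too):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))

for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.reset_running_stats()   # assumed available; reset_parameters() otherwise

model.train()                     # BN updates running stats only in train mode
with torch.no_grad():             # no graph needed, just the statistics
    for _ in range(10):
        data = torch.randn(16, 3, 24, 24)   # placeholder for the new dataset
        model(data)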
st101249
Hi all, Currently I'm studying different approximation schemes in NN propagation. Suppose I have input feature maps of shape 100 x 3 x 28 x 28 and kernels of shape 32 x 3 x 3 x 3. Because of approximate computing, I'd like to adapt the IFMs for different OFMs. This means I have 100 x 32 x 3 x 28 x 28 inputs, and I need to conv2d the inputs with the 32 x 3 x 3 x 3 kernels one by one correspondingly. Is there a way to parallelize the process? Right now I use a loop implementation, which is too slow. Thank you in advance! Xin
st101250
Problem solved by using grouped conv2d: first repeat the input tensor 32 times, then set the groups option in conv2d.
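A sketch of that grouped trick for the shapes in the question; each of the 32 output maps then sees its own (adaptable) copy of the 3-channel input:

import torch
import torch.nn as nn

n, c, h, w, f = 100, 3, 28, 28, 32

x = torch.randn(n, c, h, w)
x = x.repeat(1, f, 1, 1)     # (100, 96, 28, 28): one copy of the input per filter
# ... the per-OFM adaptation of each copy would happen here ...

conv = nn.Conv2d(in_channels=f * c, out_channels=f, kernel_size=3,
                 padding=1, groups=f)   # group g only sees channels [3g, 3g+3)
print(conv(x).shape)                    # torch.Size([100, 32, 28, 28])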
st101251
Hi, I want to finetuning the pretrained inception v3 model, but get type mismatch error. the following is my code. #!/usr/bin/env python3 import torch import torch.nn as nn import torch.optim as optim from torchvision import models import torchvision.transforms as transforms from bcdataset import BloodCellDataset import time n_classes = 4 lr = 1e-3 batch_size = 32 epoches = 30 data_root_dir = '/home/chen/database/blood-cells/dataset2-master/dataset2-master/images/' train_data = BloodCellDataset( data_root_dir, './data/train.txt', transform=transforms.Compose([ transforms.CenterCrop(240), transforms.Resize([299, 299]), # inception v3 input size: 299x299x3 transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ])) trainloader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, shuffle=True, num_workers=2) if __name__ == '__main__': device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') model_iv3 = models.inception_v3(pretrained=True, transform_input=True) for param in model_iv3.parameters(): param.requires_grad = False n_features = model_iv3.fc.in_features model_iv3.fc = nn.Linear(n_features, n_classes) model_iv3.to(device) criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(model_iv3.parameters(), lr=lr, momentum=0.9) print('learning_rate: %f batch_size: %d' % (lr, batch_size)) print('training...') for epoch in range(epoches): model_iv3.train() train_corrects = 0 train_loss = 0 with torch.set_grad_enabled(True): for i, data in enumerate(trainloader, 0): model_iv3.zero_grad() images, labels = data images.to(device) labels.to(device) outputs, _ = model_iv3(images) _, preds = torch.max(outputs, 1) # one-hot label loss = criterion(outputs, labels) loss.backward() optimizer.step() train_loss += loss train_corrects += torch.sum(preds == labels) the output: learning_rate: 0.001000 batch_size: 32 training... Traceback (most recent call last): File "train_.py", line 56, in <module> outputs, _ = model_iv3(images) File "/home/chen/local/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/home/chen/local/anaconda3/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/models/inception.py", line 78, in forward File "/home/chen/local/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/home/chen/local/anaconda3/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/models/inception.py", line 325, in forward File "/home/chen/local/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/home/chen/local/anaconda3/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 301, in forward self.padding, self.dilation, self.groups) RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #2 'weight' i tried to modify model_iv3 to model_iv3.cuda() or add torch.set_default_tensor_type(‘torch.cuda.FloatTensor’), but it does not work. any idea?
st101252
Solved by ptrblck in post #2 You have to assign the images and labels when you push them onto the GPU: images = images.to(device) labels = labels.to(device) Could you change it and try it again?
st101253
You have to assign the images and labels when you push them onto the GPU: images = images.to(device) labels = labels.to(device) Could you change it and try it again?
st101254
Hello, I have tried to write a function with a backward function for the gradient, such as the example shown in the following. However, the gradient for one of the variables is zero. In the following example, grad_output is zero and grad_output1 is fine. If I change output1 = output + 0.1, then the gradient is fine. Is this gradient behavior normal? If a new variable is the same as the other, then the earlier variable's gradient is zero?

class cal(Function):
    def forward(self, input, weight):
        self.save_for_backward(input, target)
        output = input * weight
        output1 = output
        return output, output1

    def backward(self, grad_output, grad_output1):
        # ... gradient calculation
        return grad_input, grad_weight

Thanks.
st101255
Sunnydreamrain: return output, output is this a typo in your code? To your question, if any of the output variables are not used in any other calculation further, their gradient would be 0.
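The second point is easy to check with two outputs where only one reaches the loss (sketch):

import torch

x = torch.ones(3, requires_grad=True)
out1 = x * 2
out2 = x * 5        # never used below

out1.sum().backward()
print(x.grad)       # tensor([2., 2., 2.]): the unused out2 contributed nothing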
st101256
I want to select corresponding slices from ref_tensor according to the mean of another tensor a. I wonder how to select it fast. import torch a = torch.randn(100, 100, 50).mul(2).cuda() a_mean = torch.mean(a, dim=2, keepdim=True) ref_tensor = torch.randn(20).cuda() out_tensor = torch.Tensor(a.size()).cuda() for i in range(a.size(0)): for j in range(a.size(1)): # get corresponding index # For example, let's just divide it by 0.1. # The larger value it is, the larger idx we select. idx = int(a_mean[i,j,:]/0.1) idx = idx if idx < ref_tensor.size(0)-1 else ref_tensor.size(0)-1 idx = idx if idx > 0 else 0 print i, j, idx out_tensor[i,j,:] = ref_tensor[idx]
st101257
For someone who meets the same problem, this is a snippet that may help:

a_mean = a_mean / 0.1
a_mean = a_mean.long().view(h * w)
a_mean = torch.clamp(a_mean, min=0, max=ref_tensor.size(0) - 1)  # the largest valid index is size - 1
a_mean = a_mean.view(-1)
out_tensor = ref_tensor.index_select(dim=0, index=a_mean.data)
st101258
I am a newbie to PyTorch and I greatly enjoy this community. Could anyone help me implement Curriculum Dropout in PyTorch? Thanks in advance; any kind of help would be appreciated. I want to do some experiments with Curriculum Dropout in PyTorch. Curriculum Dropout uses a time schedule to adjust the dropout rate in the neural network. The related paper can be downloaded from https://arxiv.org/abs/1703.06229 The source code in Python can be found here: https://github.com/pmorerio/curriculum-dropout
st101259
Hi! I understand what fine-tuning is. I also went through this: finetuning-in-pytorch. However, most of what I found is about fine-tuning for images. Can someone please explain (preferably with code) how to make use of fine-tuning for text? Is there an example to share? Thank you
st101260
I’m trying to exponentiate a tensor A of size [m] to another tensor B of size [n] element-wise, meaning each element in A should be exponentiated to every element in B resulting in another tensor C of size [m, n]. e.g. if A = [2, 3, 4] and B = [2, 3] then C=[ [2^2, 2^3], [3^2, 3^3], [4^2, 4^3] ] the only efficient way I found is doing this: C = A.unsqueeze(1) ** B.unsqueeze(0) Is there a better way to do it?
st101261
Solved by richard in post #2 That’s pretty efficient as-is.
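For completeness, the broadcasting form and an explicit torch.pow agree, so either spelling works (sketch):

import torch

A = torch.tensor([2., 3., 4.])
B = torch.tensor([2., 3.])

C = A.unsqueeze(1) ** B.unsqueeze(0)      # shape (3, 2)
C2 = torch.pow(A[:, None], B[None, :])    # same result via indexing-style broadcasting
print(torch.equal(C, C2))                 # True
print(C)                                  # tensor([[ 4.,  8.], [ 9., 27.], [16., 64.]])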
st101262
I have a fairly elementary question, but it is something that has caused me some trouble. Let's say we have a two-layer convolutional network. In the first layer we have Conv1 = Conv2d(1, 2, stride=1), meaning that we have two filters for our input, producing two feature maps. In the second layer we have Conv2 = Conv2d(2, 2, stride=1). In this layer I would expect that we have two filters, since the final output is two feature maps, but when I look into the weights we have 4 convolutional filters in the second convolutional layer. Why is this?
st101263
Your assumption is right! Your layers both have two filters with a different number of channels. conv1 = nn.Conv2d(1, 2, 3, 1, 1) print(conv1.weight.shape) > torch.Size([2, 1, 3, 3]) conv2 = nn.Conv2d(2, 2, 3, 1, 1) print(conv2.weight.shape) > torch.Size([2, 2, 3, 3]) The filter shape is defined as [nb_filters, in_channels, h, w]. So besides the changing number of input_channels, we still have two filters.
st101264
I see, but I still don't understand why there are 4 separate filters in layer 2. It's almost like there are 2 filters per incoming channel, when I only wanted 2 filters total… I'm sorry if this is a very simple question; I'm just not understanding why there are 4 filters in the second convolutional layer.
st101265
So we increase the number of channels from 1 to 2 going from convolution 1 to 2. We thus increase our filter number from 2 to 4, but our output channels leaving convolution 2 are still 2. Thus we are applying 2 separate sets of filters to each channel coming into convolution 2?
st101266
No, we still have two filters in each layer. Each filter calculates the dot product in the input activation using all input channels. Have a look at the AlexNet architecture in Figure 2. You see that each filter has a depth in the input volume. Also, have a look at the Convolution lecture of CS231n. Some information: The connections are local in space (along width and height), but always full along the entire depth of the input volume. For example, suppose that the input volume has size [32x32x3] (e.g. an RGB CIFAR-10 image). If the receptive field (or the filter size) is 5x5, then each neuron in the Conv Layer will have weights to a [5x5x3] region in the input volume, for a total of 5*5*3 = 75 weights (and +1 bias parameter). Notice that the extent of the connectivity along the depth axis must be 3, since this is the depth of the input volume.
st101267
so then in the second convolution layer my two filters have the shape, 2x3x3, thus their depth is 2 now where in the first layer it was 1? thank you so much for your help! in that case how would i visualize these depth 2 filters?
st101268
Exactly! Well, you could slice the channels and visualize each one as a gray image. If you use color images (3 channels), the filters of your first conv layer will also have 3 channels, thus you could visualize them in color.
st101269
I see! Thank you very much! By slice you mean take the 2x2x3x3 weights and visualize them as two separate 2x3x3 images?
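One possible way to plot them, one gray image per (filter, channel) pair; this sketch assumes matplotlib is available:

import torch.nn as nn
import matplotlib.pyplot as plt

conv2 = nn.Conv2d(2, 2, 3)
w = conv2.weight.detach()      # shape [2, 2, 3, 3]: filter, channel, h, w

fig, axes = plt.subplots(2, 2)
for f in range(2):             # filter index
    for c in range(2):         # channel index
        axes[f, c].imshow(w[f, c].numpy(), cmap='gray')
        axes[f, c].set_title('filter %d, channel %d' % (f, c))
plt.show()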
st101270
The code in this gist does some indexing and simple arithmetic with the exact same inputs 20,000 times. It is supposed to find the index of the first and last elements in groups of consecutive elements. When run on a CPU it always returns the same correct result. However, when run on CUDA it returns a wrong result 4-20 times out of 20,000 iterations. I pared the code down as much as I could; here are a few points:

- The error is data dependent; if I delete any more data than I already have, the error stops occurring.
- The cumsum() in line 27 seems to be part of the problem. Commenting it out stops the error from occurring. It should not affect the result in any way, since its result is not used.
- By keeping a copy of the correct result of isFirst and isLast in cFi and cLa, I can see that the error is in isLast.
- By subtracting the correct from the incorrect result in lines 39++, I can see that there is a single mismatch in position 767.
- The results were obtained with PyTorch 0.4 and a Pascal GPU with CUDA 9.0.

Could this be related to me moving elements to overlapping regions of the tensor (lines 20 and 31)? Any other ideas? A typical result I see is below. The first row shows that the CPU version ran correctly 20000/20000 times. The last row shows that the GPU version gave incorrect results 6 out of 20000 times.

cpu (20000, 0)
countOnes isFirst=1127 isLast=1128
No mismatches in isFirst tensor(0, device='cuda:0') tensor(0, device='cuda:0') tensor(0, device='cuda:0')
pos of mismatch in isLast tensor(767, device='cuda:0') tensor(0, device='cuda:0') tensor(1, device='cuda:0')
countOnes isFirst=1127 isLast=1128
No mismatches in isFirst tensor(0, device='cuda:0') tensor(0, device='cuda:0') tensor(0, device='cuda:0')
pos of mismatch in isLast tensor(767, device='cuda:0') tensor(0, device='cuda:0') tensor(1, device='cuda:0')
countOnes isFirst=1127 isLast=1128
No mismatches in isFirst tensor(0, device='cuda:0') tensor(0, device='cuda:0') tensor(0, device='cuda:0')
pos of mismatch in isLast tensor(767, device='cuda:0') tensor(0, device='cuda:0') tensor(1, device='cuda:0')
countOnes isFirst=1127 isLast=1128
No mismatches in isFirst tensor(0, device='cuda:0') tensor(0, device='cuda:0') tensor(0, device='cuda:0')
pos of mismatch in isLast tensor(767, device='cuda:0') tensor(0, device='cuda:0') tensor(1, device='cuda:0')
countOnes isFirst=1127 isLast=1128
No mismatches in isFirst tensor(0, device='cuda:0') tensor(0, device='cuda:0') tensor(0, device='cuda:0')
pos of mismatch in isLast tensor(767, device='cuda:0') tensor(0, device='cuda:0') tensor(1, device='cuda:0')
countOnes isFirst=1127 isLast=1128
No mismatches in isFirst tensor(0, device='cuda:0') tensor(0, device='cuda:0') tensor(0, device='cuda:0')
pos of mismatch in isLast tensor(767, device='cuda:0') tensor(0, device='cuda:0') tensor(1, device='cuda:0')
cuda (19994, 6)
st101271
Solved by colesbury in post #11 The issue is that the line: b[:-1] = b[1:] reads and writes overlapping elements of b. There’s a similar issue in the original snippet, which @bgobbi mentions in the original post. You can fix this by writing: b[:-1] = b[1:].clone() At some point in the future, I’d like to add overlap detection…
st101272
Couldn’t reproduce the issue using PyTorch 0.4.1 + CUDA 8.0.61 and 0.5.0a0+2c7c12f + CUDA 9.0.176 on a GTX1070. Both runs return: cpu (20000, 0) cuda (20000, 0)
st101273
Are you using the latest release, i.e. 0.4.1, and could you update if you use an older release, e.g. 0.4.0? You get the correct results, if you comment out line 27? Just to be clear: I just copied your code without any commenting and ran the script. Should I change something to see the issue?
st101274
Yes: no changes where necessary to see the problem from the version downloaded from gist when I double checked before submitting the question. I need to talk to our IT people to upgrade from 0.4.0 to 0.4.1. It might take a few days. The problem seems to be quite sensitive to all the environmental factors. In your eyes does the code in firstLast() (lines 19-32) look solid? I also noticed that the code is about 2x slower on GPU than CPU. I am optimizing a re-written version of what I need that does not seem to cause the problem. But I am worried the problem could crop up elsewhere without me noticing.
st101275
The code looks alright. Could you try to create a new conda environment with 0.4.1 or is this not possible due to IT restrictions?
st101276
Our IT person installed 0.4.1 and the problem is still present. He has also done some further testing. Here are his findings: Some initial tests confirm the issue:

- Running with the installed 0.4.0 + CUDA 9.0.176 - problem
- Running with the newly installed 0.4.1 + CUDA 9.0.176 - problem
- Running with the Nvidia-built version in a container, 0.5.0a0 + CUDA 9.0.176 - problem
- Locally pip install 0.4.1 - still the same problem

I couldn't build it from scratch right away, because we use the gcc 6.3.0 compiler in the toolchains and there is a known issue with CUDA and one of the C++ libraries. I have made some custom fixes to get it to work with TensorFlow, and if all else fails I can try with PyTorch as well, but I want to try a few other things first. I have tested the hardware with a lot of TensorFlow benchmarks with pretty good results, but have not run any PyTorch benchmarks yet, so I will do that next, just to confirm that things are working. I do have to say that I am a bit puzzled though. Thanks again for following up!
st101277
Here is some more info. Our person in IT did some more testing (he is great!). just to be sure, I went on AWS and ran it on one of the new GPU systems with Volta GPUs. I used the deep-learning AMI with pytorch already setup and with CUDA 9.0, Python 3.6.6, nvidia driver 396.37 and pytorch version 0.4.0 and 0.4.1 - both failed with even much worse issues than on our system (the last run was: cuda (9079, 10921) and that was the best result!!) It was also quite slow with the GPU version. At this point I am willing to rule out hardware setup or build issues on our system and think that either it is some kind of pytorch problem, or there is some potential issue with the code. I admit that I have not had a chance to look at the code yet, but will try to get to it over the next few days. The AMI that I used was: https://aws.amazon.com/marketplace/pp/B077GCH38C This seems to point to a general problem now not just on our systems.
st101278
Thanks for digging into that problem. I’ve tested your code again on another server with GTX1080 and CUDA 8.0.61 and it ran successfully a lot of times. However, I managed to get one error for a single run. I cannot reproduce the error currently, but will have a look at your code again.
st101279
I got it down to this small code snippet:

for i in range(100000):
    print(i)
    a = torch.empty(2000, dtype=torch.uint8, device='cuda').random_(2)
    a[0] = 1
    b = a.clone()
    b[:-1] = b[1:]
    b[-1] = 1
    if a.sum().cpu().item() != b.sum().cpu().item():
        break

The error often occurs at index 1023. I'm no CUDA expert, but maybe it could be related to some kernel size etc. @richard, @SimonW, @colesbury Do you guys have any idea where I could start debugging / looking into the code?
st101280
The issue is that the line: b[:-1] = b[1:] reads and writes overlapping elements of b. There’s a similar issue in the original snippet, which @bgobbi mentions in the original post. You can fix this by writing: b[:-1] = b[1:].clone() At some point in the future, I’d like to add overlap detection to kernels so that the copy automatically buffers the right-hand-side when necessary, but you have to watch out for these for now.
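Applied to the repro above, buffering the right-hand side makes the shift deterministic (sketch, requires a CUDA device):

import torch

a = torch.empty(2000, dtype=torch.uint8, device='cuda').random_(2)
a[0] = 1
b = a.clone()
b[:-1] = b[1:].clone()   # buffer the source so the overlapping write is safe
b[-1] = 1
assert a.sum().item() == b.sum().item()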
st101281
I remembered a similar issue, but couldn’t find it. Thanks for pointing this out!
st101282
PyTorch version 0.4.0, Python version 2.7, cuda-8.0, cudnn-8.0-v6. It also happens with PyTorch version 0.4.1.

import torch.nn as nn
import torch
import numpy as np

extract_local_feature = nn.Conv2d(
    in_channels=256,
    out_channels=256,
    kernel_size=15,
    padding=7,
    bias=True
).cuda()

input = torch.Tensor(np.random.rand(1, 256, 112, 112)).cuda()
import ipdb; ipdb.set_trace()
out = extract_local_feature(input)
ipdb.set_trace()

The line out = extract_local_feature(input) gives me 3381MiB / 12196MiB memory usage. This issue only happens with batch_size = 1. Could someone help me fix this?
st101283
Could you try your code again with torch.from_numpy instead of directly wrapping the numpy array in a torch.Tensor? It’s not recommended to use your approach.
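i.e. something like this sketch:

import numpy as np
import torch

arr = np.random.rand(1, 256, 112, 112)
input = torch.from_numpy(arr).float().cuda()  # from_numpy shares memory; .float() casts, .cuda() moves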
st101284
I am also facing the same issue, and can reproduce from the provided code snippet. Could we get some help please? UPDATE: Running torch.cuda.empty_cache() after the operation reduces memory for batch_size 1 to a reasonable amount of memory. As OP said, there is no issue in running this snippet with batch size > 1
st101285
Hi, I would like to implement the following in PyTorch:

for i in range(indices.shape[0]):
    values[indices[i]] += 1

Both indices and values are 1-D tensors. I found scatter_add, which perfectly meets the need:

values.scatter_add_(0, indices, torch.ones_like(values))

However, I found this requires that indices must be shorter than values; otherwise, it throws an error complaining that the indices length is too long. To reproduce, run the following code:

import torch
values = torch.zeros(5)
indices = torch.LongTensor([0, 3, 3])
print(values.scatter_add(0, indices, torch.ones_like(values)))
# returns tensor([1, 0, 0, 2, 0])
indices = torch.LongTensor([0, 3, 3, 0, 0, 0])
print(values.scatter_add(0, indices, torch.ones_like(values)))
# throws "Expected index [6] to be smaller size than src [5] and to be smaller than tensor [5] apart from dimension 0"

Is it designed this way, or could there be something wrong in the dimension check for this function? Also, is there any way around it? Thanks.
st101286
Solved by SimonW in post #4 no… index_add_ is exactly what you need. why do you do ones_like(values)? can you look at the docs before complaining? >>> x = torch.zeros(5) >>> x.index_add_(0, torch.tensor([0, 0, 4, 1]), torch.ones(4)) tensor([2., 1., 0., 0., 1.])
st101287
When I replace scatter_add with index_add, it complains that the number of indices should be equal to source:size(dim). In other words, the constraint is even stricter. I guess the problem is, it may be reasonable for index_copy, scatter to have size constraints in indices since one should not copy-paste to the same location twice. But for index_add and scatter_add, it does not quite make sense to have such constraints. My demo is a simplest example though it cannot work.
st101288
no… index_add_ is exactly what you need. why do you do ones_like(values)? can you look at the docs before complaining? >>> x = torch.zeros(5) >>> x.index_add_(0, torch.tensor([0, 0, 4, 1]), torch.ones(4)) tensor([2., 1., 0., 0., 1.])
st101289
Thank you for your explanation. I did read the docs before posting. The error msg “*** RuntimeError: invalid argument 4: Number of indices should be equal to source:size(dim)” gave me the wrong impression that the size of indices should match the size of “source” instead of “values”.
st101290
The error comes from the Transition blocks. TransitionBlock is what I wrote up myself, and TransitionBlock_test uses code from another implementation of the same network. I even modified the variables to be the same between both, but I still get an error in my code while the other one runs fine. github.com/maxmatical/pytorch-projects/blob/master/TransitionBlock.ipynb
st101291
You have a typo in the first TransitionBlock definition: you wrote foward instead of forward, so it can't find the forward function. Source: personal experience
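The name matters because nn.Module.__call__ dispatches to forward, so a misspelled method is simply never found (minimal sketch):

import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self):
        super(Block, self).__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, x):   # must be spelled exactly 'forward'
        return self.conv(x)

print(Block()(torch.randn(1, 3, 8, 8)).shape)   # torch.Size([1, 8, 8, 8])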
st101292
I am using the pytorch resnet101, I am removed the average pooling and fc layers and change the stride of the last layer to 1 instead of 2. everything works fine so far. now for the last layer (layer 4) i want to use dilation =2, but it throws me an erro… I appreciate it if someone can help me why i get the following error: import torch.nn as nn import math import torch.utils.model_zoo as model_zoo import torch class Bottleneck(nn.Module): expansion = 4 def __init__(self, inplanes, planes, stride=1, downsample=None, dilation = 1 ): super(Bottleneck, self).__init__() self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False) self.bn1 = nn.BatchNorm2d(planes) self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, dilation = dilation, bias=False) self.bn2 = nn.BatchNorm2d(planes) self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1, bias=False) self.bn3 = nn.BatchNorm2d(planes * self.expansion) self.relu = nn.ReLU(inplace=True) self.downsample = downsample self.stride = stride def forward(self, x): residual = x out = self.conv1(x) out = self.bn1(out) out = self.relu(out) out = self.conv2(out) out = self.bn2(out) out = self.relu(out) out = self.conv3(out) out = self.bn3(out) if self.downsample is not None: residual = self.downsample(x) out += residual out = self.relu(out) return out class ResNet(nn.Module): def __init__(self, block, layers): self.inplanes = 64 super(ResNet, self).__init__() self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False) self.bn1 = nn.BatchNorm2d(64) self.relu = nn.ReLU(inplace=True) self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) self.layer1 = self._make_layer(block, 64, layers[0]) self.layer2 = self._make_layer(block, 128, layers[1], stride=2) self.layer3 = self._make_layer(block, 256, layers[2], stride=2) self.layer4 = self._make_layer(block, 512, layers[3], stride=1, dilation = 1) # We modify the stide 2 here to be one # print(self.layer4) for m in self.modules(): if isinstance(m, nn.Conv2d): nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') elif isinstance(m, nn.BatchNorm2d): nn.init.constant_(m.weight, 1) nn.init.constant_(m.bias, 0) def _make_layer(self, block, planes, blocks, stride=1, dilation = 1): downsample = None if stride != 1 or self.inplanes != planes * block.expansion: downsample = nn.Sequential( nn.Conv2d(self.inplanes, planes * block.expansion, kernel_size=1, stride=stride, bias=False), nn.BatchNorm2d(planes * block.expansion), ) layers = [] layers.append(block(self.inplanes, planes, stride, downsample, dilation = dilation)) self.inplanes = planes * block.expansion for i in range(1, blocks): layers.append(block(self.inplanes, planes, dilation = dilation)) return nn.Sequential(*layers) def forward(self, x): x = self.conv1(x) x = self.bn1(x) x = self.relu(x) x = self.maxpool(x) x = self.layer1(x) x = self.layer2(x) x = self.layer3(x) x = self.layer4(x) print('Output SIZE',x.size()) return x model = ResNet(Bottleneck, [3, 4, 23, 3]) input=torch.rand(1,3,513,513) output = model(input); when dilation is equal to 1 in line self.layer4 = self._make_layer(block, 512, layers[3], stride=1, dilation = 1) it works okay. 
but when I do dilation=2, it gives me:

Traceback (most recent call last):
  File "<ipython-input-1-cf1b4471e44a>", line 1, in <module>
    runfile('/home/alireza/Downloads/SSD_Res101/resnet101.py', wdir='/home/alireza/Downloads/SSD_Res101')
  File "/home/alireza/anaconda3/lib/python3.6/site-packages/spyder_kernels/customize/spydercustomize.py", line 678, in runfile
    execfile(filename, namespace)
  File "/home/alireza/anaconda3/lib/python3.6/site-packages/spyder_kernels/customize/spydercustomize.py", line 106, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)
  File "/home/alireza/Downloads/SSD_Res101/resnet101.py", line 110, in <module>
    output = model(input);
  File "/home/alireza/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/alireza/Downloads/SSD_Res101/resnet101.py", line 103, in forward
    x = self.layer4(x)
  File "/home/alireza/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/alireza/anaconda3/lib/python3.6/site-packages/torch/nn/modules/container.py", line 91, in forward
    input = module(input)
  File "/home/alireza/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/alireza/Downloads/SSD_Res101/resnet101.py", line 44, in forward
    out += residual
RuntimeError: The expanded size of the tensor (31) must match the existing size (33) at non-singleton dimension 3
st101293
Solved by ptab in post #2 Dilating a kernel adds spacing between kernel elements, for example: a 2-dilated 3x3-kernel can be viewed as a 5x5 kernel. You will need to adjust for this by using larger padding.
st101294
Dilating a kernel adds spacing between kernel elements, for example: a 2-dilated 3x3-kernel can be viewed as a 5x5 kernel. You will need to adjust for this by using larger padding.
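Concretely, a k x k kernel with dilation d covers d*(k-1)+1 input pixels, so 'same'-style padding has to grow with the dilation; for a 3x3 kernel, padding = dilation keeps the spatial size (a quick shape check):

import torch
import torch.nn as nn

x = torch.randn(1, 512, 33, 33)
conv = nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=2, dilation=2)
print(conv(x).shape)   # torch.Size([1, 512, 33, 33]): 33 + 2*2 - (2*2+1) + 1 = 33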
st101295
Thank you for your answer! I just figured that out; for some reason, when I have padding = 2 the error disappears.
st101296
Hi. I want to freeze my GRU after a certain amount of training. I have a 'model' Module object which has a GRU in it. My code looks like this:

def freeze_encoding(self, do_freeze=True):
    self.model.gru.weight_ih.requires_grad = not do_freeze
    self.model.gru.weight_hh.requires_grad = not do_freeze
    if self.model.gru.bias == True:
        self.model.gru.bias_ih.requires_grad = not do_freeze
        self.model.gru.bias_hh.requires_grad = not do_freeze
    print('freeze encoding')
    pass

This code throws an error that weight_ih doesn't exist. What would be the suggested way to do this right? Thanks.
st101297
I'm trying this:

def freeze_encoding(self, do_freeze=True):
    for weight in self.model.parameters():
        weight.requires_grad = not do_freeze
    if do_freeze:
        print('freeze encoding')
    pass

I think it would operate on the whole model, and the GRU I'm interested in is part of that.
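If only the GRU should be frozen rather than the whole model, filtering by parameter name works; note that the RNN attributes are per-layer (weight_ih_l0, weight_hh_l0, ...), not weight_ih, which is why the original attribute lookup failed. A sketch:

import torch
import torch.nn as nn

gru = nn.GRU(input_size=8, hidden_size=16, num_layers=1)

def freeze_encoding(module, do_freeze=True):
    # covers weight_ih_l0, weight_hh_l0, bias_ih_l0, bias_hh_l0, ...
    for name, param in module.named_parameters():
        param.requires_grad = not do_freeze

freeze_encoding(gru)
# if the optimizer is rebuilt afterwards, skip the frozen parameters, e.g.:
# optimizer = torch.optim.SGD(
#     [p for p in full_model.parameters() if p.requires_grad], lr=0.01)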
st101298
I would like to build a 3 layers autoencoder with recurrent hidden layer and a feedback connection that come from an other AE connected on the input of the H layer. Is my implementation correct ? class LPU(nn.Module): def __init__(self, Encoder_size, Hidden_size, Decoder_size): # simple autoencoder structure super(LPU, self).__init__() self.encoder = nn.Linear(Encoder_size, Encoder_size) # PREVIOUS : self.encoder = nn.Linear(Encoder_size, Hidden_size) self.act_encoder = nn.Sigmoid() self.hidden = nn.Linear(Hidden_size, Hidden_size) self.act_hidden = nn.Sigmoid() self.decoder = nn.Linear(Decoder_size, Decoder_size) # PREVIOUS : self.decoder = nn.Linear(Hidden_size, Decoder_size) self.act_decoder = nn.Sigmoid() def forward(self, Xt, last_H, last_H_superior): input_encoder= self.encoder(Xt) out_encoder = self.act_encoder(input_encoder) input_hidden = self.hidden(torch.cat((out_encoder, last_H, last_H_superior), 3)) # ???????? representation = self.act_hidden(input_hidden) # hidden compressed representation input_decoder = self.decoder(representation) out_decoder = self.act_decoder(input_decoder) return out_encoder, out_decoder, representation
st101299
Quentin_Munch: input_hidden = self.hidden(torch.cat((out_encoder, last_H, last_H_superior), 3)) # ???
In this line, it seems you are expecting 4D inputs? The Linear layer works with 2D inputs (batch_size x input_dim).