st116868
|
@smth I managed to pinpoint the actual cause of the issue. It was a silly mistake in my code. In the loader thread, I was loading the ground truth as a CUDA tensor. This was causing the deadlock when the main thread was holding the lock on the GPUs while training. Now I have fixed it.
|
st116869
|
I have a tensor a of shape 3 x 4 x 5 and I want to edit the subtensor a[1, [0, 2, 3], :] and assign some value to it. How can I do that?
Currently, PyTorch only supports LongTensor indexing if it is the only argument. But I have two arguments for indexing
|
st116870
|
If it's only a tensor (not a Variable for which you want to track the operations in order to compute the gradient), you can use numpy, or do it by hand, because I don't think it is implemented.
Otherwise (if you want to compute the gradient with respect to this assignment), I have no idea how to do it, since mathematically, an assignment is not differentiable at all.
If you don't want to assign, but just to get the values at the index targets, you can use index_select:
idx0 = torch.LongTensor([1])
idx1 = torch.LongTensor([0,2,3])
values = my_tensor.index_select(0,idx0).index_select(1,idx1)
|
st116871
|
on the master branch, this is now possible. you can indeed do:
x[1, [0, 2, 3], :] = y
|
st116872
|
I'm not sure if others would find these useful, even if only as a sort of beginner tutorial/helper, but I've written two quick and simple (as-is) functions to convert between a ModuleList of LSTMCells and an LSTM object.
There are a few assumptions being made here (namely that the state dictionaries will always keep the same names).
from collections import OrderedDict
import torch.nn as nn

def convert_lstm_to_lstm_cells(lstm):
    # note: build a fresh LSTMCell per layer; repeating one instance via
    # [cell] * n would make every layer share the same weights
    lstm_cells = nn.ModuleList(
        [nn.LSTMCell(lstm.input_size, lstm.hidden_size)] +
        [nn.LSTMCell(lstm.hidden_size, lstm.hidden_size) for _ in range(lstm.num_layers - 1)])
    key_names = lstm_cells[0].state_dict().keys()
    source = lstm.state_dict()
    for i in range(lstm.num_layers):
        new_dict = OrderedDict([(k, source["%s_l%d" % (k, i)]) for k in key_names])
        lstm_cells[i].load_state_dict(new_dict)
    return lstm_cells

def convert_lstm_cells_to_lstm(lstm_cells):
    lstm = nn.LSTM(lstm_cells[0].input_size, lstm_cells[0].hidden_size, len(lstm_cells))
    key_names = lstm_cells[0].state_dict().keys()
    lstm_dict = OrderedDict()
    for i, lstm_cell in enumerate(lstm_cells):
        source = lstm_cell.state_dict()
        new_dict = OrderedDict([("%s_l%d" % (k, i), source[k]) for k in key_names])
        lstm_dict = OrderedDict(list(lstm_dict.items()) + list(new_dict.items()))
    lstm.load_state_dict(lstm_dict)
    return lstm
The need for such a conversion came about because of issues like the ones in the following posts:
Minimal code snippet that seems to cause a memory leak on GPU code only
I ended up changing my architecture specifically to work around this issue.
For inference mode, I simply use volatile=True and this solves everything.
Otherwise, you can move your model using .cpu() and .cuda() calls.
E.g. if seq is a class module, then you can do:
seq.cpu()
output = seq(input)
seq.cuda()
Furthermore, you can call pin_memory() on the input tensors to facilitate transfers back and forth.
It’s very hacky, and in the end, I found an alternate way of training my model.
Using PackedSequence with LSTMCell
Thanks for the run-down James.
I'm actually developing a solution that runs through streaming data, and so originally I had designed my module with LSTMCells. But as you mention, I can see now that the advantages of LSTM layers far outweigh the drawbacks.
Also, since my real time stream is much slower than my processing power, there’s really no harm in simply passing data columns one by one to the LSTM and caching the final hidden/cell states for my next call. This is a small price to pay for…
Criticisms and comments are welcome.
|
st116873
|
Is there a way to convert a pretrained pytorch model to use for inference in Caffe? I torch.save() the dictionary of the net. How can I get the trained model loaded into Caffe?
|
st116874
|
Hi @sauhaardac, have you found any tools to convert trained PyTorch model to Caffe?
|
st116875
|
Nope, I actually ended up converting all our inference code to PyTorch and my entire lab followed suit. No regrets, Pytorch is awesome!
|
st116876
|
My model is based on CNN and LSTM.
What I want to do is to apply L2 regularization to LSTM only.
However, as far as I know, there is no way in optim to apply weight decay to part of the weights separately.
Is there any way that I can try??
|
st116877
|
Momentum and such are handled by the Optimizer itself, but as far as I know, weight decay, such as L1 and L2, can be implemented as a separate step, after the optimizer step?
So, seems like you could just grab the parameter Tensors/Variables for your LSTM, and subtract a fraction of the L2 norm from them?
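For instance, a minimal sketch of that manual decay step, assuming the model exposes its LSTM sub-module as model.lstm, with a made-up decay factor:
l2_lambda = 1e-4  # made-up hyperparameter
optimizer.step()
for p in model.lstm.parameters():
    p.data.add_(-l2_lambda * p.data)  # L2 weight decay applied to the LSTM only
Alternatively, the per-parameter options of optim let you pass a weight_decay value for just the LSTM's parameter group.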
|
st116878
|
What would be an efficient way to calculate the standard deviation, or a similar metric, to see how spread out the output of a network is over all of its outputs on a validation dataset, running the network approx. 10,000 times? The output is a 1x10 tensor. I am new to this field, and was wondering if there is some efficient algorithm I don't know about for calculating how "spread out" each of these approx. 100,000 values are.
Edit:
One way I was thinking about doing this, is keeping a running average of the values outputted from the network, and running an MSELoss function on a 1x10 tensor containing this average value. This should give something close to the standard deviation, but the problem is that the same average value won’t be used for all of the outputs.
|
st116879
|
Update:
Figured out an algorithm called online variance that was suited to my needs. Couldn’t figure out how to delete the post, but the issue is resolved now.
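In case it helps anyone, a minimal sketch of such a Welford-style online update (outputs stands for the saved 1x10 network outputs; my naming):
import torch

n = 0
mean = torch.zeros(10)
m2 = torch.zeros(10)
for out in outputs:              # outputs: iterable of saved 1x10 tensors (assumption)
    x = out.view(-1)
    n += 1
    delta = x - mean
    mean += delta / n
    m2 += delta * (x - mean)     # uses the already-updated mean
var = m2 / (n - 1)               # per-dimension sample variance over all runs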
|
st116880
|
I am trying:
W.grad.data.fill_(0)
However, this fails the first time, since .grad doesn't exist yet. I'm thinking maybe there should be some method like e.g. W.zero_grad(), which will always succeed, idempotently.
|
st116881
|
I think that zero_grad only exists on nn.Module and torch.optim.Optimizer, and it fills the gradients of all the parameters with zeros. So if your parameter W is part of a module M, you should directly call:
M.zero_grad()
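If W is a free-standing Variable rather than part of a module, a small guard also works (sketch):
if W.grad is not None:
    W.grad.data.zero_()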
|
st116882
|
W.grad = None is not bad. But won't that cause a reallocation, and therefore a CUDA-side sync point?
|
st116883
|
I am using resnet50 as a pretrained model.
Now in resnet50 we have one fc layer and layer4, so I want to remove both layers completely and feed the output of the previous layer to my new net:
import torch.nn as nn
import torch.nn.functional as nnFunctions

class convNet(nn.Module):
    # constructor
    def __init__(self):
        super(convNet, self).__init__()
        # defining layers in convnet
        self.conv1 = nn.Conv2d(2048, 1024, kernel_size=3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(1024, 512, kernel_size=3, stride=1, padding=1)
        self.conv3 = nn.Conv2d(512, 256, kernel_size=3, stride=1, padding=1)
        self.pconv1 = nn.Conv2d(256, 256, kernel_size=(3, 3), stride=1, padding=(1, 1))
        self.pconv2 = nn.Conv2d(256, 256, kernel_size=(3, 7), stride=1, padding=(1, 3))
        self.pconv3 = nn.Conv2d(256, 256, kernel_size=(7, 3), stride=1, padding=(3, 1))
        self.conv4 = nn.Conv2d(256, 64, kernel_size=3, stride=1, padding=1)
        self.conv5 = nn.Conv2d(64, 1, kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        x = nnFunctions.relu(self.conv1(x))
        x = nnFunctions.relu(self.conv2(x))
        x = nnFunctions.relu(self.conv3(x))
        # parallel conv
        x = nnFunctions.relu(self.pconv1(x) + self.pconv2(x) + self.pconv3(x))
        x = nnFunctions.relu(self.conv4(x))
        x = nnFunctions.relu(self.conv5(x))
        return x
How can I remove the fc and layer4?
How can I add the above network on top of the pretrained resnet50? I also want to do fine-tuning, so I want to set requires_grad=True for layer3 (i.e. the last layer after removing fc and layer4). How can I do that?
|
st116884
|
Write a new forward function that starts from the resnet50 forward function, but modifies it in the way you want.
All your questions can be done this way.
|
st116885
|
So you are saying:
class convNet(nn.Module):
    # constructor
    def __init__(self, resnet, mynet):
        super(convNet, self).__init__()
        # defining layers in convnet
        self.resnet = resnet
        self.myNet = mynet

    def forward(self, x):
        x = self.resnet.layer1(x)
        x = self.resnet.layer2(x)
        x = self.resnet.layer3(x)
        x = self.myNet(x)
        return x
Is it okay if I just write self.resnet.layer1(x), or do I have to write everything out for each conv layer in layer1?
And how can I set requires_grad=False for layer1 and layer2, and requires_grad=True for layer3?
|
st116886
|
you can set it to false, like self.resnet.layer1.requires_grad=False (try it out)
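In case setting it on the module itself doesn't stick, note that the flag lives on the individual parameters, so a per-parameter sketch:
for p in self.resnet.layer1.parameters():
    p.requires_grad = False
for p in self.resnet.layer2.parameters():
    p.requires_grad = False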
|
st116887
|
Hi Soumith,
In case I want activations from a certain intermediate layer of my model, should I just rewrite the forward function call, or is there maybe a more straightforward way to achieve the same? Thanks in anticipation.
|
st116888
|
Hi,
What is the easiest way to install torchvision from source? I followed the installation instructions (from source) on the pytorch page, however torchvision was not installed. I tried using conda, but it says that a new version of pytorch will be downloaded. Any way around this?
Thanks,
Lucas
|
st116889
|
Hey,
If you want to install torchvision from source, then go to the torchvision source and install it as a separate package. Torchvision does not come with the pytorch source.
|
st116890
|
I have a Tensor [batch, 512, w, h], and I want to reshape it to (-1, 512) to pass it through a Linear layer:
fusion_layer = fusion_layer.permute(2, 3, 0, 1)
fusion_layer = fusion_layer.view(-1, 512)
fusion_layer = self.bn1(self.fc1(fusion_layer))
fusion_layer = fusion_layer.view(w, h, -1, 256)
but when I run the code, there is a RuntimeError:
File "/home/sfwu/PycharmProjects/colorNet/colornet.py", line 119, in forward
fusion_layer = fusion_layer.view(-1, 512)
File "/home/sfwu/anaconda3/lib/python3.5/site-packages/torch/autograd/variable.py", line 471, in view
return View(*sizes)(self)
File "/home/sfwu/anaconda3/lib/python3.5/site-packages/torch/autograd/_functions/tensor.py", line 98, in forward
result = i.view(*self.sizes)
RuntimeError: input is not contiguous at /b/wheel/pytorch-src/torch/lib/TH/generic/THTensor.c:231
so what’s wrong?
|
st116891
|
Hi,
The problem is that view() can only be called on a contiguous tensor.
You should add fusion_layer = fusion_layer.contiguous() between the permute() and view() calls.
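That is, using the names from your post (sketch):
fusion_layer = fusion_layer.permute(2, 3, 0, 1).contiguous()
fusion_layer = fusion_layer.view(-1, 512)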
|
st116892
|
I have an nn.Sequential (with legacy nn modules) model with 21 layers. I wish to use the learning rate 1 for all the layers except layer 5, 10, 15 and 20. For the remaining layers, I wish to use a learning rate of .1. I understand that I am supposed to group the parameters that use the same learning rate together. I am not sure about the best way to do it.
I have tried the following
list1 = nn.ParameterList()
for i in range(0, 21):
    if i % 5 != 0:
        list1.append(model.get(i).parameters())
However, this doesn’t compile.
Can anyone suggest a quick example that achieves the same?
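For what it's worth, I gather the per-parameter options of the optimizers are the intended route; a rough, untested sketch (assuming a modern nn.Sequential that can be indexed as model[i], not the legacy containers, and mirroring my i % 5 condition above):
import torch.optim as optim

fast_params, slow_params = [], []
for i in range(21):
    if i % 5 != 0:
        fast_params.extend(model[i].parameters())
    else:
        slow_params.extend(model[i].parameters())
optimizer = optim.SGD([
    {'params': fast_params, 'lr': 1.0},
    {'params': slow_params, 'lr': 0.1},
], lr=1.0)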
|
st116893
|
I have a train.py which can train a model on some data and save it as a pkl. Now, I load the pkl in another .py. However, the model seems to be trained again when I load it.
Here is my code.
import torch
model = torch.load('a-b.pkl')
And it runs train.py again, which means that I can't get the model I saved before.
|
st116894
|
You can do that. A sample example is given below:
checkpoint = torch.load('a-b.pkl')
opt.start_epoc = checkpoint['epoch']
best_prec1 = checkpoint['best_prec1']
model.load_state_dict(checkpoint['state_dict'])
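As for train.py running again when you torch.load: unpickling the model imports the module that defined its class, so any training code at the top level of train.py gets executed on import. A guard avoids that (sketch; main() stands for whatever your script currently runs at the top level):
# train.py
if __name__ == '__main__':
    main()   # training code no longer runs when the module is merely imported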
For more details, you may check the following link:
gist.github.com
https://gist.github.com/avijit9/1c7eebf124a02a555f7626a0fbcd04a5
finetune.py
from __future__ import print_function
import argparse
import os
import time
import pdb
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "3"
import random
import torch
import torch.nn as nn
|
st116895
|
Code segment is here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAtt(nn.Module):
    def __init__(self, input_size):
        super(SelfAtt, self).__init__()
        self.input_size = input_size
        self.linear = nn.Linear(input_size, 1)
        self.linear_u = nn.Linear(input_size, input_size)
        self.linear_w = nn.Linear(input_size, input_size)

    def forward(self, x, x_mask):
        """
        x: batchsize * len1 * hidden
        :param x:
        :return: batchsize * len1 * hidden
        """
        len1 = x.size(1)
        ctemp = []
        for i in range(len1):
            x1 = x[:, i, :]
            input1 = self.linear_u(x.view(-1, self.input_size))
            input2 = self.linear_w(x1.repeat(1, len1, 1).view(-1, self.input_size))
            # input1 = x.view(-1, self.input_size)
            # input2 = x1.repeat(1, len1, 1).view(-1, self.input_size)
            hidden = F.relu(input1 + input2)  # [b * len1]
            score = self.linear(hidden).view(x.size(0), x.size(1))
            score.data.masked_fill_(x_mask.data, -float('inf'))
            alpha = F.softmax(score)
            c = alpha.unsqueeze(1).bmm(x).squeeze(1)
            ctemp.append(c)
        return torch.stack(ctemp).transpose(0, 1)
When I use this module in my network, it rapidly causes CUDA out of memory.
How can I solve this problem?
|
st116896
|
The same code runs fine on Mac but not on Linux. I am using the same version of PyTorch.
I have also put up the issue on PyTorch Github page. Please see the details here:
github.com/pytorch/pytorch
Program runs fine on Mac but not on Linux (both CPU and GPU problems)
opened Jul 7, 2017, closed Jul 7, 2017, by salman1993
Here is the link to my repo:
https://github.com/salman1993/simple-qa-on-kb/tree/master/clean/relation_prediction
The model code is very similar to the SNLI model example. The dataset may need...
|
st116897
|
_parser.o -D_THP_CORE -std=c++11 -Wno-write-strings -fno-strict-aliasing -DWITH_DISTRIBUTED
In file included from torch/csrc/DynamicTypes.cpp:1:
In file included from /Users/hugh2/git-local/pytorch/torch/csrc/DynamicTypes.h:7:
In file included from /Users/hugh2/git-local/pytorch/torch/lib/tmp_install/include/THPP/THPP.h:4:
/Users/hugh2/git-local/pytorch/torch/lib/tmp_install/include/THPP/Storage.hpp:6:10: fatal error: 'cstdint'
file not found
#include <cstdint>
^
torch/csrc/Module.cpp:6:10: fatal error: 'unordered_map' file not found
#include <unordered_map>
^
In file included from torch/csrc/Generator.cpp:6:
In file included from /Users/hugh2/git-local/pytorch/torch/csrc/THP.h:35:
/Users/hugh2/git-local/pytorch/torch/csrc/utils.h:6:10: fatal error: 'type_traits' file not found
#include <type_traits>
^
In file included from torch/csrc/autograd/init.cpp:3:
In file included from /Users/hugh2/git-local/pytorch/torch/csrc/THP.h:35:
/Users/hugh2/git-local/pytorch/torch/csrc/utils.h:6:10: fatal error: 'type_traits' file not found
#include <type_traits>
^
In file included from torch/csrc/utils/tuple_parser.cpp:1:
|
st116898
|
Look at the readme: https://github.com/pytorch/pytorch#install-pytorch
We specify: MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py install for OSX. Particularly, MACOSX_DEPLOYMENT_TARGET=10.9 is important and will fix your error.
|
st116899
|
I want to build an autoencoder with the encoder part based on resnets.
When I run my model I get the following error message:
Traceback (most recent call last):
File "train_pytorch_ae.py", line 129, in <module>
preds, reconstruction, feats = model.l_out(inputs)
File "/home/eavsteen/.local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/eavsteen/planet/configs_pytorch/vae_res_1-3.py", line 384, in forward
bc, reconstruction, feats = self.resaenet(x)
File "/home/eavsteen/.local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/eavsteen/planet/configs_pytorch/vae_res_1-3.py", line 357, in forward
x = self.layer5(x)
File "/home/eavsteen/.local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/eavsteen/.local/lib/python2.7/site-packages/torch/nn/modules/container.py", line 64, in forward
input = module(input)
File "/home/eavsteen/.local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/eavsteen/planet/configs_pytorch/vae_res_1-3.py", line 227, in forward
out = self.deconv2(out)
File "/home/eavsteen/.local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/eavsteen/.local/lib/python2.7/site-packages/torch/nn/modules/conv.py", line 524, in forward
output_padding, self.groups, self.dilation)
File "/home/eavsteen/.local/lib/python2.7/site-packages/torch/nn/functional.py", line 131, in conv_transpose2d
return f(input, weight, bias)
RuntimeError: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you
passed in a non-contiguous input.
I built a simple example to test out conv_transpose2d and it works fine:
import torch
import torch.nn as nn
from torch.autograd import Variable
conv = nn.Conv2d(512,512, kernel_size=3, padding=1, stride = 2, bias=False)
conv = conv.cuda()
deconv = nn.ConvTranspose2d(512,512, kernel_size=3, padding=1, output_padding=1, stride=2, bias=False)
deconv = deconv.cuda()
x = Variable(torch.randn(16, 512, 16, 16).cuda(), requires_grad=True)
out = conv(x)
temp = out.cpu().data.numpy()
print 'after conv'
print temp.shape
out = deconv(out)
temp = out.cpu().data.numpy()
print 'after deconv'
print temp.shape
err = out.sum()
err.backward()
It nicely outputs the shapes as expected:
after conv
(16, 512, 8, 8)
after deconv
(16, 512, 16, 16)
So I tried to print out the shape of the tensor in the more complicated model before it goes into the conv_transpose2d and I get (16, 512, 8, 8) as expected, but I still get the RuntimeError.
How can I debug this problem? What does non-contiguous input actually mean?
Ubuntu 16, cuda 8; I tried both cudnn 6.0 and cudnn 5.1.
Some extra information:
The runtime error happens at the first deconvolution layer of the decoder stage, with the following definition:
self.deconv = nn.ConvTranspose2d(planes, planes, kernel_size=3, stride=stride,
padding=1, output_padding=1, bias=False)
I also tested if cudnn is loaded correctly:
import torch
print(torch.backends.cudnn.is_acceptable(torch.cuda.FloatTensor(1)))
print(torch.backends.cudnn.version())
output:
True
6021
|
st116900
|
Hi Elias,
Sorry for the late reply. Is there any chance I can get access to your full script? I want to reproduce and fix this problem.
|
st116901
|
Hi everyone!
I was working in a jupyter notebook and interrupted the process due to some problem in my model. Even though there seem to be no processes running, the memory is not freeing itself. I tried to use nvidia-smi --gpu-reset, but it throws the following error:
Unable to reset this GPU because it’s being used by some other process (e.g. CUDA application, graphics application like X server, monitoring application like other instance of nvidia-smi). Please first kill all processes using this GPU and all compute applications running in the system (even when they are running on other GPUs) and then try to reset the GPU again.
Terminating early due to previous errors.
I was wondering what is the correct way to free the memory? Thanks!!
|
st116902
|
Are you sure all the processes are dead? What is the result of doing e.g.:
ps -ef | grep python
ps -ef | grep jupy
?
|
st116903
|
You are right. Not all processes were dead, as I initially thought. I then used the killall command to kill everything. Now everything is free. Thanks!!!
|
st116904
|
I’m trying to index a 4d tensor as such:
x[batch, :, begin_x:end_x, begin_y:end_y]
(begin_x, begin_y, end_x, end_y are all single integers)
However, the resultant image looks strange [full img on right, slice on the left].
When I do the following for a 2d image it works as expected, i.e.:
x[batch, begin_x:end_x, begin_y:end_y]
I tried .contiguous to no avail. Am I just doing something dumb?
|
st116905
|
The issue was that the original image was being reshaped in numpy instead of using a transpose for the dimensions. Placing it into and extracting it from pytorch then caused the weird re-org. Moral of the story: always transpose dimensions instead of reshaping!
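To make the moral concrete, a tiny sketch (shapes made up):
import numpy as np

h, w = 64, 48
img = np.random.rand(h, w, 3)    # HWC image
good = img.transpose(2, 0, 1)    # CHW: axes reordered, pixels stay intact
bad = img.reshape(3, h, w)       # same target shape, but memory reinterpreted: scrambled image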
|
st116906
|
Hello I have a question about using MKL and CBLAS.
In generic/THBlas.c, lines 320 and 322 call the BLAS functions dgemm_ and sgemm_.
github.com
pytorch/pytorch/blob/de9845588d2b6f411641b442cdb5c5fe7e2dba75/torch/lib/TH/generic/THBlas.c#L319-L322
#if defined(TH_REAL_IS_DOUBLE)
dgemm_(&transa, &transb, &i_m, &i_n, &i_k, &alpha, a, &i_lda, b, &i_ldb, &beta, c, &i_ldc);
#else
sgemm_(&transa, &transb, &i_m, &i_n, &i_k, &alpha, a, &i_lda, b, &i_ldb, &beta, c, &i_ldc);
I would like to substitute them with their CBLAS counterparts referenced here:
https://software.intel.com/en-us/mkl-developer-reference-c-cblas-gemm
I tried to include "mkl.h" in THBlas.c but the PyTorch setup does not recognize the header file. Without the header file the setup throws errors for the CBLAS_LAYOUT and CBLAS_TRANSPOSE types.
The end goal is to use Sparse BLAS routines referenced here:
https://software.intel.com/en-us/mkl-developer-reference-c-mkl-csrmm
My PyTorch was installed with Anaconda which includes the MKL library.
What steps do I need to make the switch?
|
st116907
|
On Caffe, the lr for bias is usually set twice large as that for weights, by setting the lr_mult parameter. I see no similar choice on PyTorch. Is it possible to manage it?
|
st116908
|
Thank you, but the method in the link only works for setting different layers. I need to set a different lr for the weight and bias in the same layer.
|
st116909
|
You can do it with per-parameter options. You just have to filter the bias params into one group and the rest into another.
named_parameters should be of help filtering the parameters.
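A rough sketch of that filtering (base_lr and the momentum value are made up):
base_lr = 0.01
biases = [p for name, p in model.named_parameters() if name.endswith('bias')]
weights = [p for name, p in model.named_parameters() if not name.endswith('bias')]
optimizer = torch.optim.SGD([
    {'params': weights},
    {'params': biases, 'lr': 2 * base_lr},  # Caffe-style: bias lr twice the weight lr
], lr=base_lr, momentum=0.9)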
|
st116910
|
Hi all, I am curious how can I evaluate an expression, but don’t include it in the computational graph?
For example, in this code 11, I need the network to produce Q values before I feed them back for training.
However, these evaluations seem to be affecting the gradients. Right now I am solving the problem by detaching Q_targets, but this solution feels messy.
What’s the assumed way of controlling the dynamic graph?
|
st116911
|
The basic idea is that operations on Variable are equivalent to operations on tensor but build also the dynamic graph.
If you don’t want the dynamic graph (no gradient will flow back), just work with tensors directly.
If this operation is inside an nn.Module that only accepts Variables, you can forward a Variable with the flag volatile=True (which makes this forward pass more efficient because you won't backpropagate through it). Then either detach the output Variable to get a new one that you can use independently of what was done before, or get the tensor corresponding to the output of your network (with .data) and use it for other computations.
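Put together, a minimal sketch of that pattern (target_net and state are stand-ins for your own network and input tensor):
from torch.autograd import Variable

inp = Variable(state, volatile=True)    # no graph is kept for this forward pass
q_values = target_net(inp)
q_targets = Variable(q_values.data)     # fresh, non-volatile Variable, safe to use in the loss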
|
st116912
|
Is there any way to use dropout as a function in the forward method? I want to be able to tweak the amount of dropout during training, so something like this would be great:
layer = self.fc1(prediction)
prediction = F.relu(layer)
prediction = dropout(prediction, prob = parameter)
where parameter can be learned during training. This would be easy if there was an already written dropout function. With the nn.Dropout, you need to instantiate it with a certain probability, and that remains fixed during training.
|
st116913
|
Hi,
The dropout function already exists in the functional interface of nn (at least in master, not sure about older releases) as dropout(input, p=0.5, training=False, inplace=False).
That being said, the dropout function is not differentiable w.r.t. the p parameter, so you won't get gradients for it, just for input.
|
st116914
|
Thanks, so if I obtain the p parameter through information from other layers, this should work?
Also, do I need to specify training=True inside the Forward method, or does that automatically get passed along during training?
|
st116915
|
input = Variable(...)
p = some_net(input)
x = some_other_net(input)
out = F.dropout(x, p.data[0])
out.backward()
In the above example, no gradient will flow back into some_net, so you won't be able to train it.
Even if the interface took a Variable as input for p, it would not work, because the expression d(out)/d(p) does not exist, and so no gradient can be computed for this input.
|
st116916
|
Hi,
I want to load two text datasets (A and B) by torchtext.
And I build a vocabulary on A using the following code.
# read data
TEXT = data.Field()
LABELS = data.Field(sequential=False)
train, val, test = data.TabularDataset.splits(
    path=args.data, train='train.csv', validation='valid.csv', test='test.csv',
    format='csv', fields=[('text', TEXT), ('label', LABELS)])
train_iter, val_iter, test_iter = data.BucketIterator.splits(
    (train, val, test),
    batch_sizes=(args.batch_size, 4 * args.batch_size, 4 * args.batch_size),
    sort_key=lambda x: len(x.text), device=0)
TEXT.build_vocab(train.text, wv_type=args.wv_type, wv_dim=args.wv_dim)
LABELS.build_vocab(train.label)
I want to use the same vocabulary on B instead of rebuild a new one.
Is there any solutions by torchtext?
Can I dump vocab in torchtext and load-assign it?
Can I reuse the Field in torchtext?
Thanks
|
st116917
|
[bystander, newbie comment] Seems like vocab is missing a .freeze() method? What about, can you do something like TEXT.vocab.max_size = len(TEXT.vocab) ?
|
st116918
|
I'm now doing things like the following; this works, but it is somehow UGLY.
Elegant solution wanted.
# read data
TEXT = data.Field()
LABELS = data.Field(sequential=False)
# use dataset A to build vocab
vocab_train, _, _ = data.TabularDataset.splits(
    path=args.vocab, train='train.csv', validation='valid.csv', test='test.csv',
    format='csv', fields=[('text', TEXT), ('label', LABELS)])
TEXT.build_vocab(vocab_train.text, wv_type=args.wv_type, wv_dim=args.wv_dim)
# reuse vocab for dataset B
train, val, test = data.TabularDataset.splits(
    path=args.data, train='train.csv', validation='valid.csv', test='test.csv',
    format='csv', fields=[('text', TEXT), ('label', LABELS)])
train_iter, val_iter, test_iter = data.BucketIterator.splits(
    (train, val, test),
    batch_sizes=(args.batch_size, 4 * args.batch_size, 4 * args.batch_size),
    sort_key=lambda x: len(x.text), device=0)
LABELS.build_vocab(train.label)
|
st116919
|
This line doesn't report an error. Do you mean build a Vocab first and then assign it?
This build-and-assign idea seems to work; I think the perfect method would be dump-and-load.
|
st116920
|
Yes, build the vocab first, then freeze it. After freezing the vocab, any new, previously unseen words should just be sent to unk. I think?
|
st116921
|
tiny pytorch implementation of neural style transfer
A tiny implementation of neural style transfer in PyTorch. Simple and minimal.
Uses pretrained vgg16 models.
I use an l1_norm loss to make the style weights and content weights easy to tune.
|
st116922
|
my code is
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

class dogloader(Dataset):
    def __init__(self, img, label, transform=None):
        self.img = img
        self.label = label
        self.transform = transform

    def __len__(self):
        return len(self.label)

    def __getitem__(self, idx):
        img = Image.open(self.img[idx]).convert('RGB')
        print(img.size)
        if self.transform is not None:
            img = self.transform(img)
        label = torch.from_numpy(np.array(self.label[idx]))
        # print(idx)
        return img, label
and the error is
Traceback (most recent call last):
File "torch_test.py", line 31, in <module>
for batch_idx, (data, target) in enumerate(dataloader):
File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 212, in __next__
return self._process_next_batch(batch)
File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 239, in _process_next_batch
raise batch.exc_type(batch.exc_msg)
RuntimeError: Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 41, in _worker_loop
samples = collate_fn([dataset[i] for i in batch_indices])
File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 110, in default_collate
return [default_collate(samples) for samples in transposed]
File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 90, in default_collate
storage = batch[0].storage()._new_shared(numel)
File "/usr/local/lib/python2.7/dist-packages/torch/storage.py", line 113, in _new_shared
return cls._new_using_fd(size)
RuntimeError: $ Torch: unable to mmap memory: you tried to mmap 0GB. at /b/wheel/pytorch-src/torch/lib/TH/THAllocator.c:317
|
st116923
|
FinlayLiu:
for batch_idx, (data, target) in enumerate(dataloader):
Seems like the error is occurring outside of the code you provided? On line 31, in fact?
|
st116924
|
Related discussions:
Mmap memory error when use multiple CPU on Azure
Hi all,
I am using multiple CPUs to train my model on azure with MongoDB. It seems I need to open a connection to the data in each of the threads. Then I got this error.
Traceback (most recent call last):
File "main.py", line 225, in <module>
model.share_memory()
File "/home/textiq/anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 468, in share_memory
return self._apply(lambda t: t.share_memory_())
File "/home/textiq/anaconda/lib/python3.6/site-packages/torch/nn…
Loading huge data functionality
This way is good for images but isn’t fit for text. In NLP, data is usually in one file with multiple lines instead of one image in one file. So how to customize Dataset accordingly?
|
st116925
|
Hi All. I want to transfer my model from the CPU to the GPU and also measure the time it takes to classify an image. I am using a pretrained AlexNet for this purpose. I am doing it as follows:
t1 = time.time()
model.cuda()
t2 = time.time()
model(img)
t3 = time.time()
Is this the correct way to do it? For example, will t2-t1 tell me the model transfer time, or will it just kick off the transfer to the GPU in the background and move forward with the code? I sketched a guess with explicit synchronization below. Thanks
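My guess, based on the note in the docs that CUDA calls are asynchronous, is that explicit synchronization is needed around each timed region; untested sketch:
import time
import torch

torch.cuda.synchronize()
t1 = time.time()
model.cuda()
torch.cuda.synchronize()   # wait until the transfer has actually finished
t2 = time.time()
output = model(img)
torch.cuda.synchronize()   # wait until the forward kernels have finished
t3 = time.time()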
|
st116926
|
I tried to build PyTorch from github. The instructions said that I need to install magma-cuda80. But I already installed cuda80, which I needed to use the pytorch library. cuda80 seems to be much more up to date than magma-cuda80 in the anaconda repository. What is the difference between cuda80 and magma-cuda80? Do I need magma instead of just cuda80 to compile pytorch from source?
|
st116927
|
Hello,
I have a model, for which I am doing a backward call twice. However (by mistake) I forgot to do the optimizer step after the second backward call.
I would think that my second backward call should not affect anything. But I was hoping to make sure that this is the case?
The thing is that my results change quite a lot (compared to when I don’t do the second backward call). Is something going on with the optimizer (Adam)? which is setup as optimizer = optim.Adam(model.parameters(),lr)
pseudo code:
model.zero_grad()
opt = model(data)
loss = cross_entropy(targets, opt)
loss.backward(retain_variables=True)
optimizer.step()
loss.backward()
Thanks
|
st116928
|
Hello,
I have 2 models (M1 and M2), with two losses L1 and L2. Not to mention 2 optimizers.
The output from M1 is fed as an input to M2, which makes M1 part of the computational graph of M2 (right?)
Consequently, the parameters of M1 will receive gradient signal from L2 (loss of the second model). I would like to prevent this from happening, i.e. M1 only learns from L1 and M2 only learns from L2. How can I do this?
I am guessing that I need to use a detach() somewhere (like in the DCGAN example), but I really don’t know where exactly. (I had a previous question on how detach() works and I am still not super clear)
Any help would be greatly appreciated.
Thanks,
Gautam
|
st116929
|
Would this do what I want?
op1 = model1(data1)
op2 = model2(data1,op1.detach())
Thanks
|
st116930
|
Looks correct… detach creates a new Variable which only shares the data with the original variable, but not the graph; kind of like Variable(op1.data).
|
st116931
|
@ruotianluo Thanks for the reply!
Do you mean if I did:
op1 = model1(data1)
op2 = Variable(op1.data)
op3 = model2(data1,op2)
It would have the same effect?
I was doing this before but I thought that this was sharing the graph along with the data.
Thanks again
|
st116932
|
@ruotianluo Sorry, but I had another question in there to ask.
So L1.backward() will give gradients to model1 only, and L2.backward() will give gradients to model2 only.
If I do L1 = L1 + lambda*L2, followed by L1.backward(), is L2 still not going to have an effect on the parameters of model1, or will something change w.r.t. the parameters of model1?
|
st116933
|
Hi,
I am very much attracted to pytorch and have started using it instead of keras with theano as the back end recently.
However, when I train the custom rnn model in pytorch I get an mse of approx 1e-5, which is way worse than keras, where I trained the same model and got an mse of approx 1e-10.
I am afraid the way bptt is implemented in pytorch, and the optimizers pytorch uses to train rnns, are not as efficient as theano's.
If anyone can guide me in resolving this problem it would be greatly appreciated.
|
st116934
|
I’ve implemented a seq2seq model with a LSTM, and although it runs well on CPU, on GPU I get the following error:
...
assert hx.is_contiguous()
AssertionError
After some bug hunting, I found out that this happens because of the initial hidden state of the encoder. My code for the initial state does the following:
hidden = (init_hidden[0].expand(1, batch_size, init_hidden[0].size(0)),
init_hidden[1].expand(1, batch_size, init_hidden[1].size(0)))
The expansion needs to happen because the nn.LSTM expects a tensor of this size. hidden is a parameter list containing 2 vectors (h0 and c0):
init_hidden = nn.ParameterList([
    nn.Parameter(torch.zeros(hidden_size)) for _ in range(2)])
Now, I know that by doing .contiguous() on each element of the tuple, I solve the problem, but this happens at the expense of a lot of unnecessary memory and computation time. After all the initialization is always the same for each element of the batch, and what I’ve done works well in CPU.
Is there a better way to do this? And why this difference between CPU and GPU? And why is the result of an expand not contiguous? After all, no memory is allocated, it is just a view of the tensor…
|
st116935
|
I’m trying to fine-tune a pretrained resnet model via a triplet loss. Due to the nature of the loss function, sometimes I make multiple forward passes through the model without ever making a backwards pass or stepping the optimizer. It seems that the norm of the final layer decreases even when I do not explicitly optimize.
I have created a small toy example below to illustrate what I'm talking about. I run an image forward through the model multiple times and then pass the same image as a "validation" image and calculate the final layer norm. Maybe this behavior is a result of the BatchNorm layers? If so, I would love to know how those layers create this behavior.
import io
import numpy as np
from PIL import Image
import requests
import torch
from torch.autograd import Variable
from torchvision.models import resnet50
import torchvision.transforms as transforms
# Create finetuning model.
model = resnet50(pretrained=True)
for name, child in model.named_children():
    if name != 'fc':
        for p in child.parameters():
            p.requires_grad = False
optimizer = torch.optim.SGD(filter(lambda x: x.requires_grad,
                                   model.parameters()),
                            lr=1e-3)
# Download a random image and transform for input
# to resnet.
url = ('https://raw.githubusercontent.com/pytorch/'
'pytorch/master/docs/source/_static/img/'
'pytorch-logo-dark.png')
response = requests.get(url)
img = Image.open(io.BytesIO(response.content))
transform = transforms.Compose([
    transforms.Scale(224),
    transforms.CenterCrop(224),
    transforms.ToTensor()])
img = transform(img)[:3, :, :]
# "Training" and "Validation" data are the exact
# same image.
train = img.unsqueeze(0)
val = Variable(img.unsqueeze(0), volatile=True)
# Run a single forward pass and measure
# norm of validation embedding. No backwards
# passes, no optimizer steps.
for epoch in range(20):
    model.train()
    optimizer.zero_grad()
    train_emb = model(Variable(train))
    model.eval()
    val_emb = model(val)
    print(val_emb.norm(2).data[0])
|
st116936
|
Sounds like BatchNorm–when you call .train() on your net, you put all of its child modules into training mode, including the BatchNorm modules. When BatchNorm is in training mode, every time you run a forward pass, it is updating its “running mean” and “running variance” parameters on its own (i.e. no interaction with SGD or any user-defined optimizer) which will definitely have an effect on your observed output. I’m not sure if this holds true when the Variable being passed through is volatile, but given what you’re seeing I’m going to guess that it does.
If you don’t want these updates to happen, put the BatchNorm modules in inference mode with model.eval(), or just set the specific BatchNorm modules to inference mode if you need the other modules in training mode (i.e. loop through all modules and have a conditional that checks if a module is nn.BatchNorm2d, and calls .eval() on it if it is).
Or you could probably set the momentum term of each BatchNorm module to 0–IIRC, that’s the term that controls the update rate of the running means and variances, so if you still want to use the per-batch means and variances but you don’t want to update the running statistics, setting momentum to 0 should allow you to do that.
|
st116937
|
Gotcha, thanks so much for the explanation. I guess it still weirds me out that the running mean and variance should be continually changing when feeding the same data into the layer, but I don’t have a great handle on how that calculation is performed (and got a bit lost trying to delve into the C code that actually implements it).
Nonetheless, thanks for the explanation and the recommended ways to deal with it!
|
st116938
|
passing a single image through a BatchNorm network is a terrible idea, especially in training mode.
Your batch statistics end up being just image channel statistics.
What’s happening with running_mean/running_std?
During training, this layer keeps a running estimate of its computed mean and variance. The running sum is kept with a default momentum of 0.1.
i.e. it’s updated as: running_mean = 0.9 * running_mean + 0.1 * current_mean
I’d suggest that you iterate over your modules and set all the BatchNorm layers to .eval(), like this:
m.apply(lambda x: x.eval() if 'BatchNorm' in str(type(x)) else False)
|
st116939
|
Passing a single instance to BatchNorm was just for demonstration purposes in my toy example. I’m not actually doing this for the real problem I was trying to solve.
Thanks for the update calculation! It looks like running_mean is initialized at zero. So, even if current_mean is static and does not change between updates, running_mean still has to slowly move from the initial zero to the static current_mean value.
|
st116940
|
Anecdotal and orthogonal, but I’ve successfully trained object detection convnets (homebrew that’s somewhere between OverFeat and ResNet Faster R-CNN) with single-image-batches and batchnorm at every layer. Not sure why it works (I would expect it to fail) but it trains just fine.
|
st116941
|
Hi, I am trying to build a highway network using pytorch and I need to initialize my transform bias variable with value -1 and whose size will be equal to my network layer size. In tensorflow, we could do this using
b_T = tf.Variable(tf.constant(-1, shape=[size]), name="bias_transform").
How do I perform this in pytorch? I would be very grateful if anybody can provide an insight into this. Thanks,
|
st116942
|
I can’t really answer that but I am trying to do the same thing and have no idea how. If you found out in the mean time, please also write.
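In case anyone lands here later, a sketch of what I believe is the equivalent (size is the layer width from the tensorflow snippet; value made up here, untested):
import torch
import torch.nn as nn

size = 50  # made-up layer width
b_T = nn.Parameter(torch.Tensor(size).fill_(-1))
# or, for the bias of an existing transform-gate layer:
# transform_gate = nn.Linear(size, size)
# transform_gate.bias.data.fill_(-1)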
|
st116943
|
I have a scalar a and a matrix b. I want to perform a simple multiplication of the matrix by a scalar, but it seems it doesn’t work.
I also tried result = a*b.expand_as(a). This fails if a is zero, which in my case, it can be. Any ideas?
EDIT: my tensors are:
torch.Size([1])
torch.Size([4, 6, 28, 28])
EDIT2, nevermind, it’s the other way around b*a.expand_as(b) , where b is the matrix and a is the scalar.
|
st116944
|
Hi,
I just noticed that in recurrent nets, e.g. http://pytorch.org/docs/master/nn.html#gru there are two bias vectors. Is there a particular reason for this?
|
st116945
|
Hello,
I think this is to match CUDNN:
Why RNN needs two biases?
As you pointed out it doesn’t really change the definition of the model, but this is what cuDNN does, so we’ve made our RNNs consistent with this behaviour.
Best regards
Thomas
|
st116946
|
my code is (I used netD.cuda() and netG.cuda()):
self.optimizer_D.zero_grad()
self.backward_D()
self.optimizer_D.step()
self.optimizer_G.zero_grad()
self.backward_G()
self.optimizer_G.step()
After executing self.optimizer_G.step(), the GPU gets its tasks and starts to calculate results; the GPU will take 0.1s to complete the tasks. However, during this 0.1s, the CPU is doing nothing. I want to know how to utilize this free CPU time to improve the whole training speed. BTW: I have only one GPU in my computer.
Or is there another way to improve training speed in this single-GPU case? Thanks a lot.
|
st116947
|
Hi
My problem is this:
I have data X and Y and I know its latent representation T. T is between [-1,1]. So, I am trying to create two parallel auto-encoders (Q1, P1) and (Q2, P2) [#both have a single hidden layer] with the following objectives
(1) Q1, Q2 = encoder
(2) P1, P2 = decoder
(3) My objective is to minimize the latent representation w.r.t T and also the reconstructed data w.r.t to X and Y
(4) I would also like to minimize the difference between the first hidden layer outputs using m.s.e.
Few snippets of code are given below (for defining Q1,P1). Q2,P2 are also defined similarly.
import torch.nn as nn
import torch.nn.functional as F

class Q1(nn.Module):
    def __init__(self):
        super(Q1, self).__init__()
        # separate layers
        self.headX = nn.Linear(D_in1, H)
        self.tailX = nn.Linear(H, D_out)

    def forward(self, x):
        x = self.headX(x)
        x1 = F.relu(x)
        return F.tanh(self.tailX(x1)), x1

class P1(nn.Module):
    def __init__(self):
        super(P1, self).__init__()
        self.headX = nn.Linear(D_out, H)
        self.tailX = nn.Linear(H, D_in1)

    def forward(self, x):
        x = self.headX(x)
        x1 = F.relu(x)
        return F.tanh(self.tailX(x1)), x1
Now when I am trying to compute the loss for objective (3) I am doing this (which is working fine)
for batch_idx, (data1, y_act1, data2, y_act2) in enumerate(train_loader):
    data1, y_act1 = Variable(data1), Variable(y_act1)
    data2, y_act2 = Variable(data2), Variable(y_act2)
    y_pred1, tx1 = Q1(data1)
    y_pred2, ty1 = Q2(data2)
    x_pred1, tx2 = P1(y_pred1)
    x_pred2, ty2 = P2(y_pred2)
    # Compute and print loss.
    # embedding loss
    loss_em1 = Loss_embed1(y_pred1, y_act1)
    loss_em2 = Loss_embed2(y_pred2, y_act2)
    # reconstruction loss
    loss_re1 = Loss_recons1(x_pred1, data1)
    loss_re2 = Loss_recons2(x_pred2, data2)
    # compute loss for output of common layers but employed on intermediate outputs
    # in the encoding side
    q1 = torch.mean(torch.norm(tx1 - ty1, 2, 1))
    # in the decoding side
    q2 = torch.mean(torch.norm(tx2 - ty2, 2, 1))
    # Total loss
    loss_a = loss_re1 + loss_re2
    loss_b = loss_em1 + loss_em2
    loss = loss_a + loss_b + q1 + q2
    print('Epoch {:d}: {:d} reconst Loss {:.6f} embed Loss {:.6f} enc Loss {:.6f} dec Loss {:.6f}\n'.format(
        epoch, batch_idx, loss_a.data[0], loss_b.data[0], q1.data[0], q2.data[0]))
    # set the gradients to zero
    P_decoder1.zero_grad()
    Q_encoder1.zero_grad()
    P_decoder2.zero_grad()
    Q_encoder2.zero_grad()
    # compute by backpropagation
    loss.backward()
    # update the weights
    P_decoder1.step()
    Q_encoder1.step()
    P_decoder2.step()
    Q_encoder2.step()
Is this the most appropriate way to do it? Any information on this will be helpful. Thanks.
|
st116948
|
[bystander comment] Not answering your question, but seems like the constructor of P and Q take zero parameters, whereas you are passing in data1 etc as parameters. Does this compile/run ok for you?
|
st116949
|
I am trying to train a very simple network on CIFAR10 Dataset
class Net(nn.Module):
    def __init__(self, filters):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, filters, 5, padding=2)
        self.fc1 = nn.Linear(filters * 16 * 16, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        x = x.view(-1, self.num_flat_features(x))
        x = self.fc1(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

def train_random_model(filters, learning_rate, epochs, modelname, dataloader):
    net = Net(filters).cuda()
    net.train()
    criterion = nn.CrossEntropyLoss().cuda()
    optimizer = optim.SGD(net.parameters(), lr=learning_rate, momentum=0)
    for epoch in range(epochs):
        running_loss = 0.0
        correct = 0
        total = 0
        for key, value in dataloader.items():
            inputs1, labels1 = value
            inputs, labels = Variable(inputs1.cuda()), Variable(labels1.cuda())
            optimizer.zero_grad()
            outputs = net(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
            running_loss += loss.data[0]
            _, predicted = torch.max(outputs.data, 1)
            total += labels1.size(0)
            labels1 = labels1.cuda()
            corr_pred = (predicted == labels1).sum()
            correct += corr_pred
        trainerror = 100.0 * (total - correct) / total
        print('Finished Training', epoch, "Train Error", trainerror)
        if trainerror <= 1.0:
            break
    print("Saving model at", modelname)
    torch.save(net, modelname)
    print("Sanity Check")
    correct = 0
    total = 0
    net.eval()
    for key, value in dataloader.items():
        inputs1, labels1 = value
        inputs, labels = Variable(inputs1.cuda()), Variable(labels1.cuda())
        outputs = net(inputs)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        labels1 = labels1.cuda()
        correct += (predicted == labels1).sum()
    trainerror = 100.0 * (total - correct) / total
    print("Train Error", trainerror, total, correct)
Unfortunately, the final train error printed while training is very different from the train error computed while evaluating.
Do you have any idea what might be causing this?
I understand Batch Normalization typically causes these kinds of errors, but I am not using that layer.
|
st116950
|
How many epochs did you train the network for?
I think this is just overfitting. Normally you should use the validation error only and never look at the training error. When your validation error does not improve you stop training (that’s called early stopping).
The only time you look at the training error is at the very beginning to make sure that your network is able to overfit your dataset (i.e. learn to represent completely the dataset).
If it does not overfit, change your network or preprocess your data because your network isn’t able to learn anything.
If your network is able to overfit, great, now you can focus on having it learn a general representation. You discard that training error and only focus on validation error which corresponds to unseen data.
|
st116951
|
Yeah, I was able to identify the issue: the network's training error was computed before the gradient update was done, and I was evaluating its performance after the updates.
|
st116952
|
I don't quite understand what you mean by overfitting. I am giving as input the training images and labels of the CIFAR 10 dataset, and in the next section I am just trying to evaluate the train accuracy manually by doing a forward pass on the learnt network.
|
st116953
|
I thought you were passing new unseen data to the network. It is expected to have a (sometimes big) difference between training and validation error.
However, if you are passing the same data you should get almost the same score.
|
st116954
|
The model is constantly being updated in each SGD Step so running train error will not be equal to final train error at the end of epoch.
|
st116955
|
Looking at examples, it looks like it is fairly standard to create a child of Module, and thereby be able to use the functional interface etc.?
Is it fair to say that this is the preferred approach, rather than using e.g. nn.Sequential?
Is there any way of adding nn.functional methods into a Sequential? Or is it best to create a Module child, in order to use nn.functional?
|
st116956
|
I imagine that another reason to use the Module child approach is to use simple tensor transformations, like transposition/concatenation/stack etc. (Though I suppose such a function could also be wrapped in a tiny Module, as sketched below.)
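An untested sketch of that wrapper idea (Lambda is my own name, not a pytorch class):
import torch.nn as nn
import torch.nn.functional as F

class Lambda(nn.Module):
    def __init__(self, fn):
        super(Lambda, self).__init__()
        self.fn = fn

    def forward(self, x):
        return self.fn(x)

model = nn.Sequential(
    nn.Linear(10, 20),
    Lambda(F.relu),   # a functional op dropped into Sequential
    nn.Linear(20, 5),
)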
|
st116957
|
When I am doing double backward inside a module wrapped with DataParallel, it raises RuntimeError: Scatter is not differentiable twice. So it seems we cannot do double backward when using multiple GPUs. Is that true? If so, is there any alternative way to do it?
|
st116958
|
Hi,
The only problem is that the Scatter module/function that you are using inside the DataParallel is not differentiable twice. This has not been implemented yet unfortunately.
|
st116959
|
I am working with a large dataset of videos, and I modified a torch.utils.data.Dataset so that it returns the video path and sends it to my video decoder using opencv.
There is:
batch_proc = multiprocessing.Process(target=video_batcher, args=(queue, fv, batchsize))
which starts a new process for each video clip and calls join() after all batches of a video are processed. queue is a multiprocessing.Queue object which stores a batch of converted frames, say, 128 images per batch.
fv is an object which maintains a buffer and reads from the video continuously using threading.
I intended to concatenate every batch of feature maps (after a pretrained CNN) and save them as hdf5, but an error like this is reported on each loop:
Traceback (most recent call last):
File "/opt/conda/lib/python2.7/multiprocessing/queues.py", line 268, in _feed
send(obj)
File "/opt/conda/lib/python2.7/site-packages/torch/multiprocessing/queue.py", line 17, in send
ForkingPickler(buf, pickle.HIGHEST_PROTOCOL).dump(obj)
File "/opt/conda/lib/python2.7/pickle.py", line 224, in dump
self.save(obj)
File "/opt/conda/lib/python2.7/pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "/opt/conda/lib/python2.7/multiprocessing/forking.py", line 67, in dispatcher
self.save_reduce(obj=obj, *rv)
File "/opt/conda/lib/python2.7/pickle.py", line 401, in save_reduce
save(args)
File "/opt/conda/lib/python2.7/pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "/opt/conda/lib/python2.7/pickle.py", line 554, in save_tuple
save(element)
File "/opt/conda/lib/python2.7/pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "/opt/conda/lib/python2.7/multiprocessing/forking.py", line 66, in dispatcher
rv = reduce(obj)
File "/opt/conda/lib/python2.7/site-packages/torch/multiprocessing/reductions.py", line 113, in reduce_storage
fd, size = storage._share_fd_()
RuntimeError: $ Torch: unable to mmap memory: you tried to mmap 0GB. at /b/wheel/pytorch-src/torch/lib/TH/THAllocator.c:317
Is that allocator not working well with multiprocessing.Queue?
|
st116960
|
is this in a docker setting? if so, need to set the flag: --ipc=host when starting the docker session
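For example (sketch; substitute your usual arguments):
docker run --ipc=host ...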
|
st116961
|
It is. Thank you. Btw, if I want to run an LSTM on a quite long sequence of data with relatively high dimension, will pytorch handle GPU memory well? To my understanding autograd records all operations through forward and backward, and pytorch processes all data as a batch, so I am afraid of an out of memory error.
|
st116962
|
Hi, I met exactly the same problem. Could you please tell me how you solved this problem specifically? How can I set the flag --ipc=host when starting the docker session?
|
st116963
|
For the issue about the docker session you can check https://docs.docker.com/engine/reference/run/#ipc-settings-ipc . And I managed to keep the data as numpy.ndarrays during loading and preprocessing; they are only converted to CUDA tensors right before being sent to the network.
|
st116964
|
Hi,
I am working on implementing an attention based summarization model. Let’s assume that batch_size=2, vocabulary size=4, sequence length=3. I want to accumulate attention weights to create a probs matrix and then use these matrix to sample the words. Below is an example.
word_indices (batch_size, sequence_length)
[ 0 2 3
0 1 1 ] # duplicate word index
attn_weights (batch_size, sequence_length)
[ 0.1 0.3 0.6 ex) 0.3 is attention weights for word index 2
0.7 0.1 0.2 ]
probs (batch_size, vocabulary_size)
[ 0 0 0 0
0 0 0 0 ]
batch_indices = [0, 0, 0, 1, 1, 1]
word_indices = [0, 2, 3, 0, 1, 1]
repeat_indices= [0, 1, 2, 0, 1, 2]
# Accumulate attention weights
probs[batch_indices, word_indices] += attn_weights[batch_indices, repeat_indices]
I want to get the result as below.
probs = [ 0.1 0 0.3 0.6
0.7 0.3 0 0 ]
However, I got the result as below.
probs = [ 0.1 0 0.3 0.6
0.7 0.2 0 0 ]
The problem is that when there is a duplicate index ((1, 1) twice), only the value corresponding to the last index is applied.
How can I get the desired result?
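One possibility I am considering is index_add_, which does accumulate duplicate indices, applied on a flattened view; a rough, untested sketch:
V = probs.size(1)                                 # vocabulary size
b = torch.LongTensor(batch_indices)
w = torch.LongTensor(word_indices)
r = torch.LongTensor(repeat_indices)
vals = attn_weights.view(-1).index_select(0, b * attn_weights.size(1) + r)
probs.view(-1).index_add_(0, b * V + w, vals)     # duplicate indices are summed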
Thanks,
|
st116965
|
I’d like to convert mnist_hogwild to cuda version.
So, I tried changing this line ( https://github.com/pytorch/examples/blob/master/mnist_hogwild/main.py#L52 ) to
model = Net().cuda()
Also, I added multiprocessing.set_start_method('spawn')
By the way, I got this error when I executed the code (before the subprocesses start):
THCudaCheck FAIL file=/Users/qbx2/pytorch/torch/csrc/generic/StorageSharing.cpp line=249 error=63 : OS call failed or operation not supported on this OS
Traceback (most recent call last):
File "main.py", line 63, in <module>
p.start()
File "/Users/qbx2/anaconda3/lib/python3.6/multiprocessing/process.py", line 105, in start
self._popen = self._Popen(self)
File "/Users/qbx2/anaconda3/lib/python3.6/multiprocessing/context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/Users/qbx2/anaconda3/lib/python3.6/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/Users/qbx2/anaconda3/lib/python3.6/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/Users/qbx2/anaconda3/lib/python3.6/multiprocessing/popen_fork.py", line 20, in __init__
self._launch(process_obj)
File "/Users/qbx2/anaconda3/lib/python3.6/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/Users/qbx2/anaconda3/lib/python3.6/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
File "/Users/qbx2/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 104, in reduce_storage
metadata = storage._share_cuda_()
RuntimeError: cuda runtime error (63) : OS call failed or operation not supported on this OS at /Users/qbx2/pytorch/torch/csrc/generic/StorageSharing.cpp:249
To resolve this issue, I changed the code to call model.cuda() in the subprocesses, inside the train() method, and it works fine. But what is this error? Is it prohibited to pass a cuda model to a subprocess? It doesn't make sense. I'm using macOS Sierra (10.12.5), python 3.6.1, and cuda 8.0.
Thank you.
|
st116966
|
Hogwild training is not designed to run on the GPU: with cuda, sharing tensors across processes will most likely leave the shared model parameters in a corrupted state, which I don't think cuda will allow. You could set up a pool of processes that organizes the updates for you, and that should run on the GPU, but that is not Hogwild training, if that is what you are looking for.
|
st116967
|
You can refer to the pytorch documentation for more info here: http://pytorch.org/docs/master/notes/multiprocessing.html
|