st117168
|
Hi all,
I am trying to use a packed sequence as input to an RNN for language modeling, but it didn’t work as expected. Here is the code.
The following code does not use packed sequence and works fine.
class LanguageModel(nn.Module):
    def __init__(self, ntoken, ninp, nhid, nlayers):
        super(LanguageModel, self).__init__()
        self.encoder = nn.Embedding(ntoken, ninp)
        self.rnn = nn.GRU(ninp, nhid, nlayers, bidirectional=False, batch_first=True)
        self.decoder = nn.Linear(nhid, ntoken)

    def forward(self, inputs, mask):
        # embedding
        emb = self.encoder(inputs.long())
        output_, _ = self.rnn(emb)
        # mask output
        mask_ = mask.unsqueeze(-1).expand_as(output_).float()
        output = output_ * mask_
        # project output
        decoded = self.decoder(output.view(output.size(0)*output.size(1), output.size(2)))
        return decoded.view(output.size(0), output.size(1), decoded.size(1))
The loss is
epoch: 0, it: 100, loss: 3.0152803540229796, acc: 0.21636631093919279
epoch: 0, it: 200, loss: 2.5584751963615417, acc: 0.2976714611053467
epoch: 0, it: 300, loss: 2.424778082370758, acc: 0.31738966673612595
epoch: 0, it: 400, loss: 2.3470527958869933, acc: 0.3234238114953041
epoch: 0, it: 500, loss: 2.3100508141517637, acc: 0.32845291540026667
epoch: 0, it: 600, loss: 2.269477825164795, acc: 0.33436131440103056
epoch: 0, it: 700, loss: 2.2323202776908873, acc: 0.3435117769241333
epoch: 0, it: 800, loss: 2.197794075012207, acc: 0.3516477417945862
epoch: 0, it: 900, loss: 2.161339772939682, acc: 0.36355975896120074
epoch: 0, it: 1000, loss: 2.1328598356246946, acc: 0.37262321919202807
epoch: 0, it: 1100, loss: 2.120845100879669, acc: 0.37346176490187644
epoch: 0, it: 1200, loss: 2.0859076166152954, acc: 0.3842319694161415
epoch: 0, it: 1300, loss: 2.070769666433334, acc: 0.39238578140735625
epoch: 0, it: 1400, loss: 2.057626646757126, acc: 0.394229926019907
The following code changes only the forward function of the above class to use a packed sequence, and the loss does not decrease.
def forward(self, inputs, mask):
    # embedding
    emb = self.encoder(inputs.long())
    # sequence length
    seq_lengths = torch.sum(mask, dim=-1).squeeze(-1)
    # sort sequences by length
    sorted_len, sorted_idx = seq_lengths.sort(0, descending=True)
    index_sorted_idx = sorted_idx.view(-1, 1, 1).expand_as(inputs)
    sorted_inputs = inputs.gather(0, index_sorted_idx.long())
    # pack sequence
    packed_seq = torch.nn.utils.rnn.pack_padded_sequence(
        sorted_inputs, sorted_len.cpu().data.numpy(), batch_first=True)
    # feed it into RNN
    out, _ = self.rnn(packed_seq)
    # unpack sequence
    unpacked, unpacked_len = torch.nn.utils.rnn.pad_packed_sequence(
        out, batch_first=True)
    # unsort the output
    _, original_idx = sorted_idx.sort(0, descending=True)
    unsorted_idx = original_idx.view(-1, 1, 1).expand_as(unpacked)
    output = unpacked.gather(0, unsorted_idx.long())
    # project output
    decoded = self.decoder(output.view(output.size(0)*output.size(1), output.size(2)))
    return decoded.view(output.size(0), output.size(1), decoded.size(1))
The loss is
epoch: 0, it: 100, loss: 3.2207558250427244, acc: 0.16291182031854987
epoch: 0, it: 200, loss: 3.0119342851638793, acc: 0.17143549174070358
epoch: 0, it: 300, loss: 2.9969462013244628, acc: 0.17032290264964103
epoch: 0, it: 400, loss: 3.004516990184784, acc: 0.16658018743619324
epoch: 0, it: 500, loss: 2.987579824924469, acc: 0.17096973054111003
epoch: 0, it: 600, loss: 2.9835088515281676, acc: 0.1719639204442501
epoch: 0, it: 700, loss: 2.983652164936066, acc: 0.17081086978316307
epoch: 0, it: 800, loss: 2.993579874038696, acc: 0.16737559842411429
epoch: 0, it: 900, loss: 2.981204776763916, acc: 0.1713446132838726
epoch: 0, it: 1000, loss: 2.982670919895172, acc: 0.17059179410338401
epoch: 0, it: 1100, loss: 2.975895357131958, acc: 0.17110723197460176
epoch: 0, it: 1200, loss: 2.9888737654685973, acc: 0.1680946245789528
epoch: 0, it: 1300, loss: 2.982082223892212, acc: 0.17025468410924077
In both experiments, I used SGD with 0.1 learning rate.
Am I using the packed sequence in the wrong way?
Thanks!
|
st117169
|
That’s certainly the right sequence of steps; this stuff is no fun to debug, but I’d try printing out the indices and making sure things end up in the right places. If at all possible (i.e. if you don’t have two coupled RNNs, as for inference or translation), you can instead sort both the data and the labels in your data preprocessing (e.g. with torchtext), but I’m guessing you’re doing it this way because you can’t do it that way.
|
st117170
|
From what I can tell, you are packing the input indices rather than the embeddings. The output of your self.encoder is not used in your second example.
I think you need to do something like sorted_inputs = emb.gather(0, index_sorted_idx.long()).
Hope this helps!
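A minimal self-contained sketch of that fix with toy sizes (the toy batch below is already sorted by length, so the sort/gather step is omitted): embed first, then pack the embeddings rather than the raw token indices.
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

emb_layer = nn.Embedding(10, 4)
rnn = nn.GRU(4, 6, batch_first=True)

inputs = torch.tensor([[1, 2, 3, 0], [4, 5, 0, 0]])  # padded token ids, lengths descending
lengths = [3, 2]

emb = emb_layer(inputs)                              # (2, 4, 4): embeddings, not indices
packed = pack_padded_sequence(emb, lengths, batch_first=True)
out_packed, _ = rnn(packed)
out, out_lens = pad_packed_sequence(out_packed, batch_first=True)
print(out.shape, out_lens)                           # torch.Size([2, 3, 6]) tensor([3, 2])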
|
st117171
|
The code is as follows:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.local = SpatialConvolutionLocal(nInputPlane=6, nOutputPlane=16, iW=14, iH=14, kH=3, kW=3)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.local(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        fc = F.relu(self.fc2(x))
        x = self.fc3(fc)
        return x, fc
The error is:
TypeError: 'SpatialConvolutionLocal' object is not callable
Can anyone give me an example?
|
st117172
|
In my model, I need to change the data dimension (i.e. add one and remove one), so I use squeeze and unsqueeze a few times for each sample. How do squeeze and unsqueeze impact computation cost? Do they slow down my model a lot?
|
st117173
|
Neither squeeze nor unsqueeze allocate new memory, they just change the view of the tensor, so I am tempted to say it should not matter much, but I could be wrong.
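A quick check of this with the current API (a sketch; data_ptr compares the underlying storage addresses):
import torch

x = torch.randn(3, 4)
y = x.unsqueeze(0)   # view with an extra size-1 dim
z = y.squeeze(0)     # view again
print(y.shape, z.shape)                              # torch.Size([1, 3, 4]) torch.Size([3, 4])
print(x.data_ptr() == y.data_ptr() == z.data_ptr())  # True: no new memory allocated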
|
st117174
|
I need to reproduce the torch net in pytorch, especially the RNN layer part,
and this is my pytorch net.
When I run it, it produces an error message
RuntimeError: matrices expected, got 1D, 2D tensors at /b/wheel/pytorch-src/torch/lib/TH/generic/THTensorMath.c:1232
and the error code is
x7 = self.rnn(x6)
anyone helps me? Thanks!
|
st117175
|
The size of the RNN’s input should be (seq_len, batch, input_size); see the docs for details.
If your batch_size is 1, you should use x6 = self.fc(x5).unsqueeze(1).
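A small sketch of that shape fix (toy dimensions assumed):
import torch
import torch.nn as nn

rnn = nn.GRU(input_size=10, hidden_size=20)
x5 = torch.randn(7, 10)      # (seq_len, input_size): the batch dim is missing
x6 = x5.unsqueeze(1)         # (seq_len, 1, input_size) for batch_size = 1
out, h = rnn(x6)
print(out.shape)             # torch.Size([7, 1, 20])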
|
st117176
|
I use “expand” to repeat the tensor of N x D to be N x D x H x W in the module, and the following is a simplified version of the code:
import torch
from torch.autograd import Variable
class TwoLayerNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out, h, w):
        super(TwoLayerNet, self).__init__()
        self.linear1 = torch.nn.Linear(D_in, H)
        self.linear2 = torch.nn.Linear(H * h * w, D_out)

    def forward(self, x):
        h_relu = self.linear1(x).clamp(min=0)
        h_relu = torch.unsqueeze(torch.unsqueeze(h_relu, 2), 3)            # -> N x H x 1 x 1
        h_expand = h_relu.expand([64, H, h, w]).contiguous().view(64, -1)  # -> N x H x h x w
        y_pred = self.linear2(h_expand)                                    # -> N x D_out
        return y_pred

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out, h, w = 64, 1000, 100, 10, 6, 6

x = Variable(torch.randn(N, D_in), requires_grad=True)
y = Variable(torch.randn(N, D_out), requires_grad=False)

model = TwoLayerNet(D_in, H, D_out, h, w)
criterion = torch.nn.MSELoss(size_average=False)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

for t in range(500):
    y_pred = model(x)
    loss = criterion(y_pred, y)
    print(t, loss.data[0])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
The forward is OK and output: (0, 667.63525390625)
But I get the error:
Traceback (most recent call last):
  File "script_test.py", line 36, in <module>
    loss.backward()
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 151, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/__init__.py", line 98, in backward
    variables, grad_variables, retain_graph)
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/function.py", line 90, in apply
    return self._forward_cls.backward(self, *args)
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/_functions/pointwise.py", line 286, in backward
    return grad_output * mask, None
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 789, in __mul__
    return self.mul(other)
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 310, in mul
    return Mul.apply(self, other)
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/_functions/basic_ops.py", line 50, in forward
    return a.mul(b)
RuntimeError: inconsistent tensor size at ~/pytorch/torch/lib/TH/generic/THTensorMath.c:875
Can anyone help me figure out the problem?
|
st117177
|
I compiled from the master branch recently. Can you run the code without error?
|
st117178
|
I encountered a similar error where the forward pass was OK and backward failed. I compiled the latest version and can reproduce your error. My traceback is as follows, which is a little different from yours:
Traceback (most recent call last):
  File "try.py", line 31, in <module>
    loss.backward()
  File "/home/qizhe/tool/anaconda2/lib/python2.7/site-packages/torch/autograd/variable.py", line 152, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File "/home/qizhe/tool/anaconda2/lib/python2.7/site-packages/torch/autograd/__init__.py", line 98, in backward
    variables, grad_variables, retain_graph)
  File "/home/qizhe/tool/anaconda2/lib/python2.7/site-packages/torch/autograd/function.py", line 91, in apply
    return self._forward_cls.backward(self, *args)
  File "/home/qizhe/tool/anaconda2/lib/python2.7/site-packages/torch/autograd/_functions/pointwise.py", line 289, in backward
    return grad_output * mask, None
  File "/home/qizhe/tool/anaconda2/lib/python2.7/site-packages/torch/autograd/variable.py", line 802, in __mul__
    return self.mul(other)
  File "/home/qizhe/tool/anaconda2/lib/python2.7/site-packages/torch/autograd/variable.py", line 311, in mul
    return Mul.apply(self, other)
  File "/home/qizhe/tool/anaconda2/lib/python2.7/site-packages/torch/autograd/_functions/basic_ops.py", line 48, in forward
    return a.mul(b)
RuntimeError: inconsistent tensor size, expected r [1 x 100 x 6 x 6], t [1 x 100 x 6 x 6] and src [64 x 100] to have the same number of elements, but got 3600, 3600 and 6400 elements respectively at /home/qizhe/tool/pytorch/torch/lib/TH/generic/THTensorMath.c:875
|
st117179
|
@Qizhe_Xie, thanks. Yours seems to have a more detailed error message, but I cannot understand why it shows that. My question is: can any version run this code successfully?
|
st117180
|
Me neither. The stable version torch-0.1.12.post2-cp27-n also gives the same error.
|
st117181
|
I have figured out the error… The size arguments for tensor.expand shouldn’t be passed within a list, i.e., you should use h_expand = h_relu.expand(64, H, h, w).contiguous().view(64, -1) instead of h_expand = h_relu.expand([64, H, h, w]).contiguous().view(64, -1).
It’s strange that forward doesn’t throw an exception…
|
st117182
|
How can I implement the last “many to many” architecture with an LSTM in PyTorch?
Can you give me an example? Thanks!
|
st117183
|
Any kind of continued sequence prediction counts as many-to-many - common examples are language models (generating a sequence of characters/words based on previous sequence) and time series prediction (predicting e.g. future stock price, seismic data based on previous history)
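A minimal sketch of the language-model flavor (toy sizes assumed): the LSTM produces an output at every time step, and a shared linear layer maps each step to a prediction.
import torch
import torch.nn as nn

class ManyToMany(nn.Module):
    def __init__(self, vocab_size=50, embed_dim=32, hidden_dim=64):
        super(ManyToMany, self).__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x):                  # x: (batch, seq_len) token ids
        out, _ = self.lstm(self.embed(x))  # (batch, seq_len, hidden_dim)
        return self.proj(out)              # one prediction per time step

model = ManyToMany()
tokens = torch.randint(0, 50, (4, 10))
print(model(tokens).shape)                 # torch.Size([4, 10, 50])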
|
st117184
|
Thank you. Mainly, a few days ago I found the net gradient was not changing during training, but now I know what to do. Thanks!
|
st117185
|
Hi folks, quick question:
The output of the bidirectional RNNs in pytorch have the dimension of 2*H, where H is the number of hidden units.
I understand that output[:, :, :H] will represent the forward RNN, while output[:, :, H:] represents the backward RNN.
However, what is the ordering of the backward output?
Is it (o1, o2, o3, …, oH) or is it (oH, …, o3, o2, o1)?
Thanks a bunch.
|
st117186
|
It depends on your notation:
do you mean o1_backward = F(i1_forward), or o1_backward = F(i1_backward) = F(iH_forward)?
In the case o1_backward = F(i1_backward), it is (o1, o2, o3, …, oH).
|
st117187
|
Hello guys, is there any way to use a non-backprop technique in pytorch, such as Synthetic Gradients, Feedback Alignment, Direct Feedback Alignment or Indirect Feedback Alignment?
|
st117188
|
Not by default, but google shows several pytorch and luatorch projects:
- andrewliao11/dni.pytorch: Implement Decoupled Neural Interfaces using Synthetic Gradients in Pytorch
- iacolippo/Direct-Feedback-Alignment: Experiments with Direct Feedback Alignment training scheme for DNNs
- anokland/dfa-torch: Training neural networks with back-prop, feedback-alignment and direct feedback-alignment
|
st117189
|
I’ve built from latest Master and am unable to take the element-wise max of a gradient calculation. Is this a bug or am I doing something wrong? Thanks!
Fails:
import torch
from torch import autograd
from torch.autograd import Variable

a = Variable(torch.ones(5, 2), requires_grad=True)
b = a ** 2
c = b ** 2
g = autograd.grad(outputs=c, inputs=b,
                  grad_outputs=torch.ones(b.size()),
                  create_graph=True, retain_graph=True, only_inputs=True)[0]
print(g)
b = torch.FloatTensor([0])
torch.max(g, b)
Also fails:
a = Variable(torch.ones(5, 2), requires_grad=True)
b = a ** 2
c = b ** 2
g = autograd.grad(outputs=c, inputs=b,
                  grad_outputs=torch.ones(b.size()),
                  create_graph=True, retain_graph=True, only_inputs=True)[0]
print(g)
b = torch.ones(g.size())
torch.max(g, b)
The error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-91-63397f258c3b> in <module>()
7 print(g)
8 b = torch.ones(g.size())
----> 9 torch.max(g, b)
/usr/lib/python3.5/site-packages/torch/autograd/variable.py in max(self, dim, keepdim)
454 if isinstance(dim, Variable):
455 return Cmax.apply(self, dim)
--> 456 return Max.apply(self, dim, keepdim)
457
458 def min(self, dim=None, keepdim=False):
/usr/lib/python3.5/site-packages/torch/autograd/_functions/reduce.py in forward(cls, ctx, input, dim, keepdim, additional_args)
152 if additional_args:
153 args = additional_args + args
--> 154 output, indices = fn(*args)
155 ctx.save_for_backward(indices)
156 ctx.mark_non_differentiable(indices)
TypeError: max received an invalid combination of arguments - got (torch.FloatTensor, bool), but expected one of:
* no arguments
* (torch.FloatTensor other)
* (int dim)
didn't match because some of the arguments have invalid types: (torch.FloatTensor, bool)
* (int dim, bool keepdim)
|
st117190
|
Actually, mistake on my part, nevermind. --> b = Variable(torch.FloatTensor([0]))
|
st117191
|
I have set the Variable like
outputs = Variable(outputs.data, requires_grad=True, volatile=False)
but the backward still reports the error.
|
st117192
|
The error may not be caused by your variable outputs, but probably by another one hidden somewhere.
|
st117193
|
Hello,
I am trying to load a sequence labelling dataset using torchtext but running into errors.
The ultimate goal is to train an LSTM to do sequence tagging (whether each word is an entity or not).
Parts of the code are given below. This is the error I get when I try to load up my dataset:
Traceback (most recent call last):
File "/Users/salman/dev/ents_rnn/data/test_torchtext_ents.py", line 19, in <module>
train, valid, test = LabellingDataset.splits(questions, entity_labels)
File "/Users/salman/dev/ents_rnn/data/dataset_ents.py", line 42, in splits
'labelling': ('labelling', label_field)}
File "/Users/salman/anaconda3/lib/python3.6/site-packages/torchtext-0.1.1-py3.6.egg/torchtext/data.py", line 345, in splits
File "/Users/salman/anaconda3/lib/python3.6/site-packages/torchtext-0.1.1-py3.6.egg/torchtext/data.py", line 420, in __init__
File "/Users/salman/anaconda3/lib/python3.6/site-packages/torchtext-0.1.1-py3.6.egg/torchtext/data.py", line 420, in <listcomp>
File "/Users/salman/anaconda3/lib/python3.6/site-packages/torchtext-0.1.1-py3.6.egg/torchtext/data.py", line 251, in fromJSON
File "/Users/salman/anaconda3/lib/python3.6/json/__init__.py", line 354, in loads
return _default_decoder.decode(s)
File "/Users/salman/anaconda3/lib/python3.6/json/decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/Users/salman/anaconda3/lib/python3.6/json/decoder.py", line 355, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 39 (char 38)
Process finished with exit code 1
Here is the sample dataset:
{"question": "The dog ate the apple", labelling: "NOT ENT NOT NOT ENT"}
{"question": "Everybody read that book", labelling: "NOT NOT NOT ENT"}
{"question": "John lives there", labelling: "ENT NOT NOT"}
This is my dataset class:
# most basic tokenizer - split on whitespace
def my_tokenizer():
    return lambda text: [tok for tok in text.split()]

class LabellingDataset(data.ZipDataset, data.TabularDataset):
    @staticmethod
    def sort_key(ex):
        return len(ex.question)

    @classmethod
    def splits(cls, text_field, label_field, root='.',
               train='train.jsonl', validation='valid.jsonl', test='test.jsonl'):
        # path = some path
        prefix_fname = 'sequence_labelled_entities_'
        return super(LabellingDataset, cls).splits(
            os.path.join(path, prefix_fname), train, validation, test,
            format='JSON', fields={'question': ('question', text_field),
                                   'labelling': ('labelling', label_field)}
        )

    @classmethod
    def iters(cls, batch_size=32, device=0, root='.', wv_dir='.',
              wv_type=None, wv_dim='300d', **kwargs):
        TEXT = data.Field(sequential=True, tokenize=my_tokenizer())
        LABEL = data.Field(sequential=True, tokenize=my_tokenizer())
        train, val, test = cls.splits(TEXT, LABEL, root=root, **kwargs)
        TEXT.build_vocab(train, wv_dir=wv_dir, wv_type=wv_type, wv_dim=wv_dim)
        LABEL.build_vocab(train)
        return data.BucketIterator.splits(
            (train, val, test), batch_size=batch_size, device=device)
And this is how I am calling my code:
data_cache = "data_cache"
vector_cache = "vector_cache/ents_input_vectors.pt"
word_vectors = "glove.6B"
d_embed = 50
batch_size = 2
epochs = 2
gpu_device = -1
questions = data.Field(lower=True)
entity_labels = data.Field(sequential=True)
train, valid, test = LabellingDataset.splits(questions, entity_labels)
# build vocab for questions
questions.build_vocab(train, valid, test)
# load word vectors if already saved or else load it from start and save it
if os.path.isfile(vector_cache):
questions.vocab.vectors = torch.load(vector_cache)
else:
questions.vocab.load_vectors(wv_dir=data_cache, wv_type=word_vectors, wv_dim=d_embed)
os.makedirs(os.path.dirname(vector_cache), exist_ok=True)
torch.save(questions.vocab.vectors, vector_cache)
# build vocab for relations
entity_labels.build_vocab(train, valid, test)
train_iter, valid_iter, test_iter = data.BucketIterator.splits(
(train, valid, test), batch_size=batch_size, device=gpu_device)
train_iter.repeat = False
print("train iter")
for epoch in range(epochs):
train_iter.init_epoch()
for batch_idx, batch in enumerate(train_iter):
print(batch.batch_size)
print("-" * 50)
|
st117194
|
It looks like your code is fine, but the dataset isn’t quite JSON. JSON needs every string, including table keys, to be wrapped in quotes; in your example “question” is but not “labelling”. You can either write your own extension to TabularDataset to handle this not-quite-JSON or you can write a script to fix the data file separately.
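If you go the fix-the-file route, here is a quick sketch (assuming one record per line as in the sample, and that only the labelling key is unquoted; filenames are hypothetical):
import json
import re

with open("train.jsonl") as fin, open("train.fixed.jsonl", "w") as fout:
    for line in fin:
        # quote the bare "labelling" key
        fixed = re.sub(r'([,{]\s*)labelling(\s*:)', r'\1"labelling"\2', line)
        json.loads(fixed)        # raises if the line is still malformed
        fout.write(fixed)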
|
st117195
|
If I have 2 tensors like:
x = torch.rand(3, 4)
y = torch.rand(3, 7)
how can I combine them along the second axis so I get a new tensor of size 3 x 11?
I thought cat would do this, but it doesn’t, and I didn’t see which function would.
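For reference, torch.cat along dim=1 does produce the 3 x 11 result; the usual pitfall is forgetting to pass the tensors as a tuple or list:
import torch

x = torch.rand(3, 4)
y = torch.rand(3, 7)
z = torch.cat((x, y), dim=1)   # tensors go in as a tuple/list
print(z.shape)                 # torch.Size([3, 11])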
|
st117196
|
I’ve encountered a problem with GPU memory not being freed while trying to use torchsample predict_loader (https://github.com/ncullen93/torchsample/blob/master/torchsample/modules/module_trainer.py#L682):
def predict_loader(self,
                   loader,
                   cuda_device=-1,
                   verbose=1):
    prediction_list = []
    for batch_idx, batch_data in enumerate(loader):
        if not isinstance(batch_data, (tuple, list)):
            batch_data = [batch_data]
        input_batch = batch_data[0]
        if not isinstance(input_batch, (list, tuple)):
            input_batch = [input_batch]
        input_batch = [Variable(ins) for ins in input_batch]
        if cuda_device > -1:
            input_batch = [ins.cuda(cuda_device) for ins in input_batch]
        prediction_list.append(self.model(*input_batch))
    return torch.cat(prediction_list, 0)
As far as I understand, current behaviour is correct, because GPU tensors are stored in a list and not freed, so that causes memory to fill up.
I want to save predictions for multiple batches on cpu memory. I’ve tried to modify the code to save the cpu part like this:
prediction_list.append(self.model(*input_batch.cpu()))
but it didn’t help, the GPU memory usage was still rising after every batch.
Current workaround that I use looks like this, but that doesn’t look right :):
pred = torch.from_numpy(self.model(*input_batch).cpu().data.numpy())
prediction_list.append(pred)
Is there a better way to save only a cpu copy of a Tensor, or somehow tell pytorch to free the GPU memory of a Tensor?
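The usual cause is that the stored outputs are Variables that keep their graphs (and hence GPU activations) alive. A sketch with today’s API; in the 0.1.x era the equivalent was wrapping inputs with Variable(..., volatile=True) and storing .data:
import torch

@torch.no_grad()
def predict_all(model, loader, device="cuda"):
    preds = []
    for batch in loader:
        inputs = batch[0] if isinstance(batch, (tuple, list)) else batch
        preds.append(model(inputs.to(device)).cpu())  # graph-free under no_grad
    return torch.cat(preds, 0)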
|
st117197
|
pytorch might free GPU memory but it will not show up in nvidia-smi, because we use our own memory allocator.
|
st117198
|
The problem is that the script is failing with out of memory, so memory is not freed, unfortunately.
|
st117199
|
Cubbee:
pred = torch.from_numpy(self.model(*input_batch).cpu().data.numpy())
I have the same problem here. When I do .cpu() on the network output, GPU memory starts to fill up with each iteration. Did you find a cleaner solution?
|
st117200
|
it’s very strange.
I ran an official example about cosine similarity, but ‘module’ object has no attribute ‘cosine_similarity’.
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-84-29bc0b9e4a3b> in <module>()
----> 1 torch.nn.functional.cosine_similarity
AttributeError: 'module' object has no attribute 'cosine_similarity'
Is it a bug?
|
st117201
|
I checked the functional.py source.
There is no cosine_similarity function defined there. You can go to the official website, copy the code, and paste it into functional.py.
|
st117202
|
torch.initial_seed() returns 142960454572728643 here; I do not know whether this is a fixed default seed.
|
st117203
|
Solved by albanD in post #2
No it is not, I think it is based on the current time when loading torch.
|
st117204
|
I want to build my own loss function, which combines cross-entropy with a modified L2-norm loss.
criterion = nn.CrossEntropyLoss()
loss = criterion(outputs, labels)+verification_loss(fc, labels, parameters)
loss.backward()
The verification_loss function is defined as follows:
from torch.autograd import Variable

parameters = torch.rand(1)
parameters = Variable(parameters, requires_grad=True)

def verification_loss(features, labels, parameters):
    loss = 0.0
    for i in range(0, labels.data.size(0)-1):
        for j in range(i, labels.data.size(0)):
            fi = features[i, :]
            fj = features[j, :]
            if labels.data[i] == labels.data[j]:
                L2_norm = 0.5*torch.norm(fi-fj)**2
                loss += L2_norm
            else:
                L2_norm = torch.norm(fi-fj)
                diff = parameters - L2_norm
                if diff > 0:
                    loss += 0.5*(diff**2)
    return loss
parameters is a trainable parameter of this loss function, features is extracted from the internal layers, and labels holds the labels of the training samples.
ZeroDivisionError Traceback (most recent call last)
<ipython-input-156-55ecba691a76> in <module>()
15 outputs,fc = net(inputs)
16 loss = criterion(outputs, labels)+verification_loss(fc, labels)
---> 17 loss.backward()
18 optimizer.step()
19
//anaconda/lib/python2.7/site-packages/torch/autograd/variable.pyc in backward(self, gradient, retain_variables)
144 'or with gradient w.r.t. the variable')
145 gradient = self.data.new().resize_as_(self.data).fill_(1)
--> 146 self._execution_engine.run_backward((self,), (gradient,), retain_variables)
147
148 def register_hook(self, hook):
//anaconda/lib/python2.7/site-packages/torch/autograd/_functions/reduce.pyc in backward(self, grad_output)
198 input, = self.saved_tensors
199 if self.norm_type == 2:
--> 200 return input.mul(grad_output[0] / self.norm)
201 else:
202 pow = input.abs().pow(self.norm_type - 2)
ZeroDivisionError: float division by zero
how should I solve this problem, help~~
|
st117205
|
Hi,
The square root implicit in the 2-norm isn’t differentiable at 0, and that may upset the backprop when the argument is numerically a zero-norm vector. It may be a good idea to sum the squared differences yourself instead of using norm(…)**2; that is mathematically equivalent and certain to be differentiable.
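A small sketch of the difference:
import torch

fi = torch.zeros(4, requires_grad=True)
fj = torch.zeros(4)

# norm-based form: d/dx sqrt(x) blows up at 0, so the backward divides by zero
# loss = 0.5 * torch.norm(fi - fj) ** 2

# sum-of-squares form: mathematically equal, differentiable everywhere
loss = 0.5 * ((fi - fj) ** 2).sum()
loss.backward()
print(fi.grad)   # tensor([0., 0., 0., 0.]), no division by zero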
Best regards
Thomas
|
st117206
|
Many thanks. It works, but the parameters can’t be updated during the backward pass; there must be something wrong. Actually, I want to write a contrastive loss function, where the parameter is the margin. Do you have any idea about that?
|
st117207
|
I just updated to 0.1.11 and I think the api for calling torch.nn.DataParallel has changed. In the previous version, if I only had one GPU I would call the function with None passed as the device_ids. Now if I pass None I get the following error:
File "/home/jtremblay/anaconda2/envs/py3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 96, in data_parallel
output_device = device_ids[0]
TypeError: 'NoneType' object is not subscriptable
documentation link to the function: http://pytorch.org/docs/nn.html?highlight=parallel#torch.nn.DataParallel
|
st117208
|
Hey @jtremblay and @meijieru
I am trying to replicate this issue, but I can’t.
Can either of you give me a script to reproduce it?
Here is the script I used, along with its output:
import torch
from torch.autograd import Variable
import torch.nn as nn
import platform
print('Python version: ' + platform.python_version())
print(torch.__version__)
print('Trying out device_ids=None')
model = nn.Linear(20, 30).cuda()
net = torch.nn.DataParallel(model, device_ids=None)
inp = Variable(torch.randn(128,20).cuda(), requires_grad=True)
out = net(inp)
out.backward(torch.ones(out.size()).cuda())
print('Passed')
Output:
Python version: 3.6.0
0.1.11+b13b701
Trying out device_ids=None
Passed
|
st117209
|
actually it applies when using the functional version, data_parallel.
I’ve identified the issue and fixed it in https://github.com/pytorch/pytorch/pull/1187
It should be in our next release.
For now, you can do:
device_ids = list(range(torch.cuda.device_count()))
|
st117210
|
Sorry for the late reply, I am travelling. I should have provided an example or done a PR. I have been using the ids for now; it was an easy fix for upgrading my scripts. But thank you so much for your time.
|
st117211
|
(attached screenshot: image.png, 1796×686)
Hello,
I am getting a “Torch: unable to mmap memory: you tried to mmap 0GB” error. I have 12 GB RAM, 1 GPU, and the data size is 7GB. Ideally it should not give this error. I think I am making a mistake with cuda and dataparallel, but I am unable to figure it out. The attached image contains the details. Please help!!
|
st117212
|
In torch, we can use nn.Max to take the max along some dimension of the input. However, how can this be achieved in pytorch? Can anyone give some tips?
|
st117213
|
Problem solved. I found it in the legacy folder instead of the torch/nn folder in pytorch.
|
st117214
|
thank you very much for your advice. I find that wrapping legacy.nn.Max into torch.nn.Sequential often causes the error below:
TypeError: torch.legacy.nn.some_module.some_module is not a Module subclass
So if torch.legacy.nn is not compatible with pytorch regular modules, what’s the point of torch.legacy.nn? If it is, can you give me some template code on how to use it? Much obliged if you do.
|
st117215
|
Suppose I want to implement a model which views an input of size (bz * 12) x C x H x W as size bz x 12 x C x H x W, and then max-pools it along the second dimension, resulting in a tensor of size bz x C x H x W. Below is the code I implemented:
class View_And_Pool(nn.Module):
    def __init__(self):
        super(View_And_Pool, self).__init__()
        # note that in (lua) torch, dimension idx starts from 1
        self.Pool_Net = legacy_nn.Max(1)
        # only max pool layer, we will use view in forward function
        # may add softmax layer ??

    def forward(self, x):
        # view x ( (bz*12) x C x H x W ) as bz x 12 x C x H x W
        x = x.view(-1, 12, x.size()[1], x.size()[2], x.size()[3])
        # legacy nn is not callable, so we call updateOutput
        x = self.Pool_Net.updateOutput(x.data)
        # must wrap it
        return Variable(x)
The above code works okay for the forward pass; I am not sure whether the backward also works okay.
Any advice?
|
st117216
|
sorry, the above code cannot do backward operations. Maybe I should use torch.max instead.
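A sketch of the autograd-friendly version with torch.max in place of legacy nn.Max (toy sizes assumed):
import torch
import torch.nn as nn

class ViewAndPool(nn.Module):
    def forward(self, x):                            # x: (bz*12, C, H, W)
        x = x.view(-1, 12, x.size(1), x.size(2), x.size(3))
        return x.max(dim=1)[0]                       # (bz, C, H, W), differentiable

m = ViewAndPool()
inp = torch.randn(24, 3, 8, 8, requires_grad=True)   # bz = 2
m(inp).sum().backward()                              # backward now works
print(inp.grad.shape)                                # torch.Size([24, 3, 8, 8])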
|
st117217
|
I want to understand why in torch (and subsequently pytorch) we made the choice to zero gradients explicitly. Why can’t gradients be zeroed when loss.backward() is called? What scenario is served by keeping the gradients on the graph and asking the user to explicitly zero them?
|
st117218
|
gradient accumulation means we can do batch descent without having the batch fit completely in memory.
if one layer (or weight) is used multiple times, in the backward phase it has to accumulate.
If our setting is to accumulate gradients, we have to zero them at some point. And we can’t just zero them on loss.backward(), because of (1).
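A minimal sketch of point (1), gradient accumulation: several small backward passes accumulate into .grad before a single optimizer step, emulating a larger batch.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
accum_steps = 4

opt.zero_grad()
for step in range(accum_steps):
    x = torch.randn(8, 10)                      # micro-batch
    loss = model(x).pow(2).mean() / accum_steps
    loss.backward()                             # grads add up across calls
opt.step()                                      # one update for the effective batch
opt.zero_grad()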
|
st117219
|
Getting the “not a supported wheel on this platform” error when installing via pip on Ubuntu 14.04.5 64-bit with CUDA 8.0 on python 3.5.3
I’ve looked at the other issues relating to this and none offer solutions (I don’t want to use conda).
$ pip install -v http://download.pytorch.org/whl/cu80/torch-0.1.12.post2-cp35-cp35m-linux_x86_64.whl
torch-0.1.12.post2-cp35-cp35m-linux_x86_64.whl is not a supported wheel on this platform.
Exception information:
Traceback (most recent call last):
File "/home/jake/.virtualenvs/pytorch/lib/python3.5/site-packages/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/home/jake/.virtualenvs/pytorch/lib/python3.5/site-packages/pip/commands/install.py", line 257, in run
InstallRequirement.from_line(name, None))
File "/home/jake/.virtualenvs/pytorch/lib/python3.5/site-packages/pip/req.py", line 167, in from_line
raise UnsupportedWheel("%s is not a supported wheel on this platform." % wheel.filename)
pip.exceptions.UnsupportedWheel: torch-0.1.12.post2-cp35-cp35m-linux_x86_64.whl is not a supported wheel on this platform
$ uname -a
Linux hulk 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.5 LTS
Release: 14.04
Codename: trusty
$ python --version
Python 3.5.3
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Tue_Jan_10_13:22:03_CST_2017
Cuda compilation tools, release 8.0, V8.0.61
$ pip --version
pip 1.5.4 from /home/jake/.virtualenvs/pytorch/lib/python3.5/site-packages (python 3.5)
$ virtualenv --version
1.11.4
|
st117220
|
I must admit I didn’t use virtualenv but just sudo pip3’d it into localhost on Debian unstable. I try to avoid having numpy reinstalled, so I pass --no-deps, but the system pip3 liked the torch packages and put everything neatly under /usr/local/lib/python3.5.
Best regards
Thomas
|
st117221
|
Are the two marked lines 1) and 2) equivalent when using DataParallel?
net = torch.nn.DataParallel(net).cuda()
optim = torch.optim.Adam(net.parameters(), LR) # 1)
optim = torch.optim.Adam(net.module.parameters(), LR) # 2)
Or does one of them create weird synchronization issues?
Also, while saving params to file, which one is preferred when using a DataParallel module?
net = torch.nn.DataParallel(net).cuda()
...
torch.save({
'epoch': epoch,
'args': args,
'state_dict': model.state_dict(), # 1) OR
'state_dict': model.module.state_dict(), # 2)
'loss_history': loss_history,
}, model_save_filename)
In Torch, an analogue of method 2) was preferred (https://github.com/facebook/fb.resnet.torch/blob/master/checkpoints.lua#L45-L48).
|
st117222
|
I tried 1); however, I ran into this issue while loading the state_dict back:
KeyError: 'unexpected key "module.cnn.0.weight" in state_dict'
Essentially, the model is nested in the module attribute of DataParallel.
I guess it might be better to use 2) for saving?
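If you already have a checkpoint saved with method 1), a common workaround sketch is stripping the module. prefix before loading into a bare model:
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
wrapped = nn.DataParallel(model)
sd = wrapped.state_dict()                                    # keys like "module.weight"
cleaned = {k.replace("module.", "", 1): v for k, v in sd.items()}
model.load_state_dict(cleaned)                               # loads into the bare model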
|
st117223
|
Hi all,
I am using multiple CPUs to train my model on azure with MongoDB. It seems I need to open a connection to the data in each of the threads. Then I got this error.
Traceback (most recent call last):
File "main.py", line 225, in <module>
model.share_memory()
File "/home/textiq/anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 468, in share_memory
return self._apply(lambda t: t.share_memory_())
File "/home/textiq/anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 118, in _apply
module._apply(fn)
File "/home/textiq/anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 124, in _apply
param.data = fn(param.data)
File "/home/textiq/anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 468, in <lambda>
return self._apply(lambda t: t.share_memory_())
File "/home/textiq/anaconda/lib/python3.6/site-packages/torch/tensor.py", line 86, in share_memory_
self.storage().share_memory_()
File "/home/textiq/anaconda/lib/python3.6/site-packages/torch/storage.py", line 101, in share_memory_
self._share_fd_()
RuntimeError: $ Torch: unable to mmap memory: you tried to mmap 0GB. at /py/conda-bld/pytorch_1493681908901/work/torch/lib/TH/THAllocator.c:317
Could someone tell me what to do to solve this problem?
Thanks in advance.
|
st117224
|
this is weird. I wonder if Azure is somehow limiting the shared memory available to your process. Are you running docker inside azure?
Also, what’s the output of:
ipcs -lm
|
st117225
|
Thanks for your reply. I just figured out what happened. I didn’t use docker inside azure. The problem is that I mistakenly initialized an nn.Embedding in the model with a size of 0 (for example nn.Embedding(0, 300)), which generates this error on model.share_memory(). Now I have fixed it.
|
st117226
|
thanks for figuring this out. we’ll improve the error message in this situation, you can track it at https://github.com/pytorch/pytorch/issues/1878
|
st117227
|
I have the latest version of everything, but I am still getting this error.
I have installed pytorch from source.
sameer@sameer-SVF15213SNW:~$ conda update conda
Fetching package metadata .........
Solving package specifications: .
# All requested packages already installed.
# packages in environment at /home/sameer/anaconda2:
#
conda 4.3.22 py27_0
sameer@sameer-SVF15213SNW:~$ conda update pytorch
Fetching package metadata .........
Solving package specifications: .
# All requested packages already installed.
# packages in environment at /home/sameer/anaconda2:
#
pytorch 0.1.12 py27_2cu75 soumith
sameer@sameer-SVF15213SNW:~$ conda update torch
PackageNotInstalledError: Package is not installed in prefix.
prefix: /home/sameer/anaconda2
package name: torch
sameer@sameer-SVF15213SNW:~$ conda update torchvision
Fetching package metadata .........
Solving package specifications: .
# All requested packages already installed.
# packages in environment at /home/sameer/anaconda2:
#
torchvision 0.1.8 py27_2 soumith
sameer@sameer-SVF15213SNW:~$ python testmodel.py
Traceback (most recent call last):
File "testmodel.py", line 16, in <module>
model=torch.load('./firstN.th')
File "/home/sameer/anaconda2/lib/python2.7/site-packages/torch/serialization.py", line 229, in load
return _load(f, map_location, pickle_module)
File "/home/sameer/anaconda2/lib/python2.7/site-packages/torch/serialization.py", line 377, in _load
result = unpickler.load()
AttributeError: 'module' object has no attribute 'Net'
|
st117228
|
Hi,
I would like to apply a random size crop to image data which has more than 3 channels (such as RGBD or RGB+alpha).
If I apply the random size crop to each channel separately and merge the results into a 4D x H x W tensor, the size or crop position is inconsistent, since the randomization is done per channel.
On the other hand, if I make an H x W x 4D array using numpy and then apply the random size crop, the channels beyond the third seem to be eliminated.
In those cases, how should I deal with it?
Any comments would be appreciated.
Thanks.
|
st117229
|
someone sent this PR that will help you: https://github.com/pytorch/vision/pull/189
|
st117230
|
How do I change the optimizer from Adam to SGD? It seems they have different parameters. Adam decreases the loss fast, but SGD can lead to a better result, so I want to train with Adam at first and then use SGD for fine-tuning. Will the new optimizer pick up where the old one left off?
|
st117231
|
yes, it will. the new optimizer will also be given model.parameters() so it’s the same result.
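A sketch of the switch (toy model and a hypothetical switch epoch):
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
data, target = torch.randn(32, 10), torch.randn(32, 1)
criterion = nn.MSELoss()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(20):
    if epoch == 10:   # hypothetical switch point: fine-tune with SGD
        optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
    optimizer.zero_grad()
    loss = criterion(model(data), target)
    loss.backward()
    optimizer.step()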
|
st117232
|
I found that the example program given in the RL tutorial has a mistake: there is no activation function such as ReLU in the fully connected layer. So after iterations, the duration gets worse…
(http://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html#sphx-glr-intermediate-reinforcement-q-learning-py)
|
st117233
|
Hi,
I have been working to implement a simple one-layer (vanilla) RNN model using pytorch. I am a beginner in pytorch and I don’t know why my forward function is not working (my program is not calling the forward method).
Would someone please guide me or assist me in resolving this issue? I would be very thankful.
Here is my implementation:
input_dimensions = 3
output_dimensions = 1
hidden_units = 5

x = Variable(torch.randn(1, input_dimensions))
y = Variable(torch.randn(1, output_dimensions))
hidden_init = Variable(torch.randn(1, hidden_units))

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.input_dimensions = input_dimensions
        self.output_dimensions = output_dimensions
        self.Wih = nn.Parameter(torch.Tensor(input_dimensions, hidden_units))
        self.Uhh = nn.Parameter(torch.Tensor(hidden_units, hidden_units))
        self.Wy = nn.Parameter(torch.Tensor(hidden_units, output_dimensions))
        print('initialization has finished')

    def forward(self, x):
        print('forward')
        W_ih = x.mm(self.Wih)
        U_hh = hidden_init.mm(self.Uhh)
        updated_hidden = W_ih + U_hh
        output = updated_hidden.mm(self.Wy)
        print('output', output)
        return output

model = Net()
model.eval()
print(model)
|
st117234
|
you haven’t actually called the model yet.
Have you tried doing:
output = model(x)
|
st117235
|
Thank you for the reply, but would you please let me know where I should call the model (output = model(x)) in my code?
Thank you
|
st117236
|
please do one of the basic tutorials. you need to call the model with an input to get the output.
|
st117237
|
For example, I read a set of images into a CNN, and the default weight of each image is 1.
How can I reweight these images so that they have different weights?
Can DataLoader achieve this goal?
|
st117238
|
Just like in adaboost, we need to reweight the input in each iteration. The weight has the same meaning here.
|
st117239
|
Well, I see two possibilities:
- you define a custom loss function, providing weights for each sample as you like.
- you repeat samples in your training set, which will result in more frequent samples having a higher weight in your loss.
As far as I know, there is no other way to do this in PyTorch, but maybe I’m wrong…
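A sketch of the first possibility, with hypothetical per-sample weights applied to an unreduced loss:
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10, requires_grad=True)  # stand-in model outputs
targets = torch.tensor([1, 0, 3, 7])
weights = torch.tensor([1.0, 2.0, 0.5, 1.0])     # hypothetical per-sample weights

per_sample = F.cross_entropy(logits, targets, reduction="none")  # one loss per sample
loss = (per_sample * weights).mean()
loss.backward()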
|
st117240
|
This is kind of a pytorch beginner question. In pytorch I’m trying to do element-wise division with two tensors of size [5,5,3]. In numpy it works fine using np.divide(), but somehow I get an error here.
c = [torch.DoubleTensor of size 5x5x3]
input_patch = [torch.FloatTensor of size 5x5x3]
When executing:
torch.div(input_patch, c)
I get this error that I don’t understand.
line 317, in div
assert not torch.is_tensor(other)
AssertionError
Does it mean that variable c should not be a torch tensor? Casting c to also be a FloatTensor still gives the same error.
Thank you!
|
st117241
|
any of these should work: torch.div, /.
In [1]: import torch
In [2]: a = torch.randn(3, 4)
In [3]: b = torch.randn(3, 4)
In [4]: c = torch.div(a, b)
In [5]: c = a / b
It’s weird that they don’t. Can you print type(input_patch) and type(c) before running torch.div and see if there’s something amiss?
|
st117242
|
Hi, thanks for the help. I’ve already found that the problem was that self.patch_filt was not defined as a Variable, so doing
Variable(torch.from_numpy(self.patch_filt[:, :, :, 0])).float()
solved the problem
|
st117243
|
I want to re-implement this paper [https://github.com/cvondrick/videogan] in PyTorch, but I don’t have the GPUs to re-train on 5,000 hours of video. Can I convert their pre-trained weights for PyTorch somehow?
|
st117244
|
Yes, it’s possible to do it by hand.
You could serialize the models in a common format (for example h5), write the model definition in pytorch, load the weights and assign them to the modules.
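A rough sketch of that route, assuming the Torch7 weights have been exported to an HDF5 file with one dataset per layer (the file name, dataset names, and layer shapes here are all hypothetical):
import h5py
import torch
import torch.nn as nn

# hypothetical first layer of the re-implemented model
model = nn.Sequential(nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU())

with h5py.File("videogan_weights.h5", "r") as f:          # hypothetical file
    conv = model[0]
    conv.weight.data.copy_(torch.from_numpy(f["conv1/weight"][...]))
    conv.bias.data.copy_(torch.from_numpy(f["conv1/bias"][...]))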
|
st117245
|
This code may help: https://github.com/clcarwin/convert_torch_to_pytorch (convert torch t7 model to pytorch model and source).
|
st117246
|
So pytorch’s main strength is the ability to have dynamic computation graphs. I was wondering if there is a simple way of initializing or inserting new layers into the network while the network is training? For example, epochs 0 to 10 train a single linear layer (input -> 100), and then epochs 10 to 20 add in another layer (100 -> 1000) with random weights. Thus layer 1 is being finetuned whereas layer 2 starts training from scratch. I realize one way could be to initialize all layers one might expect to be inserted and then dynamically change the data flow. However, I want to dynamically initialize new layers as well. Any related documentation/code?
|
st117248
|
just create new layers on the fly behind python conditionals etc.
pytorch’s models are python code, so it’s up to you how you want to do these things. your forward function gives you full freedom to do arbitrary things.
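A sketch of growing a model mid-training (the schedule and sizes are hypothetical):
import torch
import torch.nn as nn

class GrowingNet(nn.Module):
    def __init__(self):
        super(GrowingNet, self).__init__()
        self.fc1 = nn.Linear(784, 100)
        self.fc2 = None                          # created later, on the fly

    def grow(self, optimizer):
        self.fc2 = nn.Linear(100, 1000)          # fresh random weights
        optimizer.add_param_group({"params": self.fc2.parameters()})

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        if self.fc2 is not None:                 # data flow behind a python conditional
            x = self.fc2(x)
        return x

net = GrowingNet()
opt = torch.optim.SGD(net.parameters(), lr=0.1)
print(net(torch.randn(2, 784)).shape)            # epochs 0-10: torch.Size([2, 100])
net.grow(opt)                                    # at epoch 10: add the new layer
print(net(torch.randn(2, 784)).shape)            # epochs 10-20: torch.Size([2, 1000])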
|
st117249
|
Hi,
I’m working on some sort of image regression problem where I have a net that outputs a tensor “out” of size C x H x W, with a ground truth “target” of the same size.
Before computing the loss, I would like to apply a filter with learnable weights (some smoothing) to both the output and the target. This would look like so (of course with all DataLoaders etc…):
net = model.MyNet()
MyFilter = model.MyPostProcessingFilter()
img, target = Dataset[i]
criterion = nn.MSELoss()
out = net(img)
loss = criterion(MyFilter(out), MyFilter(target))
And MyFilter looks like this so :
import torch
import torch.nn as nn
import torch.nn.functional as F
class MyPostProcessingFilter(nn.Module):
    def __init__(self):
        super(MyPostProcessingFilter, self).__init__()
        self.weights = nn.Parameter(some_init_weights)

    def forward(self, x):
        kernel = softmax2D(self.weights)
        return F.conv2d(x, kernel)
The problem here is that applying this module to both output and target doesn’t work because “nn criterions don’t compute the gradients w.r.t. targets”. I understand I can’t use parameters for both, so I tried to add the following method:
def target_fwd(self, x):
    kernel = softmax2D(self.weights.detach())
    return F.conv2d(x, kernel)
But this still doesn’t work and I get an error :
File "main.py", line 123, in main
train(train_loader, model, criterion, optim, epoch)
File "main.py", line 182, in train
loss.backward()
File "/home/bertrand/anaconda3/envs/pytorch/lib/python3.5/site-packages/torch/autograd/variable.py", line 146, in backward
self._execution_engine.run_backward((self,), (gradient,), retain_variables)
File "/home/bertrand/anaconda3/envs/pytorch/lib/python3.5/site-packages/torch/nn/_functions/thnn/auto.py", line 49, in backward
grad_output_expanded = grad_output.view(*repeat(1, grad_input.dim()))
TypeError: view received an invalid combination of arguments - got (), but expected one of:
* (int ... size)
* (torch.Size size)
At some point I even got an illegal memory access but didn’t manage to reproduce it.
What’s the correct way to achieve this ? Many thanks
|
st117250
|
write your loss function in terms of autograd operations, instead of using the built-in nn losses. then target will be differentiable.
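A sketch of that suggestion: an MSE written directly with autograd ops, so gradients flow to the filter weights through both the output branch and the target branch (shapes and the plain random kernel here are toy stand-ins):
import torch
import torch.nn.functional as F

def mse(a, b):
    return ((a - b) ** 2).mean()

out = torch.randn(1, 3, 8, 8, requires_grad=True)      # stand-in for net(img)
target = torch.randn(1, 3, 8, 8)
weights = torch.randn(3, 3, 3, 3, requires_grad=True)  # shared filter kernel

loss = mse(F.conv2d(out, weights, padding=1),
           F.conv2d(target, weights, padding=1))
loss.backward()   # gradients reach `weights` through both branches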
|
st117251
|
I’m transferring my code from torch7 to pytorch.
I used torch.serialize to convert arbitrary objects into strings and write them to a file.
Then I loaded the file and converted the strings back into objects.
But I could not find a function that does the same job in pytorch.
Is there an equivalent of torch.serialize/deserialize in pytorch?
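For what it’s worth, torch.save/torch.load accept arbitrary picklable objects and also file-like objects, which covers both the to-file and the to-string use cases (a sketch):
import io
import torch

obj = {"step": 3, "scores": torch.randn(5)}

torch.save(obj, "state.pt")              # to a file
restored = torch.load("state.pt")

buf = io.BytesIO()                       # in-memory, like torch.serialize
torch.save(obj, buf)
data = buf.getvalue()                    # the raw bytes
restored = torch.load(io.BytesIO(data))  # and back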
|
st117252
|
I am training an OCR using an RNN. I am supplying input data as word images of varying dimensions (since each word can have a different length), and the size of the class labels of each input is also not consistent, since each word can have a different number of characters.
tensor_word_dataset = WordImagesDataset(images, truths, transform=ToTensor())
dataset_loader = torch.utils.data.DataLoader(tensor_word_dataset,
                                             batch_size=16, shuffle=True)
This gives me the error:
RuntimeError: inconsistent tensor size at /py/conda-bld/pytorch_1493673470840/work/torch/lib/TH/generic/THTensorCopy.c:46
The sizes of the first few input labels and images respectively are:
torch.Size([2]) torch.Size([32, 41])
torch.Size([7]) torch.Size([32, 95])
torch.Size([2]) torch.Size([32, 38])
torch.Size([2]) torch.Size([32, 53])
torch.Size([2]) torch.Size([32, 49])
torch.Size([6]) torch.Size([32, 55])
Any suggestions as to how I should fix it? I want to shuffle the data and send it in batches instead of supplying samples one at a time.
|
st117253
|
you might want to write a custom collate_fn, instead of the default one: https://github.com/pytorch/pytorch/blob/master/torch/utils/data/dataloader.py#L82
DataLoader takes a collate_fn as one of its keyword arguments; the default value is default_collate.
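A sketch of such a collate_fn that zero-pads variable-width images and labels (hypothetical shapes following the post: images are 32 x W float tensors, labels are 1-D long tensors):
import torch

def pad_collate(batch):
    images, labels = zip(*batch)
    max_w = max(img.size(1) for img in images)
    max_l = max(len(lbl) for lbl in labels)
    padded_imgs = torch.zeros(len(batch), 32, max_w)
    padded_lbls = torch.zeros(len(batch), max_l, dtype=torch.long)
    for i, (img, lbl) in enumerate(zip(images, labels)):
        padded_imgs[i, :, :img.size(1)] = img    # left-align, zero-pad width
        padded_lbls[i, :len(lbl)] = lbl          # zero-pad label sequence
    return padded_imgs, padded_lbls

# dataset_loader = torch.utils.data.DataLoader(tensor_word_dataset, batch_size=16,
#                                              shuffle=True, collate_fn=pad_collate)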
|
st117254
|
I’ve been trying to find benchmark comparisons against Torch, but all I could find are benchmark comparisons against Tensorflow, Theano and so on. I’ve seen the devs mention their extensive benchmarks, but couldn’t find the details or code.
From what I understand, Torch and PyTorch’s speed should be similar given that they share the underlying C libraries. I’d still like to see a detailed comparison of their speed and memory usage on training both small and big networks though. If anyone’s aware of any existing benchmark comparisons, preferably with code, please share the link with me. Much appreciated.
|
st117255
|
I have three simple questions.
1. What will happen if my custom loss function is not differentiable? Will pytorch throw an error or do something else?
2. If I declare a loss variable in my custom function which will represent the final loss of the model, should I set requires_grad = True for that variable? Or does it not matter, and if so, why?
3. I have seen people sometimes write a separate layer and compute the loss in the forward function. Which approach is preferable, writing a function or a layer? Why?
I have seen relevant posts here before, but I want a clear explanation to resolve my confusion. Please help.
|
st117256
|
I tried to implement class model visualization (i.e., gradient ascent; updating the image). I have already set the random seed, but I am not sure why, every time I run let’s say 10 iterations, the output is always different. Can anyone tell me which part of the code introduces randomness?
# ------ load vgg16
vgg16 = models.vgg16(pretrained=True)
vgg16.cuda()
vgg16.eval()

# ------ random seed
rseed = 1
torch.manual_seed(rseed)
torch.cuda.manual_seed(rseed)

# ------ create empty img
img = torch.zeros(3, 227, 227).cuda()
img = img.unsqueeze(0)                      # add an extra batch dimension
image = Variable(img, requires_grad=True)   # make it a variable which requires gradient

for i in range(num_iterations):
    # ----- forward pass
    scores = vgg16(image)
    # ----- zero the gradients
    vgg16.zero_grad()
    # ----- backpropagate certain class gradient
    grad = torch.zeros(1, scores.size(1))
    grad[0, label_index] = 1
    scores.backward(grad.cuda())
    # ----- update the image data
    image.data.add_(lr * image.grad.data)
|
st117257
|
the gradients w.r.t. the weight kernels for convolution are non-deterministic in CuDNN. I think that’s your different-outputs issue.
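For what it’s worth, later PyTorch versions expose flags to force deterministic cuDNN kernels, at some speed cost (a sketch):
import torch

torch.backends.cudnn.deterministic = True   # restrict cuDNN to deterministic algorithms
torch.backends.cudnn.benchmark = False      # disable the auto-tuner's algorithm search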
|
st117258
|
Looking at the imagenet example, it looks like multi-gpu training is pretty simple. All you need to do is add
net = torch.nn.DataParallel(net, device_ids=[0, 1, 2, 3])
net.cuda()
and you are good to go. Is this correct, or do I need to do something else as well?
I did this and I am getting this error:
RuntimeError: cuda runtime error (10) : invalid device ordinal at torch/csrc/cuda/Module.cpp:84
CUDA_VISIBLE_DEVICES is properly set. Can someone tell me the fix?
Thanks,
A
|
st117259
|
you probably don’t have 4 GPUs.
you can set device_ids=None (or don’t specify it) and it’ll use all available GPUs.
|
st117260
|
Tried this: https://gist.github.com/dusty-nv/ef2b372301c00c0a9d3203e42fd83426
It works fine on TX-2.
On TX-1, I get the errors below (I tried checking out the April 18th and 19th commits but got the same results).
Should I change the compiler? To what?
/home/ubuntu/deep_libs/pytorch/torch/lib/THCS/generic/THCSTensorMath.cu(432): error: more than one instance of overloaded function "min" matches the argument list:
function "min(int, int)"
function "min(unsigned int, unsigned int)"
function "min(int, unsigned int)"
function "min(unsigned int, int)"
function "min(long long, long long)"
function "min(unsigned long long, unsigned long long)"
function "min(long long, unsigned long long)"
function "min(unsigned long long, long long)"
function "min(float, float)"
function "min(double, double)"
function "min(float, double)"
function "min(double, float)"
argument types are: (long, long)
(the same "more than one instance of overloaded function \"min\"" error block at THCSTensorMath.cu(432) is repeated for each of the 8 instances)
8 errors detected in the compilation of "/tmp/tmpxft_000037d7_00000000-7_THCSTensor.cpp1.ii".
CMake Error at THCS_generated_THCSTensor.cu.o.cmake:267 (message):
Error generating file
/home/ubuntu/deep_libs/pytorch/torch/lib/build/THCS/CMakeFiles/THCS.dir//./THCS_generated_THCSTensor.cu.o
make[2]: *** [CMakeFiles/THCS.dir/./THCS_generated_THCSTensor.cu.o] Error 1
make[1]: *** [CMakeFiles/THCS.dir/all] Error 2
make: *** [all] Error 2
|
st117261
|
if TX1 has CUDA 8.0 support, then upgrade to that. I suspect a 7.5 ARM bug maybe…
|
st117262
|
Hello,
I’m looking for a trick to visualize, through the epochs:
- loss function, accuracy
- features learned at each hidden layer
- the dynamics at each hidden neuron
- gradient and weight, backpropagation
Thank you
|
st117263
|
- loss function, accuracy: Matplotlib
- features learned at each hidden layer: https://github.com/leelabcnbc/cnnvis-pytorch/blob/master/test.ipynb
- the dynamics at each hidden neuron: Matplotlib
- gradient and weight, backpropagation: Matplotlib
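A minimal sketch of the Matplotlib route for the curves (the per-epoch numbers here are stand-ins for whatever your training loop records):
import matplotlib.pyplot as plt

losses, accs = [], []
for epoch in range(10):
    loss, acc = 1.0 / (epoch + 1), epoch / 10.0   # stand-ins for real metrics
    losses.append(loss)
    accs.append(acc)

plt.plot(losses, label="loss")
plt.plot(accs, label="accuracy")
plt.xlabel("epoch")
plt.legend()
plt.savefig("training_curves.png")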
|
st117264
|
I am working on a 2-Titan machine and using torchvision.models.resnet50 for feature extraction.
para_model = torch.nn.DataParallel(pretrained_model) is used and I expect both of my GPUs to work together. But when checking with nvidia-smi I found only gpu0 working at full load while gpu1’s memory usage remains none. However, when I test in a python shell (also with the same pretrained network), it shows that both graphics cards work well.
I am wondering if there should be some options other than just calling DataParallel.
UPDATE:
I made a simplified test script (only the pretrained model, processing random cuda tensors) and found it works as expected. So I guess it might be an issue with multiprocessing, as I start a process which loads data and passes numpy arrays to the main process via a Queue. Any way to prove this assumption?
|
st117265
|
@Varg_Nord you can check this assumption by starting the DataLoader with num_workers=0 (so that it does not use multiprocessing)
|
st117266
|
There were multiple requests for tutorials on data loaders.
I’ve written a tutorial here explaining the Dataset class and DataLoader with an example: http://pytorch.org/tutorials/beginner/data_loading_tutorial.html
Check it out and let me know any feedback.
Sasank.
|
st117267
|
Hi,
I have been trying to work through the translation tutorial (http://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html), but with batching; my code is here: https://github.com/vijendra-rana/Random/blob/master/translation_with_batch.py#L226. For this I have created some fake data and I am trying to work through it.
When I run the program I get this error:
RuntimeError: Trying to backward through the graph second time, but the buffers have already been freed. Please specify retain_variables=True when calling backward for the first time.
I am not able to figure out how I am running loss.backward() twice (to me it looks like only once), and I know we cannot run it twice without retain_variables=True. But even with retain_variables=True I get a similar error.
Thanks in advance for the help.
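For reference, a common cause of this error is state carried between batches without detaching it, so the second backward() reaches into the first batch’s freed graph. A toy sketch of the pattern and the fix:
import torch
import torch.nn as nn

rnn = nn.GRU(4, 8)
opt = torch.optim.SGD(rnn.parameters(), lr=0.1)
hidden = torch.zeros(1, 1, 8)

for step in range(3):
    x = torch.randn(5, 1, 4)
    out, hidden = rnn(x, hidden)
    loss = out.pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    hidden = hidden.detach()   # without this, step 2 raises "backward through the graph a second time"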
|