id
stringlengths 3
8
| text
stringlengths 1
115k
|
---|---|
st116568
|
Hello ,
I am building a language model with an Embedding, a LSTM and a Linear module.
I want to change a bit the output computation: the linear module will project into the embedding space from the “hidden” space. Then I want to compute output probabilities as the (euclidean) distance between embeddings and the output of the model.
Let’s consider the following (simplified) module:
def __init__(self, voc_size, embed_size, hidden_size, initial_embedding):
super(RNNLanguageModel, self).__init__()
self.hidden_size = hidden_size
self.embed_size = embed_size
self.voc_size = voc_size
self.embedder = nn.Embedding(voc_size, embed_size)
self.lstm = nn.LSTM(embed_size, hidden_size, 2, batch_first=True)
self.decoder = nn.Linear(hidden_size, embed_size)
self.pairdist = nn.PairwiseDistance(2)
def forward(self, input_seq, hidden):
embedded_seq = self.embedder(input_seq)
encoded_seq, hidden = self.lstm(embedded_seq, hidden)
decoded_seq = self.decoder(encoded_seq.contiguous().view(-1, self.hidden_size))
# Problem here: decoded_sed is 100*300 and self.embedder.weight is 10000*300
# First Try
probs = torch.stack([torch.norm(torch.add(self.embedder.weight, -dec), p=2, dim=1) for dec in decoded_seq])
# Second Try
probs = torch.cat([self.pairdist(dec.expand_as(self.embedder.weight), self.embedder.weight) for dec in decoded_seq])
# Third Try
probs = Variable(torch.cuda.FloatTensor(decoded_seq.size(0), self.voc_size))
for i, dec in enumerate(decoded_seq):
probs[i] = self.pairdist(dec.expand_as(self.embedder.weight), self.embedder.weight)
return probs.view(input_seq.size(0), input_seq.size(1), -1), hidden
I tried to get the probabilities for each word, however I get an out of memory error with this method. I tried using the PairwiseDistance function but it is not optimized for this use (no broadcasting) and I get an oom too.
I used a batch size of 20, a seq size of 50, a voc size of 10000 and an embedding size of 300.
I think what I am doing is very memory consuming, and particularly the ‘add’ that creates a new 30010000 tensor for every 2050 words in the batch…
Any idea how I could do this efficiently with pytorch ?
Update:
It appears that this is working if I detach the variable decoded_seq and self.embedder.weight (which I obviously don’t want). Why is this happening ?
probs = Variable(torch.cuda.FloatTensor(decoded_seq.size(0), self.voc_size))
for i, dec in enumerate(decoded_seq):
probs[i] = self.pairdist(dec.expand_as(self.embedder.weight), self.embedder.weight)
|
st116569
|
Hello!
I am getting a: cuda runtime error (8) : invalid device function with this code:
import torch
x = torch.rand(10, 1).cuda()
y = torch.rand(10, 300).cuda()
x.expand_as(y)
This code works fine without cuda.
Is that a bug ? What am I doing wrong ?
Note that the expand function itself is not producing the error, it is the print. However, it looks like the object returned by expand is incorrect since it produces other errors if used.
|
st116570
|
I was looking at the tutorial (http://pytorch.org/tutorials/beginner/examples_nn/two_layer_net_nn.html#sphx-glr-beginner-examples-nn-two-layer-net-nn-py 50)
and there was a very nice line were the define the model:
model = torch.nn.Sequential(
torch.nn.Linear(D_in, H),
torch.nn.ReLU(),
torch.nn.Linear(H, D_out),
)
I wanted to change it to:
model = torch.nn.Sequential(
torch.nn.Linear(D_in, H),
torch.nn.My_Activation(),
torch.nn.Linear(H, D_out),
)
say for the sake of an example that My_Activation is the element-wise quadratic (cuz why not something simple for an example?). Is it possible to write such code for custom nn modules?
|
st116571
|
Have you tried simply writing it direclty as a mathematical formula in the forward method?
|
st116572
|
I run a code similar to the dcgan example with 4 Tesla M40 with batchsize 64. The only difference for the model is I changed the 2D convolution/deconvolution to 3D. however the speed is just the same or a little slower than runing with a single gpu. anyone knows the reason?
|
st116573
|
Hello,
I wanted to define a custom softmax function, for example, with a temperature term.
I was not sure where to start. Can I just define a function, like this example? (another thread):
def trucated_gaussian(x, mean=0, std=1, min=0.1, max=0.9):
gauss = torch.exp((-(x - mean) ** 2)/(2* std ** 2))
return torch.clamp(gauss, min=min, max=max) # truncate
And use the output instead of the standard softmax? or does it need to be a nn module?
Thanks
|
st116574
|
it needs to be a class that inherits from torch.autograd.Funtion, you needs to define forward and backward
|
st116575
|
Hi guys,
I installed PyTorch in Ubuntu16.04 with conda, Python 3.6, without CUDA. I can import torch successfully, but when I write “dtype = torch.FloatTensor”, it displays " module ‘torch’ has no attribute ‘FloatTensor’ ".
Could anyone please tell me why or what can I do to use torch.FloatTensor?
Thanks !
Best,
Yongwei
|
st116576
|
I am getting the following error while doing seq to seq on characters and feeding to LSTM, and decoding to words using attention. The forward propagation is fine but while computing loss.backward() I am getting the following error.
RuntimeError: Gradients aren’t CUDA tensors
My train() function is as followed.
def train(input_batch, input_batch_length, target_batch,target_batch_length, batch_size):
# Zero gradients of both optimizers
encoderchar_optimizer.zero_grad()
encoder_optimizer.zero_grad()
decoder_optimizer.zero_grad()
encoder_input = Variable(torch.FloatTensor(len(input_batch),batch_size,500))
for ix , w in enumerate(input_batch):
w = w.contiguous().view(15,batch_size)
reshaped_input_length = [ x[ix] for x in input_batch_length] # [15 ,.. 30 times] * 128
if USE_CUDA:
w = w.cuda()
#reshaped_input_length = Variable(torch.LongTensor(reshaped_input_length )).cuda()
hidden_all , output = encoderchar(w,reshaped_input_length)
encoder_input[ix] = output.transpose(0,1).contiguous().view(batch_size,-1)
if USE_CUDA:
encoder_input = encoder_input.cuda()
temporary_target_batch_length = [15] * batch_size
#if USE_CUDA:
#target_batch_length = Variable(torch.LongTensor(target_batch_length )).cuda()
encoder_hidden_all, encoder_output = encoder(encoder_input, target_batch_length)
decoder_input = Variable(torch.LongTensor([SOS_token] * batch_size))
decoder_hidden = encoder_output
max_target_length = max(temporary_target_batch_length)
all_decoder_outputs = Variable(torch.zeros(max_target_length, batch_size, decoder.output_size))
# Move new Variables to CUDA
if USE_CUDA:
decoder_input = decoder_input.cuda()
all_decoder_outputs = all_decoder_outputs.cuda()
target_batch = target_batch.cuda()
##Added by Satish
encoder_hidden_all = encoder_hidden_all.cuda()
encoder_output = encoder_output.cuda()
decoder_hidden = decoder_hidden.cuda()
# Run through decoder one time step at a time
for t in range(max_target_length):
decoder_output, decoder_hidden, decoder_attn = decoder(
decoder_input, decoder_hidden, encoder_hidden_all
)
all_decoder_outputs[t] = decoder_output
decoder_input = target_batch[t] # Next input is current target
if USE_CUDA:
decoder_input = decoder_input.cuda()
if USE_CUDA:
all_decoder_outputs = all_decoder_outputs.cuda()
Loss calculation and backpropagation
loss = masked_cross_entropy(
all_decoder_outputs.transpose(0, 1).contiguous(), # -> batch x seq
target_batch.transpose(0, 1).contiguous(), # -> batch x seq
target_batch_length
)
loss.backward()
# Clip gradient norms
ecc = torch.nn.utils.clip_grad_norm(encoderchar.parameters(), clip)
ec = torch.nn.utils.clip_grad_norm(encoder.parameters(), clip)
dc = torch.nn.utils.clip_grad_norm(decoder.parameters(), clip)
# Update parameters with optimizers
encoderchar_optimizer.step()
encoder_optimizer.step()
decoder_optimizer.step()
return loss.data[0], ec, dc
any suggestions about why I am doing wrong?
|
st116577
|
I am getting the following error while experimenting with a simple lstm layer. The forward propagation is fine but while computing loss.backward() I am getting the following error.
raise RuntimeError(‘Gradients aren’t CUDA tensors’)
RuntimeError: Gradients aren’t CUDA tensors
any suggestions about why I am doing wrong?
|
st116578
|
the gradients are not CUDA tensors
Can you post your simple script to see what you are doing wrong?
|
st116579
|
I’m encountering the same error too. Followed is my train() function.
def train(input_batch, input_batch_length, target_batch,target_batch_length, batch_size):
# Zero gradients of both optimizers
encoderchar_optimizer.zero_grad()
encoder_optimizer.zero_grad()
decoder_optimizer.zero_grad()
encoder_input = Variable(torch.FloatTensor(len(input_batch),batch_size,500))
for ix , w in enumerate(input_batch):
w = w.contiguous().view(15,batch_size)
reshaped_input_length = [ x[ix] for x in input_batch_length] # [15 ,.. 30 times] * 128
if USE_CUDA:
w = w.cuda()
#reshaped_input_length = Variable(torch.LongTensor(reshaped_input_length )).cuda()
hidden_all , output = encoderchar(w,reshaped_input_length)
encoder_input[ix] = output.transpose(0,1).contiguous().view(batch_size,-1)
if USE_CUDA:
encoder_input = encoder_input.cuda()
temporary_target_batch_length = [15] * batch_size
#if USE_CUDA:
#target_batch_length = Variable(torch.LongTensor(target_batch_length )).cuda()
encoder_hidden_all, encoder_output = encoder(encoder_input, target_batch_length)
decoder_input = Variable(torch.LongTensor([SOS_token] * batch_size))
decoder_hidden = encoder_output
max_target_length = max(temporary_target_batch_length)
all_decoder_outputs = Variable(torch.zeros(max_target_length, batch_size, decoder.output_size))
# Move new Variables to CUDA
if USE_CUDA:
decoder_input = decoder_input.cuda()
all_decoder_outputs = all_decoder_outputs.cuda()
target_batch = target_batch.cuda()
##Added by Satish
encoder_hidden_all = encoder_hidden_all.cuda()
encoder_output = encoder_output.cuda()
decoder_hidden = decoder_hidden.cuda()
# Run through decoder one time step at a time
for t in range(max_target_length):
decoder_output, decoder_hidden, decoder_attn = decoder(
decoder_input, decoder_hidden, encoder_hidden_all
)
all_decoder_outputs[t] = decoder_output
decoder_input = target_batch[t] # Next input is current target
if USE_CUDA:
decoder_input = decoder_input.cuda()
if USE_CUDA:
all_decoder_outputs = all_decoder_outputs.cuda()
Loss calculation and backpropagation
loss = masked_cross_entropy(
all_decoder_outputs.transpose(0, 1).contiguous(), # -> batch x seq
target_batch.transpose(0, 1).contiguous(), # -> batch x seq
target_batch_length
)
loss.backward()
# Clip gradient norms
ecc = torch.nn.utils.clip_grad_norm(encoderchar.parameters(), clip)
ec = torch.nn.utils.clip_grad_norm(encoder.parameters(), clip)
dc = torch.nn.utils.clip_grad_norm(decoder.parameters(), clip)
# Update parameters with optimizers
encoderchar_optimizer.step()
encoder_optimizer.step()
decoder_optimizer.step()
return loss.data[0], ec, dc
any inputs what I’m doing wrong?
|
st116580
|
I have an LSTM which shall predict a value. However, this prediction shall only be done after several calls. I.e. something like this:
Untitled Diagram.png721×236 782 Bytes
I am providing my LSTM input of the shape [sequence length, batch size, features]. As far as I understand, the returned output will than have a prediction for each element in the sequence (output (seq_len, batch, hidden_size * num_directions)). While I’m happy to just ignore the values I dont require, I’m not sure how to do the backpropagation. More specifically, what to provide my loss function as target.
|
st116581
|
Hello,
you can easily just take the indices you are interested in.
So for example:
a = Variable(torch.randn(20),requires_grad=True)
print (a)
b = a[torch.arange(1,a.size(0),2).long()]
print (b)
b.sum().backward()
print (a.grad)
will give you alternating 0 and 1 because every second index is in the arange.
Best regards
Thomas
|
st116582
|
Hi there,
I’m implementing a UNet for binary segmentation while using Sigmoid and BCELoss. The problem is that after several iterations the network tries to predict very small values per pixel while for some regions it should predict values close to ones (for ground truth mask region). Does it give any intuition about the wrong behavior?
Besides, there exists NLLLoss2d which is used for pixel-wise loss. Currently, I’m simply ignoring this and I’musing MSELoss() directly. Should I use NLLLoss2d with Sigmoid activation layer?
Thanks
Saeed
|
st116583
|
part 1: create feedforward network, print output
pytorch network1: Create simple feedforward network, print the output
part 2: print prediciton, loss, create optimizer, calculate gradients, and run optimizer
pytorch network2: print prediction, loss, run backprop, run training optimizer
(Edit: note, this hsould be log_softmax, rather than softmax )
|
st116584
|
When I run the following code, the memory usage keeps increasing iteration by iteration.
while True:
a = Variable(torch.FloatTensor(32,16).cuda())
r = Variable(torch.FloatTensor(1).cuda())
c = r.expand(a.size())
If I use r.exapnd([32, 16]), memory usage does not increase.
Did I use expand() incorrectly? Does anyone have the same issue?
Thank you!
|
st116585
|
This does not make memory usage increasing:
>>> while True:
... a = Variable(torch.FloatTensor(32,16).cuda())
... s1,s2 = int(a.size()[0]),int(a.size()[1])
... r = Variable(torch.FloatTensor(1).cuda())
... c = r.expand((s1,s2))
I tried and I have the same bug when I direclty use torch.size().
I have no idea what causes this thing…
|
st116586
|
I’m trying to predict one step ahead, given three variables, predict the next three. My data is a list of lists like this:
[[0.700915, 0.72822, 1.0389610389610389]
[0.700015, 0.728785, 1.0410864778482825]
[0.70193, 0.72722, 1.0360332359462092]
[0.70165, 0.727505, 1.0368442608078055]
[0.70768, 0.734015, 1.0372099053545962]
[0.703005, 0.72807, 1.0356523315123114]
[0.70084, 0.72651, 1.0366239232068997]
[0.702125, 0.72727, 1.0358132428723101]
[0.702295, 0.72743, 1.0357917851353522]
[0.701025, 0.726365, 1.0361513195387053]]
I’m attempting to use the basic RNN example, but can’t even convert a single line to a tensor. Tensors just make no sense to me.
Using:
class RNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(RNN, self).__init__()
self.hidden_size = hidden_size
self.i2h = nn.Linear(input_size + hidden_size, hidden_size)
self.i2o = nn.Linear(input_size + hidden_size, output_size)
self.softmax = nn.LogSoftmax()
def forward(self, input, hidden):
combined = torch.cat((input, hidden), 1)
hidden = self.i2h(combined)
output = self.i2o(combined)
output = self.softmax(output)
return output, hidden
def initHidden(self):
return Variable(torch.zeros(1, self.hidden_size))
def nnModel(dataset):
for row in dataset[:10]:
print(row[1:])
n_hidden = 56
rnn = RNN(3, n_hidden, 2)
for row in dataset:
dRow = np.asarray(row[1:])
input = Variable(torch.from_numpy(dRow))
hidden = Variable(torch.zeros(1, n_hidden))
output, next_hidden = rnn(input, hidden)
print("Output: ", output)
break
Gives me this error.
Traceback (most recent call last):
File "fxDataset2.py", line 181, in <module>
nnModel(dataset)
File "fxDataset2.py", line 156, in nnModel
output, next_hidden = rnn(input, hidden)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "fxDataset2.py", line 134, in forward
combined = torch.cat((input, hidden), 1)
File "/usr/local/lib/python3.5/dist-packages/torch/autograd/variable.py", line 841, in cat
return Concat(dim)(*iterable)
File "/usr/local/lib/python3.5/dist-packages/torch/autograd/_functions/tensor.py", line 309, in forward
self.input_sizes = [i.size(self.dim) for i in inputs]
File "/usr/local/lib/python3.5/dist-packages/torch/autograd/_functions/tensor.py", line 309, in <listcomp>
self.input_sizes = [i.size(self.dim) for i in inputs]
RuntimeError: dimension 1 out of range of 1D tensor at /b/wheel/pytorch-src/torch/lib/TH/generic/THTensor.c:24
Are there any simpler tutorials for PyTorch, or someone who can tell me how to turn a list of three numbers into a tensor?
|
st116587
|
How about “Train pytorch rnn to predict a sequence of integers” ? :
Train pytorch rnn to predict a sequence of integers
|
st116588
|
with one input data x1 and one label y we can do in this way:
total_nb = 100
x1_np = np.random.randn(total_nb, 20)
x2_np = np.random.randn(total_nb, 30)
y_np = np.random.randn(total_nb, 10)
x1 = torch.from_numpy(x1_np)
x2 = torch.from_numpy(x2_np)
y = torch.from_numpy(y_np)
dataset = Data.TensorDataset(data_tensor=x1, target_tensor=y)
data_loader = Data.DataLoader(dataset, batch_size=10, shuffle=True)
for i, j in data_loader:
print(i.size(), j.size())
But how to do with two input data x1,x2 and one label y:
dataset = Data.TensorDataset(data_tensor=(x1, x2), target_tensor=y)
In this way is wrong…
|
st116589
|
Is it useful by using
np.hstack((x1_np,x2_np))
befor tranporting to tensor ?
Otherwise, we can rewrite the class to dataset.
|
st116590
|
You can have a look at either torch vision source code for ImageFolder dataset or the data loading tutorial: http://pytorch.org/tutorials/beginner/data_loading_tutorial.html 506
|
st116591
|
Dear Madam/Sir,
Recently, I am training a model with nn.Embedding, nn.LSTM, nn.Linear modules.
DataParallel is also leveraged for training efficiency. However, when multiple GPUs are utilized (with the same batch size as the single GPU running), the model is hard to train, i.e. the loss decreases very slowly, and the final model cannot work.
For optimizer, we directly use the codes like in imageNet example:
optimizer.zero_grad()
loss.backward()
optimizer.step()
Thank you for helping me.
Best,
Yikang
|
st116592
|
I finally got the reason for the training error:
As I only use the pack_padded_sequence() in my model without pad_packed_sequence(), then after gathering the results, the arrangement of the elements may be different from the original one.
Therefore, pack and unpack should be used within the model at the same time. In addition, the dimensions of unpacked variables should be of the same order as the input, so that the sizes of different sub-mini-batches are the same.
However, there is another question:
If the max length of sequence is smaller than the corresponding dimension (dim T). Then unpack(pack(sequence)) may change the size of dimension T. Is there any efficient way to solve the problem?
Best,
Yikang
|
st116593
|
Hi Everyone. I am new to pytorch or torch. I am using CrossEntropyLoss() but its giving error as “RuntimeError: multi-target not supported at…”. I have used .mat file to load the data. What could be the reason for this error? Thanks!!
|
st116594
|
Can you show a part of the code (how do you build the targets and how do you use the loss)?
|
st116595
|
Thanks Alexis for replying. I got it. I was using a vector to denote the class i.e. [0 1 0 0] for representing class ‘1’. I found out that it has to be scalar.
|
st116596
|
I would like to extend torch._C._functions.ConvNd. Can someone tell me where (which files) to get started? Appreciate any help!
Basically, I would like to fix two things:
(1) Allow kernel size > input size
(2) Convolute over individual channels separately, instead of summing up the convolution results of all channels as it is now.
|
st116597
|
It’s in https://github.com/pytorch/pytorch/blob/master/torch/csrc/autograd/functions/convolution.cpp 108
However, extending it might be difficult. I suspect there may be easier ways to achieve your goals:
Narrow the weight tensor if the input is smaller than the kernel size
Use groups, and set groups=channels http://pytorch.org/docs/master/nn.html#torch.nn.Conv2d 40
|
st116598
|
Following your suggestions, I tried the following and ran into the following problem. Would you please help have a look. Thanks a lot!
x = torch.nn.Conv2d(10, 10, [3, 3], groups=10) # in_channels = 10, out_channels = 10
img = torch.autograd.Variable( torch.randn(10, 5, 5) ) # in_channels = 10
o = x(img)
Traceback (most recent call last):
File "<pyshell#47>", line 1, in <module>
o = x(img);
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 237, in forward
self.padding, self.dilation, self.groups)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/nn/functional.py", line 39, in conv2d
return f(input, weight, bias)
RuntimeError: expected 3D tensor
|
st116599
|
The error is because your input is 3D (channels x h x w). You need a batch dimension (batch x channels x h x w), although it can be 1.
x = torch.nn.Conv2d(10, 10, [3, 3], groups=10) # in_channels = 10, out_channels = 10
img = torch.autograd.Variable( torch.randn(1, 10, 5, 5) ) # in_channels = 10
o = x(img)
|
st116600
|
Hello all,
I’m separating Resnet into an ‘activations’ module and an ‘imagenet output’ module for a low memory application that would otherwise need duplicate models, and I notice that when I separate the two, the results of applying them sequentially are slightly different. I’m wondering if someone could shed some light as to what is happening beneath the surface or if my technique for splitting is somehow inaccurate.
from torchvision.models import resnet18
class skip(nn.Module):
def __init__(self):
super(skip,self).__init__()
def forward(self, x):
return x
def split_resnet():
resnet = resnet18(pretrained=True)
fc = nn.Linear(512,1000)
fc.load_state_dict(resnet.fc.state_dict())
resnet.fc = skip()
return resnet, fc
inp = Var(torch.randn(1,3,256,256))
res1 = resnet18(pretrained=True)
res2, fc = split_resnet()
print(res1(inp))
print(fc(res2(inp))
Which yeilds:
Variable containing:
-0.3892 -0.2363 -0.7533 … -0.1491 1.8596 0.9131
[torch.FloatTensor of size 1x1000]
Variable containing:
-0.4197 -0.2726 -0.7471 … -0.1189 1.8350 0.9147
[torch.FloatTensor of size 1x1000]
|
st116601
|
Hello,
I executed your code multiple times and I found both results to be the same.
I compared every entry of both output.
Variable containing:
-0.6792 -0.0576 -0.5130 ... -0.5083 1.4377 0.9571
[torch.FloatTensor of size 1x1000]
Variable containing:
-0.6792 -0.0576 -0.5130 ... -0.5083 1.4377 0.9571
[torch.FloatTensor of size 1x1000]
are split-resnet and resnet equivalent: True
So the only pointer I have for you is checking the version of your torch, mine is 0.1.12_2.
|
st116602
|
Hmm. I am using the same version.
Found it: I made an edit to the code in my first post,
fc.load_state_dict(resnet.fc.state_dict())
used to be
fc.weight = resnet.fc.weight
and I must have still been using the old function in the notebook. Learning opportunity, thank you anyways.
|
st116603
|
What does the error:
RuntimeError: already counted a million dimensions in a given sequence. Most likely your items are also sequences and there's no way to infer how many dimension should the tensor have
mean and how does one fix it?
I only have:
W = Variable(w_init, requires_grad=True)
W_avg = Variable(torch.FloatTensor(W).type(dtype), requires_grad=False)
but I hope to make this hope more generally helpful beyond just my very (probably newbie) question.
|
st116604
|
Please post your full code and the full error stack to know which one of both lines of code
gives the error.
Where do you declare and initialize “w_init” ?
|
st116605
|
I come from tensorflow and I know that tf.get_variable() (https://www.tensorflow.org/programmers_guide/variable_scope 18) is very useful because then I don’t have to have giant list of things were I define variables and instead they are included in the graph as I create more variables. i.e. I avoid:
D,H_l1 = 2,1
w_init = torch.zeros(H_l1,D).type(dtype)
W_l1 = Variable(w_init, requires_grad=True)
D,H_l2 = 2,1
w_init = torch.zeros(H_l2,D).type(dtype)
W_l2 = Variable(w_init, requires_grad=True)
what is the equivalent way to deal with this in pytorch?
|
st116606
|
Variable in tensorflow and variable in pytorch are different things. Variable in tensorflow is setting up a tensor and giving name and such.
When you declare a Variable in pytorch u already have a tensor made of set shape and you are just wrapping that tensor with pytorch wrapper so now you can auto compute gradients easily when backproping
|
st116607
|
but sometimes the networks are gigantic (resnet, vgg or something crazy new I want to try with LSTMs…maybe), do I have to really write 100 variables out in one script? Or maybe Im thinking about pytorch totally wrong but bare with me Im new to this
|
st116608
|
Not at all
The main utility of a dynamic computation graph is that it allows you to process complex inputs and outputs, without worrying to convert every batch of input into a tensor. Tensorflow is define and run and as such write all placeholders beforehand. Pytorch define by run so graphs are created as you go
|
st116609
|
I would suggest not trying to compare tensorflow to pytorch. I assume you have built neural networks from scratch before and if so use that knowledge of how to, to how that can be done in pytorch.
The tutorials here will help a lot too:
http://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html 351
I hope that helps:grin:
|
st116610
|
Pytorch has the function to find the global maximum value, or the maximum values and indices along given dimension. How can I find the indices of the max in an N-dimensional variable? (for example if N=3 , the indices corresponding to the max value, like (3,7,10) and use it in indexing another tensor)
When I tried
max_val1, idx1 = torch.max(my_tensor,0)
max_val2, max_idx2 = torch.max(max_val1, 0)
max_idx1 = idx1[max_idx2]
indexing a tensor with an object of type LongTensor. The only supported types are integers, slices, numpy scalars and torch.LongTensor or torch.ByteTensor as the only argument.
is the error. And it is not flexible for changing N. Is there any more direct way? How can I solve it?
|
st116611
|
Hi,
one way to solve it is to flatten the tensor, take the maximum index along dimension 0 and then unravel the index 171, either using numpy or your own logic (probably you will come up with something less clumsy ):
rawmaxidx = mytensor.view(-1).min(0)[1]
idx = []
for adim in list(mytensor.size())[::-1]:
idx.append(rawmaxidx%adim)
rawmaxidx = rawmaxidx / adim
idx = torch.cat(idx)
(Note that pytorch / on LongTensor is similar to python2 / or python3 // for ints.
Best regards
Thomas
|
st116612
|
Based on above question related to indexes, i have a question too.
I get max and index for the max, from 1xn vector containing LongTensor values. index obtained is also of LongTensor type. I want to further use this to obtain a value from dictionary.
dict = {1: ‘hello’ , 2: ‘world’}
how to mention the int in dict using LongTensor.
dict[index] gives error.
Please let me know
Thanks
|
st116613
|
If you have a tensor t, an index t [0] (with as many dimensions as t is a plain python value. For a Variable v use v.data[0].
Best regards
Thomas
|
st116614
|
Thank you, both of your suggestions worked for me, I modified your code to find indices first.
|
st116615
|
but I have a question, why cant we directly put v.data and get a single element instead? It is a very basic operation
|
st116616
|
Variable v is say wrapper of tensor t than v.data equals t. Calling .data just reveals the tensor that Variable wraps around
|
st116617
|
Hey all,
I’d like to use the pre-trained Resnet in all of it’s glory, but I’m having a hard time finding the labels corresponding to each output. Could someone point me in the right direction?
Also, the outputs are all linear. I’m assuming the pre-trained model needs a softmax applied to the output?
|
st116618
|
First the pre-trained Resnet has been trained on ImageNet database 69 which has 1000 categories 251.
So the output is a vector containing 1000 float or double scalar values. Each value represents the score of the image belonging to the category matching its index. Therefore you don’t need to apply the softmax function on the output as you’re not training the model, you’re using it to make predictions. You just look for the index that holds the max score and this one corresponds to the category predicted by the model.
Example:
Suppose we have 3 categories instead of 1000 and extension is straightforward.
Cateogories are : cat=0, dog=1, puma=2. Here the output is a 3 scalar float values vector corresponding to the score of each category. So an output of outp = [2.3, 7.22, -4.25]
means score(Image is cat) = 2.3, score(Image is dog) = 7.22 and score(Image is puma) = -4.25.
Index of the max score is 1 in outp. So the model predicts that the image is a dog.
The remaining thing is what are the category names of each one of 0, …, 999 indexes of the ImageNet. You can find them here ImageNet index - class name 449
Hopefully it’s clear.
|
st116619
|
I forgot to mention how to get the label predictions from the output or scores. It’s very easy. Note that output from a model is always a tensor that has a max method:
output = pretrained_resnet101(input)
predictions = output.max(1)
output has size [N x1000 x 1 x 1] if you don’t modify the Resnet and N is the batch size.
We want indexes of max scores over categories which are on the dimension with index 1 so we take the max over the dimension with index 1.
|
st116620
|
is
a = x.mm(W)**2
really all one needs? Its so simply that I just want to double check it.
|
st116621
|
I would like to use the running average of parameters instead of using the parameters from the training directly at the test session.
To do this, I initialized the running average parameters from the network as
avg_param = torch.cat([param.view(-1) for param in model.parameters()],0)
Then, I performed the running average at each training iteration as
avg_param = 0.9*avg_param + 0.1*torch.cat([param.data.view(-1) for param in model.parameters()],0)
Finally, at the test session, I loaded the parameters as
i = 0
for param in model.parameters():
param = avg_param[i:i+param.nelement()].resize(*param.size())
i = i+param.nelement()
Is this process correct ?
|
st116622
|
There are a few problems I can see:
You should back up param.data.view(-1) in the first point. You’re not going to backprop to the average.
Content of the tensor after resize is unspecified! You can get arbitrary garbage in your tensor. Use .view to change the shape if you need to.
You’re only overwriting the local reference to param, it doesn’t change your model at all. It’s as if you did that: a = model.linear.weight; a = Variable(...)
You never back up the original parameters of your model - they are overwrittien by the average for the test and you won’t restore them to the previous state. Not sure if that’s what you wanted.
This would be correct:
def flatten_params():
return torch.cat([param.data.view(-1) for param in model.parameters()], 0)
def load_params(flattened):
offset = 0
for param in model.parameters():
param.data.copy_(flattened[offset:offset + param.nelement()]).view(param.size())
offset += param.nelement()
avg_param = flatten_params() # initialize
def train():
...
avg_param = 0.9 * avg_param + 0.1 * flatten_params()
def test():
original_param = flatten_params() # save current params
load_params(avg_param) # load the average
...
load_params(original_param) # restore parameters
|
st116623
|
Thanks for your reply.
Before employing the running average, my code occupied only half of video memory.
But, when I tried as you suggested, it didn’t proceed after 2 iterations due to ‘out of memory’.
The error is shown below.
CompleteTHCudaCheck FAIL file=/data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.9_1487349287443/work/torch/lib/THC/generic/THCStorage.cu line=66 error=2 : out of memory
Traceback (most recent call last):
File "inception_ae_resume_ra.py", line 251, in <module>
train_iter_loss, avg_param = train(config, epoch, avg_param)
File "inception_ae_resume_ra.py", line 166, in train
avg_param = 0.9*avg_param + 0.1*flatten_params()
File "/home/sypark/anaconda2/envs/py36/lib/python3.6/site-packages/torch/tensor.py", line 320, in __mul__
return self.mul(other)
RuntimeError: cuda runtime error (2) : out of memory at /data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.9_1487349287443/work/torch/lib/THC/generic/THCStorage.cu:66
|
st116624
|
Your model must have a lot of parameters. Instead of flattening them into a single big tensor, you process it in parts:
from copy import deepcopy
avg_param = deepcopy(list(p.data for p in model.parameters()))
def train():
...
for p, avg_p in zip(model.parameters(), avg_param):
avg_p.mul_(0.9).add_(0.1, p.data)
Not sure if you’ll manage to fit another copy of the params in memory, so you can restore them after testing.
|
st116625
|
Thanks for your reply.
As suggested, the train works well without memory issue.
But, in load_params(avg_param) at the test session, I got the following error.
param.data.copy_(flattened[offset:offset + param.nelement()]).view(param.size())
RuntimeError: copy from list to FloatTensor isn't implemented
I think the load_param function should be modified due to the list.
|
st116626
|
I modified the functions as below.
def load_params(avg_param):
for p, avg_p in zip(model.parameters(), avg_param):
p.data = deepcopy(avg_p)
def flatten_params():
flatten = deepcopy(list(p.data for p in model.parameters()))
return flatten
def load_params(flattened):
for p, avg_p in zip(model.parameters(), flattened):
p.data = deepcopy(avg_p)
Currently, it works without error.
But, I am not sure it works as I intended.
|
st116627
|
This would be better:
def load_params(flattened):
for p, avg_p in zip(model.parameters(), flattened):
p.data.copy_(avg_p)
Also, note that they’re no longer flattened, so you might want to change the name.
|
st116628
|
why didn’t u wrap avg_param is a Variable with autograd set to false? As in something like:
W = Variable(w_init, requires_grad=True)
W_avg = Variable(torch.FloatTensor(W).type(dtype), requires_grad=False)
for i in range(nb_iterations):
#some GD stuff...
W_avg = (1/nb_iter)*W + W_avg
|
st116629
|
I have implemented Siamese network for my project as the following:
import torchvision.models as models
class Siamese(nn.Module):
def __init__(self, num_output=1):
super(Siamese, self).__init__()
self.head_model = models.__dict__['resnet18'](num_classes=1024)
self.tail_model = nn.Sequential(
nn.Linear(1024, 1024),
nn.BatchNorm1d(1024),
nn.ReLU(),
nn.Linear(1024, num_output)
)
for m in self.modules():
if isinstance(m, nn.ReLU):
m.inplace = False
def forward(self, input1, input2):
feat1 = self.head_model(input1)
feat2 = self.head_model(input2)
feat_diff = (feat1 - feat2) ** 2
output = self.tail_model(feat_diff)
return output
[resolved, see updates below] When I train it with single gpu, it crashes with the following output. (Similar to How to implement siamese network?)
*** Error in `python': free(): invalid pointer: 0x00007f6b83d25ac0 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7f6bbff5b7e5]
/lib/x86_64-linux-gnu/libc.so.6(+0x8037a)[0x7f6bbff6437a]
/lib/x86_64-linux-gnu/libc.so.6(cfree+0x4c)[0x7f6bbff6853c]
/usr/lib/x86_64-linux-gnu/libstdc++.so.6(_ZNSt15basic_stringbufIcSt11char_traitsIcESaIcEE8overflowEi+0x181)[0x7f6bbc523fa1]
/usr/lib/x86_64-linux-gnu/libstdc++.so.6(_ZNSt15basic_streambufIcSt11char_traitsIcEE6xsputnEPKcl+0x89)[0x7f6bbc57ae79]
/home/yuluo/.local/lib/python2.7/site-packages/torch/lib/libshm.so(_ZSt16__ostream_insertIcSt11char_traitsIcEERSt13basic_ostreamIT_T0_ES6_PKS3_l+0x1c5)[0x7f6b83a9a235]
/home/yuluo/.local/lib/python2.7/site-packages/torch/_C.so(+0x5d2742)[0x7f6b842fb742]
/home/yuluo/.local/lib/python2.7/site-packages/torch/_C.so(+0x5d33ae)[0x7f6b842fc3ae]
/home/yuluo/.local/lib/python2.7/site-packages/torch/_C.so(_ZN5torch2nn33SpatialConvolutionMM_updateOutputEPN4thpp6TensorES3_S3_S3_S3_S3_iiiiii+0xb3)[0x7f6b843100a3]
/home/yuluo/.local/lib/python2.7/site-packages/torch/_C.so(+0x5cae27)[0x7f6b842f3e27]
/home/yuluo/.local/lib/python2.7/site-packages/torch/_C.so(_ZN5torch8autograd11ConvForward5applyERKSt6vectorISt10shared_ptrINS0_8VariableEESaIS5_EE+0x17bf)[0x7f6b842f855f]
/home/yuluo/.local/lib/python2.7/site-packages/torch/_C.so(+0x5c181b)[0x7f6b842ea81b]
python(PyObject_Call+0x43)[0x4b0cb3]
python(PyEval_EvalFrameEx+0x5faf)[0x4c9faf]
python(PyEval_EvalCodeEx+0x255)[0x4c2765]
python(PyEval_EvalFrameEx+0x6099)[0x4ca099]
python(PyEval_EvalCodeEx+0x255)[0x4c2765]
python[0x4de8b8]
python(PyObject_Call+0x43)[0x4b0cb3]
python(PyEval_EvalFrameEx+0x2ad1)[0x4c6ad1]
python(PyEval_EvalCodeEx+0x255)[0x4c2765]
python[0x4de6fe]
python(PyObject_Call+0x43)[0x4b0cb3]
python[0x4f492e]
python(PyObject_Call+0x43)[0x4b0cb3]
python[0x553187]
python(PyObject_Call+0x43)[0x4b0cb3]
python(PyEval_EvalFrameEx+0x5faf)[0x4c9faf]
python(PyEval_EvalCodeEx+0x255)[0x4c2765]
python[0x4de8b8]
python(PyObject_Call+0x43)[0x4b0cb3]
python(PyEval_EvalFrameEx+0x2ad1)[0x4c6ad1]
python(PyEval_EvalCodeEx+0x255)[0x4c2765]
python[0x4de6fe]
python(PyObject_Call+0x43)[0x4b0cb3]
python[0x4f492e]
python(PyObject_Call+0x43)[0x4b0cb3]
python[0x553187]
python(PyObject_Call+0x43)[0x4b0cb3]
python(PyEval_EvalFrameEx+0x5faf)[0x4c9faf]
python(PyEval_EvalFrameEx+0x5d8f)[0x4c9d8f]
python(PyEval_EvalFrameEx+0x5d8f)[0x4c9d8f]
python(PyEval_EvalFrameEx+0x5d8f)[0x4c9d8f]
python(PyEval_EvalCodeEx+0x255)[0x4c2765]
python(PyEval_EvalCode+0x19)[0x4c2509]
python[0x4f1def]
python(PyRun_FileExFlags+0x82)[0x4ec652]
python(PyRun_SimpleFileExFlags+0x191)[0x4eae31]
python(Py_Main+0x68a)[0x49e14a]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7f6bbff04830]
python(_start+0x29)[0x49d9d9]
======= Memory map: ========
00400000-006ea000 r-xp 00000000 08:02 7472346 /usr/bin/python2.7
008e9000-008eb000 r--p 002e9000 08:02 7472346 /usr/bin/python2.7
008eb000-00962000 rw-p 002eb000 08:02 7472346 /usr/bin/python2.7
00962000-00985000 rw-p 00000000 00:00 0
00d77000-1fffe5000 rw-p 00000000 00:00 0 [heap]
200000000-200200000 rw-s 70e29b000 00:06 493 /dev/nvidiactl
200200000-200400000 ---p 00000000 00:00 0
200400000-200404000 rw-s 696146000 00:06 493 /dev/nvidiactl
200404000-200600000 ---p 00000000 00:00 0
200600000-200a00000 rw-s 122f97000 00:06 493 /dev/nvidiactl
200a00000-201800000 ---p 00000000 00:00 0
201800000-201804000 rw-s 5ac93a000 00:06 493 /dev/nvidiactl
201804000-201a00000 ---p 00000000 00:00 0
201a00000-201e00000 rw-s 14847b000 00:06 493 /dev/nvidiactl
201e00000-202c00000 ---p 00000000 00:00 0
202c00000-202c04000 rw-s 82be5e000 00:06 493 /dev/nvidiactl
202c04000-202e00000 ---p 00000000 00:00 0
202e00000-203200000 rw-s 5ca437000 00:06 493 /dev/nvidiactl
203200000-204000000 ---p 00000000 00:00 0
204000000-204004000 rw-s 559932000 00:06 493 /dev/nvidiactl
204004000-204200000 ---p 00000000 00:00 0
204200000-204600000 rw-s 711697000 00:06 493 /dev/nvidiactl
204600000-205400000 ---p 00000000 00:00 0
205400000-205404000 rw-s 5bc316000 00:06 493 /dev/nvidiactl
205404000-205600000 ---p 00000000 00:00 0
205600000-205a00000 rw-s 5d46b7000 00:06 493 /dev/nvidiactl
205a00000-206800000 ---p 00000000 00:00 0
206800000-206804000 rw-s 83998e000 00:06 493 /dev/nvidiactl
206804000-206a00000 ---p 00000000 00:00 0
206a00000-206e00000 rw-s 14af77000 00:06 493 /dev/nvidiactl
206e00000-207c00000 ---p 00000000 00:00 0
207c00000-207c04000 rw-s 6bb1ee000 00:06 493 /dev/nvidiactl
207c04000-207e00000 ---p 00000000 00:00 0
207e00000-208200000 rw-s 78418b000 00:06 493 /dev/nvidiactl
208200000-209000000 ---p 00000000 00:00 0
209000000-209004000 rw-s 10aeae000 00:06 493 /dev/nvidiactl
209004000-209200000 ---p 00000000 00:00 0
209200000-209600000 rw-s 152ec7000 00:06 493 /dev/nvidiactl
209600000-20a400000 ---p 00000000 00:00 0
20a400000-20a404000 rw-s 15ef63000 00:06 493 /dev/nvidiactl
20a404000-20a600000 ---p 00000000 00:00 0
20a600000-20aa00000 rw-s 15f0ec000 00:06 493 /dev/nvidiactl
20aa00000-20aa04000 rw-s 11b073000 00:06 493 /dev/nvidiactl
20aa04000-20ac00000 ---p 00000000 00:00 0
20ac00000-20b000000 rw-s 6c7b00000 00:06 493 /dev/nvidiactl
20b000000-20b004000 rw-s 130997000 00:06 493 /dev/nvidiactl
20b004000-20b200000 ---p 00000000 00:00 0
20b200000-20b600000 rw-s 7786ec000 00:06 493 /dev/nvidiactl
20b600000-20b604000 rw-s 156043000 00:06 493 /dev/nvidiactl
20b604000-20b800000 ---p 00000000 00:00 0
20b800000-20bc00000 rw-s 6c4ef0000 00:06 493 /dev/nvidiactl
20bc00000-20bc04000 rw-s 6595af000 00:06 493 /dev/nvidiactl
20bc04000-20be00000 ---p 00000000 00:00 0
20be00000-20c200000 rw-s 711444000 00:06 493 /dev/nvidiactl
20c200000-20c204000 rw-s 775ed7000 00:06 493 /dev/nvidiactl
20c204000-20c400000 ---p 00000000 00:00 0
20c400000-20c800000 rw-s 521f0c000 00:06 493 /dev/nvidiactl
20c800000-20c804000 rw-s 6c4f3f000 00:06 493 /dev/nvidiactl
20c804000-20ca00000 ---p 00000000 00:00 0
20ca00000-20ce00000 rw-s 7402cd000 00:06 493 /dev/nvidiactl
20ce00000-20ce04000 rw-s 135948000 00:06 493 /dev/nvidiactl
20ce04000-20d000000 ---p 00000000 00:00 0
20d000000-20d400000 rw-s 7b1e05000 00:06 493 /dev/nvidiactl
20d400000-400200000 ---p 00000000 00:00 0
10000000000-10304200000 ---p 00000000 00:00 0
10304200000-10304400000 rw-s 6f26a4000 00:06 493 /dev/nvidiactl
10304400000-10304600000 rw-s 13530f000 00:06 493 /dev/nvidiactl
10304600000-10304800000 rw-s 43c0b4000 00:06 493 /dev/nvidiactl
10304800000-10304ad6000 rw-s 73e694000 00:06 493 /dev/nvidiactl
10304ad6000-1030e200000 ---p 00000000 00:00 0
1030e200000-1030e400000 rw-s 00000000 00:05 3431410 /dev/zero (deleted)
1030e400000-10315a00000 ---p 00000000 00:00 0
10315a00000-1031a380000 rw-s 00000000 00:05 3437954 /dev/zero (deleted)
1031a380000-1031a400000 ---p 00000000 00:00 0
1031a400000-1031ed80000 rw-s 00000000 00:05 3437955 /dev/zero (deleted)
1031ed80000-1031ee00000 ---p 00000000 00:00 0
1031ee00000-1031f000000 rw-s 00000000 00:05 3437956 /dev/zero (deleted)
1031f000000-10323980000 rw-s 00000000 00:05 3437961 /dev/zero (deleted)
10323980000-10323a00000 ---p 00000000 00:00 0
10323a00000-10328380000 rw-s 00000000 00:05 3437962 /dev/zero (deleted)
10328380000-10328400000 ---p 00000000 00:00 0
10328400000-1032cd80000 rw-s 00000000 00:05 3437967 /dev/zero (deleted)
1032cd80000-1032ce00000 ---p 00000000 00:00 0
1032ce00000-10331780000 rw-s 00000000 00:05 3437968 /dev/zero (deleted)
10331780000-10331800000 ---p 00000000 00:00 0
10331800000-10336180000 rw-s 00000000 00:05 3427061 /dev/zero (deleted)
10336180000-10336200000 ---p 00000000 00:00 0
10336200000-1033ab80000 rw-s 00000000 00:05 3427062 /dev/zero (deleted)
1033ab80000-1033ac00000 ---p 00000000 00:00 0
1033ac00000-1033f580000 rw-s 00000000 00:05 3427067 /dev/zero (deleted)
1033f580000-1033f600000 ---p 00000000 00:00 0
1033f600000-10343f80000 rw-s 00000000 00:05 3427068 /dev/zero (deleted)
10343f80000-10344000000 ---p 00000000 00:00 0
7f68e4183000-7f6a2b902000 rw-p 00000000 00:00 0
7f6af0000000-7f6af0021000 rw-p 00000000 00:00 0
7f6af0021000-7f6af4000000 ---p 00000000 00:00 0
7f6af6d00000-7f6afb680000 rw-s 00000000 00:16 147 /dev/shm/torch_11743_4183124175 (deleted)
7f6afb680000-7f6b00000000 rw-s 00000000 00:16 112 /dev/shm/torch_11743_869236225 (deleted)
7f6b00000000-7f6b00021000 rw-p 00000000 00:00 0
7f6b00021000-7f6b04000000 ---p 00000000 00:00 0
7f6b077ff000-7f6b07800000 ---p 00000000 00:00 0
7f6b07800000-7f6b08000000 rw-p 00000000 00:00 0
7f6b08000000-7f6b08021000 rw-p 00000000 00:00 0
7f6b08021000-7f6b0c000000 ---p 00000000 00:00 0
7f6b0c000000-7f6b0c021000 rw-p 00000000 00:00 0
7f6b0c021000-7f6b10000000 ---p 00000000 00:00 0
7f6b10000000-7f6b10021000 rw-p 00000000 00:00 0
7f6b10021000-7f6b14000000 ---p 00000000 00:00 0
7f6b140fb000-7f6b141fb000 rw-p 00000000 00:00 0
7f6b141fb000-7f6b1a1fb000 ---p 00000000 00:00 0
7f6b1a1fb000-7f6b26567000 rw-p 00000000 00:00 0
7f6b265fd000-7f6b290fd000 rw-p 00000000 00:00 0
7f6b290fd000-7f6b29101000 r-xp 00000000 08:02 7605181 /usr/lib/python2.7/lib-dynload/termios.x86_64-linux-gnu.so
7f6b29101000-7f6b29300000 ---p 00004000 08:02 7605181 /usr/lib/python2.7/lib-dynload/termios.x86_64-linux-gnu.so
7f6b29300000-7f6b29301000 r--p 00003000 08:02 7605181 /usr/lib/python2.7/lib-dynload/termios.x86_64-linux-gnu.so
7f6b29301000-7f6b29303000 rw-p 00004000 08:02 7605181 /usr/lib/python2.7/lib-dynload/termios.x86_64-linux-gnu.so
7f6b29303000-7f6b2930e000 r-xp 00000000 08:02 3015825 /lib/x86_64-linux-gnu/libnss_files-2.23.so
7f6b2930e000-7f6b2950d000 ---p 0000b000 08:02 3015825 /lib/x86_64-linux-gnu/libnss_files-2.23.so
7f6b2950d000-7f6b2950e000 r--p 0000a000 08:02 3015825 /lib/x86_64-linux-gnu/libnss_files-2.23.so
7f6b2950e000-7f6b2950f000 rw-p 0000b000 08:02 3015825 /lib/x86_64-linux-gnu/libnss_files-2.23.so
7f6b2950f000-7f6b2a455000 rw-p 00000000 00:00 0
7f6b2a460000-7f6b2a560000 rw-p 00000000 00:00 0
7f6b2a560000-7f6b2a5a2000 r-xp 00000000 08:02 7478123 /usr/lib/nvidia-375/libnvidia-fatbinaryloader.so.375.66
7f6b2a5a2000-7f6b2a7a1000 ---p 00042000 08:02 7478123 /usr/lib/nvidia-375/libnvidia-fatbinaryloader.so.375.66
7f6b2a7a1000-7f6b2a7ab000 rw-p 00041000 08:02 7478123 /usr/lib/nvidia-375/libnvidia-fatbinaryloader.so.375.66
7f6b2a7ab000-7f6b2a7ac000 rw-p 00000000 00:00 0
7f6b2a7ac000-7f6b2ae6e000 r-xp 00000000 08:02 7482960 /usr/lib/x86_64-linux-gnu/libcuda.so.375.66
7f6b2ae6e000-7f6b2b06d000 ---p 006c2000 08:02 7482960 /usr/lib/x86_64-linux-gnu/libcuda.so.375.66
7f6b2b06d000-7f6b2b188000 rw-p 006c1000 08:02 7482960 /usr/lib/x86_64-linux-gnu/libcuda.so.375.66
7f6b2b188000-7f6b2b195000 rw-p 00000000 00:00 0
7f6b2b195000-7f6b2b264000 r-xp 00000000 08:02 7480311 /usr/lib/x86_64-linux-gnu/libsqlite3.so.0.8.6
7f6b2b264000-7f6b2b464000 ---p 000cf000 08:02 7480311 /usr/lib/x86_64-linux-gnu/libsqlite3.so.0.8.6
7f6b2b464000-7f6b2b467000 r--p 000cf000 08:02 7480311 /usr/lib/x86_64-linux-gnu/libsqlite3.so.0.8.6
7f6b2b467000-7f6b2b469000 rw-p 000d2000 08:02 7480311 /usr/lib/x86_64-linux-gnu/libsqlite3.so.0.8.6
7f6b2b469000-7f6b2b46a000 rw-p 00000000 00:00 0
7f6b2b46a000-7f6b2b47c000 r-xp 00000000 08:02 7605164 /usr/lib/python2.7/lib-dynload/_sqlite3.x86_64-linux-gnu.so
7f6b2b47c000-7f6b2b67b000 ---p 00012000 08:02 7605164 /usr/lib/python2.7/lib-dynload/_sqlite3.x86_64-linux-gnu.so
7f6b2b67b000-7f6b2b67c000 r--p 00011000 08:02 7605164 /usr/lib/python2.7/lib-dynload/_sqlite3.x86_64-linux-gnu.so
7f6b2b67c000-7f6b2b67e000 rw-p 00012000 08:02 7605164 /usr/lib/python2.7/lib-dynload/_sqlite3.x86_64-linux-gnu.so
7f6b2b67e000-7f6b4dd81000 rw-p 00000000 00:00 0
7f6b4dd81000-7f6b4dd91000 r-xp 00000000 08:02 2102527 /home/yuluo/.local/lib/python2.7/site-packages/h5py/h5l.so
7f6b4dd91000-7f6b4df90000 ---p 00010000 08:02 2102527 /home/yuluo/.local/lib/python2.7/site-packages/h5py/h5l.so
7f6b4df90000-7f6b4df93000 rw-p 0000f000 08:02 2102527 /home/yuluo/.local/lib/python2.7/site-packages/h5py/h5l.so
7f6b4df93000-7f6b4df95000 rw-p 0004c000 08:02 2102527 /home/yuluo/.local/lib/python2.7/site-packages/h5py/h5l.so
7f6b4df95000-7f6b4dfa7000 r-xp 00000000 08:02 2102510 /home/yuluo/.local/lib/python2.7/site-packages/h5py/h5o.so
7f6b4dfa7000-7f6b4e1a6000 ---p 00012000 08:02 2102510 /home/yuluo/.local/lib/python2.7/site-packages/h5py/h5o.so
7f6b4e1a6000-7f6b4e1aa000 rw-p 00011000 08:02 2102510 /home/yuluo/.local/lib/python2.7/site-packages/h5py/h5o.so
7f6b4e1aa000-7f6b4e1ac000 rw-p 00057000 08:02 2102510 /home/yuluo/.local/lib/python2.7/site-packages/h5py/h5o.so
7f6b4e1ac000-7f6b4e1b0000 r-xp 00000000 08:02 2102506 /home/yuluo/.local/lib/python2.7/site-packages/h5py/h5fd.so
7f6b4e1b0000-7f6b4e3b0000 ---p 00004000 08:02 2102506 /home/yuluo/.local/lib/python2.7/site-packages/h5py/h5fd.so
7f6b4e3b0000-7f6b4e3b1000 rw-p 00004000 08:02 2102506 /home/yuluo/.local/lib/python2.7/site-packages/h5py/h5fd.so
7f6b4e3b1000-7f6b4e3b3000 rw-p 00014000 08:02 2102506 /home/yuluo/.local/lib/python2.7/site-packages/h5py/h5fd.so
7f6b4e3b3000-7f6b4e3be000 r-xp 00000000 08:02 2102514 /home/yuluo/.local/lib/python2.7/site-packages/h5py/h5i.so
7f6b4e3be000-7f6b4e5be000 ---p 0000b000 08:02 2102514 /home/yuluo/.local/lib/python2.7/site-packages/h5py/h5i.so
7f6b4e5be000-7f6b4e5c0000 rw-p 0000b000 08:02 2102514 /home/yuluo/.local/lib/python2.7/site-packages/h5py/h5i.so
7f6b4e5c0000-7f6b4e5c2000 rw-p 00036000 08:02 2102514 /home/yuluo/.local/lib/python2.7/site-packages/h5py/h5i.so
7f6b4e5c2000-7f6b4e5df000 r-xp 00000000 08:02 2102517 /home/yuluo/.local/lib/python2.7/site-packages/h5py/h5g.so
7f6b4e5df000-7f6b4e7de000 ---p 0001d000 08:02 2102517 /home/yuluo/.local/lib/python2.7/site-packages/h5py/h5g.so
7f6b4e7de000-7f6b4e7e1000 rw-p 0001c000 08:02 2102517 /home/yuluo/.local/lib/python2.7/site-packages/h5py/h5g.so
7f6b4e7e1000-7f6b4e7e2000 rw-p 00000000 00:00 0
7f6b4e7e2000-7f6b4e7e4000 rw-p 00087000 08:02 2102517 /home/yuluo/.local/lib/python2.7/site-packages/h5py/h5g.so
7f6b4e7e4000-7f6b4e7fb000 r-xp 00000000 08:02 2102508 /home/yuluo/.local/lib/python2.7/site-packages/h5py/h5f.so
7f6b4e7fb000-7f6b4e9fa000 ---p 00017000 08:02 2102508 /home/yuluo/.local/lib/python2.7/site-packages/h5py/h5f.so
7f6b4e9fa000-7f6b4e9fe000 rw-p 00016000 08:02 2102508 /home/yuluo/.local/lib/python2.7/site-packages/h5py/h5f.so
7f6b4e9fe000-7f6b4ea00000 rw-p 00068000 08:02 2102508 /home/yuluo/.local/lib/python2.7/site-packages/h5py/h5f.so
7f6b4ea00000-7f6b4ea40000 rw-p 00000000 00:00 0
7f6b4ea80000-7f6b4eb80000 rw-p 00000000 00:00 0
7f6b4eb80000-7f6b4eba9000 r-xp 00000000 08:02 2491420 /home/yuluo/.local/lib/python2.7/site-packages/scipy/spatial/_hausdorff.so
7f6b4eba9000-7f6b4eda8000 ---p 00029000 08:02 2491420 /home/yuluo/.local/lib/python2.7/site-packages/scipy/spatial/_hausdorff.so
7f6b4eda8000-7f6b4edab000 rw-p 00028000 08:02 2491420 /home/yuluo/.local/lib/python2.7/site-packages/scipy/spatial/_hausdorff.so
7f6b4edab000-7f6b4edac000 rw-p 00000000 00:00 0
7f6b4edac000-7f6b4edc2000 r-xp 00000000 08:02 2491411 /home/yuluo/.local/lib/python2.7/site-packages/scipy/spatial/_distance_wrap.so
7f6b4edc2000-7f6b4efc2000 ---p 00016000 08:02 2491411 /home/yuluo/.local/lib/python2.7/site-packages/scipy/spatial/_distance_wrap.so
7f6b4efc2000-7f6b4efc3000 rw-p 00016000 08:02 2491411 /home/yuluo/.local/lib/python2.7/site-packages/scipy/spatial/_distance_wrap.so
7f6b4efc3000-7f6b4efec000 r-xp 00000000 08:02 2491417 /home/yuluo/.local/lib/python2.7/site-packages/scipy/spatial/_voronoi.so
7f6b4efec000-7f6b4f1ec000 ---p 00029000 08:02 2491417 /home/yuluo/.local/lib/python2.7/site-packages/scipy/spatial/_voronoi.so
7f6b4f1ec000-7f6b4f1ef000 rw-p 00029000 08:02 2491417 /home/yuluo/.local/lib/python2.7/site-packages/scipy/spatial/_voronoi.so
7f6b4f1ef000-7f6b4f4f0000 rw-p 00000000 00:00 0
7f6b4f4f0000-7f6b4f5cf000 r-xp 00000000 08:02 2491408 /home/yuluo/.local/lib/python2.7/site-packages/scipy/spatial/qhull.so
7f6b4f5cf000-7f6b4f7ce000 ---p 000df000 08:02 2491408 /home/yuluo/.local/lib/python2.7/site-packages/scipy/spatial/qhull.so
7f6b4f7ce000-7f6b4f7d7000 rw-p 000de000 08:02 2491408 /home/yuluo/.local/lib/python2.7/site-packages/scipy/spatial/qhull.so
7f6b4f7d7000-7f6b4f7d9000 rw-p 00000000 00:00 0
7f6b4f7d9000-7f6b4f7e2000 rw-p 003bd000 08:02 2491408 /home/yuluo/.local/lib/python2.7/site-packages/scipy/spatial/qhull.so
7f6b4f7e2000-7f6b4f879000 r-xp 00000000 08:02 2491415 /home/yuluo/.local/lib/python2.7/site-packages/scipy/spatial/ckdtree.so
7f6b4f879000-7f6b4fa79000 ---p 00097000 08:02 2491415 /home/yuluo/.local/lib/python2.7/site-packages/scipy/spatial/ckdtree.so
7f6b4fa79000-7f6b4fa81000 rw-p 00097000 08:02 2491415 /home/yuluo/.local/lib/python2.7/site-packages/scipy/spatial/ckdtree.so
7f6b4fa81000-7f6b4facf000 r-xp 00000000 08:02 2491814 /home/yuluo/.local/lib/python2.7/site-packages/scipy/interpolate/interpnd.so
7f6b4facf000-7f6b4fcce000 ---p 0004e000 08:02 2491814 /home/yuluo/.local/lib/python2.7/site-packages/scipy/interpolate/interpnd.so
7f6b4fcce000-7f6b4fcd3000 rw-p 0004d000 08:02 2491814 /home/yuluo/.local/lib/python2.7/site-packages/scipy/interpolate/interpnd.so
7f6b4fcd3000-7f6b4fcd4000 rw-p 00000000 00:00 0
7f6b4fcd4000-7f6b4fd24000 r-xp 00000000 08:02 2491821 /home/yuluo/.local/lib/python2.7/site-packages/scipy/interpolate/_ppoly.so
7f6b4fd24000-7f6b4ff23000 ---p 00050000 08:02 2491821 /home/yuluo/.local/lib/python2.7/site-packages/scipy/interpolate/_ppoly.so
7f6b4ff23000-7f6b4ff29000 rw-p 0004f000 08:02 2491821 /home/yuluo/.local/lib/python2.7/site-packages/scipy/interpolate/_ppoly.so
7f6b4ff29000-7f6b4ff2a000 rw-p 00000000 00:00 0
7f6b4ff2a000-7f6b4ff2d000 rw-p 00181000 08:02 2491821 /home/yuluo/.local/lib/python2.7/site-packages/scipy/interpolate/_ppoly.so
7f6b4ff2d000-7f6b4ff69000 r-xp 00000000 08:02 2491817 /home/yuluo/.local/lib/python2.7/site-packages/scipy/interpolate/_bspl.so
7f6b4ff69000-7f6b50169000 ---p 0003c000 08:02 2491817 /home/yuluo/.local/lib/python2.7/site-packages/scipy/interpolate/_bspl.so
7f6b50169000-7f6b5016e000 rw-p 0003c000 08:02 2491817 /home/yuluo/.local/lib/python2.7/site-packages/scipy/interpolate/_bspl.so
7f6b5016e000-7f6b5016f000 rw-p 00000000 00:00 0
7f6b5016f000-7f6b50171000 rw-p 00124000 08:02 2491817 /home/yuluo/.local/lib/python2.7/site-packages/scipy/interpolate/_bspl.so
7f6b50171000-7f6b501b1000 rw-p 00000000 00:00 0
7f6b501b1000-7f6b50211000 r-xp 00000000 08:02 2491823 /home/yuluo/.local/lib/python2.7/site-packages/scipy/interpolate/dfitpack.so
7f6b50211000-7f6b50411000 ---p 00060000 08:02 2491823 /home/yuluo/.local/lib/python2.7/site-packages/scipy/interpolate/dfitpack.so
7f6b50411000-7f6b50418000 rw-p 00060000 08:02 2491823 /home/yuluo/.local/lib/python2.7/site-packages/scipy/interpolate/dfitpack.so
7f6b50418000-7f6b5041a000 rw-p 000b1000 08:02 2491823 /home/yuluo/.local/lib/python2.7/site-packages/scipy/interpolate/dfitpack.so
7f6b5041a000-7f6b5044f000 r-xp 00000000 08:02 2491809 /home/yuluo/.local/lib/python2.7/site-packages/scipy/interpolate/_fitpack.so
7f6b5044f000-7f6b5064f000 ---p 00035000 08:02 2491809 /home/yuluo/.local/lib/python2.7/site-packages/scipy/interpolate/_fitpack.so
7f6b5064f000-7f6b50650000 rw-p 00035000 08:02 2491809 /home/yuluo/.local/lib/python2.7/site-packages/scipy/interpolate/_fitpack.so
7f6b50650000-7f6b50652000 rw-p 0004e000 08:02 2491809 /home/yuluo/.local/lib/python2.7/site-packages/scipy/interpolate/_fitpack.so
7f6b50652000-7f6b5065c000 r-xp 00000000 08:02 2491256 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_nnls.so
7f6b5065c000-7f6b5085c000 ---p 0000a000 08:02 2491256 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_nnls.so
7f6b5085c000-7f6b5085d000 rw-p 0000a000 08:02 2491256 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_nnls.so
7f6b5085d000-7f6b5085f000 rw-p 0001c000 08:02 2491256 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_nnls.so
7f6b5085f000-7f6b50862000 r-xp 00000000 08:02 2491290 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_zeros.so
7f6b50862000-7f6b50a61000 ---p 00003000 08:02 2491290 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_zeros.so
7f6b50a61000-7f6b50a62000 rw-p 00002000 08:02 2491290 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_zeros.so
7f6b50a62000-7f6b50aa2000 rw-p 00000000 00:00 0
7f6b50aa2000-7f6b50ac4000 r-xp 00000000 08:02 2491344 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_lsq/givens_elimination.so
7f6b50ac4000-7f6b50cc4000 ---p 00022000 08:02 2491344 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_lsq/givens_elimination.so
7f6b50cc4000-7f6b50cc7000 rw-p 00022000 08:02 2491344 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_lsq/givens_elimination.so
7f6b50cc7000-7f6b50cf1000 r-xp 00000000 08:02 2491277 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_group_columns.so
7f6b50cf1000-7f6b50ef1000 ---p 0002a000 08:02 2491277 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_group_columns.so
7f6b50ef1000-7f6b50ef4000 rw-p 0002a000 08:02 2491277 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_group_columns.so
7f6b50ef4000-7f6b50f13000 r-xp 00000000 08:02 2491254 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_minpack.so
7f6b50f13000-7f6b51113000 ---p 0001f000 08:02 2491254 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_minpack.so
7f6b51113000-7f6b51114000 rw-p 0001f000 08:02 2491254 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_minpack.so
7f6b51114000-7f6b51116000 rw-p 00031000 08:02 2491254 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_minpack.so
7f6b51116000-7f6b5112d000 r-xp 00000000 08:02 2491275 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_slsqp.so
7f6b5112d000-7f6b5132d000 ---p 00017000 08:02 2491275 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_slsqp.so
7f6b5132d000-7f6b5132e000 rw-p 00017000 08:02 2491275 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_slsqp.so
7f6b5132e000-7f6b51330000 rw-p 0002c000 08:02 2491275 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_slsqp.so
7f6b51330000-7f6b5134e000 r-xp 00000000 08:02 2491253 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_cobyla.so
7f6b5134e000-7f6b5154e000 ---p 0001e000 08:02 2491253 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_cobyla.so
7f6b5154e000-7f6b5154f000 rw-p 0001e000 08:02 2491253 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_cobyla.so
7f6b5154f000-7f6b51551000 rw-p 00035000 08:02 2491253 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_cobyla.so
7f6b51551000-7f6b5155d000 r-xp 00000000 08:02 2491287 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/moduleTNC.so
7f6b5155d000-7f6b5175c000 ---p 0000c000 08:02 2491287 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/moduleTNC.so
7f6b5175c000-7f6b5175d000 rw-p 0000b000 08:02 2491287 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/moduleTNC.so
7f6b5175d000-7f6b5177b000 r-xp 00000000 08:02 2491248 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_lbfgsb.so
7f6b5177b000-7f6b5197b000 ---p 0001e000 08:02 2491248 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_lbfgsb.so
7f6b5197b000-7f6b5197c000 rw-p 0001e000 08:02 2491248 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_lbfgsb.so
7f6b5197c000-7f6b5197f000 rw-p 00035000 08:02 2491248 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/_lbfgsb.so
7f6b5197f000-7f6b51988000 r-xp 00000000 08:02 2491259 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/minpack2.so
7f6b51988000-7f6b51b88000 ---p 00009000 08:02 2491259 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/minpack2.so
7f6b51b88000-7f6b51b89000 rw-p 00009000 08:02 2491259 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/minpack2.so
7f6b51b89000-7f6b51b8b000 rw-p 0001d000 08:02 2491259 /home/yuluo/.local/lib/python2.7/site-packages/scipy/optimize/minpack2.so
7f6b51b8b000-7f6b51bcb000 rw-p 00000000 00:00 0
7f6b51bcb000-7f6b51c57000 r-xp 00000000 08:02 2492214 /home/yuluo/.local/lib/python2.7/site-packages/scipy/sparse/linalg/eigen/arpack/_arpack.so
7f6b51c57000-7f6b51e56000 ---p 0008c000 08:02 2492214 /home/yuluo/.local/lib/python2.7/site-packages/scipy/sparse/linalg/eigen/arpack/_arpack.so
7f6b51e56000-7f6b51e62000 rw-p 0008b000 08:02 2492214 /home/yuluo/.local/lib/python2.7/site-packages/scipy/sparse/linalg/eigen/arpack/_arpack.so
7f6b51e62000-7f6b51e63000 rw-p 00000000 00:00 0
7f6b51e63000-7f6b51e67000 rw-p 000d3000 08:02 2492214 /home/yuluo/.local/lib/python2.7/site-packages/scipy/sparse/linalg/eigen/arpack/_arpack.so
7f6b51e67000-7f6b51eba000 r-xp 00000000 08:02 2492241 /home/yuluo/.local/lib/python2.7/site-packages/scipy/sparse/linalg/dsolve/_superlu.so
7f6b51eba000-7f6b520ba000 ---p 00053000 08:02 2492241 /home/yuluo/.local/lib/python2.7/site-packages/scipy/sparse/linalg/dsolve/_superlu.so
7f6b520ba000-7f6b520bc000 rw-p 00053000 08:02 2492241 /home/yuluo/.local/lib/python2.7/site-packages/scipy/sparse/linalg/dsolve/_superlu.so
When I use DataParallel, it continuously allocate cpu memory until swap memory is full (and system crashes then). But there is no above invalid pointer error.
Things I have tried:
Memory leak is not from dataloader since when I comment all forward and backward, it behaves well.
The problem happens in forward stage since when I comment back propogagtion, it crashes.
Any ideas how to debug on this? It looks like a memory leak of dataloader for me. As for the single gpu, there seems to be some problems.
Thanks in advance.
[Update]
By import torch after scipy, the crash on single GPU is resolved. However, dataparellel still has memory leak.
|
st116630
|
I have a model I am using for sentence classification. At the start I create a vocabulary of the words in the training and validation datasets and then load the Glove word vectors corresponding to these into an embedding layer (which is fixed and not updated during training)
After the embedding layery I have a relatively simple three layer net with a softmax output.
The model works fine on the training and test data and I get about 90% accuracy on both.
I then save the model using the command
torch.save(model.state_dict(), model_path)
Fine so far.
To use the model on new data which hasn’t been categorised in advance I then re-create the model and then read in the parameters as follows:
model.load_state_dict(torch.load(model_path))
model.eval()
The problem is that I now need to change the embedding layer since the vocabulary is different. The approach I have taken is to create a new vocabulary and set of pre-trained word vectors from Glove and then update the embedding layer of the model as follows:
model.embeddings=nn.Embedding(vocab_size, word_vector_length)
model.embeddings.weight.data.copy_(v.vectors)
model.eval()
When I look at the model it looks to have done everything ok and the model will run, however, the resutls are clearly in error. If I actually use the model on data I originally trained it on I get very different (and awful) results (even when the vocab and embedding layer loaded are the same as they were during training)
I have checked and the weights in all of the hidden layers still seem to be the same before and after the loading process. Am I don’t something stupid?
Many thanks
John
|
st116631
|
I originally went off on a tangent in my reply, about ‘original research’ yada yada, but then noticed this sentence ‘(even when the vocab and embedding layer loaded are the same as they were during training)’. So, you can probably test this without even training, just using random numbers, with just like say vocab size 5 etc. This’ll be much quicker/faster/easier to get working, and find the bug. And also means as a bonus you could post a code snippet we could run and try
|
st116632
|
Thanks Hugh, will try this and come back once I have a clearer picture of what is happening
John
|
st116633
|
After trying various simple models it turns out that PyTorch is doing exactly what it should and is consistent. The problem is arising because the Glove word vectors for things it does not recognise such as ‘’ seem to change each time the program is run, which then means that the inputs to the new are changing.
I can avoid this by loading the entire vocabulary for all of the words I need in advance and then training using on the part of the data for which I have category information, then running the remaining data through the model.
I would appreciate any advice as to how others tackle this problem since it can’t be unique to me
Thanks
John
|
st116634
|
Apologies ’ should have been the unknown symbol but it won’t show properly in the window
|
st116635
|
Hi all. I am trying to copy two of my models onto the GPU simultaneously using threads. On normally transferring each of the models, they take around 30 ms each. However when I do them parallely, they take approximately 60-65 ms each. Why is this so and how can I solve it ? please help !! I have used multiprocessing also but even that didnt help.
|
st116636
|
Have you checked the theoretical lowerbound time, based on the size of the data, and the banwidth available between your main memory and the gpu on-card memory?
|
st116637
|
When I run this script:
import torch
a = torch.rand(1000, 10000)
while True:
print('.')
a.tanh_()
and then open htop, I expect to see all 8 cores running at 100%, but only 4 seem to be running? :
Screen Shot 2017-07-17 at 9.01.08 AM.png1544×224 42.4 KB
Thoughts?
[edit: also, the 4 cores that are running are not exactly running at 100% either]
|
st116638
|
Is it always cores 1,3,5,7 that are activated, or is it changing?
It is possible that your ‘a’ is allocated on 4 of the cores, so the function process a loop into a first core (for a first par of ‘a’), then a second etc… Each time, the corresponding core is active at 100%, but over a shirt time. If htop is showing a moving average of the activity, it explains you have 4 cores working at 1/4.
|
st116639
|
I am writing a custom module in which the Function will process the input and create buffers in Function scope to hold the processed data for backpropagation.
During the evaluation/test/inference phase of the model, I see that the memory usage keeps on increasing for the model. I am not sure about the reason.
I have the below questions:
Whether the reason could be: Since the backpropagation is not done during inference, the buffers in the Function are not deallocated?
How long the ‘Function’ created in the custom ‘Module’ be in memory? Is it until the backpropagation?
I want to remove the buffer data in ‘Function’ during inference cases. Is there any way to achieve this?
|
st116640
|
Is there a way to convert FloatTensor to ByteTensor?
I’m trying to do the equivalent of: np.random.uniform(size=images.shape) < images.
I create a new Tensor and initialize it with nn.init.uniform, then perform the comparison with the ByteTensor (images), the result is still a FloatTensor. How do I convert this to ByteTensor?
|
st116641
|
http://35.197.26.245:6006/#graphs 33
I am wondering whether the route created wrt to the following code is correct, especially the two ConvNd nodes below the MaxPool layer. Any ideas?
ps. the graph is traced according to the Varible.grad_fn attribute recursively.
github.com
lanpa/tensorboard-pytorch/blob/master/demo_graph.py 35
import torch
import torch.nn as nn
import torchvision.utils as vutils
import numpy as np
import torch.nn.functional as F
import torchvision.models as models
from tensorboardX import SummaryWriter
class Mnist(nn.Module):
def __init__(self):
super(Mnist, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(320, 50)
self.fc2 = nn.Linear(50, 10)
self.bn = nn.BatchNorm2d(20)
def forward(self, x):
x = F.max_pool2d(self.conv1(x), 2)
x = F.relu(x)+F.relu(-x)
This file has been truncated. show original
|
st116642
|
When finetuning model from a pretrained one, the CAFFE users would set the BN layers with “use_global_stats: true”, which will employ the mean and std values in the pretrained model for the finetuning stage. In my works, I find that setting is sometimes important for performance. What should I do when using Pytorch if I want to employ the already-learned mean and std rather than the moving average in finetuning jobs?
|
st116643
|
model.train(True)
for m in model.modules():
if isinstance(m, nn.BatchNorm1d) or isinstance(m, nn.BatchNorm2d):
m.eval()
# Run your training here
You can further set requires_grad=False for your BN layers, but that only effects weights + biases.
|
st116644
|
Assume that V is a Variable of size (10,6). I need to calculate the distances (e.g. the euclidean distance) between each row-vector contained in V, and store the 10-by-10 result into D. Is it possible to manage that operation without the “for” iterator?
|
st116645
|
At the expense of memory, you can do
((v.unsqueeze (0)-v.unsqueeze (1))**2).sum (2)**0.5.
This works on master, on 0.1.12 you would need to throw in .expand (10,10,6) after the unsqueezes.
Best regards
Thomas
|
st116646
|
Would the following work? Would it use less memory?
TwoAB = 2 * A @ B.transpose(0,1)
print(torch.sqrt(torch.sum(A * A, 1).expand_as(TwoAB) + torch.sum(B * B, 1).transpose(0,1).expand_as(TwoAB) - TwoAB))
(here A is one matrix, and B is the other. In this thread’s case, they’d both be V)
|
st116647
|
hello every, after i upgrade the pytorch, my code has follow warning, it took less memory, but become slower, can anyone give help me ? any reply will be appreciated.
/home/wen/anaconda3/lib/python2.7/site-packages/torch/autograd/_functions/compare.py:17: UserWarning: self and other not broadcastable, but have the same number of elements. Falling back to deprecated pointwise behavior. mask = getattr(a, cls.fn_name)(b) /home/wen/anaconda3/lib/python2.7/site-packages/torch/autograd/_functions/tensor.py:464: UserWarning: self and mask not broadcastable, but have the same number of elements. Falling back to deprecated pointwise behavior. return tensor.masked_select(mask)
|
st116648
|
The torch.cat function is giving ‘RuntimeError: out of range’ when concatenating an empty variable and a non-empty variable. Please see the example below:
import torch
x = Variable(torch.randn(1, 5).cuda())
y = Variable(torch.randn(4, 5).cuda())
z = torch.cat([x,y], 0) #works fine
The below code gives runtime error:
import torch
x = Variable(torch.randn(0, 5).cuda())
y = Variable(torch.randn(4, 5).cuda())
z = torch.cat([x,y], 0) #'RuntimeError: out of range'
I checked the same scenario with tensors. It works fine.
import torch
x = torch.randn(0, 5).cuda()
y = torch.randn(4, 5).cuda()
z = torch.cat([x,y], 0)
|
st116649
|
Based on the discussion on the issue below, I am tempted think that empty tensors are not expected to function and only do so by accident, but you might check with someone who has first-hand knowledge.
Best regards
Thomas
github.com/pytorch/pytorch
zero-dimensional numpy arrays should raise exception on from_numpy 58
opened
Jul 11, 2017
closed
Jul 13, 2017
serhii-havrylov
It seems that the way how slicing is handled in pytorch is a little bit inconsistent with numpy and with pytorch...
medium priority (this tag is deprecated)
|
st116650
|
Thanks for your input!
Maybe, my understanding was the other way around that the empty tensors should work.
Anyway, there is a bit of inconsistency. If we can make it consistent, it will be great.
I am using torch version ‘0.1.12+036c3f9’. I am not sure where to find the file to modify the function.
|
st116651
|
I am getting the following error while trying to use the BCEWithLogitsLoss from torch.nn.
AttributeError: module 'torch.nn' has no attribute 'BCEWithLogitsLoss'
I am wondering why it is happening?
|
st116652
|
In [1]: import torch
In [2]: dir(torch.nn)
Out[2]:
['AdaptiveAvgPool1d',
'AdaptiveAvgPool2d',
'AdaptiveMaxPool1d',
'AdaptiveMaxPool2d',
'AvgPool1d',
'AvgPool2d',
'AvgPool3d',
'BCELoss',
'BatchNorm1d',
'BatchNorm2d',
'BatchNorm3d',
'Bilinear',
'ConstantPad2d',
...
No such method, only BSELoss exists. What makes you feel BCEWithLogitsLoss exists?
|
st116653
|
See the followings.
http://pytorch.org/docs/master/nn.html?highlight=logitsloss#torch.nn.BCEWithLogitsLoss 192
http://pytorch.org/docs/master/nn.html?highlight=binary#torch.nn.functional.binary_cross_entropy_with_logits 88
|
st116654
|
The first two are in master branch. Are you building master from source, or are you using a binary release version? eg what is the result of:
import torch
print(torch.version.__version__)
?
|
st116655
|
Oh, no. I am not building master from source. I am using pytorch 0.1.12_2 binary version.
How can I use those two loss functions?
|
st116656
|
GitHub
pytorch/pytorch 161
Tensors and Dynamic neural networks in Python with strong GPU acceleration - pytorch/pytorch
|
st116657
|
It’s a new feature. You may use some hacky ways to include that from github repo, or install pytorch from source.
|
st116658
|
Is it possible to accomplish the following (from Chainer) in PyTorch?
github.com
chainer/chainerrl/blob/master/chainerrl/agents/acer.py#L193-L200 1
@contextlib.contextmanager
def backprop_truncated(*variables):
backup = [v.creator for v in variables]
for v in variables:
v.unchain()
yield
for v, backup_creator in zip(variables, backup):
v.set_creator(backup_creator)
For reference, I’m trying to implement the “efficient” trust region optimisation from ACER: Trust region update. Whereas my code currently backprops through the entire computation graph of the policy gradient loss and is hence very expensive, the efficient version cuts the graph just before the final layer, then calculates gradients. However, the parents need to be rejoined, as eventually the new loss should be backpropped through the entire graph.
|
st116659
|
You could split it by default in the forward pass and manually pass the gradients of the last bit to the backward of what has been cut off when that is desired.
Best regards
Thomas
|
st116660
|
Very basic POC: https://github.com/hughperkins/pytorch-prettify/blob/master/prettify_errors.ipynb 2
concept:
paste the stack trace into the jupytr notebook
press ctrl-enter, and see some somewhat cleaned up stack trace
|
st116661
|
I am taking the output of a linear layer and resizing it, then I do run CrossEntropyLoss on it. On that line it dies with a type mismatch of:
TypeError: CudaSpatialClassNLLCriterion_updateOutput received an invalid combination of arguments - got (int, torch.cuda.FloatTensor, torch.cuda.FloatTensor, torch.cuda.FloatTensor, bool, NoneType, torch.cuda.FloatTensor), but expected (int state, torch.cuda.FloatTensor input, torch.cuda.LongTensor target, torch.cuda.FloatTensor output, bool sizeAverage, [torch.cuda.FloatTensor weights or None], torch.cuda.FloatTensor total_weight)
the difference seems to be the third argument, it expects torch.cuda.LongTensor target but got torch.cuda.FloatTensor
I have printed out all my types and everything seems to be a FloatTensor. I dont understand where the Long requirement is coming from.
This is the relevant code for training:
criterion = nn.CrossEntropyLoss()
for epoch in range(args.num_epochs):
for i, (images, captions, lengths) in enumerate(data_loader):
decoder.zero_grad()
encoder.zero_grad()
images = Variable(images,volatile=False)
features = encoder(images)
outputs = decoder(features, captions, lengths)
loss = criterion(outputs, images)
and in decoder:
def __init__(self, embed_size, hidden_size, vocab_size, num_layers):
super(DecoderRNN, self).__init__()
self.embed = nn.Embedding(vocab_size, embed_size)
self.lstm = nn.LSTM(embed_size, hidden_size, num_layers, batch_first=True)
self.linear = nn.Linear(hidden_size, vocab_size)
self.linear_two = nn.Linear(4800)
self.init_weights()
def forward(self, features, captions, lengths):
embeddings = self.embed(captions)
embeddings = torch.cat((features.unsqueeze(1), embeddings), 1)
packed = pack_padded_sequence(embeddings, lengths, batch_first=True)
hiddens, _ = self.lstm(packed)
outputs = self.linear(hiddens[0])
outputs = linear.linear_two(outputs)
outputs = outputs.view(outputs.size(0),3,40,40)
print("output rnn type"+str(type(outputs.data)))
return outputs
If I remove these 2 lines,I can get the code running again.
outputs = linear.linear_two(outputs)
outputs = outputs.view(outputs.size(0),3,40,40)
Can anyone give me some insights into what Im doing wrong here?
|
st116662
|
I found this thread which has a similar issue:
Problems with weight array of FloatTensor type in loss function
I have mostly worked on keras with tf backend and sometimes dabbled with torch7. I was intrigued by the pytorch project and wanted to test it out. So, I was trying to run a simple model on a dataset where I loaded my features into a np.float64 array and the target labels into a np.float64 array. Now, PyTorch automatically converted them both to DoubleTensor and that seems okay to me. However, the loss function expects Double Tensors for the Weight and the Bias but apparently it is getting Float …
I printed out the types for all my variables and everything is a float. I dont know why it expects a Long tensor all of a sudden. And I cant find any data that is of type Long. I am stuck here, any help would be appreciated it.
|
st116663
|
The instance of CrossEntropyLoss 75 expects a long tensor with the target classes as second input. Quite possibly you are looking for a different loss function.
Best regards
Thomas
|
st116664
|
@tom Thanks for the help. That does seem to be the issue.
I had looked at the docs earlier. All of the loss functions dont have docs for the instance method. If those where there, I bet they would reduce a lot of confusion for other people
|
st116665
|
Hello everyone,
I’m working with ImageFolder. When I looked at the doc in my jupyter (Shift + Tab), I saw the following:
Init signature: T.Scale(size, interpolation=2)
Docstring:
Rescales the input PIL.Image to the given 'size'.
'size' will be the size of the smaller edge.
For example, if height > width, then image will be
rescaled to (size * height / width, size)
size: size of the smaller edge
interpolation: Default: PIL.Image.BILINEAR
So my question: Is there another way to set the new size with width=height like (224, 224) for example without using the github last version because I’ve noticed the online doc of Scale mentions the first argument, size can be a sequence.
Thank you.
|
st116666
|
Hi I found a short work around probably not elegant but it works.
It’s based on Data Loading and Processing Tutorial - Pytorch Tutorials 0.1.12_2 documentation 6.
I defined a custom transform function:
class Rescale(object):
"""
Rescale the image in a sample to a given size.
Args:
output_size (tuple or tuple): Desired output size. If tuple, output is
matched to output_size.
"""
def __init__(self, size=(224, 224)):
assert isinstance(size, (int, tuple))
self.output_size = size
def __call__(self, img):
import PIL
res = PIL.Image.Image.resize(img, self.output_size)
return res
Then I called directly ImageFolder with Rescale:
train_data = dset.ImageFolder(root=path+'train', transform=T.Compose([Rescale(size=(256,256)), T.ToTensor()]), target_transform=None)
train_loader = DataLoader(train_data,batch_size=4)
for t, (x,y) in enumerate(train_loader):
print(t, x.size())
Output(my dataset has 20 images so 5 batches of size 4):
0 torch.Size([4, 3, 256, 256])
1 torch.Size([4, 3, 256, 256])
2 torch.Size([4, 3, 256, 256])
3 torch.Size([4, 3, 256, 256])
4 torch.Size([4, 3, 256, 256])
Hopefully this going to be helpful for folks wanting such functionality.
|
st116667
|
Hi,
the other convenient option probably is to install torchvision from source. Note that it has its own repository at
GitHub
pytorch/vision 10
Datasets, Transforms and Models specific to Computer Vision - pytorch/vision
Torchvision is considerably easier to install from source than pytorch itself.
Best regards
Thomas
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.