st118768
|
Hi!
I’m trying to apply 1D convolution with padding to preserve input size.
Everything is OK with kernel size = 3 and padding = 1, but PyTorch fails with a SIGSEGV when kernel size = 5, padding = 2 and the input tensor contains a sequence of length 1. I’m not using CUDA. Is it a bug?
Code to reproduce the problem:
import torch as th
from torch.autograd import Variable
import torch.nn as nn
data = Variable(th.randn(1, 16, 1))
conv_ok = nn.Conv1d(
in_channels=16,
out_channels=16,
kernel_size=3,
padding=1)
conv_not_ok = nn.Conv1d(
in_channels=16,
out_channels=16,
kernel_size=5,
padding=2)
c1 = conv_ok(data)
c2 = conv_not_ok(data)
|
st118769
|
Hi,
I can reproduce the problem and it’s definitely a bug in our unfold code.
I will send a PR to fix that today.
Sorry for the inconvenience.
|
st118770
|
Yes it is.
I will check the whole function as I’m not sure why lpad and rpad are size_t and not int here…
|
st118771
|
Suppose I don’t use torch.nn.Module and I have some trainable weight matrices w1 and w2. How do I feed these parameters into the optimiser? Would I do something like this:
torch.optim.SGD({w1,w2}, lr=1e-4)
|
st118772
|
The first argument you feed when initializing any optimizer should be an iterable of Variables or dictionaries containing Variables. In your case a simple list containing w1 and w2 should be fine as long as those are Variables that require gradients.
import torch
import torch.optim as optim
from torch.autograd import Variable
w = Variable(torch.randn(3, 5), requires_grad=True)
b = Variable(torch.randn(3), requires_grad=True)
x = Variable(torch.randn(5))
optimizer = optim.SGD([w,b], lr=0.01)
optimizer.zero_grad()
y = torch.mv(w, x) + b
y.backward(torch.randn(3))
optimizer.step()
|
st118773
|
May I know what PyTorch’s equivalent of np.empty() is?
I would like to have an empty array and concatenate arrays to it in a loop.
My current usage:
a = np.empty(0) # This creates an empty array
b = np.concatenate((a, [1])) # New array of shape (1,)
c = np.concatenate((b,[1])) # New array of shape (2,) and so on
|
st118774
|
The fastest way to do this in PyTorch would be to repeatedly append to a list, then call torch.cat on the whole list once at the end. Assuming you’re on GPU, each call to torch.cat is a kernel invocation while appending to a list doesn’t touch the GPU.
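For reference, a minimal sketch of that append-then-cat pattern (the shapes are just illustrative):
import torch
chunks = []                       # plain Python list; appending costs no GPU work
for i in range(10):
    chunks.append(torch.rand(1))  # e.g. one new piece per iteration
result = torch.cat(chunks)        # a single cat (one kernel invocation) at the end
print(result.size())              # torch.Size([10])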
|
st118775
|
What is the major difference between gather and index_select, other than that gather “gathers” values and index_select “selects” values? Is there any difference on the basis of underlying storage?
Also, what if I want to assign a new value to a sub-tensor of a tensor (indexed as with index_select)? How do I do that? For example,
t = torch.Tensor([[1,2,5],[3,4,6]])
t1 = torch.Tensor([[1,2,8],[3,4,9]])
t2 = torch.index_select(t1, 1, torch.LongTensor([0, 2]))
I want to assign the [0, 2] indices along dimension 1 of t, to t2. I noticed that scatter_ does something like this, but I can’t seem to understand how it works. Any idea if that is indeed the way to go and if so, how does it work?
|
st118776
|
Hello @vvanirudh,
Scatter allows you to index the target of the assignment. You would need to have an index tensor that is of the same shape as the source tensor. The index tensor will give one of the target dimensions while the others come from the original.
So if you take the example from the docs
# some data
orig = torch.rand(2, 5)
# where we want to put the data
# notice: idx.size() is equal to orig.size()
# idx gives the dimension-zero index in the target
idx = torch.LongTensor([[0, 1, 2, 0, 0], [2, 0, 0, 1, 2]])
# notice: t1.size(1) is equal to orig.size(1)
t1 = torch.zeros(3, 5).scatter_(0, idx, orig)
print(t1)
the scatter assigning t1 will give you the same as the following explicit loop:
# zeros of the right size
t2 = torch.zeros(3,5)
# loop over indices of orig (and idx)
for x in range(orig.size(0)):
    for y in range(orig.size(1)):
        # assign to t2 at the place given by idx in the first dimension, and y in the second
        t2[idx[x,y],y] = orig[x,y]
print(t2)
gather is the counterpart, i.e. with
copy = t2.gather(0, idx)
print(copy)
copy will be the same as orig. You could thus spell out gather as
copy2 = torch.zeros(*idx.size())
for x in range(idx.size(0)):
    for y in range(idx.size(1)):
        copy2[x,y] = t2[idx[x,y],y]
print(copy2)
index_select, on the other hand, will not select single elements, but rather entire rows, columns, or whatever dimension (similar to passing a list in numpy):
row_idx = torch.LongTensor([0,2])
zeroandthirdrow = t2.index_select(0, row_idx)
print(zeroandthirdrow)
this could be spelled out as
zeroandthirdrow2 = torch.zeros(row_idx.size(0), t2.size(1))
for x in range(row_idx.size(0)):
    for y in range(t2.size(1)):
        zeroandthirdrow2[x,y] = t2[row_idx[x], y]
print(zeroandthirdrow2)
or in intermediate compactness with row assignments instead of the y loop:
zeroandthirdrow3 = torch.zeros(row_idx.size(0), t2.size(1))
for x in range(row_idx.size(0)):
    zeroandthirdrow3[x,:] = t2[row_idx[x],:]
print(zeroandthirdrow3)
I must admit that I usually use shapes and spelling out indices (on paper instead of in for loops), while others seem to have fancy pictures.
Best regards
Thomas
|
st118777
|
I tried to implement an FCN with PyTorch, which I had previously implemented in Torch. The BCE loss drops to around 0.05 after several epochs, but when I simply run inference, even on an image from the training dataset, the result is just a mess.
I really cannot understand why. Any help?
The main training loop (I use my own image dataloader and training loop; the label is either 1 or 0):
optimizer = optim.Adam(fcn.parameters(), lr=opt.lr)
def train(epoch):
    np.random.shuffle(name_list)
    epoch_lossSeg = 0.0
    for iteration in range(0, dataset_size, opt.batchSize):
        batch_img_tensor = torch.FloatTensor(opt.batchSize, 3, opt.h, opt.w)
        batch_annotation_tensor = torch.FloatTensor(opt.batchSize, 1, opt.h, opt.w)
        # get a batch input and label
        for i in range(iteration + 0, iteration + opt.batchSize):
            img_ = Image.open(opt.train_img_path + '/' + name_list[i])
            img = img_.resize(img_size, Image.BILINEAR)
            batch_img_tensor[i - iteration] = torch.from_numpy(np.array(img)).float()
            annotation_ = Image.open(opt.train_label_path + '/' + name_list[i])
            annotation = annotation_.resize(img_size, Image.NEAREST)
            batch_annotation_tensor[i - iteration] = torch.from_numpy(np.array(annotation)).float()
        # PyTorch use pixel values from 0 to 1
        batch_img_tensor = batch_img_tensor
        batch_annotation_tensor = batch_annotation_tensor / 255
        batch_input = Variable(batch_img_tensor).cuda()
        batch_annotation = Variable(batch_annotation_tensor).cuda()
        optimizer.zero_grad()
        output = fcn.forward(batch_input)
        err_seg = criterion(output, batch_annotation)
        err_seg.backward()
        optimizer.step()
inference code:
img = Image.open('...')
size = 256,256
img = img.resize((256,256))
input_tensor = torch.FloatTensor(1,3,256,256)
input_tensor[0] = torch.from_numpy(np.asarray(img))
input_tensor = input_tensor.float()
input_var = Variable(input_tensor)
fcn.eval()
out = fcn.forward(input_var.cuda())
from matplotlib import pyplot as plt
import numpy
out = out.cpu()
out_img = out.data[0]
plt.imshow(out_img.numpy()[0])
|
st118778
|
do you perform the same image pre-processing during training and testing?
To double check, during training you could save some example images of the training predictions (making sure that you are saving after softmax if using NLLLoss, keeping only the prediction channel, etc). This could show if you are wrongly pre-processing your test images.
|
st118779
|
I made sure the preprocessing steps are the same for training and testing, so is there a bug in my training or testing code?
|
st118780
|
Maybe. I didn’t check your code in detail, but are you using a pre-trained network?
If yes, make sure that the image pre-processing is the same (you are not subtracting the ImageNet mean/std, so if you use models from the model zoo, it won’t work).
Also, it would make your life easier (and the code would be faster as well) if you implemented a Dataset and used a DataLoader from PyTorch (it would load the images using multiple threads).
|
st118781
|
I need the annotation tensors to be batchSize x 1 x 256 x 256 with 0 and 1 pixel values. But when I tried to load them using the Dataset class, the tensors just become RGB-channel images. What should I change?
Besides, is there any difference between the following two ways of saving models? I used the second one:
torch.save(my_net.state_dict(), "my_net.pth")
torch.save( my_net,"my_net.pth")
|
st118782
|
To load the images without converting them to RGB, just don’t pass the .convert('RGB') option to PIL.
The difference between the two is that in the first one you save only the parameters, while in the second you save the full structure. The first one is advised, because it keeps the model structure explicit (you need to have a file that defines it), but both should work just fine.
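For reference, a short sketch of both save/load patterns (my_net is the model from your snippet; the file names are just illustrative):
import torch
# 1) save/load only the parameters (recommended)
torch.save(my_net.state_dict(), "my_net.pth")
my_net.load_state_dict(torch.load("my_net.pth"))  # my_net must first be constructed from its class
# 2) save/load the full object (pickles the structure as well)
torch.save(my_net, "my_net_full.pth")
my_net2 = torch.load("my_net_full.pth")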
|
st118783
|
And there are several Dropout layers in my model, so in the testing code,
I need to run
model.eval()
before
model.forward() ?
Any difference?
|
st118784
|
Yes, you need to put your model in eval() mode before testing it. The difference can be huge if you leave it in train mode and your model has batch norm.
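For reference, a minimal sketch of the switch (the model here is just a stand-in with dropout and batch norm):
import torch
import torch.nn as nn
from torch.autograd import Variable
model = nn.Sequential(nn.Linear(10, 10), nn.Dropout(0.5), nn.BatchNorm1d(10))
x = Variable(torch.randn(4, 10))
model.train()   # dropout active, batch norm uses batch statistics
out_train = model(x)
model.eval()    # dropout disabled, batch norm uses running statistics
out_test = model(x)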
|
st118785
|
When I try the following
x = Variable(torch.Tensor([1]), requires_grad=True)
w = Variable(torch.Tensor([2]), requires_grad=True)
b = Variable(torch.Tensor([3]), requires_grad=True)
y = w * x + b
y.backward() #calculates gradients
print(w.grad) # dy/dw = x --> 1
it works. but when I build one more variable like this,
z = 2 * y
z.backward()
I get the following error
RuntimeError Traceback (most recent call last)
<ipython-input-49-c111445625f1> in <module>()
1 z = 2 * y
2
----> 3 z.backward()
/home/paarulakan/environments/python/pytorch-py35/lib/python3.5/site-packages/torch/autograd/variable.py in backward(self, gradient, retain_variables)
144 'or with gradient w.r.t. the variable')
145 gradient = self.data.new().resize_as_(self.data).fill_(1)
--> 146 self._execution_engine.run_backward((self,), (gradient,), retain_variables)
147
148 def register_hook(self, hook):
/home/paarulakan/environments/python/pytorch-py35/lib/python3.5/site-packages/torch/autograd/_functions/basic_ops.py in backward(self, grad_output)
46
47 def backward(self, grad_output):
---> 48 a, b = self.saved_tensors
49 return grad_output.mul(b), maybe_view(grad_output.mul(a), self.b_size)
50
RuntimeError: Trying to backward through the graph second time, but the buffers have already been freed. Please specify retain_variables=True when calling backward for the first time.
What is wrong with that?
|
st118786
|
Just add retain_variables=True to the first backward() call when you intend to perform backpropagation multiple times through the same variables.
y.backward(retain_variables=True) #calculates gradients
|
st118787
|
Hi everyone,
I am playing with a RNN in pytorch. I made the following example:
import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
import matplotlib.pyplot as plt
batch_size = 10
n_examples = 200
sequence_len = 15 # This is equivalent to time steps of the sequence in keras
sequence_size = 13
hidden_size = 30
class my_rnn(nn.Module):
    def __init__(self, input_size=2, hidden_size=20, num_layers=3, output_size=1,
                 batch_size=10):
        super(my_rnn, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.output_size = output_size
        self.batch_size = batch_size
        self.rnn = nn.RNN(input_size=self.input_size, hidden_size=self.hidden_size,
                          num_layers=self.num_layers, batch_first=True)
        # The last layer is applied on the last output only of the RNN (not like
        # TimeDistributedDense in Keras)
        self.linear_layer = nn.Linear(self.hidden_size, self.output_size)
        self.hidden = Variable(torch.zeros((self.num_layers, self.batch_size, self.hidden_size)))

    def forward(self, input_sequence):
        out_rnn, self.hidden = self.rnn(input_sequence, self.hidden)
        in_linear = out_rnn[:, -1, :]
        final_output = self.linear_layer(in_linear)
        return final_output

    def init_hidden(self):
        self.hidden = Variable(torch.zeros((self.num_layers, self.batch_size, self.hidden_size)))
rnn = my_rnn(input_size=sequence_size, hidden_size=hidden_size, num_layers=3, batch_size=batch_size)
#input_data = Variable(torch.randn((batch_size, sequence_len, sequence_size)))
h0 = Variable(torch.zeros((3, batch_size, hidden_size)))
demo_target = Variable(torch.randn((batch_size, 1)))
loss_fn = nn.MSELoss()
optimizer = optim.SGD(rnn.parameters(), lr=0.001, momentum=0.9)
for i in range(10):
    input_data = Variable(torch.randn((batch_size, sequence_len, sequence_size)))
    output = rnn(input_data)
    #print output
    optimizer.zero_grad()
    loss = loss_fn(output, demo_target)
    loss.backward()
    #loss.backward(retain_variables=True)
    optimizer.step()
    print loss.data[0]
However, this gives me the following error:
RuntimeError: Trying to backward through the graph second time, but the buffers have already been freed. Please specify retain_variables=True when calling backward for the first time.
When I set retain_variables=True in the backward call, it seems to work (I am not yet sure whether it works correctly or not).
My questions are:
What does retain_variables=True mean?
Is my code correct in general, or am I missing something (that gave rise to this error)?
Assuming my code is correct, then why in
https://github.com/pytorch/examples/blob/master/word_language_model/main.py
the retain_variables=True option is not set?
Thank you
|
st118788
|
I guess the problem is caused by self.hidden. You should build a Variable of the hidden data and feed it to the network in every forward step. For example:
import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
import matplotlib.pyplot as plt
batch_size = 10
n_examples = 200
sequence_len = 15 # This is equivalent to time steps of the sequence in keras
sequence_size = 13
hidden_size = 30
class my_rnn(nn.Module):
    def __init__(self, input_size=2, hidden_size=20, num_layers=3, output_size=1,
                 batch_size=10):
        super(my_rnn, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.output_size = output_size
        self.batch_size = batch_size
        self.rnn = nn.RNN(input_size=self.input_size, hidden_size=self.hidden_size,
                          num_layers=self.num_layers, batch_first=True)
        # The last layer is applied on the last output only of the RNN (not like
        # TimeDistributedDense in Keras)
        self.linear_layer = nn.Linear(self.hidden_size, self.output_size)
        self.hidden = Variable(torch.zeros((self.num_layers, self.batch_size, self.hidden_size)))

    def forward(self, input_sequence):
        out_rnn, self.hidden = self.rnn(input_sequence, self.hidden)
        in_linear = out_rnn[:, -1, :]
        final_output = self.linear_layer(in_linear)
        return final_output

    def init_hidden(self):
        self.hidden = Variable(torch.zeros((self.num_layers, self.batch_size, self.hidden_size)))
rnn = my_rnn(input_size=sequence_size, hidden_size=hidden_size, num_layers=3, batch_size=batch_size)
#input_data = Variable(torch.randn((batch_size, sequence_len, sequence_size)))
h0 = Variable(torch.zeros((3, batch_size, hidden_size)))
demo_target = Variable(torch.randn((batch_size, 1)))
loss_fn = nn.MSELoss()
optimizer = optim.SGD(rnn.parameters(), lr=0.001, momentum=0.9)
for i in range(10):
    input_data = Variable(torch.randn((batch_size, sequence_len, sequence_size)))
    rnn.init_hidden() # init hidden Variable before every forward step
    output = rnn(input_data)
    #print output
    optimizer.zero_grad()
    loss = loss_fn(output, demo_target)
    loss.backward()
    #loss.backward(retain_variables=True)
    optimizer.step()
    print loss.data[0]
However, the better way is to keep the hidden data, rebuild a Variable containing this data, and then feed it to the forward function again. Like this:
github.com
Yugnaynehc/pytorch-fun/blob/master/word_language_model/main.py#L52
    return min(1, clip / (totalnorm + 1e-6))

def train():
    total_loss = 0
    start = time.time()
    hidden = model.init_hidden(batch_size)
    for batch, i in enumerate(range(0, train_data.size(0)-1, bptt_len)):
        source, target = get_batch(train_data, i)
        model.zero_grad()
        output, hidden = model(source, hidden)
        hidden = repackage_hidden(hidden)
        loss = loss_fn(output, target)
        loss.backward()
        clip_lr = lr * clip_gradient(model, clip_coefficient)
        for p in model.parameters():
            p.data.sub_(clip_lr, p.grad.data)
        total_loss += loss.data[0]
|
st118789
|
Thank you for your reply @cyyyyc123. I tried moving the hidden state out of the class, as you mentioned, but the problem is still the same.
This is my new code:
import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
import matplotlib.pyplot as plt
batch_size = 32
n_examples = 200
sequence_len = 30 # This is equivalent to time steps of the sequence in keras
sequence_size = 1
hidden_size = 50
nb_layers = 2
target_size = 1
# Generate noisy sine-wave
# When I want to complexify (many to one), just swap the x and y
NSAMPLE = 100000
f = 2 # the frequency of the signal
x_data = np.float32(np.arange(NSAMPLE))
r_data = np.float32(np.random.uniform(-0.2, 0.2, NSAMPLE))
y_data = np.float32(np.sin(2 * np.pi * f* (x_data/NSAMPLE)) + r_data)
# Build the training data
X = []
y = []
for i in range(0, y_data.shape[0], sequence_len):
    if i+sequence_len < y_data.shape[0]:
        X.append(x_data[i:i+sequence_len])
        y.append(y_data[i+sequence_len]) # next point
X = np.array(X)
y = np.array(y)
class my_rnn_1(nn.Module):
    def __init__(self, input_size=2, hidden_size=20, num_layers=3, output_size=1,
                 batch_size=10):
        super(my_rnn_1, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.output_size = output_size
        self.batch_size = batch_size
        self.rnn = nn.RNN(input_size=self.input_size, hidden_size=self.hidden_size,
                          num_layers=self.num_layers, batch_first=True)
        # The last layer is applied on the last output only of the RNN (not like
        # TimeDistributedDense in Keras)
        self.linear_layer = nn.Linear(self.hidden_size, self.output_size)

    def forward(self, input_sequence, hidden):
        out_rnn, hidden = self.rnn(input_sequence, hidden)
        in_linear = out_rnn[:, -1, :]
        final_output = self.linear_layer(in_linear)
        return final_output, hidden

    def init_hidden(self):
        return Variable(torch.zeros((self.num_layers, self.batch_size, self.hidden_size))).cuda()
def get_batch(X, Y, i, evaluation=False):
    global batch_size
    seq_len = min(batch_size, len(X) - 1 - i)
    data = X[i:i+seq_len, :]
    data = data.view(data.size(0), data.size(1), 1)
    target = Y[i+1:i+1+seq_len].view(-1, 1)
    return data, target
rnn = my_rnn_1(input_size=sequence_size, hidden_size=hidden_size, num_layers=3, batch_size=batch_size).cuda()
demo_target = Variable(torch.randn((batch_size, 1))).cuda()
X = Variable(torch.FloatTensor(X)).cuda()
y = Variable(torch.FloatTensor(y)).cuda()
loss_fn = nn.MSELoss()
# optimizer = optim.SGD(rnn.parameters(), lr=0.0001, momentum=0.9)
optimizer = optim.RMSprop(rnn.parameters())
for epoch in range(20):
    hidden = rnn.init_hidden()
    total_loss = 0
    for batch, i in enumerate(range(0, X.size(0) - 1, batch_size)):
        data, targets = get_batch(X, y, i)
        if data.size(0) < batch_size:
            break
        output, hidden = rnn(data, hidden)
        #print output
        optimizer.zero_grad()
        loss = loss_fn(output, targets)
        loss.backward()
        # loss.backward(retain_variables=True)
        optimizer.step()
        total_loss += loss.data[0]
        if i % 10 == 0:
            print "Loss = ", total_loss
    print "Epoch " + str(epoch) + " -- loss = " + str(total_loss)
    print "-"*100
This still gives the same error:
Traceback (most recent call last):
File "/localdata/mohammom/gipsa-lig/experiments/tensorflow_tutorial/temp.py", line 132, in <module>
loss.backward()
File "/localdata/mohammom/anaconda2/lib/python2.7/site-packages/torch/autograd/variable.py", line 146, in backward
self._execution_engine.run_backward((self,), (gradient,), retain_variables)
File "/localdata/mohammom/anaconda2/lib/python2.7/site-packages/torch/autograd/function.py", line 209, in _do_backward
result = super(NestedIOFunction, self)._do_backward(gradients, retain_variables)
File "/localdata/mohammom/anaconda2/lib/python2.7/site-packages/torch/autograd/function.py", line 217, in backward
result = self.backward_extended(*nested_gradients)
File "/localdata/mohammom/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/rnn.py", line 275, in backward_extended
input, hx, weight, output = self.saved_tensors
File "/localdata/mohammom/anaconda2/lib/python2.7/site-packages/torch/autograd/function.py", line 235, in saved_tensors
flat_tensors = super(NestedIOFunction, self).saved_tensors
RuntimeError: Trying to backward through the graph second time, but the buffers have already been freed. Please specify retain_variables=True when calling backward for the first time.
Any advice?
|
st118790
|
Just a one-line change: move the
hidden = rnn.init_hidden()
into the inner loop.
|
st118791
|
Your linear layer is not doing the same as a TimeDistributedDense in Keras. You are only using the last time step, and ditching everything else.
Have a look at my TimeDistributed wrapper here:
Any PyTorch function can work as Keras' Timedistributed? vision
Hey,
I developed a PyTorch module that mimics the TimeDistributed wrapper of Keras a few days ago:
import torch.nn as nn

class TimeDistributed(nn.Module):
    def __init__(self, module, batch_first=False):
        super(TimeDistributed, self).__init__()
        self.module = module
        self.batch_first = batch_first

    def forward(self, x):
        if len(x.size()) <= 2:
            return self.module(x)
        # Squash samples and timesteps into a single axis
        x_reshape = x.…
|
st118792
|
@cyyyyc123 Should I reset the hidden state of the RNN after every batch?
In the example you mentioned before (word_language_model), the hidden state is reset at the beginning of each epoch
|
st118793
|
@miguelvr Thank you for mentioning this issue; I am aware of it.
I took a look at your implementation of the TimeDistributed wrapper, many thanks! Very helpful.
|
st118794
|
It depends on your purpose: if you want the hidden state to keep the information of the whole epoch, you can just initialize the hidden state at the beginning of the epoch; if you want every mini-batch in an epoch to have the same initial hidden state, you should initialize the hidden state at the beginning of every mini-batch. However, the reason why your code fails but the reference code succeeds is the use of ‘hidden = repackage_hidden(hidden)’ in the reference code: it reconstructs a Variable for the next iteration. You can add one line
hidden = Variable(hidden.data)
below
output, hidden = rnn(data, hidden)
to make your code work.
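For reference, a sketch of what such a repackage_hidden helper can look like (this mirrors the word-language-model example referenced above; treat it as illustrative):
from torch.autograd import Variable
def repackage_hidden(h):
    # detach hidden states from the graph of the previous batch
    if isinstance(h, Variable):
        return Variable(h.data)
    else:  # LSTMs return a tuple (h, c)
        return tuple(repackage_hidden(v) for v in h)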
|
st118795
|
Actually, this is not really a PyTorch question.
I was trying to translate PyTorch’s docs to Chinese (it’s rather popular in China, as can be seen in Google Trends).
I copied the docs from torch/docs and created a new repo, then added torch and torchvision to requirements.txt. It builds successfully on my computer with make html, and I get the same HTML as http://pytorch.org/docs/ . It also builds on Read the Docs, but something seems wrong: it doesn’t look as I expect.
The project homepage on Read the Docs: http://pytorch-zh.readthedocs.io/en/latest/
Build log: https://readthedocs.org/projects/pytorch-zh/builds/5254356/
Does anyone know why? I am new to Sphinx and Read the Docs; it doesn’t seem to be a PyTorch problem.
---------------------solved--------------------------------------
After using some tricks, it finally works, though it’s not as nice as http://pytorch.org/docs
I copied the theme.css built on my computer, added it to _static/css, and modified conf.py to add it to html_context.css_file.
|
st118796
|
Hi, the following code outputs 10, but I do not understand how the code arrives at this result. Would anyone be able to explain what the logic is for this result?
import torch
from torch.autograd import Variable
val = torch.FloatTensor([1])
x = Variable(val,requires_grad=True)
y = x * 2
z = y ** 2
torch.autograd.backward([z],[val],retain_variables=True)
torch.autograd.backward([y],[val],retain_variables=True)
g = x.grad
print(g)#> Outputs : 10
|
st118797
|
Because if you use retain_variables=True, the gradient buffer is kept and accumulates gradients across calls: dz/dx = 8 and dy/dx = 2, so you get 8 + 2 = 10.
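Re-running the snippet above with the intermediate value printed makes the accumulation explicit (x = 1, y = 2x, z = y**2 = 4x**2, so dz/dx = 8x = 8 and dy/dx = 2):
import torch
from torch.autograd import Variable
x = Variable(torch.FloatTensor([1]), requires_grad=True)
y = x * 2
z = y ** 2
torch.autograd.backward([z], [torch.FloatTensor([1])], retain_variables=True)
print(x.grad)  # 8, i.e. dz/dx
torch.autograd.backward([y], [torch.FloatTensor([1])], retain_variables=True)
print(x.grad)  # 10 = 8 + 2, i.e. dz/dx + dy/dx accumulated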
|
st118798
|
I guess the gradient of any Variable, such as x, is stored in a separate buffer, and when we call backward(), the system adds each gradient to the buffer it belongs to. So dz/dx and dy/dx are put into the same buffer of the Variable x.
|
st118799
|
But shouldn’t they be multiplied (rather than added), since we are using the chain rule? Thanks
|
st118800
|
I suppose it has nothing to do with the chain rule here. The chain rule is dz/dx = dz/dy * dy/dx, but here we have dz/dx and dy/dx separately. What your code has done is: ‘calculate the gradient of the current graph from node z, and keep the gradient; then calculate the gradient of the current graph from node y, and print the gradient buffer of node x’.
|
st118801
|
OK thanks I am clear now. For some reason I was assuming (wrong assumption) that a double application of
torch.autograd.backward([z],[val],retain_variables=True)
would calculate the second derivative which would require a multiplication, and hence was inferring that the same would apply to :
torch.autograd.backward([z],[val],retain_variables=True)
torch.autograd.backward([y],[val],retain_variables=True)
Now I am clear. Thanks
|
st118802
|
Hi, is there a way to get the class and the original filename of the transformed image when calling model(x)
on batches from a torch.utils.data.DataLoader?
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
val_loader = torch.utils.data.DataLoader(
    datasets.ImageFolder(valdir, transforms.Compose([
        transforms.Scale(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        normalize,
    ])),
    batch_size=1, shuffle=False,
    num_workers=4, pin_memory=True)
A torch.utils.data.DataLoader is created here, and I want to get the input result:
for i, (input, target) in enumerate(val_loader):
    target_var = torch.autograd.Variable(target)
    target = target.cuda(async=True)
    x = torch.autograd.Variable(input, volatile=True)
    target_var = torch.autograd.Variable(target, volatile=True)
    # want to print the input variable's image name and class here, how?
    f2 = model(x)
I want to print the input variable’s image name and class before the model(x) call.
Many thanks!
|
st118803
|
The source code of ImageFolder is simple and easy to understand; I would strongly advise you to have a look at it.
Here is what you want:
class MyImageFolder(ImageFolder):
    def __getitem__(self, index):
        return super(MyImageFolder, self).__getitem__(index), self.imgs[index]  # return image path

val_dataloader = t.utils.data.DataLoader(val_dataset)
for ii, data in enumerate(val_dataloader):
    (input, label), (path, _) = data
    .....
|
st118804
|
Hi Chen, thank you so much for your reply. I tried your code, but it seems it doesn’t work. My code is this:
valdir = r'test1/'

class MyImageFolder(datasets.ImageFolder):
    def __getitem__(self, index):
        return super(MyImageFolder, self).__getitem__(index), self.imgs[index]  # return image path
        # return super(datasets.ImageFolder, self).__getitem__(index), self.imgs[index]  # return image path

gg = MyImageFolder(valdir, transforms.Compose([
    transforms.Scale(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    normalize,
]))
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
val_loader = torch.utils.data.DataLoader(
    MyImageFolder(valdir, transforms.Compose([
        transforms.Scale(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        normalize,
    ])),
    batch_size=1, shuffle=False,
    num_workers=4, pin_memory=True)

for ii, py in enumerate(val_loader):
    print(py)
    break
# for ii, (py, ik) in enumerate(val_loader):
I get an error something like this; what is the reason?
RuntimeError Traceback (most recent call last)
<ipython-input-32-fb3a2b3714cd> in <module>()
24 num_workers=4, pin_memory=True)
25
---> 26 for ii,py in enumerate(val_loader):
27 print(py)
28 break
/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.pyc in __next__(self)
172 self.reorder_dict[idx] = batch
173 continue
--> 174 return self._process_next_batch(batch)
175
176 next = __next__ # Python 2 compatibility
/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.pyc in _process_next_batch(self, batch)
196 self._put_indices()
…
return [pin_memory_batch(sample) for sample in batch]
File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 91, in pin_memory_batch
return [pin_memory_batch(sample) for sample in batch]
File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 90, in pin_memory_batch
elif isinstance(batch, collections.Iterable):
File "/usr/lib/python2.7/abc.py", line 132, in __instancecheck__
if subclass is not None and subclass in cls._abc_cache:
File "/usr/lib/python2.7/_weakrefset.py", line 75, in __contains__
return wr in self.data
RuntimeError: maximum recursion depth exceeded in cmp
|
st118805
|
It seems to have something to do with pin_memory. What does this argument mean?
val_loader = torch.utils.data.DataLoader(
    MyImageFolder(valdir, transforms.Compose([
        transforms.Scale(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        normalize,
    ])),
    batch_size=1, shuffle=False,
    num_workers=4, pin_memory=False)
    # num_workers=4, pin_memory=True)
this works
|
st118806
|
What do I have to do if I want to set pin_memory=True? I am new to Python, and the source code seems quite complex.
|
st118807
|
pin_memory copies the tensor to pinned memory, if it’s not already pinned. This makes tensor transfers to the GPU faster.
The reason it failed is probably that the paths I return are strings, not tensors. But this has been fixed, so you may update PyTorch and try again.
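For reference, a minimal sketch of pin_memory in use (the dataset contents here are purely illustrative):
import torch
from torch.utils.data import TensorDataset, DataLoader
dataset = TensorDataset(torch.randn(100, 3, 224, 224), torch.zeros(100).long())
loader = DataLoader(dataset, batch_size=4, num_workers=4, pin_memory=True)
for input, target in loader:
    # batches already sit in page-locked host memory, so the copy to the GPU is faster
    input = input.cuda(async=True)
    target = target.cuda(async=True)
    break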
|
st118808
|
Hi, installing PyTorch from the latest source package gives me errors like
RuntimeWarning: Parent module 'torch._thnn' not found while handling absolute import
import os
and
torch/csrc/nn/THNN_generic.cpp:6788:14: error: ‘THCudaHalfTensor’ was not declared in this scope
(THCudaHalfTensor*)arg_input->cdata(),
…
Currently my CUDA version is 7.5 on Ubuntu 14.04. Any ideas?
|
st118809
|
After checking setup.py, I found that it uses the CUDA_HOME variable. So I defined CUDA_HOME in my .bashrc file to point to the cuda-7.5 folder (previously I had only set PATH=/usr/local/cuda-7.5/bin:$PATH and LD_LIBRARY_PATH=/usr/local/cuda-7.5/lib64:$LD_LIBRARY_PATH), and that solved my problem.
|
st118810
|
MaxPool3d(return_indices=True) is completely messed up, both on CPU and GPU.
I mentioned this in a previous post, but it’s not clear to me that the problem is being addressed.
Here is an example:
pool3d = nn.MaxPool3d(kernel_size=2,stride=2,return_indices=True)
img3d = Variable(torch.rand(1,1,4,4,4))
out, indices = pool3d(img3d)
print(indices)
The output looks like this:
Variable containing:
(0 ,0 ,0 ,.,.) =
4.6117e+18 4.6117e+18
4.2950e+09 2.3224e+18
(0 ,0 ,1 ,.,.) =
4.2950e+09 4.2950e+09
-8.9845e+18 4.2950e+09
[torch.LongTensor of size 1x1x2x2x2]
The elements of indices should be in the range [0,63], so this can’t be right.
|
st118811
|
Is pytorch autograd a wrapper around torch autograd?
If yes, then is there active development happening around torch autograd? The GitHub repo does not show any recent commits going back a few months, plus no one answers the issues posted.
|
st118812
|
No, pytorch autograd is a complete re-write of torch autograd. Each one of them is maintained by different people.
|
st118813
|
Thanks.
Slightly offbeat but is the bottom layer of pytorch autograd written in torch? If yes then it might be worth exposing this version of autograd to people writing torch code.
|
st118814
|
No, it is all written in C++ / Python, so there is no Lua code involved, and it would be quite a lot of work to port it to Lua Torch
|
st118815
|
Interesting.
So the fact that pytorch is called py**torch** has nothing to do with lua’s torch modules?
|
st118816
|
It has nothing to do with Torch’s Lua modules. Torch7 itself consists of a Lua wrapper on top of a C backend, and PyTorch wraps the same C backend (which is still hosted in Torch7’s Github repository), along with a new C++ autograd engine, in a Python frontend.
|
st118817
|
From the original BatchNorm paper, we know that BN is for reducing internal covariate shift.
I just wonder how BN runs correctly here.
I checked the docs and guess:
Train ----> set Model.train()
Test -----> set Model.eval()
Am I right?
|
st118818
|
Yes, this is correct. The same applies to dropout as well. This changes which statistics BN uses (batch or global).
http://pytorch.org/docs/nn.html#torch.nn.Module.eval
|
st118819
|
Problem
Environment: python2.7; OS: ArchLinux; PyTorch_version:0.1.9_2.
$ python preprocess.py -train_src data/src-train.txt -train_tgt data/tgt-train.txt -valid_src data/src-val.txt -valid_tgt data/tgt-val.txt -save_data data/demo
Traceback (most recent call last):
File “preprocess.py”, line 1, in
import onmt
File “/home/zeng/opensource/OpenNMT-py/onmt/__init__.py”, line 2, in
import onmt.Models
File “/home/zeng/opensource/OpenNMT-py/onmt/Models.py”, line 5, in
from torch.nn.utils.rnn import pad_packed_sequence as unpack
ImportError: No module named utils.rnn
What causes this problem and how can I solve it?
Method
Uninstall PyTorch and reinstall the latest version, 0.1.10.
The whl links depend on what you’ve installed before.
pip uninstall http://download.pytorch.org/whl/cu80/torch-0.1.9.post2-cp27-none-linux_x86_64.whl
pip install http://download.pytorch.org/whl/cu80/torch-0.1.10.post2-cp27-none-linux_x86_64.whl
|
st118820
|
I ran a model using PyTorch, and after a few seconds the program terminated and showed the error message:
RuntimeError: maximum recursion depth exceeded in cmp
I can’t understand why this error happens, because at first I fed the model with MNIST data and it worked. Did anyone encounter the same problem?
|
st118821
|
This seems like a Python issue where you wrote some recursive function and it doesn’t have a termination condition.
|
st118822
|
it means that the recursion is too deep. I doubt it has anything to do with batch data.
|
st118823
|
Hello. I have some trouble using the pretrained models from torchvision.models. I evaluated ResNet18 and ResNet34 on the ImageNet validation dataset, but I got accuracy = 0 after one batch iteration, which I did not expect.
|
st118824
|
@jekbradbury, do you have any generic Chainer to PyTorch conversion scripts? I see vzhong’s repo https://github.com/vzhong/chainer2pytorch, but I was wondering if you have a more general converter (e.g. one with regexes that converts Chainer python source code to PyTorch python source code).
|
st118825
|
No, we don’t, but Victor’s repo cut conversion time for a big project of his down to a day or two so we decided it was enough. You can even keep using the Chainer trainer/etc abstractions if you patch them a little.
|
st118826
|
Is there a canonical way to exploit sparsity in batch operations torch.bmm() and torch.baddmm() yet? Interested mainly in sparse -> dense (dense -> sparse is also interesting).
If I have a batch of sparse input matrices, and a dense batch of matrices :
> mat1 = torch.zeros(4, 3, 5)
> mat1[1][1][1] = 1; mat1[2][2][2] = 1
> mat2 = torch.rand(4, 5, 6)
> torch.bmm(mat1, mat2)
Exploiting sparsity is quite an optimisation. If this isn’t available yet, which users might I liaise with to help out?
Edit: it seems there is a torch.SparseFloatTensor() available in a new release?
|
st118827
|
bmm is currently not implemented for torch.sparse.* modules. If you can store it in a list, you can simply do torch.mm(mat1_i, mat2). If your matrices are extremely sparse, this should be pretty good.
Also, torch.smm(mat1_i, mat2) is also implemented, for sparse * dense -> sparse operations.
|
st118828
|
Thanks for the tips. If my sparse matrices are in a list, do you mean something like this?
import torch
import torch.autograd as ann
mat2 = ann.Variable(torch.rand(4, 5, 6), requires_grad=True)
mats = [ann.Variable(torch.zeros(4, 3, 5), requires_grad=True) for _ in range(3)]
for i in range(len(mats)):
    result = torch.bmm(mats[i], mat2)
    print result.size()
|
st118829
|
Something like this:
import torch
x = torch.rand(5,6)
# Sparse matrix of (0, 1) = 1; (2, 1) = 2, (3, 4) = 3
sparse = torch.sparse.FloatTensor(
    torch.LongTensor([[0, 2, 3], [1, 1, 4]]),  # indices
    torch.FloatTensor([1, 2, 3]))              # values
print(x)
print(sparse.to_dense())
print(torch.mm(sparse, x))
# This won't actually save space or compute, since it's so dense,
# but it will be a sparse tensor representation.
print(torch.smm(sparse, x))
Simply construct a list of your sparse tensors, and loop over them to do the batch mm.
|
st118830
|
Thanks once more! One last thing:
sparse = torch.sparse.FloatTensor(
    torch.LongTensor([[0, 2, 3], [1, 1, 4]]),  # indices
    torch.FloatTensor([1, 2, 3]))              # values
Seems strange to me as you don’t define the sizes of the sparse matrix - it seems to arbitrarily pick the indices of the corner value as the size. What is the logic here?
|
st118831
|
Check out the tests for more in-depth use cases: https://github.com/pytorch/pytorch/blob/master/test/test_sparse.py
You can pass in a third argument to specify the size, like torch.sparse.FloatTensor(indices, values, torch.Size([4, 5]))
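For reference, a small sketch contrasting the inferred and explicit sizes (the values are made up):
import torch
i = torch.LongTensor([[0, 2, 3], [1, 1, 4]])  # indices
v = torch.FloatTensor([1, 2, 3])              # values
a = torch.sparse.FloatTensor(i, v)                        # size inferred from the largest indices: (4, 5)
b = torch.sparse.FloatTensor(i, v, torch.Size([10, 10]))  # explicit size; extra rows/columns stay empty
print(a.to_dense().size())  # torch.Size([4, 5])
print(b.to_dense().size())  # torch.Size([10, 10])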
|
st118832
|
Yes, sure! I’m not sure whether I’m familiar enough with PyTorch internals to be able to help. But I can try anyway.
|
st118833
|
I’d like to collaborate on writing a wrapper for cusparse, if you folks still need a hand.
|
st118834
|
@siddharthachandra have a look at https://github.com/pytorch/pytorch/pull/1147
Part of it is done.
|
st118835
|
Hi there,
I am trying to do an LU decomposition of a matrix as follows:
Q = 1e-8*torch.eye(4).double()
G = torch.eye(4)
Q_LU = Q.btrifact()
when I run, I get
AttributeError Traceback (most recent call last)
<ipython-input-37-e3cdbfd9e43b> in <module>()
19 Q = 1e-8*torch.eye(4).double()
20 G = torch.eye(4)
---> 21 Q_LU = Q.btrifact()
22
23 # G_invQ_GT
AttributeError: 'DoubleTensor' object has no attribute 'btrifact'
Can someone help me out? Thanks!
|
st118836
|
@smth, I just recloned the latest commit on the github repo and recompiled but I still have the same problem. How do I check pytorch’s version on my system?
|
st118837
|
I downloaded the pretrained weights for resnet152 using Python 3.5 and tested the code below with both Python 2.7 and Python 3.5. When I ran it in Python 2.7, it caused an out-of-memory error (it requires more than 12GB).
resnet = torchvision.models.resnet152(pretrained=True)
for param in resnet.parameters():
    param.requires_grad = False  # It seems to not work in Python 2.7
images = torch.randn(128, 3, 224, 224)
resnet = resnet.cuda()
images = images.cuda()
outputs = resnet(Variable(images)) # requires 2.3GB in Python 3.5
Do you have any idea why this is happening?
|
st118838
|
Which version do you use? I tested this code under pytorch-0.1.10-post2 with python-2.7.13, and everything works fine.
|
st118839
|
@yunjey I’m looking into this. This seems like a pytorch v0.1.10 vs v0.1.11 problem, and not a python 2.7 vs 3.5 problem. For now you can work around it using this line:
outputs = resnet(Variable(images, volatile=True))
I will be tracking this issue here: https://github.com/pytorch/pytorch/issues/1184
|
st118840
|
I want to customize my loss function, as simple as the following.
mb_out is the output from the model’s forward computation, and it has size (batch_size, 1). The outputs should all reach one in the best case, so I define the following loss function. But it seems the loss is not decreasing during training, so I am wondering if this way of defining the loss is wrong.
optimizer = optim.Adam(model.parameters(), lr=0.01)
loss = (1 - torch.sum(mb_out, 1)).sum() / float(batch_size)
optimizer.zero_grad()
loss.backward()
optimizer.step()
|
st118841
|
ZeweiChu:
loss = (1 - torch.sum(mb_out, 1)).sum() / float(batch_size)
It looks like your loss is wrongly defined. Theoretically, you are doing regression, so you could use the MSE loss.
But if you want to modify your loss, it should look like this:
def lossOne(prediction):
    batch_size = prediction.size(0)
    gt = torch.ones(batch_size).type_as(prediction)
    loss = (gt - prediction).sum() / float(batch_size)
    return loss
The main problem in your implementation was the double sum. Interpreting your loss, it asks that the sum of the outputs in the current batch be 1, which does not make sense. In my implementation, each output is pushed towards one.
But this loss would still not work well, in my opinion. Why? Because it can take values in [-infinity, +infinity], whereas a proper loss would have range [0, +infinity]. This is why it is recommended to use MSELoss, which has the right range of values. As the target, just use a vector of all ones, like in the first line of the function.
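For reference, a minimal sketch of that MSE approach (mb_out here is just a random stand-in for the model output):
import torch
import torch.nn as nn
from torch.autograd import Variable
criterion = nn.MSELoss()
mb_out = Variable(torch.rand(8, 1), requires_grad=True)  # stand-in for the model output
target = Variable(torch.ones(8, 1))                      # every output should reach one
loss = criterion(mb_out, target)  # bounded below by 0, minimal when all outputs are 1
loss.backward()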
|
st118842
|
In the documentation, output of torch.nn.LSTM is described as follows.
output (seq_len, batch, hidden_size * num_directions): tensor containing the output features (h_t) from the last layer of the RNN, for each t. If a torch.nn.utils.rnn.PackedSequence has been given as the input, the output will also be a packed sequence.
Previously I thought output is a tensor containing the output features, which would actually be o_t, not h_t. Though, since o_t is not an input to the LSTM, the description looks reasonable.
But in a few examples I have seen, to get all the hidden state representations for each word in a sequence (language modeling task), we loop through all the words in that sequence.
I asked a similar question before - How to retrieve hidden states for all time steps in LSTM or BiLSTM? - where I got the answer from @smth that, “To get individual hidden states, you have to indeed loop over each individual timestep and collect the hidden states”. But now I feel the individual hidden states for each timestep are already provided. (Please correct me if I am wrong.)
Moreover, this leads me to another question: if we need o_t from the last layer of the RNN, how can we get it? Can anyone shed some light on this?
|
st118843
|
Because sometimes you might need the hidden states of the lower layers too, and these are not returned from the module (remember that nn.LSTM supports multi-layer LSTMs).
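For reference, a small sketch of what nn.LSTM returns (the sizes are what matter here):
import torch
import torch.nn as nn
from torch.autograd import Variable
lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2)
x = Variable(torch.randn(5, 3, 10))   # (seq_len, batch, input_size)
output, (h_n, c_n) = lstm(x)
print(output.size())  # (5, 3, 20): h_t of the last layer, for every time step
print(h_n.size())     # (2, 3, 20): final hidden state of every layer, last time step only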
|
st118844
|
Hi, I am just beginning to learn deep learning in PyTorch. I am running the following code, which I got from the PyTorch tutorial by Justin Johnson.
#With autograd
import torch
from torch.autograd import Variable
dtype = torch.cuda.FloatTensor
N, D_in, H, D_out = 64, 1000, 100, 10
x = Variable(torch.randn(N, D_in).type(dtype), requires_grad = False)
y = Variable(torch.randn(N, D_out).type(dtype), requires_grad = False)
w1 = Variable(torch.randn(D_in, H).type(dtype), requires_grad = True)
w2 = Variable(torch.randn(H, D_out).type(dtype), requires_grad = True)
learning_rate = 1e-6
for t in range(500):
    y_pred = x.mm(w1).clamp(min = 0).mm(w2)
    loss = (y_pred - y).pow(2).sum()
    print(t, loss.data[0])
    #w1.grad.data.zero_()
    #w2.grad.data.zero_()
    loss.backward()
    #print w1.grad.data
    #print w2.grad.data
    w1.data -= learning_rate * w1.grad.data
    w2.data -= learning_rate * w2.grad.data
But it seems it is exploding, and after about 123 steps the loss becomes nan. This is the output:
(0, 31723518.0)
(1, 28070452.0)
(2, 8525556.0)
(3, 14738816.0)
(4, 9347755.0)
(5, 12841774.0)
(6, 18114290.0)
(7, 6447365.5)
(8, 11224685.0)
(9, 9882719.0)
(10, 2951912.0)
(11, 2978006.25)
(12, 6616687.5)
(13, 7743705.0)
(14, 5883046.5)
(15, 3643038.25)
(16, 2570257.25)
(17, 2455251.0)
(18, 2659530.75)
(19, 2724341.5)
(20, 2513530.25)
(21, 2057666.625)
(22, 1586186.375)
(23, 1254101.625)
(24, 1110446.375)
(25, 1110734.0)
(26, 1145980.0)
(27, 1071132.875)
(28, 910926.4375)
(29, 782463.5)
(30, 719357.125)
(31, 717793.9375)
(32, 761821.125)
(33, 756986.375)
(34, 682688.4375)
(35, 646783.5625)
(36, 679672.0)
(37, 676811.0)
(38, 600790.3125)
(39, 631020.375)
(40, 692508.6875)
(41, 696700.5625)
(42, 615305.625)
(43, 504780.4375)
(44, 505154.0)
(45, 507697.0625)
(46, 498239.1875)
(47, 478827.5)
(48, 531659.3125)
(49, 472687.5)
(50, 433654.9375)
(51, 504356.59375)
(52, 475822.34375)
(53, 465258.40625)
(54, 490428.53125)
(55, 542419.6875)
(56, 480332.28125)
(57, 456323.03125)
(58, 548866.5)
(59, 460200.1875)
(60, 582967.375)
(61, 467767.125)
(62, 399487.1875)
(63, 525414.75)
(64, 563015.5)
(65, 630127.125)
(66, 339907.625)
(67, 485001.0625)
(68, 541414.6875)
(69, 637931.8125)
(70, 424327.5)
(71, 444804.25)
(72, 542814.6875)
(73, 624015.6875)
(74, 405953.71875)
(75, 523452.90625)
(76, 604742.4375)
(77, 624313.0625)
(78, 665899.8125)
(79, 796917.625)
(80, 1059727.875)
(81, 1661096.375)
(82, 3876985.5)
(83, 5157832.0)
(84, 2041864.25)
(85, 5117962.0)
(86, 5582782.0)
(87, 9489012.0)
(88, 28304358.0)
(89, 92396984.0)
(90, 135757312.0)
(91, 30141958.0)
(92, 36246224.0)
(93, 63904096.0)
(94, 27171200.0)
(95, 22396498.0)
(96, 18266130.0)
(97, 25967810.0)
(98, 23575290.0)
(99, 8453866.0)
(100, 13056855.0)
(101, 7837615.5)
(102, 10242168.0)
(103, 8700571.0)
(104, 178546768.0)
(105, 311015104.0)
(106, 264007536.0)
(107, 31766490.0)
(108, 79658920.0)
(109, 19210790.0)
(110, 20177744.0)
(111, 24349004.0)
(112, 158815472.0)
(113, 51590388.0)
(114, 42294844.0)
(115, 20198332.0)
(116, 26488356.0)
(117, 14971826.0)
(118, 296145664.0)
(119, 11408661504.0)
(120, 472693047296.0)
(121, 1.5815737104924672e+16)
(122, 2.7206068612442637e+30)
(123, inf)
(124, nan)
(125, nan)
... (iterations 126 through 498 all print nan as well) ...
(499, nan)
Can someone please show me what is wrong here?
|
st118845
|
Uncomment the first pair of comments in the for loop. The gradient buffers have to be manually reset before fresh gradients are calculated.
Your for loop should be:
for t in range(500):
    y_pred = x.mm(w1).clamp(min = 0).mm(w2)
    loss = (y_pred - y).pow(2).sum()
    print(t, loss.data[0])
    w1.grad.data.zero_()
    w2.grad.data.zero_()
    loss.backward()
    #print w1.grad.data
    #print w2.grad.data
    w1.data -= learning_rate * w1.grad.data
    w2.data -= learning_rate * w2.grad.data
|
st118846
|
In the latest version of PyTorch, uncommenting those two lines gives an error; that is why I commented them out.
AttributeError: 'NoneType' object has no attribute 'data'
|
st118847
|
@nafizh1
Try this:
# Manually zero the gradients before running the backward pass
if t:
    w1.grad.data.zero_()
    w2.grad.data.zero_()
I had this issue too and found out that in the latest version the grad data is not created (not just uninitialized, it is not even created) until backward is called and gradients need to be computed. The above code basically checks:
if it is the first iteration (i.e. t == 0), don’t zero the data; otherwise zero out the grad data.
|
st118848
|
Thanks, that solved the problem. But I was wondering if there is a more elegant solution. Also, why do you have to zero the grad data every time in the loop? I assume that is something that should be taken care of automatically.
|
st118849
|
Looking at examples like https://github.com/pytorch/examples/blob/master/mnist/main.py, the canonical way to do this appears to be calling optimizer.zero_grad() unconditionally.
|
st118850
|
I also have a question: why do we have to run
w1.grad.data.zero_()
w2.grad.data.zero_()
before
loss.backward()
at every step?
|
st118851
|
I suppose it is a design choice. There is a buffer that keeps the accumulated sum of every parameter’s grad. At every training step, loss.backward() calculates the current grads for every parameter and adds them to that buffer. After that, we can use optimizer.step() to apply the grads stored in the buffer to update the parameters. So if we don’t call w1.grad.data.zero_() or optimizer.zero_grad() before updating the parameters, we apply the accumulated grads, which is usually wrong.
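A tiny sketch of the accumulation behaviour described above (the numbers are illustrative):
import torch
from torch.autograd import Variable
w = Variable(torch.ones(1), requires_grad=True)
loss = (3 * w).sum()
loss.backward()
print(w.grad.data)  # 3
loss = (3 * w).sum()
loss.backward()      # without zeroing, the new grad is added on top of the old one
print(w.grad.data)  # 6
w.grad.data.zero_()  # reset the buffer before the next step, as recommended above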
|
st118852
|
Trying to understand modules/variables/autograd a bit better, so I made a loss function that penalizes a point by 1 if it is more than 0.5 away from the target and 0 otherwise. I used the following forward:
def forward(self, input_var, target_var):
    D = torch.norm(input_var - target_var)
    return sum(D > 0.5)
But I get the error RuntimeError: there are no graph nodes that require computing gradients when I run backward() on this loss. I don’t seem to release any data from variables or use numpy arrays. So I think I’m misunderstanding something fundamental about autograd. Any help would be appreciated. Thanks
|
st118853
|
The variable you want to train needs to be created with requires_grad=True. (Equivalently, you can make it an nn.Parameter rather than a Variable)
|
st118854
|
Thanks for the recommendation. I think I get the need for requires_grad=True, but I’m still not sure I understand when. For example, the following runs:
def forward(self, x, y):
    return sum(torch.abs(x - y))
But this gives the runtime error:
def forward(self, x, y):
    return sum(torch.abs(x - y) > 0.5)
What is it about the ‘>’ that causes the error?
|
st118855
|
@scoinea the problem in the second snippet is that comparison is not a differentiable operation, so you can’t compute its gradient
|
st118856
|
If I need to copy a Variable that was created by an operation (rather than by the user), and have the copy use independent memory, what can I do? Thank you!
Thank you!
|
st118857
|
How do you want to use the copy? If you need to make it a new leaf then you can do Variable(other.data.clone()); otherwise you might just call .clone() on the Variable (but remember that this will keep a reference to all the history!).
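For reference, a minimal sketch of the two options (the names are illustrative):
import torch
from torch.autograd import Variable
a = Variable(torch.randn(3), requires_grad=True)
b = a * 2                             # Variable created by an operation
leaf_copy = Variable(b.data.clone())  # new leaf: independent memory, no history
graph_copy = b.clone()                # independent memory, but still attached to b's graph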
|
st118858
|
I have a network that has been trained on data (the input data is converted to a torch Variable so that gradients are kept track of). At test time, given new data, can I send it as a torch tensor (instead of a variable, since I don’t need to keep track of gradients) to the network?
|
st118859
|
At test time it still needs to be a Variable, but you can set volatile=True (see here) to use the graph in inference mode and keep it from saving history.
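For reference, a minimal inference sketch (the model here is just a stand-in for your trained network):
import torch
import torch.nn as nn
from torch.autograd import Variable
model = nn.Linear(10, 2)  # stand-in for the trained network
model.eval()
test_tensor = torch.randn(4, 10)                   # plain tensor holding the new data
test_input = Variable(test_tensor, volatile=True)  # wrap it; no history is saved
output = model(test_input)
print(output.data)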
|
st118860
|
Hi, I am running some fine-tuning code with a Titan X (Pascal), CUDA 8.0, cuDNN 5.1, and I set cudnn.benchmark = True in the code. Somehow when I use nvidia-smi to check the GPU, it says only around 1GB of GPU memory is used. How can I use more GPU memory? I am using Python 2.7.5; I do not think using Python 3 would help… but any suggestions?
P.S. Is there a mechanism in PyTorch to set the fraction of GPU memory that I want to allocate, just like in TensorFlow?
|
st118861
|
Unlike TF, PyTorch will only allocate as much memory as it needs. If you want to make use of more memory, increase the batch size.
|
st118862
|
I am new to machine learning, and I am trying to implement regression with polynomials.
I am aware of the polynomial regression example, but I am trying to do it differently, by creating
a submodule. Here is the code:
import torch
from torch import nn

class Model(nn.Module):
    def __init__(self, deg):
        super(Model, self).__init__()
        self.deg = deg + 1
        self.theta = nn.Parameter(torch.ones(self.deg), 1)

    def forward(self, xs):
        phi = torch.cat([xs**i for i in range(self.deg)], 1)
        res = phi @ self.theta
        # phi.mv(self.theta) # if not python 3.5 or later
        print(res)
        return res

model = Model(2)
#print(list(model.parameters()))
x = FloatTensor([1, 2, 3, 4]).unfold(0, 1, 1)
print(model(x))
Here is the error that I get:
Traceback (most recent call last):
File "01-polynomial-fitting.py", line 40, in <module>
print(model(x))
File "/home/arseni/.anaconda/envs/mlt/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "01-polynomial-fitting.py", line 29, in forward
res = phi @ self.theta
File "/home/arseni/.anaconda/envs/mlt/lib/python3.6/site-packages/torch/tensor.py", line 357, in __matmul__
return self.mv(other)
TypeError: mv received an invalid combination of arguments - got (Parameter), but expected (torch.FloatTensor vec)
If I change res = phi @ self.theta to res = phi @ self.theta.data I get the following error
Traceback (most recent call last):
File "01-polynomial-fitting.py", line 40, in <module>
print(model(x))
File "/home/arseni/.anaconda/envs/mlt/lib/python3.6/site-packages/torch/nn/modules/module.py", line 215, in __call__
var = var[0]
TypeError: 'float' object is not subscriptable
If I change it to res = self.theta @ phi.t(). I get the following error:
Traceback (most recent call last):
File "01-polynomial-fitting.py", line 41, in <module>
print(model(x))
File "/home/arseni/.anaconda/envs/mlt/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "01-polynomial-fitting.py", line 29, in forward
res = (self.theta @ phi.t())
File "/home/arseni/.anaconda/envs/mlt/lib/python3.6/site-packages/torch/autograd/variable.py", line 770, in __matmul__
return self.unsqueeze(0).mm(other).squeeze(0)
File "/home/arseni/.anaconda/envs/mlt/lib/python3.6/site-packages/torch/autograd/variable.py", line 523, in mm
output = Variable(self.data.new(self.data.size(0), matrix.data.size(1)))
AttributeError: 'FloatTensor' object has no attribute 'data'
What am I doing wrong ?
|
st118863
|
Try passing a Variable instead of a Tensor to your model.
print(model(Variable(x)))
For the moment, you can’t mix Tensors and Variables. Also, autograd expects the returned value to be a Variable, and that’s not the case when you take .data from self.theta to compute res.
|
st118864
|
Hi guys, I’m running a WRN-17 implementation with DataParallel on 4 GPUs. I’m seeing only about a 50% reduction in epoch time. Is this normal? GPU utilizations are between 88 and 98 percent, which doesn’t seem too bad.
What kind of speedup do you expect from a 4-GPU setup with DataParallel, on a reasonably large model like WRN?
Does it have anything to do with SLI setups? I’m running on Google Cloud Platform and I have no idea how they set up their GPUs.
|
st118865
|
Speed up depends on a lot of factors, including number of model parameters, data shape, and GPU bus interconnect latency. For example, I get -10% (negative) speedup for MNIST models, and 100% speedup for more complex models.
50% sounds like you are leaving some performance on the table. Try bigger models or bigger batch sizes and see if you can get better speed up.
|
st118866
|
Also be aware that you may be rate limited by the reduction phase if your model has a very large number of parameters. See my earlier question on the matter.
|
st118867
|
About the groups parameter in conv2d: how do I get asymmetric groups through this parameter?
As in the figure below from LeCun 1998, where each output feature map is based on a subset of the input maps, could we implement this with groups?
[screenshot.png, 723×335]
|