st118468
|
Thanks for your response @colesbury! I actually forward 64 images at one time, so the size of the output is 64×9577. When I want to backward, I don’t backward the whole batch at one time (because of the complexity of my cost function it is a little hard to make it parallel); it is done image by image. In other words, my understanding is that the backward function expects a gradient tensor of size 64×9577, the same size as the output, so I should build such a tensor. Am I right?
So in the backward function, I try to make a tensor with the above size and backward it!
|
st118469
|
Hi everyone, @SpandanMadan, @AjayTalati
This is a number plate recognition exercise.
Samples have multiple labels: every plate has 7 letters, giving 252 labels.
The input image size is 224×224 and the target vector width is 252 (7×36), for example:
‘X’ versus 000000000000000000000001000000000000
’A’ versus 100000000000000000000000000000000000
While training, in the 2nd minibatch the output of multilabelmarginloss() is zero, and I can’t find out the reason.
Can anyone help me?
I’ve pasted the whole source code in Multi Label Classification in pytorch.
Best regards, sincerely
|
st118470
|
In the first minibatch, the output of multilabelmarginloss() is Loss: 246.707458.
In the second, it is zero.
In[10]: output_var.data
Out[10]:
2.3264e+03 6.5879e+01 -9.7149e+00 … -2.6423e+01 2.1553e+00 2.3724e+01
2.9392e+03 8.3356e+01 -1.2624e+01 … -3.3264e+01 2.6289e+00 2.9792e+01
2.1745e+03 6.1491e+01 -9.2812e+00 … -2.4580e+01 1.9765e+00 2.2134e+01
2.4772e+03 7.0001e+01 -1.0521e+01 … -2.8103e+01 2.1341e+00 2.5243e+01
2.0916e+03 5.9231e+01 -8.8647e+00 … -2.3580e+01 1.9928e+00 2.1346e+01
[torch.cuda.FloatTensor of size 5x252 (GPU 0)]
In[11]: target_var.data
Out[11]:
0 0 0 ... 0 0 0
0 0 0 ... 0 0 0
0 0 0 ... 0 0 0
0 0 0 ... 0 0 0
0 0 0 ... 0 0 0
[torch.cuda.LongTensor of size 5x252 (GPU 0)]
|
st118471
|
t1 = Variable(torch.ones(2))
t2 = Variable(torch.zeros(2))
torch.stack([t1, t2], dim=1)
Running the above code, I received an error like this:
TypeError Traceback (most recent call last)
in ()
----> 1 torch.stack(t,1)
/users1/xwgeng/anaconda2/lib/python2.7/site-packages/torch/functional.pyc in stack(sequence, dim, out)
54 if dim < 0:
55 dim += sequence[0].dim()
---> 56 return torch.cat(list(t.unsqueeze(dim) for t in sequence), dim, out=out)
57
58
TypeError: cat() got an unexpected keyword argument ‘out’
My PyTorch version is 0.1.11+2b56711. It worked before I updated to the latest PyTorch.
|
st118472
|
It seems something went wrong with the latest commit, https://github.com/pytorch/pytorch/pull/1323, but I’m not sure because I don’t know much about C/C++.
|
st118473
|
I would expect that error if you pull the latest Python code from GitHub master but don’t rebuild the C extension. If you’re building from source, make sure you rebuild the C extension:
python setup.py clean
python setup.py install
(or substitute “build develop” for “install”)
|
st118474
|
I did as you said. Now torch.cat works fine for tensors, but fails for Variables. I think it’s because the autograd Concat function doesn’t have the keyword out. Maybe it could be fixed by this PR:
github.com/pytorch/pytorch: "add keyword `out` for autograd function Concat to match torch.cat" (pytorch:master ← chenyuntc:master, opened Apr 23, 2017 by chenyuntc, +6 -4)
|
st118475
|
The title is pretty self-descriptive as it relates to my problem. I’m encountering this issue in the context of an implementation of DQN inspired by the DQN implementation in the official tutorial (http://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html). However, I suspect the issue isn’t related to the particular context and is probably some sort of more fundamental misunderstanding. I’ve pasted a minimal script which reproduces this behavior.
import torch
from torch import Tensor, LongTensor
from torch.autograd import Variable
import torch.optim as optim
import torch.nn as nn
import torch.nn.functional as F
GAMMA = 0.99
class SimpleNet(nn.Module):
    def __init__(self):
        super(SimpleNet, self).__init__()
        self.head = nn.Linear(10, 3)
    def forward(self, x):
        return self.head(x)
current_Q = SimpleNet()
target_Q = SimpleNet()
optimizer = optim.Adam(current_Q.parameters())
initial_current_params = list(current_Q.parameters())
initial_target_params = list(target_Q.parameters())
initial_current_grad = list(current_Q.parameters())[0].grad
initial_target_grad = list(target_Q.parameters())[0].grad
state_batch = Variable(torch.randn(1,10))
action_batch = Variable(LongTensor([[0]]))
reward_batch = Variable(Tensor([0.0]))
current_state_values = current_Q(state_batch)
state_action_values = current_state_values.gather(1, action_batch)
non_final_next_states = Variable(torch.randn(1,10))
next_state_values = target_Q(non_final_next_states).gather(1, current_Q(non_final_next_states).max(1)[1])
expected_state_action_values = reward_batch + (GAMMA * next_state_values)
huber_loss = F.smooth_l1_loss(state_action_values, expected_state_action_values)
optimizer.zero_grad()
huber_loss.backward()
optimizer.step()
final_current_params = list(current_Q.parameters())
final_target_params = list(target_Q.parameters())
final_current_grad = list(current_Q.parameters())[0].grad
final_target_grad = list(target_Q.parameters())[0].grad
initial_current_params, initial_target_params, initial_current_grad, initial_target_grad, final_target_params, final_current_grad and final_target_grad are all what I would expect them to be but I don’t understand why initial_current_params is equal to final_current_params.
Any help with understanding this would be greatly appreciated.
|
st118476
|
At what point are you comparing initial_current_params with final_current_params?
At the end of your program, initial_current_params and final_current_params will have the same values because they point to the same Parameters. (The parameters are updated in place and initial_current_params isn’t a copy).
To get a copy of the parameter values:
initial_current_params = [p.data.clone() for p in current_Q.parameters()]
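As a minimal illustration of the difference between keeping references and keeping copies (this tiny Linear net and SGD optimizer are stand-ins for the sketch, not the DQN from the question):
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
net = nn.Linear(10, 3)
opt = optim.SGD(net.parameters(), lr=0.1)
refs = list(net.parameters())                         # references, updated in place
copies = [p.data.clone() for p in net.parameters()]   # snapshots of the current values
loss = net(Variable(torch.randn(4, 10))).sum()
opt.zero_grad()
loss.backward()
opt.step()
print(torch.equal(refs[0].data, net.weight.data))   # True: same underlying storage
print(torch.equal(copies[0], net.weight.data))      # False: the values have changed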
|
st118477
|
That makes a TON of sense. That was it. I figured it was something stupidly simple. Thanks!
|
st118478
|
Hi all! I want to try a weird idea: using gradient descent to make one normalized matrix A be ‘closer’ to another normalized matrix B, i.e., minimize the mean square error between A (can be viewed as the output) and B (can be viewed as the target). So I write the following toy code:
import torch
from torch.autograd import Variable
import numpy as np
np.random.seed(0)
torch.manual_seed(0)
a = np.random.randint(0, 255, (5, 5)).astype(np.float32)
a = Variable(torch.from_numpy(a), requires_grad=True)
b = np.random.randint(0, 255, (5, 5)).astype(np.float32)
b = Variable(torch.from_numpy(b))
for _ in range(10):
    a_max = a.max().repeat(a.size())
    c = a / a_max
    b_max = b.max().repeat(b.size())
    d = b / b_max
    loss = torch.nn.MSELoss()(c, d)
    print(loss.data[0])
    loss.backward()
    a.data -= 1000000 * a.grad.data
Unfortunately, I encounter a RuntimeError: can’t assign a FloatTensor to a scalar value of type float.
However, it seems that everything works well when I detach the node a_max from the current graph:
import torch
from torch.autograd import Variable
import numpy as np
np.random.seed(0)
torch.manual_seed(0)
a = np.random.randint(0, 255, (5, 5)).astype(np.float32)
a = Variable(torch.from_numpy(a), requires_grad=True)
b = np.random.randint(0, 255, (5, 5)).astype(np.float32)
b = Variable(torch.from_numpy(b))
for _ in range(10):
    a_max = a.max().repeat(a.size()).detach()
    c = a / a_max
    b_max = b.max().repeat(b.size())
    d = b / b_max
    loss = torch.nn.MSELoss()(c, d)
    print(loss.data[0])
    loss.backward()
    a.data -= 1000000 * a.grad.data
# now c and b are close enough
print(c.data)
print(d.data)
I have no idea why detaching that node makes the program executable. I would appreciate it if you could point out what causes the runtime error. Thanks a lot!
|
st118479
|
How do you randomly select a batchSize of fixed size during training?
Say I have my training input as a tensor of size [124, 3, 32, 32], during training, I want to randomly select a batch of 31 from the tensor e.g.
for epoch in range(maxIter):
    images = Variable(train_X)  # e.g. train_X is a 124 tensor of images of size [3, 32, 32]
    labels = Variable(train_Y)  # e.g. train_Y is of size [124] of labels
    optimizer.zero_grad()
    outputs = convnet(images)   # convnet is a cnn
Is there a quick and easy function that allows batch selection from the train_X and train_Y tensors?
|
st118480
|
I think I found it out. One can index the tensor the way a Python list can be indexed. For example, the first tensor would be
tensor_1 = train_X[0, :, :, :]
and so on ...
|
st118481
|
Yes, you can index a tensor like a Python list or NumPy array. If you want to sample without replacement you can use the TensorDataset and DataLoader classes:
import torch
import torch.utils.data
inputs = torch.randn(124, 3, 32, 32)
targets = torch.LongTensor(124).random_(1, 10)
dataset = torch.utils.data.TensorDataset(inputs, targets)
loader = torch.utils.data.DataLoader(dataset, batch_size=31, shuffle=True)
for images, labels in loader:
    print(images.size(), labels.size())
|
st118482
|
Thanks a lot for the info. I did not realize you could use the class torch.utils.data to load data that is not native to pytorch.
|
st118483
|
I’m implementing a reverse gradient layer and I ran into this unexpected behavior when I used the code below:
import random
import torch
import torch.nn as nn
from torch.autograd import Variable
class ReverseGradient(torch.autograd.Function):
    def __init__(self):
        super(ReverseGradient, self).__init__()
    def forward(self, x):
        return x
    def backward(self, x):
        return -x
class ReversedLinear(nn.Module):
    def __init__(self):
        super(ReversedLinear, self).__init__()
        self.linear = nn.Linear(1, 1)
        self.reverse = ReverseGradient()
    def __call__(self, x):
        return self.reverse(self.linear(x)).sum(0).squeeze(0)
        # return self.linear(x).sum(0).squeeze(0)
input1 = torch.rand(1, 1)
input2 = torch.rand(2, 1)
rl = ReversedLinear()
input1 = Variable(input1)
output1 = rl(input1)
input2 = Variable(input2)
output1 += rl(input2)
output1.backward()
I get a size-doesn’t-match error when I run the code above; however, creating a new instance of ReverseGradient() with every forward prop seems to solve the problem. I just want to understand better how the autograd.Function class works.
|
st118484
|
This is not directly relevant to the issue you’re seeing, but it’s important to note:
Certain parts of torch.autograd either currently do or will in the future assume that the gradients returned by Functions are correct (i.e., equal to the mathematical derivative). If you want to do things that violate this assumption, that’s fine – but they should be implemented as gradient hooks (var.register_hook) which can arbitrarily modify gradient values, not as Functions.
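For the gradient-reversal case in this thread, a minimal sketch of the hook-based approach (the tiny Linear layer here is just for illustration):
import torch
import torch.nn as nn
from torch.autograd import Variable
linear = nn.Linear(1, 1)
x = Variable(torch.rand(2, 1))
out = linear(x)
out.register_hook(lambda grad: -grad)   # negate whatever gradient flows back through `out`
out.sum().backward()
print(linear.weight.grad)               # the gradient arrives with its sign flipped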
|
st118485
|
Yes, you should create new instances of functions each time you use them. But as James suggested, it might make more sense to use gradient hooks in your case.
|
st118486
|
Hi, I’m new to python, pytorch and DL, so please bear with me.
I’m following Nando de Freitas’ Oxford YouTube lectures, and in one of the exercises we need to construct a polynomial regression. I found an example here: Polynomial Regression.
Now I’m trying to modify it to my needs, but I’m having issues. I think the problem is that the function make_features(x) produces a tensor x with size (10, 2, 4) while the tensor y_train has size (10), and I need to align them and make the tensor x only one row wide, but I don’t know how to do it. I tried transforming it with numpy.reshape() but couldn’t get it to work. It’d be great if someone could help me out. Thanks, code below:
from __future__ import print_function
from itertools import count
import torch
import torch.autograd
import torch.nn.functional as F
from torch.autograd import Variable
train_data = torch.Tensor([
[40, 6, 4],
[44, 10, 4],
[46, 12, 5],
[48, 14, 7],
[52, 16, 9],
[58, 18, 12],
[60, 22, 14],
[68, 24, 20],
[74, 26, 21],
[80, 32, 24]])
test_data = torch.Tensor([
[6, 4],
[10, 5],
[4, 8]])
x_train = train_data[:,1:3]
y_train = train_data[:,0]
POLY_DEGREE = 4
input_size = 2
output_size = 1
def make_features(x):
    """Builds features i.e. a matrix with columns [x, x^2, x^3, x^4]."""
    x = x.unsqueeze(1)
    return torch.cat([x ** i for i in range(1, POLY_DEGREE+1)], 1)
def poly_desc(W, b):
    """Creates a string description of a polynomial."""
    result = 'y = '
    for i, w in enumerate(W):
        result += '{:+.2f} x^{} '.format(w, len(W) - i)
    result += '{:+.2f}'.format(b[0])
    return result
def get_batch():
    """Builds a batch i.e. (x, f(x)) pair."""
    x = make_features(x_train)
    return Variable(x), Variable(y_train)
# Define model
fc = torch.nn.Linear(input_size, output_size)
for batch_idx in range(1000):
    # Get data
    batch_x, batch_y = get_batch()
    # Reset gradients
    fc.zero_grad()
    # Forward pass
    output = F.smooth_l1_loss(fc(batch_x), batch_y)
    loss = output.data[0]
    # Backward pass
    output.backward()
    # Apply gradients
    for param in fc.parameters():
        param.data.add_(-0.1 * param.grad.data)
    # Stop criterion
    if loss < 1e-3:
        break
print('Loss: {:.6f} after {} batches'.format(loss, batch_idx))
print('==> Learned function:\t' + poly_desc(fc.weight.data.view(-1), fc.bias.data))
# print('==> Actual function:\t' + poly_desc(W_target.view(-1), b_target))
|
st118487
|
I changed it a bit and now I don’t get any errors, but I don’t get any values for the loss or the predictions either. I tried to replicate the case with sklearn and all is OK. I think I’m missing something simple but just can’t see it. Any help would be appreciated.
Code:
import sklearn.linear_model as lm
from sklearn.preprocessing import PolynomialFeatures
import matplotlib.pyplot as plt
import torch
import torch.autograd
import torch.nn.functional as F
from torch.autograd import Variable
train_data = torch.Tensor([
[40, 6, 4],
[44, 10, 4],
[46, 12, 5],
[48, 14, 7],
[52, 16, 9],
[58, 18, 12],
[60, 22, 14],
[68, 24, 20],
[74, 26, 21],
[80, 32, 24]])
test_data = torch.Tensor([
[6, 4],
[10, 5],
[4, 8]])
x_train = train_data[:,1:3]
y_train = train_data[:,0]
POLY_DEGREE = 3
input_size = 2
output_size = 1
poly = PolynomialFeatures(input_size * POLY_DEGREE, include_bias=False)
x_train_poly = poly.fit_transform(x_train.numpy())
class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.fc = torch.nn.Linear(poly.n_output_features_, output_size)
    def forward(self, x):
        return self.fc(x)
model = Model()
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
losses = [] # Added
for i in range(1000):
    optimizer.zero_grad()
    outputs = model(Variable(torch.Tensor(x_train_poly)))
    loss = criterion(outputs, Variable(y_train))
    losses.append(loss.data[0])
    loss.backward()
    optimizer.step()
    if loss.data[0] < 1e-4:
        break
print('n_iter', i)
print(loss.data[0])
plt.plot(losses)
plt.show()
and below is the sklearn code that works
regr = lm.LinearRegression()
poly = PolynomialFeatures(4, include_bias=False)
X_poly = poly.fit_transform(x_train.numpy())
regr.fit(x_train.numpy(), y_train.numpy())
pred = regr.predict(test_data.numpy())
pred
Output
array([ 40.32044911, 44.03051972, 43.45981896])
|
st118488
|
I can’t see what’s broken, but I like this nice pedagogical example!
Hope you get it working!
|
st118489
|
Do index_select and clone detach gradients? I mean, during forward prop, if you use index_select (or clone) on a variable, will it create a new variable whose grad doesn’t backprop to the original variable?
|
st118490
|
Neither of them detaches gradients (well, index_select won’t backpropagate into the indices, but it wouldn’t be possible to do so).
|
st118491
|
index_select does propagate back:
x = Variable(torch.ones(3), requires_grad=True)
ixs = Variable(torch.LongTensor([1,]))
y = x.index_select(0, ixs)
z = y.mean()
z.backward()
x.grad
> Variable containing:
> 0
> 1
> 0
> [torch.FloatTensor of size 3]
It seems to be the same for clone
x_original = Variable(torch.ones(3), requires_grad=True)
x = x_original.clone()
ixs = Variable(torch.LongTensor([1,]))
y = x.index_select(0, ixs)
z = y.mean()
z.backward()
x_original.grad
> Variable containing:
> 0
> 1
> 0
> [torch.FloatTensor of size 3]
while x.grad is None gives True in this case
|
st118492
|
Hi, there,
I create a new Variable as the output to play the 4D Tensor batch multiplication in the forward function like this:
def _4D_bmm(self, batch1, batch2):
    x = Variable(torch.Tensor(batch1.size(0), batch1.size(1), batch1.size(3), batch2.size(3)))
    for i in range(batch1.size(0)):
        x[i] = torch.bmm(torch.transpose(batch1[i], 1, 2), batch2[i])
    return x
It can be passed forward, but an error will be thrown in the linear layers:
File "/home/usrname/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/usrname/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/usrname/anaconda2/lib/python2.7/site-packages/torch/nn/modules/linear.py", line 54, in forward
return self._backend.Linear()(input, self.weight, self.bias)
File "/home/usrname/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/linear.py", line 10, in forward
output.addmm_(0, 1, input, weight.t())
TypeError: addmm_ received an invalid combination of arguments - got (int, int, torch.FloatTensor, torch.cuda.FloatTensor), but expected one of:
* (torch.FloatTensor mat1, torch.FloatTensor mat2)
* (torch.SparseFloatTensor mat1, torch.FloatTensor mat2)
* (float beta, torch.FloatTensor mat1, torch.FloatTensor mat2)
* (float alpha, torch.FloatTensor mat1, torch.FloatTensor mat2)
* (float beta, torch.SparseFloatTensor mat1, torch.FloatTensor mat2)
* (float alpha, torch.SparseFloatTensor mat1, torch.FloatTensor mat2)
* (float beta, float alpha, torch.FloatTensor mat1, torch.FloatTensor mat2)
* (float beta, float alpha, torch.SparseFloatTensor mat1, torch.FloatTensor mat2)
So what is the right way to create a new Variable?
@apaszke @fmassa Any suggestions to help? Many thanks in advance.
|
st118493
|
def _4D_bmm(self, batch1, batch2):
    x = []
    for i in range(batch1.size(0)):
        x.append(torch.bmm(torch.transpose(batch1[i], 1, 2), batch2[i]))
    return Variable(torch.cat(x)).cuda()
And the exception arises because the model’s parameters are on the GPU while the input is on the CPU.
|
st118494
|
I am working on a feedforward autoencoder network that takes variable-sized images as input, and I having trouble with memory usage. I assemble roughly size-matched minibatches by zero-padding the inputs and maintaining a 0-1 mask tensor. So that the output of the network is independent of the padding, I mask the output of my convolutions. My basic layer looks something like
output = elu(conv2d(x)) * mask[:,None].expand_as(x) for 4D input x.
I am having a lot of trouble with memory usage as my images can be large (up to 300x300), limiting both the size of minibatches and the number of layers.
How can I reduce the memory usage of the line above? Are ReLUs more memory-efficient in the backward pass than ELUs, since the backward only requires the sign of the activations? Is there a more memory- or computation-efficient way to apply a binary mask?
In the ideal case, and if I switch from ELU to ReLU, I would expect the non-transient memory usage of the layer above to be N*C*H*W/8 bytes, where x has size (N, C, H, W). Is there any easy way to check if PyTorch achieves that bound?
Thanks,
John
|
st118495
|
Suppose I have a tensor A of size (m, n). To loop through each row of this tensor, what I did was:
for row in A:
    do something
But I’ve seen many people do:
for row in A.split(1):
    do something
Is there any difference between the two methods? Is there a memory leak in the first method?
|
st118496
|
If A is a tensor, they’re both fine; if A is a Variable, the second is better because it uses fewer autograd ops.
|
st118497
|
I have a problem when computing a batch Jacobian. I am not sure if it is a bug or if I am using the autograd engine incorrectly.
I used the following snippet to compute the Jacobian of the output. I compared the Jacobian computed by PyTorch against a Theano/Lasagne network initialized with identical parameters. For the starting output 0 the results are identical. However, for the subsequent backward calls i > 0, the results differ by some constant factor (in some cases 2 or 3, but not always deterministic).
What could cause the gradient to be accumulated multiple times in the leaves even after the input gradient was reset?
import torch
from torch.autograd.gradcheck import zero_gradients
def compute_jacobian(inputs, output):
    assert inputs.requires_grad
    num_classes = output.size()[1]
    jacobian = torch.zeros(num_classes, *inputs.size())
    grad_output = torch.zeros(*output.size())
    if inputs.is_cuda:
        grad_output = grad_output.cuda()
        jacobian = jacobian.cuda()
    for i in range(num_classes):
        zero_gradients(inputs)
        grad_output.zero_()
        grad_output[:, i] = 1
        output.backward(grad_output, retain_variables=True)
        jacobian[i] = inputs.grad.data
    return jacobian
|
st118498
|
Have you solved this problem? I tested it with some simple snippets and it works fine. Maybe you can provide more information if the problem still exists. Calling backward many times may cause problems if you don’t handle it carefully.
|
st118499
|
@chenyuntc I found the reason: importing theano somehow conflicts with PyTorch buffers during the second backward call. Commenting out the theano import gives identical results. Interestingly, if the PyTorch model and all variables are CUDA, then the script below passes. I used the latest theano-dev and the official PyTorch wheel.
Script to reproduce:
import numpy as np
import theano # comment / uncomment
import torch
from torch import nn
from torch.autograd import Variable
from torch.autograd.gradcheck import zero_gradients
model = nn.Sequential(
    nn.Linear(784, 1000),
    nn.ReLU(),
    nn.Linear(1000, 1000),
    nn.ReLU(),
    nn.Linear(1000, 10))
x = Variable(torch.rand(100, 784), requires_grad=True)
y = model(x)
grad_var = torch.zeros(*y.size())
grad_var[:, 0] = 1
y.backward(grad_var, retain_variables=True)
x_grad1 = x.grad.data.numpy().copy()
zero_gradients(x)
grad_var.zero_()
grad_var[:, 0] = 1
y.backward(grad_var, retain_variables=True)
x_grad2 = x.grad.data.numpy().copy()
assert np.allclose(x_grad1, x_grad2)
|
st118500
|
I have a short question regarding model constants and persistent state. Let’s say I have something like
class MyModule(nn.Module):
    def __init__(self, n=2):
        self.n = n
What’s the best way to make n part of the persistent state (i.e. the state_dict)? Should I make it a buffer? But then I would need to convert it into a tensor, which seems a bit of a hassle. Is there another more elegant way?
|
st118501
|
I think you’d better make it a buffer via self.register_buffer('n', n). For reference, here is how Module.state_dict collects parameters and buffers:
def state_dict(self, destination=None, prefix=''):
    """Returns a dictionary containing a whole state of the module.
    Both parameters and persistent buffers (e.g. running averages) are
    included. Keys are corresponding parameter and buffer names.
    Example:
        >>> module.state_dict().keys()
        ['bias', 'weight']
    """
    if destination is None:
        destination = OrderedDict()
    for name, param in self._parameters.items():
        if param is not None:
            destination[prefix + name] = param.data
    for name, buf in self._buffers.items():
        if buf is not None:
            destination[prefix + name] = buf
    for name, module in self._modules.items():
        if module is not None:
            module.state_dict(destination, prefix + name + '.')
    return destination
|
st118502
|
When I declare:
x = Variable(torch.ones(2), requires_grad=False)
And then do x[0],
I still get a tensor of size 1. Indexing further with x[0][0] leads to the same tensor. Is there a way to get the scalar element that is x[0]?
|
st118503
|
Hi @Mika_S,
You can use x.data to access the underlying storage tensor. As a result, what you are looking for is x.data[0].
|
st118504
|
Keep in mind that doing this will not work with the autograd and no gradient will be backpropagated through this number.
|
st118505
|
The cutoff threshold for gradient clipping is set based on the average norm of the gradient over one pass on the data. I would therefore like to compute the average norm of the gradient to find a fitting gradient clipping value for my model. How can this be done in PyTorch?
Another quick question: I have seen the following in the language modeling example:
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
torch.nn.utils.clip_grad_norm(model.parameters(), args.clip)
for p in model.parameters():
    p.data.add_(-lr, p.grad.data)
If clip_grad_norm is already applied to model.parameters(), why do we need the for loop?
|
st118506
|
The for loop is the gradient descent update, which is manually implemented in the example: parameters are reduced by their gradient times the learning rate.
To your first question: if you are referring to Pascanu et al.’s clipping, which is based on the norm of the gradient, then torch.nn.utils.clip_grad_norm does that for you. The clipping threshold is usually tuned as a hyperparameter, as there is no way to determine in advance what the norm of the gradients will be during training.
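If you do want to track the norm yourself (e.g. to average it over one pass through the data and pick a clipping value), a minimal sketch run right after loss.backward() could look like this; grad_norms is just a list of your own that you average at the end of the epoch:
total_norm = 0.0
for p in model.parameters():
    if p.grad is not None:
        total_norm += p.grad.data.norm() ** 2   # accumulate squared parameter-gradient norms
total_norm = total_norm ** 0.5                   # overall L2 norm of all gradients
grad_norms.append(total_norm)
If I remember correctly, clip_grad_norm also returns the total norm it computed, so recording its return value is another option.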
|
st118507
|
@smth is it possible in a future release of PyTorch to add some functionality to check the gradient norm? It would be very helpful.
|
st118508
|
Hi All,
No problems or questions, it just seems rare that anyone shares what they’re working on…
I’ve been using DL to experiment with new kinds of architectural drawings. Currently using a deeper DCGAN to generate architectural plans. Here are some results after ~20,000 iterations.
(image: fake_samples_epoch_976.png, 1565×1565, 1.77 MB)
Obviously a little sketchy but the original paper starts getting very good results around 100,000 iterations.
The network is almost identical to the DCGAN example, but now with 9 convtranspose layers (~100mil parameters for G and D) from z up to 3x512x512
Thanks for making pytorch fun & easy
|
st118509
|
Hey there!
I’m starting with pytorch so I wanted to implement a neural language model.
Everything was going OK until I started facing problems when trying to work with the GPU.
I have a typical model that embeds, runs an RNN (LSTM), then uses an output projection xW+b, then a softmax.
My model is like:
class RnnLm(nn.Module):
    def __init__(self, params):
        super().__init__()
        self.params = params
        self.embedding = nn.Embedding(num_embeddings=params.vocab_size,
                                      embedding_dim=params.embed_dim)
        self.cell = nn.LSTM(input_size=params.embed_dim,
                            hidden_size=params.hidden_size,
                            batch_first=True)
        self.out_w = autograd.Variable(torch.randn(params.hidden_size, params.vocab_size))
        self.out_b = autograd.Variable(torch.randn(params.vocab_size))
    def _embed_data(self, src):
        """Embeds a list of words
        """
        src_var = autograd.Variable(src)
        embedded = self.embedding(src_var)
        return embedded
    def forward(self, inputs):
        # inputs: nested list [batch_size x time_steps]
        # emb_inputs: [bs x ts x emb_size]
        emb_inputs = self._embed_data(inputs)
        log("Input: %s ; Embedded: %s " % (str(inputs.size()), str(emb_inputs.size())))
        # Running the RNN
        # o: [bs x ts x h_size]
        # h: [n_layer x ts x h_size]
        # c: [n_layer x ts x h_size]
        o, (h, c) = self.cell(emb_inputs)
        o = o.contiguous()
        self.o = o
        log("Outputs: %s" % str(o.size()))
        log("h %s" % str(h.size()))
        log("c %s" % str(c.size()))
        # Output projection
        # oo: [bs*ts x h_size]
        # logits: [bs*ts x vocab_size]
        oo = o.view(-1, params.hidden_size)
        logits = oo @ self.out_w
        logits = logits + self.out_b.expand_as(logits)
        # Softmax
        prediction = F.log_softmax(logits)
        return prediction
The whole code can be seen here: https://github.com/pltrdy/pytorchwork/blob/master/rnn_lm.ipynb 8 its quite experimental (=messy).
Trying to work with the GPU, I create an object “model” then call model.cuda().
The problem then comes from out_w & out_b, which are not CUDA tensors:
print("data type: oo: %s; out_w: %s" % (str(type(oo.data)), str(type(self.out_w.data))))
returns:
data type: oo: <class 'torch.cuda.FloatTensor'>; out_w: <class 'torch.FloatTensor'>
oo's type is OK, but out_w should be torch.cuda.FloatTensor.
Obviously, I could add some .cuda() calls for out_w & out_b in RnnLm.__init__, but that’s fixing without learning.
Thanks for any help or suggestion.
|
st118510
|
If out_w and out_b are parameters of your layer, you should declare them as nn.Parameter and not autograd.Variable. Making this change will make nn behave as expected with respect to sending weights to CUDA.
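Concretely, for the RnnLm above only the two declarations would change, something like this sketch:
self.out_w = nn.Parameter(torch.randn(params.hidden_size, params.vocab_size))
self.out_b = nn.Parameter(torch.randn(params.vocab_size))
nn.Parameter is registered by the module, so model.cuda() and model.parameters() will then pick up both tensors automatically.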
|
st118511
|
Oh, ok. Just never seen it
Anyway thank you for such a fast answer for a dummy question
|
st118512
|
The first error is solved, but I now have a segfault. It may be related to .contiguous() (note that I don’t need .contiguous() when I don’t use CUDA).
Note as well that my data initially comes from numpy.
|
st118513
|
A segfault? That’s unexpected.
Do you have a stack trace with more information about where it occurs?
|
st118514
|
Well, a segfault does not print a Python trace. At least, by putting a print (that’s cheap, I know) before each instruction, I found that it occurs before the .backward().
Interestingly, some iterations work OK, then at one point it segfaults.
Edit: also note that the size of the output is constant, i.e. it is always the same iteration which fails.
|
st118515
|
I was wondering if you could use gdb to try and get more information:
run gdb --args python your_script.py --your args.
after it has started, type run.
once it stops due to the segfault, type bt and paste here the backtrace that it prints.
|
st118516
|
Nice, I was looking for the --args python trick, didn’t know it.
The output is:
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffad893700 (LWP 17317)]
torch::autograd::GradBuffer::addGrad(unsigned long, std::shared_ptr<torch::autograd::Variable>&&) (
this=this@entry=0x7fffad892c40, pos=pos@entry=0,
var=var@entry=<unknown type in /home/moses/anaconda3/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so, CU 0x6e92e4, DIE 0x6fcb24>) at torch/csrc/autograd/grad_buffer.cpp:17
17 torch/csrc/autograd/grad_buffer.cpp: No such file or directory.
(gdb) bt
#0 torch::autograd::GradBuffer::addGrad(unsigned long, std::shared_ptr<torch::autograd::Variable>&&) (
this=this@entry=0x7fffad892c40, pos=pos@entry=0,
var=var@entry=<unknown type in /home/moses/anaconda3/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so, CU 0x6e92e4, DIE 0x6fcb24>) at torch/csrc/autograd/grad_buffer.cpp:17
#1 0x00007fffed45c9a1 in torch::autograd::Engine::evaluate_function (this=this@entry=0x7fffedcd7ce0 <engine>,
task=...) from /home/<user>/anaconda3/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so
#2 0x00007fffed45dd1a in torch::autograd::Engine::thread_main (this=this@entry=0x7fffedcd7ce0 <engine>, queue=...)
from /home/<user>/anaconda3/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so
#3 0x00007fffed46e87a in PythonEngine::thread_main (this=0x7fffedcd7ce0 <engine>, queue=...)
from /home/<user>/anaconda3/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so
#4 0x00007ffff652d870 in ?? () from /home/<user>/anaconda3/bin/../lib/libstdc++.so.6
#5 0x00007ffff7474184 in start_thread (arg=0x7fffad893700) at pthread_create.c:312
#6 0x00007ffff688cbed in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
(gdb)
thx for your help
|
st118517
|
PyTorch is great, but I am struggling with a baffling memory leak. I am using PyTorch version 0.1.12 via Anaconda on an Ubuntu machine.
This leaks memory:
import numpy as np
from torch import Tensor, LongTensor, optim
Xin_memory = Tensor(2000100, 60, 50)
Xin2 = Tensor(1,60,50)
for i in range(2000000):
    nda = np.empty((60,50))
    Xin_memory[i+i,:,:] = Tensor(nda)
    # Xin2[0,:,:] = Tensor(nda)
    print i
This does not:
import numpy as np
from torch import Tensor, LongTensor, optim
Xin_memory = Tensor(2000100, 60, 50)
Xin2 = Tensor(1,60,50)
for i in range(2000000):
    nda = np.empty((60,50))
    # Xin_memory[i+i,:,:] = Tensor(nda)
    Xin2[0,:,:] = Tensor(nda)
    print i
Help! Do I misunderstand what the assignment is doing? I am under the impression that the assignment to Xin_memory will copy the values and then the Tensor will no longer exist. Eventually, nda will not exist either and will be garbage collected. I think this code is slowly eating away at memory in some long-running code I am working with. Please let me know if I can provide any additional information.
Thank you.
|
st118518
|
Tensor(2000100, 60, 50) won’t allocate the memory right away. Fully allocated it would cost 2000100×60×50×4 bytes, which is about 24 GB.
|
st118519
|
Yes, that’s probably the case.
Try doing
a = torch.zeros(2000100, 60, 50)
and see if it fits in your machine.
|
st118520
|
Hi,
How do you train a vec2word model, i.e. something like a reverse nn.Embedding, which goes from a vector representation to single words / a one-hot representation?
If I understand correctly, a cluster of points in embedded space represents similar words. Thus if you sample from that cluster and use it as the input to vec2word, the output should be a mapping to similar words?
I guess I could do something similar to an encoder-decoder, but does it have to be that complicated / use so many parameters?
I think I understand how to train word2vec using this TensorFlow tutorial, but how do you do the reverse in PyTorch?
Thanks a lot,
Ajay
|
st118521
|
Hey @AjayTalati,
There is no reverse of the embedding per se.
The famous king-man+woman demo outputs the nearest vectors according to some distance function (possibly after scaling to unit norm, as in “cosine similarity”).
So if your embedding weights are unit-length vectors you can do the following (similar to how gensim most_similar calculates the analogy word vector and then computes the distance). Note that the matrix multiplication is a “batch dot product”.
# have an embedding
t = torch.nn.Embedding(100, 10)
# normalize rows
normalized_embedding = t.weight/((t.weight**2).sum(1)**0.5).expand_as(t.weight)
# make up some vector
v = 1*t.weight[1]+0.1*t.weight[5]
# normalize it as well
v = v / ((v**2).sum()**0.5).expand_as(v)
# similarity score (-1..1) and vocabulary indices
similarity, words = torch.topk(torch.mv(normalized_embedding, v), 5)
(Maximal dot product is minimal Euclidean distance for unit vectors, per $|x-y|^2 = |x|^2 - 2 x \cdot y + |y|^2$.)
For non-unit vectors you would need to compute the distances, e.g. by storing the lengths of the word vectors to compute $\arg\min_i |x(i)|^2 - 2 x(i) \cdot y$.
Of course, if you scoll down the linked gensim file, you’ll find lots of better ideas, with references.
That said, you might also just skip word vectors on the output side. In OpenNMT’s train.py L293-295, the following single layer + softmax transforms the output of the decoder (not a “word vector”, but some learnt hidden representation):
generator = nn.Sequential(
    nn.Linear(opt.rnn_size, dicts['tgt'].size()),
    nn.LogSoftmax())
Best regards
Thomas
|
st118522
|
Hey Thomas @tom ,
Great to hear from you, and awesome advice as usual.
Following your guide, I think we can use a word embedding together with the Wasserstein divergence as a criterion?
Here’s a rough guide to how it might work:
import torch
import torch.nn as nn
from torch.autograd import Variable
from random import randint
num_words = 10
embedding_dim = 5
# have an embedding
# can initialize to -1 or +1
# or copy pretrained weights, see- https://discuss.pytorch.org/t/can-we-use-pre-trained-word-embeddings-for-weight-initialization-in-nn-embedding/1222
t = torch.nn.Embedding(num_words,embedding_dim)
# map from embedding to probability space
vec_to_prob = nn.Softmax()
# sample a random word to train on
word_idx = randint(0,num_words-1)
# a batch of 1 sample of 1 index/word
word_idx = Variable(torch.LongTensor([[word_idx]]))
# vector representation
word_vec = t(word_idx)
word_vec = word_vec.squeeze(0) # drop the batch dimension
# sanity check !!!
_ , closest_word_idx = torch.topk( torch.mv( t.weight , word_vec.squeeze(0) ) , 1 )
closest_word_idx == word_idx #true
# map to probability space,
# could be used to calculate the Wasserstein divergence as the training objective, with a histogram from a decoder
histogram_target = vec_to_prob(word_vec)
histogram_model = blah, blah, blah
wasserstein_loss = divergence( histogram_target , histogram_model )
# after training, histogram from decoder should be "close" to target histogram in Wasserstein space
histogram_model = histogram_target
_ , closest_word_idx = torch.topk( torch.mv( t.weight , histogram_model.squeeze(0) ) , 1 )
closest_word_idx == word_idx #true
It seems reasonable and simple, so I’d like to try it. I would really appreciate your opinion!
Best regards,
Ajay
|
st118523
|
I read the paper about batch normalization, but I could not find how it initializes the weights. So I looked at the code in PyTorch, nn/modules/batchnorm.py lines 31-32:
self.weight.data.uniform_()
self.bias.data.zero_()
So why are the weights of batch normalization initialized like this? Is there any theory that this initialization is optimal?
|
st118524
|
Solved by smth in post #2
there is no theory around this specifically.
|
st118525
|
How do you put variable-length sentences into one batch?
I want to train a neural conversation model. Can I make a batch without padding, i.e. use the original variable-length sentences to build the batch for faster training?
|
st118526
|
You can look at examples here: https://github.com/pytorch/pytorch/releases/tag/v0.1.10
|
st118527
|
It seems that PackedSequences are not suitable for seq2seq models, e.g. machine translation or conversation models, because when you order the source sentences in the batch according to their length, the target sentences won’t be ordered as well.
Am I right or wrong? Please tell me, thanks.
|
st118528
|
That’s right. You can reorder one or both batches after passing them to nn.LSTM, but that does reduce the speed gain from using nn.LSTM with PackedSequences over manually unrolled LSTMCells.
|
st118529
|
Sorry, I can’t figure out how to reorder the batch after passing it to nn.LSTM. Can you make it clearer or show a simple example? Thanks.
|
st118530
|
(image: conv + max-pooling sentence encoder diagram, image.png, 622×680)
I am trying to implement conv + max_pooling for encoding a sentence (as the above figure indicates), yet the documentation for conv2d is kind of messy. Does anyone have experience with this?
Basically, the input is a tensor a with dimension (sent_len, EMB_SIZE):
a = self.lookup(x_sent).unsqueeze(1)  # I am using batch_size = 1
The output should be a tensor with dimension (1, EMB_SIZE)?
|
st118531
|
The output of the CNN should be of size (batch_size, EMB_SIZE, seq_len);
then you may use transpose() to transpose it to (seq_len, batch_size, EMB_SIZE), after which it is ready as the input of an LSTM.
Is this what you need?
|
st118532
|
Hi Jin,
I want to know the code snippet for transforming a into the tensor after max pooling (the conv2d and max_pooling code).
|
st118533
|
Thanks for responding.
I implemented the model with the conv2d and max-pooling APIs, similar to the TensorFlow implementation here: http://www.wildml.com/2015/12/implementing-a-cnn-for-text-classification-in-tensorflow/
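For reference, a minimal sketch of that conv + max-pooling-over-time encoding; the window size of 3 and the filter count are arbitrary choices here, and with NUM_FILTERS == EMB_SIZE the result has the (1, EMB_SIZE) shape asked about above:
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
sent_len, EMB_SIZE, NUM_FILTERS = 7, 128, 128
# (sent_len, EMB_SIZE) -> (batch=1, channels=1, sent_len, EMB_SIZE)
a = Variable(torch.randn(sent_len, EMB_SIZE)).unsqueeze(0).unsqueeze(0)
# each filter spans 3 words and the full embedding dimension
conv = nn.Conv2d(1, NUM_FILTERS, kernel_size=(3, EMB_SIZE))
feature_maps = F.relu(conv(a))                    # (1, NUM_FILTERS, sent_len - 2, 1)
pooled = F.max_pool2d(feature_maps, kernel_size=(feature_maps.size(2), 1))
sentence_vec = pooled.view(1, NUM_FILTERS)        # (1, NUM_FILTERS)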
|
st118534
|
Hi,
I have created a seq2seq model with an encoder and a decoder.
The encoder has 2 main “blocks”: an embedding layer (nn.Embedding) and an LSTM (nn.LSTM).
The LSTM has so many parameters that one GPU can’t handle even this one block, so I can’t just use lstm.cuda(1).
What should I do? How can I distribute the memory across 8 GPUs in the case of an RNN?
Thanks!
P.S. I also want to say that PyTorch is awesome, thank you for developing it.
|
st118535
|
I don’t know much about seq2seq, but maybe this is a good reference:
https://github.com/MaximumEntropy/Seq2Seq-PyTorch
|
st118536
|
You can manually ship different parts of your model to different GPUs. For example
class MyModel(Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.m1 = nn.Linear(10, 10)
        self.m2 = nn.Linear(10, 10)
        self.m3 = nn.Linear(10, 10)
        self.m1.cuda(0)  # puts in GPU0
        self.m2.cuda(1)  # puts in GPU1
        self.m3.cuda(2)  # puts in GPU2
    def forward(self, x):
        x = self.m1(x.cuda(0))
        x = self.m2(x.cuda(1))
        x = self.m3(x.cuda(2))
        return x
|
st118537
|
Thanks,
But in my case one GPU can’t handle some of the layers.
For example, I would not be able to use self.m1.cuda(0) because it would be “out of memory”.
My LSTM “layer” (self.lstm = nn.LSTM(embed_size, hidden_size, num_layers, batch_first=True)) is so big that I can’t assign it to only one GPU.
|
st118538
|
In this case you might need to adapt the LSTM code in here to handle your huge model.
You can have a look here for hints on where you could eventually split your model between different GPUs.
|
st118539
|
Actually, you might need to use LSTMCell for that, as what I pointed out earlier will probably not be enough.
I don’t have any experience with LSTMs though, so I won’t be able to guide you much further.
|
st118540
|
Thanks, I will look at this.
Maybe I can use nn.parallel.data_parallel somehow for my big Lstm “layer”?
|
st118541
|
Not directly, as DataParallel is only for splitting your input data across many GPUs. But you can always split a huge tensor across several GPUs, something like
# huge tensor is M x K, split into nGPUs tensors
nGPUs = 2
hugeTensor = torch.rand(1000, 100)
small_tensors_on_different_gpus = [tensor.cuda(gpu_id) for gpu_id, tensor in
                                   enumerate(hugeTensor.chunk(nGPUs, 0))]
|
st118542
|
I think you mean the class DataParallel, but I meant the function data_parallel (link).
Or did I misunderstand you?
|
st118543
|
Hi,
I have a question regarding the hidden state h in LSTM. If I am using a uni-directional LSTM, is h[0] the state for the input layer or for the output layer?
|
st118544
|
In an LSTM, you have 3 inputs: the new signal (“network’s input”), the last hidden state, and the last cell state. And you have two outputs: the new cell state (“network’s output”) and the new hidden state.
If you write:
new_h, new_c = LSTM(input, [last_h, last_c])
then new_h is the new hidden state, and new_c is the new cell state. The “output layer” is new_c.
I’m not sure if I answer your question.
|
st118545
|
I am sorry that I may not have been clear. My question is as follows.
The first dimension of h in an LSTM is num_layers * num_directions. When I use a uni-directional LSTM (num_directions == 1), I want to know if h[0] is the state of the bottom-most layer or the top-most one.
|
st118546
|
Ah OK, then h[0] would be the state of the first hidden layer (h_0 in the docs), and h[n] the state of the last hidden layer (which you called the “output layer”, so I did not understand).
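A minimal sketch to check this (the sizes here are made up):
import torch
import torch.nn as nn
from torch.autograd import Variable
lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=3)   # uni-directional
x = Variable(torch.randn(5, 4, 10))                           # (seq_len, batch, input_size)
output, (h, c) = lstm(x)
print(h.size())   # (num_layers * num_directions, batch, hidden_size) = (3, 4, 20)
print(torch.equal(output[-1].data, h[-1].data))   # True: h[-1] is the top layer's final state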
|
st118547
|
I am currently stuck at understanding the following behavior.
import numpy as np
import torch
from torch.autograd import Variable
1.0 + Variable(torch.ones(1))
# returns as expected
# Variable containing:
# 2
# [torch.FloatTensor of size 1]
np.sum(1.0) + Variable(torch.ones(1))
# returns an unexpected
# array([[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[Variable containing:
# 2
# [torch.FloatTensor of size 1]
# ]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]], dtype=object)
# Switching their order
Variable(torch.ones(1)) + np.sum(1.0)
# returns the expected
# Variable containing:
# 2
# [torch.FloatTensor of size 1]
This behaviour is independent of np.sum and can be replicated with other numpy functions (e.g. np.exp, np.log, …).
I am relatively new to PyTorch, so it might be that I am missing something obvious that explains this. Am I, or is this really a bug?
Edit: Issue opened: https://github.com/pytorch/pytorch/issues/1294
|
st118548
|
I have two networks which both output a tensor of shape batch_size x num_classes x height x width with num_classes = 256 (there are actually just 21 classes in VOC12, but they chose background to have label 255; I will improve on this later).
Since the label has format batch_size x 1 x height x width, I can calculate the cross-entropy loss with:
criterion = nn.CrossEntropyLoss()
# shape: batch_size x 1 x height x width = 1 x 1 x 256 x 256
inputs = autograd.Variable(images)
# shape: batch_size x 1 x height x width = 1 x 1 x 256 x 256
targets = autograd.Variable(labels)
# shape: batch_size x 1 x height x width = 1 x 256 x 256 x 256
outputs = model(inputs)
optimizer.zero_grad()
loss = criterion(outputs.view(-1, 256), targets.view(-1))
loss.backward()
optimizer.step()
See the source for context.
I know that outputs[0][0][i][j] corresponds to the probability that the pixel at (i, j) belongs to class 1. So if I want to transform outputs of shape 1 x 256 x 256 x 256 to 1 x 1 x 256 x 256, I would need to find the maximum (probability) for every pixel and assign it to the corresponding class value.
I could do this manually by iterating over every class and pixel with numpy, but I wonder if there is a better way using tensor operations?
|
st118549
|
Note that some losses accept 2D inputs to them (and CrossEntropy will be updated soon as well to support it). So a more efficient way of computing the loss would be something like
nllcrit = nn.NLLLoss2d(size_average=True)  # need to write a functional interface for it
def criterion(input, target):
    return nllcrit(F.log_softmax(input), target)
Now, if you want to compute the confusion matrix for your predictions, you can use a variant of ConfusionMeter from tnt, but replacing the input by something like
output = output.permute(0, 2, 3, 1).view(-1, ncls).squeeze()
target = target.view(-1).squeeze()
|
st118550
|
Thanks for the hint with ConfusionMeter. I think outputs.data[0].numpy().argmax(0) does what I need.
|
st118551
|
In torch, max and argmax are computed together and returned as a tuple. So outputs.max(0)[1] is the native torch way to do this.
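For example, on the outputs.data[0] tensor from the previous post it would look something like this (probs/preds are made-up names for the sketch):
probs, preds = outputs.data[0].max(0)   # max over the class dimension
# probs holds the maximum score per pixel, preds the arg-max class index per pixel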
|
st118552
|
Any ideas what could be wrong?
(image: train.png, training curves, 2286×1442)
Code is here.
This happens with both models (UNet and 1-layer conv) so I guess there must be something wrong with loss or optimization.
|
st118553
|
@fmassa Does F.log_softmax take care of the fact that we need to take the softmax along the 1st axis?
E.g. if the input is a tensor of size (batch_size, n_classes, H, W), we need to apply the softmax along the n_classes slice for each pixel in the H×W image.
From what I read here, I’m not sure if F.log_softmax does that.
I’m currently permuting and resizing the outputs to get a tensor with dimensions (batch_size, h, w, n_classes) and then directly using F.cross_entropy. Is this approach correct?
# outputs.shape =(batch_size, n_classes, img_cols, img_rows)
outputs = outputs.permute(0, 2, 3, 1)
# outputs.shape =(batch_size, img_cols, img_rows, n_classes)
outputs = outputs.resize(batch_size*img_cols*img_rows, n_classes)
labels = labels.resize(batch_size*img_cols*img_rows)
loss = F.cross_entropy(outputs, labels)
|
st118554
|
My model was running well. However, I got a runtime error and I can’t figure out how this happened. It seems that the weight matrix of an nn.Linear() module has the wrong type.
The trace back information is as follow:
Traceback (most recent call last):
File "/home/shawnguo/PythonWS/LinearAlignment_TE/trainer.py", line 187, in <module>
t.train()
File "/home/shawnguo/PythonWS/LinearAlignment_TE/trainer.py", line 102, in train
train_loss, train_acc = self.train_step(self.data.train)
File "/home/shawnguo/PythonWS/LinearAlignment_TE/trainer.py", line 151, in train_step
output = self.model(_data['p_ids'], _data['h_ids'], _data['p_rels'], _data['h_rels'])
File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/shawnguo/PythonWS/LinearAlignment_TE/align_model.py", line 69, in forward
e = torch.mm(self.F(a_), self.F(b_).t())
File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/shawnguo/PythonWS/LinearAlignment_TE/align_model.py", line 23, in forward
x = self.Layer1(x)
File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/linear.py", line 54, in forward
return self._backend.Linear()(input, self.weight, self.bias)
File "/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/linear.py", line 10, in forward
output.addmm_(0, 1, input, weight.t())
TypeError: addmm_ received an invalid combination of arguments - got (int, int, torch.cuda.DoubleTensor, torch.cuda.FloatTensor), but expected one of:
* (torch.cuda.DoubleTensor mat1, torch.cuda.DoubleTensor mat2)
* (torch.cuda.sparse.DoubleTensor mat1, torch.cuda.DoubleTensor mat2)
* (float beta, torch.cuda.DoubleTensor mat1, torch.cuda.DoubleTensor mat2)
* (float alpha, torch.cuda.DoubleTensor mat1, torch.cuda.DoubleTensor mat2)
* (float beta, torch.cuda.sparse.DoubleTensor mat1, torch.cuda.DoubleTensor mat2)
* (float alpha, torch.cuda.sparse.DoubleTensor mat1, torch.cuda.DoubleTensor mat2)
* (float beta, float alpha, torch.cuda.DoubleTensor mat1, torch.cuda.DoubleTensor mat2)
* (float beta, float alpha, torch.cuda.sparse.DoubleTensor mat1, torch.cuda.DoubleTensor mat2)
And, my model is defined as:
class FeedForwardLayer(nn.Module):
    def __init__(self, in_dim, hidden_dim, out_dim, activation=F.tanh):
        super(FeedForwardLayer, self).__init__()
        self.name = 'FeedForwardLayer'
        self.Layer1 = nn.Linear(in_dim, hidden_dim)
        self.Layer2 = nn.Linear(hidden_dim, out_dim)
        self.activation = activation
    def forward(self, x):
        x = self.Layer1(x)
        x = self.activation(x)
        x = self.Layer2(x)
        x = self.activation(x)
        return x
class DecomposableModel(nn.Module):
    def __init__(self, word_embedding, config, train=True):
        super(DecomposableModel, self).__init__()
        self.name = 'DecomposableModel'
        self.train = train
        self.activation = config['activation']
        self.drop_p = config['drop_p']
        self.word_dim = word_embedding.embeddings.size(1)
        self.embedding = nn.Embedding(word_embedding.embeddings.size(0), self.word_dim)
        self.embedding.weight = nn.Parameter(word_embedding.embeddings, requires_grad=False)
        self.F = FeedForwardLayer(self.word_dim, config['hidden_dim'], config['F_dim'], self.activation)
        self.G = FeedForwardLayer(2 * self.word_dim, config['hidden_dim'], config['G_dim'], self.activation)
        self.H = FeedForwardLayer(2 * config['G_dim'], config['hidden_dim'],
                                  config['relation_num'], self.activation)
        self.cuda_flag = config['cuda_flag']
    def forward(self, *inputs):
        p_ids = inputs[0]
        h_ids = inputs[1]
        if self.cuda_flag:
            p_ids = p_ids.cuda()
            h_ids = h_ids.cuda()
        p = Variable(p_ids)
        h = Variable(h_ids)
        # project the word ids into continuous space
        a_ = self.embedding(p)
        b_ = self.embedding(h)
        e = torch.mm(self.F(a_), self.F(b_).t())
        e_ = F.softmax(e)
        e_t = F.softmax(e.t())
        beta = torch.mm(e_, b_)
        alpha = torch.mm(e_t, a_)
        v1 = self.G(torch.cat((a_, beta), 1)).mean(0)
        v2 = self.G(torch.cat((b_, alpha), 1)).mean(0)
        if self.train:
            v = self.H(F.dropout(torch.cat((v1, v2), 1), self.drop_p))
        else:
            v = self.H(torch.cat((v1, v2), 1))
        return v
    def set_train_flag(self, flag):
        self.train = flag
|
st118555
|
The inputs to addmm_ should both be FloatTensor or both be DoubleTensor.
What’s the type of word_embedding.embeddings in nn.Parameter(word_embedding.embeddings, requires_grad=False)? It seems that you pass a word_embedding of type DoubleTensor to DecomposableModel’s __init__ somewhere else in the code (I guess it’s in your trainer.py). Make it a FloatTensor.
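For example, a one-line conversion at the point where the embeddings are built (assuming embeddings is the DoubleTensor coming from numpy):
word_embedding.embeddings = word_embedding.embeddings.float()   # DoubleTensor -> FloatTensor
Alternatively, convert the numpy array with .astype(np.float32) before calling torch.from_numpy.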
|
st118556
|
I have a seq2seq network (a class) which is trained and model states are saved without any problem. [Please note, I am using DataParallel]
Constructor of that class.
class Sequence2Sequence(nn.Module):
    """Class that classifies question pair as duplicate or not."""
    def __init__(self, dictionary, embedding_index, max_sent_length, args):
        """Constructor of the class."""
        super(Sequence2Sequence, self).__init__()
        self.dictionary = dictionary
        self.embedding_index = embedding_index
        self.config = args
        self.encoder = Encoder(len(self.dictionary), self.config)
        self.decoder = AttentionDecoder(len(self.dictionary), max_sent_length, self.config)
        self.criterion = nn.NLLLoss()  # Negative log-likelihood loss
        # Initializing the weight parameters for the embedding layer in the encoder.
        self.encoder.init_embedding_weights(self.dictionary, self.embedding_index, self.config.emsize)
Now for testing, I wrote the following in my test function.
def test(model, batch_sentence):
    if model.config.model == 'LSTM':
        encoder_hidden, encoder_cell = model.encoder.init_weights(batch_sentence.size(0))
        output, hidden = model.encoder(batch_sentence, (encoder_hidden, encoder_cell))
    else:
        encoder_hidden = model.encoder.init_weights(batch_sentence.size(0))
        output, hidden = model.encoder(batch_sentence, encoder_hidden)
In the first line of the test function, I am getting the following error.
AttributeError: type object 'object' has no attribute '__getattr__'
Any idea why it is not working?
|
st118557
|
I solved the problem. When I use DataParallel, the object belongs to DataParallel instead of my custom class, and as a result I was having the issue.
|
st118558
|
So how did you extract it out of the DataParallel class before saving it?
Thanks.
|
st118559
|
I saved the model with DataParallel. After loading, I used the model as a module. If you print the model after loading it with DataParallel, you will see the model is inside the DataParallel, which acts like a wrapper. You can access the wrapped model through its module attribute. Moreover, you can also load the model without DataParallel even though you trained and saved it with DataParallel.
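A minimal sketch of what that looks like (model here stands in for the Sequence2Sequence instance from earlier in the thread):
wrapped = torch.nn.DataParallel(model)
print(wrapped)              # shows the original model nested inside DataParallel(...)
inner = wrapped.module      # the original Sequence2Sequence module
encoder = inner.encoder     # custom attributes are reachable again through .module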
|
st118560
|
Hi everyone,
When I am trying to install PyTorch on a Mac, I encounter something like this:
/Users/shiyu/Desktop/pytorch-master/torch/lib/THCUNN/generic/FusedRNNKernel.cu(497): error: explicit instantiation definition directive for global functions with clang host compiler is not yet supported
Does anyone have an idea why this happens? I have installed PyTorch on this Mac with a Titan X before, but when I reinstall it I encounter this error. Any idea would be helpful.
|
st118561
|
It looks like https://github.com/pytorch/pytorch/pull/1119/files introduced this issue. I will look into it.
|
st118562
|
Same error: global functions with clang host compiler is not yet supported!
My GPU is a Pascal Titan X, the OS is macOS 10.12.4, the CUDA version is 8.0.61, and clang is 8.0.0!
|
st118563
|
Fix is now merged in top of tree. Please let me know if you have any further issues related to this.
|
st118564
|
I was following the Generating Names with a Character-Level RNN [1] tutorial. The training runs fine on the CPU, but when I switch to the GPU, I get the following error.
All I added is one line, rnn.cuda(), after instantiating the model as rnn = RNN().
Traceback (most recent call last):
File "char_rnn_generation_tutorial.py", line 323, in <module>
output, loss = train(*randomTrainingSet())
File "char_rnn_generation_tutorial.py", line 276, in train
output, hidden = rnn(category_tensor, input_line_tensor[i], hidden)
File "/home/paarulakan/environments/python/pytorch-py35/lib/python3.5/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "char_rnn_generation_tutorial.py", line 157, in forward
hidden = self.i2h(input_combined)
File "/home/paarulakan/environments/python/pytorch-py35/lib/python3.5/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/paarulakan/environments/python/pytorch-py35/lib/python3.5/site-packages/torch/nn/modules/linear.py", line 54, in forward
return self._backend.Linear()(input, self.weight, self.bias)
File "/home/paarulakan/environments/python/pytorch-py35/lib/python3.5/site-packages/torch/nn/_functions/linear.py", line 10, in forward
output.addmm_(0, 1, input, weight.t())
TypeError: addmm_ received an invalid combination of arguments - got (int, int, torch.FloatTensor, torch.cuda.FloatTensor), but expected one of:
* (torch.FloatTensor mat1, torch.FloatTensor mat2)
* (torch.SparseFloatTensor mat1, torch.FloatTensor mat2)
* (float beta, torch.FloatTensor mat1, torch.FloatTensor mat2)
* (float alpha, torch.FloatTensor mat1, torch.FloatTensor mat2)
* (float beta, torch.SparseFloatTensor mat1, torch.FloatTensor mat2)
* (float alpha, torch.SparseFloatTensor mat1, torch.FloatTensor mat2)
* (float beta, float alpha, torch.FloatTensor mat1, torch.FloatTensor mat2)
* (float beta, float alpha, torch.SparseFloatTensor mat1, torch.FloatTensor mat2)
[1] http://pytorch.org/tutorials/intermediate/char_rnn_generation_tutorial.html#Creating-the-Network
|
st118565
|
this will give you a hint to solve your issue:
got (int, int, torch.FloatTensor, torch.cuda.FloatTensor), but expected one of:
|
st118566
|
Hi smth,
I tried tweaking the dimensions and changing the data/target pairs into CUDA variables. I get new kinds of errors.
Can you please point out where to look? I’m including the model here for completeness.
class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        self.i2h = nn.Linear(n_categories + input_size + hidden_size, hidden_size)
        self.i2o = nn.Linear(n_categories + input_size + hidden_size, output_size)
        self.o2o = nn.Linear(hidden_size + output_size, output_size)
        self.dropout = nn.Dropout(0.1)
        self.softmax = nn.LogSoftmax()
    def forward(self, category, input, hidden):
        input_combined = torch.cat((category, input, hidden), 1)
        hidden = self.i2h(input_combined)
        output = self.i2o(input_combined)
        output_combined = torch.cat((hidden, output), 1)
        output = self.o2o(output_combined)
        output = self.dropout(output)
        output = self.softmax(output)
        return output, hidden
    def initHidden(self):
        return Variable(torch.zeros(1, self.hidden_size))
|
st118567
|
I know how to use Embedding(), but it is used with sequences; in my case the input is a 2-D matrix
[49x512]
and I want the weight matrix to be [512x128],
so that the result is [49x128].
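If I understand the setup correctly, one way to sketch this is to treat the 512x128 weight as a bias-free linear layer (the names here are made up for illustration):
import torch
import torch.nn as nn
from torch.autograd import Variable
x = Variable(torch.randn(49, 512))        # the 49x512 input matrix
proj = nn.Linear(512, 128, bias=False)    # learnable weight mapping 512 -> 128
out = proj(x)                             # 49x128
# or, with an explicit weight matrix and a matrix multiply:
W = nn.Parameter(torch.randn(512, 128))
out2 = torch.mm(x, W)                     # 49x128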
|