st117668
|
I think the easiest way to get a feel for these things would be to play around with a batch of data at the interactive prompt, seeing the sizes that come out of calls to Linear, Conv1D, and LSTM modules; you’ll want to write a forward method for your model that passes the data around between those modules and uses .view to reshape tensors.
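A minimal sketch of that kind of interactive poking, with made-up sizes (the exact numbers are only for illustration):
import torch
import torch.nn as nn
from torch.autograd import Variable

x = Variable(torch.randn(8, 16, 50))            # hypothetical batch: 8 sequences, 16 steps, 50 features

lstm = nn.LSTM(50, 30, batch_first=True)
out, (h, c) = lstm(x)
print(out.size())                               # (8, 16, 30): one 30-dim vector per step

conv = nn.Conv1d(50, 32, kernel_size=3)         # Conv1d wants (batch, channels, length)
print(conv(x.transpose(1, 2)).size())           # (8, 32, 14)

fc = nn.Linear(30, 5)                           # Linear wants a 2D (N, in_features) input on older versions
print(fc(out.contiguous().view(-1, 30)).size()) # (8*16, 5); .view does the reshaping mentioned above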
|
st117669
|
Thank you @jekbradbury . I will spend more time playing with the different modules
|
st117670
|
@jekbradbury
Is it ever possible to do this:
model = nn.Sequential(
    nn.LSTM(...),
    # since LSTM will return a tuple, anything here in the middle to make this whole thing work?
    nn.Linear(...)
)
|
st117671
|
This is how you use it with nn.Sequential. I haven’t figured out how to supply the initial h0 yet, so don’t pass h0 in; it will be initialized to the default.
rnn = nn.Sequential(
    nn.LSTM(10, 20, 2)
)
input = Variable(torch.randn(100, 3, 10))
h0 = Variable(torch.randn(2, 3, 20))
c0 = Variable(torch.randn(2, 3, 20))
output, hn = rnn(input)
If you want to pass h0 (and you must pass c0 along with h0), you should probably keep the LSTM outside nn.Sequential:
lstm = nn.LSTM(10, 20, 2)
input = Variable(torch.randn(100, 3, 10))
h0 = Variable(torch.randn(2, 3, 20))
c0 = Variable(torch.randn(2, 3, 20))
output, hn = lstm(input, (h0, c0))  # omit both h0 and c0, or pass a tuple of both (h0, c0)
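One common workaround (a sketch, not the only way) is a small wrapper module that swallows the LSTM tuple, so the whole thing can live inside nn.Sequential; the class name and sizes here are made up:
import torch
import torch.nn as nn
from torch.autograd import Variable

class LSTMOutput(nn.Module):
    """Hypothetical wrapper: run an LSTM, drop the (h_n, c_n) tuple, and return
    only the last time step so the result can feed an nn.Linear directly."""
    def __init__(self, *args, **kwargs):
        super(LSTMOutput, self).__init__()
        self.lstm = nn.LSTM(*args, **kwargs)

    def forward(self, x):
        out, _ = self.lstm(x)
        return out[-1]          # (batch, hidden_size) for seq-first input

model = nn.Sequential(
    LSTMOutput(10, 20, 2),
    nn.Linear(20, 5),
)
y = model(Variable(torch.randn(100, 3, 10)))    # y has size (3, 5)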
|
st117672
|
I am receiving an out of memory error after successfully finishing the first epoch (both the training and evaluation stages completed). How is it possible that I run out of memory only after the first epoch? I use images of the same size and the same batch_size. Is there some sort of garbage collection?
|
st117673
|
Fixed that by using the volatile flag on the inputs during the evaluation phase. I guess this can be closed.
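A rough sketch of that volatile pattern (pre-0.4 API): marking evaluation inputs as volatile tells autograd not to build a graph, so evaluation does not hold on to intermediate buffers. The names val_loader and model here are assumptions, not from the original post:
from torch.autograd import Variable

for images, labels in val_loader:                # val_loader and model assumed to exist
    images = Variable(images.cuda(), volatile=True)
    outputs = model(images)                      # no graph is kept, so far less memory is used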
|
st117674
|
In the forward function of my model I extract the output x of the first layer conv1, run some operations on it, and then put the result back. This raises RuntimeError: std::bad_cast. What is the reason? What should I do?
class ResNet(nn.Module):
    def __init__(self):
        ...
    def forward(self, x):
        x = self.conv1(x)
        net = x.data
        w = self.conv1.weight.data
        sigma = conv_forward_naive(x1, w)
        net = myactive(net, sigma)
        x.data = net
        x = self.bn1(x)
Here is the error message:
> File "/home/zl/anaconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 866, in runfile
> execfile(filename, namespace)
> File "/home/zl/anaconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 94, in execfile
> builtins.execfile(filename, *where)
> File "/home/zl/mycode/imagefolder/imagefolder.py", line 182, in <module>
> num_epochs=10)
> File "/home/zl/mycode/imagefolder/imagefolder.py", line 103, in train_model
> outputs = model(inputs)
> File "/home/zl/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
> result = self.forward(*input, **kwargs)
> File "/home/zl/anaconda2/lib/python2.7/site-packages/torchvision/models/resnet.py", line 256, in forward
> x = self.bn1(x)
> File "/home/zl/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
> result = self.forward(*input, **kwargs)
> File "/home/zl/anaconda2/lib/python2.7/site-packages/torch/nn/modules/batchnorm.py", line 43, in forward
> self.training, self.momentum, self.eps)
> File "/home/zl/anaconda2/lib/python2.7/site-packages/torch/nn/functional.py", line 463, in batch_norm
> return f(input, weight, bias)
> RuntimeError: std::bad_cast
|
st117675
|
When I ran the following code, I got an error message:
import torch
from torch.autograd import Variable
tensor = torch.FloatTensor([[1,2],[3,4]])
variable_false = Variable(tensor) # can't compute gradients
variable_true = Variable(tensor, requires_grad=True)
# tensor operations
t_out = torch.mean(tensor*tensor)
# variable operations
v_out_false = torch.mean(variable_false*variable_false)
v_out_true = torch.mean(variable_true*variable_true)
# backpropagation
v_out_false.backward()
RuntimeError: there are no graph nodes that require computing gradients
I know I should have added requires_grad=True, but here my questions are:
the graph is computational graph for calculating gradients, right?
how can such graph and its nodes be accessed?
Thanks!
|
st117676
|
The mentioned graph is the graph that contains all the computations that were used to get to the final Variable. This is the graph that is used to determine which gradients should be computed with backprop.
You can see an example of how to traverse this graph for visualization purposes here: https://github.com/szagoruyko/functional-zoo/blob/master/visualize.py
|
st117677
|
Your v_out_false variable isn’t connected to any Variables which need gradients. If you did v_out_true.backward() it would work.
Yes
The graph is constructed implicitly through Variable.grad_fn and Function.next_functions. In the current release I believe grad_fn is still called creator, but it has been changed in master. Check out this pull request. Unfortunately I don’t believe it’s possible to reconstruct the forward graph without going through some hoops, but as @albanD mentioned above you can reconstruct the backward graph.
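For illustration, a rough sketch of walking the backward graph by hand. Attribute names vary by release (creator/previous_functions in 0.1.x, grad_fn/next_functions on master), and loss is assumed to be some scalar Variable you already computed:
def walk(fn, depth=0):
    # Print each backward function, then recurse into the functions that produced its inputs.
    if fn is None:
        return
    print('  ' * depth + type(fn).__name__)
    children = getattr(fn, 'next_functions', None) or getattr(fn, 'previous_functions', [])
    for child, _ in children:
        walk(child, depth + 1)

root = getattr(loss, 'grad_fn', None) or getattr(loss, 'creator', None)  # loss: your final scalar Variable
walk(root)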
|
st117678
|
Now I use BCELoss to train a net ,and get a TypeError:
TypeError: CudaBCECriterion_updateOutput received an invalid combination of arguments
- got (int, torch.cuda.FloatTensor, torch.cuda.FloatTensor,
torch.cuda.FloatTensor, torch.cuda.FloatTensor),
but expected (int state, torch.cuda.FloatTensor input, torch.cuda.FloatTensor target,
torch.cuda.FloatTensor output, bool sizeAverage, [torch.cuda.FloatTensor weights or None])
It seems that the types of the prediction and the label are not correct, but I’m sure I made them the same data type, and from the above error message the data types also agree with the expected parameter types of the function. So what is the real message the TypeError is trying to convey?
|
st117679
|
hmm this is a bit weird. Is there a code snippet I can see to reproduce this error?
|
st117680
|
The below code produces your error ‘TypeError: CudaBCECriterion_updateOutput received …’
batch_size = 64
num_classes = 17
C,H,W = 3,256,256

inputs = torch.randn(batch_size,C,H,W)
labels = torch.randn(batch_size,num_classes)
in_shape = inputs.size()[1:]

if 1:
    net = densenet121(in_shape=in_shape, num_classes=num_classes).cuda().train()
    x = Variable(inputs)
    logits, probs = net.forward(x.cuda())
    loss = nn.MultiLabelSoftMarginLoss()(logits, Variable(labels))
    loss.backward()
    print(type(net))
    print(net)
    print(probs)
The error can be corrected by
loss = nn.MultiLabelSoftMarginLoss()(logits, Variable(labels.cuda()))
|
st117681
|
I’m trying to implement an LSTM with layer normalization, but I’m getting an error when I run loss.backward(). If I remove the LayerNormalizations that I’ve created it runs fine. I guess I didn’t set up layer normalization correctly, but I’m still new to PyTorch so any help would be appreciated!
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-28-9c20ee070d0a> in <module>()
17
18 for epoch in range(1, n_epochs + 1):
---> 19 loss = train(*random_training_set())
20 loss_avg += loss
21
<ipython-input-17-d72503b5d7db> in train(inp, target)
9 loss += criterion(output, target[c])
10
---> 11 loss.backward()
12 decoder_optimizer.step()
13
/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.pyc in backward(self, gradient, retain_variables)
144 'or with gradient w.r.t. the variable')
145 gradient = self.data.new().resize_as_(self.data).fill_(1)
--> 146 self._execution_engine.run_backward((self,), (gradient,), retain_variables)
147
148 def register_hook(self, hook):
/usr/local/lib/python2.7/dist-packages/torch/autograd/_functions/reduce.pyc in backward(self, grad_output)
95 grad_input_val = grad_output[0]
96 grad_input_val /= reduce(lambda x, y: x * y, self.input_size, 1)
---> 97 return grad_output.new(*self.input_size).fill_(grad_input_val)
98 else:
99 repeats = [1 for _ in self.input_size]
TypeError: fill_ received an invalid combination of arguments - got (torch.cuda.FloatTensor), but expected (float value)
Here is my code:
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
from torch.nn.parameter import Parameter

class LayerNormalization(nn.Module):
    def __init__(self, hidden_size, eps=1e-5):
        super(LayerNormalization, self).__init__()
        self.eps = eps
        self.hidden_size = hidden_size
        self.a2 = nn.Parameter(torch.ones(1, hidden_size), requires_grad=True)
        self.b2 = nn.Parameter(torch.zeros(1, hidden_size), requires_grad=True)

    def forward(self, z):
        mu = torch.mean(z)
        sigma = torch.std(z)
        ln_out = (z - mu.expand_as(z)) / (sigma.expand_as(z) + self.eps)
        ln_out = ln_out * self.a2 + self.b2
        return ln_out

class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, embed_size, output_size):
        super(LSTM, self).__init__()
        self.hidden_size = hidden_size
        # input embedding
        self.encoder = nn.Embedding(input_size, embed_size)
        # lstm weights
        self.weight_fh = nn.Linear(hidden_size, hidden_size)
        self.weight_ih = nn.Linear(hidden_size, hidden_size)
        self.weight_ch = nn.Linear(hidden_size, hidden_size)
        self.weight_oh = nn.Linear(hidden_size, hidden_size)
        self.weight_fx = nn.Linear(embed_size, hidden_size)
        self.weight_ix = nn.Linear(embed_size, hidden_size)
        self.weight_cx = nn.Linear(embed_size, hidden_size)
        self.weight_ox = nn.Linear(embed_size, hidden_size)
        # decoder
        self.decoder = nn.Linear(hidden_size, output_size)
        # layer normalization
        self.lnx = LayerNormalization(hidden_size)
        self.lnh = LayerNormalization(hidden_size)
        self.lnc = LayerNormalization(hidden_size)

    def forward(self, inp, h_0, c_0):
        # encode the input characters
        inp = self.encoder(inp)
        # forget gate
        f_g = F.sigmoid(self.lnx(self.weight_fx(inp)) + self.lnh(self.weight_fh(h_0)))
        # input gate
        i_g = F.sigmoid(self.lnx(self.weight_ix(inp)) + self.lnh(self.weight_ih(h_0)))
        # intermediate cell state
        c_tilda = F.tanh(self.lnx(self.weight_cx(inp)) + self.lnh(self.weight_ch(h_0)))
        # current cell state
        cx = f_g * c_0 + i_g * c_tilda
        # output gate
        o_g = F.sigmoid(self.lnx(self.weight_ox(inp)) + self.lnh(self.weight_oh(h_0)))
        # hidden state
        hx = o_g * F.tanh(self.lnc(cx))
        out = self.decoder(hx.view(1,-1))
        return out, hx, cx

    def init_hidden(self):
        h_0 = Variable(torch.zeros(1, self.hidden_size)).cuda()
        c_0 = Variable(torch.zeros(1, self.hidden_size)).cuda()
        return h_0, c_0
|
st117682
|
Can you post a complete script? It’s hard to tell what went wrong just from your snippet.
|
st117683
|
I just replaced the GRU in this notebook with the code that I posted above. I also had to pass hx and cx instead of just ‘hidden’ and fix output view in the loss criterion.
GitHub
spro/practical-pytorch 366
PyTorch tutorials demonstrating modern techniques with readable code - spro/practical-pytorch
|
st117684
|
I tried your LayerNormalization Class. It worked by changing the line :
ln_out = ln_out * self.a2 + self.b2
to :
ln_out = ln_out * self.a2.expand_as(z) + self.b2.expand_as(z)
|
st117685
|
If I’m not mistaken this does the same as nn.InstanceNormalization with affine=True, if you want to go with the stock classes.
Best regards
Thomas
|
st117686
|
For pytorch’s pace, a month isn’t all that new. Relative to your original question, it’s not all that old. Kudos to the people doing all the stuff.
Best regards
Thomas
|
st117687
|
As tom already pointed out, you are doing batch normalization, not layer normalization! Maybe you could change the topic name to reflect that?
|
st117688
|
I don’t think that I am doing batch normalization. If you look at the supplementary material in the layer normalization paper my equations match theirs.
https://arxiv.org/abs/1607.06450 113
|
st117689
|
Layer normalization uses all the activations of a single instance for normalization, while batch normalization uses the whole batch for each activation. OK, but you didn’t normalize per neuron, so it was a mix of both. So we were both right and wrong (sorry for the confusion).
Unless I missed something, you should use
def forward(self, z):
    mu = torch.mean(z, dim=1)
    sigma = torch.std(z, dim=1)
if your input is (B, N), with B the batch size and N the input size, and you want to do layer normalization.
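Putting the pieces of this thread together, a sketch of the per-sample variant for a (B, N) input; the .view(-1, 1) is there because whether the reduced dimension is kept differs between releases, so treat the details as an assumption:
import torch
import torch.nn as nn

class LayerNorm1d(nn.Module):
    """Per-sample layer normalization over the feature dimension of a (B, N) input."""
    def __init__(self, hidden_size, eps=1e-5):
        super(LayerNorm1d, self).__init__()
        self.eps = eps
        self.a2 = nn.Parameter(torch.ones(1, hidden_size))
        self.b2 = nn.Parameter(torch.zeros(1, hidden_size))

    def forward(self, z):
        mu = torch.mean(z, 1).view(-1, 1)       # one mean per row
        sigma = torch.std(z, 1).view(-1, 1)     # one std per row
        out = (z - mu.expand_as(z)) / (sigma.expand_as(z) + self.eps)
        return out * self.a2.expand_as(out) + self.b2.expand_as(out)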
|
st117690
|
I’m getting the following error. I have no concrete idea what might be triggering it. Any suggestions on what to look for?
loss.backward()
File "anaconda2/lib/python2.7/site-packages/torch/autograd/variable.py", line 146, in backward
self._execution_engine.run_backward((self,), (gradient,), retain_variables)
File "anaconda2/lib/python2.7/site-packages/torch/autograd/_functions/tensor.py", line 22, in backward
grad_input._set_index(self.index, grad_output)
RuntimeError: tensor must have one dimension at /py/conda-bld/pytorch_1493669264383/work/torch/lib/TH/generic/THTensor.c:814
|
st117691
|
No, it seems to have the right dimensions. I will try to explain in words what I’m doing and then generate a minimal example; I am working on one, but I couldn’t get it to fail in the exact same way yet.
I’m learning a model for sequence tagging that predicts tags sequentially. The model uses word embeddings, and tag embeddings for the previous predictions. The main aspect is that the scores used for prediction are computed cumulatively. The training losses are computed based on these scores. The scores for the first and second predictions are $s_1 = s(x, \emptyset; w)$ and $s_2 = s(x, \hat{y}_1; w) + s_1$, respectively. $x$ is the sentence and $\hat{y}_1$ is the prediction induced by $s_1$. The loss $\ell$ is computed based on the scores (and some gold data that I’m omitting because it is not important here). Doing backward on $\ell(s_1)$ works, but backward on $\ell(s_2)$ fails. I do a sequence of backward calls and take a step only at the end of the sequence.
I tried various things, but still got the same error: updating with each predicted tag; doing retain_variables=True; accumulating all losses and doing backward only at the end. I suppose that this is due to me not fully understanding what you can and cannot do with these computational graphs. What is the best way of training with cumulative scores on PyTorch?
I’m running on the CPU and using version 0.1.12_2.
Thanks a lot for looking into this.
|
st117692
|
The minimal example that I came up with fails in a different way.
import torch as torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
(n, h, m) = (3, 4, 5)
fc1 = nn.Linear(n, h)
fc2 = nn.Linear(h, m)
emb = nn.Embedding(3, n)
f = lambda z, i: fc2( F.relu( fc1( emb( z[i:i + 2] ).sum(0).view(1, -1) ) ) )
x = torch.LongTensor([0, 1, 2])
y1 = torch.LongTensor([0])
y2 = torch.LongTensor([1])
y1_p = f( Variable( x ), 0 )
l1 = nn.CrossEntropyLoss()(y1_p, Variable( y1 ) )
l1.backward()
y2_p = y1_p + f(Variable( x ), 1)
l2 = nn.CrossEntropyLoss()(y2_p, Variable( y2 ) )
l2.backward()
This yields RuntimeError: Trying to backward through the graph second time, but the buffers have already been freed. Please specify retain_variables=True when calling backward for the first time., which is solved by doing l1.backward(retain_variables=True) rather than l1.backward().
|
st117693
|
this error is totally expected. After you do l1.backward(), the graph is freed and you cannot do further operations on parts of the graph (it’s already freed/deleted).
Here’s correct code:
import torch as torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
(n, h, m) = (3, 4, 5)
fc1 = nn.Linear(n, h)
fc2 = nn.Linear(h, m)
emb = nn.Embedding(3, n)
f = lambda z, i: fc2( F.relu( fc1( emb( z[i:i + 2] ).sum(0).view(1, -1) ) ) )
x = torch.LongTensor([0, 1, 2])
y1 = torch.LongTensor([0])
y2 = torch.LongTensor([1])
y1_p = f( Variable( x ), 0 )
l1 = nn.CrossEntropyLoss()(y1_p, Variable( y1 ) )
y2_p = y1_p + f(Variable( x ), 1)
l2 = nn.CrossEntropyLoss()(y2_p, Variable( y2 ) )
(l1+l2).backward()
# alternatively
# torch.autograd.backward([l1, l2], [l1.data.new([1]), l2.data.new([1])])
|
st117694
|
Thanks. It did not work. I tried that before, and tried it again now, and still got the same error: RuntimeError: tensor must have one dimension at /py/conda-bld/pytorch_1493669264383/work/torch/lib/TH/generic/THTensor.c:814. I still could not get a minimal example to reproduce this error. The dimensions all seem to agree and the score computations work until the end of the sequence. When calling backward, it fails though. Any tips? It must be a different issue.
|
st117695
|
I think this was a bug that is now fixed in master and will be in the next release, a week to two weeks away.
If you want an immediate fix, do consider building from master: https://github.com/pytorch/pytorch#from-source
We really apologize for having to make you build from source for the short-term fix.
p.s.: I ran the script (my corrected version) and it works on master.
|
st117696
|
Thanks for all the help. I just installed from source. I still have the same issue. I will keep looking. Any suggestions are appreciated.
File "anaconda2/lib/python2.7/site-packages/torch/autograd/variable.py", line 151, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
File "anaconda2/lib/python2.7/site-packages/torch/autograd/__init__.py", line 98, in backward
variables, grad_variables, retain_graph)
File "anaconda2/lib/python2.7/site-packages/torch/autograd/function.py", line 90, in apply
return self._forward_cls.backward(self, *args)
File "anaconda2/lib/python2.7/site-packages/torch/autograd/function.py", line 183, in wrapper
outputs = fn(ctx, *tensor_args)
File "anaconda2/lib/python2.7/site-packages/torch/autograd/_functions/tensor.py", line 276, in backward
grad_tensor.index_add_(ctx.dim, index, grad_output)
RuntimeError: tensor must have one dimension at pytorch/torch/lib/TH/generic/THTensor.c:814
Version: 0.1.12+4eb448a ; installed for CPU, i.e., no GPU support.
|
st117697
|
For some reason grad_output has two dimensions, i.e., len(grad_output.size()) is two. Any ideas on why this is the case? The loss Variable on which I’m calling backward has a single dimensions with one element.
|
st117698
|
Okay, so the repro script you gave me is not the same as the script that you are running. I understand if it’s part of a larger code base that you can’t share. Step through in pdb and see what the size of grad_output is at tensor.py:276 as shown in the stack trace.
|
st117699
|
Yeah, the research code base that I’m working on is larger and it is hard to extract a single example. The minimal example tried to capture my understanding of what might have been wrong. I discovered a few bugs in my code related to dimensions; it was a combination of bugs that led to that issue. One of the most striking ones was using sum along an axis followed by repeat; I forgot to account for the fact that the axis summed over disappears. Most errors were in parts of the code that I wrote a reasonable amount of time ago and didn’t think much of. I figured things out with a combination of line debugging and print statements. The first error message that I got was definitely opaque. Thanks for all the help.
|
st117700
|
When I train a model, I set batchsize to 28 and find that it works fine. However, when I use a dataloader to generate the training data, I get ‘cuda runtime error (2) : out of memory’ and have to decrease the batchsize. I am not sure whether the dataloader is the reason for this problem. Can someone help me?
|
st117701
|
The two upsampling methods seem to be UpsamplingNearest2d and UpsamplingBilinear2d: http://pytorch.org/docs/_modules/torch/nn/modules/upsampling.html 11
Does one of these methods work better than the other for upsampling a word/text embedding to size of a larger image embedding for the purpose of combining (via add or concat) the text embedding & image embedding?
|
st117702
|
I tried both and also tried simply repeating the whole text embedding.
Repeating worked best, followed by Bilinear, followed by NearestNeighbor.
|
st117703
|
Hello,
I have downloaded a pre-trained CRNN (convolutional recurrent neural network) for image-based sequence recognition. Here is the pre-trained model and the related code for the model.
Now I want to fine-tune the model, so I’m wondering how I can do that. The idea is to remove the last classification layer and add a new one. How can I do that for this CRNN?
The pre-trained CRNN has alphabet="0123456789abcdefghijklmnopqrstuvwxyz"
and I want to fine-tune the model on the following alphabet:
alphabet="0123456789abcdefghijklmnopqrstuvwxyz,;:!%@)(/.°"
Thank you.
|
st117704
|
Search on this forum for “finetuning”; there are a few threads that explore that.
I don’t think giving you particular solutions for your specific code is feasible (unless someone here has a lot of free time).
|
st117705
|
Looks like you already have an issue on the implementor’s GitHub page: https://github.com/meijieru/crnn.pytorch/issues/23 364
I responded there with a solution to your problem. (Or at least, a solution to one of your problems, can’t guarantee the rest of your code will run).
Probably best to keep any further discussion to implementor’s GitHub since this is pretty specific to that code base.
Best,
Stephen
|
st117706
|
I want to create a PyTorch tutorial using MNIST data set.
In TensorFlow, there is a simple way to download, extract and load the MNIST data set as below.
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("./mnist/data/")
x_train = mnist.train.images # numpy array
y_train = mnist.train.labels
x_test = mnist.test.images
y_test = mnist.test.labels
Is there any simple way to handle this in PyTorch?
|
st117707
|
It seems the MNIST data set is supported in torchvision.datasets.
I was confused because the PyTorch documentation does not mention MNIST.
|
st117708
|
Yes, it’s already there - see here
http://pytorch.org/docs/data.html 253
and here,
http://pytorch.org/docs/torchvision/datasets.html#mnist 272
The code looks something like this,
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=args.batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=args.batch_size, shuffle=True, **kwargs)
|
st117709
|
How do you subset the MNIST training data? It’s 60,000 images, how can you reduce it to say 2000?
Here’s the code
>>> from torchvision import datasets, transforms
>>>
>>>
>>> train_all_mnist = datasets.MNIST('../data', train=True, download=True,
... transform=transforms.Compose([
... transforms.ToTensor(),
... transforms.Normalize((0.1307,), (0.3081,))
... ]))
Files already downloaded
>>> train_all_mnist
<torchvision.datasets.mnist.MNIST object at 0x7f89a150cfd0>
How do I subset train_all_mnist ?
Or, alternatively I could just download it again, and hack this line to 2000,
https://github.com/pytorch/vision/blob/master/torchvision/datasets/mnist.py#L64 91
It’s a bit ugly - anyone know a neater way to do this?
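One possibly neater way (a sketch): newer torch versions ship a SubsetRandomSampler, which restricts the DataLoader to a fixed set of indices without touching the dataset class or re-downloading anything. The indices and batch size below are made up:
import torch
from torch.utils.data.sampler import SubsetRandomSampler

indices = list(range(2000))                      # first 2000 examples; could also be a random choice
small_loader = torch.utils.data.DataLoader(
    train_all_mnist,                             # the dataset object built above
    batch_size=64,
    sampler=SubsetRandomSampler(indices))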
|
st117710
|
I’m interested in Omniglot, which is like an inverse, MNIST, lots of classes, each with a small number of examples.
Take a look, here
GitHub
brendenlake/omniglot 47
Omniglot data set for one-shot learning. Contribute to brendenlake/omniglot development by creating an account on GitHub.
By the way - thank you for your tutorials - they are very clear and helpful to learn from.
Best regards,
Ajay
|
st117711
|
omniglot is in this Pull Request:
github.com/pytorch/vision: “OMNIGLOT Dataset” (pytorch:master ← ludc:master, opened Jan 25, 2017 by ludc, +148 / -2)
|
st117712
|
Ha, thank you.
Spent an hour hacking together my own loader - but this looks better!
Seems to be the easiest data set for experimenting with one-shot learning?
|
st117713
|
Whats the current best methodology for Omniglot? Who or what’s doing the best at the moment?
|
st117714
|
@pranv set the record on Omniglot recently with his paper:
Attentive Recurrent Comparators
https://arxiv.org/abs/1703.00767 30
|
st117715
|
Thanks for that.
It looks like the DRAW I implemented in Torch years ago, without the VAE and the decoder/generative canvas.
I thought you might like this implementation of a GAN on Omniglot:
Code for training a GAN on the Omniglot dataset using the network described in “Task Specific Adversarial Cost Function” (arxiv.org/abs/1609.08661), code at github.com/ToniCreswell/piGAN.
|
st117716
|
Nope sorry - been totally snowed under the past couple of months - not had any time to work on it.
If you’re referring to the alternative cost functions for GANs I don’t think they make much difference?
If you’re referring to non-Gaussian attention mechanisms for the DRAW encoder, I don’t know of any better approach than @pranv’s, as mentioned above. I think he’s open sourced his code?
Cheers,
Aj
|
st117717
|
The code for Attentive Recurrent Comparators is here: https://github.com/pranv/ARC 24
It includes Omniglot data downloading and iterating scripts along with all the models proposed in the paper (the nets are written and trained with Theano).
I will try to submit a PR for torchvision.datasets.Omniglot if I find some time
|
st117718
|
I am running a 2-layer CNN on the MNIST dataset and I am getting this error.
RuntimeError: size ‘[-1 x 3136]’ is invalid for input with 131072 elements at /py/conda-bld/pytorch_1490979338030/work/torch/lib/TH/THStorage.c:55
Model used
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, padding=2)
        # conv1 = 28x28x1 => 28x28x32
        # pooling = 28x28x32 => 14x14x32
        self.conv2 = nn.Conv2d(32, 64, 3, padding=2)
        # conv2 = 14x14x32 => 14x14x64
        # pooling = 14x14x64 => 7x7x64
        self.fc1 = nn.Linear(64*7*7, 1024)
        self.fc2 = nn.Linear(1024, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        # reshape Variable
        x = x.view(-1, 64*7*7)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return x
Can anyone tell me what am I doing wrong?
|
st117719
|
the expected input to this network is of size 28x28 images (batch x 1 x 28 x 28), but you are giving larger images.
Hence, at x = x.view(-1, 64*7*7) this operation is failing.
|
st117720
|
@smth his network works fine when kernel_size = 5. What does kernel_size mean? Is it the size of the filter?
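kernel_size is the spatial size of the convolution filter. For what it's worth, a quick sketch of the usual output-size formula shows why it matters here: with kernel_size=5 and padding=2 the 28x28 spatial size is preserved, so the 64*7*7 assumption holds after two poolings, while kernel_size=3 with padding=2 grows the feature maps:
def conv_out(w, k, p=0, s=1):
    # output width = floor((W + 2P - K) / S) + 1
    return (w + 2 * p - k) // s + 1

w = conv_out(28, 3, p=2)   # 30 after conv1 (kernel 3, padding 2)
w = conv_out(w, 2, s=2)    # 15 after max_pool2d(2)
w = conv_out(w, 3, p=2)    # 17 after conv2
w = conv_out(w, 2, s=2)    # 8  after the second pooling
print(64 * w * w)          # 4096 per sample, not 3136; 32 * 4096 = 131072, matching the error above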
|
st117721
|
Solved the error. Added x.size() before the Linear layer in the forward function to get the size. Simple hack
|
st117722
|
Some papers run experiments on LSTMs with and without the output gate.
For example, this paper, Towards Universal Paraphrastic Sentence Embeddings, table 5 in section 4.4.
I would like to suggest an option to turn the output gate on/off.
|
st117723
|
Usually for any modification of the default LSTM module, we suggest that you write your own custom LSTM using nn.LSTMCell for example.
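For illustration, a minimal sketch of such a custom cell: standard LSTM gates minus the output gate, so h_t = tanh(c_t). The class name and all sizes are made up, and this is a sketch of the idea rather than a drop-in replacement for nn.LSTM:
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

class NoOutputGateLSTMCell(nn.Module):
    """Hypothetical LSTM cell with input, forget and candidate gates but no output gate."""
    def __init__(self, input_size, hidden_size):
        super(NoOutputGateLSTMCell, self).__init__()
        self.x2gates = nn.Linear(input_size, 3 * hidden_size)
        self.h2gates = nn.Linear(hidden_size, 3 * hidden_size)

    def forward(self, x, state):
        h, c = state
        gates = self.x2gates(x) + self.h2gates(h)
        i, f, g = gates.chunk(3, 1)
        c = F.sigmoid(f) * c + F.sigmoid(i) * F.tanh(g)
        h = F.tanh(c)                    # no output gate applied here
        return h, c

seq = Variable(torch.randn(5, 3, 10))    # (seq_len, batch, input_size)
cell = NoOutputGateLSTMCell(10, 20)
h = Variable(torch.zeros(3, 20))
c = Variable(torch.zeros(3, 20))
for t in range(seq.size(0)):             # unroll the cell over time yourself
    h, c = cell(seq[t], (h, c))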
|
st117724
|
My model's training speed gets slower over time (the first epoch takes only 20 minutes, but epoch 6 takes 50 minutes).
It is trained on a GPU.
Here is the source code with my preprocessed Sentiment Treebank data:
https://drive.google.com/file/d/0B10N16RArpisQ1hPVEdHRXF2UGM/view?usp=sharing
Download http://nlp.stanford.edu/data/glove.840B.300d.zip and put glove.840B.300d.txt into data/glove
Install some Python packages:
pip install meowlogtool
pip install tqdm
Command to run:
python sentiment.py --emblr 0 --rel_dim 0 --tag_dim 0 --optim adagrad --name basic --lr 0.05 --wd 1e-4 --at_hid_dim 0
Here is the code on GitHub for you to read without downloading (branch pytorch_forum). I don’t put the SST data on GitHub, so you have to download it from the link above: https://github.com/ttpro1995/treelstm.pytorch/tree/pytorch_forum
Model source code
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable as Var
import utils
import Constants
from model import SentimentModule
from embedding_model import EmbeddingModel
class SimpleGRU(nn.Module):
"""
w[i] : (300, 1)
h[i] : (150, 1)
p[i] : (20, 1)
r[i] : (20, 1)
k[i] : (150, 1)
x[i] : (20 + 150 + 300 + 20 = 490, 1) (490, 1)
Uz, Ur, Uh : (150, 150) => 67500 => (450, 450)
Wz, Wr, Wh : (150, 20 + 150 + 300 + 20) (150, 490)
"""
def __init__(self, cuda, in_dim, hid_dim):
super(SimpleGRU, self).__init__()
self.cudaFlag = cuda
self.Uz = nn.Linear(hid_dim, hid_dim)
self.Ur = nn.Linear(hid_dim, hid_dim)
self.Uh = nn.Linear(hid_dim, hid_dim)
self.Wz = nn.Linear(in_dim, hid_dim)
self.Wr = nn.Linear(in_dim, hid_dim)
self.Wh = nn.Linear(in_dim, hid_dim)
if self.cudaFlag:
self.Uz = self.Uz.cuda()
self.Ur = self.Uz.cuda()
self.Uh = self.Uz.cuda()
self.Wz = self.Wz.cuda()
self.Wr = self.Wr.cuda()
self.Wh = self.Wh.cuda()
def forward(self, x, h_prev):
"""
Simple-GRU(compress_x[v], h[t-1]) :
z[t] := s(Wz *compress_x[t]+ Uz * h[t-1] + bz)
r[t] := s(Wr * compress_x[t] + Ur * h[t-1] + br)
h_temp[t] := g(Wh * compress_x[t] + Uh * h[t-1] + bh)
h[t] := r[t] .* h[t-1] + (1 - z[t]) .* h_temp[t]
return h[t]
:param x: compress_x[t]
:param h_prev: h[t-1]
:return:
"""
z = F.sigmoid(self.Wz(x) + self.Uz(h_prev))
r = F.sigmoid(self.Wr(x) + self.Ur(h_prev))
h_temp = F.tanh(self.Wh(x) + self.Uh(h_prev))
h = r*h_prev + (1-z)*h_temp
return h
class TreeSimpleGRU(nn.Module):
def __init__(self, cuda, word_dim, tag_dim, rel_dim, mem_dim, at_hid_dim, criterion, leaf_h = None):
super(TreeSimpleGRU, self).__init__()
self.cudaFlag = cuda
# self.gru_cell = nn.GRUCell(word_dim + tag_dim, mem_dim)
self.gru_cell = SimpleGRU(self.cudaFlag, word_dim+tag_dim, mem_dim)
self.gru_at = GRU_AT(self.cudaFlag, word_dim + tag_dim + rel_dim + mem_dim, at_hid_dim ,mem_dim)
self.mem_dim = mem_dim
self.in_dim = word_dim
self.tag_dim = tag_dim
self.rel_dim = rel_dim
self.leaf_h = leaf_h # init h for leaf node
if self.leaf_h == None:
self.leaf_h = Var(torch.rand(1, self.mem_dim))
torch.save(self.leaf_h, 'leaf_h.pth')
if self.cudaFlag:
self.leaf_h = self.leaf_h.cuda()
self.criterion = criterion
self.output_module = None
def getParameters(self):
"""
Get flatParameters
note that getParameters and parameters is not equal in this case
getParameters do not get parameters of output module
:return: 1d tensor
"""
params = []
for m in [self.gru_cell, self.gru_at]:
# we do not get param of output module
l = list(m.parameters())
params.extend(l)
one_dim = [p.view(p.numel()) for p in params]
params = F.torch.cat(one_dim)
return params
def set_output_module(self, output_module):
self.output_module = output_module
def forward(self, tree, w_emb, tag_emb, rel_emb, training = False):
loss = Var(torch.zeros(1)) # init zero loss
if self.cudaFlag:
loss = loss.cuda()
for idx in xrange(tree.num_children):
_, child_loss = self.forward(tree.children[idx], w_emb, tag_emb, rel_emb, training)
loss = loss + child_loss
if tree.num_children > 0:
child_rels, child_k = self.get_child_state(tree, rel_emb)
if self.tag_dim > 0:
tree.state = self.node_forward(w_emb[tree.idx - 1], tag_emb[tree.idx -1], child_rels, child_k)
else:
tree.state = self.node_forward(w_emb[tree.idx - 1], None, child_rels, child_k)
elif tree.num_children == 0:
if self.tag_dim > 0:
tree.state = self.leaf_forward(w_emb[tree.idx - 1], tag_emb[tree.idx -1])
else:
tree.state = self.leaf_forward(w_emb[tree.idx - 1], None)
if self.output_module != None:
output = self.output_module.forward(tree.state, training)
tree.output = output
if training and tree.gold_label != None:
target = Var(utils.map_label_to_target_sentiment(tree.gold_label))
if self.cudaFlag:
target = target.cuda()
loss = loss + self.criterion(output, target)
return tree.state, loss
def leaf_forward(self, word_emb, tag_emb):
"""
Forward function for leaf node
:param word_emb: word embedding of current node u
:param tag_emb: tag embedding of current node u
:return: k of current node u
"""
h = self.leaf_h
if self.cudaFlag:
h = h.cuda()
if self.tag_dim > 0:
x = F.torch.cat([word_emb, tag_emb], 1)
else:
x = word_emb
k = self.gru_cell(x, h)
return k
def node_forward(self, word_emb, tag_emb, child_rels, child_k):
"""
Foward function for inner node
:param word_emb: word embedding of current node u
:param tag_emb: tag embedding of current node u
:param child_rels (tensor): rels embedding of child node v
:param child_k (tensor): k of child node v
:return:
"""
n_child = child_k.size(0)
h = Var(torch.zeros(1, self.mem_dim))
if self.cudaFlag:
h = h.cuda()
for i in range(0, n_child):
k = child_k[i]
x_list = [word_emb, k]
if self.rel_dim >0:
rel = child_rels[i]
x_list.append(rel)
if self.tag_dim > 0:
x_list.append(tag_emb)
x = F.torch.cat(x_list, 1)
h = self.gru_at(x, h)
k = h
return k
def get_child_state(self, tree, rels_emb):
"""
Get child rels, get child k
:param tree: tree we need to get child
:param rels_emb (tensor):
:return:
"""
if tree.num_children == 0:
assert False # never get here
else:
child_k = Var(torch.Tensor(tree.num_children, 1, self.mem_dim))
if self.rel_dim>0:
child_rels = Var(torch.Tensor(tree.num_children, 1, self.rel_dim))
else:
child_rels = None
if self.cudaFlag:
child_k = child_k.cuda()
if self.rel_dim > 0:
child_rels = child_rels.cuda()
for idx in xrange(tree.num_children):
child_k[idx] = tree.children[idx].state
if self.rel_dim > 0:
child_rels[idx] = rels_emb[tree.children[idx].idx - 1]
return child_rels, child_k
class AT(nn.Module):
"""
AT(compress_x[v]) := sigmoid(Wa * tanh(Wb * compress_x[v] + bb) + ba)
"""
def __init__(self, cuda, in_dim, hid_dim):
super(AT, self).__init__()
self.cudaFlag = cuda
self.in_dim = in_dim
self.hid_dim = hid_dim
self.Wa = nn.Linear(hid_dim, 1)
self.Wb = nn.Linear(in_dim, hid_dim)
if self.cudaFlag:
self.Wa = self.Wa.cuda()
self.Wb = self.Wb.cuda()
def forward(self, x):
out = F.sigmoid(self.Wa(F.tanh(self.Wb(x))))
return out
class GRU_AT(nn.Module):
def __init__(self, cuda, in_dim, at_hid_dim ,mem_dim):
super(GRU_AT, self).__init__()
self.cudaFlag = cuda
self.in_dim = in_dim
self.mem_dim = mem_dim
self.at_hid_dim = at_hid_dim
if at_hid_dim > 0:
self.at = AT(cuda, in_dim, at_hid_dim)
self.gru_cell = SimpleGRU(self.cudaFlag, in_dim, mem_dim)
if self.cudaFlag:
if at_hid_dim > 0:
self.at = self.at.cuda()
self.gru_cell = self.gru_cell.cuda()
def forward(self, x, h_prev):
"""
:param x:
:param h_prev:
:return: a * m + (1 - a) * h[t-1]
"""
m = self.gru_cell(x, h_prev)
if self.at_hid_dim > 0:
a = self.at.forward(x)
h = torch.mm(a, m) + torch.mm((1-a), h_prev)
else:
h = m
return h
class TreeGRUSentiment(nn.Module):
def __init__(self, cuda, in_dim, tag_dim, rel_dim, mem_dim, at_hid_dim, num_classes, criterion):
super(TreeGRUSentiment, self).__init__()
self.cudaFlag = cuda
self.tree_module = TreeSimpleGRU(cuda, in_dim, tag_dim, rel_dim, mem_dim, at_hid_dim, criterion)
self.output_module = SentimentModule(cuda, mem_dim, num_classes, dropout=True)
self.tree_module.set_output_module(self.output_module)
def get_tree_parameters(self):
return self.tree_module.getParameters()
def forward(self, tree, sent_emb, tag_emb, rel_emb, training = False):
# sent_emb = F.torch.unsqueeze(self.word_embedding.forward(sent_inputs), 1)
# tag_emb = F.torch.unsqueeze(self.tag_emb.forward(tag_inputs), 1)
# rel_emb = F.torch.unsqueeze(self.rel_emb.forward(rel_inputs), 1)
# sent_emb, tag_emb, rel_emb = self.embedding_model(sent_inputs, tag_inputs, rel_inputs)
tree_state, loss = self.tree_module(tree, sent_emb, tag_emb, rel_emb, training)
output = tree.output
return output, loss
Thank a lot.
|
st117725
|
Here is the result of running cProfile:
gist.github.com
https://gist.github.com/ttpro1995/7e5b8903cb3068245a0538a5f5247f7f 17
full_profiling
/home/vdvinh/miniconda3/envs/ml/bin/python /home/vdvinh/projects/profile_gru/sentiment.py
2017-05-11 10:43:53,153 : INFO : LOG_FILE
2017-05-11 10:43:53,154 : INFO : _________________________________start___________________________________
2017-05-11 10:43:53,156 : INFO : name default_log
2017-05-11 10:43:53,157 : INFO : batchsize 25
2017-05-11 10:43:53,157 : INFO : epochs 10
2017-05-11 10:43:53,157 : INFO : lr 0.05
2017-05-11 10:43:53,157 : INFO : emblr 0
2017-05-11 10:43:53,157 : INFO : tag_emblr 0
2017-05-11 10:43:53,157 : INFO : rel_emblr 0
I note that {method ‘run_backward’ of ‘torch._C._EngineBase’ objects} takes longer every epoch.
|
st117726
|
I found the cause. I wanted to randomly initialize leaf_h and keep it constant, so I created leaf_h as a Variable and saved it as a class attribute. As a result, for every sample, leaf_h (which is a Variable) logs its creator, so every computation graph that leaf_h participates in makes backward take longer. Since leaf_h is a class attribute, it is not destroyed after every sample or batch but stays around until the end, so the “history” attached to leaf_h is never freed.
My suggestion for fixing this:
Save self.leaf_h as a torch.Tensor:
self.leaf_h = torch.rand(1, self.mem_dim)
Whenever leaf_h is needed, initialize a temporary Variable:
temp_leaf_h = Var(leaf_h)
|
st117727
|
Hi,
What change needs to be made to an LSTM model to use a smaller batch size, say during evaluation, or for the last chunk of training data (< batch_size)?
|
st117728
|
I am trying to perform a sequence labelling task where I have initialized the embedding weights for each word using GloVe. I also want to incorporate character-level features into the model. I am able to train an LSTM over the character vectors and save its final state. Should I use this as the character representation instead?
How can I append a character vector of dim say 5, corresponding to each word in the sentence, dynamically (i.e., at run time) using PyTorch? So the final input will be of dim = glove_dim + character_dim.
P.S.: I am trying to recreate the code exercise suggested here.
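A minimal sketch of the concatenation step described above, with made-up sizes (300-dim GloVe, 10-dim character embeddings, 5-dim character state) and invented variable names:
import torch
import torch.nn as nn
from torch.autograd import Variable

word_vec = Variable(torch.randn(1, 300))         # pretrained GloVe embedding of one word
chars = Variable(torch.randn(1, 7, 10))          # 7 characters of that word, 10-dim char embeddings
char_lstm = nn.LSTM(10, 5, batch_first=True)

_, (h_n, _) = char_lstm(chars)                   # final hidden state as the character representation
word_input = torch.cat([word_vec, h_n.view(1, 5)], 1)   # (1, 305) vector fed to the tagging LSTM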
|
st117729
|
Thanks ! I implemented it last week after implementing the character level LSTM code.
|
st117730
|
Hi,
How do you calculate the validation loss per iteration?
Say you’ve your train_loader and test_loader.
Normally for training, to get the training loss per iteration we do loss = criterion(outputs, labels), where criterion = nn.CrossEntropyLoss().
For validation per iteration,
test_loss_iter = []
for images, labels in test_loader:
    images = Variable(images.view(-1, 784).cuda())
    outputs = net(images)
    test_loss = criterion(outputs, labels)
    test_loss_iter.append(test_loss)
iteration_test_loss = np.mean(test_loss_iter)
But I get error AttributeError: 'torch.LongTensor' object has no attribute 'requires_grad'
|
st117731
|
How do I initialize the parameters of BatchNorm2d in PyTorch? I mean the mean, variance, gamma and beta. I have a pretrained model in Keras with the TensorFlow backend. Another thing I want to mention is the shape of each learnable parameter:
mean = (64,)
variance = (64,)
gamma = (64,)
beta = (64,)
Thanks in advance for any response!
|
st117732
|
mean = self.running_mean
variance = self.running_var
gamma = self.weight
beta = self.bias
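A sketch of loading externally trained statistics (for example, arrays exported from Keras) into a BatchNorm2d layer using that mapping; the four NumPy arrays below are placeholders for your real values:
import numpy as np
import torch
import torch.nn as nn

gamma = np.ones(64, dtype='float32')     # stand-ins for the arrays exported from Keras
beta = np.zeros(64, dtype='float32')
mean = np.zeros(64, dtype='float32')
var = np.ones(64, dtype='float32')

bn = nn.BatchNorm2d(64)
bn.weight.data.copy_(torch.from_numpy(gamma))     # gamma
bn.bias.data.copy_(torch.from_numpy(beta))        # beta
bn.running_mean.copy_(torch.from_numpy(mean))     # running mean
bn.running_var.copy_(torch.from_numpy(var))       # running variance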
|
st117733
|
Hi there. I’m new to PyTorch and have a conceptual question about extending PyTorch’s autograd module, like the Linear module in the docs:
Every time we perform an operation on PyTorch Variables it automatically registers a graph that can later be backpropagated automatically.
In the forward() function the inputs are all Variables and there are a bunch of operations we perform on them, which will itself register a graph for backpropagation.
But I still need to override a backward() function since I’m extending autograd.
It’s like I register an autograd graph while extending it. So does that mean I can ignore the graph created during forward()?
Thanks!
|
st117734
|
yes, during backward you only define the correct graph between grad_output and grad_input
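A toy sketch of that idea in the old-style autograd.Function API used around these versions (forward and backward both work on tensors, and backward only maps grad_output to grad_input); the class and constant are invented for illustration:
import torch
from torch.autograd import Function, Variable

class MulConstant(Function):
    """Multiply the input by a fixed constant."""
    def __init__(self, constant):
        super(MulConstant, self).__init__()
        self.constant = constant

    def forward(self, input):
        # operates on plain tensors; whatever happens here is NOT traced by autograd
        return input * self.constant

    def backward(self, grad_output):
        # you define the gradient mapping yourself
        return grad_output * self.constant

x = Variable(torch.randn(3), requires_grad=True)
y = MulConstant(2.0)(x).sum()
y.backward()
print(x.grad)    # each entry is 2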
|
st117735
|
Hi everyone, I’m trying to measure the time needed for a forward and a backward pass separately on different models from the PyTorch Model Zoo. I’m using this code:
gist.github.com
https://gist.github.com/iacolippo/9611c6d9c7dfc469314baeb5a69e7e1b 308
measure-fp-bp.py
import gc
import numpy as np
import sys
import time
import torch
from torch.autograd import Variable
import torchvision.models as models
import torch.backends.cudnn as cudnn
I do 5 dry runs, then measure each forward pass and backward pass ten times, average, and compute the standard deviation. Something strange keeps happening: the first time I execute the code everything is fine, but if I relaunch the script right after, one of the models will typically have a standard deviation much higher than the others. I’m talking about a standard deviation of the same order of magnitude as the average. If instead I let a consistent amount of time pass between two runs, everything is fine.
Any idea of what might be causing this?
(Also if you have any advice on how to measure this, it will be welcome )
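For reference, a rough sketch of one way to time the two passes separately; torch.cuda.synchronize() matters because CUDA kernels are launched asynchronously, so without it the wall-clock times can be misleading. The model and sizes here are arbitrary:
import time
import torch
from torch.autograd import Variable
import torchvision.models as models

model = models.resnet18().cuda()
x = Variable(torch.randn(16, 3, 224, 224).cuda())

torch.cuda.synchronize()
t0 = time.time()
out = model(x)
torch.cuda.synchronize()            # wait for the forward kernels to finish
t1 = time.time()
out.sum().backward()
torch.cuda.synchronize()            # wait for the backward kernels to finish
t2 = time.time()
print('forward %.4fs, backward %.4fs' % (t1 - t0, t2 - t1))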
|
st117736
|
check that your GPU is not downclocking or has some kind of adaptive power boost. I’ve observed this in the past if I put my GPU into an “adaptive power boost” setting (or whatever nvidia calls it).
|
st117737
|
Thank you.
I’ve disabled the auto-boost mode by
sudo nvidia-smi --auto-boost-default=DISABLED -i 0
I assumed it was successful since All done. appeared in the terminal, but the problem persists.
Every parameter of the GPU shown in nvidia-smi is fine.
|
st117738
|
Why is the result different between the CPU version and the GPU version? How do we get the grad for a CUDA Variable? Thanks.
CPU version:
import torch
from torch.autograd import Variable
l = torch.nn.Linear(6,1)
input = Variable(torch.rand(10,6), requires_grad = True)
out = l(input)
target = Variable(torch.rand(10,1))
crt = torch.nn.L1Loss()
loss = crt(out, target)
loss.backward()
print input.grad
Output:
Variable containing:
1.00000e-02 *
-1.5130 -3.3551 1.2752 1.4854 -0.3192 2.7163
1.5130 3.3551 -1.2752 -1.4854 0.3192 -2.7163
-1.5130 -3.3551 1.2752 1.4854 -0.3192 2.7163
1.5130 3.3551 -1.2752 -1.4854 0.3192 -2.7163
1.5130 3.3551 -1.2752 -1.4854 0.3192 -2.7163
1.5130 3.3551 -1.2752 -1.4854 0.3192 -2.7163
-1.5130 -3.3551 1.2752 1.4854 -0.3192 2.7163
-1.5130 -3.3551 1.2752 1.4854 -0.3192 2.7163
-1.5130 -3.3551 1.2752 1.4854 -0.3192 2.7163
1.5130 3.3551 -1.2752 -1.4854 0.3192 -2.7163
[torch.FloatTensor of size 10x6]
GPU version:
l = torch.nn.Linear(6,1).cuda()
input = Variable(torch.rand(10,6), requires_grad = True).cuda()
out = l(input)
target = Variable(torch.rand(10,1)).cuda()
crt = torch.nn.L1Loss().cuda()
loss = crt(out, target)
loss.backward()
print input.grad
Output: None
|
st117739
|
The gradient does not work through .cuda(). So instead of
fxia22:
input = Variable(torch.rand(10,6), requires_grad = True).cuda()
use
input = Variable(torch.rand(10,6).cuda(), requires_grad = True).
Best regards
Thomas
|
st117740
|
I just stumbled into the same problem. Can somebody explain the logic behind it? Seems very counter-intuitive.
|
st117741
|
Hello, I am new to PyTorch and tried to implement a simple RBM following deeplearning.net. I have a simple implementation, but I get a runtime error when the torch.bernoulli function executes. I have no idea how it occurs and couldn’t find an answer anywhere else. Can someone help me out?
my code:
import os
import torch
from torch.autograd import Variable
import torch.nn.functional as nn
import torch.optim as optim
from torchvision import datasets, transforms
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec

# hyper_params
cd_k = 5
lr = 0.001
batch_sz = 20
test_btsz = 16
nH = 512
nV = 28 * 28

v_bias = Variable(torch.zeros(nV), requires_grad=True)
h_bias = Variable(torch.zeros(nH), requires_grad=True)
W = Variable(torch.normal(torch.zeros(nV, nH), 0.01), requires_grad=True)
params = [v_bias, h_bias, W]
solver = optim.Adam(params, lr=lr)

# data sets, Minist 28x28 train:60000 test:10000
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('Data/data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor()
                   ])),
    batch_size=batch_sz, shuffle=True)
test_loader = torch.utils.data.DataLoader(
    dataset=datasets.MNIST('Data/data', train=True, download=True,
                           transform=transforms.Compose([
                               transforms.ToTensor()
                           ])),
    batch_size=test_btsz, shuffle=True)

def xw_b(x, w, b):
    return x @ W + b.repeat(x.size(0), 1)

def free_enery(v):
    xwb = xw_b(v, W, h_bias)
    vbias_term = v @ v_bias
    hidden_term = torch.sum(torch.log(1 + torch.exp(xwb)), dim=1)
    return -hidden_term - vbias_term

def sample_vfh(h):
    wx_b = h @ W.t() + v_bias.repeat(h.size(0))
    prob = nn.sigmoid(wx_b)
    return torch.bernoulli(prob)

def sample_hfv(v):
    wx_b = v @ W + h_bias.repeat(v.size(0))
    prob = nn.sigmoid(wx_b)
    return torch.bernoulli(prob)

def gibbs_chain(x, cd_k):
    v_ = x
    for _ in range(cd_k):
        h_ = sample_hfv(v_)
        v_ = sample_vfh(h_)
    return v_

def train_rbm_by_batch(x):
    chain_end = gibbs_chain(x, cd_k)
    v_ = Variable(chain_end.data, requires_grad=False)
    loss = -torch.mean(free_enery(x) - free_enery(v_))
    solver.zero_grad()
    loss.backward()
    solver.step()
    return loss

def train_rbm():
    for epoch in range(100):
        loss, batch_idx = 0, 0
        for batch_idx, data in enumerate(train_loader, 0):
            inputs, _ = data
            inputs = Variable(inputs.view(batch_sz, 28*28))
            loss = train_rbm_by_batch(inputs)
        print('Epoch-{}; loss: {} '.format(epoch, loss.data.numpy()))

if __name__ == '__main__':
    train_rbm()
error information:
File “/Users/yx/Documents/ANN/rbm_pytorch.py”, line 79, in gibbs_chain
h_ = sample_hfv(v_)
File “/Users/yx/Documents/ANN/rbm_pytorch.py”, line 73, in sample_hfv
return torch.bernoulli(prob)
File “/Users/yx/anaconda/envs/yx/lib/python3.6/site-packages/torch/autograd/variable.py”, line 705, in bernoulli
return Bernoulli()(self)
File “/Users/yx/anaconda/envs/yx/lib/python3.6/site-packages/torch/autograd/stochastic_function.py”, line 23, in _do_forward
result = super(StochasticFunction, self)._do_forward(*inputs)
File "/Users/yx/anaconda/envs/yx/lib/python3.6/site-packages/torch/autograd/functions/stochastic.py", line 41, in forward
samples = probs.new().resize_as(probs).bernoulli_(probs)
RuntimeError: must be >= 0 and <= 1 at /Users/soumith/anaconda/conda-bld/pytorch-0.1.10_1488750756897/work/torch/lib/TH/THRandom.c:270
|
st117742
|
RuntimeError: must be >= 0 and <= 1
As seen, the function expects input >=0 and <=1 but you have not given that.
|
st117743
|
Hi,
I am new to PyTorch. I am currently using a simple LSTM encoder for my variable-length sequences. I notice something strange: when I feed the same inputs in a batch, the LSTM outputs for each identical input are different. That is, say my variable-length sequences all begin with the input 1; the hidden state after the first step is different for all of them.
What am I doing wrong? (I am definitely sure that it is the self.hidden that I am passing which creates this issue.)
pack_wv = torch.nn.utils.rnn.pack_padded_sequence(batch_in_wv_sorted, seq_lengths_sorted, batch_first=True)
out, (ht,ct) = self.lstm(pack_wv, self.hidden)
When I change the last line to
out, (ht,ct) = self.lstm(pack_wv)
The issue goes away. But I am afraid that this will lead to incorrect behavior, since we are expected to pass the self.hidden for the intialised first hidden state.
|
st117744
|
self.lstm = nn.LSTM(input_size=embedding_dim, hidden_size=hidden_dim, batch_first=True)
self.hidden = (autograd.Variable(torch.randn(self.num_layers, self.batch_size, self.hidden_dim)),autograd.Variable(torch.randn(self.num_layers, self.batch_size, self.hidden_dim)))
|
st117745
|
That makes sense, if you’re initializing a random hidden state, making it random between batches, you’re gonna get different outputs for the same input.
If you were trying to make them the same you could initialize a zero hidden state (as leaving it None does) or a n_layers x 1 x hidden_dim random state that you repeat batch_size times over dimension 1. However I can’t imagine why that would be useful… if you’re building a seq2seq type model the states will be different between batches.
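A quick sketch of those two options with made-up sizes (the variable names are invented for illustration):
import torch
from torch.autograd import Variable

n_layers, batch_size, hidden_dim = 1, 4, 8

# (a) zero state, equivalent to passing no hidden state at all
h0 = Variable(torch.zeros(n_layers, batch_size, hidden_dim))
c0 = Variable(torch.zeros(n_layers, batch_size, hidden_dim))

# (b) one random state shared by every element of the batch
h_shared = Variable(torch.randn(n_layers, 1, hidden_dim)).repeat(1, batch_size, 1)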
|
st117746
|
Hi,
I am using the GPU for my code. However, the speed of the program is the same as for the CPU version. (The GPU version is indeed running on the GPU.)
|
st117747
|
How big is the network? You’ll get a much bigger speedup for a larger network.
Also - what type of network is it? How is it implemented? I assume it’s running just fine (no errors being thrown)? Are you timing forward passes or training?
|
st117748
|
I think I have solved the issue now. Thanks for the reply. The problem was that I had some variables not initialized with .cuda().
Is there a general principle for writing code for GPUs? Writing .cuda() every time is a little annoying.
|
st117749
|
you should only have to call .cuda() 2 times, one for the input/target variables and one for the model.
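A minimal, self-contained sketch of that pattern; a tiny Linear model and random data stand in for a real model and loader:
import torch
import torch.nn as nn
from torch.autograd import Variable

model = nn.Linear(10, 2).cuda()                   # one .cuda() for the model
data = Variable(torch.randn(32, 10).cuda())       # one .cuda() for the inputs of each batch
target = Variable(torch.randn(32, 2).cuda())      # and for the targets
loss = nn.MSELoss()(model(data), target)
loss.backward()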
|
st117750
|
Still very new to PyTorch, but loving the style.
I am stuck on a small problem where I cannot get the gradient or call backward() when using masked_select(). I am willing to use index_select() if I can figure out how to get the index. I feel close with nonzero() but can’t quite get it to work.
This works, when I build the index by hand:
import torch
from torch.autograd import Variable
import numpy as np

x = np.array([ 0.00834103, 0.00212306, -999.0, 0.00149333, 0.00899409])
x = Variable(torch.from_numpy(x.astype('float32')), requires_grad=True)
cond = Variable(torch.from_numpy(np.array([0,1,3,4]).astype('int32')))
y = x.index_select(0, cond.long())
out = y.sum()
out.backward()
print x.grad
When I try to build the condition dynamically and use masked_select, it fails with a NotImplementedError:
cond = (x > -999.)
y = x.masked_select(cond)
So I figured I would get the index and then send that to index_select() but I get a TypeError here:
cond_idx = torch.nonzero(cond)
*** TypeError: Type Variable doesn't implement stateless method nonzero
Any ideas how to get this to work?
|
st117751
|
I had started there but found some forum threads saying that Numpy style indexing is not supported. I just tried again and I get the *** NotImplementedError:
|
st117752
|
After much searching, it appears that this discussion from @apaszke is relevant with the .unsqueeze(1) being critical to it working.
Full example:
import torch
from torch.autograd import Variable
import numpy as np

x = np.array([ 0.00834103, 0.00212306, -999.0, 0.00149333, 0.00899409])
x = Variable(torch.from_numpy(x.astype('float32')), requires_grad=True)
y = x[(x > -999.).unsqueeze(1)]
out = y.sum()
out.backward()
print x.grad
|
st117753
|
When processing very long sequences, it is impractical to backpropagate through time all the way to the beginning of the sequence. I don’t see anything in the PyTorch docs or examples where the sequence length spans multiple minibatches. Say we have the following sequences, padded to be suitable for use by nn.LSTM after packing with pack_padded_sequence():
a b c eos
e f eos 0
h i eos 0
The minibatch size is 4 x 3 (seq_len x batch_size). If the hidden states (h,c) are detached from history every minibatch, bptt span = seq_len = 4. Suppose that the sequence “e f eos” in this minibatch is the tail of a longer sequence
z y x w e f eos
When processing “e f eos”, if we want to keep bptt_span to be fixed at 4, we would have to preserve (h, c) across minibatch boundaries.
Is there some facility provided by PyTorch to enable this, or is the best way to implement it for the user to keep track of (h_t, c_t) returned by nn.LSTM and manage detaching and resetting the hidden states?
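For reference, a sketch of the common pattern for the second option (the same idea as the repackage_hidden helper in the word_language_model example): at each minibatch boundary, keep the values of (h, c) but rebuild them as fresh Variables so the graph is cut and backprop never spans more than one minibatch.
from torch.autograd import Variable

def repackage_hidden(h):
    """Detach hidden states from their history, keeping only the values."""
    if isinstance(h, Variable):
        return Variable(h.data)
    else:
        return tuple(repackage_hidden(v) for v in h)

# hidden = repackage_hidden(hidden)   # call this before feeding the next minibatch to nn.LSTM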
|
st117754
|
Hi, when I call resize of Variable, the underlying data changes as follows:
criterion.x1
Variable containing:
-5
11
2
3
0
44
-1
32
15
0
[torch.FloatTensor of size 10]
criterion.x1.resize(10, 1)
Variable containing:
-5
4
0
-1
4
0
1
0
0
0
[torch.FloatTensor of size 10x1]
When I extract data out of Variable, same thing happens.
Did I miss something ?
|
st117755
|
Probably because your original tensor was not contiguous. If resize_ changes the size of the tensor, it starts from the same point but uses a contiguous chunk of memory.
For example:
>>> m = torch.arange(0, 25).view(5, 5)
0 1 2 3 4
5 6 7 8 9
10 11 12 13 14
15 16 17 18 19
20 21 22 23 24
[torch.FloatTensor of size 5x5]
>>> x = m[:,0]
0
5
10
15
20
[torch.FloatTensor of size 5])
>>> x.resize_(5, 1)
0
1
2
3
4
[torch.FloatTensor of size 5x1]
|
st117756
|
Hello. I’m new to PyTorch. I am coming from Keras, Theano and TensorFlow and I love the simplicity and performance in PyTorch so far.
I have some custom loss functions that use either Theano’s scan (more flexibility than I need) or TensorFlow’s tf.map_fn calls, and I would like to transfer them over to PyTorch. I found torch.map() but that seems to apply a simple function to all elements in the tensor.
Is there a map_fn that works on the GPU that I have not yet found?
|
st117757
|
Very helpful. Will the resulting loss function be differentiable with autograd? I was under the impression that I needed to use only torch functions for my loss function.
|
st117758
|
The result will be differentiable with autograd – you can use anything in Python as long as you don’t manually modify a variable’s .data attribute; it’ll throw an error if you do something non-differentiable.
|
st117759
|
I have to say it is incredible to me that this works. So much simpler than the TF or Theano approach to the problem. Working example here:
import torch
from torch.autograd import Variable
import numpy as np
np.random.seed(20170525)
x = np.random.rand(100)
y = np.ones(100)+np.random.rand(100)
def np_mse(a,b):
    return np.mean((a-b)**2)
mse_parts = [np_mse(x.reshape(20,5)[:,i],y.copy().reshape(20,5)[:,i]) for i in range(5)]
print sum(mse_parts)
x = Variable(torch.from_numpy(x.astype('float32')),requires_grad=True)
y = Variable(torch.from_numpy(y.astype('float32')),requires_grad=True)
out = sum([torch.mean((x.view(20,5)[:,i]-y.view(20,5)[:,i])**2) for i in range(5)])
print out
out.backward() # works!!
|
st117760
|
Interestingly, np.mean() works but np.std() does NOT work. Any insight into why?
Looks like the variable that comes back has a strange format:
out = np.std([torch.mean((x.view(20,5)[:,i]-y.view(20,5)[:,i])**2) for i in range(5)])
0.149130464538 [[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[Variable containing: 0.1491 [torch.FloatTensor of size 1] ]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
Traceback (most recent call last):
  File "pytorch_customloss.py", line 17, in <module>
    out.backward() # works!!
AttributeError: 'numpy.ndarray' object has no attribute 'backward'
|
st117761
|
You should not use numpy functions on torch Variables.
You can use torch.std instead.
|
st117762
|
Will torch.std work over a list of torch variables? That is what I get back after my for loop.
I get this error with my example. I feel like I must be missing something fundamental here in my approach. Please advise.
out = torch.std(results)
*** TypeError: torch.std received an invalid combination of arguments - got (list), but expected one of:
* (torch.FloatTensor source)
didn't match because some of the arguments have invalid types: (list)
* (torch.FloatTensor source, int dim)
|
st117763
|
No, torch.std requires a tensor or a Variable. But if your tensors have the same size, you can concatenate them over a new dimension (via torch.stack, for example) and apply torch.std over that dimension.
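A small sketch of that suggestion, reusing the shape of the earlier example (the exact sizes are arbitrary):
import torch
from torch.autograd import Variable

x = Variable(torch.randn(20, 5), requires_grad=True)
parts = [torch.mean(x[:, i] ** 2) for i in range(5)]   # a Python list of scalar Variables
out = torch.std(torch.stack(parts))                    # stack into one Variable, then take std
out.backward()                                         # still differentiable end to end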
|
st117764
|
I’m looking at the http://pytorch.org/tutorials/beginner/nlp/word_embeddings_tutorial.html 368
It says in the exercise section: “The Continuous Bag-of-Words model (CBOW) is frequently used in NLP deep learning. It is a model that tries to predict words given the context of a few words before and a few words after the target word. This is distinct from language modeling, since CBOW is not sequential and does not have to be probabilistic.”
What makes n-gram “sequential” or “probabilistic”? The only change to the n-gram code I did for this exercise was changing the trigram to “fivegram”, where the context is now 2+2 words before and after, rather than 2 words before the target word.
Am I misunderstanding something?
|
st117765
|
Besides the context being doubled, the optimization problem has also changed to include the sum over the embedded vectors in a context, which wasn’t the case in the N-gram model.
|
st117766
|
Oh, so we sum input vectors instead of concatenating them? I missed that part. Why would we want to do that?
|
st117767
|
Take a closer look at the formulation of the problem involving logSoftmax(A(sum q_w) + b). Intuitively, it’s one way of gathering the contributions of the surrounding words. You may find this useful. Also, spoiler alert for the solution I gave here.
|