st116468
|
Hi,
I’ve successfully implemented semantic segmentation models in PyTorch, so I don’t think this is a bug in PyTorch.
So, there are a few things I’d check in your code:
it seems that you use inputs in the range 0-255, without any normalization. Is that intentional? I’d at least put them in the 0-1 range, and possibly subtract 0.5 or normalize by the dataset std.
batch normalization with batch size 2 doesn’t look right to me; it might introduce a very large bias because the batch size is so small. I’d say to use batch norm successfully you’d need batch sizes of 128 or 256
it seems that you don’t use pre-trained networks for initializing your model? This could lead to reduced performance
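For reference, a minimal sketch of the kind of input normalization I mean (the 0.5 used here is just a placeholder; your dataset’s actual mean/std would be better):
import torch

def normalize_batch(images):
    # images: uint8 tensor in [0, 255], shape (N, C, H, W)
    x = images.float() / 255.0   # bring the inputs into the 0-1 range
    x = (x - 0.5) / 0.5          # roughly center them; swap in your dataset mean/std here
    return x

x = normalize_batch(torch.ByteTensor(2, 3, 8, 8).random_(0, 255))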
|
st116469
|
Hi,
In your implementation, do you use ConvTranspose2d or Upsampling?
Regarding your points:
I intentionally changed the range to 0-255 since the range 0-1 resulted in the same problem.
The problem persists with or without batch normalization. However, I’m using a batch size of 16 in my recent attempts.
I think it’s not fair to relate the problem to pre-trained initialization, as the equivalent Theano implementation is trained from scratch and works like a charm!
Thanks for your reply.
|
st116470
|
I finally found the problem!!
For the last set of convolutions, that is 128-> 64 -> 64 -> 1, the activation function should not be used!
The activation function causes the values to vanish!
I just removed the nn.ReLU() modules on top of these convolution layers and now everything works fine!
Saeed
|
st116471
|
It’s great that you managed to solve your problem. Would you mind summarising a) what worked for you and b) what didn’t work for you/errors so that other people can benefit?
|
st116472
|
I tried to convert a Torch RNN net into PyTorch. https://github.com/Andybert/Recurrent-Convolutional-Video-ReID
The PyTorch net works and converges, but it has a performance loss (rank-1: 56 vs 40). I don’t know why.
Any advice? Or could someone help me check whether the model defined in PyTorch is consistent with the model defined in Torch, especially the RNN module?
|
st116473
|
As we all know, the convolutional operator in a CNN shares its weights across the whole image. What if I want to restrict the region over which the weights are shared, for example to 4x4? How could I do that?
|
st116474
|
Hi,
I really appreciate your great work. I am new to PyTorch.
I downloaded the faster-rcnn code, and running it is no problem.
But when I try to remote-debug the code, it is so slow that I cannot watch a variable being evaluated.
It keeps evaluating, and
after a few minutes it times out.
Even stepping over a line as simple as ‘train_loss = 0’ takes a long time.
Can you help me?
|
st116475
|
Hello, everyone,
Since my training samples have different importance, I wonder how to use this information in the training pass.
For example, I have a function to assign each individual sample an importance factor, the first idea that comes to my mind is to scale the gradients according to this factor, which emphasizes some samples. However, I think this is quite difficult to implement since pytorch averages the loss at the mini-batch level.
Do you have any suggestions? Thanks.
Best Regards,
Shuai.
|
st116476
|
Hi Sam,
I think the cleanest way is to write your own loss function that supports weights (some loss functions like BCELoss already have per-example weights, but one has to be careful because others weight classes instead).
If you don’t want to do that, you could add a hook to the prediction to achieve this, similar to this (on master / pytorch 0.2):
import torch
from torch.autograd import Variable

a = Variable(torch.randn(3,3), requires_grad=True)
loss = a.sum()
loss.backward()
print(a.grad)

def scale_gradients(v, weights):  # assumes v is batch x ...
    def hook(g):
        return g * weights.view(*((-1,) + (len(g.size()) - 1) * (1,)))  # probably nicer to hard-code -1,1,...,1
    v.register_hook(hook)

b = Variable(torch.randn(3,3), requires_grad=True)
weights = Variable(torch.arange(1,4))
scale_gradients(b, weights)
loss2 = b.sum()
loss2.backward()
print(b.grad)
For pytorch 0.1.12 you would need .expand_as(g) after the view.
Best regards
Thomas
|
st116477
|
Hi,
My laboratory updated glibc on our workstation, and after that my PyTorch began to fail from time to time. It seems the error is caused by some custom usage of glibc that probably depends on a certain version. Has anyone encountered a similar problem?
-----------------------------------update record ---------------
Jul 19 03:19:34 Updated: glibc-common-2.17-157.el7_3.5.x86_64
Jul 19 03:19:35 Updated: glibc-2.17-157.el7_3.5.x86_64
Jul 19 03:19:35 Updated: glibc-headers-2.17-157.el7_3.5.x86_64
Jul 19 03:19:35 Updated: glibc-devel-2.17-157.el7_3.5.x86_64
Jul 19 03:19:36 Updated: glibc-2.17-157.el7_3.5.i686
Thank you!
Below is the output:
*** Error in `python’: double free or corruption (fasttop): 0x00007fe508103130 ***
======= Backtrace: =========
/usr/lib64/libc.so.6(+0x7c503)[0x7fe6a59e1503]
/usr/lib64/nvidia/libcuda.so.1(+0x1aa1df)[0x7fe674bf01df]
/usr/lib64/nvidia/libcuda.so.1(+0xd051b)[0x7fe674b1651b]
/usr/lib64/nvidia/libcuda.so.1(cuStreamCreate+0x5b)[0x7fe674c3629b]
/vulcan/scratch/jackwang/anaconda3/envs/magma/lib/python3.6/site-packages/torch/lib/libcudnn.so.6(+0x864236)[0x7fe690180236]
/vulcan/scratch/jackwang/anaconda3/envs/magma/lib/python3.6/site-packages/torch/lib/libcudnn.so.6(+0x8966a4)[0x7fe6901b26a4]
/vulcan/scratch/jackwang/anaconda3/envs/magma/lib/python3.6/site-packages/torch/lib/libcudnn.so.6(+0x7839bf)[0x7fe69009f9bf]
/vulcan/scratch/jackwang/anaconda3/envs/magma/lib/python3.6/site-packages/torch/lib/libcudnn.so.6(cudnnRNNBackwardData+0xf42)[0x7fe69009dd12]
/vulcan/scratch/jackwang/anaconda3/envs/magma/lib/python3.6/lib-dynload/_ctypes.cpython-36m-x86_64-linux-gnu.so(ffi_call_unix64+0x4c)[0x7fe69d1cf5b0]
/vulcan/scratch/jackwang/anaconda3/envs/magma/lib/python3.6/lib-dynload/_ctypes.cpython-36m-x86_64-linux-gnu.so(ffi_call+0x1f5)[0x7fe69d1ced55]
/vulcan/scratch/jackwang/anaconda3/envs/magma/lib/python3.6/lib-dynload/_ctypes.cpython-36m-x86_64-linux-gnu.so(_ctypes_callproc+0x3dc)[0x7fe69d1c689c]
/vulcan/scratch/jackwang/anaconda3/envs/magma/lib/python3.6/lib-dynload/_ctypes.cpython-36m-x86_64-linux-gnu.so(+0x9df3)[0x7fe69d1bedf3]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(_PyObject_FastCallDict+0x9e)[0x7fe6a68bec1e]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(+0x14895b)[0x7fe6a699b95b]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(_PyEval_EvalFrameDefault+0x2c40)[0x7fe6a699ed40]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(+0x146514)[0x7fe6a6999514]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(+0x148c88)[0x7fe6a699bc88]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(_PyEval_EvalFrameDefault+0x2c40)[0x7fe6a699ed40]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(+0x146514)[0x7fe6a6999514]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(_PyFunction_FastCallDict+0x285)[0x7fe6a699a515]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(_PyObject_FastCallDict+0x166)[0x7fe6a68bece6]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(_PyObject_Call_Prepend+0xcc)[0x7fe6a68bef3c]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(PyObject_Call+0x56)[0x7fe6a68befd6]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(_PyEval_EvalFrameDefault+0x3ec9)[0x7fe6a699ffc9]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(+0x147100)[0x7fe6a699a100]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(_PyFunction_FastCallDict+0x10c)[0x7fe6a699a39c]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(_PyObject_FastCallDict+0x166)[0x7fe6a68bece6]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(_PyObject_Call_Prepend+0xcc)[0x7fe6a68bef3c]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(PyObject_Call+0x56)[0x7fe6a68befd6]
/vulcan/scratch/jackwang/anaconda3/envs/magma/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so(_Z23THPFunction_do_backwardP11THPFunctionP7_object+0x133)[0x7fe695b0d533]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(_PyCFunction_FastCallDict+0x229)[0x7fe6a6916429]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(+0x148b8c)[0x7fe6a699bb8c]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(_PyEval_EvalFrameDefault+0x2c40)[0x7fe6a699ed40]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(+0x147100)[0x7fe6a699a100]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(_PyFunction_FastCallDict+0x10c)[0x7fe6a699a39c]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(_PyObject_FastCallDict+0x166)[0x7fe6a68bece6]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(_PyObject_Call_Prepend+0xcc)[0x7fe6a68bef3c]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(PyObject_Call+0x56)[0x7fe6a68befd6]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(+0x6c0bd)[0x7fe6a68bf0bd]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(+0x6c16d)[0x7fe6a68bf16d]
/vulcan/scratch/jackwang/anaconda3/envs/magma/bin/…/lib/libpython3.6m.so.1.0(PyObject_CallMethod+0xe6)[0x7fe6a68c26e6]
/vulcan/scratch/jackwang/anaconda3/envs/magma/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so(_ZN5torch8autograd10PyFunction5applyERKSt6vectorISt10shared_ptrINS0_8VariableEESaIS5_EE+0xe1)[0x7fe695b0dfe1]
/vulcan/scratch/jackwang/anaconda3/envs/magma/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so(_ZN5torch8autograd6Engine17evaluate_functionERNS0_12FunctionTaskE+0x248)[0x7fe695b03be8]
/vulcan/scratch/jackwang/anaconda3/envs/magma/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so(_ZN5torch8autograd6Engine11thread_mainESt10shared_ptrINS0_10ReadyQueueEE+0x3b)[0x7fe695b056eb]
/vulcan/scratch/jackwang/anaconda3/envs/magma/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so(_ZN12PythonEngine11thread_mainESt10shared_ptrIN5torch8autograd10ReadyQueueEE+0x54)[0x7fe695b16284]
/vulcan/scratch/jackwang/anaconda3/envs/magma/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so(_ZNSt6thread5_ImplISt12_Bind_simpleIFSt7_Mem_fnIMN5torch8autograd6EngineEFvSt10shared_ptrINS4_10ReadyQueueEEEEPS5_S8_EEE6_M_runEv+0x48)[0x7fe695b07f18]
/vulcan/scratch/jackwang/anaconda3/envs/magma/lib/python3.6/site-packages/torch/lib/…/…/…/…/libstdc++.so.6(+0xb7260)[0x7fe67f0eb260]
/usr/lib64/libpthread.so.0(+0x7dc5)[0x7fe6a663edc5]
/usr/lib64/libc.so.6(clone+0x6d)[0x7fe6a5a5c76d]
======= Memory map: ========
|
st116478
|
My code is as follows:
[screenshot of the code]
As it shows, the training processes for net1 and net2 are independent, so I want to use device 1 to train net1 and device 2 to train net2 in parallel.
How can I achieve this?
|
st116479
|
@cold_wind
You can fork two processes for training in parallel from the command line like the following:
CUDA_VISIBLE_DEVICES=0 python train_net1.py
CUDA_VISIBLE_DEVICES=1 python train_net2.py
You can also do the same in Python code using queues: train both nets in parallel, wait until both finish training, and then test them.
For example, this link https://dmitryulyanov.github.io/if-you-are-lazy-to-install-slurm/ can be useful
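If you would rather keep everything in one script, here is a rough sketch of the same idea with torch.multiprocessing (the nn.Linear net and toy training loop are stand-ins for your real net1/net2):
import torch
import torch.nn as nn
import torch.multiprocessing as mp
from torch.autograd import Variable

def train(device_id):
    # stand-in for building and training one of your nets on its own GPU
    net = nn.Linear(10, 2).cuda(device_id)
    optimizer = torch.optim.SGD(net.parameters(), lr=0.1)
    for _ in range(100):
        x = Variable(torch.randn(32, 10).cuda(device_id))
        loss = net(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

if __name__ == '__main__':
    workers = [mp.Process(target=train, args=(gpu,)) for gpu in (0, 1)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()        # wait until both nets finish before testing them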
|
st116480
|
Hi, I am a new user of PyTorch and I was wondering if there is a better way to preprocess your own text data.
For example, suppose you have a bunch of text data such as
sentences = “I have a pen” -> words -> “I”, “have”, “a”, “pen”
classification = 0 or 1 (pos or neg).
Of course, there are many sentences and the sentence lengths are all different.
Keras and TensorFlow have their own functions for text padding and the like.
How do you do this the PyTorch way and train a simple classifier on it?
There are not many good examples for this.
|
st116481
|
Solved by Jing in post #2
I think maybe torch.nn.utils.rnn.pack_padded_sequence() can do text padding.
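For anyone landing here later, a minimal sketch of manual padding followed by pack_padded_sequence (the vocabulary indices and layer sizes are made up for illustration):
import torch
import torch.nn as nn
from torch.autograd import Variable
from torch.nn.utils.rnn import pack_padded_sequence

# three sentences already mapped to word indices, sorted longest first
sents = [[4, 7, 2, 9], [5, 3, 8], [6, 1]]
lengths = [len(s) for s in sents]
max_len = max(lengths)

# pad with 0 up to the longest sentence -> (batch, max_len)
padded = torch.LongTensor([s + [0] * (max_len - len(s)) for s in sents])

embed = nn.Embedding(10, 16, padding_idx=0)
rnn = nn.LSTM(16, 32, batch_first=True)

emb = embed(Variable(padded))                              # (batch, max_len, 16)
packed = pack_padded_sequence(emb, lengths, batch_first=True)
out, (h, c) = rnn(packed)                                  # h[-1] can feed a classifier, e.g. nn.Linear(32, 2)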
|
st116482
|
Recently I have been learning about image classification.
I have read several tutorials and examples.
But one thing that bothers me is why everyone uses
Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
No one gives an explanation, so I am just wondering why.
I tried other normalizations, and they also work fine.
|
st116483
|
[0.485, 0.456, 0.406] and [0.229, 0.224, 0.225] are the per-channel means and standard deviations computed over the whole ImageNet training set, which is the preprocessing the pre-trained torchvision models expect.
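For completeness, a minimal sketch of how these constants are typically plugged into a torchvision preprocessing pipeline:
import torchvision.transforms as transforms

# standard ImageNet preprocessing: PIL image -> [0,1] tensor -> per-channel standardization
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])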
|
st116484
|
I installed pytorch version 0.1.12_2
>>> torch.__version__
'0.1.12_2'
but when I used torch.nn.BCEWithLogitsLoss(), it said:
AttributeError: 'module' object has no attribute 'BCEWithLogitsLoss'
anyone knows what’s wrong?
Thanks
|
st116485
|
nn.BCEWithLogitsLoss was added on the master branch (0.2.0), so you can’t use it in the stable version (0.1.12_2).
Since BCEWithLogitsLoss just combines a Sigmoid layer and BCELoss in one single class,
you can add a sigmoid layer before BCELoss instead.
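A minimal sketch of that workaround on 0.1.12 (the shapes here are arbitrary):
import torch
import torch.nn as nn
from torch.autograd import Variable

logits = Variable(torch.randn(8, 1))            # raw model outputs
target = Variable(torch.rand(8, 1).round())     # 0/1 labels

probs = nn.Sigmoid()(logits)                    # squash the logits into (0, 1) first
loss = nn.BCELoss()(probs, target)              # then apply the plain BCELoss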
|
st116486
|
I need to convert an nn.Recurrent layer in Torch into a PyTorch layer. Are there any differences between the nn.Recurrent module in Torch and the nn.RNN module in PyTorch?
|
st116487
|
I am trying to write an extension for PyTorch and have found some repos to study. Now I have some questions:
What is the relation between Module and Function? I see some examples call a Function inside a Module; why not implement everything with a Module only?
I see some CUDA implementations use kernels and others don’t. When should I write CUDA kernel files?
|
st116488
|
It’d be nice if the error message for torch.cat was more specific. For example, when I run the following
import torch
a = torch.FloatTensor(1)
b = torch.DoubleTensor(1)
c = torch.cat((a, b)) # Error since a and b are different types
I get the following error
Traceback (most recent call last):
File "foo.py", line 5, in <module>
c = torch.cat((a, b))
TypeError: cat received an invalid combination of arguments - got (tuple), but expected one of:
* (sequence[torch.FloatTensor] seq)
didn't match because some of the arguments have invalid types: (tuple)
* (sequence[torch.FloatTensor] seq, int dim)
It’d be nice if it said something like, “got mixed types but expected…” I spent a while double checking my inputs before I realized the types were different.
Looks like this confusion was also the cause for this post.
|
st116489
|
I’m writing something like:
a = Variable(torch.randn(1).type(torch.FloatTensor), requires_grad=True)
b = Variable(torch.randn(3,3))
How do I compute the product of a and b?
Thanks!
|
st116490
|
What kind of product do you need? You want to scale each matrix element by the same scalar? Try this:
a = ...
b = ...
b * a.expand_as(b)
|
st116491
|
I am wondering if using expand_as is efficient in this case. Does it store only the original value and the repetition counts along each axis, or something similar? It would be cheaper to divide each matrix element by the same value without having to materialize a whole matrix of identical values.
|
st116492
|
Are there any NCCL or CUDA examples with PyTorch? I want to use NCCL to synchronize data across multiple GPUs.
|
st116493
|
I want to apply a function to each row of a tensor independently. This function has a ‘while’ loop inside it, so it is a non-linear transformation.
I was thinking about using apply_, but it looks like that is not the best method, because it needs CPU data and, based on the documentation, it is not high performance (?).
I would like it to work like this:
data_trans = data.apply(lambda x : my_function(x))
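A minimal sketch of one way to do this without apply_, by looping over rows in Python and stacking the results (my_function below is just a stand-in for your own row transformation with a while loop):
import torch

def my_function(row):
    # stand-in: halve the row until its sum drops below 1 (contains a while loop)
    while row.sum() > 1:
        row = row / 2
    return row

data = torch.rand(5, 3)
data_trans = torch.stack([my_function(data[i]) for i in range(data.size(0))])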
|
st116494
|
If I want to add some Gaussian noise to the CIFAR10 dataset loaded by torchvision, how should I do it? Or, if I have defined a dataset with torch.utils.data.TensorDataset, how can I add more data samples to it? Is there any function like append()? Thanks.
|
st116495
|
There are a few ways you can do this.
If you don’t care about seeing all 50k CIFAR-10 samples in one complete pass of the data loader, you could pass in a transform that randomly returns noise instead of the image, i.e.
import random
class RandomNoise(object):
    def __init__(self, probability):
        self.probability = probability
    def __call__(self, img):
        if random.random() <= self.probability:
            return img.clone().normal_(0, 1)
        return img
cifar10_dataset = torchvision.datasets.CIFAR10(root="...", download=True, transform=input_transform)
You could subclass CIFAR10 and override the __len__ function to return maybe 100000; then the __getitem__ function would look something like:
def __getitem__(self, index):
    if index < 50000:
        return super(MySubClass, self).__getitem__(index)
    else:
        return torch.Tensor(size).normal_(0, 1), torch.Tensor(1).fill_(class_label)
|
st116496
|
Hi, I use torchvision.transforms to do it; it has a Lambda transform with which you can write a custom function to add noise to the data. But CIFAR-10 images are small, just 32 x 32 x 3, and after adding salt-and-pepper or Gaussian noise to them the final result does not look very good.
|
st116497
|
I am very new to PyTorch and want to understand how to pass a single image to a regression model. I want to take a single image, collect each pixel position and the RGB value at that position, and train a regression model that predicts the RGB value for a given position. How do I achieve this?
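A minimal sketch of one way to set this up, assuming the idea is to map normalized (row, col) pixel positions to RGB values with a small MLP (sizes and training settings are only illustrative):
import torch
import torch.nn as nn
from torch.autograd import Variable

# img: a single image as a (3, H, W) float tensor with values in [0, 1]
img = torch.rand(3, 32, 32)
H, W = img.size(1), img.size(2)

# inputs are normalized (row, col) positions, targets are the RGB values at those positions
xs = torch.FloatTensor([[i / (H - 1.0), j / (W - 1.0)] for i in range(H) for j in range(W)])
ys = img.view(3, -1).t().contiguous()   # (H*W, 3), same row-major order as xs

model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

for epoch in range(100):
    pred = model(Variable(xs))
    loss = criterion(pred, Variable(ys))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()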
|
st116498
|
I did a comparison of memory cost between Caffe and PyTorch on a single GeForce GTX 1070.
I used the ImageNet classification task for this testing. The PyTorch code in this test is just the example code.
Alexnet
batch size 256 512 768 1024
-----------------------------------------------------
caffe test 2443 4399 6365 OoM
caffe train 4229 7425 OoM OoM
pytorch test 1855 2817 3167 3655
pytorch train 4803 4919 7041 OoM
This result is very good for PyTorch: it uses roughly 2/3 of the memory used by Caffe. But when I did the same test using VGG, the memory cost of PyTorch was higher than Caffe when training:
VGG16
batch size 32 42 48 64
-----------------------------------------------------
caffe test 2931 3607 3983 5025
caffe train 6015 7007 7995 OoM
pytorch test 2173 2649 2795 3655
pytorch train 6447 6907 OoM OoM
note: OoM stands for ‘out of memory’.
Does anyone have any idea on this?
Another observation is that, before training starts, the memory cost (shown by nvidia-smi) first bursts to a high level (approximately 20% higher than the later value) and then stabilizes at a lower value. This is likely to cause a ‘CUDNN_STATUS_ALLOC_FAILED’ error. Is this a normal situation?
|
st116499
|
Can I see your benchmark script?
For batch size 42, the difference in memory usage is minor, 6907 (PyTorch) vs 7007 (Caffe). I expect that for batch size 48 it tips in either direction somehow (PyTorch is probably picking a different cuDNN algorithm).
|
st116500
|
Yes, you’re right!
In the above test, I set cudnn.benchmark to True.
I just tried setting it to False. The memory cost drops.
Now, using VGG16, the maximum batch size increases to 60.
batch size | 48 | 52 | 56 | 60
pytorch train | 7657 | 8115 | 7409 | 7659
Thank you! @smth
|
st116501
|
@Soumith_Chintala I stumbled upon this thread and observed just the opposite of this. The memory footprint of the vgg16 PyTorch model in my case increases if I set cudnn.benchmark=False. For a batch size of 64, the vgg16 PyTorch version occupies 9978 MB of GPU memory. Is this normal, or is there a way to reduce it?
|
st116502
|
My understanding is that because PyTorch automatically selects the cuDNN algorithm, when there is adequate memory it just uses more. So it may not be fair to compare memory cost without considering the cuDNN algorithm.
In my comparison above, you can see batch size 52 costs more memory than batch size 56, because my memory limit is 8 GB. So I think one reasonable comparison metric could be ‘the maximum batch size given xx memory size’.
|
st116503
|
Hi all,
I am coding a seq2seq model with inputs of different lengths. Therefore, I am using pad_packed_sequence and pack_padded_sequence in the encoder. However, at test time I noticed some differences depending on how the data is shuffled. While investigating, I noticed slight differences in results; here is a toy example to illustrate the problem. As initialization:
seq = 4
batch = 3
in_size = 5
hidden_size = 7
input = Variable(torch.rand(seq,batch, in_size))
hidden = Variable(torch.zeros(1, batch, hidden_size))
init_hidden = Variable(torch.zeros(1, 1, hidden_size))
rnn = nn.GRU(in_size, hidden_size, 1)
batch_len = [4,3,2]
for i in range(1,batch):
    input[batch_len[i]:,i,:] = 0
input0 = input[:, 0].unsqueeze(1)
input1 = input[:batch_len[1], 1].unsqueeze(1)
input2 = input[:batch_len[2], 2].unsqueeze(1)
input_pad = pack_padded_sequence(input, batch_len)
I assume that the output and hidden state should be the same using input_pad or the concatenation of the three input*
output_pad, hidden = rnn(input_pad, hidden)
output0, hidden0 = rnn(input0, init_hidden)
output1, hidden1 = rnn(input1, init_hidden)
output2, hidden2 = rnn(input2, init_hidden)
output, out_len = pad_packed_sequence(output_pad)
However, there are small changes:
>>> print((torch.cat([hidden0,hidden1, hidden2],1) - hidden).abs().sum())
Variable containing:
1.00000e-07 *
1.6391
[torch.FloatTensor of size 1]
>>> (torch.cat([output0, torch.cat([output1,Variable(torch.zeros(output.size(0)-output1.size(0),output1.size(1),output1.size(2)))],0), torch.cat([output2,Variable(torch.zeros(output.size(0)-output2.size(0),output2.size(1),output2.size(2)))],0)],1) - output).abs().sum()
Variable containing:
1.00000e-07 *
3.4925
[torch.FloatTensor of size 1]
Am I using the padding and packing functions correctly? Is there a bug in these functions? I’ve got the impression that this can lead to different models in real scenarios.
Thank you very much and sorry for the long post.
|
st116504
|
Is there a way to pass extra features along with the existing word tokens as input and feed them to the encoder RNN?
Let’s consider the NMT problem; say I have 2 more feature columns for the corresponding source vocabulary (Feature1 here). For example, consider this below:
Feature1 Feature2 Feature3
word1 x a
word2 y b
word3 y c
.
.
Moreover, I believe this is glossed over in the seq2seq tutorial (https://github.com/spro/practical-pytorch/blob/master/seq2seq-translation/seq2seq-translation.ipynb) as quoted below:
“When using a single RNN, there is a one-to-one relationship between inputs and outputs. We would quickly run into problems with different sequence orders and lengths that are common during translation…….With the seq2seq model, by encoding many inputs into one vector, and decoding from one vector into many outputs, we are freed from the constraints of sequence order and length. The encoded sequence is represented by a single vector, a single point in some N dimensional space of sequences. In an ideal case, this point can be considered the “meaning” of the sequence.”
Furthermore, I tried TensorFlow; it took me a lot of time to debug and make appropriate changes, and I got nowhere. I heard from my colleagues that PyTorch would have the flexibility to do this. Please share your thoughts on how to achieve it in PyTorch; it would be great if anyone could explain how to practically implement this. Thanks in advance.
|
st116505
|
Hello,
I was running a task on certain GPUs, and then wanted to run another task on the same GPUs. I used ctrl+C to terminate the second task, and suddenly everything got stuck. When I ran nvidia-smi to check the situation, it also became un-interruptible and got stuck. When I tried to kill every related process, the task seemed to terminate; however, some python processes could not be killed, leaving an unknown error in CUDA: they occupied memory and kept those GPUs at 100% utilization, and I could not submit any more jobs to them. I have encountered this twice. Is this disaster due to a mistake on my side or a bug in PyTorch?
BTW, I built PyTorch from the latest commit.
|
st116506
|
I also encountered the same problem and do not know how to solve it. I do not want to reboot the server because I am not in the sudoers file and other people are using it. Related issues for TensorFlow and MXNet are posted at https://github.com/dmlc/mxnet/issues/4242
|
st116507
|
Exactly. I was also using a server, and luckily I could ask the DCO to reboot it, which solved the problem.
|
st116508
|
I think the issue can best be described by giving a simple example. In the following simple script, I’m trying to take the Hessian-vector product where the Hessian is that of f_of_theta taken w.r.t. theta and the vector is simply vector.
import torch
from torch.autograd import Variable, grad
theta = Variable(torch.randn(2,2), requires_grad=True)
f_of_theta = torch.sum(theta ** 2 + theta)
vector = Variable(torch.randn(2,2))
gradient = grad(f_of_theta, theta)[0]
gradient_vector_product = torch.sum(gradient * vector)
gradient_vector_product.requires_grad = True
hessian_vector_product = grad(gradient_vector_product, theta)[0]
gradient is being calculated correctly, but when the script tries to calculate hessian_vector_product, I get the following error:
terminate called after throwing an instance of 'std::runtime_error’
what(): differentiated input is unreachable
Aborted
So, simply put, my question is how exactly should I do what I’m trying to do? Any help with this would be greatly appreciated.
Edit: Note that I’m using the pytorch version built from the latest commit on master (ff0ff33).
|
st116509
|
There’s a Hessian-vector product example in the autograd tests: https://github.com/pytorch/pytorch/blob/master/test/test_autograd.py#L151
|
st116510
|
Yeah, I was looking at that before but I couldn’t get it to work by doing something that I thought was analogous. Turns out my hang-up was that it didn’t occur to me that it was important to pass in a Variable to the first backward pass and not a Tensor.
For the sake of anyone else who may read this in the future, it appears that the following is what is needed to get my simple example working:
import torch
from torch.autograd import Variable, grad
theta = Variable(torch.randn(2,2), requires_grad=True)
f_of_theta = torch.sum(theta ** 2 + theta)
vector = Variable(torch.randn(2,2))
f_of_theta.backward(Variable(torch.ones(2,2), requires_grad=True), retain_variables=True)
gradient = theta.grad
gradient_vector_product = torch.sum(gradient * vector)
gradient_vector_product.backward(torch.ones(2,2))
hessian_vector_product = theta.grad - gradient
|
st116511
|
You don’t need to pass in a Variable nor specify retain_variables. This would be enough:
f_of_theta.backward(create_graph=True)
|
st116512
|
When I do:
import torch
from torch.autograd import Variable, grad
theta = Variable(torch.randn(2,2), requires_grad=True)
f_of_theta = torch.sum(theta ** 2 + theta)
vector = Variable(torch.randn(2,2))
f_of_theta.backward(create_graph=True)
gradient = theta.grad
gradient_vector_product = torch.sum(gradient * vector)
gradient_vector_product.backward(torch.ones(2,2))
hessian_vector_product = theta.grad - gradient
I get:
TypeError: backward() got an unexpected keyword argument ‘create_graph’
I’m still on the same PyTorch version as before. Weird. I definitely see the create_graph argument on line 46 of ./torch/autograd/__init__.py of the source I’m building off of. Not sure what to make of that.
|
st116513
|
Hello @apaszke,
I’m not sure whether it is relevant, but for me, fn.backward does not take create_graph either, but backward(fn, create_graph=True) works as expected.
This seems to be because right now
github.com/pytorch/pytorch/blob/master/torch/autograd/variable.py#L110
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
does not take create_graph.
Best regards
Thomas
|
st116514
|
@apaszke That was probably my issue too. On a somewhat related note, is there any sense of an ETA on converting all operations to be twice differentiable?
|
st116515
|
https://github.com/mjacar/pytorch-trpo
I ran example.py in this project and came across several problems; one is the unexpected argument create_graph.
Changing the dict iteration methods, changing xrange to range, and removing create_graph=True made the TRPO CartPole example work in Python 3.
But I want to know: what does create_graph do for us? And has this argument already been removed? Thanks.
|
st116516
|
Hi, @tigerneil
try uninstalling torch, and rebuilding from source? Then, it should work fine, with create_graph=True
|
st116517
|
create_graph=True is necessary for the proper second derivative approach to work. Right now the bottleneck on that is having all PyTorch operations be twice differentiable. So, in a sense, that parameter is presently useless. For what it’s worth, I plan on continuing work on that repo which will include documentation so there’s no confusion over Python 2 vs Python 3 among numerous other things.
|
st116518
|
Yes. As far as I understand, 0.2 will be released next week or so, but until then you need to compile from source for second derivatives.
|
st116519
|
When training a segmentation task using 4 GPUs, the GPU memory usage of the first one is much larger than the others. Any thoughts? Thanks!
|
st116520
|
Maybe it’s some buffers used by the optimizer, such as the momentum, whose size equals that of the model parameters.
|
st116521
|
I found this:
github.com/pytorch/pytorch/blob/master/torch/nn/parallel/data_parallel.py#L47
>>> output = net(input_var)
"""
# TODO: update notes/cuda.rst when this class handles 8+ GPUs well

def __init__(self, module, device_ids=None, output_device=None, dim=0):
    super(DataParallel, self).__init__()
    if not torch.cuda.is_available():
        self.module = module
        self.device_ids = []
        return
    if device_ids is None:
        device_ids = list(range(torch.cuda.device_count()))
    if output_device is None:
        output_device = device_ids[0]
    self.dim = dim
    self.module = module
    self.device_ids = device_ids
    self.output_device = output_device
It seems it always gathers the output to the first GPU. Is this a temporary solution?
|
st116522
|
Do you want to retain the outputs on their respective GPUs without gathering them back onto a particular GPU?
If so, you want to do the scatter and parallel_apply and skip the gather yourself. These are primitives under nn.parallel; DataParallel is effectively the composition scatter + parallel_apply + gather.
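A rough sketch of what that looks like, assuming two GPUs and a toy nn.Linear as a stand-in for the real module:
import torch
import torch.nn as nn
from torch.nn import parallel
from torch.autograd import Variable

device_ids = [0, 1]
net = nn.Linear(10, 5).cuda(device_ids[0])              # stand-in for your model
input_var = Variable(torch.randn(8, 10).cuda(device_ids[0]))

replicas = parallel.replicate(net, device_ids)          # copy the module onto each GPU
scattered = parallel.scatter(input_var, device_ids)     # split the batch across GPUs
outputs = parallel.parallel_apply(replicas, [(x,) for x in scattered])
# no parallel.gather(...) here: `outputs` is a list with one result per GPU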
|
st116523
|
Thanks smth!
I figured it out, exactly as you said: https://github.com/pytorch/pytorch/issues/1893
|
st116524
|
What is the advantage or benefit of not gathering the outputs onto a single GPU? Or does it have some disadvantages?
|
st116525
|
Hello,
I’m new to the PyTorch framework (coming from Theano and Tensorflow mainly) which I really enjoy to use.
I’ve followed the introduction tutorial and read the Classifying Names with a Character-Level RNN one.
I am now trying to adapt it to a char-level LSTM model in order to gain some practical experience with the framework.
Basically I feed the model sequences of char indices and give it as target the same sequence shifted one step into the future.
However, I can’t overfit a simple training example and I don’t see what I did wrong.
If someone can spot my mistake it would be very helpful.
Here is my code:
class LSTMTxtGen(nn.Module):
    def __init__(self, hidden_dim, n_layer, vocab_size):
        super(LSTMTxtGen, self).__init__()
        self.n_layer = n_layer
        self.hidden_dim = hidden_dim
        self.vocab_size = vocab_size
        self.lstm = nn.LSTM(vocab_size, hidden_dim, n_layer, batch_first=True)
        # The linear layer that maps from hidden state space to tag space
        #self.hidden = self.init_hidden()

    def init_hidden(self, batch_size):
        # Before we've done anything, we dont have any hidden state.
        # Refer to the Pytorch documentation to see exactly
        # why they have this dimensionality.
        # The axes semantics are (num_layers, minibatch_size, hidden_dim)
        return (autograd.Variable(torch.zeros(self.n_layer, batch_size,
                                              self.hidden_dim)),
                autograd.Variable(torch.zeros(self.n_layer, batch_size,
                                              self.hidden_dim)))

    def forward(self, seqs):
        self.hidden = self.init_hidden(seqs.size()[0])
        lstm_out, self.hidden = self.lstm(seqs, self.hidden)
        lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
        lstm_out = nn.Linear(lstm_out.size(1), self.vocab_size)(lstm_out)
        return lstm_out

model = LSTMTxtGen(
    hidden_dim = 50,
    n_layer = 3,
    vocab_size = 44,
)
print(model)

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adamax(model.parameters())

G = Data.batch_generator(5,100)
batch_per_epoch, to_idx, to_char = next(G)
X, Y = next(G)

for epoch in range(10):
    losses = []
    for batch_count in range(batch_per_epoch):
        model.zero_grad()
        #mode.hidden = model.init_hidden()
        #X, Y = next(G)
        X = autograd.Variable(torch.from_numpy(X))
        Y = autograd.Variable(torch.from_numpy(Y))
        preds = model(X)
        loss = criterion(preds.view(-1, model.vocab_size), Y.view(-1))
        loss.backward()
        optimizer.step()
        losses.append(loss)
        if (batch_count % 20 == 0):
            print('Loss: ', losses[-1])
The loss keeps oscillating and no improvement is made.
|
st116526
|
You are creating a new output layer each time you call forward:
lstm_out = nn.Linear(lstm_out.size(1), self.vocab_size)(lstm_out)
and as a result, nothing is learned across iterations.
It should be something like:
lstm_out = self.output_layer(lstm_out)
where self.output_layer = nn.Linear(...) should be defined in __init__()
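To make that concrete, a minimal sketch of the corrected module (only the relevant parts, keeping the original sizes and letting nn.LSTM default its hidden state to zeros):
import torch.nn as nn

class LSTMTxtGen(nn.Module):
    def __init__(self, hidden_dim, n_layer, vocab_size):
        super(LSTMTxtGen, self).__init__()
        self.hidden_dim = hidden_dim
        self.lstm = nn.LSTM(vocab_size, hidden_dim, n_layer, batch_first=True)
        self.output_layer = nn.Linear(hidden_dim, vocab_size)  # created once, so its weights are trained

    def forward(self, seqs):
        lstm_out, hidden = self.lstm(seqs)   # hidden state defaults to zeros
        lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
        return self.output_layer(lstm_out)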
|
st116527
|
According to the documentation, torch.nn.RNN takes two inputs, input and h_0. However, I found that even if I only feed input, the RNN still returns output and h_n. Any idea why?
|
st116528
|
@ZeweiChu because if the hidden state is None, it is initialized as a zero tensor, which is then used to compute the next hidden states, and the RNN outputs the new hidden states. You can look at the source code and check.
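A minimal sketch illustrating that omitting h_0 gives the same result as passing zeros explicitly:
import torch
import torch.nn as nn
from torch.autograd import Variable

rnn = nn.RNN(input_size=4, hidden_size=6, num_layers=1)
x = Variable(torch.randn(3, 2, 4))                    # (seq_len, batch, input_size)
h0 = Variable(torch.zeros(1, 2, 6))                   # (num_layers, batch, hidden_size)

out_default, hn_default = rnn(x)                      # h_0 omitted -> zeros are used
out_explicit, hn_explicit = rnn(x, h0)
print((out_default - out_explicit).abs().max())       # ~0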
|
st116529
|
I wrote a script to train a conv net on MNIST, but there are some problems.
The MNIST dataset is from Kaggle (train.csv). The reference code is
listed in the header comments of the script.
Problems:
(1) line 46: without this line, it raises RuntimeError: expected Double tensor (got Float tensor).
The tutorial and example code don’t do it like this…
46 convnet = convnet.double() #RuntimeError: expected Double tensor (got Float tensor)
(2) line 66: it crashes if I don’t explicitly assign True to model.trainning. The tutorial and example code don’t do it like this…
66 model.trainning = True #AttributeError: 'MnistConvNet' object has no attribute 'trainning'
(3) The model doesn’t seem to be learning (test accuracy is still 0.11 after 100 iterations), but I can’t find out why. The test accuracy should rise after just a few iterations (the TF example code on Kaggle does so). Is the code wrong?
Thanks in advance!
Complete script:
1 # http://pytorch.org/tutorials/
2 # http://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html
3 # https://github.com/pytorch/examples/blob/master/mnist/main.py
4
5 import sys
6 import os
7
8 os.putenv('OPENBLAS_NUM_THREADS', '4')
9
10 import torch as th
11 import torch.nn.functional as thnf
12 import numpy as np
13 import pandas as pd
14 from sklearn.model_selection import train_test_split
15 print('-> Using TH', th.__version__)
16
17 ### Read Train-Val data and split ###
18 trainval = pd.read_csv("train.csv")
19 trainval_images = trainval.iloc[:, 1:].div(255)
20 trainval_labels = trainval.iloc[:, :1]
21 train_images, val_images, train_labels, val_labels = train_test_split(
22     trainval_images, trainval_labels, train_size=0.8, random_state=0)
23 print('-> train set shape', train_images.shape)
24 print('-> val set shape', val_images.shape)
25
26 ### Model ###
27 class MnistConvNet(th.nn.Module):
28     def __init__(self):
29         super(MnistConvNet, self).__init__()
30         self.conv1 = th.nn.Conv2d(1, 10, kernel_size=5)
31         self.conv2 = th.nn.Conv2d(10, 20, kernel_size=5)
32         self.conv2_drop = th.nn.Dropout2d()
33         self.fc1 = th.nn.Linear(320, 50)
34         self.fc2 = th.nn.Linear(50, 10)
35     def forward(self, x):
36         x = thnf.relu(thnf.max_pool2d(self.conv1(x), 2))
37         x = thnf.relu(thnf.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
38         x = x.view(-1, 320)
39         x = thnf.relu(self.fc1(x))
40         x = thnf.dropout(x, training=self.trainning)
41         x = self.fc2(x)
42         return x
43         #return thnf.log_softmax(x)
44
45 convnet = MnistConvNet()
46 convnet = convnet.double() #RuntimeError: expected Double tensor (got Float tensor)
47 crit = th.nn.CrossEntropyLoss()
48 optimizer = th.optim.Adam(convnet.parameters(), lr=1e-2)
49
50 ### Train and Val ###
51
52 def step_train(model, iteration):
53     i = iteration
54     batch_images = train_images.iloc[
55         (i*50)%33600:
56         (i+1)%672==0 and 33600 or ((i+1)*50)%33600].values
57     batch_labels = train_labels.iloc[
58         (i*50)%33600:
59         (i+1)%672==0 and 33600 or ((i+1)*50)%33600].values
60     batch_images = th.autograd.Variable(th.from_numpy(batch_images))
61     batch_labels = th.autograd.Variable(th.from_numpy(batch_labels))
62     batch_images = batch_images.resize(50, 1, 28, 28)
63     batch_labels = batch_labels.resize(50)
64
65     model.train()
66     model.trainning = True #AttributeError: 'MnistConvNet' object has no attribute 'trainning'
67     optimizer.zero_grad()
68     output = model(batch_images)
69     loss = crit(output, batch_labels)
70     loss.backward()
71     optimizer.step()
72
73     pred = output.data.max(1)[1]
74     correct = pred.eq(batch_labels.data).cpu().sum()
75     print('-> Iter {:5d} |'.format(i), 'loss {:7.3f} |'.format(loss.data[0]),
76           'Bch Train Accu {:.2f}'.format(correct / output.size()[1]))
77
78 def step_eval(model, iteration):
79     correct = 0
80     total = val_images.shape[0]
81     lossaccum = 0.
82     print('-> TEST @ {} |'.format(iteration), end='')
83     for i in range(0, val_images.shape[0], 50):
84         images = val_images.iloc[i:i+50].values
85         labels = val_labels.iloc[i:i+50].values
86         images = th.autograd.Variable(th.from_numpy(images))
87         labels = th.autograd.Variable(th.from_numpy(labels))
88         images = images.resize(50, 1, 28, 28)
89         labels = labels.resize(50)
90
91         model.eval()
92         model.trainning = False
93         output = model(images)
94         loss = thnf.nll_loss(output, labels)
95         lossaccum += loss.data[0]
96         pred = output.data.max(1)[1]
97         correct += pred.eq(labels.data).cpu().sum()
98         print('.', end=''); sys.stdout.flush()
99     print('|')
100     print('-> TEST @ {} |'.format(iteration),
101           'Loss {:7.3f} |'.format(lossaccum),
102           'Accu {:.2f}|'.format(correct / total))
103     exit()
104
105 for i in range(20000):
106     step_train(convnet, i)
107     if i>0 and i%100==0:
108         step_eval(convnet, i)
part of output
-> Using TH 0.1.12
-> train set shape (33600, 784)
-> val set shape (8400, 784)
-> Iter 0 | loss 2.286 | Bch Train Accu 0.60
-> Iter 1 | loss 2.340 | Bch Train Accu 0.70
|
st116530
|
I’m currently implementing deep photo style transfer, which contains a loss function with a dense x sparse x dense product in it. I found spmm to do sparse x dense, but didn’t find one to do the reverse (dense x sparse). Am I missing something, or does it really not exist?
|
st116531
|
My question is not explicitly a programming issue. I want to know how important it is to shuffle the training data before each epoch/iteration in PyTorch. Please share your experience.
Also, if I see that shuffling the test data changes the performance of a model that was trained without shuffling the training data, what can we infer from that? Can we say the model learned features sensitive to batch position? Or that the model is perhaps implemented in a wrong way?
I am interested to know if anyone has faced performance issues depending on whether data shuffling was done or not.
|
st116532
|
Hi,
We can use THCudaTensor_div to divide by a real number and THCudaTensor_cdiv to divide 2 tensors.
What is the best way to take the reciprocal of the elements of a THCudaTensor in a custom C++ extension?
Regards
Nabarun
|
st116533
|
One way I am thinking of right now is to create another tensor, fill it with ones and use THCudaTensor_cdiv. Wondering if there is any straightforward way to do it.
|
st116534
|
Hi,
I was running the following code on GPU:
import torch
X = torch.rand(4, 4)
X = X.cuda()
torch.cumsum(X, dim=0)
and I got this error:
THCudaCheck FAIL file=/home/shuoyangd/pytorch/torch/lib/THC/generic/THCTensorMathScan.cu line=52 error=8 : invalid device function
What confuses me is that other functionalities (matrix multiplication etc.) seem to work fine (on the same GPU). So just to confirm, is this just that torch.cumsum function does not have GPU support yet?
BTW, I’m using Tesla K80.
Thanks!
|
st116535
|
When the index is a 1-d tensor by itself, it seems advanced indexing gets triggered and we only get a copy:
x = torch.rand(4,3)
print x
x[1].add_(3)
print x
x[torch.LongTensor(1).random_(4)].add_(3)
print x
We get:
0.0500 0.7517 0.7858
0.3007 0.8665 0.8264
0.5933 0.3438 0.8359
0.6214 0.4772 0.1589
[torch.FloatTensor of size 4x3]
0.0500 0.7517 0.7858
3.3007 3.8665 3.8264
0.5933 0.3438 0.8359
0.6214 0.4772 0.1589
[torch.FloatTensor of size 4x3]
0.0500 0.7517 0.7858
3.3007 3.8665 3.8264
0.5933 0.3438 0.8359
0.6214 0.4772 0.1589
[torch.FloatTensor of size 4x3]
Is it possible to update only selected row/column?
Thank you!
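One possible sketch, assuming index_add_ fits your use case, that updates only the selected row in place without going through advanced indexing:
import torch

x = torch.rand(4, 3)
idx = torch.LongTensor(1).random_(4)                 # the row to update
x.index_add_(0, idx, torch.Tensor(1, 3).fill_(3))    # adds 3 to every element of that row in place
print(x)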
|
st116536
|
I’m trying to combine LSTM (scratch) and CNN (pre-trained) as follows:
Input -> LSTM -> CNN -> Output (t=1)
|
Input -> LSTM -> CNN -> Output (t=2)
|
Input -> LSTM -> CNN -> Output (t=3)
|
The recurrent connection exists only in the LSTM, not the CNN.
I use a pre-trained CNN, so I don’t need to train it.
All I need is the gradient from the CNN to train the LSTM.
Because the CNN requires a lot of memory, I want to share gradient memory during BPTT.
For example, the gradient from the CNN is computed and saved for LSTM training at each time step, but the internal gradient memory (or buffers?) of the CNN is reused for the next step.
If I follow the conventional RNN training code (forward for t=1,2,3,…,T and backward for t=T,…,3,2,1), I think the dynamic graph of PyTorch will allocate the CNN’s memory separately for each time step.
How can I handle this problem?
I would appreciate any answer.
|
st116537
|
Hello,
if the loss is per time-step you could
tell the cnn you won’t need parameter gradients: for p in cnn.parameters(): p.requires_grad = False,
run the LSTM on the input, capture output in one long array, say, lstm_out. Let’s pretend it is sequence-first.
apply the CNN to each timestep-slice wrapped in a new Variable, so out_step = cnn(Variable(lstm_out.data[i], requires_grad=True)),
compute the loss per step
loss_step = loss_function(out_step, target),
backprop through the cnn with loss_step.backward(),
append gradient for timestep to a list, say, lstm_out_grad.append(out_step.grad)
backprop through the LSTM with lstm_out.backward(torch.stack(lstm_out_grad)).
This should give you the gradients in the LSTM, except for any bugs that come with code only typed and not tested.
Here is a minimal snippet how to manually break backprop
How to manually do chain rule backprop?
Hello,
is this approximately what you need: let’s say you have
from torch.autograd import Variable
x = Variable(torch.randn(4), requires_grad=True)
y = f(x)
y2 = Variable(y.data, requires_grad=True) # use y.data to construct new variable to separate the graphs
z = g(y2)
(there also is Variable.detach, but not now)
Then you can do (assuming z is a scalar)
z.backward() # this computes dz/dy2 in y2.grad
y.backward(y2.grad) # this computes dy/dx * y2.grad
print (x.grad)
Note that the .backw…
Best regards
Thomas
|
st116538
|
Dear Thomas,
That seems like a really cool solution!!
I will try it as you suggested.
Thank you very much for your help.
Best,
MInju Jung
|
st116539
|
Sorry for spawning yet another memory leak thread. I’ve gone through the previous ones and didn’t find that they were the same issue (or version). But perhaps I’m mistaken.
In relation to a few previous posts I’ve made, (specifically on working with seq2seq training models, and the fact that LSTMCell's aren’t cuda enabled) I’ve come to a place where I have to iterate a sequence one-by-one through LSTM layer to generate my seq2seq (this is the motivation of the code below which corresponds to the decoder half of a VRAE).
So I’ve sprung a dreaded memory leak, and I’m not sure why. Here’s my minimal code:
import datetime, gc
import torch
import torch.nn as nn
from torch.autograd import Variable

test_lstm = nn.LSTM( 5, 512, 2 ).double().cuda()
test_ll = nn.Linear( 512, 5 ).double().cuda()
h_t = Variable( torch.cuda.DoubleTensor(2, 1, 512) , requires_grad=False).cuda()
c_t = Variable( torch.cuda.DoubleTensor(2, 1, 512) , requires_grad=False).cuda()
out = Variable( torch.cuda.DoubleTensor(1, 1, 5 ), requires_grad=False).cuda()
out, (h_t, c_t) = test_lstm( out , (h_t, c_t) ) # <- warmup the run - first memory reading
out = test_ll(out.squeeze(1)).unsqueeze(1)
for i in range( 200 ):
    out, (h_t, c_t) = test_lstm( out , (h_t, c_t) )
    out = test_ll(out.squeeze(1)).unsqueeze(1)
    print( "%d %d" % (i, datetime.datetime.now().microsecond))
    gc.collect()
This code works without a hitch on the cpu.
On the gpu, I start with process GPU consumption of 189MiB at start, 413MiB at the first checkpoint, and then the following output:
1 780961
2 803360
...
66 886347
67 903248
68 921229
THCudaCheck FAIL file=/py/conda-bld/pytorch_1490980628440/work/torch/lib/THC/generic/THCStorage.cu line=66 error=2 : out of memory
Traceback (most recent call last):
File "/usr/bin/anaconda3/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2881, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-27-226b10408002>", line 2, in <module>
out, (h_t, c_t) = test_lstm( out , (h_t, c_t) )
File "/usr/bin/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/usr/bin/anaconda3/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 91, in forward
output, hidden = func(input, self.all_weights, hx)
File "/usr/bin/anaconda3/lib/python3.6/site-packages/torch/nn/_functions/rnn.py", line 327, in forward
return func(input, *fargs, **fkwargs)
File "/usr/bin/anaconda3/lib/python3.6/site-packages/torch/autograd/function.py", line 202, in _do_forward
flat_output = super(NestedIOFunction, self)._do_forward(*flat_input)
File "/usr/bin/anaconda3/lib/python3.6/site-packages/torch/autograd/function.py", line 224, in forward
result = self.forward_extended(*nested_tensors)
File "/usr/bin/anaconda3/lib/python3.6/site-packages/torch/nn/_functions/rnn.py", line 269, in forward_extended
cudnn.rnn.forward(self, input, hx, weight, output, hy)
File "/usr/bin/anaconda3/lib/python3.6/site-packages/torch/backends/cudnn/rnn.py", line 247, in forward
fn.weight_buf = x.new(num_weights)
RuntimeError: cuda runtime error (2) : out of memory at /py/conda-bld/pytorch_1490980628440/work/torch/lib/THC/generic/THCStorage.cu:66
and at this point, I’m pegged at 4GiB of memory. In the span of 140ms.
Nvidia driver version is 375.39,
nvcc is 8.0, V8.0.61
pytorch 0.1.11+27fb875
Moreover, nothing I do at this point frees that memory and I have to respawn my process.
Edit: running this with Variables marked as volatile instead of !requires_grad doesn’t end up in a memory problem.
My hope is that I’m doing something stupid. Let me know if you have any questions about setup.
|
st116540
|
It’s a known issue with the nn.LSTM module (and nn.GRU too). It’s not a leak, but they use too much memory when used to iterate over inputs one step at a time. You should use LSTMCell instead. It works with CUDA too and should be reasonably fast.
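For reference, a rough sketch of the same step-by-step loop written with nn.LSTMCell (sizes mirror the snippet above; a single cell here, so this is only an illustration, not a drop-in replacement for the 2-layer nn.LSTM):
import torch
import torch.nn as nn
from torch.autograd import Variable

cell = nn.LSTMCell(5, 512).cuda()
ll = nn.Linear(512, 5).cuda()

h_t = Variable(torch.zeros(1, 512).cuda())
c_t = Variable(torch.zeros(1, 512).cuda())
out = Variable(torch.zeros(1, 5).cuda())

for i in range(200):
    h_t, c_t = cell(out, (h_t, c_t))   # one time step at a time
    out = ll(h_t)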
|
st116541
|
Alternatively, you can disable cuDNN using torch.backends.cudnn.enabled = False.
|
st116542
|
Ahh, thanks (and drats).
Following this thread, I had decided not to make use of LSTMCells, but I guess I need to go back on that decision.
Before I go too far down this track: is it possible or even supported to swap parameters of an LSTM in and out of LSTMCells on the fly?
Thanks for your time.
Edit: I just realized probably an even simpler solution to this whole problem is to transfer my model back and forth to the CPU using pinned memory. Way better solution than to rearchitect everything.
|
st116543
|
Hi @MBlah, sorry to revive this after a long time, but I seem to be facing the same problem. Could you probably give snippet about how you transfer your model back and forth?
|
st116544
|
I ended up changing my architecture specifically to work around this issue.
For inference mode, I simply use volatile=True and this solves everything.
Otherwise, you can move your model using .cpu() and .cuda() calls.
E.g. if seq is a class module, then you can do:
seq.cpu()
output = seq(input)
seq.cuda()
Furthermore, you can call pin_memory() on the input tensors to facilitate transfers back and forth.
It’s very hacky, and in the end, I found an alternate way of training my model.
|
st116545
|
@apaszke Thanks for the hint. Is there another reference link about nn.LSTM’s memory issue when cuDNN is enabled? I was also struggling with an OOM issue until reading this post…
|
st116546
|
Hi all, I think I have a a misunderstanding of how to use lstms. I’ve read through all the docs and also a bunch of lstm examples.
I am trying to do something basic, which is just to take the output of an lstm and pass it through a linear layer, but the sizes dont seem to be coming out properly.
My batch size is 128 and that is what I expect at the final line for my forward, but instead I get back 22036.
class MyRNN(nn.Module):
    def __init__(self, embed_size, hidden_size, vocab_size, num_layers):
        super(MyRNN, self).__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, num_layers, batch_first=True)
        self.linear = nn.Linear(hidden_size, 4800)
        self.init_weights()

    def init_weights(self):
        """Initialize weights."""
        self.embed.weight.data.uniform_(-0.1, 0.1)
        self.linear.weight.data.uniform_(-0.1, 0.1)
        self.linear.bias.data.fill_(0)

    def forward(self, features, captions, lengths):
        embeddings = self.embed(captions)
        print("embedding size:" + str(embeddings.size()))
        embeddings = torch.cat((features.unsqueeze(1), embeddings), 1)
        packed = pack_padded_sequence(embeddings, lengths, batch_first=True)
        rnn_features, _ = self.lstm(packed)
        print("rnn_features:" + str(rnn_features.data.size()))
        outputs = self.classifier(rnn_features[0])
        # output should be of size 128 * 4800, not 22036 * 4800
        return outputs
Here are my print statement outputs:
captions size:torch.Size([128, 362])
captoins size:torch.Size([128, 302])
padded captions size:torch.Size([22036])
embedding size:torch.Size([128, 302, 256])
packed sizetorch.Size([22036, 256])
rnn_features:torch.Size([22036, 512])
It looks like the issue is that I dont understand pack_padded_sequence. The docs say output is “The returned Variable’s data will be of size TxBx*, where T is the length of the longest sequence and B is the batch size. If batch_first is True, the data will be transposed into BxTx* format.” But it seems like the output is just 21456??? Why is that?
What is the point of pack_padded_sequence? It seems optional to use for RNNs. I see some code that uses them and some that don’t. Is it better to use these sequences vs Tensors?
it seems if you use torch.nn.utils.rnn.pack_padded_sequence() then you don’t need to pass h_0 and c_0? hard to tell, the docs don’t really say.
And for the input of LSTM, from the docs it says “input (seq_len, batch, input_size): tensor containing the features of the input sequence. The input can also be a packed variable length sequence.” Can someone explain what the difference between seq_len and input_size is?
Any help would be greatly appreciated, I’ve been stuck on this for a while, trying to fix it on my own as it seems it should be easy to fix, but everything i’ve tried does not work.
|
st116547
|
I am trying to understand the similarities/differences with DeepMind’s Sonnet framework.
Here are my conclusions:
Both frameworks try to introduce some degrees of freedom in the forward/backward implementation of the layers.
Sonnet splits the layer implementation into two parts:
Configuration of the layer
Connection of the layer to the syntactic tree
PyTorch, on the other hand, introduces a new notion of syntactic tree: the dynamic syntactic tree.
While the intention is the same, in the case of Sonnet, as soon as a layer is connected you cannot change it again. In contrast, in the case of PyTorch, you can change the syntactic tree throughout learning.
Am I correct?
|
st116548
|
Hi,
For several days now, I am trying to build a simple sine-wave sequence generation using LSTM, without any glimpse of success so far.
I started from the “time sequence prediction example”
All what I wanted to do differently is:
Use different optimizers (e.g RMSprob) than LBFGS
Try different signals (more sine-wave components)
This is the link to my code. experiment.py is the main file
What I do is:
I generate artificial time-series data (sine waves)
I cut those time-series data into small sequences
The input to my model is a sequence of time 0...T, and the output is a sequence of time 1...T+1
What happens is:
The training and the validation losses goes down smoothly
The test loss is very low
However, when I try to generate arbitrary-length sequences, starting from a seed (a random sequence from the test data), everything goes wrong. The output always flattens out
[figure: the generated sequence flattening out]
I simply don’t see what the problem is. I have been playing with this for a week now, with no progress in sight.
I would be very grateful for any help.
Thank you
|
st116549
|
The model and training code looks good. I think there’s something weird in your get_batch function or related batching logic - by turning down batch_size it unexpectedly runs slower but gets better results:
Epoch 11 -- train loss = 0.005119644441312117 -- val loss = 0.01055418627721996
|
st116550
|
Thank you for your reply @spro
What was the batch_size you used to get this result? I reduced it from 32 to 8, but it still flattens out
|
st116551
|
I rechecked the get_batch function. You were right; there were two problems with it (the chosen batch_size didn’t propagate to this function, and the targets range wasn’t chosen correctly). I modified the github repository.
However, this still doesn’t resolve the problem. I tried with different batch sizes, but it still flattens out
Help?
|
st116552
|
Good results after shortening the period of your sine wave to 60 steps (from 180):
[screenshot of the resulting predictions]
It might be that the long range dependency is too long for such a small model. It can learn to “fit the data” when the teacher is holding its hand, but is never trained on its own outputs, so that’s as far as it goes.
|
st116553
|
Thank you so much @spro !
I was able to regenerate your output.
I tried with 2 sine-waves components as well (while reducing the steps as you mentioned), and it works beautifully
It is unstable for 3 sine-wave components, but I think this is due to the issue you mentioned, that I don’t train the model on its outputs.
I will train the model on its outputs and see how it performs
|
st116554
|
@spro: Would you recommend me any paper/blog/tutorial on how to train the model on its outputs?
I understand the general idea, but I am having doubts about its details
|
st116555
|
Maybe take a look at @spro ’s seq2seq tutorial:
http://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html#training-and-evaluating
Teacher forcing and that kind of stuff is used a lot in seq2seq. I first learnt about it from Wojciech Zaremba and Ilya Sutskever’s
Learning to Execute
If you’re comfortable with Torch, then I think you’d have a lot of fun playing around with that code, IT’S A CLASSIC
https://github.com/wojciechz/learning_to_execute
|
st116556
|
A quick and dirty version with your existing code:
half_seq_len = int(seq_len / 2)
output = rnn(data[:, :half_seq_len], half_seq_len)
|
st116557
|
Hi Omar @osm3000 and Sean @spro,
on the topic of RNN training, I just wondered if either of you guys had seen an implementation of Professor Forcing,
https://arxiv.org/abs/1610.09038
I’d be interested in doing a PyTorch implementation of this?
|
st116558
|
Hi,
Thank you for the LSTM threads, I’m learning so much from them!
(This one and the more recent one, but I felt that this was better fitting here.)
A few observations that may or may not be interesting regarding the PyTorch example (in particular with the (entire) batch):
At least with single precision (on CUDA) it seems to me that a lower loss does not necessarily mean nicer-looking predictions (at ~1e-4); I find both.
I would expect something to be up regarding single precision given that the example is done with doubles…
It seems that switching from LBFGS to Adam also converges similarly.
I have not been entirely successful using double precision on CUDA.
Is that similar to your experiences? What’s the conclusion, in particular for the first point.
Best regards
Thomas
|
st116559
|
Hi there,
I was trying to use the weight_norm() in the master branch so I built the bleeding edge version of PyTorch from source.
The error message is as below:
Thread 1 "python" received signal SIGSEGV, Segmentation fault.
0x00007fffd69bba8c in THCudaFree () from /home/user2/.conda/envs/pytorch_master/lib/python3.6/site-packages/torch/lib/libTHC.so.1
So could anyone tell what’s the best practice to build PyTorch from source?
Thanks!
|
st116560
|
What I would do personally is:
follow the instructions at https://github.com/pytorch/pytorch#from-source rigorously, to the letter; copy and paste the entire output, from start to finish, including all commands, output etc., into one ginormous https://gist.github.com
log the issue at github.com/pytorch/pytorch/issues
provide full detail on:
which OS
32-bit/64-bit (by the way, 32-bit almost certainly not supported)
anything slightly weird/odd/unusual/interesting about your system
Well… I might start by doing the above, but here. So, you started doing that, but are missing:
the ginormous gist of full commands and output
full details on your os, anything weird/unusual/interesting about your system
|
st116561
|
Hi,
I have used the conda virtual environment to build the master branch.
OS: Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-83-generic x86_64)
Python version: Python3
Driver Version: 375.66
conda create --name pytorch_master
source activate pytorch_master
export CMAKE_PREFIX_PATH=/home/user2/.conda/envs/pytorch_master/
git clone https://github.com/pytorch/pytorch
cd pytorch
conda install numpy pyyaml mkl setuptools cmake gcc cffi
conda install -c soumith magma-cuda80
And if I run python setup.py install directly, it will incur an import error:
Could not find platform independent libraries <prefix>
Could not find platform dependent libraries <exec_prefix>
Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
Fatal Python error: Py_Initialize: Unable to get the locale encoding
ImportError: No module named 'encodings'
Current thread 0x00007faeabc48700 (most recent call first):
[1] 12644 abort (core dumped) python setup.py install
My solution to this was to deactivate the virtual environment and re-enter it again. This time I could install PyTorch without any problem.
But no matter what program I try to run, as long as it uses a CudaTensor, it crashes with a segmentation fault as above.
I am wondering if it’s because of the conda virtual envs. If I want to install the bleeding-edge version in a virtual env, could you please shed some light on the best way to do this?
Thanks,
Danlu
|
st116562
|
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import torch.backends.cudnn as cudnn
input = torch.randn(64, 3, 32, 32).cuda()
input_var = Variable(input)
cudnn.benchmark = True
net = nn.Conv2d(3, 24, kernel_size=3, stride=1,
                padding=1, bias=False).cuda()
net.train()
output_var = net(input_var)
Finally found where the seg fault comes from! It’s because I set cudnn.benchmark = True. Do you have any idea on it?
FYI: I could run v0.12 with the flag cudnn.benchmark = True on the same computer, so the installed cudnn is supposed to not be the problem. Is it possible that something goes wrong when linking to the cudnn lib?
Thanks,
Danlu
|
st116563
|
I installed the 0.2+5254846 (master branch) just now but it seems that I still cannot use cudnn.benchmark = True. Am I misunderstanding something?
Thanks for your quick reply!
|
st116564
|
i ran your script on master, and it didn’t segfault for me. Can you give me a gdb stack-trace if it is crashing for you on the master branch?
|
st116565
|
I tried to get the binaries. I tried to compile it from source, but every time I end up with something built against Python 2.7.
|
st116566
|
seems like an issue might be with your python install.
What’s the output of:
python --version
pip --version
Are you using an anaconda python?
|
st116567
|
python 2.7.13 :: Anaconda custom (x86_64)
pip 9.0.1 from /Users/olivier/anaconda/lib/python2.7/site-packages (python 2.7)
but my jupyter environment is on python 3
and when I activate tensorflow, the system says:
python 3.5.2 :: continuum analytics, Inc
and
pip 9.0.1 from /Users/olivier/anaconda/envs/tensorflow/lib/python3.5/site-packages (python 3.5)
How can I get this fixed while keeping my TensorFlow operational?
|