st118368
|
It seems that it’s a bug in [email protected].
EDIT: I think that answer meant that it’s a problem in that guy’s code, but I really have no idea, other than it being some stdlibc++ or pthread problem.
|
st118369
|
I installed from the binaries using conda.
EDIT: I used conda install pytorch torchvision cuda80 -c soumith after installing conda via the 64-bit installer on the anaconda webpage (https://www.continuum.io/downloads)
I’m on arch linux
$ python --version
Python 3.6.0 :: Anaconda 4.3.1 (64-bit)
$ pacman -Q glibc
glibc 2.25-1
I can try installing from source and see if that changes anything.
|
st118370
|
Installing from source (I’m using what’s on master right now for pytorch) didn’t change the behaviour but there is some interesting new info in the stacktrace from the debugger:
$ gdb python
GNU gdb (GDB) 7.12.1
Copyright (C) 2017 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-pc-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from python...done.
(gdb) run net_test.py
Starting program: /home/clemente/anaconda3/bin/python net_test.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/libthread_db.so.1".
warning: File "/home/clemente/anaconda3/lib/libstdc++.so.6.0.19-gdb.py" auto-loading has been declined by your `auto-load safe-path' set to "$debugdir:$datadir/auto-load".
To enable execution of this file add
add-auto-load-safe-path /home/clemente/anaconda3/lib/libstdc++.so.6.0.19-gdb.py
line to your configuration file "/home/clemente/.gdbinit".
To completely disable this security protection add
set auto-load safe-path /
line to your configuration file "/home/clemente/.gdbinit".
For more information about this security protection see the
"Auto-loading safe path" section in the GDB manual. E.g., run from the shell:
info "(gdb)Auto-loading safe path"
[New Thread 0x7fffe7d8f700 (LWP 22877)]
done
^C
Thread 1 "python" received signal SIGINT, Interrupt.
0x00007ffff76c4299 in pthread_cond_destroy@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
(gdb) bt
#0 0x00007ffff76c4299 in pthread_cond_destroy@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
#1 0x00007fffee44450e in torch::autograd::ReadyQueue::~ReadyQueue (this=0xb8ac10, __in_chrg=<optimized out>)
at torch/csrc/autograd/engine.cpp:36
#2 std::default_delete<torch::autograd::ReadyQueue>::operator() (this=<optimized out>, __ptr=0xb8ac10)
at /home/clemente/anaconda3/gcc/include/c++/bits/unique_ptr.h:67
#3 std::unique_ptr<torch::autograd::ReadyQueue, std::default_delete<torch::autograd::ReadyQueue> >::~unique_ptr (
this=0xb8abf0, __in_chrg=<optimized out>) at /home/clemente/anaconda3/gcc/include/c++/bits/unique_ptr.h:184
#4 std::_Destroy<std::unique_ptr<torch::autograd::ReadyQueue> > (__pointer=0xb8abf0)
at /home/clemente/anaconda3/gcc/include/c++/bits/stl_construct.h:93
#5 std::_Destroy_aux<false>::__destroy<std::unique_ptr<torch::autograd::ReadyQueue>*> (__last=0xb8abf8, __first=0xb8abf0)
at /home/clemente/anaconda3/gcc/include/c++/bits/stl_construct.h:103
#6 std::_Destroy<std::unique_ptr<torch::autograd::ReadyQueue>*> (__last=0xb8abf8, __first=<optimized out>)
at /home/clemente/anaconda3/gcc/include/c++/bits/stl_construct.h:126
#7 std::_Destroy<std::unique_ptr<torch::autograd::ReadyQueue>*, std::unique_ptr<torch::autograd::ReadyQueue> > (
__last=0xb8abf8, __first=<optimized out>) at /home/clemente/anaconda3/gcc/include/c++/bits/stl_construct.h:151
#8 std::vector<std::unique_ptr<torch::autograd::ReadyQueue, std::default_delete<torch::autograd::ReadyQueue> >, std::allocator<std::unique_ptr<torch::autograd::ReadyQueue, std::default_delete<torch::autograd::ReadyQueue> > > >::~vector (
this=0x7fffee783248 <engine+8>, __in_chrg=<optimized out>)
at /home/clemente/anaconda3/gcc/include/c++/bits/stl_vector.h:415
#9 torch::autograd::Engine::~Engine (this=0x7fffee783240 <engine>, __in_chrg=<optimized out>)
at /home/clemente/src/pytorch/torch/csrc/autograd/engine.h:21
#10 0x00007ffff6a276c0 in __run_exit_handlers () from /usr/lib/libc.so.6
#11 0x00007ffff6a2771a in exit () from /usr/lib/libc.so.6
#12 0x00007ffff7a4ba19 in Py_Exit (sts=0) at Python/pylifecycle.c:1541
#13 0x00007ffff7a4ee82 in handle_system_exit () at Python/pythonrun.c:602
#14 0x00007ffff7a4f12d in PyErr_PrintEx (set_sys_last_vars=1) at Python/pythonrun.c:612
#15 0x00007ffff7a4fa1d in PyRun_SimpleFileExFlags (fp=<optimized out>, filename=<optimized out>, closeit=<optimized out>,
flags=0x7fffffffe320) at Python/pythonrun.c:401
#16 0x00007ffff7a6aa41 in run_file (p_cf=0x7fffffffe320, filename=0x604110 L"net_test.py", fp=0x6615d0) at Modules/main.c:320
#17 Py_Main (argc=<optimized out>, argv=<optimized out>) at Modules/main.c:781
---Type <return> to continue, or q <return> to quit---q
Quit
(gdb) q
A debugging session is active.
Inferior 1 [process 22871] will be killed.
Quit anyway? (y or n) y
It seems like gdb is complaining about not being able to access the version of libstdc++ installed by conda. Is it possible that pytorch is also using my host version of libstdc++?
|
st118371
|
I was able to get this program to run successfully in a virtualized ubuntu environment…
It could have something to do with the libraries that pytorch is depending on.
|
st118372
|
Could it have something to do with the fact that the versions of glibc and libpthread shown in the stack traces are not the ones provided by anaconda? Do you know how to encourage the python runtime to use the anaconda provided versions of those libraries?
|
st118373
|
@apaszke
I think the biggest difference I can see between the ubuntu system where the code works fine and the arch system where it hangs is that the arch machine is running glibc 2.25 while the ubuntu machine is running glibc 2.23, each with its respective version of libpthread.
I realize now that conda doesn’t pack its own version of glibc really, and I’m not sure how one would test building pytorch against different versions of glibc on the same system.
If this is the case (which it’s too soon to say), anyone running a version of glibc >=2.25 is going to have problems running pytorch.
|
st118374
|
For what it is worth, I have also an Arch install and I can reproduce the problem.
Last time I used my Arch machine for pytorch was two weeks ago and everything was fine (notably, I could run the autograd tests). If I try to run them now, I see the same behaviour described in this issue (everything running but the program not exiting at the end).
Digging through my install logs, I found this:
[2017-04-01 17:21] [ALPM] upgraded glibc (2.24-2 -> 2.25-1)
So it really seems like glibc is the problem here.
|
st118375
|
Did any of you try rebuilding PyTorch since the update? Maybe the C++ interface/headers were updated too?
|
st118376
|
My build of pytorch is probably more recent than the update, but I can give it a try when I get to my laptop tonight.
Alban found this tweet of somebody seeing something that might be similar (https://twitter.com/pchapuis/status/842738509005934594)
|
st118377
|
I’m working on a model where I need to apply the same shared LSTM layer to every slice of an input tensor along a particular dimension (sort of like an LSTM convolution). Unfortunately, this easily causes a CUDA out-of-memory error when the LSTM has > 50 units.
For example, I’m trying to apply a 50-unit LSTM to every slice of the second dimension. If the second dimension has size 20, the LSTM will output 50 units for every slice of the dimension, totaling 20x50. This slicing is done via a for loop, stacking the output tensors.
In my Tensorflow implementation, I was easily able to have up to 256 LSTM units. However, a Pytorch implementation runs out of GPU memory. Is there a memory-efficient way to handle this? Is there also a way to process each slice in parallel, as they don’t depend on each other’s computation?
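A minimal sketch of the slice-and-stack pattern I describe above (all sizes here are made up for illustration):
import torch
import torch.nn as nn
from torch.autograd import Variable

batch, n_slices, seq_len, feat, hidden = 8, 20, 30, 32, 50
lstm = nn.LSTM(feat, hidden, batch_first=True)  # one shared LSTM

x = Variable(torch.randn(batch, n_slices, seq_len, feat))
outs = []
for i in range(x.size(1)):      # loop over the slice dimension
    out, _ = lstm(x[:, i])      # the same weights applied to every slice
    outs.append(out)
y = torch.stack(outs, 1)        # batch x n_slices x seq_len x hidden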
|
st118378
|
This is a good question, and I think the memory issues you’re seeing are the result of the way PyTorch currently wraps cuDNN LSTMs (which is listed as an issue but probably won’t be fixed for a little while). Try it with torch.backends.cudnn.enabled = False and see if you can fit a larger hidden size.
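For reference, a sketch of what that toggle looks like (set it before building the model):
import torch
import torch.backends.cudnn

torch.backends.cudnn.enabled = False  # fall back to the non-cuDNN RNN kernels
# ... construct and train the LSTM as usual ...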
|
st118379
|
For a data set with 600 time steps, this stackoverflow answer proposes the following training schema, where each line represents a batch with sequence_length=5 that will be trained on an RNN model:
t=0 t=1 t=2 t=3 t=4 t=5 ... t=598 t=599
sample |---------------------|
sample |---------------------|
sample |-----------------
...
sample ----|
sample ----------|
I had naively assumed that this would be excessive (as the overlap will have the model seeing each data point around sequence_length times), and thought that the following would be sufficient (say bptt sequence_length is 3 for convenience):
t=0 t=1 t=2 t=3 t=4 t=5 t=6 t=7 ... t=598 t=599
sample |-----------|
sample |-----------|
sample |-------
...
sample -----------|
The first schema now makes sense to me, as it is the only way the model will be able to see each transition between time steps at least once. If I read correctly, it also looks like get_batch does this in the word_language_model example. I just wanted to verify that this is the way we should be training sequential data.
Thanks
|
st118380
|
No, the second schema is more common for language model training (and is used in the word_language_model example). The first schema is needed in certain unusual cases, such as Pointer Sentinel Language Models, and in principle can provide marginally more information, but it’s much slower.
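A minimal sketch of the second schema, mirroring the shape of get_batch in the word_language_model example (variable names are mine):
def get_batch(source, i, bptt):
    # source: a tensor of time steps, already arranged into batch columns
    seq_len = min(bptt, len(source) - 1 - i)
    data = source[i:i + seq_len]            # inputs:  t = i .. i+seq_len-1
    target = source[i + 1:i + 1 + seq_len]  # targets: shifted one step forward
    return data, target

# stepping the loop by bptt makes the windows non-overlapping:
# for i in range(0, source.size(0) - 1, bptt):
#     data, target = get_batch(source, i, bptt)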
|
st118381
|
Thanks for the answer. I’m using time series data rather than LM but it sounds like the 2nd schema is still preferred.
Is it important that the sequences are trained in order so that the relevant hidden state is reused, or is it common to shuffle them? It seems the hidden state may not be used anyways if they’re fed in parallel batches.
jekbradbury:
and is used in the word_language_model example
Ok, I see. I was just looking at the way get_batch was indexing, but it looks like in training/evaluation it loops using range with a bptt stepsize.
|
st118382
|
For truncated BPTT training, it’s important that batches be processed in order so that hidden states are preserved.
|
st118383
|
@jekbradbury, so we have truncated BPTT training supported in Pytorch? I can’t find any docs on it yet.
|
st118384
|
Yes, it’s used in the word language model example – all you need to do is call .detach_() on a variable and it will break the computation graph in a way that truncates backpropagation.
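A minimal sketch of that pattern, as used in the example (assuming hidden is a Variable or a tuple of them, e.g. the (h, c) pair from an LSTM):
from torch.autograd import Variable

def repackage_hidden(h):
    if type(h) == Variable:
        return Variable(h.data)  # new Variable, detached from the old graph
    else:
        return tuple(repackage_hidden(v) for v in h)

# hidden = repackage_hidden(hidden)  # once per batch, before the forward pass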
|
st118385
|
When I replace the feature encoder layers of my semantic segmentation models with pretrained VGG16 from torchvision I always encounter that python runs out of cuda memory (12GB).
I wonder how this can be when the models should be equal (I have no problems with cuda when hardcoding the complete network definition myself).
Can it be that pytorch does not free up the memory from unused layers of model.vgg16()?
Source code can be found here.
|
st118386
|
@apaszke I found out this does not occur when I directly inherit nn.Module. So maybe we should not use class inheritance here?
|
st118387
|
PyTorch definitely won’t free up memory from layers that you defined in __init__ but didn’t use in forward, because the Parameters those layers use will still be in scope. That should explain the behavior you observe with inheritance.
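A minimal sketch of why: any layer assigned in __init__ is registered, whether or not forward uses it (names here are made up):
import torch.nn as nn

class Tiny(nn.Module):
    def __init__(self):
        super(Tiny, self).__init__()
        self.used = nn.Linear(10, 10)
        self.unused = nn.Linear(10, 10)  # never called in forward

    def forward(self, x):
        return self.used(x)

net = Tiny()
print(len(list(net.parameters())))  # 4: weight + bias for BOTH layers stay alive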
|
st118388
|
Even though I do not save it as class property?
class FCN(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        feat = list(models.vgg16(pretrained=True).features.children())
        self.feat1 = nn.Sequential(*feat[0:4])
        self.feat2 = nn.Sequential(*feat[5:9])
        self.feat3 = nn.Sequential(*feat[10:16])
        self.feat4 = nn.Sequential(*feat[17:23])
        self.feat5 = nn.Sequential(*feat[24:30])
        self.fconn = nn.Sequential(
            nn.Conv2d(512, 4096, 7),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Conv2d(4096, 4096, 1),
            nn.ReLU(inplace=True),
            nn.Dropout(),
        )
        self.score_fconn = nn.Conv2d(4096, num_classes, 1)

    def forward(self, x):
        x = self.feat1(x)
        x = self.feat2(x)
        x = self.feat3(x)
        return x

class FCN8(FCN):
    def __init__(self, num_classes):
        super().__init__(num_classes)
        self.score_feat3 = nn.Conv2d(256, num_classes, 1)
        self.score_feat4 = nn.Conv2d(512, num_classes, 1)

    def forward(self, x):
        feat3 = super().forward(x)
        feat4 = self.feat4(feat3)
        feat5 = self.feat5(feat4)
        fconn = self.fconn(feat5)
        score_feat3 = self.score_feat3(feat3)
        score_feat4 = self.score_feat4(feat4)
        score_fconn = self.score_fconn(fconn)
        score = F.upsample_bilinear(score_fconn, score_feat4.size()[2:])
        score += score_feat4
        score = F.upsample_bilinear(score, score_feat3.size()[2:])
        score += score_feat3
        return F.upsample_bilinear(score, x.size()[2:])
This however works:
class FCN8(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        feats = list(models.vgg16(pretrained=True).features.children())
        self.feats = nn.Sequential(*feats[0:9])
        self.feat3 = nn.Sequential(*feats[10:16])
        self.feat4 = nn.Sequential(*feats[17:23])
        self.feat5 = nn.Sequential(*feats[24:30])
        self.fconn = nn.Sequential(
            nn.Conv2d(512, 4096, 7),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Conv2d(4096, 4096, 1),
            nn.ReLU(inplace=True),
            nn.Dropout(),
        )
        self.score_feat3 = nn.Conv2d(256, num_classes, 1)
        self.score_feat4 = nn.Conv2d(512, num_classes, 1)
        self.score_fconn = nn.Conv2d(4096, num_classes, 1)

    def forward(self, x):
        feats = self.feats(x)
        feat3 = self.feat3(feats)
        feat4 = self.feat4(feat3)
        feat5 = self.feat5(feat4)
        fconn = self.fconn(feat5)
        score_feat3 = self.score_feat3(feat3)
        score_feat4 = self.score_feat4(feat4)
        score_fconn = self.score_fconn(fconn)
        score = F.upsample_bilinear(score_fconn, score_feat4.size()[2:])
        score += score_feat4
        score = F.upsample_bilinear(score, score_feat3.size()[2:])
        score += score_feat3
        return F.upsample_bilinear(score, x.size()[2:])
|
st118389
|
I can’t see anything that might be causing that right now. Can you please write a small script that uses these networks and leaks memory so we can investigate it?
|
st118390
|
@bodokaiser it turns out the first implementation actually has a bug (@colesbury found this).
self.feat1 = nn.Sequential(*feat[0:4])
self.feat2 = nn.Sequential(*feat[5:9])
You are dropping layer 4 here.
|
st118391
|
Sorry, I cannot reduce this to a minimal example. Maybe I was right at the GPU’s memory limit and there was just one small extra allocation with the second example that hit the memory limit.
|
st118392
|
I’m having trouble with the HingeEmbeddingLoss in pytorch 0.1.11.
When I try to specify the margin it throws an error.
import torch
mrl = torch.nn.MarginRankingLoss(margin=0.5) # this works
hel = torch.nn.HingeEmbeddingLoss() # this works
# this throws an error - __init__() got an unexpected keyword argument margin
hel_m = torch.nn.HingeEmbeddingLoss(margin=0.5)
Looking at the source in nn/_functions/loss.py, it seems to be defined correctly.
Thanks
|
st118393
|
I think I got to the bottom of this. For some reason the HingeEmbeddingLoss class is not defined in: https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/loss.py#L202
As opposed to MarginRankingLoss where the class is set up correctly: https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/loss.py#L365
Not sure why though.
|
st118394
|
that’s weird. Both are even imported side by side: https://github.com/pytorch/pytorch/blame/master/torch/nn/modules/__init__.py
|
st118395
|
Thanks for your response. I had a quick look at the master branch for loss.py but it has the same problem?
Instead of pass at line 220 should it not contain something like:
class HingeEmbeddingLoss(_Loss):
    # Creates a criterion ...
    def __init__(self, margin=1.0, size_average=True):
        super(HingeEmbeddingLoss, self).__init__()
        self.margin = margin
        self.size_average = size_average

    def forward(self, input1, target):
        return self._backend.HingeEmbeddingLoss(self.margin,
                                                self.size_average)(input1, target)
|
st118396
|
actually you’re right. This does give an error on master too, and needs to be fixed as you specified.
Can you send a pull request to patch this? Thanks for finding the issue.
|
st118397
|
I made a pull request here:
github.com/pytorch/pytorch: “Enable specifying of margin in HingeEmbeddingLoss” (pytorch:master ← macaodha:patch-1, opened Apr 27, 2017, +9 -1)
Thanks
|
st118398
|
Has anyone found a way around these issues:
github.com/pytorch/pytorch: “Feature request: Zoneout LSTM” (opened Jan 23, 2017 by Smerity): Zoneout is one method to perform recurrent dropout on the hidden state of an RNN and has been shown to work…
github.com/pytorch/pytorch: “Masked copy returns a broken gradient for one of the arguments” (opened Mar 16, 2017 by Smerity): When using masked_copy the variable that is copied from is not given a correct (full sized) gradient…
https://github.com/pytorch/pytorch/pull/1234
|
st118399
|
Hi,
I want to verify that my implementation is correct. I have not used attention yet, so I unroll the decoder in one call:
everywhere I use batch_first=True
consider simple case: batch_size=2, hidden_size=4, len(vocab) = 10
I pad every sequence (sentence) to max_length, so in our simple case the decoder input is:
1 6 9 9 4
1 9 9 9 0
[torch.LongTensor of size 2x5]
Outputs from nn.LSTM in the decoder have such format:
Variable containing:
(0 ,.,.) =
0.1351 -0.0738 -0.3071 0.1253
0.2045 0.0473 -0.4745 0.0952
0.1976 0.1333 -0.1086 0.0051
0.1840 0.1820 -0.1250 0.0794
0.2153 0.1870 -0.0804 0.0017
(1 ,.,.) =
0.1388 -0.0739 -0.3141 0.4524
0.2480 -0.0281 -0.3296 0.3183
0.2284 0.0410 -0.1947 0.1689
0.2259 0.0712 -0.1931 0.1656
0.2268 0.0772 -0.0793 0.3217
[torch.cuda.FloatTensor of size 2x5x4 (GPU 0)]
Then I create output_mask because I don’t want to compute pad elements:
Variable containing:
(0 ,.,.) =
1 1 1 1
1 1 1 1
1 1 1 1
1 1 1 1
1 1 1 1
(1 ,.,.) =
1 1 1 1
1 1 1 1
1 1 1 1
1 1 1 1
0 0 0 0
[torch.cuda.ByteTensor of size 2x5x4 (GPU 0)]
After that I use torch.masked_select to compute masked_outputs and get:
Variable containing:
0.1351
-0.0738
-0.3071
0.1253
0.2045
0.0473
-0.4745
0.0952
0.1976
0.1333
-0.1086
0.0051
0.1840
0.1820
-0.1250
0.0794
0.2153
0.1870
-0.0804
0.0017
0.1388
-0.0739
-0.3141
0.4524
0.2480
-0.0281
-0.3296
0.3183
0.2284
0.0410
-0.1947
0.1689
0.2259
0.0712
-0.1931
0.1656
[torch.cuda.FloatTensor of size 36 (GPU 0)]
Then I use masked_outputs.view((-1, hidden_size)) and get:
Variable containing:
0.1351 -0.0738 -0.3071 0.1253
0.2045 0.0473 -0.4745 0.0952
0.1976 0.1333 -0.1086 0.0051
0.1840 0.1820 -0.1250 0.0794
0.2153 0.1870 -0.0804 0.0017
0.1388 -0.0739 -0.3141 0.4524
0.2480 -0.0281 -0.3296 0.3183
0.2284 0.0410 -0.1947 0.1689
0.2259 0.0712 -0.1931 0.1656
[torch.cuda.FloatTensor of size 9x4 (GPU 0)]
Finally I apply nn.Linear(hidden_size, len(vocab)) and get the outputs from the decoder:
Variable containing:
0.4705 -0.0552 0.4348 -0.0798 0.0775 0.3475 0.2021 0.6573 -0.0601 0.2252
0.4778 0.0825 0.5056 -0.0496 0.1685 0.3090 0.2437 0.7343 -0.0047 0.1911
0.3115 -0.0023 0.3454 -0.2025 0.1078 0.3741 0.2473 0.5391 0.0743 0.2276
0.2770 -0.0017 0.3527 -0.2080 0.1067 0.3662 0.2493 0.5297 0.0664 0.2271
0.2695 0.0146 0.3323 -0.2272 0.1143 0.3801 0.2603 0.5193 0.1007 0.2303
0.3459 -0.1172 0.4360 -0.0959 -0.0210 0.4020 0.2096 0.6514 -0.1421 0.2980
0.3347 -0.0184 0.4390 -0.1099 0.0262 0.4214 0.2549 0.6870 -0.0614 0.3002
0.3110 -0.0216 0.3812 -0.1647 0.0529 0.4106 0.2519 0.6053 0.0044 0.2746
0.3005 -0.0126 0.3806 -0.1708 0.0629 0.4036 0.2545 0.5973 0.0147 0.2681
[torch.cuda.FloatTensor of size 9x10 (GPU 0)]
I need decoder targets to compute loss so decoder_targets:
6 9 9 4 2
9 9 9 2 0
[torch.LongTensor of size 2x5]
Then I also use torch.masked_select for decoder_targets and get masked_targets:
6
9
9
4
2
9
9
9
2
[torch.cuda.LongTensor of size 9 (GPU 0)]
In the end I compute loss:
loss = criterion(decoder_outputs, masked_targets)
Is this a correct implementation for the decoder? Can I use this approach?
Thanks!
|
st118400
|
Hi, when I tried to save my model using torch.save(decoder.state_dict(), 'path') I got this error:
Traceback (most recent call last):
File "train.py", line 89, in <module>
torch.save(decoder.state_dict(), '/home/vladislavprh/decoder.pth')
File "/home/vladislavprh/anaconda3/lib/python3.6/site-packages/torch/serialization.py", line 120, in save
return _save(obj, f, pickle_module, pickle_protocol)
File "/home/vladislavprh/anaconda3/lib/python3.6/site-packages/torch/serialization.py", line 192, in _save
serialized_storages[key]._write_file(f)
RuntimeError: std::bad_alloc
I train it on 8 GPUs, and every layer in the model refers to a different GPU; by layer I mean nn.LSTM, nn.Linear, etc.
How can I solve it?
Thanks!
|
st118401
|
I wonder if this is related to saving a large checkpoint. We fixed some bugs w.r.t. very large checkpoints and OSX in v0.1.11. Are you on at least pytorch v0.1.11? Also, do you have enough disk space to write checkpoints?
|
st118402
|
Hi, smth.
Thank you for your reply. The problem was RAM, so I increased the memory and now it works.
|
st118403
|
Hi. I just started PyTorch.
I would like to compute NLLLoss with importance sampling.
That is,
$-\sum_{i=1}^{N} w_i \log p(t_i \mid x_i)$
where w_i is a weight for i-th sample.
Note that this weight is different from the weight for a class which is typically used for unbalanced dataset.
It seems that PyTorch NLLLoss is written in C (and that’s why PyTorch is fast?)
I would really appreciate it if anyone could tell me a simple way to implement NLLLoss with importance sampling.
Thanks.
|
st118404
|
you can always implement a modified NLL loss in Python using just torch.* ops; autograd will take care of the backward for you.
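A minimal sketch of such a weighted NLL, assuming log_probs is an N x C tensor of log-probabilities (e.g. from nn.LogSoftmax), targets is a LongTensor of N class indices, and weights holds the N per-sample importance weights (all names here are mine):
def weighted_nll(log_probs, targets, weights):
    picked = log_probs.gather(1, targets.view(-1, 1)).squeeze(1)  # log p(t_i | x_i)
    return -(weights * picked).sum()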
|
st118405
|
Hi, what does Padding=(0,1) in Conv1d do? I would have expected this to perform a padding of zero on the left and a padding of 1 on the right but it does not seem to do this.
|
st118406
|
Conv*d can only do symmetric padding, so I suspect that the 1 here is being ignored.
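If you do need asymmetric padding, one workaround is to pad manually before the conv, e.g. by concatenating zeros on one side (a sketch, not tested against your setup):
import torch
from torch.autograd import Variable

x = Variable(torch.randn(1, 4, 10))    # N x C x L
pad = Variable(torch.zeros(1, 4, 1))   # one column of zeros
x = torch.cat([x, pad], 2)             # pad the right side only, then conv with padding=0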
|
st118407
|
According to the docs, if scale_grad_by_freq is set to true, the gradient will be scaled according to the frequency of the words in the dictionary.
But where does the Embedding layer find out the frequency of each word? I mean, I didn’t see any param for passing this frequency info to the Embedding layer other than the scale_grad_by_freq param itself.
|
st118408
|
It’s scaled by the frequency of the words in the mini-batch (not the dictionary). We should fix the docs.
|
st118409
|
Hi,
I’ve created a MyNet class (an nn.Module) for a 5-category classification problem. This model uses the InceptionV3 model as a feature extractor (I froze the first 2 conv layers, fine-tune the rest, and add a last layer with 5 outputs). At the end of training, I got 80% accuracy on the train set before snapshotting the model. After training, I saved the model with torch.save(model.state_dict(), open(fname, 'w')). When testing, I create the model with model=MyNet() and load the weights with model.load_state_dict(torch.load(model_fname)); however, the prediction accuracy is random (~25%), even on the train set. Am I doing something wrong?
class MyNet(nn.Module):
    def __init__(self, pretrained=False, model_fname=None):
        super(MyNet, self).__init__()
        if model_fname is None:
            self.inception_v3 = models.inception_v3(pretrained)
        else:
            self.inception_v3 = models.inception_v3()
            self.inception_v3.load_state_dict(torch.load(model_fname))
        # we ignore the first 2 layers of inception net
        ignored_layers = ['Conv2d_1a',
                          'Conv2d_2a',
                          'Conv2d_2b']
        for name, param in self.inception_v3.named_parameters():
            if any([name.startswith(ignored) for ignored in ignored_layers]):
                param.requires_grad = False  # freeze the first 2 layers
        # === adding classification layers ====
        # 5 classes classification problem
        self.fc = nn.Linear(self.inception_v3.fc.in_features, 5)

    def forward(self, x):
        # 299 x 299 x 3
        x = self.inception_v3.Conv2d_1a_3x3(x)
        # 149 x 149 x 32
        x = self.inception_v3.Conv2d_2a_3x3(x)
        # 147 x 147 x 32
        x = self.inception_v3.Conv2d_2b_3x3(x)
        # 147 x 147 x 64
        x = F.max_pool2d(x, kernel_size=3, stride=2)
        # 73 x 73 x 64
        x = self.inception_v3.Conv2d_3b_1x1(x)
        # 73 x 73 x 80
        x = self.inception_v3.Conv2d_4a_3x3(x)
        # 71 x 71 x 192
        x = F.max_pool2d(x, kernel_size=3, stride=2)
        # 35 x 35 x 192
        x = self.inception_v3.Mixed_5b(x)
        # 35 x 35 x 256
        x = self.inception_v3.Mixed_5c(x)
        # 35 x 35 x 288
        x = self.inception_v3.Mixed_5d(x)
        # 35 x 35 x 288
        x = self.inception_v3.Mixed_6a(x)
        # 17 x 17 x 768
        x = self.inception_v3.Mixed_6b(x)
        # 17 x 17 x 768
        x = self.inception_v3.Mixed_6c(x)
        # 17 x 17 x 768
        x = self.inception_v3.Mixed_6d(x)
        # 17 x 17 x 768
        x = self.inception_v3.Mixed_6e(x)
        # 17 x 17 x 768
        if self.training and self.inception_v3.aux_logits:
            # TODO, change this layer with 5 outputs
            aux = self.inception_v3.AuxLogits(x)
        # 17 x 17 x 768
        x = self.inception_v3.Mixed_7a(x)
        # 8 x 8 x 1280
        x = self.inception_v3.Mixed_7b(x)
        # 8 x 8 x 2048
        x = self.inception_v3.Mixed_7c(x)
        # 8 x 8 x 2048
        x = F.avg_pool2d(x, kernel_size=8)
        # 1 x 1 x 2048
        x = F.dropout(x, training=self.training)
        # 1 x 1 x 2048
        x = x.view(x.size(0), -1)
        # 2048
        y = self.fc(x)
        return y
|
st118410
|
Check whether the model is set to .train() mode or .eval() mode before testing. It makes a big difference.
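Concretely (assuming model is your MyNet instance):
model.eval()   # dropout off, batchnorm uses running stats
# ... run predictions ...
model.train()  # switch back before further training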
|
st118411
|
Hi,
Is there something similar to the layerdelay capability of RNNs in pytorch, like in Matlab?
https://fr.mathworks.com/help/nnet/ref/layrecnet.html
Another question: is there a simple trick to implement a TimeDelayNet in pytorch, like in Matlab?
https://fr.mathworks.com/help/nnet/ref/timedelaynet.html
Thank you
|
st118412
|
I am new to PyTorch and I am building a slightly unconventional net, so I am not completely sure whether autograd is keeping track of the graphs in the way I expect it to, or whether something is going wrong in my code. I was wondering whether you had any recommendations on how one might go about extracting the graph that is being built and differentiated at each backward() call.
|
st118413
|
you can follow the .creator field of the output Variable to help build out the full graph. This format is changing next week, though (and I will have some notes for it)
|
st118414
|
Hi there,
I have both python2 and python3 installed, and I also have pytorch and torchvision installed in both the conda python2 and python3 environments. I previously trained a model in python3 using pytorch installed from anaconda.
I am now working in python2 and I need to import a class from a module that was written in python3. When I do
import torch, I get ImportError: No module named torch. However, in my python3 environment, I can easily import torch. What could I be doing wrong?
|
st118415
|
Well, after many hours of trying I resolved to adding a conda env python alias to my bashrc file to distinguish the system python 2.7 from the conda one.
Something along the lines of alias py27='~/anaconda2/envs/py27/bin/python2'.
It is weird but before that, every time I called python, the system was executing the system wide python binary in /usr/bin.
Hope this helps someone.
|
st118416
|
Any plan to provide pretrained models of resnet v2 and resnext? Tensorflow now has some of these pretrained models.
|
st118417
|
Didn’t you mean inception-resnet-v2? It is provided in that repository and I have tested it.
|
st118418
|
No. It is the eccv16 paper’s work. You can find the pretrained models in the official tf model zoo. https://github.com/tensorflow/models/tree/master/slim
|
st118419
|
I asked a similar question about numpy on stackoverflow, but since I’ve discovered the power of the GPU, I can’t go back.
So I have a 3D tensor representing a list of matrices, e.g.:
In [112]: matrices
Out[112]:
(0 ,.,.) =
1 0 0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 1 0
0 0 0 0 1
(1 ,.,.) =
5 0 0 0 0
0 5 0 0 0
0 0 5 0 0
0 0 0 5 0
0 0 0 0 5
[torch.cuda.FloatTensor of size 2x5x5 (GPU 0)]
and a 2D tensor representing a list of vectors, e.g.:
In [113]: vectors
Out[113]:
1 1
1 1
1 1
1 1
1 1
[torch.cuda.FloatTensor of size 5x2 (GPU 0)]
… and I need element-wise, gpu-powered dot product of these two tensors.
I would expect to be able to use torch.bmm here but I cannot figure out how, especially I don’t understand why this happens:
In [114]: torch.bmm(matrices, vectors.permute(1,0))
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-114-e348783370f7> in <module>()
----> 1 torch.bmm(matrices, vectors.permute(1,0))
RuntimeError: out of range at /py/conda-bld/pytorch_1490979338030/work/torch/lib/THC/generic/THCTensor.c:23
… when matrices[i] @ vectors.permute(1,0)[i] works for any i < len(matrices).
Thanks for your help…
|
st118420
|
Oh, I’ve just found something that works: torch.bmm(matrices,vectors.permute(1,0).unsqueeze(2)).squeeze().permute(1,0).
So I have another question: is there any way to avoid these permutes and [un]squeezes? Should I organize my arrays differently?
|
st118421
|
There’s no way to avoid the permute calls, although you can use .t() (transpose) instead of permute(1, 0). Typically we have the left-most dimension as the batch dimension.
Greg is working on NumPy-style broadcasting which will make the unsqueeze calls unnecessary.
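So one way to avoid the permutes here is to store the vectors batch-first from the start; a sketch:
import torch

b, n = 2, 5
matrices = torch.randn(b, n, n)  # batch dimension left-most
vectors = torch.randn(b, n)      # b x n instead of n x b

out = torch.bmm(matrices, vectors.unsqueeze(2)).squeeze(2)  # b x n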
|
st118422
|
In the seq2seq tutorial here, the source sentence is fed into the encoder one token at a time using a for loop:
for ei in range(input_length):
    encoder_output, encoder_hidden = encoder(
        input_variable[ei], encoder_hidden)
    encoder_outputs[ei] = encoder_output[0][0]
This is different from other tutorials like here, where the whole sentence is fed into the encoder only once to get the encoding.
Are these two examples equivalent (getting the identical encodings)? Thanks!
|
st118423
|
There’s no point in doing it as in that seq2seq tutorial. It’s equivalent but slower.
|
st118424
|
I understand that torch distributed is still experimental, but I am trying to run it on a Hadoop cluster following the examples on this page: https://github.com/pytorch/pytorch/issues/241. Nothing seems to work: when I run torch.distributed.init_process_group(backend='tcp'), I get the error "AttributeError: module 'torch._C' has no attribute '_dist_init_process_group'". Any suggestions? Thanks for the help.
|
st118425
|
Distributed C code isn’t built by default. You’d need to set the WITH_DISTRIBUTED=1 env var when building (e.g. WITH_DISTRIBUTED=1 python setup.py install, assuming the usual source build).
|
st118426
|
Also, the TCP backend is likely to be quite slow, especially as the one in the main repo is a bit outdated. We’ve recently added support for Gloo, which should be fast, and MPI is a reasonable default.
|
st118427
|
Hi,
I saw in one of the examples (https://github.com/pytorch/examples/blob/master/mnist/main.py) that you were setting the random seed via
torch.manual_seed(args.seed)
if args.cuda:
torch.cuda.manual_seed(args.seed)
I couldn’t find the torch.cuda.manual_seed in the docs, and I was wondering if this is necessary to manually set the random seed or is a simple
torch.manual_seed(args.seed)
sufficient?
|
st118428
|
It is sufficient for CPU determinism, but it won’t affect the GPU PRNG state. We’ve been thinking about merging these two, and we’ll probably do so in the future.
|
st118429
|
Agreed: less chance of mysterious inconsistency for programmers who aren’t aware of this.
|
st118430
|
Hello,
I am new to pytorch. I have been hacking the GAN example to adapt it for image inpainting on MS COCO (downsampled to 64x64). I first just wanted to get the code working on the version of MSCOCO I have. Running the code as-is produces sharp, albeit structureless, images.
First, I wanted to condition the generator on the unmasked part of the image (the central 32x32 pixels are masked). To do this I generated an embedding of the masked image using a pretrained vggnet.
Q. I load the model from torchvision with ‘features’, as I want to use conv7,
pseudo code:
vggnet = models.vgg19(pretrained=True).features
conditioning = vggnet(masked_image)
conditioning = conditioning.view(conditioning.size(0),-1)
Then I concatenate the noise vector and the flattened conditioning. Is this the correct/best way to do this?
Q. I also want to augment the generator’s loss based on https://arxiv.org/pdf/1607.07539.pdf. This approach adds an L1 loss to the generator objective between the masked generator output and the masked image (referred to as context loss in the paper)
To do this I produced a 3x64x64 mask in numpy, moved it to a torch Variable and simply multiplied it by the generators output ( * for elemwise multiply). I then use the L1Loss criterion to compute the loss.
Again, is this the correct way to achieve this?
All the code seems to be running, however since the images still all look like modern art, it is hard for me to say if it is correct.
ps. I noticed that I can’t use a 64x64 mask as it will not automatically broadcast like numpy. Is this the case, or did I make some mistake?
Thanks,
Gautam
|
st118431
|
I am building a cuda extension following the PyTorch C FFI examples. When we pass the data to C function in THCudaTensor*, is there any method that we can check the size of each dimension of the data or we have to pass the batch_size, nchannels, width and height simultaneously? Thank you very much for the help in advance.
|
st118432
|
you can use: THCTensor_(size)(input, dimension)
The full API can be found in the headers:
https://github.com/pytorch/pytorch/blob/master/torch/lib/THC/generic/THCTensor.h#L26
|
st118433
|
Is it possible to restrict the range of possible values that a Variable can take? I have a variable that I want to restrict to the range [0, 1] but the optimizer will send it out of this range. I am using torch.clamp() to ultimately clamp the result to [0,1] but I want my optimizer to not update the value to be < 0 or > 1. For example, if my variable currently sits at a value of 0.1, and the gradients come in and my optimizer wants to update it by 0.5, which would make the new value -0.4, I want the optimizer to clamp its update to 0.1, so it will only get updated up to my bounds.
I know I can register a hook for the variable, which I tried, but that way I can only control the size of the gradient, not the actual update size. I’m sure if I just wrote a custom optimizer I could make it work, but there’s no way I can beat the Adam optimizer.
|
st118434
|
Solved by ncullen93 in post #3 (the full reply follows below).
|
st118435
|
For your example (constraining variables to be between 0 and 1), there’s no difference between what you’re suggesting (clipping the gradient update) and letting that gradient update take place in full and then clipping the weights afterwards. Clipping the weights, however, is much easier than modifying the optimizer.
Here’s a simple example of a UnitNorm clipper:
class UnitNormClipper(object):
    def __init__(self, frequency=5):
        self.frequency = frequency

    def __call__(self, module):
        # filter the variables to get the ones you want
        if hasattr(module, 'weight'):
            w = module.weight.data
            w.div_(torch.norm(w, 2, 1).expand_as(w))
Instantiate this with clipper = UnitNormClipper(); then, after the optimizer.step() call, do the following:
model.apply(clipper)
Full training loop example:
for epoch in range(nb_epoch):
    for batch_idx in range(nb_batches):
        xbatch = x[batch_idx*batch_size:(batch_idx+1)*batch_size]
        ybatch = y[batch_idx*batch_size:(batch_idx+1)*batch_size]
        optimizer.zero_grad()
        xp, yp = model(xbatch, ybatch)
        loss = model.loss(xp, yp)
        loss.backward()
        optimizer.step()
    if epoch % clipper.frequency == 0:
        model.apply(clipper)
A 0-1 clipper might look like this (not tested):
class ZeroOneClipper(object):
    def __init__(self, frequency=5):
        self.frequency = frequency

    def __call__(self, module):
        # filter the variables to get the ones you want
        if hasattr(module, 'weight'):
            w = module.weight.data
            w.sub_(torch.min(w)).div_(torch.max(w) - torch.min(w))
|
st118436
|
Thanks for your reply. If I try to clip the variable after each optimizer step I get the following error:
RuntimeError: Trying to backward through the graph second time, but the buffers have already been freed. Please specify retain_variables=True when calling backward for the first time.
It seems like if you manually mess with the variable state then the variable gets marked dirty or something.
EDIT: Oh, I guess if you only manipulate the .data attribute you don’t get that error. It’s working now, thanks!!
|
st118437
|
I am trying to optimize a function with gradient descent, and I have a constraint that the values should be between zero and one. Is clamping the updated values the only way to deal with this kind of problem? Is it a common approach for dealing with this constraint in the machine learning community? Another approach I tried was to use the logarithm of the weights as the variables for optimization, but that only solves the positivity problem.
It might not be the best place to ask this question but I found this post very related.
|
st118438
|
Yeah, I mean you can either clip the weights after some number of gradient updates, or you can add the deviation of the weights from your desired range as an extra term in the model’s loss function: sort of a lagrangian approach, with a penalty on that deviation resulting in a looser or stricter implicit constraint. It depends on the problem, but the lagrangian approach is probably better in most cases (it’s basically what you do with regularization/sparsity instead of directly imposing sparsity on weights). A minimal sketch of that penalty term follows, assuming w is the Variable being optimized and lam is a coefficient you tune (both names are mine):
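penalty = lam * ((w - w.clamp(0, 1)) ** 2).sum()  # zero whenever w is inside [0, 1]
loss = task_loss + penalty
loss.backward()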
|
st118439
|
Why does the torch graph consume such a large memory footprint? I get a spike of 7 GB (gpu memory) just by instantiating the following model:
class Model(torch.nn.Module):
    # Define model
    def __init__(self, args, mean, std):
        super(Model, self).__init__()
        self.numCells = args.numCells
        self.mean = Variable(mean, requires_grad=False)
        self.std = Variable(std, requires_grad=False)
        self.conv1 = nn.Conv2d(3, 64, 5, padding=2)
        self.conv2 = nn.Conv2d(64, 128, 5, padding=2)
        self.conv3 = nn.Conv2d(128, 128, 3, padding=1)
        self.conv4 = nn.Conv2d(128, 128, 3, padding=1)
        self.conv5 = nn.Conv2d(128, 256, 3, padding=1)
        self.conv6 = nn.Conv2d(256, 256, 3, padding=1)
        self.conv7 = nn.Conv2d(256, 512, 3, padding=1)
        self.conv8 = nn.Conv2d(512, 512, 3, padding=1)
        self.conv9 = nn.Conv2d(512, 512, 5, padding=2)
        self.fc = nn.Linear(32 * 32 * 512, self.numCells * self.numCells * 7)
|
st118440
|
Depends on the value of numCells. For numCells = 21, I’d expect about 6.5 GB of parameters just in self.fc: (32 x 32 x 512) inputs times (21 x 21 x 7) outputs is about 1.62e9 weights, at 4 bytes each that’s roughly 6.5 GB.
|
st118441
|
Ah, actually I see it now.
By the way, does defining a loss on this net and doing loss.backward() create a copy of the parameters? I run out of memory when calling loss.backward(). My gpu has 12 GB of memory, and before the call to loss.backward I have already exhausted 10 GB.
|
st118442
|
The gradients are the same size as the model parameters. So if you have 10 GB of parameters you need another 10 GB for gradients (and possibly extra for intermediate calculations).
|
st118443
|
I am looking at the example here: https://pytorch.org/docs/_modules/torch/nn/modules/loss.html . I noticed that the input data is passed as:
input = autograd.Variable(torch.randn(3, 5), requires_grad=True)
Shouldn’t requires_grad be set to False, since this is the input data and not the weights?
|
st118444
|
Here is the example:
>>> m = nn.LogSoftmax()
>>> loss = nn.NLLLoss()
>>> # input is of size nBatch x nClasses = 3 x 5
>>> input = autograd.Variable(torch.randn(3, 5), requires_grad=True)
>>> # each element in target has to have 0 <= value < nclasses
>>> target = autograd.Variable(torch.LongTensor([1, 0, 4]))
>>> output = loss(m(input), target)
>>> output.backward()
With most NN code, you don’t want to set requires_grad=True unless you explicitly want the gradient w.r.t. to your input. In this example, however, requires_grad=True is necessary because otherwise there would be no gradients to compute, since there are no model parameters.
|
st118445
|
I see. So in this example that makes sense.
By the way does requires_grad default to True? I do not see it in the documentation.
|
st118446
|
Hello guys,
I wrote my cost function with the Function class; that is, I have implemented the forward and backward of my own cost function. I want to know whether it is possible to debug the backward function like the forward (there are some logical problems in the backward function I would like to find). pdb has allowed me to debug my forward step by step, but I could not debug the backward function the same way. Could you please tell me if this is possible?
Thanks
|
st118447
|
Same way you debug forwards:
import pdb
pdb.set_trace()
Here’s a more complete snippet:
import torch

class MyLoss(torch.autograd.Function):
    def forward(self, x):
        return x.view(-1).sum(0)

    def backward(self, x):
        import pdb
        pdb.set_trace()
        return x

v = torch.autograd.Variable(torch.randn(5, 5), requires_grad=True)
loss = MyLoss()(v)
loss.backward()
|
st118448
|
Thank you! I had been debugging using python -m pdb sourceFile.py and then setting breakpoints. I did not know about this approach.
|
st118449
|
Hi,
After making a conda environment, I am trying to install pyTorch into it from the binary, as per the instructions here:
conda install pytorch torchvision -c soumith
This installs successfully; however, if I simply run python and do import torch, I get:
import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named torch
Am I missing something here?
Thanks
|
st118450
|
Did you “activate” conda? https://conda.io/docs/using/envs.html#change-environments-activate-deactivate
Sounds like the Python installation you’re running doesn’t have PyTorch installed.
|
st118451
|
Well, I re-installed it and it SEEMS to be fine now. That was weird. I will let you know if the problem persists. Thanks!
|
st118452
|
I have seen torch.utils.trainer, a wrapper for model training, but there are no docs for it.
|
st118453
|
This API will eventually be removed (see the GH comment).
You may be interested in using this or this instead.
|
st118454
|
class Net(nn.Module):
    ...

net = Net()
input = torch.Tensor()
output = net(input.cuda())
loss = output - label
loss.backward()
optimizer.step()
If I give a Tensor.cuda() input to the net, will the net (and the parameters the optimizer updates) be on the GPU?
|
st118455
|
Solved by xwgeng in post #2
No, you have to call the model’s cuda() method manually.
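A sketch of the usual order, reusing the names from the snippet above (move the model first, then build the optimizer over the GPU parameters):
net = Net().cuda()
optimizer = optim.SGD(net.parameters(), lr=0.01)
output = net(input.cuda())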
|
st118456
|
I have a tensor of size 16 x 28, where 16 is the batch size and 28 is the sentence length. Every element of the sentence vectors is some index (0 to n). I want to create a 16 x 28 x n tensor where the vectors in the 3rd dimension are one-hot encodings of the index, meaning I want to put a 1 at the specified index and zeros everywhere else. How can I do that using pytorch functionality?
Right now I am doing this with a loop, but I want to avoid looping!
|
st118457
|
If the one-hot vectors are used for retrieving word embeddings, just use an Embedding layer instead.
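A minimal sketch (the embedding size is made up), mirroring the 16 x 28 index tensor from the question:
import torch
import torch.nn as nn
from torch.autograd import Variable

n, dim = 10, 300
emb = nn.Embedding(n, dim)
indices = Variable(torch.LongTensor(16, 28) % n)  # indices in [0, n)
vectors = emb(indices)                            # 16 x 28 x dim, no one-hot needed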
|
st118458
|
You can use scatter for this purpose.
For example:
# your tensor of 16 x 28 dimensions,
# where each element has some index (0 to n)
inp = torch.LongTensor(16, 28) % n
inp_ = torch.unsqueeze(inp, 2)
one_hot = torch.FloatTensor(16, 28, n).zero_()
one_hot.scatter_(2, inp_, 1)
print(inp)
print(one_hot)
|
st118459
|
I created a custom Dataset class for loading 3d float arrays as input and .png segmentation map for targets:
class ADE20K_SIFT(data.Dataset):
    """ADE20K
    input is a 3d matrix and target is a .png image
    Arguments:
        root (string): filepath to ADE20K root folder.
        image_set (string): imageset to use (eg: 'training', 'validation', 'testing').
        transform (callable, optional): transformation to perform on the
            input image
        target_transform (callable, optional): transformation to perform on the
            target image
        dataset_name (string, optional): which dataset to load
            (default: 'ADEChallengeData2016')
    """

    def __init__(self, root, image_set, transform=None, target_transform=None,
                 dataset_name='ADEChallengeData2016'):
        self.root = root
        self.image_set = image_set
        self.transform = transform
        self.target_transform = target_transform
        if image_set == 'training_sift':
            image_name = 'train'
            anno_folder = 'training_re'
        elif image_set == 'validation_sift':
            image_name = 'val'
            anno_folder = 'validation_re'
        else:
            raise ValueError('image_set should be either of "training_sift", "validation_sift".')
        self._annopath = os.path.join(
            self.root, dataset_name, 'annotations', anno_folder, 'ADE_' + image_name + '_re_{:08d}.png')
        self._imgpath = os.path.join(
            self.root, dataset_name, 'images', image_set, 'ADE_' + image_name + '_{:08d}.sift')
        self._imgsetpath = os.path.join(
            self.root, dataset_name, 'objectInfo150.txt')
        self.dataset_dir = os.path.join(self.root, dataset_name, 'annotations', anno_folder)
        with open(self._imgsetpath) as f:
            self.class_desc = [line.split('\t')[-1].strip('\n') for line in f.readlines()]

    def __getitem__(self, index):
        try:
            img_id = index
            # print('index', index)
            target = np.array(Image.open(self._annopath.format(img_id)))
            # print (target.shape)
            features = np.zeros((49, 49, 130))
            if (os.stat(self._imgpath.format(img_id)).st_size > 0):  # file is not empty
                data = np.array(pd.read_csv(self._imgpath.format(img_id), sep=' ', header=None))
                for i in range(data.shape[0]):
                    features[int(data[i][0] - 1)][int(data[i][1] - 1)] = data[i][2:]
            if self.transform is not None:
                features = self.transform(features)
            if self.target_transform is not None:
                target = self.target_transform(target)
        except:  # if any error, return zero arrays of correct dimensions
            index = -1
            features = np.zeros((49, 49, 130))
            target = np.zeros((196, 196))
            if self.transform is not None:
                features = self.transform(features)
            if self.target_transform is not None:
                target = self.target_transform(target)
        return features, target

    def __len__(self):
        return len(os.listdir(self.dataset_dir))
And I loaded the dataset using the built-in loader.
# Data augmentation and normalization for training
# Just normalization for validation
input_transforms = transforms.Compose([
    transforms.Lambda(lambda img: torch.from_numpy(np.transpose(img, (2, 0, 1))))
])
target_transforms = transforms.Compose([
    transforms.Lambda(lambda img: torch.from_numpy(np.array(img)))
])
root = '/home/shivam/Downloads/ADE20K/'
image_set = 'validation_sift'
dsets = ade20k_dataset.ADE20K_SIFT(root, image_set, transform=input_transforms,
                                   target_transform=target_transforms,
                                   dataset_name='ADEChallengeData2016')
dset_loaders = torch.utils.data.DataLoader(dsets, batch_size=2, shuffle=True, num_workers=1)
Now when I try to retrieve (input, target) pairs using:
for i, data in enumerate(dset_loaders, 0):
pytorch gives the following error after fetching some inputs. (Note: it works smoothly for batch_size = 1.)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-32-30856e4f68c8> in <module>()
27 # print(ind, inputs.size(), outputs.size())
28
---> 29 for i, data in enumerate(dset_loaders, 0):
30 print(i, data[0].size())
31
/home/shivam/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py in __next__(self)
172 self.reorder_dict[idx] = batch
173 continue
--> 174 return self._process_next_batch(batch)
175
176 next = __next__ # Python 2 compatibility
/home/shivam/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py in _process_next_batch(self, batch)
196 self._put_indices()
197 if isinstance(batch, ExceptionWrapper):
--> 198 raise batch.exc_type(batch.exc_msg)
199 return batch
200
TypeError: Traceback (most recent call last):
File "/home/shivam/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 34, in _worker_loop
samples = collate_fn([dataset[i] for i in batch_indices])
File "/home/shivam/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 79, in default_collate
return [default_collate(samples) for samples in transposed]
File "/home/shivam/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 79, in <listcomp>
return [default_collate(samples) for samples in transposed]
File "/home/shivam/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 66, in default_collate
return torch.stack(batch, 0)
File "/home/shivam/anaconda3/lib/python3.6/site-packages/torch/functional.py", line 56, in stack
return torch.cat(list(t.unsqueeze(dim) for t in sequence), dim)
TypeError: cat received an invalid combination of arguments - got (list, int), but expected one of:
* (sequence[torch.DoubleTensor] tensors)
* (sequence[torch.DoubleTensor] tensors, int dim)
I’m unable to understand this. Also what is the use of collate_fn, and should I write a custom collate_fn to solve this problem?
|
st118460
|
Ok, I think I solved it. In the except: clause of the __getitem__() function, I replaced
target = np.zeros((196, 196))
with
target = np.array(np.zeros((196, 196)), dtype='uint')
This solved the error. Though I’m not entirely sure how: maybe because earlier torch.cat() was trying to concatenate a DoubleTensor (from except:) with a ByteTensor.
|
st118461
|
Can I count on the parameters() method of my neural network to always return parameters (and, therefore, their weights/gradients) in the same order throughout the life of the network?
Would the same go for how the children are returned in the named_children() method?
I don’t see this explicitly addressed in documentation or in forums.
Thank you in advance.
|
st118462
|
nn.Module stores its submodules using an OrderedDict, so both the order of the parameters and the order of the modules are fixed during runtime.
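A quick way to convince yourself (assuming model is your nn.Module): the order follows the order of attribute assignment in __init__ and does not change between calls:
for name, child in model.named_children():
    print(name)
for p in model.parameters():
    print(p.size())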
|
st118463
|
I used to use optimizers like Adadelta/SGD, but when I changed the optimizer to Adagrad I got the following error. Is there any difference between using Adagrad and using other optimizers?
Traceback (most recent call last):
File "trainer.py", line 187, in <module>
t.train()
File "trainer.py", line 102, in train
train_loss, train_acc = self.train_step(self.data.train)
File "trainer.py", line 154, in train_step
self.optimizer.step()
File "/usr/local/lib/python2.7/dist-packages/torch/optim/adagrad.py", line 80, in step
state['sum'].addcmul_(1, grad, grad)
TypeError: addcmul_ received an invalid combination of arguments - got (int, torch.cuda.FloatTensor, torch.cuda.FloatTensor), but expected one of:
* (torch.FloatTensor tensor1, torch.FloatTensor tensor2)
* (torch.SparseFloatTensor tensor1, torch.SparseFloatTensor tensor2)
* (float value, torch.FloatTensor tensor1, torch.FloatTensor tensor2)
didn't match because some of the arguments have invalid types: (int, torch.cuda.FloatTensor, torch.cuda.FloatTensor)
* (float value, torch.SparseFloatTensor tensor1, torch.SparseFloatTensor tensor2)
didn't match because some of the arguments have invalid types: (int, torch.cuda.FloatTensor, torch.cuda.FloatTensor)
I use the optim class in the following way:
self.optimizer = optim.Adagrad(filter(lambda p: p.requires_grad, self.model.parameters()),
                               lr=1e-2, weight_decay=0.1)
self.optimizer.zero_grad()
self.optimizer.step()
|
st118464
|
I can’t find what’s wrong because it works fine on my computer, so maybe try updating PyTorch to the latest version?
|
st118465
|
Hi @ShawnGuo, you might try to define your criterion after moving your model’s parameters to gpu.
|
st118466
|
Is it possible to forward a batch of images, let’s say 64 images, through a network and then backward image by image? Here is my code:
def train(epoch):
    global steps
    global s
    global optimizer
    epochLoss = 0
    for index, (images, labels) in enumerate(trainLoader):
        if s in steps:
            learning_rate = learning_rate * 0.1
            optimizer = optim.SGD(net.parameters(), lr=learning_rate, momentum=momentum, weight_decay=decay)
        if cuda:
            images = images.cuda()
        images = V(images)
        optimizer.zero_grad()
        output = net(images).cpu()  # 64*95*7*7
        loss = 0
        for ind in range(images.size()[0]):  # images.size()[0] = 64
            target = V(jsonToTensor(labels[ind]))
            cost = criterion(output[ind,:,:,:].unsqueeze(0), target)
            loss += cost.data[0]
            cost.backward(retain_variables=True)  # <---- Error occurs here!
        epochLoss += loss
        optimizer.step()
        print("(%d,%d) -> Current Batch Loss: %f" % (epoch, index, loss))
        s = s + 1
    losses.append(len(epochLoss), epochLoss)
In the above code, criterion is my customized cost function, which takes two tensors as input. I have tried the above code but received an error like this:
RuntimeError: inconsistent tensor size at /py/conda-bld/pytorch_1490979338030/work/torch/lib/TH/generic/THTensorMath.c:827
Could you please tell me what the problem is? How can I solve it?
|
st118467
|
Yes, you should be able to do that. It looks like one of your sizes doesn’t match up, but it’s hard to tell where from your snippet. Can you post a link to a full working example?
Here’s a simple snippet showing multiple calls to backward:
import torch

a = torch.autograd.Variable(torch.randn(5, 5), requires_grad=True)
b = torch.autograd.Variable(torch.randn(5, 5), requires_grad=True)
c = a @ b
for i in range(5):
    cost = c[i,:].sum()
    cost.backward(retain_variables=True)
|