st119368
|
Yes, state is a list (each layer has its own dimensionality).
About the equivalence, I'm trying some basic tests now… perhaps I will figure it out on my own.
Edit: oh my… I misread your post. Yes, indeed state_layer.index_fill_(0, indices, 0) is what I was after. And new.nonzero() is a LongTensor too. Sorry… All that's needed is to encapsulate it in a Variable.
|
st119369
|
From my (personal) point of view, I would advise against playing with the .data of the Variable by hand (your hack may break in the future), especially in this case where you have an autograd function that does what you want.
|
st119370
|
Hmm, Variable does not support the fill_() or zero_() methods… I need a mask, which I don't really want… but which performs the two steps my hack involves.
So, is there a workaround for this selective zeroing?
OK, making progress:
In [22]: c
Out[22]:
Variable containing:
0.8491
0.1877
0.1560
0.5188
0.7464
[torch.FloatTensor of size 5]
In [23]: set(dir(c)) - set(dir(c.data))
Out[23]:
{'__getattr__',
'__rpow__',
'__setstate__',
'_add',
'_addcop',
'_backward_hooks',
'_blas',
'_creator',
'_do_backward',
'_execution_engine',
'_fallthrough_methods',
'_get_type',
'_grad',
'_static_blas',
'_sub',
'_version',
'backward',
'creator',
'data',
'detach',
'detach_',
'grad',
'index_add',
'index_copy',
'index_fill',
'masked_copy',
'masked_fill',
'output_nr',
'register_hook',
'reinforce',
'requires_grad',
'resize',
'resize_as',
'scatter',
'volatile'}
|
st119371
|
>>> a = V(torch.rand(5), requires_grad=True)
Variable containing:
0.8491
0.1877
0.1560
0.5188
0.7464
[torch.FloatTensor of size 5]
>>> c = a[2] * 0
Variable containing:
0
[torch.FloatTensor of size 1]
I need to end up with the same size…
I’ve also tried
>>> c = a.index_fill(0, 3, 0)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-25-ad8f493be136> in <module>()
----> 1 c = a.index_fill(0, 3, 0)
/home/atcold/anaconda3/lib/python3.5/site-packages/torch/autograd/variable.py in index_fill(self, dim, index, value)
627
628 def index_fill(self, dim, index, value):
--> 629 return IndexFill(dim, value)(self, index)
630
631 def index_fill_(self, dim, index, value):
RuntimeError: expected a Variable argument, but got int
The only thing that worked is masking, but that's not acceptable…
In [12]: mask = V(torch.Tensor([1, 1, 1, 0, 1]))
In [13]: b = a * mask
In [14]: b
Out[14]:
Variable containing:
0.8491
0.1877
0.1560
0.0000
0.7464
[torch.FloatTensor of size 5]
In [15]: m = b.mean()
In [16]: m
Out[16]:
Variable containing:
0.3879
[torch.FloatTensor of size 1]
In [17]: m.backward()
In [18]: a.grad
Out[18]:
Variable containing:
0.2000
0.2000
0.2000
0.0000
0.2000
[torch.FloatTensor of size 5]
|
st119372
|
You need to give it the indices as a LongTensor.
See example below:
import torch
from torch.autograd import Variable
a = Variable(torch.rand(5, 3), requires_grad=True)
a = a.clone() # Otherwise we change inplace a leaf Variable
print(a)
ind = Variable(torch.LongTensor([3]))
a.index_fill_(0, ind, 0)
print(a)
a[1, :] = 0
print(a)
|
st119373
|
Love you
Just a correction, "You need to give it indices as [a Variable containing a] LongTensor".
|
st119374
|
Here inputs has requires_grad set to True by default, and labels must be False, and my question is whether tmp_conv and h_init need requires_grad=True in the forward. Many thanks.
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
#alexnet
self.conv1 = nn.Conv2d(3, 20, 5, stride=1)
self.conv1_bn = nn.BatchNorm2d(20)
#for initial
self.fc_h2l = nn.Linear(hidden_dim, out_dim)
def forward(self, inputs):
#alexnet
inputs = F.max_pool2d(F.relu(self.conv1_bn(self.conv1(inputs))), (3, 3), stride = 2)
#variable to store inputs
tmp_conv = Variable(torch.zeros(2,batch_size,inputs.size()[1],inputs.size()[2],inputs.size()[3]))
tmp_conv[0,:,:,:,:] = inputs[:,:,:,:].clone()
......
#for initial
h_init = Variable(torch.randn(batch_size,hidden_dim))
step_init = F.sigmoid(self.fc_h2l(h_init))
.....
alexnet = Net()
alexnet.cuda()
#####train
inputs= Variable(inpt.cuda())
labels = Variable(labels.cuda(), requires_grad=False)
|
st119375
|
If I understand correctly, you’re going to save some intermediate values into tmp_conv right? In that case the Variable shouldn’t require grad, because you’re going to overwrite the original content anyway. But I think it would be much cleaner and simpler to use torch.cat or torch.stack.
h_init also doesn’t need to require grad, because you won’t even be optimizing it.
|
st119376
|
Yep, thanks, that is exactly what I want. However, in my classification task, tmp_conv and step_init are incorporated to form the final feature representation as below. Should tmp_conv and h_init have requires_grad True or False? I am a newbie here; I hope I'm not bothering you.
criterion = nn.CrossEntropyLoss()
fea = func(step_init,tmp_conv)
loss = criterion(fea, labels)
Here func is some sort of function
|
st119377
|
If you don’t optimize them, then leave requires_grad set to False (the default). Set it to True only if these are Variables of which you want to get the gradient (independently of any other Variable). It doesn’t seem to be the case here, so just leave it as is.
|
st119378
|
I want to get all the parameters of a model (maybe a complex model, like Inception-ResNet).
But I cannot get the mean/var parameters of the BatchNorm layers this way:
params = list(net.parameters())
so, any suggestions here? thx.
|
st119379
|
Mean and var in BN are not parameters that are optimized, so they’re not returned there. But they are part of the model state so you could use model.state_dict().values().
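A minimal sketch of that (the module layout and hence the key names here are assumptions; buffer keys normally follow the '<module name>.running_mean' / '.running_var' convention):
import torch.nn as nn
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16))
state = model.state_dict()          # parameters plus buffers such as the running stats
print(state.keys())                 # e.g. '1.weight', '1.running_mean', '1.running_var', ...
running_mean = state['1.running_mean']
running_var = state['1.running_var']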
|
st119380
|
In the DCGAN example, when training D, lines 214-216 (shown below):
noise.data.resize_(batch_size, nz, 1, 1)
noise.data.normal_(0, 1)
fake = netG(noise)
I think it would be better if we change it to:
noise = Variable(torch.Tensor(batch_size, nz, 1, 1).normal_(0, 1),volatile = True)
fake = Variable(netG(noise).data)
because when training netD, we don't need the buffers and gradients of netG. This may accelerate the training step and reduce memory usage.
|
st119381
|
You are correct, changing it like this is better. Even better, you can do:
fake = netG(noise)
fake.detach()
If you are interested, please send a pull request to fix it, I will merge.
|
st119382
|
detach according to the docs:
Returns a new Variable, detached from the current graph.
Result will never require gradient. If the input is volatile, the output will be volatile too.
What does it mean for the variable to be detached and how does it affect performance?
I get the volatile backprop error with detach unless I do fake = Variable(fake.data)
If the idea is to not use volatile then what does it mean for the generator? Wouldn’t it be less efficient?
|
st119383
|
detach will basically just forget about its creator attribute, and hence won't backprop through any paths that created it.
|
st119384
|
Right, but it will also copy the volatile attribute of its predecessor, which means that you must either recast the variable anyway or not use volatile for the generator.
So does detach serve a purpose here? I assume volatile in the generator is good, or why else have that flag option at all?
|
st119385
|
Because you don’t need to use volatile if you’re going to use detach later. It will be nearly equivalent, with volatile possibly being a bit faster and more memory efficient.
|
st119386
|
Actually I tried detach at first, but it raised a RuntimeError:
-------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-25-5b95d5373ece> in <module>()
19 fake_pic = netg(noise_).detach()
20 output2 = netd(fake_pic)
---> 21 output2.backward(mone) #change for wgan
22 D_x2 = output2.data.mean()
23 optimizerD.step()
/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.pyc in backward(self, gradient, retain_variables)
149 """
150 if self.volatile:
--> 151 raise RuntimeError('calling backward on a volatile variable')
152 if gradient is None and self.requires_grad:
153 if self.data.numel() != 1:
RuntimeError: calling backward on a volatile variable
and I used ipdb to debug; it shows that both fake_pic and the output are volatile. It seems that even with detach, volatile still spreads to the whole net.
> /usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py(151)backward()
149 """
150 if self.volatile:
--> 151 raise RuntimeError('calling backward on a volatile variable')
152 if gradient is None and self.requires_grad:
153 if self.data.numel() != 1:
ipdb> u
> <ipython-input-27-5b95d5373ece>(21)<module>()
19 fake_pic = netg(noise_).detach()
20 output2 = netd(fake_pic)
---> 21 output2.backward(mone) #for wgan
22 D_x2 = output2.data.mean()
23 optimizerD.step()
ipdb> output2.volatile
True
ipdb> fake_pic.volatile
True
just as the docs of detach say:
If the input is volatile, the output will be volatile too.
|
st119387
|
I was working on the commit, but when I pulled the latest code, I found that this commit is a better solution. So I probably won't send a PR.
|
st119388
|
Yes, volatile will be propagated even if you use detach(). I meant that you could remove the volatile flag and use detach with Variables that don't require grad.
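So, sketching the D step along those lines (netG, netD, noise, batch_size and nz are the names from the DCGAN example; grad_output stands for whatever gradient you pass to backward; this is only the shape of the idea):
noise = Variable(torch.Tensor(batch_size, nz, 1, 1).normal_(0, 1))  # no volatile flag
fake = netG(noise).detach()   # cut the graph so D's backward never reaches G
output = netD(fake)
output.backward(grad_output)  # works, since fake is not volatile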
|
st119389
|
The same model without the two self.bn.forward statements gives an accuracy drop to random guessing… does that make sense?
class WideNet(nn.Module):
def __init__(self):
super(WideNet, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
# self.conv3 = nn.Conv2d(10, 20, kernel_size=2)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(5120, 500)
self.fcmid = nn.Linear(500, 50)
self.fc2 = nn.Linear(50, 10)
self.bn1 = nn.BatchNorm2d(10)
self.bn2 = nn.BatchNorm2d(20)
def forward(self, x):
x = F.leaky_relu(F.max_pool2d(self.conv1(x), 2))
x = self.bn1.forward(x)
x = F.upsample_bilinear(x, size=(16, 16))
x = F.leaky_relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = self.bn2.forward(x)
x = F.upsample_bilinear(x, size=(16, 16))
x = x.view(-1, 5120)
x = F.leaky_relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = F.leaky_relu(self.fcmid(x))
x = F.dropout(x, training=self.training)
x = F.leaky_relu(self.fc2(x))
return F.log_softmax(x)
|
st119390
|
That is what batch normalization is for. When you remove the BatchNorm2d layers, I guess your net is suffering from vanishing gradients and saturation during training.
When testing, the input of the upsample layer will have a different scale, so most neurons are dead or saturated.
|
st119391
|
Not sure; the leaky_relu doesn't seem to have that problem. The upsample is just local replication, so I'm not sure that's the problem.
The issue is that it drops to 10%. The way I fixed it is to have:
def forward(self, x):
x = self.prelu_ac[0](self.bn[0](F.max_pool2d(self.conv1(x), 2)))
#x = self.bn[0](x)
x = F.upsample_bilinear(x, size=(16, 16))
x = self.prelu_ac[1](self.bn[1](F.max_pool2d(self.conv2_drop(self.conv2(x)), 2)))
#x = self.bn[1](x)
x = F.upsample_bilinear(x, size=(16, 16))
x = x.view(-1, 5120)
x = self.prelu_ac[2](self.bn[2](self.drop1(self.fc1(x))))
#x = self.bn[2](x)
x = F.dropout(x, training=self.training)
x = self.prelu_ac[3](self.bn[3](self.drop2(self.fcmid(x))))
#x = self.bn[3](x)
x = F.dropout(x, training=self.training)
x = self.prelu_ac[4](self.bn[4](self.fc2(x)))
#x = self.bn[4](x)
return F.log_softmax(x)
Any tips for improvement? Should I drop the upsample? I think it helps the second conv layer
|
st119392
|
Hello,
I am using a pre-trained VGGNet-16 model where the layers, excluding the FC part, are wrapped in torch.nn.DataParallel.
The optimizer I used is:
optimizer = optim.SGD([{'params': model.pretrained_model[0][24].parameters()},
{'params': model.pretrained_model[0][26].parameters()},
{'params': model.pretrained_model[0][28].parameters()},
{'params': model.regressor[0][1].parameters()},
{'params': model.regressor[0][4].parameters()}], lr=0.001, momentum=0.9)
which gives me 'DataParallel' object does not support indexing TypeError.
pretrained_model contains the CONV layers only and regressor contains FC layers only.
There is no error if I use model.regressor.parameters(), but I need to update the parameters of the last few layers of pretrained_model as well. How do I fix it?
|
st119393
|
It seems that model.regressor is torch.nn.DataParallel and it doesn’t support indexing. You can extract the encapsulated module using an additional .model index: model.regressor.model[0][1].
|
st119394
|
Oh, I forgot to mention that model.pretrained_model is the torch.nn.DataParallel, not model.regressor. So, out of the 26 layers in model.pretrained_model, I will be updating only the last 3 weight layers.
Following is the model summary:
Sequential (
(0): Sequential (
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU (inplace)
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU (inplace)
(4): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
(5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(6): ReLU (inplace)
(7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): ReLU (inplace)
(9): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
(10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU (inplace)
(12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(13): ReLU (inplace)
(14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): ReLU (inplace)
(16): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
(17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(18): ReLU (inplace)
(19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(20): ReLU (inplace)
(21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(22): ReLU (inplace)
(23): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
(24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(25): ReLU (inplace)
(26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(27): ReLU (inplace)
(28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(29): ReLU (inplace)
(30): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
)
(1): Sequential (
(0): Dropout (p = 0.5)
(1): Linear (25088 -> 4096)
(2): ReLU (inplace)
(3): Dropout (p = 0.5)
(4): Linear (4096 -> 4096)
(5): ReLU (inplace)
(6): Linear (4096 -> 1000)
)
)
and the class for model creation:
class MyModel(nn.Module):
def __init__(self, pretrained_model):
super(MyModel, self).__init__()
self.pretrained_model = nn.Sequential(*list(pretrained_model.children())[:-1])
self.pretrained_model = torch.nn.DataParallel(self.pretrained_model)
self.regressor = nn.Sequential(net1)
def forward(self, x):
x = self.pretrained_model(x)
x = x.view(-1,35840)
x = self.regressor(x)
x = x.view(-1,57,77)
return x
Also, there is no model attribute in both model.regressor and model.pretrained_model.
|
st119395
|
I don’t understand what finally do you want to index. Basically, if you have a module wrapped in torch.nn.DataParallel you should use its .module attribute to extract it.
|
st119396
|
I think I was not able to frame my question properly. In the SGD optimizer I was using the parameters of the model.pretrained_model as
model.pretrained_model[0][24].parameters()
for which it gave 'DataParallel' object does not support indexing error. But, If I change it to
model.pretrained_model.module[0][24].parameters()
it didn’t give any indexing error.
Thank you.
|
st119397
|
I’m trying to read the source code but confused about where the loss criterion is implemented.
For example, it seems to me MSEloss, KLDivLoss etc are implemented in /torch/nn/modules/loss.py
But they are actually empty function ( “pass”)
I also checked /torch/nn/_functions/thnn , but couldn’t find the implementation details there
Thanks!
|
st119398
|
they are implemented in C here:
https://github.com/pytorch/pytorch/tree/master/torch/lib/THNN/generic
|
st119399
|
How do you visualize a matrix of Variable type? Is there any built-in function to use? Thanks.
|
st119400
|
You can use matplotlib or your favorite Python visualization library.
if x is your Variable
x = Variable(...)
viz_matrix = x.data.numpy()
# visualize viz_matrix with matplotlib
|
st119401
|
I am running the same code on two different machines, one with Titan X and CUDA 7.5 and the other with Pascal 100 and CUDA 8.0 and I am seeing this error with the Pascal setup:
File "./python2.7/site-packages/torch/autograd/variable.py", line 145, in backward
self._execution_engine.run_backward((self,), (gradient,), retain_variables)
File "./python2.7/site-packages/torch/nn/_functions/linear.py", line 22, in backward
grad_input = torch.mm(grad_output, weight)
RuntimeError: cublas runtime error : the GPU program failed to execute at /data/users/soumith/miniconda2/conda-bld/pytorch-0.1.9_1487343590888/work/torch/lib/THC/THCBlas.cu:246
|
st119402
|
I’m not fully sure why a cublas error occurs. Do other CUDA programs run on the Pascal system?
Also, can you try installing from source: https://github.com/pytorch/pytorch#from-source
|
st119403
|
I was trying to install from source on Mac and this error happens when compiling:
CMake Warning (dev):
Policy CMP0042 is not set: MACOSX_RPATH is enabled by default. Run "cmake
--help-policy CMP0042" for policy details. Use the cmake_policy command to
set the policy and suppress this warning.
MACOSX_RPATH is not specified for the following targets:
THPP
This warning is for project developers. Use -Wno-dev to suppress it.
-- Generating done
-- Build files have been written to: /whatever/pytorch/torch/lib/build/THPP
[ 11%] Linking CXX shared library libTHPP.dylib
Undefined symbols for architecture x86_64:
"_THCSByteTensor_cadd", referenced from:
thpp::THCSTensor<unsigned char>::cadd(thpp::Tensor const&, long long, thpp::Tensor const&) in THCSTensor.cpp.o
"_THCSByteTensor_cmul", referenced from:
thpp::THCSTensor<unsigned char>::cmul(thpp::Tensor const&, thpp::Tensor const&) in THCSTensor.cpp.o
"_THCSByteTensor_free", referenced from:
thpp::THCSTensor<unsigned char>::~THCSTensor() in THCSTensor.cpp.o
thpp::THCSTensor<unsigned char>::~THCSTensor() in THCSTensor.cpp.o
thpp::THCSTensor<unsigned char>::~THCSTensor() in THCSTensor.cpp.o
thpp::THCSTensor<unsigned char>::free() in THCSTensor.cpp.o
"_THCSByteTensor_new", referenced from:
thpp::THCSTensor<unsigned char>::THCSTensor(THCState*) in THCSTensor.cpp.o
thpp::THCSTensor<unsigned char>::THCSTensor(THCState*) in THCSTensor.cpp.o
thpp::THCSTensor<unsigned char>::newTensor() const in THCSTensor.cpp.o
"_THCSByteTensor_newClone", referenced from:
thpp::THCSTensor<unsigned char>::clone() const in THCSTensor.cpp.o
"_THCSByteTensor_newContiguous", referenced from:
thpp::THCSTensor<unsigned char>::contiguous() const in THCSTensor.cpp.o
"_THCSByteTensor_retain", referenced from:
thpp::THCSTensor<unsigned char>::clone_shallow() in THCSTensor.cpp.o
thpp::THCSTensor<unsigned char>::retain() in THCSTensor.cpp.o
"_THCSCharTensor_cadd", referenced from:
thpp::THCSTensor<char>::cadd(thpp::Tensor const&, long long, thpp::Tensor const&) in THCSTensor.cpp.o
"_THCSCharTensor_cmul", referenced from:
thpp::THCSTensor<char>::cmul(thpp::Tensor const&, thpp::Tensor const&) in THCSTensor.cpp.o
"_THCSCharTensor_free", referenced from:
thpp::THCSTensor<char>::~THCSTensor() in THCSTensor.cpp.o
thpp::THCSTensor<char>::~THCSTensor() in THCSTensor.cpp.o
thpp::THCSTensor<char>::~THCSTensor() in THCSTensor.cpp.o
thpp::THCSTensor<char>::free() in THCSTensor.cpp.o
"_THCSCharTensor_new", referenced from:
thpp::THCSTensor<char>::THCSTensor(THCState*) in THCSTensor.cpp.o
thpp::THCSTensor<char>::THCSTensor(THCState*) in THCSTensor.cpp.o
thpp::THCSTensor<char>::newTensor() const in THCSTensor.cpp.o
"_THCSCharTensor_newClone", referenced from:
thpp::THCSTensor<char>::clone() const in THCSTensor.cpp.o
"_THCSCharTensor_newContiguous", referenced from:
thpp::THCSTensor<char>::contiguous() const in THCSTensor.cpp.o
"_THCSCharTensor_retain", referenced from:
thpp::THCSTensor<char>::clone_shallow() in THCSTensor.cpp.o
thpp::THCSTensor<char>::retain() in THCSTensor.cpp.o
"_THCSDoubleTensor_cadd", referenced from:
thpp::THCSTensor<double>::cadd(thpp::Tensor const&, double, thpp::Tensor const&) in THCSTensor.cpp.o
"_THCSDoubleTensor_cmul", referenced from:
thpp::THCSTensor<double>::cmul(thpp::Tensor const&, thpp::Tensor const&) in THCSTensor.cpp.o
"_THCSDoubleTensor_free", referenced from:
thpp::THCSTensor<double>::~THCSTensor() in THCSTensor.cpp.o
thpp::THCSTensor<double>::~THCSTensor() in THCSTensor.cpp.o
thpp::THCSTensor<double>::~THCSTensor() in THCSTensor.cpp.o
thpp::THCSTensor<double>::free() in THCSTensor.cpp.o
"_THCSDoubleTensor_new", referenced from:
thpp::THCSTensor<double>::THCSTensor(THCState*) in THCSTensor.cpp.o
thpp::THCSTensor<double>::THCSTensor(THCState*) in THCSTensor.cpp.o
thpp::THCSTensor<double>::newTensor() const in THCSTensor.cpp.o
"_THCSDoubleTensor_newClone", referenced from:
thpp::THCSTensor<double>::clone() const in THCSTensor.cpp.o
"_THCSDoubleTensor_newContiguous", referenced from:
thpp::THCSTensor<double>::contiguous() const in THCSTensor.cpp.o
"_THCSDoubleTensor_retain", referenced from:
thpp::THCSTensor<double>::clone_shallow() in THCSTensor.cpp.o
thpp::THCSTensor<double>::retain() in THCSTensor.cpp.o
"_THCSFloatTensor_cadd", referenced from:
thpp::THCSTensor<float>::cadd(thpp::Tensor const&, double, thpp::Tensor const&) in THCSTensor.cpp.o
"_THCSFloatTensor_cmul", referenced from:
thpp::THCSTensor<float>::cmul(thpp::Tensor const&, thpp::Tensor const&) in THCSTensor.cpp.o
"_THCSFloatTensor_free", referenced from:
thpp::THCSTensor<float>::~THCSTensor() in THCSTensor.cpp.o
thpp::THCSTensor<float>::~THCSTensor() in THCSTensor.cpp.o
thpp::THCSTensor<float>::~THCSTensor() in THCSTensor.cpp.o
thpp::THCSTensor<float>::free() in THCSTensor.cpp.o
"_THCSFloatTensor_new", referenced from:
thpp::THCSTensor<float>::THCSTensor(THCState*) in THCSTensor.cpp.o
thpp::THCSTensor<float>::THCSTensor(THCState*) in THCSTensor.cpp.o
thpp::THCSTensor<float>::newTensor() const in THCSTensor.cpp.o
"_THCSFloatTensor_newClone", referenced from:
thpp::THCSTensor<float>::clone() const in THCSTensor.cpp.o
"_THCSFloatTensor_newContiguous", referenced from:
thpp::THCSTensor<float>::contiguous() const in THCSTensor.cpp.o
"_THCSFloatTensor_retain", referenced from:
thpp::THCSTensor<float>::clone_shallow() in THCSTensor.cpp.o
thpp::THCSTensor<float>::retain() in THCSTensor.cpp.o
"_THCSHalfTensor_cadd", referenced from:
thpp::THCSTensor<__half>::cadd(thpp::Tensor const&, double, thpp::Tensor const&) in THCSTensor.cpp.o
"_THCSHalfTensor_cmul", referenced from:
thpp::THCSTensor<__half>::cmul(thpp::Tensor const&, thpp::Tensor const&) in THCSTensor.cpp.o
"_THCSHalfTensor_free", referenced from:
thpp::THCSTensor<__half>::~THCSTensor() in THCSTensor.cpp.o
thpp::THCSTensor<__half>::~THCSTensor() in THCSTensor.cpp.o
thpp::THCSTensor<__half>::~THCSTensor() in THCSTensor.cpp.o
thpp::THCSTensor<__half>::free() in THCSTensor.cpp.o
"_THCSHalfTensor_new", referenced from:
thpp::THCSTensor<__half>::THCSTensor(THCState*) in THCSTensor.cpp.o
thpp::THCSTensor<__half>::THCSTensor(THCState*) in THCSTensor.cpp.o
thpp::THCSTensor<__half>::newTensor() const in THCSTensor.cpp.o
"_THCSHalfTensor_newClone", referenced from:
thpp::THCSTensor<__half>::clone() const in THCSTensor.cpp.o
"_THCSHalfTensor_newContiguous", referenced from:
thpp::THCSTensor<__half>::contiguous() const in THCSTensor.cpp.o
"_THCSHalfTensor_retain", referenced from:
thpp::THCSTensor<__half>::clone_shallow() in THCSTensor.cpp.o
thpp::THCSTensor<__half>::retain() in THCSTensor.cpp.o
"_THCSIntTensor_cadd", referenced from:
thpp::THCSTensor<int>::cadd(thpp::Tensor const&, long long, thpp::Tensor const&) in THCSTensor.cpp.o
"_THCSIntTensor_cmul", referenced from:
thpp::THCSTensor<int>::cmul(thpp::Tensor const&, thpp::Tensor const&) in THCSTensor.cpp.o
"_THCSIntTensor_free", referenced from:
thpp::THCSTensor<int>::~THCSTensor() in THCSTensor.cpp.o
thpp::THCSTensor<int>::~THCSTensor() in THCSTensor.cpp.o
thpp::THCSTensor<int>::~THCSTensor() in THCSTensor.cpp.o
thpp::THCSTensor<int>::free() in THCSTensor.cpp.o
"_THCSIntTensor_new", referenced from:
thpp::THCSTensor<int>::THCSTensor(THCState*) in THCSTensor.cpp.o
thpp::THCSTensor<int>::THCSTensor(THCState*) in THCSTensor.cpp.o
thpp::THCSTensor<int>::newTensor() const in THCSTensor.cpp.o
"_THCSIntTensor_newClone", referenced from:
thpp::THCSTensor<int>::clone() const in THCSTensor.cpp.o
"_THCSIntTensor_newContiguous", referenced from:
thpp::THCSTensor<int>::contiguous() const in THCSTensor.cpp.o
"_THCSIntTensor_retain", referenced from:
thpp::THCSTensor<int>::clone_shallow() in THCSTensor.cpp.o
thpp::THCSTensor<int>::retain() in THCSTensor.cpp.o
"_THCSLongTensor_cadd", referenced from:
thpp::THCSTensor<long>::cadd(thpp::Tensor const&, long long, thpp::Tensor const&) in THCSTensor.cpp.o
"_THCSLongTensor_cmul", referenced from:
thpp::THCSTensor<long>::cmul(thpp::Tensor const&, thpp::Tensor const&) in THCSTensor.cpp.o
"_THCSLongTensor_free", referenced from:
thpp::THCSTensor<long>::~THCSTensor() in THCSTensor.cpp.o
thpp::THCSTensor<long>::~THCSTensor() in THCSTensor.cpp.o
thpp::THCSTensor<long>::~THCSTensor() in THCSTensor.cpp.o
thpp::THCSTensor<long>::free() in THCSTensor.cpp.o
"_THCSLongTensor_new", referenced from:
thpp::THCSTensor<long>::THCSTensor(THCState*) in THCSTensor.cpp.o
thpp::THCSTensor<long>::THCSTensor(THCState*) in THCSTensor.cpp.o
thpp::THCSTensor<long>::newTensor() const in THCSTensor.cpp.o
"_THCSLongTensor_newClone", referenced from:
thpp::THCSTensor<long>::clone() const in THCSTensor.cpp.o
"_THCSLongTensor_newContiguous", referenced from:
thpp::THCSTensor<long>::contiguous() const in THCSTensor.cpp.o
"_THCSLongTensor_retain", referenced from:
thpp::THCSTensor<long>::clone_shallow() in THCSTensor.cpp.o
thpp::THCSTensor<long>::retain() in THCSTensor.cpp.o
"_THCSShortTensor_cadd", referenced from:
thpp::THCSTensor<short>::cadd(thpp::Tensor const&, long long, thpp::Tensor const&) in THCSTensor.cpp.o
"_THCSShortTensor_cmul", referenced from:
thpp::THCSTensor<short>::cmul(thpp::Tensor const&, thpp::Tensor const&) in THCSTensor.cpp.o
"_THCSShortTensor_free", referenced from:
thpp::THCSTensor<short>::~THCSTensor() in THCSTensor.cpp.o
thpp::THCSTensor<short>::~THCSTensor() in THCSTensor.cpp.o
thpp::THCSTensor<short>::~THCSTensor() in THCSTensor.cpp.o
thpp::THCSTensor<short>::free() in THCSTensor.cpp.o
"_THCSShortTensor_new", referenced from:
thpp::THCSTensor<short>::THCSTensor(THCState*) in THCSTensor.cpp.o
thpp::THCSTensor<short>::THCSTensor(THCState*) in THCSTensor.cpp.o
thpp::THCSTensor<short>::newTensor() const in THCSTensor.cpp.o
"_THCSShortTensor_newClone", referenced from:
thpp::THCSTensor<short>::clone() const in THCSTensor.cpp.o
"_THCSShortTensor_newContiguous", referenced from:
thpp::THCSTensor<short>::contiguous() const in THCSTensor.cpp.o
"_THCSShortTensor_retain", referenced from:
thpp::THCSTensor<short>::clone_shallow() in THCSTensor.cpp.o
thpp::THCSTensor<short>::retain() in THCSTensor.cpp.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [libTHPP.1.dylib] Error 1
make[1]: *** [CMakeFiles/THPP.dir/all] Error 2
make: *** [all] Error 2
|
st119404
|
Hello,
It seems to be a minor issue. I’m training a toy siamese network. My label is either -1 or 1, so I was using a LongTensor to store the label. It seems to me that torch complains because target is supposed to be a FloatTensor?
File “/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/loss.py”, line 110, in forward
buffer[torch.eq(target, -1.)] = 0
TypeError: torch.eq received an invalid combination of arguments - got (torch.cuda.LongTensor, float), but expected one of:
(torch.cuda.LongTensor tensor, int value)
didn’t match because some of the arguments have invalid types: (torch.cuda.LongTensor, float)
(torch.cuda.LongTensor tensor, torch.cuda.LongTensor other)
didn’t match because some of the arguments have invalid types: (torch.cuda.LongTensor, float)
(torch.cuda.LongTensor tensor, int value)
didn’t match because some of the arguments have invalid types: (torch.cuda.LongTensor, float)
(torch.cuda.LongTensor tensor, torch.cuda.LongTensor other)
didn’t match because some of the arguments have invalid types: (torch.cuda.LongTensor, float)
Thanks!
|
st119405
|
What loss are you using? I think casting the target to a FloatTensor should fix it.
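For example (assuming the target currently lives in a LongTensor of ±1 labels):
target = target.float()                    # LongTensor -> FloatTensor
# or, if the tensor is already on the GPU:
target = target.type(torch.cuda.FloatTensor)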
|
st119406
|
I’m using HingeEmbeddingLoss.
Yes, casting to FloatTensor fixes the problem.
I assumed that a binary label should be an integer, until it complained.
Thanks!
|
st119407
|
Hi there,
I’ve have installed Pytorch based on this command: [conda install pytorch torchvision -c soumith]. In the DOC page of pytorch there are some functions related to torch.nn.utils. For example:
torch.nn.utils.clip_grad_norm()
torch.nn.utils.rnn.PackedSequence()
…
My question is: when I try to access these functions (especially clip_grad_norm), Anaconda gives me:
AttributeError: module 'torch.nn.modules.utils' has no attribute 'clip_grad_norm'
So, how can I fix this error? I think the installation procedure via the above command is somehow incomplete!
Thanks
|
st119408
|
If you want to use the latest version of PyTorch, you should install from source.
Below is an example for installing PyTorch from source.
$ git clone https://github.com/pytorch/pytorch
$ export CMAKE_PREFIX_PATH=/home/yunjey/anaconda3 # your anaconda path
$ conda install numpy mkl setuptools cmake gcc cffi
$ conda install -c soumith magma-cuda80
$ export MACOSX_DEPLOYMENT_TARGET=10.9 # if OSX
$ pip install -r requirements.txt
$ python setup.py install
|
st119409
|
We’re working on fixing some issues with compilers over optimizing the binaries with AVX2 vector instructions, that are not available in lots of older CPUs. We’re going to publish them once we’re done with that.
|
st119410
|
I want to check the output value of each layer I build, but I couldn't find any attribute on the layers for this. Is there any way to do it?
For example, I have a model:
model = nn.Sequential(nn.Linear(128, 64), nn.Linear(64, 32), nn.Linear(32, 1))
av = Variable(torch.rand(7, 128), requires_grad = True)
model.forward(av)
I want to see the output of each linear layer.
|
st119411
|
You should have a look here:
How to extract features of an image from a trained model
Hi all,
I try examples/imagenet of pytorch. It is awesome and easy to train, but I wonder how can I forward an image and get the feature extraction result?
After I train with examples/imagenet/main.py, I get model as,
model_best.pth.tar
And I load this file with
model = torch.load('model_best.pth.tar')
which gives me a dict. How can I use forward method to get a feature (like fc7 layer’s output of Alexnet) of an image with this model?
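For the simple Sequential model in the question above, one minimal sketch is to apply the sub-modules one at a time and keep each intermediate result:
import torch
import torch.nn as nn
from torch.autograd import Variable
model = nn.Sequential(nn.Linear(128, 64), nn.Linear(64, 32), nn.Linear(32, 1))
av = Variable(torch.rand(7, 128), requires_grad=True)
outputs = []
x = av
for layer in model.children():
    x = layer(x)          # run each sub-module in order
    outputs.append(x)     # outputs[i] holds the output of layer i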
|
st119412
|
PyTorch is compatible with both Python 2.7 and 3.5.
Nevertheless, which version of Python should I use if I want to run into the fewest errors/bugs?
|
st119413
|
Both are fully supported.
But if you want to invest in the future, go with 3.5; it is better and faster.
|
st119414
|
This code will produce a segmentation fault if torch is imported:
from skimage.data import astronaut
from matplotlib import pyplot as plt
import torch
f = plt.figure()
plt.imshow(astronaut())
f.savefig('0.pdf')
If you comment out import torch it will work fine.
If you change the extension to PNG, it will also work fine.
I have no idea what’s going on.
|
st119415
|
See this GitHub issue: pytorch/pytorch, "Importing scipy.misc after torch causes segfault" (opened Feb 19, 2017 by wkentaro, closed Mar 5, 2017).
|
st119416
|
Thank you.
This was seriously driving me nuts. It felt like something was about to start flying very soon, e.g. a wireless mouse.
|
st119417
|
I’m fine-tuning a same custom dataset using Torch and PyTorch based on ResNet 34.
With Torch, I use fb.resnet.torch 11 with learning rate 0.001. After 50 epoch:
top1: 89.267 top5: 97.933
With PyTorch, I use code below for fine-tuing. After 90 epoch:
learning rate = 0.01, top1: 78.500, top5: 94.083
learning rate = 0.001, top1: 74.792, top5: 92.583
You can see that PyTorch fine-tuning result is still not so good as Torch. Fine-tuning ResNet 18 has similar results.
Any suggestion or guidance?
PyTorch code used for fine-tuning:
class FineTuneModel(nn.Module):
def __init__(self, original_model, arch, num_classes):
super(FineTuneModel, self).__init__()
# Everything except the last linear layer
self.features = nn.Sequential(*list(original_model.children())[:-1])
self.classifier = nn.Sequential(
nn.Linear(512, num_classes)
)
# Freeze those weights
for p in self.features.parameters():
p.requires_grad = False
def forward(self, x):
f = self.features(x)
f = f.view(f.size(0), -1)
y = self.classifier(f)
return y
|
st119418
|
Your fine tune model looks OK to me. I don’t have enough context to see what could be different. Can you post your fine-tuning code?
|
st119419
|
One possible reason is that torch adopts more image transforms for data augmentation than pytorch does.
|
st119420
|
I think that the difference comes from the fact that you fix all the weights in the pytorch model (except from the last classifier), while in lua torch you are fine-tuning the whole network.
|
st119421
|
@shicai as far as I remember some of the transforms were not ported, because they didn’t have a noticeable effect on the accuracy
|
st119422
|
@shicai @fmassa thanks for the suggestions!
I think it makes sense that the two reasons make the difference:
different data augmentations or image transforms
“hard” fine-tuning and “soft” fine-tuning
From my dataset experiment result, I think maybe “soft” fine-tuning is better than “hard” fine-tuning.
@apaszke from my dataset experiment results, maybe these transforms do have an effect after all
|
st119423
|
If we have a dataloader
train_loader = torch.utils.data.DataLoader(trainset_imoprt, batch_size=200, shuffle=True)
semi_loader = torch.utils.data.DataLoader(trainunl_imoprt, batch_size=200, shuffle=True)
valid_loader = torch.utils.data.DataLoader(validset_import, batch_size=200, shuffle=True)
How do we apply a transform to it? Setting train_loader.dataset.transform = transform seems to do nothing.
|
st119424
|
Look at the existing examples (https://github.com/pytorch/examples) or the documentation for correct usage.
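For the built-in torchvision datasets, the usual pattern is to pass the transform to the dataset constructor rather than assigning it on an existing loader (the MNIST choice and the root path below are just placeholders):
import torch
import torchvision.datasets as dset
import torchvision.transforms as transforms
transform = transforms.Compose([transforms.ToTensor()])
trainset_imoprt = dset.MNIST(root='./data', train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(trainset_imoprt, batch_size=200, shuffle=True)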
|
st119425
|
I implemented the same autoencoder with a BCE criterion using pytorch and torch7.
When I trained them using SGD with momentum, the convergence speeds were almost the same for about the first 2000 iterations. But after that, the convergence of pytorch becomes quite a bit slower than that of torch7.
Has anyone compared the convergence performance?
|
st119426
|
We’ve compared converegence on a variety of tasks: word language models, convnets for classification, GANs, etc.
Must be something subtle.
|
st119427
|
I also tried rmsprop with batch normalization, and the same things were observed.
Below is the modified rmsprop employed in Torch7:
square_avg:mul(alpha)
square_avg:addcmul(1.0-alpha, grad, grad)
avg:sqrt(square_avg+eps)
params:mul(1-lr*weight_decay):addcdiv(-lr, grad, avg)
And below is the equivalent rmsprop in pytorch:
square_avg.mul_(alpha).addcmul_(1 - alpha, grad, grad)
avg = square_avg.add(group['eps']).sqrt()
p.data.mul_(1-group['lr']*weight_decay).addcdiv_(-group['lr'], grad, avg)
Are they equivalent?
Also, my pytorch code didn't work at all before version 0.1.9 due to some runtime errors. Is that related to this convergence difference?
|
st119428
|
I can’t say that they’re equivalent for sure, because I can’t see where avg comes from in the Lua version, but hey look the same to me. As @smth said, we’ve tested convergence on many tasks, and we can’t help you with the limited information we have. I’d suggest to look for NaNs or infs in grads. You might also want to compare the gradients you’re getting from both versions.
|
st119429
|
Thanks for your reply.
Is it possible that the convergence becomes slower even if some gradients become NaNs or infs?
My model is just an autoencoder consisting of convolution, maxpooling, maxunpooling, relu, and sigmoid.
Anyway, to compare the gradients, I found that they can be saved in a flattened form by
torch.cat([g.grad.view(-1) for g in model.parameters()],0)
|
st119430
|
No, but those values can destabilize or break the weights of the model. It would be better to use g.grad.data to get a flattened tensor of all gradients instead of a Variable.
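That is, something like (reusing model from your snippet, and assuming backward has already populated the grads):
flat_grads = torch.cat([p.grad.data.view(-1) for p in model.parameters()], 0)
print(flat_grads.ne(flat_grads).sum())   # NaN != NaN, so this counts NaN entries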
|
st119431
|
Kind of a weird bug. If I do:
train_loader = torch.utils.data.DataLoader(trainset_imoprt, batch_size=200, shuffle=True)
semi_loader = torch.utils.data.DataLoader(trainunl_imoprt, batch_size=200, shuffle=True)
valid_loader = torch.utils.data.DataLoader(validset_import, batch_size=200, shuffle=True)
features = train_loader.dataset.train_data.numpy()
labels = train_loader.dataset.train_labels.numpy()
img = features
img = img.astype('float32')
lab = labels
img, lab = torch.from_numpy(img), torch.from_numpy(lab)
train = torch.utils.data.TensorDataset(img.unsqueeze(1), lab)
train_loader = torch.utils.data.DataLoader(train, batch_size=64, shuffle=False)
The same model that gives me 96% with just the original train_loader now dissolves to random guessing (for MNIST, 10% accuracy). Is there something I'm doing wrong?
|
st119432
|
Maybe it’s because of the different batch size or shuffle? Also, I don’t understand why you have to unsqueeze a dimension that wasn’t used by the first train_loader.
|
st119433
|
What’s a good way to access a convolutional filter (the learned weights) in the model?
Say you have a model in which you declared self.conv1 = nn.Conv2d(…
During training you want to print out the weights that it learns, or more specifically get one of the filters responsible for the feature maps.
edit: The filter in question nn.Conv2d(1, 1, 5, stride=1, padding=2, bias=False)
|
st119434
|
That works.
Parameter containing:
(0 ,0 ,.,.) =
-0.1781 -0.3750 0.0752 -0.1060 0.1356
0.1607 -0.2711 0.1783 0.2942 0.0471
0.1992 0.0228 -0.1627 -0.4729 -0.0560
0.1801 -0.0715 0.0305 -0.0124 -0.1072
0.2290 0.3730 0.1166 -0.1296 0.0746
[torch.cuda.FloatTensor of size 1x1x5x5 (GPU 0)]
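For reference, a printout like that can be obtained by reading the layer's weight parameter directly (module names follow the question above):
print(model.conv1.weight)         # the nn.Parameter, here of size 1 x 1 x 5 x 5
kernel = model.conv1.weight.data  # the underlying (cuda.)FloatTensor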
|
st119435
|
How can you call the conv operator with non-learnable tensors? Suppose I have that tensor I posted earlier that I really like; I want it to be constant and not have to worry about it being unfrozen during a p.requires_grad = True operation, such as is often performed in GANs.
Or I have it as a variable declared outside the class of the model where I have direct control over it from the main execution flow.
|
st119436
|
If you define the non-learnable weight tensor as a torch.Tensor and store it using module.register_buffer, you can use it during forward by wrapping it in a new Variable.
|
st119437
|
So I need to manually set self.conv.weight = Variable(filter) inside forward? The way you said “give it to” implies that it can be passed as an argument? I don’t see how.
|
st119438
|
Don’t save the variable anywhere! It should be created at every forward and use as a local Variable. I don’t really know what you want to do but I guess this might help:
filter = Variable(self.filter) # self.filter is a registered buffer (tensor)
result = F.conv2d(input, filter)
|
st119439
|
Hi,
Is there a way to view output images of the model in PyTorch other than converting to numpy and viewing with matplotlib?
I am training a vggnet-16 model on nyu-v2 depth dataset and I am getting salt and pepper noise in PyTorch whereas I am not getting any salt and pepper noise in Keras.
Help please.
|
st119440
|
How are you displaying the images from Keras? Maybe they’re in an incorrect range
|
st119441
|
Hi,
For Keras I am using simple matplotlib functions. The range is the same in both frameworks. The problem was with the dataset, and the salt and pepper noise diminishes if I increase the number of epochs. It worked in Keras, but I haven't checked in PyTorch yet.
|
st119442
|
I’m porting some code from Caffe which column major and it would be helpful to know if torch is similarly column major or is row major.
|
st119443
|
The data layouts of torch and caffe are the same, and we have borrowed caffe kernels in the past.
Indeed, caffe is also row-major.
|
st119444
|
This “deformation_layer” isn’t part of upstream Caffe, so maybe it’s only their code to make the use of BLAS cleaner.
// reshape top blob to requested shape
vector<int> out_shape;
out_shape.push_back(batch_size_);
if( param.nz() > 0) out_shape.push_back(param.nz());
out_shape.push_back(param.ny());
out_shape.push_back(param.nx());
out_shape.push_back(param.ncomponents());
top[0]->Reshape( out_shape);
|
st119445
|
I read the demo at https://github.com/pytorch/examples/blob/master/dcgan/main.py
I don’t understand this correct += pred.eq(target.data).cpu().sum().
Why do you use .cpu() function?
|
st119446
|
https://github.com/pytorch/examples/blob/master/dcgan/main.py
image.png (778×307, 22.6 KB)
Can you see the link? It is the MNIST example in the pytorch examples!
Thanks very much
|
st119447
|
As you can see in the comment on the line above the one you highlighted, pred holds the index of the max log-probability. Hence, checking pred == target and summing the correct predictions in the mini-batch adds up to the total correctly predicted.
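That is, the pattern in the example is roughly:
pred = output.data.max(1)[1]                 # index of the max log-probability per sample
correct += pred.eq(target.data).cpu().sum()  # compare to the labels and accumulate on the CPU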
|
st119448
|
But why use cpu() ?
It seems that using correct += pred.eq(target.data).sum() would be faster
|
st119449
|
I installed pytorch on several machines, from source and from conda, and I am getting different execution times for matrix multiplication. All installs use Anaconda 4.3.0 with Python 3.6. However, I can't figure out whether pytorch is using MKL, OpenBLAS, or another backend. Right now the macOS install is the fastest despite the machine having the slowest CPU.
The reason I ran these tests is that I noticed a severe slowdown (~10 times slower) of a multiprocessing RL algo I am working on when executed on the Linux machines.
On the Linux machines torch seems to be using only a single thread when doing the multiplication, as opposed to macOS, even though torch.get_num_threads() returns the correct number of threads on each system.
Results:
macOS: Sierra, CPU: Intel i7-4870HQ (8) @ 2.50GHz, 16GB RAM, GeForce GT 750M. Installed from sources.
Allocation: 5.921
Torch Blas: 7.277
Numpy Blas: 7.841
Torch cuBlas: 0.205
Ubuntu 16.10, CPU: Intel i7-4720HQ (8) @ 3.60GHz, 16GB RAM, GeForce GTX 960M. Installed from sources.
Allocation: 4.030
Torch Blas: 21.112
Numpy Blas: 21.82
Torch cuBlas: 0.121
CentOS 7.2, CPU: Intel Xeon E5-2640v4 (40) @ 2.40GHz, 16GB RAM, Titan X. Installed both from source and with conda. Also ran the test with python 3.5 and pytorch built from sources.
Allocation: 4.557
Torch Blas: 19.646
Numpy Blas: 20.155
Torch cuBlas: 0.155
Finally, this is the output of np.__config__.show() on all the machines:
openblas_lapack_info:
NOT AVAILABLE
lapack_opt_info:
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['/opt/anaconda3/include']
library_dirs = ['/opt/anaconda3/lib']
libraries = ['mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'iomp5', 'pthread']
blas_mkl_info:
...
blas_opt_info:
...
lapack_mkl_info:
...
The code I am using:
import time
import torch
import numpy
torch.set_default_tensor_type("torch.FloatTensor")
w = 5000
h = 40000
is_cuda = torch.cuda.is_available()
start = time.time()
a = torch.rand(w,h)
b = torch.rand(h,w)
a_np = a.numpy()
b_np = b.numpy()
if is_cuda:
a_cu = a.cuda()
b_cu = b.cuda()
allocation = time.time()
print("Allocation ", allocation - start)
c = a.mm(b)
th_blas = time.time()
print("Torch Blas ", th_blas - allocation)
c = a_np.dot(b_np)
np_blas = time.time()
print("Numpy Blas ", np_blas - th_blas)
if is_cuda:
c = a_cu.mm(b_cu)
cu_blas = time.time()
print("Torch cuBlas ", cu_blas - np_blas)
print("Final", time.time() - start)
edit: For comparison here are the results of the same script on Lua Torch on the last machine from above:
Allocation: 4.426
Torch Blas: 2.777
Torch cuBlas: 0.097
At this point I am more inclined to believe my linux pytorch installs are using a BLAS fallback. Hoping this isn’t Python’s overhead…
|
st119450
|
Did that, it didn’t help. Is the variable relevant for runtime only? Or I should also try to recompile torch with it exported?
MKL seems properly configured:
>> mkl.get_max_threads()
>> 20
I also installed anaconda2 today with Python 2.7 and pytorch (from conda) on the Ubuntu laptop described above, and got the same figures. I could reproduce this on CentOS and Ubuntu, on four different machines, with Python 3.6, 3.5 and 2.7, and with pytorch installed both from source and from conda.
Can someone else run the script above and report the numbers?
|
st119451
|
I’m seeing similar behavior. I ran your script from both PyTorch built from source and the Conda installation:
built from source:
Allocation 6.476427316665649
Torch Blas 4.414772272109985
Numpy Blas 2.665677547454834
Torch cuBlas 0.14812421798706055
Final 13.705262184143066
conda:
Allocation 5.521166086196899
Torch Blas 39.35049605369568
Numpy Blas 39.40145969390869
Final 84.42150139808655
It looks like something is wrong with Conda.
A minor note: your script only measures the cuBlas launch time, not the execution time. You need a torch.cuda.synchronize() call to measure execution time.
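Roughly, the timing section could look like this (reusing the variables from the script above):
if is_cuda:
    torch.cuda.synchronize()          # make sure all previous GPU work has finished
    t0 = time.time()
    c = a_cu.mm(b_cu)
    torch.cuda.synchronize()          # wait for the mm kernel itself to finish
    print("Torch cuBlas ", time.time() - t0)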
|
st119452
|
Thanks for taking the time to check this. Also for the tip on benchmarking cuda.
I’ll try to build it again altghough I did this several times with same result.
|
st119453
|
Can you paste the full log from python setup.py install into a gist? Maybe your MKL isn’t picked up.
|
st119454
|
I won’t be able to post the log until later today, however I looked specifically for messages related to MKL before starting the thread and it was picking MKL headers/objects from anaconda and also passing BLAS-related tests for operations such as gemm. I’ll post the log as soon as I can nevertheless.
|
st119455
|
Let me know if there’s anything else I can look into, especially if you can’t reproduce this situation.
|
st119456
|
What fixed things for me was adding “iomp5” to FindMKL.cmake:
diff --git a/torch/lib/TH/cmake/FindMKL.cmake b/torch/lib/TH/cmake/FindMKL.cmake
index e68ae6a..7c9325a 100644
--- a/torch/lib/TH/cmake/FindMKL.cmake
+++ b/torch/lib/TH/cmake/FindMKL.cmake
@@ -50,7 +50,7 @@ ENDIF ("${SIZE_OF_VOIDP}" EQUAL 8)
IF(CMAKE_COMPILER_IS_GNUCC)
SET(mklthreads "mkl_gnu_thread" "mkl_intel_thread")
SET(mklifaces "gf" "intel")
- SET(mklrtls)
+ SET(mklrtls "iomp5")
ELSE(CMAKE_COMPILER_IS_GNUCC)
SET(mklthreads "mkl_intel_thread")
SET(mklifaces "intel")
otherwise the mkl_sequential library is used (and your log shows this), but I don't know nearly enough about the interactions between compilers and threading libraries to know if it is a robust solution. @apaszke, @colesbury, I can submit a PR if you think that's OK.
|
st119457
|
@ngimel yes, please send a PR. It would be even simpler for us if you could send it to torch/torch7 directly. Thanks!
|
st119458
|
@ngimel @apaszke I can confirm this fixes the issue indeed. Thank you for all the support, much appreciated!
|
st119459
|
Hi,
I want to use a pretrained model to supervise another model, which requires setting batch normalization to eval mode. However, with an LSTM in my pretrained model, it raises the following message:
Traceback (most recent call last):
File "main.py", line 153, in val
cost.backward()
File "/home/jrmei/.local/lib/python2.7/site-packages/torch/autograd/variable.py", line 145, in backward
self._execution_engine.run_backward((self,), (gradient,), retain_variables)
File "/home/jrmei/.local/lib/python2.7/site-packages/torch/autograd/function.py", line 208, in _do_backward
result = super(NestedIOFunction, self)._do_backward(gradients, retain_variables)
File "/home/jrmei/.local/lib/python2.7/site-packages/torch/autograd/function.py", line 216, in backward
result = self.backward_extended(*nested_gradients)
File "/home/jrmei/.local/lib/python2.7/site-packages/torch/nn/_functions/rnn.py", line 199, in backward_extended
self._reserve_clone = self.reserve.clone()
AttributeError: 'CudnnRNN' object has no attribute 'reserve'
Do you have any suggestions? Thanks very much.
|
st119460
|
It looks like your pretrained model does not have reserve.
Is there a way I can reproduce your problem? Do you have a small 30-line script that produces the same problem?
|
st119461
|
import torch
import torch.nn as nn
from torch.autograd import Variable
rnn = nn.Sequential(nn.LSTM(10, 10, 2, bidirectional=True))
rnn = rnn.cuda()
rnn.eval()
i = torch.Tensor(2, 1, 10).cuda()
i = Variable(i, requires_grad=True)
o, _ = rnn(i)
o = o.mean()
o.backward(torch.ones(1).cuda())
|
st119462
|
May I ask when the next update will be?
My work is really being held up by some bugs.
|
st119463
|
import torch
from torch.autograd import Variable
inputs = Variable(torch.randn(4,5), requires_grad = True)
#inputs = torch.randn(4,5)
cum_out = torch.cumprod(inputs, dim =0)
print(cum_out)
RuntimeError Traceback (most recent call last)
<ipython-input-6-427be1ef1fd7> in <module>()
4 #inputs = torch.randn(4,5)
5
----> 6 cum_out = torch.cumprod(inputs, dim =0)
7
8 print(cum_out)
RuntimeError: Type Variable doesn't implement stateless method cumprod
|
st119464
|
I have the following code in an extension of torch.autograd.Function:
self.avarwic_infor = {
    c: {
        'ind_c': ind_range[y==c],
        'mask_c': y==c,
        'prob_c': torch.FloatTensor([torch.sum(y==c)]) / torch.FloatTensor([self.N])
    } for c in torch.range(0, self.C-1).type('torch.ByteTensor')
}
where y has type LongTensor, self.N and self.C have type long.
It works fine if I use CPU, but I get the following error if GPU is used:
TypeError: indexing a tensor with an object of type ByteTensor. The only supported types are integers, slices, numpy scalars and torch.ByteTensor.
Is this error because I am using a dictionary and GPU simultaneously, or did I do something else wrong?
Thank you very much!
|
st119465
|
I really appreciate your answer, but the next problem is how I can tell whether the function is being run on GPU or CPU; is there any flag that indicates it? I tried to pass a bool argument from outside the function, but another error is raised:
RuntimeError: expected a Variable argument, but got bool
Is it because torch.autograd.Function accepts Variables only? If so, how can I determine whether the function is run on GPU or not?
Thank you!
|
st119466
|
I need to load raw int16 binary files on-the-fly during training. In Lua Torch I could just do the following:
img = torch.ShortStorage(filename)
It appears that this functionality does not exist in PyTorch, so instead I am loading into a numpy array as follows:
img = np.fromfile(filename, 'int16')
The problem is that np.fromfile is extremely slow. For my large data files (>100MB), np.fromfile loads four orders of magnitude slower than the old torch.ShortStorage method. How can I get the fast load speeds in PyTorch?
|
st119467
|
We don’t support memory mapping files right now. I’ve opened an issue 91 for that
|