st115468
|
In the tutorial http://pytorch.org/tutorials/intermediate/char_rnn_generation_tutorial.html, why can the same input produce different outputs? After training, the model parameters are supposed to be fixed, aren't they?
|
st115469
|
It looks like you solved it already, but the reason this happens is the dropout layer adding some randomness. The dropout can be “turned off” to make the model deterministic with rnn.train(False) in the sample function.
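For anyone else hitting this, a minimal sketch of the fix (assuming the tutorial's rnn model and sample() function):

rnn.train(False)  # equivalent to rnn.eval(); puts dropout layers in inference mode
print(sample('Russian', 'R'))  # repeated calls are now deterministic
print(sample('Russian', 'R'))  # same output as above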
|
st115470
|
Hi spro, I'm a beginner in DL and PyTorch, and I can't understand why the network graph in http://pytorch.org/tutorials/intermediate/char_rnn_generation_tutorial.html is the way it is. I don't think the previous output should be the next step's input; could you tell me why? Thanks!
|
st115471
|
When generating, the previous output is the next input; however, when training, the correct output is the next input. This training technique is known as “teacher forcing” - look it up for more on why it's used.
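A rough sketch of the difference, with names loosely following the tutorial (this is an illustration, not the tutorial's exact code):

# Training with teacher forcing: the ground-truth character is fed at each step.
for i in range(input_line_tensor.size(0)):
    output, hidden = rnn(category_tensor, input_line_tensor[i], hidden)
    loss += criterion(output, target_line_tensor[i])

# Generation: the model's own previous output is fed back as the next input.
input = inputTensor(start_letter)
for i in range(max_length):
    output, hidden = rnn(category_tensor, input[0], hidden)
    topi = output.data.topk(1)[1][0][0]
    input = inputTensor(all_letters[topi])  # previous output becomes next input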
|
st115472
|
I need to reproduce the same output on different machines. However, it’s different. There is no dropout layer, and I set torch.backends.cudnn.enabled = False, but still different. Any idea how to solve this?
|
st115473
|
Maybe you have a different version of CUDA and/or PyTorch on the machines?
Can you print the versions?
import torch
import sys
import numpy as np
from subprocess import call
print('__Python VERSION:', sys.version)
print('__pyTorch VERSION:', torch.__version__)
print('__CUDA VERSION:')
call(["nvcc", "--version"])  # shells out to print the CUDA toolkit version (reconstructed; the original only printed the header)
print('__CUDNN VERSION:', torch.backends.cudnn.version())
print('__Number CUDA Devices:', torch.cuda.device_count())
print("OS: ", sys.platform)
print("Python: ", sys.version)
print("PyTorch: ", torch.__version__)
print("Numpy: ", np.__version__)
|
st115474
|
1060
('__Python VERSION:', '2.7.6 (default, Oct 26 2016, 20:30:19) \n[GCC 4.8.4]')
('__pyTorch VERSION:', '0.2.0_1')
__CUDA VERSION
('__CUDNN VERSION:', 6021)
('__Number CUDA Devices:', 1L)
('OS: ', 'linux2')
('Python: ', '2.7.6 (default, Oct 26 2016, 20:30:19) \n[GCC 4.8.4]')
('PyTorch: ', '0.2.0_1')
('Numpy: ', '1.13.1')
GTX TITAN X
('__Python VERSION:', '2.7.5 (default, Sep 15 2016, 22:37:39) \n[GCC 4.8.5 20150623 (Red Hat 4.8.5-4)]')
('__pyTorch VERSION:', '0.2.0_1')
__CUDA VERSION
('__CUDNN VERSION:', 6021)
('__Number CUDA Devices:', 1L)
('OS: ', 'linux2')
('Python: ', '2.7.5 (default, Sep 15 2016, 22:37:39) \n[GCC 4.8.5 20150623 (Red Hat 4.8.5-4)]')
('PyTorch: ', '0.2.0_1')
('Numpy: ', '1.13.0')
TITAN XP
('__Python VERSION:', '2.7.5 (default, Nov 6 2016, 00:28:07) \n[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)]')
('__pyTorch VERSION:', '0.2.0_1')
__CUDA VERSION
('__CUDNN VERSION:', 6021)
('__Number CUDA Devices:', 1L)
('OS: ', 'linux2')
('Python: ', '2.7.5 (default, Nov 6 2016, 00:28:07) \n[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)]')
('PyTorch: ', '0.2.0_1')
('Numpy: ', '1.13.1')
K80
('__Python VERSION:', '2.7.5 (default, Sep 15 2016, 22:37:39) \n[GCC 4.8.5 20150623 (Red Hat 4.8.5-4)]')
('__pyTorch VERSION:', '0.2.0_1')
__CUDA VERSION
('__CUDNN VERSION:', 6021)
('__Number CUDA Devices:', 1L)
('OS: ', 'linux2')
('Python: ', '2.7.5 (default, Sep 15 2016, 22:37:39) \n[GCC 4.8.5 20150623 (Red Hat 4.8.5-4)]')
('PyTorch: ', '0.2.0_1')
('Numpy: ', '1.13.0')
It seems there's no big difference between the environments, but the outputs differ; e.g. the softmax output max difference is 0.05, which is not acceptable in my case.
|
st115475
|
It seems that if they are all Titan XPs, the output is the same. And I haven't tried any random seed; what random seed should I set? cuda.seed or anything else?
|
st115476
|
Try setting these three seeds:
np.random.seed(41)  # any fixed integer works
torch.manual_seed(41)
torch.cuda.manual_seed(41)
|
st115477
|
import torch
import numpy as np
from torch.autograd import Variable
import models  # the poster's own model definitions

torch.backends.cudnn.enabled = False
np.random.seed(41)
torch.manual_seed(41)
torch.cuda.manual_seed(41)

model = models.models['alex_22']()
model.load_model()
net = model.cuda()

def get_input(n, c, h, w):
    return torch.randn(n, c, h, w)

load = True
# load = False
save_pth = 'tensors.pth'  # 'no_cudnn_tensors.pth'
saves = {}
if not load:
    a = get_input(1, 3, 127, 127)
    b = get_input(1, 3, 255, 255)
    saves['a'] = a
    saves['b'] = b
else:
    saves = torch.load(save_pth)
    a = saves['a']
    b = saves['b']

def Var(x):
    return Variable(x.cuda())

output = net(Var(a), Var(b))[1].data
b1 = net.forward_one_branch(Var(a), net.conv_r1, net.conv_cls1)[1].data
b2 = net.forward_one_branch(Var(b), net.conv_r2, net.conv_cls2)[1].data
f1 = net.features(Var(a)).data
f2 = net.features(Var(b)).data

if not load:
    saves['o'] = output
    saves['b1'] = b1
    saves['b2'] = b2
    saves['f1'] = f1
    saves['f2'] = f2
    torch.save(saves, save_pth)
    print 'saving'
else:
    o2 = saves['o']
    ob1 = saves['b1']
    ob2 = saves['b2']
    of1 = saves['f1']
    of2 = saves['f2']
    print (o2 - output).abs().max()
    print (b1 - ob1).abs().max()
    print (b2 - ob2).abs().max()
    print (f1 - of1).abs().max()
    print (f2 - of2).abs().max()
Using this test code, I ran on a Titan X and a 1060; the output is:
0.0687821805477
0.0696254000068
0.0968679785728
0.415367662907
0.437078356743
The difference is relatively huge.
The model takes two inputs x, y: (1, 3, 127, 127) -> (256, 4, 4) and (1, 3, 255, 255) -> (256, 20, 20), then correlates the two outputs.
features is a modified AlexNet:
class AlexNet5(nn.Module):
    def __init__(self):
        super(AlexNet5, self).__init__()
        self.conv1 = nn.Conv2d(3, 96, kernel_size=11, stride=2)
        self.conv2 = nn.Conv2d(96, 256, kernel_size=5)
        self.conv3 = nn.Conv2d(256, 384, kernel_size=3)
        self.conv4 = nn.Conv2d(384, 384, kernel_size=3)
        self.bn1 = nn.BatchNorm2d(96)
        self.bn2 = nn.BatchNorm2d(256)
        self.bn3 = nn.BatchNorm2d(384)
        self.bn4 = nn.BatchNorm2d(384)
        self.conv5 = nn.Conv2d(384, 256, kernel_size=3)
        self.bn5 = nn.BatchNorm2d(256)
        self.feature_size = 256

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.bn1(self.conv1(x)), kernel_size=3, stride=2))
        x = F.relu(F.max_pool2d(self.bn2(self.conv2(x)), kernel_size=3, stride=2))
        x = F.relu(self.bn3(self.conv3(x)))
        x = F.relu(self.bn4(self.conv4(x)))
        x = self.bn5(self.conv5(x))
        return x
|
st115478
|
And are you running both on the CPU?
I don't see any GPU-related tensors, for instance:
X_tensor = Variable(torch.from_numpy(a).cuda())
|
st115479
|
Can't seem to find anything strange; if you want, upload a self-contained Jupyter notebook to git with the data and I can run it locally to compare the results.
|
st115480
|
I don't know why, but right now the K80, Titan and Titan XP outputs only differ by at most 1e-6, while the 1060 shows a huge difference of up to 0.5. It seems that on the 1060 I installed http://download.pytorch.org/whl/cu80/torch-0.2.0.post3-cp27-cp27m-manylinux1_x86_64.whl, and the others use http://download.pytorch.org/whl/cu80/torch-0.2.0.post3-cp27-cp27mu-manylinux1_x86_64.whl. Can that produce such a huge difference?
|
st115481
|
Finally I located the problem: I had installed PyTorch on the 1060 using conda. After changing it to the system Python, the output is the same.
|
st115482
|
Hi, I wonder how I could do alternating training, e.g., input -> net1 -> net2 -> output.
Say I have defined net1 to have layers like:

def forward(self):
    # non-conv layers are not shown here
    input = torch.nn.Conv2d()
    layer2 = torch.nn.Conv2d()
    layer3 = torch.nn.Conv2d()
    output = torch.nn.Conv2d()
    return output

For alternating training, I want to train net1 first, then freeze all params in net1 except for the last conv layer and train net2. Searching around, I put together something like the code below, but I haven't got it running yet:

net1.train()
net2.train()
optim1 = SGD(net1.parameters())
optim2 = SGD(net2.parameters())
for i, (data, label) in enumerate(data_loader):
    data = Variable(data.cuda(async=True))
    label = Variable(label.cuda(async=True))
    net1_out = net1(data)
    net1_loss = loss1(net1_out, label)
    optim1.zero_grad()
    net1_loss.backward()
    optim1.step()
    for param in net1.parameters():
        param.require_grad = False
    net1.output.require_grad = True  # error: object has no attribute 'output'
    net2_out = net2(net1_out)
    net2_loss = loss2(net2_out, label)
    optim2.zero_grad()
    net2_loss.backward()
    optim2.step()

Questions:
1. How do I properly unfreeze the last layer in net1 when training net2? The simple network structure here is just for illustration; I wonder if there is a general way of retrieving the last layer in a network, or any particular layer.
2. Would this work as I expected, i.e., training alternating between net1 and net2?
Thanks!
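Not a definitive answer, but a sketch of how the freezing step is often written. Two things to note: the attribute is requires_grad (not require_grad), and the optimizer should only be handed parameters that still require gradients:

# Freeze everything in net1, then unfreeze its last submodule.
for param in net1.parameters():
    param.requires_grad = False  # note the spelling: requires_grad
# A generic way to grab the last layer, assuming definition order
# matches execution order:
last_layer = list(net1.children())[-1]
for param in last_layer.parameters():
    param.requires_grad = True

# Hand the optimizers only the trainable parameters.
optim1 = SGD(filter(lambda p: p.requires_grad, net1.parameters()), lr=0.01)
optim2 = SGD(net2.parameters(), lr=0.01)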
|
st115483
|
Hi all,
I’m facing a cryptic error which I can’t find any discussion about.
I’m simply trying to pass the output of a LSTM layer to a linear layer. The code is as follows:
output, hidden = hidden
logits = self.fully_connected(output)
fully_connected is simply an nn.Linear, and output is a torch.cuda.FloatTensor of size (batch_size, n_hidden).
The error message I’m receiving is the following:
TypeError: CudaThreshold_updateOutput received an invalid combination of arguments - got (int, torch.cuda.FloatTensor, torch.cuda.FloatTensor, int, int, Linear), but expected (int state, torch.cuda.FloatTensor input, torch.cuda.FloatTensor output, float threshold, float val, bool inplace)
Does anyone know what might be the cause of this error?
|
st115484
|
somewhere in your code, instead of passing the output of the linear layer to the next layer, you are passing the Linear layer itself.
Is there a small snippet that reproduces this?
|
st115485
|
I'm not sure.
What I'm trying to do is copy the parameters of a language model and use them to pretrain a text classifier. I'm trying to copy the word embedding and an LSTM unit. What I'm doing is the following:
I load both modules with PyTorch:
checkpoint = torch.load(load_model)
embed = checkpoint['embed']
rnn = checkpoint['rnn']
From the rnn I try to get only the rnn cell:
rnn.layers[0]
Out[13]: LSTMCell(256, 1024)
I try to pass these modules to a new RNN:
self.embedding = copy.deepcopy(pretrained_embed)
self.rnn_cell = copy.deepcopy(pretrained_lstm_cell)
self.rnn_size = pretrained_lstm_cell.hidden_size
Then I try to connect the copied component to a new Linear layer from which I expect to get the logits of my text classifier:
self.fully_connected = nn.Linear(self.rnn_size, num_classes)
Maybe I’m copying the LSTMCell in the wrong way and it is still attached to the linear output of my language model? If that is the case, what would be the correct way to reuse my pretrained parameters in this new graph?
|
st115486
|
In case anyone faces the same problem: what happened was that I attached the F.relu operation directly to the nn.Linear module when building the graph. I believe this caused the problem when I tried to use the layer in the forward method.
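For concreteness, a guess at the pattern that produces this exact TypeError (the Linear module itself ends up as an argument to the activation):

# Broken: the module object, not its output, goes through the activation
x = F.relu(self.fully_connected)

# Intended: call the module on the input, then activate the result
x = F.relu(self.fully_connected(output))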
|
st115487
|
Hi,
I am converting some of my old Lua-Torch code to PyTorch. I am having some problems with RNNs implemented from scratch: they seem much slower in PyTorch than in Lua-Torch. I cannot use pre-built modules such as nn.LSTM() or nn.GRU() because I need to implement non-traditional RNN cells.
Below are two programs (one in PyTorch and one in Lua-Torch) in which an LSTM cell is built from scratch, then run forward and backward 1000 times with fake data. On a Titan X the computation times are:
pytorch: 4.6s
lua-torch: 1.4s
Am I doing something wrong in my PyTorch code? Can I speed this up?
PYTORCH CODE:
import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.nn.functional as F
import time

input_size = 500
hidden_size = 500
batch_size = 20

class LSTMCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(LSTMCell, self).__init__()
        self.hidden_size = hidden_size
        self.lin = nn.Linear(input_size + hidden_size, 4 * hidden_size)

    def forward(self, x, state0):
        h0, c0 = state0
        x_and_h0 = torch.cat((x, h0), 1)
        u = self.lin(x_and_h0)
        i = F.sigmoid(u[:, 0 * self.hidden_size: 1 * self.hidden_size])
        f = F.sigmoid(u[:, 1 * self.hidden_size: 2 * self.hidden_size])
        g = F.tanh(u[:, 2 * self.hidden_size: 3 * self.hidden_size])
        o = F.sigmoid(u[:, 3 * self.hidden_size: 4 * self.hidden_size])
        c = f * c0 + i * g
        h = o * F.tanh(c)
        return (h, c)

# construct LSTM Cell
rnn = LSTMCell(input_size, hidden_size)
rnn.cuda()

# generate fake data
x = torch.rand(batch_size, input_size).cuda()
h0 = torch.rand(batch_size, hidden_size).cuda()
c0 = torch.rand(batch_size, hidden_size).cuda()
grad_h = torch.rand(batch_size, hidden_size).cuda()
grad_c = torch.rand(batch_size, hidden_size).cuda()

# run the cell 1000 times forward and backward
t0 = time.time()
for i in range(1000):
    xx = Variable(x, requires_grad=True)
    hh0 = Variable(h0, requires_grad=True)
    cc0 = Variable(c0, requires_grad=True)
    hh, cc = rnn(xx, (hh0, cc0))
    torch.autograd.backward(variables=(hh, cc), grad_variables=(grad_h, grad_c))
print('time in s : ' + str(time.time() - t0))
LUA-TORCH CODE:
require('nngraph')
require('cunn')

input_size = 500
hidden_size = 500
batch_size = 20

local function LSTMCell()
    local x = nn.Identity()()
    local h0 = nn.Identity()()
    local c0 = nn.Identity()()
    local x_and_h0 = nn.JoinTable(2)({x, h0})
    local u = nn.Linear(input_size + hidden_size, 4 * hidden_size)(x_and_h0)
    local u_reshaped = nn.Reshape(batch_size, 4, hidden_size)(u)
    local tbl = nn.SplitTable(2)(u_reshaped)
    local f = nn.Sigmoid()(nn.SelectTable(1)(tbl))
    local i = nn.Sigmoid()(nn.SelectTable(2)(tbl))
    local g = nn.Tanh()(nn.SelectTable(3)(tbl))
    local o = nn.Sigmoid()(nn.SelectTable(4)(tbl))
    local c = nn.CAddTable()({ nn.CMulTable()({f, c0}), nn.CMulTable()({i, g}) })
    local h = nn.CMulTable()({ o, nn.Tanh()(c) })
    local mod = nn.gModule({x, h0, c0}, {h, c})
    return mod
end

-- construct LSTM Cell
rnn = LSTMCell()
rnn:cuda()

-- generate fake data
x = torch.rand(batch_size, input_size):cuda()
h0 = torch.rand(batch_size, hidden_size):cuda()
c0 = torch.rand(batch_size, hidden_size):cuda()
grad_h = torch.rand(batch_size, hidden_size):cuda()
grad_c = torch.rand(batch_size, hidden_size):cuda()

-- run the cell 1000 times forward and backward
t0 = torch.tic()
for i = 1, 1000 do
    h, c = unpack(rnn:forward({x, h0, c0}))
    tbl = rnn:backward({x, h0, c0}, {grad_h, grad_c})
end
print('time in s : ' .. torch.toc(t0))
|
st115488
|
Hi,
I have been trying for ages to implement some Keras code in PyTorch. The Keras code is as follows (there's an embedding layer first, then):

model.add(Dropout(0.2))
model.add(Convolution1D(100, 10, activation='relu'))
model.add(MaxPooling1D(4, 4))
model.add(Dropout(0.2))
model.add(Convolution1D(100, 8, activation='relu'))
model.add(MaxPooling1D(2, 2))
model.add(Dropout(0.2))
model.add(Convolution1D(80, 8, activation='relu'))
model.add(MaxPooling1D(2, 2))
model.add(Dropout(0.2))
model.add(Bidirectional(LSTM(80, consume_less='gpu'), merge_mode='concat'))
model.add(Dropout(0.2))
model.add(Dense(20, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
My PyTorch code is as follows:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.dropout_one = nn.Dropout(p=0.2)
        self.conv1 = nn.Conv1d(100, 100, 10)
        self.pool1 = nn.MaxPool1d(4, 4)
        self.conv2 = nn.Conv1d(100, 100, 8)
        self.pool2 = nn.MaxPool1d(2, 2)
        self.conv3 = nn.Conv1d(100, 80, 8)
        self.bdlstm = nn.LSTM(input_size=80, hidden_size=80, num_layers=1, bidirectional=True, batch_first=True)
        self.dropout_two = nn.Dropout(p=0.5)
        self.fc1 = nn.Linear(160, 20)
        self.fc2 = nn.Linear(20, 1)

    def forward(self, x):
        x = self.dropout_one(x)
        x = self.conv1(x)
        x = F.relu(x)
        x = self.pool1(x)
        x = self.dropout_one(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = self.pool2(x)
        x = self.dropout_one(x)
        x = self.conv3(x)
        x = F.relu(x)
        x = self.pool2(x)
        x = self.dropout_one(x)
        x, hidden = self.bdlstm(x.permute(0, 2, 1))
        x = x[:, -1, :]
        x = x.view(-1, 160)
        x = self.dropout_one(x)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.dropout_two(x)
        x = self.fc2(x)
        x = F.sigmoid(x)
        return x
with
criterion = nn.BCELoss()
optimiser = optim.RMSprop(net.parameters(), lr=0.001, momentum=0.9)
Weights update in the PyTorch model, but the accuracy just hops around 0.5, whereas in Keras it does improve. Can anyone see what I'm doing wrong (I suspect the BDLSTM)?
Thank you!
|
st115489
|
Update - I tried changing the optimiser to SGD (no weights changed) and then Adam (weights change and it's actually learning).
Is there an issue with either of these optimisers, or am I missing something?
|
st115490
|
I am doing something similar with Conv1d; I would love to see the data input you used and its dimensions. My code is here:
https://github.com/QuantScientist/Deep-Learning-Boot-Camp/blob/master/day%2002%20PyTORCH%20and%20PyCUDA/PyTorch/31-PyTorch-using-CONV1D-on-one-dimensional-data.ipynb
|
st115491
|
Is there an available implementation of whitening data with ZCA as a torchvision transform?
I may try to implement it, but a problem would be saving the mean and the components computed on the training set and reusing them for the test set. The only way I see is using an up-value; maybe some of you have a better idea?
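One possible shape for this, sketched under the assumption that the statistics are fitted once on the training set (as a numpy array of flattened images) and the same fitted object is then placed in both the train and test transform pipelines, so no up-value is needed:

import numpy as np
import torch

class ZCAWhitening(object):
    # Fit once on training data; reuse the stored mean/whitening matrix everywhere.
    def __init__(self, eps=1e-5):
        self.eps = eps
        self.mean = None
        self.zca = None

    def fit(self, X):  # X: (n_samples, n_features) numpy array
        self.mean = X.mean(axis=0)
        Xc = X - self.mean
        cov = np.dot(Xc.T, Xc) / Xc.shape[0]
        U, S, _ = np.linalg.svd(cov)
        self.zca = np.dot(U, np.dot(np.diag(1.0 / np.sqrt(S + self.eps)), U.T))

    def __call__(self, tensor):  # tensor: e.g. a (C, H, W) image tensor
        flat = tensor.numpy().reshape(-1)
        white = np.dot(flat - self.mean, self.zca)
        return torch.from_numpy(white.reshape(tensor.size())).float()

The fitted instance can then go after transforms.ToTensor() in a transforms.Compose for both splits.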
|
st115492
|
I want to implement a custom activation function with learnable parameters, for example one that takes the input x and returns a polynomial of x of a specified order.

def poli_activation(x, order=2):
    input_tens = []
    # is this the way to make coeff a vector of parameters?
    coeff = torch.nn.Parameter(torch.randn(order + 1))
    # need a vector of powers of x, for example (x^2, x, 1)
    for idx in range(order):
        element = torch.pow(x, idx)
        input_tens.append(element)
    # make this vector a variable and implement the polynomial
    # as a dot product
    input_tens = torch.Tensor(input_tens)
    input_tens = torch.autograd.Variable(input_tens.cuda())
    output = torch.dot(coeff, input_tens)
    return output

Now this is 1) a very inefficient implementation of such a function, and 2) it does not work. Does anyone have an idea of how this can be efficiently implemented in PyTorch? I want the network to be able to learn the polynomial coefficients.
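A sketch of how this is usually structured (untested against this exact setup): make it an nn.Module so the coefficient vector is registered as a learnable parameter, and build the power terms with tensor ops instead of a Python list:

import torch
import torch.nn as nn

class PolyActivation(nn.Module):
    def __init__(self, order=2):
        super(PolyActivation, self).__init__()
        self.order = order
        # one learnable coefficient per power of x
        self.coeff = nn.Parameter(torch.randn(order + 1))

    def forward(self, x):
        # stack (x^0, x^1, ..., x^order) along a new trailing dimension
        powers = torch.stack([x ** i for i in range(self.order + 1)], -1)
        # weighted sum over that dimension: coeff[0] + coeff[1]*x + ...
        return (powers * self.coeff).sum(-1)

Because coeff is an nn.Parameter on a module, it shows up in model.parameters() and receives gradients like any other weight.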
|
st115493
|
Hi,
I am trying to incorporate a conv1d() layer into a network I am using for classification.
When I use it like so (two different layers with a view() reshape in between), it works:
[screenshot: two separate layers with a view() reshape in between]
However, if I incorporate the layers into a single Net (see below), then I am not sure how to re-shape inside the Sequential layer itself. Maybe a transpose? Not sure how to do it:
[screenshot: the combined Net, where the reshape inside nn.Sequential is unclear]
Full code is available here:
https://github.com/QuantScientist/Deep-Learning-Boot-Camp/blob/master/day%2002%20PyTORCH%20and%20PyCUDA/PyTorch/31-PyTorch-using-CONV1D-on-one-dimensional-data.ipynb
Many thanks
|
st115494
|
EDIT:
Fixed like so:
class Net2(nn.Module):
    def __init__(self, n_feature, n_hidden, n_output):
        super(Net2, self).__init__()
        self.n_feature = n_feature
        self.l1 = nn.Sequential(
            torch.nn.Linear(n_feature, n_hidden),
            torch.nn.BatchNorm1d(n_hidden, eps=1e-05, momentum=0.1, affine=True),
            torch.nn.LeakyReLU(0.2)
        )
        self.c1 = nn.Sequential(
            torch.nn.Conv1d(n_feature, n_feature * MULT_FACTOR, kernel_size=(5,), stride=(1,), padding=(1,)),
            torch.nn.BatchNorm1d(n_hidden, eps=1e-05, momentum=0.1, affine=True),
            torch.nn.LeakyReLU(0.2)
        )

    def forward(self, x):
        print ('x.size(): ' + str(x.size()))
        x = self.l1(x)
        print ('x.size(): ' + str(x.size()))
        # for CNN
        x = x.view(NUM_ROWS, self.n_feature, MULT_FACTOR)
        print ('x.size(): ' + str(x.size()))
        x = self.c1(x)
        print ('x.size(): ' + str(x.size()))
        x = F.sigmoid(x)
        return x

net = Net2(n_feature=N_FEATURES, n_hidden=Layer1Size, n_output=1)  # define the network
lgr.info(net)
b = net(X_tensor_train)
print ('b.size(): ' + str(b.size()))  # torch.Size([108405, 928])

Reference: https://gist.github.com/spro/c87cc706625b8a54e604fb1024106556
|
st115495
|
I want to apply a plain softmax in PyTorch, so I did it this way:

softmax = F.softmax  # or softmax = nn.Softmax()
k = torch.randn([1, 2])
print(k)           # a tensor
print(softmax(k))  # a Variable container

Interestingly, softmax(k) becomes a Variable container, which makes my next call torch.matmul(w, k) fail, because w is a plain tensor and no PyTorch function can take mixed tensor/Variable inputs.
In my view, the softmax function of PyTorch does too many things.
I also tested other functions in the nn package, like nn.CosineSimilarity, and they do not do this redundant thing.
Could someone tell me the reason? Thanks a lot.
To add: I found that every nn activation does this. So is it good coding style to make everything into a Variable?
|
st115496
|
v = torch.autograd.Variable(torch.randn([1,2]))
v_activation = torch.nn.functional.softmax(v)
print(v.data)
print(v_activation.data)
You need to take a look at the documentation and tutorials to learn how PyTorch works; then you will understand why Variable is required. Use .data to access a Variable's underlying data.
|
st115497
|
I want to apply the index from torch.max to another tensor. For example:

a = tr.FloatTensor([[4, 1], [3, 10]])
b = tr.FloatTensor([[1, 2], [3, 4]])
_, idx_a = tr.max(a, 1)  # [0, 1]
b[idx_a]  # expected result is [1, 4]
          # actual result is [[1, 2], [3, 4]]

I can do this with slightly dirty code, like

b = tr.FloatTensor([b[i][idx] for i, idx in enumerate(idx_a)])

but I want to use a built-in function/method in PyTorch. Any elegant suggestions?
BTW, I use PyTorch 0.2.0.
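One built-in that seems to fit is torch.gather, which picks one element per row along a dimension (a sketch; exact shapes may differ slightly across versions):

_, idx_a = tr.max(a, 1)                   # row-wise argmax
picked = b.gather(1, idx_a.view(-1, 1))   # [[1], [4]]
picked = picked.squeeze(1)                # [1, 4]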
|
st115498
|
Hi,
I am trying to replicate the result of the C3D model, but I found that it occupies more GPU memory than estimated.
Here is the model:

### self.features_frame
self.features_frame = [
    ### part 1
    nn.Conv3d(3, 64, kernel_size=(3, 3, 3), padding=(1, 1, 1)),
    norm_layer(64),
    nn.MaxPool3d(kernel_size=(1, 2, 2), stride=(1, 2, 2)),
    nn.ReLU(True),
    ### part 2
    nn.Conv3d(64, 128, kernel_size=(3, 3, 3), padding=(1, 1, 1)),
    norm_layer(128),
    nn.MaxPool3d(kernel_size=(2, 2, 2), stride=(2, 2, 2)),
    nn.ReLU(True),
    ### part 3
    nn.Conv3d(128, 256, kernel_size=(3, 3, 3), padding=(1, 1, 1)),
    norm_layer(256), nn.ReLU(True),
    nn.Conv3d(256, 256, kernel_size=(3, 3, 3), padding=(1, 1, 1)),
    norm_layer(256),
    nn.MaxPool3d(kernel_size=(2, 2, 2), stride=(2, 2, 2)),
    nn.ReLU(True),
    ### part 4
    nn.Conv3d(256, 256, kernel_size=(3, 3, 3), padding=(1, 1, 1)),
    norm_layer(256), nn.ReLU(True),
    nn.Conv3d(256, 256, kernel_size=(3, 3, 3), padding=(1, 1, 1)),
    norm_layer(256),
    nn.MaxPool3d(kernel_size=(2, 2, 2), stride=(2, 2, 2)),
    nn.ReLU(True),
    ### part 5
    nn.Conv3d(256, 256, kernel_size=(3, 3, 3), padding=(1, 1, 1)),
    norm_layer(256), nn.ReLU(True),
    nn.Conv3d(256, 512, kernel_size=(3, 3, 3), padding=(1, 1, 1)),
    norm_layer(512),
    nn.MaxPool3d(kernel_size=(2, 7, 7), stride=(2, 2, 2)),
    nn.ReLU(True),
]
self.features_frame = nn.Sequential(*self.features_frame)

### self.classifier
self.classifier = [
    nn.Linear(512, 128),
    norm_layer(128), nn.ReLU(True),
    nn.Linear(128, 10)
]
self.classifier = nn.Sequential(*self.classifier)

I am using PyTorch 0.2. The batch size is 1 and the input size is (1, 3, 16, 112, 112); this occupies 1035MB of GPU memory.
However, if I just change the number of channels of the conv3d layers in part 5 from 256 to 512:

### part 5
nn.Conv3d(256, 512, kernel_size=(3, 3, 3), padding=(1, 1, 1)),
norm_layer(512), nn.ReLU(True),
nn.Conv3d(512, 512, kernel_size=(3, 3, 3), padding=(1, 1, 1)),
norm_layer(512),
nn.MaxPool3d(kernel_size=(2, 7, 7), stride=(2, 2, 2)),
nn.ReLU(True),

it occupies 10013MB of GPU memory, roughly ten times 1035MB.
I have read the previous questions about GPU memory but still have no idea why it happens in this case. From my calculation, the modified network should occupy at most 4 or 5 times more memory than the previous one, not 10 times.
I'd appreciate it if someone could help. Thanks.
|
st115499
|
Maybe try disabling/enabling cudnn.benchmark mode? Depending on the algorithm used for the specific image size, it might require a lot of memory. Apart from that, I do not have other ideas at the moment.
|
st115500
|
Depends on the network, the batch size and the GPU you are using.
This link gives some measurements on Torch models (which should be somewhat similar in run-time to PyTorch).
As an illustration, in that use-case, VGG16 is 66x slower on a Dual Xeon E5-2630 v3 CPU compared to a Titan X GPU.
|
st115501
|
Using a GPU for deep learning, how much faster can you go than using a CPU alone?
|
st115502
|
Hi,
I have implemented the pack_padded_sequence version of the SNLI classifier based on the code from the examples: https://github.com/pytorch/examples/tree/master/snli.
My code is here: https://github.com/OanaCamburu/SNLI
The running time for an epoch (all else being equal) is about 4 times higher, while performance has essentially remained the same.
For the longer running time, I suppose it's because there is more switching between CPU and GPU due to the necessary sortings (https://github.com/OanaCamburu/SNLI/blob/master/train_packedseq.py#L20 and https://github.com/OanaCamburu/SNLI/blob/master/train_packedseq.py#L36) being executed on the CPU. Is it possible to make the code more efficient? Is there any intention to add a pack_padded_sequence attribute to RNNs in future releases?
For the non-improvement in performance, I know this wasn't guaranteed, but I still expected a bit of an increase. Curious if anyone has experienced the same behaviour in other tasks.
Thanks!
|
st115503
|
I wrote one of the most comprehensive deep learning tutorials for using PyTorch for Numer.AI stock market prediction.
It is a binary classification problem, and the tutorial includes Kaggle-style ROC_AUC plots, which are rarely seen in PyTorch.
Comments are welcome; I am sure I have bugs and mistakes.
github.com
QuantScientist/Deep-Learning-Boot-Camp/blob/master/day02-PyTORCH-and-PyCUDA/PyTorch/18-PyTorch-NUMER.AI-Binary-Classification-BCELoss.ipynb
Best,
|
st115504
|
Since I have not used the weights / didn't need to use them. Once I do, I shall update the tutorial.
|
st115505
|
I am trying to solve a classic unbalanced classification problem where I have class 1 and class 0 with a distribution ratio of 0.04/0.96. I need to correctly identify the minority class 1 (4% of all the data samples). For that, I created a simple feed-forward neural network using BCECriterion as the loss function. I tried random undersampling and an ensemble approach, both giving me around a 5-7% accuracy improvement. The last thing I want to try is a cost-sensitive loss function. I know that BCECriterion has a weights parameter, but it is somewhat “ill-defined” and difficult to understand when reading the source code. I naively assumed that giving a tensor [0.9, 0.1] to the binary criterion would work, but it doesn't. Can somebody please explain the purpose of the weights parameter in this criterion, and, given the problem above, how I should apply weights to BCECriterion?
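As far as I understand it, the weight argument of BCECriterion/nn.BCELoss is per-element of the input (it must match a batch of outputs), not a per-class pair, which is why [0.9, 0.1] doesn't behave as expected. A hedged sketch of one common workaround, rebuilding the weight tensor from the targets each batch (the 0.9/0.1 values are illustrative):

def weighted_bce(output, target, w_pos=0.9, w_neg=0.1):
    # per-element weights: w_pos where target == 1, w_neg where target == 0
    weights = target * w_pos + (1 - target) * w_neg
    criterion = nn.BCELoss(weight=weights.data)
    return criterion(output, target)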
|
st115506
|
Have you tried balancing the data in sk-learn prior to using it in PyTorch?
One example is provided here:
nick becker – 23 Dec 16
The Right Way to Oversample in Predictive Modeling
Model Evaluation, Oversampling, Predictive Modeling
https://www.kaggle.com/varunsharma3/credit-card-fraud-detection-using-smote-0-99-auc
|
st115507
|
I am aware of the imbalanced-learn lib and yes, I tried using it, but unfortunately my model preprocessing is different from what they use, and I have no confidence it would dramatically improve the score because I am already using random undersampling, and I also implemented ensemble learning, trying 5 up to 40 models trained on different parts of the data with majority voting. It improved the F1 score only slightly. Using sklearn, however, does not answer the question.
So the last thing I want to try is loss-function weights, to be sure that a feed-forward network is not capable of classifying this data well enough.
|
st115508
|
Hello,
I found in Keras a nice multilayer perceptron of the form:

model.add(Dense(512, input_shape=(784,)))
model.add(Activation('tanh'))
model.add(Dropout(0.2))
model.add(Dense(512))
model.add(Activation('linear'))
model.add(Dropout(0.2))
model.add(Dense(512))
model.add(Activation('sigmoid'))
model.add(Dropout(0.2))
model.add(Dense(512))
model.add(Activation('linear'))
model.add(Dropout(0.2))
model.add(Dense(10))
model.add(Activation('softmax'))

I was not sure how to do the linear layers in PyTorch; trying to mimic the tutorial, I have:

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.hidden = nn.Linear(784, 512)
        self.hidden2 = nn.Linear(512, 512)
        self.hidden3 = nn.Linear(512, 10)
        self.out = nn.Linear(10, 1)

    def forward(self, x):
        x = F.tanh(self.hidden(x))
        x = F.dropout(self.hidden(x), 0.2)
        x = F.sigmoid(self.hidden(x))
        x = F.dropout(self.hidden(x), 0.2)
        x = F.softmax(self.hidden(x))
        x = self.out(x)

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

But:
1. How do you create the purely linear layers?
2. As a follow-up, nn itself has linear and nonlinear methods, but to do softmax etc. I still use linear (per the example); what is the nonlinear layer functionality for?
3. Can someone describe the purpose/point of the num_flat_features function?
4. Am I setting up the dropout right?
My apologies if these are basic questions, but I couldn't quite find the right examples. In this case I'm just trying to create a multilayer perceptron with many nonlinear layers, so sometimes I am not sure if some of the functionality in the examples is specific to convolutional or more complicated nets.
|
st115509
|
The basic building blocks of deep networks are of the form: Linear layer + Point-wise non-linearity / activation.
Keras rolls these two into one, called “Dense.”
(I’m not sure why the Keras example you have follows Dense with another activation, that doesn’t make sense to me.)
To make a simple multi-layer perceptron in PyTorch you should stack nn.Linear (a simple linear layer that computes w^T x + b) and nn.ReLU.
If you’d like a softmax followed by cross entropy loss at the end, you can use CrossEntropyLoss (which performs the softmax and the loss in one function for numerical reasons).
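To make that equivalence concrete (a quick sketch):

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

logits = Variable(torch.randn(3, 5))            # raw scores, no softmax applied
target = Variable(torch.LongTensor([1, 0, 4]))

loss_a = nn.CrossEntropyLoss()(logits, target)
loss_b = F.nll_loss(F.log_softmax(logits), target)
# loss_a and loss_b compute the same value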
|
st115510
|
Thank you for responding.
In my setup I would like a set of linear layers with a nonlinear but continuously differentiable activation function, so
layer 1: sigmoid(w_1^T x + b_1)
layer 2: softmax(w_2^T y_1 + b_2), etc.
Am I doing this wrong in the code? Instead of nn.Linear should I use nn.Sigmoid etc.?
And what should the F. function be in the forward pass for the linear part?
This is what I was going by; it is the only example of a PyTorch multilayer perceptron I found:
Getting started: Basic MLP example (my draft)?
Hi, I've gone through the PyTorch tutorials, and looked at a couple of examples, and I'm still having trouble getting started - I'm just trying to make a basic MLP for now. What I have below is my (existing) Keras version, and then an attempt at a PyTorch version, cobbled together from trying to read the docs and posts on this forum… still not finished because I'm not sure I'm doing this right.
Can someone tell me if I'm on the right track?
This MLP is intended to take, as input, a vector that's…
thanks
|
st115511
|
There is nn.Sequential in PyTorch; you can add modules like in Keras.
1, 2: I don't understand the questions.
3: I don't know.
4: the dropout functional should be used as: F.dropout(x, 0.2, self.training)
|
st115512
|
Well, is this code correct for constructing several layers with the nonlinear activation functions tanh, sigmoid, and softmax?

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.hidden = nn.Linear(784, 512)
        self.hidden2 = nn.Linear(512, 512)
        self.hidden3 = nn.Linear(512, 10)
        self.out = nn.Linear(10, 1)

    def forward(self, x):
        x = F.tanh(self.hidden(x))
        x = F.dropout(self.hidden(x), 0.2)
        x = F.sigmoid(self.hidden(x))
        x = F.dropout(self.hidden(x), 0.2)
        x = F.softmax(self.hidden(x))
        x = self.out(x)
|
st115513
|
You use the same layer over and over (self.hidden).
The reason why you need to instantiate the layers in the __init__ method is that they have parameters (the weights) that have to be bound to the object.
In the forward method you can use your layers and apply functions (without parameters) like relu, softmax, or tanh.
|
st115514
|
OK, I tried interleaving this model with the MNIST example. It seems the model is not correctly implemented.
The code is:
from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.autograd import Variable

parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
parser.add_argument('--batch-size', type=int, default=64, metavar='N',
                    help='input batch size for training (default: 64)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
                    help='input batch size for testing (default: 1000)')
parser.add_argument('--epochs', type=int, default=10, metavar='N',
                    help='number of epochs to train (default: 10)')
parser.add_argument('--no-cuda', action='store_true', default=False,
                    help='disables CUDA training')
parser.add_argument('--seed', type=int, default=1, metavar='S',
                    help='random seed (default: 1)')
parser.add_argument('--log-interval', type=int, default=10, metavar='N',
                    help='how many batches to wait before logging training status')
args = parser.parse_args()
args.cuda = not args.no_cuda and torch.cuda.is_available()

torch.manual_seed(args.seed)
if args.cuda:
    torch.cuda.manual_seed(args.seed)

kwargs = {'num_workers': 1, 'pin_memory': True} if args.cuda else {}
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=args.batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False, transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])),
    batch_size=args.batch_size, shuffle=True, **kwargs)

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.hidden = nn.Linear(784, 512)
        self.hidden2 = nn.Linear(512, 512)
        self.hidden3 = nn.Linear(512, 10)
        self.out = nn.Linear(10, 1)

    def forward(self, x):
        x = F.tanh(self.hidden(x))
        #x = F.dropout(self.hidden(x), 0.2)
        x = F.sigmoid(self.hidden2(x))
        #x = F.dropout(self.hidden(x), 0.2)
        x = F.softmax(self.hidden3(x))
        x = self.out(x)

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

model = Net()
print(model)
if args.cuda:
    model.cuda()

optimizer = optim.SGD(model.parameters(), lr=.01, momentum=0)

def train(epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        if args.cuda:
            data, target = data.cuda(), target.cuda()
        data, target = Variable(data), Variable(target)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % args.log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.data[0]))

def test():
    model.eval()
    test_loss = 0
    correct = 0
    for data, target in test_loader:
        if args.cuda:
            data, target = data.cuda(), target.cuda()
        data, target = Variable(data, volatile=True), Variable(target)
        output = model(data)
        test_loss += F.nll_loss(output, target, size_average=False).data[0]  # sum up batch loss
        pred = output.data.max(1)[1]  # get the index of the max log-probability
        correct += pred.eq(target.data).cpu().sum()
    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))

for epoch in range(1, args.epochs + 1):
    train(epoch)
    test()
the output is:
Net (
  (hidden): Linear (784 -> 512)
  (hidden2): Linear (512 -> 512)
  (hidden3): Linear (512 -> 10)
  (out): Linear (10 -> 1)
)
Traceback (most recent call last):
  File "", line 1, in
  File "mymodel.py", line 145, in
    train(epoch)
  File "mymodel.py", line 116, in train
    output = model(data)
  File "/home/slava/dev/miniconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "mymodel.py", line 85, in forward
    x = F.tanh(self.hidden(x))
  File "/home/slava/dev/miniconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/slava/dev/miniconda2/lib/python2.7/site-packages/torch/nn/modules/linear.py", line 54, in forward
    return self._backend.Linear()(input, self.weight, self.bias)
  File "/home/slava/dev/miniconda2/lib/python2.7/site-packages/torch/nn/functions/linear.py", line 10, in forward
    output.addmm(0, 1, input, weight.t())
RuntimeError: matrices expected, got 4D, 2D tensors at /b/wheel/pytorch-src/torch/lib/TH/generic/THTensorMath.c:1232
|
st115515
|
You want this in forward:

def forward(self, x):
    x = F.tanh(self.hidden(x))
    x = F.dropout(x, 0.2)
    x = F.sigmoid(self.hidden2(x))
    x = F.dropout(x, 0.2)
    x = F.softmax(self.hidden3(x))
    x = self.out(x)
    return x
|
st115516
|
Still getting this error:
Traceback (most recent call last):
  File "", line 1, in
  File "mymodel.py", line 146, in
    train(epoch)
  File "mymodel.py", line 117, in train
    output = model(data)
  File "/home/slava/dev/miniconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "mymodel.py", line 85, in forward
    x = F.tanh(self.hidden(x))
  File "/home/slava/dev/miniconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/slava/dev/miniconda2/lib/python2.7/site-packages/torch/nn/modules/linear.py", line 54, in forward
    return self._backend.Linear()(input, self.weight, self.bias)
  File "/home/slava/dev/miniconda2/lib/python2.7/site-packages/torch/nn/functions/linear.py", line 10, in forward
    output.addmm(0, 1, input, weight.t())
RuntimeError: matrices expected, got 4D, 2D tensors at /b/wheel/pytorch-src/torch/lib/TH/generic/THTensorMath.c:1232
|
st115517
|
Hi,
if your inputs are arranged as pictures and you want to feed them to a linear layer, you want to flatten them first, e.g. x = x.view(-1, 784). Would that help in your case?
Best regards
Thomas
|
st115518
|
Thank you, good find. I put this at the start of the forward call.
Now I am having an error calculating the loss function:
Traceback (most recent call last):
  File "", line 1, in
  File "mymodel.py", line 147, in
    train(epoch)
  File "mymodel.py", line 119, in train
    loss = F.nll_loss(output, target)
  File "/home/slava/dev/miniconda2/lib/python2.7/site-packages/torch/nn/functional.py", line 501, in nll_loss
    return f(input, target)
  File "/home/slava/dev/miniconda2/lib/python2.7/site-packages/torch/nn/_functions/thnn/auto.py", line 41, in forward
    output, *self.additional_args)
RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /b/wheel/pytorch-src/torch/lib/THNN/generic/ClassNLLCriterion.c:57
|
st115519
|
I'm sorry, I'm afraid I do not understand. What is the score vector and where do I pass it? Note that the rest of the code I took from the MNIST example in the PyTorch package (main.py); only the net I use is different, so the training and testing code should be correct.
|
st115520
|
I changed the softmax to log_softmax, but it does not change the output; I am still having a problem calculating the loss function:
Traceback (most recent call last):
  File "", line 1, in
  File "mymodel.py", line 147, in
    train(epoch)
  File "mymodel.py", line 119, in train
    loss = F.nll_loss(output, target)
  File "/home/slava/dev/miniconda2/lib/python2.7/site-packages/torch/nn/functional.py", line 501, in nll_loss
    return f(input, target)
  File "/home/slava/dev/miniconda2/lib/python2.7/site-packages/torch/nn/_functions/thnn/auto.py", line 41, in forward
    output, *self.additional_args)
RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /b/wheel/pytorch-src/torch/lib/THNN/generic/ClassNLLCriterion.c:57
|
st115521
|
It should be set up like the examples in the PyTorch docs, i.e.:

# input is of size nBatch x nClasses = 3 x 5
input = autograd.Variable(torch.randn(3, 5))
# each element in target has to have 0 <= value < nClasses
target = autograd.Variable(torch.LongTensor([1, 0, 4]))
output = F.nll_loss(F.log_softmax(input), target)
output.backward()

Can you provide a link to the code you have now? Something like this:

def forward(self, x):
    x = x.view(-1, 784)
    x = F.tanh(self.hidden(x))
    x = F.dropout(x, 0.2)
    x = F.sigmoid(self.hidden2(x))
    x = F.dropout(x, 0.2)
    x = F.softmax(self.hidden3(x))
    x = self.out(x)
    return F.log_softmax(x)

The F.log_softmax has to come after the output of the last linear layer.
|
st115522
|
OK, here is the code now:
from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.autograd import Variable

parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
parser.add_argument('--batch-size', type=int, default=64, metavar='N',
                    help='input batch size for training (default: 64)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
                    help='input batch size for testing (default: 1000)')
parser.add_argument('--epochs', type=int, default=10, metavar='N',
                    help='number of epochs to train (default: 10)')
parser.add_argument('--no-cuda', action='store_true', default=False,
                    help='disables CUDA training')
parser.add_argument('--seed', type=int, default=1, metavar='S',
                    help='random seed (default: 1)')
parser.add_argument('--log-interval', type=int, default=10, metavar='N',
                    help='how many batches to wait before logging training status')
args = parser.parse_args()
args.cuda = not args.no_cuda and torch.cuda.is_available()

torch.manual_seed(args.seed)
if args.cuda:
    torch.cuda.manual_seed(args.seed)

kwargs = {'num_workers': 1, 'pin_memory': True} if args.cuda else {}
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=args.batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False, transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])),
    batch_size=args.batch_size, shuffle=True, **kwargs)

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.hidden = nn.Linear(784, 512)
        self.hidden2 = nn.Linear(512, 512)
        self.hidden3 = nn.Linear(512, 10)
        self.out = nn.Linear(10, 1)

    def forward(self, x):
        x = x.view(-1, 784)
        x = F.tanh(self.hidden(x))
        x = F.dropout(x, 0.2)
        x = F.sigmoid(self.hidden2(x))
        x = F.dropout(x, 0.2)
        x = F.softmax(self.hidden3(x))
        x = self.out(x)
        return F.log_softmax(x)

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

model = Net()
print(model)
if args.cuda:
    model.cuda()

optimizer = optim.SGD(model.parameters(), lr=.01, momentum=0)

def train(epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        if args.cuda:
            data, target = data.cuda(), target.cuda()
        data, target = Variable(data), Variable(target)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(F.log_softmax(output), target)
        loss.backward()
        optimizer.step()
        if batch_idx % args.log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.data[0]))

def test():
    model.eval()
    test_loss = 0
    correct = 0
    for data, target in test_loader:
        if args.cuda:
            data, target = data.cuda(), target.cuda()
        data, target = Variable(data, volatile=True), Variable(target)
        output = model(data)
        test_loss += F.nll_loss(output, target, size_average=False).data[0]  # sum up batch loss
        pred = output.data.max(1)[1]  # get the index of the max log-probability
        correct += pred.eq(target.data).cpu().sum()
    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))

for epoch in range(1, args.epochs + 1):
    train(epoch)
    test()
this is the output:
Net (
  (hidden): Linear (784 -> 512)
  (hidden2): Linear (512 -> 512)
  (hidden3): Linear (512 -> 10)
  (out): Linear (10 -> 1)
)
Traceback (most recent call last):
  File "", line 1, in
  File "mymodel.py", line 147, in
    train(epoch)
  File "mymodel.py", line 119, in train
    loss = F.nll_loss(F.log_softmax(output), target)
  File "/home/slava/dev/miniconda2/lib/python2.7/site-packages/torch/nn/functional.py", line 501, in nll_loss
    return f(input, target)
  File "/home/slava/dev/miniconda2/lib/python2.7/site-packages/torch/nn/_functions/thnn/auto.py", line 41, in forward
    output, *self.additional_args)
RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /b/wheel/pytorch-src/torch/lib/THNN/generic/ClassNLLCriterion.c:57
For comparison, here is the main.py file that I am basing my code on:
from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.autograd import Variable

# Training settings
parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
parser.add_argument('--batch-size', type=int, default=64, metavar='N',
                    help='input batch size for training (default: 64)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
                    help='input batch size for testing (default: 1000)')
parser.add_argument('--epochs', type=int, default=10, metavar='N',
                    help='number of epochs to train (default: 10)')
parser.add_argument('--lr', type=float, default=0.01, metavar='LR',
                    help='learning rate (default: 0.01)')
parser.add_argument('--momentum', type=float, default=0.5, metavar='M',
                    help='SGD momentum (default: 0.5)')
parser.add_argument('--no-cuda', action='store_true', default=False,
                    help='disables CUDA training')
parser.add_argument('--seed', type=int, default=1, metavar='S',
                    help='random seed (default: 1)')
parser.add_argument('--log-interval', type=int, default=10, metavar='N',
                    help='how many batches to wait before logging training status')
args = parser.parse_args()
args.cuda = not args.no_cuda and torch.cuda.is_available()

torch.manual_seed(args.seed)
if args.cuda:
    torch.cuda.manual_seed(args.seed)

kwargs = {'num_workers': 1, 'pin_memory': True} if args.cuda else {}
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=args.batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False, transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])),
    batch_size=args.batch_size, shuffle=True, **kwargs)

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x)

model = Net()
if args.cuda:
    model.cuda()

optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)

def train(epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        if args.cuda:
            data, target = data.cuda(), target.cuda()
        data, target = Variable(data), Variable(target)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % args.log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.data[0]))

def test(epoch):
    model.eval()
    test_loss = 0
    correct = 0
    for data, target in test_loader:
        if args.cuda:
            data, target = data.cuda(), target.cuda()
        data, target = Variable(data, volatile=True), Variable(target)
        output = model(data)
        test_loss += F.nll_loss(output, target).data[0]
        pred = output.data.max(1)[1]  # get the index of the max log-probability
        correct += pred.eq(target.data).cpu().sum()
    test_loss /= len(test_loader)  # loss function already averages over batch size
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))

for epoch in range(1, args.epochs + 1):
    train(epoch)
    test(epoch)
|
st115523
|
At first glance I see you've got log_softmax twice: once in def train,
loss = F.nll_loss(F.log_softmax(output), target)
and once in the return of forward.
You can change the train line to:
loss = F.nll_loss(output, target)
|
st115524
|
Oh, it should be log softmax? Interesting. That information is about 15 minutes too late for the tutorial I just made.
Edit: using log_softmax does effectively train a ton better.
|
st115525
|
Hi! I'm just curious why it's made this way - it seems counterintuitive to me, and sometimes leads to conceptual confusion. Is there an idea behind it?
|
st115526
|
The reason is that the cudnn backend defines the parameter ordering in this way:
http://docs.nvidia.com/deeplearning/sdk/cudnn-user-guide/index.html#cudnnGetRNNWorkspaceSize
There is a 'batch_first' option which you can set to True so that your input tensor can have its first dimension equal to the batch size.
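For example (a quick sketch):

import torch
import torch.nn as nn
from torch.autograd import Variable

rnn = nn.LSTM(input_size=10, hidden_size=20, batch_first=True)
x = Variable(torch.randn(32, 5, 10))  # (batch, seq_len, features) instead of (seq_len, batch, features)
output, (h_n, c_n) = rnn(x)           # output: (32, 5, 20); h_n and c_n remain (num_layers, batch, hidden)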
|
st115527
|
Hi, I was implementing a Time-Delay Neural Network as a warmup to using PyTorch and have gotten it to work on a CPU, but the same model won't work on the GPU because one of my variables is not a .cuda-type variable.

class TDNN(nn.Module):
    def __init__(self, ......):
        self.context = Variable(torch.LongTensor(context))
        self.kernel = nn.Parameter(torch.Tensor(output_dim, input_dim, self.kernel_width).normal_(0, stdv))
        self.bias = nn.Parameter(torch.Tensor(output_dim).normal_(0, stdv))
In the above snippet, I’m pretty sure that after calling net = TDNN().cuda() the kernel and bias are now .cuda type tensors but context is not. I tried including the context as a parameter but then when I initialize my optimizer like: optimizer = optim.Adam(net.parameters()) it gives me an error since I don’t want the gradient to be computed for the context variable.
To give some more details: the context tensor is being used in a torch.index_select call. This is why I have it initialized as a LongTensor.
I could probably figure out a hack to solve this but I was wondering what the best practice was for a problem like this.
Thanks!
|
st115528
|
So one example of a hack that would work is the following.
I introduced a self.cuda_flag:

if type(self.bias.data) == torch.cuda.FloatTensor and self.cuda_flag == False:
    self.context = self.context.cuda()
    self.cuda_flag = True

While this does solve the problem, I'm wondering if there is a more graceful and efficient way to do this.
|
st115529
|
you could use register_buffer:

class TDNN(nn.Module):
    def __init__(self, ......):
        self.register_buffer('context', torch.LongTensor(context))
        self.kernel = nn.Parameter(torch.Tensor(output_dim, input_dim, self.kernel_width).normal_(0, stdv))
        self.bias = nn.Parameter(torch.Tensor(output_dim).normal_(0, stdv))
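A buffer registered this way moves with the module and is serialized with it; a quick sketch of what that buys you:

net = TDNN(...)   # constructor arguments elided, as in the snippet above
net.cuda()        # net.context is now a torch.cuda.LongTensor, no flag needed
print(type(net.context))
print('context' in net.state_dict())  # True: buffers are saved alongside parameters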
|
st115530
|
Thanks!
I had a feeling I was missing something when I saw that the description of nn.Module.cuda() states that it moves the module's parameters and buffers to the GPU.
|
st115531
|
Excuse me,brother.I learned the tdnn recently. And i want to use it to implement speech separation.
This is my graduation design.I learned the paper 《A time delay neural network architecture for efficient modeling of long temporal contexts》 published in 2015.I want to implement the architecture, But I don’t know How. can you send me your TDNN project? my contact information is [email protected] am Chinese.My 5 English is poor,Please forgive me and thank you very much.Looking forward to your reply. I really appreciate it.
|
st115532
|
Hello
Torch newbie here. I installed Torch on an Ubuntu 16.04 (x64) computer running Python 3.6. I've been trying to run a script from a project from Udemy. The torch models import fine, but as soon as I get to the section where I need to run commands using them, the Python kernel exits. I've seen quite a few posts (via Google) regarding this issue but no suggested resolutions.
Any ideas from you folks? Thanks,
Clive
|
st115533
|
I have a dataset that contains multi-modal images of different sizes, and the corresponding modal image pairs need to undergo the same random transformation, so they need to be produced at the same time.
I found that most of the examples use torch.utils.data.DataLoader to produce (images, labels), but how do I produce a training sample ((img1, img2, ...), label), where img1, img2, and img3 have different sizes and belong to the same category?
Any advice would be appreciated.
Thank you!
|
st115534
|
Hi.
Is it possible to get PyTorch compiled for CUDA version 6.5?
The link https://pytorch.org/binaries seems to be broken…
PS: I have an old laptop with a GeForce 330M, and the newest driver available for Ubuntu 16.04 is nvidia-340, so the max version of CUDA I can use is 6.5.
|
st115535
|
The minimum CUDA version we want to support is 7.0.
You can try building from source, but we don't plan to support 6.5.
|
st115536
|
Hi, to install PyTorch from source we need to do:
"conda install -c soumith magma-cuda80 # or magma-cuda75 if CUDA 7.5"
but I only have CUDA 7.0. Could you help me? Thank you.
|
st115537
|
Hi,
I get the following assertion error at the start of training (on GPU). But restarting the training or changing GPUs sometimes continues the training without any errors. The same training procedure seems to work for other parameters without issue.
/tmp/luarocks_cutorch-scm-1-4771/cutorch/lib/THC/THCTensorIndex.cu:321: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 1, SrcDim = 1, IdxDim = -2]: block: [0,0,0], thread: [0,0,0] Assertion srcIndex < srcSelectDimSize failed.
|
st115538
|
I am trying to build a network similar to this figure:
Here is the code for training:

import torch
import torchvision
import torch.nn as nn
import numpy as np
import torch.utils.data as data
import torchvision.transforms as transforms
import torchvision.datasets as dsets
from torch.autograd import Variable
import pdb

class RegNet(nn.Module):
    def __init__(self):
        super(RegNet, self).__init__()
        self.Reg_x = nn.Linear(4, 4)
        self.Reg_y = nn.Linear(4, 3)

    def forward(self, x):
        x_x = self.Reg_x(x)
        x_y = self.Reg_y(x)
        return x_x, x_y

# Input
z = Variable(torch.ones(5, 4), requires_grad=True)
# Rep Layer
Rep = nn.Linear(4, 4)
# Reg Layer
Reg = RegNet()

label_x = Variable(torch.ones(5, 4))
label_y = Variable(torch.zeros(5, 3))
criterion = nn.MSELoss()
optimizer1 = torch.optim.SGD(Rep.parameters(), lr=0.01)
optimizer2 = torch.optim.SGD(Reg.parameters(), lr=0.01)

# train
# feed-forward
rep_out = Rep(z)
pred_x, pred_y = Reg(rep_out)
loss_x = criterion(pred_x, label_x)
loss_y = criterion(pred_y, label_y)
loss = loss_x + loss_y
loss.backward()

I have a similar implementation in Torch. Comparing performance, the predictions pred_y are consistent with the Torch implementation, while pred_x is way off. Is there something incorrect in my implementation?
|
st115539
|
I'm trying to implement some customized convolutions and I need the im2col function.
I checked the implementation in SpatialDilatedConvolution.c, where the call is written as:

THNN_(col2im)(
    THTensor_(data)(gradColumns),
    nInputPlane, inputHeight, inputWidth, outputHeight, outputWidth,
    kH, kW, padH, padW, dH, dW,
    dilationH, dilationW,
    THTensor_(data)(gradInput_n)
);

Since I'm just using the Float type, I tried THNN_Floatim2col, but it does not work.
My questions are:
1. How can I call im2col in C extensions with the Float type (GPU and CPU versions)?
2. Do I need to include any header files?
|
st115540
|
What is the application scenario for torch.Storage? When should I use it? I am just beginning to learn PyTorch.
|
st115541
|
Hi,
torch.Storage is the data structure underlying a torch.Tensor.
You can think of the storage as a 1D array of data in memory, and of a tensor as a fancy multidimensional view used to interpret and operate on that memory.
If you are using PyTorch for neural networks, you should not need to use storages directly.
|
st115542
|
albanD:
Hi,
torch.Storage is the data structure underlying a torch.Tensor.
You can think of the storage as a 1D array of data in memory, and of a tensor as a fancy multidimensional view used to interpret and operate on that memory.
If you are using PyTorch for neural networks, you should not need to use storages directly.
Thank you. Can it only be a 1D array?
For example: torch.FloatStorage(3)
|
st115543
|
albanD:
.storage()
Thank you.
I think I understand now. What are the advanced uses of it?
|
st115544
|
There is not really an advanced usage of it; it's one of the layers that makes tensors so generic.
It allows multiple tensors to share the same storage, for example, or lets some operations be handled more efficiently.
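As a quick illustration of the sharing (a minimal sketch; the values are arbitrary):
import torch

t = torch.FloatTensor([1, 2, 3, 4, 5, 6])
v = t.view(2, 3)    # same underlying storage, different shape
v[0, 0] = 100.0
print(t[0])         # 100.0 -- both tensors see the change
print(t.storage())  # the shared 1D storage holding all 6 floats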
|
st115545
|
albanD:
There is not really an advanced usage of it; it's one of the layers that makes tensors so generic.
It allows multiple tensors to share the same storage, for example, or lets some operations be handled more efficiently.
I see. Thank you very much
|
st115546
|
Hi
I am using a CNN to classify text. The CNN input dimensions are BxM, where B is the batch size and M is the number of words/features (zero-padded to the length of the longest example in the batch). M varies from batch to batch. I would like to batch-normalize the input before the nn.Embedding() layer. How can I do this if I do not know the size of M a priori when setting the model up?
class Testmodel(nn.Module):
    def __init__(self, args):
        super(Testmodel, self).__init__()
        self.args = args
        V = args.embed_num
        D = args.embed_dim
        self.embed = nn.Embedding(V, D)
        ......
        self.fc1 = nn.Linear(1000, C)
and the forward
def forward(self, x):
    x = self.embed(x)
    ....
    logit = self.fc1(x)
    return logit
Thank you
|
st115547
|
It seems to me that batch norm is not applicable to your situation.
For the training phase, it could actually work. In practice B and M can change from one batch to another.
For the test phase, it wouldn't be possible. A CNN's output for a given example is always independent of the other examples: if you predict on image X1 alone, it gives you a result y1; if you predict on a batch [X1, X2], you want the output for X1 to be y1, the same as before, not some value dependent on X2.
Therefore, during the test phase, the mean and variance vectors used by batch norm are fixed; thus, their size (M) is also fixed.
EDIT: Here I assume some basic understanding of batch norm. See this article I’ve written for more details: https://vitalab.github.io/deep-learning/2017/02/09/batch-norm.html. You can also ask me questions here if you feel my answer is not so clear.
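To make the size constraint concrete, here is a small sketch (the sizes are arbitrary): nn.BatchNorm1d allocates its running mean/variance at construction time, so the normalized dimension cannot change afterwards.
import torch
import torch.nn as nn
from torch.autograd import Variable

bn = nn.BatchNorm1d(4)            # running stats are created with size 4
x = Variable(torch.randn(8, 4))   # batch of 8 examples, feature size 4
print(bn(x).size())               # (8, 4)
# bn(Variable(torch.randn(8, 5))) # would fail: stats were sized for M = 4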
|
st115548
|
Hi Carl
I understand the point you make. It makes sense.
Thank you for your help
|
st115549
|
I loaded a Torch7 model into PyTorch. When I call model.updateParameters(0.1) to update the parameters of the model (a typical CNN), I get an error like the one below. Any ideas why?
TypeError Traceback (most recent call last)
in ()
62
63
--> 64 tissue_fate_3d.updateParameters(0.01)
65
66 # get the parameter and gradients (faltten version)
/usr/local/lib/python2.7/dist-packages/torch/legacy/nn/Container.pyc in updateParameters(self, learningRate)
30
31 def updateParameters(self, learningRate):
--> 32 self.applyToModules(lambda m: m.updateParameters(learningRate))
33
34 def training(self):
/usr/local/lib/python2.7/dist-packages/torch/legacy/nn/Container.pyc in applyToModules(self, func)
24 def applyToModules(self, func):
25 for module in self.modules:
--> 26 func(module)
27
28 def zeroGradParameters(self):
/usr/local/lib/python2.7/dist-packages/torch/legacy/nn/Container.pyc in (m)
30
31 def updateParameters(self, learningRate):
--> 32 self.applyToModules(lambda m: m.updateParameters(learningRate))
33
34 def training(self):
/usr/local/lib/python2.7/dist-packages/torch/legacy/nn/Module.pyc in updateParameters(self, learningRate)
77
78 def updateParameters(self, learningRate):
--> 79 params, gradParams = self.parameters()
80 if params:
81 for p, gp in zip(params, gradParams):
TypeError: 'NoneType' object is not iterable
|
st115550
|
I am creating a custom loss function and pulled the relevant test code out of PyTorch to test it.
I get a failure when the function check_criterion_jacobian is called.
A small example demonstrating the problem can be found here
gist.github.com
https://gist.github.com/mellorjc/1fd19932e6ae92d6d0b4da3fa3ed70d6
example.py
from torch.autograd import Variable
import torch
from torch.nn import functional as F

def astype(val, typelike):
    return val.type('torch.' + type(typelike.data).__name__)

class TestLoss(torch.nn.Module):
    def __init__(self, rg, w):
(truncated; see the gist for the full file)
example_test.py
from torch.autograd import Variable, gradcheck
from torch.autograd.gradcheck import gradgradcheck
import argparse
import unittest
import sys
import tempfile
from copy import deepcopy
from itertools import product
import torch
This file has been truncated. show original
In the example I expand the input and then perform an elementwise multiplication between a simple function of it and a fixed array. The loss then returns the cross entropy between the target and the result of the elementwise multiplication.
When I remove the elementwise multiplication I do not get the error.
I thought at first that I might need to set requires_grad for the Variable wrapping the fixed array, but the error occurs irrespective of the value of requires_grad.
How do I avoid this error?
|
st115551
|
Is there any example available online of how to do training with models from torch.legacy.nn?
|
st115552
|
I want to train my network like so:
epoch 1 - 10 learning rate = 0.001
epoch 10 - 90 learning rate = 0.1
epoch 90 - 120 learning rate = 0.01
epoch 120 - 200 learning rate = 0.001
The reason why I need a small learning rate at the beginning is to kickstart the network into training. I tried using LambdaLR to define
lambda1 = lambda epoch: epoch * 100
lambda2 = lambda epoch: epoch * 0.1
but I cannot specify the milestones. On the other hand, MultiStepLR does not let you use multiple gammas. Is there a way around this?
|
st115553
|
I can think of a dirty way to define lambda for LambdaLR:
lambda epoch: 0.001 if 1 <= epoch < 10 else 0.1 if 10 <= epoch < 90 else 0.01 if 90 <= epoch < 120 else 0.001
Alternatively, you can create your own learning rate scheduler by inheriting from the torch.optim.lr_scheduler._LRScheduler class.
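For example, a rough sketch of the first option (untested; note that LambdaLR multiplies the optimizer's base lr by the lambda's return value, so the base lr is set to 1.0 here and the lambda returns the absolute rate; the model is a placeholder):
import torch.nn as nn
import torch.optim as optim
from torch.optim.lr_scheduler import LambdaLR

model = nn.Linear(10, 2)  # hypothetical model
optimizer = optim.SGD(model.parameters(), lr=1.0)
schedule = lambda epoch: 0.001 if epoch < 10 else \
                         0.1 if epoch < 90 else \
                         0.01 if epoch < 120 else 0.001
scheduler = LambdaLR(optimizer, lr_lambda=schedule)

for epoch in range(200):
    scheduler.step()
    # ... train for one epoch ...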
|
st115554
|
Let input be a tensor, dim the dimension to mask, and mask a ByteTensor, such that the following statement is true:
len(mask.size())==1 and input.size(dim)==mask.size(0)
I wrote this simple function for this task,
def masked_index(input, dim, mask):
    assert(len(mask.size()) == 1 and input.size(dim) == mask.size(0))
    sizes = input.size()
    for i in xrange(len(sizes) - 1):
        mask = mask.unsqueeze(1)
    mask = mask.expand_as(input)
    return input[mask].view(-1, sizes[1], sizes[2])
However, I don't know if there is a better solution for this.
The gist is that sometimes we want to select indices on a dimension using a mask (ByteTensor), which usually comes from comparison ops (e.g. torch.eq()), instead of indices (LongTensor).
@Soumith_Chintala Any comment is welcome!
THIS FUNCTION HAS A BUG! PLEASE SEE THE COMMENT BELOW.
|
st115555
|
The masked_index function does not work properly if dim != 0. One of my colleagues suggested this function:
def masked_index(input, dim, mask):
    assert len(mask.size()) == 1 and input.size(dim) == mask.size(0), \
        '{}!=1 or {}!={}'.format(len(mask.size()), input.size(dim), mask.size(0))
    indices = torch.arange(0, mask.size(0))[mask].long()
    return input.index_select(dim, indices)
Using torch.arange, we can get the corresponding indices easily. (ref. range vs. arange)
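A quick usage sketch of the fixed function (the values are arbitrary):
import torch

x = torch.randn(4, 3)
mask = torch.ByteTensor([1, 0, 1, 0])  # e.g. from a comparison op
rows = masked_index(x, 0, mask)        # the function defined above
print(rows.size())                     # (2, 3): rows 0 and 2 of x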
|
st115556
|
Some little remarks:
you could use mask.dim() rather than len(mask.size()), I think
you could use mask.nonzero() rather than torch.arange(0, mask.size(0))[mask].long(); it gives the same result but is much more explicit!
All in all, I'm not sure this requires a separate function, since you can simply write:
input.index_select(dim, mask.nonzero())
Hope this helps !
|
st115557
|
I used those expressions and encountered some problems.
My mask is a Variable; when I use mask.nonzero() it errors with Variable object has no attribute 'nonzero';
when I use mask.data.nonzero(), it shows
{RuntimeError} invalid argument 3: expecting vector of indices at /opt/conda/conda-bld/pytorch_1502006348621/work/torch/lib/THC/generic/THCTensorIndex.cu:405
So I used input.index_select(dim, mask.data.nonzero().squeeze(1)), but it threw another error:
{RuntimeError} save_for_backward can only save input or output tensors, but argument 0 doesn't satisfy this condition. Here my input is a Variable.
Any idea what to do next? Thanks in advance!
|
st115558
|
Emm~ here input is a Variable. I'm computing the loss of a network, and I'd like to choose some rows, based on the mask, on which to compute the loss.
|
st115559
|
Indeed, this works well with a Tensor, but Variable hasn't got the nonzero method.
To do that, you need to get the tensor with mask.data, then apply the nonzero method, and then convert back to a Variable, since index_select expects a Variable as input.
Also, it's good to note that when you get the error below, it is often because you passed a Tensor where you should pass a Variable:
{RuntimeError} save_for_backward can only save input or output tensors, but argument 0 doesn't satisfy this condition,
So you could do:
input.index_select(dim, Variable(mask.data.nonzero().squeeze(1)))
Tell me if it's ok! (Though I didn't test it, I guess it should be fine.)
|
st115560
|
I did exactly what you said, and it works!
But another question comes with it: would the conversion Variable -> Tensor -> Variable destroy the chain that conducts the gradient of mask back to its creator? Are there specific cases where the mask is part of the network and this operation should propagate a gradient to the mask?
|
st115561
|
Yes, indeed, it destroys the chain. I guess in many contexts this is not a problem.
I think a context where you want to optimize on the mask is more likely to be some kind of RL problem where you have a discrete action space.
Variable.nonzero is not implemented yet, as discussed in the link below. However, if it were, I wonder how the backward would be implemented, since it outputs indices…
Indexing a Variable with a mask generated from another Variable
Four months passed, we still can not use nonzero() for Variables…
See an interesting discussion here too:
Find non zero elements in a tensor
Yes, of course your total loss L is (piecewise) differentiable. It can be more formally defined as:
L = sum( Li ) / sum( 1{Li ≠ 0} ),
where 1{c} is the indicator function (which is 1 when c is true and 0 otherwise).
Clearly, the function f ( Li ) = 1{Li ≠ 0} has derivative equal to 0 everywhere, except at Li = 0, where the derivative does not exist. In practice, you may assume that it is 0 everywhere.
Your biggest concern should be ensuring that you have no problem in using the tensor losses…
|
st115562
|
I’m trying to use the basic LSTM example, which is as follows:
lstm = nn.LSTM(3, 3)  # Input dim is 3, output dim is 3
inputs = [autograd.Variable(torch.randn((1, 3)))
          for _ in range(5)]  # make a sequence of length 5
# initialize the hidden state.
hidden = (autograd.Variable(torch.randn(1, 1, 3)),
          autograd.Variable(torch.randn((1, 1, 3))))
for i in inputs:
    # Step through the sequence one element at a time.
    # after each step, hidden contains the hidden state.
    out, hidden = lstm(i.view(1, 1, -1), hidden)
and it works fine. However, when I try to use it in a class, as follows:
class Seq(nn.Module):
    def __init__(self):
        super(Seq, self).__init__()
        self.lstm = nn.LSTM(3, 3)  # Input dim is 3, output dim is 3

    def forward(self, inp):
        inputs = [autograd.Variable(torch.randn((1, 3)))
                  for _ in range(5)]  # make a sequence of length 5
        # initialize the hidden state.
        hidden = (autograd.Variable(torch.randn(1, 1, 3)),
                  autograd.Variable(torch.randn((1, 1, 3))))
        for i in inputs:
            # Step through the sequence one element at a time.
            # after each step, hidden contains the hidden state.
            out, hidden = self.lstm(i.view(1, 1, -1), hidden)
if __name__ == '__main__':
    ...
    ....
    seq = Seq()
    seq.double()
    criterion = nn.MSELoss()
    optimizer = optim.LBFGS(seq.parameters(), lr=0.3)
    # begin to train
    for i in range(15):
        print('STEP: ', i)

        def closure():
            optimizer.zero_grad()
            output = []
            for i in range(inp.size(0)):
                out = seq(inp[0, :])
                output.append(out)
            loss = criterion(out, target)
            print('loss:', loss.data.numpy()[0])
            loss.backward()
            return loss

        optimizer.step(closure)
I get an exception:
TypeError: torch.addmm received an invalid combination of arguments - got (int, torch.DoubleTensor, int, torch.FloatTensor, torch.DoubleTensor, out=torch.DoubleTensor), but expected one of:
* (torch.DoubleTensor source, torch.DoubleTensor mat1, torch.DoubleTensor mat2, *, torch.DoubleTensor out)
* (torch.DoubleTensor source, torch.SparseDoubleTensor mat1, torch.DoubleTensor mat2, *, torch.DoubleTensor out)
* (float beta, torch.DoubleTensor source, torch.DoubleTensor mat1, torch.DoubleTensor mat2, *, torch.DoubleTensor out)
* (torch.DoubleTensor source, float alpha, torch.DoubleTensor mat1, torch.DoubleTensor mat2, *, torch.DoubleTensor out)
* (float beta, torch.DoubleTensor source, torch.SparseDoubleTensor mat1, torch.DoubleTensor mat2, *, torch.DoubleTensor out)
* (torch.DoubleTensor source, float alpha, torch.SparseDoubleTensor mat1, torch.DoubleTensor mat2, *, torch.DoubleTensor out)
* (float beta, torch.DoubleTensor source, float alpha, torch.DoubleTensor mat1, torch.DoubleTensor mat2, *, torch.DoubleTensor out)
didn't match because some of the arguments have invalid types: (int, torch.DoubleTensor, int, torch.FloatTensor, torch.DoubleTensor, out=torch.DoubleTensor)
* (float beta, torch.DoubleTensor source, float alpha, torch.SparseDoubleTensor mat1, torch.DoubleTensor mat2, *, torch.DoubleTensor out)
didn't match because some of the arguments have invalid types: (int, torch.DoubleTensor, int, torch.FloatTensor, torch.DoubleTensor, out=torch.DoubleTensor)
on the out, hidden = self.lstm(i.view(1, 1, -1), hidden) line, which is weird because I'm executing exactly the same thing. It seems to have something to do with initializing self.lstm in the __init__ method; I've been having problems with this outside of this simple example.
Anyone have any idea what's up?
Best,
Mark
|
st115563
|
I trained a simple network on the MNIST dataset with multi-GPU and single-GPU setups, but the multi-GPU version took 61 seconds while the single-GPU version took 18 seconds. Does this have to do with data transfer between the different GPUs?
|
st115564
|
I would suggest trying something larger in terms of both model and data.
MNIST data is really tiny and your model is likely very small, so you are probably running into more overhead than advantage when using multiple GPUs.
I played around a little, but I didn't actually see much of an advantage until I moved to large-scale datasets like ImageNet, Pascal, etc., where models are larger as well.
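For what it's worth, a rough sketch of the usual multi-GPU setup (assumes a machine with at least two GPUs; the model and batch size are arbitrary). The per-batch scatter/gather that nn.DataParallel performs is exactly the overhead that dominates on tiny workloads like MNIST:
import torch
import torch.nn as nn
from torch.autograd import Variable

model = nn.DataParallel(nn.Linear(784, 10).cuda())
input = Variable(torch.randn(256, 784).cuda())
output = model(input)  # scatter -> parallel forward on each GPU -> gather
print(output.size())   # (256, 10)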
|
st115565
|
I need to use elementwise multiplication (torch.mul) and matrix multiplication (torch.mm) inside the model definition in the forward pass on CPU/CUDA. Is there any CUDA version of these operations that can be used without switching into CPU mode?
|
st115566
|
The operations don't reside on CUDA, the tensors do. When you move your model to CUDA using model.cuda() and pass into the model tensors, say input and target, that are on CUDA, all computations will take place on the GPU, I believe.
So you can define your model using just torch.mm or torch.mul; just make sure that the tensors/variables passed into model() are on CUDA and that your model is also on CUDA.
Hope this helps!
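A minimal sketch of what this looks like in practice (assuming a CUDA-capable machine; the sizes are arbitrary):
import torch
from torch.autograd import Variable

a = Variable(torch.randn(4, 5).cuda())
b = Variable(torch.randn(5, 3).cuda())
c = torch.mm(a, b)      # matrix multiplication, executed on the GPU
d = torch.mul(a, 2.0)   # elementwise multiplication, also on the GPU
print(type(c.data))     # torch.cuda.FloatTensor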
|
st115567
|
Thanks Spandan. The thing I was trying: there is a main branch of the network which takes input images. At some point inside the network, I would like to learn another probability (a tensor with a distribution plus linear upsampling + ConvTranspose).
When I create a Tensor inside the network, it cannot decide between CUDA and CPU. So I instead pass it as a second input, rather than switching every part inside the net definition between GPU/CPU by hand, and then torch.mm or torch.mul won't be an issue.
|