st115868
|
My bad, I got confused between INPUT_SEQ_LEN and INPUT_SIZE.
By the way, with no GPU I got:
#input_seq_len = 500: // 2.8GB
|
st115869
|
After some tests, I don’t think there is any solution. I was afraid the weights were duplicated along the sequence, leading to a 512^2 * INPUT_SEQ_LEN memory usage, but weight sharing seems to be correctly implemented, so that is not the case.
Still, when backpropagating we have to store the forward pass first, so whatever we do we will have a 512 * INPUT_SEQ_LEN memory consumption. The only solution is indeed to limit back-propagation through time to a limited number of steps.
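For reference, a minimal sketch of truncated backpropagation through time, where the hidden state is detached every bptt_len steps (rnn, decoder, criterion, optimizer, x, y, seq_len and bptt_len are all assumed to exist already):
hidden = None
for start in range(0, seq_len, bptt_len):
    chunk = x[start:start + bptt_len]      # (bptt_len, batch, features)
    target = y[start:start + bptt_len]
    optimizer.zero_grad()
    out, hidden = rnn(chunk, hidden)
    loss = criterion(decoder(out), target)
    loss.backward()                        # only backprops through this chunk
    optimizer.step()
    hidden = tuple(h.detach() for h in hidden)   # cut the graph before the next chunk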
|
st115870
|
I have a Variable x with shape 1x2048x2x2 and another Variable y with shape 1x2048, and I need to do a torch.cat operation between them. My current solution is:
y = y.unsqueeze(-1)
y = y.unsqueeze(-1)
y = y.expand_as(x)
I wonder if there is a more elegant way to solve the problem? Thanks
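A slightly shorter variant (not necessarily more elegant), assuming the concatenation is meant to be along the channel dimension:
y = y.view(1, 2048, 1, 1).expand_as(x)   # (1, 2048, 2, 2)
out = torch.cat([x, y], dim=1)           # (1, 4096, 2, 2)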
|
st115871
|
I am trying to do sequence classification using a policy gradient method. As I understand it, the reinforce method does that out of the box. What I need is to additionally combine it with a binary cross entropy loss.
How do I do the gradient step properly? What I have now is something like this:
for action, r in zip(policy.saved_actions, rewards):
    action.reinforce(r)
optimizer.zero_grad()
autograd.backward(policy.saved_actions + [bce_loss], [None for _ in policy.saved_actions] + [None])
where bce_loss is the binary cross entropy loss for sequence classification and action is a stochastic variable. Am I doing this correctly? One strange thing: if I do not call action.reinforce, I receive no error during the backpropagation step. Shouldn't an exception be raised, since we did not assign a reward to the stochastic variable?
|
st115872
|
I have a bounding boxes variable, whose size is [num_boxes, 4], and I want to get all the coordinates within each box's area. Do you have any idea how to do this efficiently in PyTorch?
Thank you!
|
st115873
|
Since the bounding boxes are not of the same size, I can't think of any way to vectorize this :(
|
st115874
|
alan_ayu:
I have a bounding boxes variable, whose size is [num_boxes, 4], and I want to get all the coordinates within each box's area. Do you have any idea how to do this efficiently in PyTorch?
I think that might be solved with numpy.mgrid, but I am not sure if there is any PyTorch equivalent. Nevertheless, numpy is usually a faster choice in python.
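For example, a minimal numpy.mgrid sketch for a single box with hypothetical integer corners [x1, y1, x2, y2]; since the boxes have different sizes, this would be done in a loop over boxes:
import numpy as np

x1, y1, x2, y2 = 10, 20, 13, 22
ys, xs = np.mgrid[y1:y2 + 1, x1:x2 + 1]
coords = np.stack([xs.ravel(), ys.ravel()], axis=1)   # every (x, y) inside the box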
|
st115875
|
Let's assume a torch.Tensor bboxes of size [# bboxes, 4] where the last dimension represents a single bounding box in the order left, top, right, bottom (matching the conditions below).
Let's also assume there is a torch.Tensor points of size [# points, 2] where the last dimension represents a single point in the order x, y.
All coordinates are in a system where the origin is located at the top-left corner and the axes extend right and down.
import torch
# Bounding boxes and points.
bboxes = [[0, 0, 30, 30], [50, 50, 100, 100]]
bboxes = torch.FloatTensor(bboxes)
points = [[99, 50], [101, 0], [30, 30]]
points = torch.FloatTensor(points)
# Keep a reference to the original strided `points` tensor.
old_points = points
# Repeat every point once per bounding box.
points = points.unsqueeze(1)
points = points.repeat(1, len(bboxes), 1)
# Create the conditions necessary to determine if a point is within a bounding box.
# x >= left, x <= right, y >= top, y <= bottom
c1 = points[:, :, 0] <= bboxes[:, 2]
c2 = points[:, :, 0] >= bboxes[:, 0]
c3 = points[:, :, 1] <= bboxes[:, 3]
c4 = points[:, :, 1] >= bboxes[:, 1]
# Add all of the conditions together. If all conditions are met, sum is 4.
# Afterwards, get all point indices that meet the condition (a.k.a. all non-zero mask-summed values)
mask = c1 + c2 + c3 + c4
mask = torch.nonzero((mask == 4).sum(dim=-1)).squeeze()
# Select all points that meet the condition.
print(old_points.index_select(dim=0, index=mask))
Enjoy! :).
|
st115876
|
I have an RNN cell where I keep the activations as model state. I realize this is sort of against normal PyTorch conventions, so maybe I shouldn’t be doing it, but it has been working fine, and I think it has advantages in the context of my larger model (although they might be illusory or debatable).
I recently tried to use DataParallel, and it didn’t work; the activations (which are just Variables set on the model) do not persist between calls to forward.
Maybe modules should support something like a register_device_variable method that would allow Variables to persist between calls to forward (but not across checkpoints)?
But in the meantime, is there a good workaround? I thought about having a static array of the Variables and then having forward find the Variable to use by getting the device of one of its parameters and then store the new Variable back before returning, but I haven’t tried it yet.
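A rough, untested sketch of the workaround you describe (looking up the stored Variable by the device of one of the module's parameters and storing the new one back before returning; the cell itself and its sizes are hypothetical):
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

class StatefulCell(nn.Module):
    def __init__(self, size):
        super(StatefulCell, self).__init__()
        self.linear = nn.Linear(size, size)
        self._state = {}  # device index -> last activation Variable

    def forward(self, x):
        w = self.linear.weight.data
        dev = w.get_device() if w.is_cuda else -1   # which device is this replica on?
        prev = self._state.get(dev)
        if prev is None or prev.size() != x.size():
            prev = Variable(x.data.new(x.size()).zero_())
        h = F.tanh(self.linear(x) + prev)
        self._state[dev] = h   # store back for the next forward on this device
        return h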
|
st115877
|
I’m very new to Pytorch so please bear with me…
I have currently implemented the forward part of a network such that it can take a variable length input and return a fixed length output. I feed it a tensor of size [variable length x 30] and it returns a tensor of size [1 x 30]
However, I need to be able to train the network on lots of inputs, all of differing length, and their corresponding outputs. The examples I have seen on the Pytorch website use tensors wrapped inside Variable()s for both the inputs and outputs, which they then pass to the model and the loss function. However, I don’t think I can combine all my inputs into a single tensor as they are all of differing length.
Any ideas?
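One simple option (at the cost of speed) is to skip batching and feed one sequence at a time; a minimal sketch, where model, criterion, optimizer and the list of (input, target) tensor pairs are assumed to already exist:
for epoch in range(num_epochs):
    for inp, target in training_pairs:
        inp = Variable(inp)          # shape: (variable_length, 30)
        target = Variable(target)    # shape: (1, 30)
        optimizer.zero_grad()
        output = model(inp)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()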
|
st115878
|
I have code I wrote in Torch which I am migrating to PyTorch. It's a regression-based learning problem. In Torch, after defining my network, I concatenate my inputs along the 2nd axis as follows:
inputs = torch.cat({inputs[1], inputs[2], inputs[3], inputs[4],
inputs[5], inputs[6], inputs[7], inputs[8],
inputs[9]}, 2)
And the network is as defined:
(lstm1): LSTM(9, 9)
(lstm2): LSTM(9, 6)
(lstm3): LSTM(6, 6)
(drop): Dropout (p = 0.3)
(fc1): Linear (6 -> 3)
(fc2): Linear (3 -> 3)
In Torch, the forward and backward calls work fine, as the inputs tensor is a torch.*Tensor of batch size [100, 9].
I construct the network the same way in PyTorch and concatenate my nine inputs along the 1st axis as
inputs = torch.cat((base_in, base_out, left_in,
left_out, right_in, right_out,
z, pitch, yaw), 1)
I define my model in PyTorch as
class LSTMModel(nn.Module):
    def __init__(self, nFeatures, nCls, nHidden, nineq=12, neq=0, eps=1e-4,
                 noutputs=3, numLayers=1):
        super(LSTMModel, self).__init__()
        self.nFeatures = nFeatures
        self.nHidden = nHidden
        self.nCls = nCls
        self.nineq = nineq
        self.neq = neq
        self.eps = eps
        self.cost = nn.MSELoss(size_average=False)
        self.noutputs = noutputs
        # self.neunet = nn.Sequential()
        self.lstm1 = nn.LSTM(nHidden[0], nHidden[0], num_layers=numLayers)
        self.lstm2 = nn.LSTM(nHidden[0], nHidden[1], num_layers=numLayers)
        self.lstm3 = nn.LSTM(nHidden[1], nHidden[2], num_layers=numLayers)
        self.drop = nn.Dropout(0.3)
        self.fc1 = nn.Linear(nHidden[2], noutputs)
        self.fc2 = nn.Linear(noutputs, noutputs)
        self.M = Variable(torch.tril(torch.ones(nCls, nCls)))
        self.L = Parameter(torch.tril(torch.rand(nCls, nCls)))
        self.G = Parameter(torch.Tensor(nineq/2, nCls).uniform_(-1, 1))
        """
        define constraints, z_i, and slack variables, s_i,
        for six valves. z_i and c_i are learnable parameters
        """
        self.z0 = Parameter(torch.zeros(nCls))
        self.s0 = Parameter(torch.ones(nineq/2))
        self.z0p = Parameter(torch.zeros(nCls))
        self.s0p = Parameter(torch.ones(nineq/2))

    def forward(self, x):
        nBatch = x.size(0)
        # LSTM-dropout-LSTM-dropout-lstm-dropout-FC-QP-FC
        x = x.view(nBatch, -1)
        x = self.lstm1(x)
        x = self.drop(x)
        x = self.lstm2(x)
        x = self.drop(x)
        x = self.lstm3(x)
        x = self.drop(x)
        x = self.fc1(x)
But my call to forward gives me runtime errors like this:
RuntimeError: matrices expected, got 1D, 2D tensors at /home/robotec/Documents/NNs/locuclab-pytorch/torch/lib/TH/generic/THTensorMath.c:1224
I wonder what I must be doing wrong.
|
st115879
|
The input to an nn.LSTM module should be a time series, num_timesteps x batch_size x num_features. You’re passing it a single timestep – you most likely want to use nn.LSTMCell.
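For example, a minimal sketch (the sizes are made up):
import torch
import torch.nn as nn
from torch.autograd import Variable

lstm = nn.LSTM(input_size=9, hidden_size=9)
x = Variable(torch.randn(20, 100, 9))   # (num_timesteps=20, batch_size=100, num_features=9)
out, (h_n, c_n) = lstm(x)               # out: (20, 100, 9), h_n: (1, 100, 9)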
|
st115880
|
Also, note that {...} is a set in Python, so it doesn’t preserve ordering! You should use [...] in the torch.cat call!
|
st115881
|
Thank you, guys. There's still something I am not too clear on. When defining a model in Torch, I can usually use nn.Sequential to place my layers in a container, such as:
neunet = nn.Sequential()
local rnn
rnn = nn.FastLSTM(9, 9, 5)
neunet:add(rnn)
rnn = nn.FastLSTM(9, 6, 5)
neunet:add(rnn)
rnn = nn.FastLSTM(6, 3, 5)
neunet:add(rnn)
neunet:add(nn.Dropout(0.3))
neunet:add(nn.Linear(3, 3, bias))
neunet = nn.Sequencer(neunet, 3)
Now in PyTorch, I defined the same model (thanks to James’ pointing out to me that nn.LSTM is different from nn.LSTMCell):
class LSTMModel(nn.Module):
    def __init__(self, noutputs=3, numLayers=3):
        super(LSTMModel, self).__init__()
        self.nHidden = [9, 6, 6]
        self.cost = nn.MSELoss(size_average=False)
        self.noutputs = noutputs
        self.num_layers = numLayers
        self.lstm0 = nn.LSTM(self.nHidden[0], self.nHidden[0], self.num_layers,
                             batch_first=True, dropout=0.3)
        self.lstm1 = nn.LSTM(self.nHidden[1], self.nHidden[1], self.num_layers,
                             batch_first=True, dropout=0.3)
        self.lstm2 = nn.LSTM(self.nHidden[2], self.nHidden[2], self.num_layers,
                             batch_first=True, dropout=0.3)
        self.fc = nn.Linear(self.nHidden[2], noutputs)

    def forward(self, x):
        h0 = Variable(torch.zeros(self.num_layers, x.size(0), self.nHidden[0]))
        c0 = Variable(torch.zeros(self.num_layers, x.size(0), self.nHidden[0]))
        # Now propagate the rnn
        # rnn layer 1
        out0, _ = self.lstm0(x, (h0, c0))
        # second rnn layer
        h1 = Variable(torch.zeros(self.num_layers, x.size(0), self.nHidden[1]))
        c1 = Variable(torch.zeros(self.num_layers, x.size(0), self.nHidden[1]))
        out1, _ = self.lstm1(out0, (h1, c1))
        # third rnn layer
        h2 = Variable(torch.zeros(self.num_layers, x.size(0), self.nHidden[2]))
        c2 = Variable(torch.zeros(self.num_layers, x.size(0), self.nHidden[2]))
        out2, _ = self.lstm2(out1, (h2, c2))
        # hidden layer of last time step
        out = self.fc(out2[:, -1, :])
        return out
So I forward a DoubleTensor defined as
x = Variable(torch.randn(50000, 9), requires_grad=False) through this model and I get weird errors like
RuntimeError: matrices expected, got 1D, 2D tensors at /home/robotec/Documents/NNs/locuclab-pytorch/torch/lib/TH/generic/THTensorMath.c:1224
|
st115882
|
You never need to create zero tensors as the initial hidden state of an nn.LSTM; that’s done automatically. But I’m still confused as to what you’re trying to do with the LSTMs; are you passing in a time series? If so, it needs to be three-dimensional (time by batch by features).
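For instance, a minimal sketch with batch_first=True and no explicit hidden state (sizes are hypothetical):
lstm = nn.LSTM(input_size=9, hidden_size=6, num_layers=3, batch_first=True, dropout=0.3)
x = Variable(torch.randn(32, 100, 9))   # (batch=32, time=100, features=9)
out, (hn, cn) = lstm(x)                 # out: (32, 100, 6); (h0, c0) default to zeros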
|
st115883
|
Yes, it is a time series. Could you give a minimal working example of how I can correctly form a PyTorch model and a time series input tensor based off the snippet I wrote above? Sorry to be a pain.
|
st115884
|
If x is 50000 by 9, then what’s your time dimension, what’s your batch dimension, and what’s your feature dimension?
|
st115885
|
@jekbradbury, how do you suggest the initial states of the hidden state be initialized as in the example I give?
Like this?
h0 = Variable((self.num_layers, x.size(0), self.nHidden[0]))
|
st115886
|
Hi there,
I had a similar question on this topic.
I have time-series data (acceleration samples over time) and I want to estimate the step length with a Bi-LSTM network, but I am having trouble building my input data. Can you help me make a dataset from my own data? I don't know how to arrange it as BiLSTM input, or what my features should be.
I thought I might slide a window over my data, so that with a width of 50 accelerations I would have 50 features, but since there must also be a time dimension it doesn't make sense to use a window over the data as the features.
I would appreciate it if you could help me with this.
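One way to think about it (a rough sketch under the assumption that each timestep carries a single feature, the acceleration value, and that the windows form the batch; window and stride sizes are made up):
import torch
import torch.nn as nn
from torch.autograd import Variable

signal = torch.randn(10000)                    # accelerations sampled over time
window, stride = 200, 50                       # hypothetical window/stride
starts = range(0, len(signal) - window + 1, stride)
windows = [signal[i:i + window] for i in starts]
batch = torch.stack(windows, 1).unsqueeze(2)   # (seq_len=200, batch=num_windows, features=1)

lstm = nn.LSTM(input_size=1, hidden_size=64, bidirectional=True)
out, (hn, cn) = lstm(Variable(batch))          # out: (200, num_windows, 128)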
|
st115887
|
(screenshot: Screen Shot 2017-08-20 at 12.56.06 AM.png)
Here (red), from conv5 to conv6 the height and width go from (75, 75) to (38, 38) due to a Conv2d with kernel size 3, stride 2, padding 1. I pass this output as a skip connection when decoding (yellow part). The block of size (1, 256, 75, 75) corresponds to the output of conv6 and is given as a skip connection to deconv1, whose output is (1, 128, 76, 76). Due to the earlier ConvTranspose2d layer with kernel size 4, stride 2, padding 1, the height and width go from (38, 38) to (76, 76), resulting in a dimension mismatch with the skip connection. How can I resolve this issue?
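One common workaround (just a sketch; upsampled and skip are hypothetical names for the deconv output and the skip-connection tensor) is to crop the upsampled feature map to the spatial size of the skip connection before concatenating:
def crop_like(upsampled, skip):
    # upsampled: (N, C1, H+1, W+1) or larger; skip: (N, C2, H, W)
    _, _, h, w = skip.size()
    return upsampled[:, :, :h, :w]

merged = torch.cat([crop_like(upsampled, skip), skip], dim=1)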
|
st115888
|
I want to write a loss layer. I have a numpy implementation, but it involves some complicated indexing and masking during the forward and backward passes; I used np.subtract.outer(), scipy.spatial.distance's cdist(), sklearn.metrics.pairwise's pairwise_distances(), and np.einsum('ik,jk->ijk', ...). So far I couldn't find an easy way to achieve these operations with tensors; can anyone give some suggestions? Thanks.
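As a starting point, many of those pairwise operations can be written with unsqueeze and broadcasting (a sketch, assuming a PyTorch version with broadcasting, i.e. 0.2+):
import torch

x = torch.randn(4, 3)                     # (n, d)
y = torch.randn(5, 3)                     # (m, d)
diff = x.unsqueeze(1) - y.unsqueeze(0)    # (n, m, d), like np.subtract.outer per feature
dist2 = (diff * diff).sum(dim=2)          # (n, m), squared Euclidean distances as in cdist(x, y)**2
outer = x.unsqueeze(1) * y.unsqueeze(0)   # (n, m, d), like np.einsum('ik,jk->ijk', x, y)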
|
st115889
|
I am using Anaconda + Intel Python, and noticed that installing PyTorch downgrades Intel MKL and libgcc to older versions. Is there any way around this?
(screenshot: Screen Shot 2017-08-19 at 11.13.42 PM.jpg)
|
st115890
|
I am using python 3.6 on Ubuntu 16.04.
I have just updated pytorch via conda.
But I got the following error when torch was imported.
Python 3.6.0 |Anaconda 4.3.1 (64-bit)| (default, Dec 23 2016, 12:22:00)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type “help”, “copyright”, “credits” or “license” for more information.
import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/sypark/anaconda3/lib/python3.6/site-packages/torch/__init__.py", line 53, in <module>
from torch._C import *
ImportError: /home/sypark/anaconda3/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so: undefined symbol: PySlice_AdjustIndices
Please help me how to solve this problem.
|
st115891
|
this makes no sense yet, because python 3.6 should be having that. Let’s try to figure this out.
Can you please paste your output of the following command:
ldd /home/sypark/anaconda3/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so
|
st115892
|
I downgraded pytorch to 0.1.10. Then, there is no problem.
Also, I found the same question on
stackoverflow.com
undefined symbol: PySlice_AdjustIndices when importing PyTorch 1
python, anaconda, pytorch
asked by
Monica Heddneck
on 09:12PM - 29 Mar 17 UTC
He had the same problem.
The output is below.
linux-vdso.so.1 => (0x00007ffc94684000)
libshm.so => /home/sypark/anaconda3/lib/python3.6/site-packages/torch/lib/libshm.so (0x00007f9936bfa000)
libcudart.so.8.0 => /home/sypark/anaconda3/lib/python3.6/site-packages/torch/lib/libcudart.so.8.0 (0x00007f9936991000)
libcudnn.so.5 => /home/sypark/anaconda3/lib/python3.6/site-packages/torch/lib/libcudnn.so.5 (0x00007f9931bbb000)
libpython3.6m.so.1.0 => /home/sypark/anaconda3/lib/python3.6/site-packages/torch/../../../libpython3.6m.so.1.0 (0x00007f99316b6000)
libTH.so.1 => /home/sypark/anaconda3/lib/python3.6/site-packages/torch/lib/libTH.so.1 (0x00007f9931106000)
libTHS.so.1 => /home/sypark/anaconda3/lib/python3.6/site-packages/torch/lib/libTHS.so.1 (0x00007f9930ed3000)
libTHPP.so.1 => /home/sypark/anaconda3/lib/python3.6/site-packages/torch/lib/libTHPP.so.1 (0x00007f9930b05000)
libTHNN.so.1 => /home/sypark/anaconda3/lib/python3.6/site-packages/torch/lib/libTHNN.so.1 (0x00007f9930800000)
libTHC.so.1 => /home/sypark/anaconda3/lib/python3.6/site-packages/torch/lib/libTHC.so.1 (0x00007f9922a49000)
libTHCS.so.1 => /home/sypark/anaconda3/lib/python3.6/site-packages/torch/lib/libTHCS.so.1 (0x00007f992282f000)
libTHCUNN.so.1 => /home/sypark/anaconda3/lib/python3.6/site-packages/torch/lib/libTHCUNN.so.1 (0x00007f991694d000)
libgcc_s.so.1 => /home/sypark/anaconda3/lib/python3.6/site-packages/torch/../../../libgcc_s.so.1 (0x00007f9916737000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f99164fe000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f9916134000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f9915f2c000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f9915c22000)
/lib64/ld-linux-x86-64.so.2 (0x000055f3b1b3f000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f9915a1e000)
libstdc++.so.6 => /home/sypark/anaconda3/lib/python3.6/site-packages/torch/lib/../../../../libstdc++.so.6 (0x00007f9915708000)
libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x00007f9915504000)
libmkl_intel_lp64.so => /home/sypark/anaconda3/lib/python3.6/site-packages/torch/lib/../../../../libmkl_intel_lp64.so (0x00007f9914ae2000)
libmkl_intel_thread.so => /home/sypark/anaconda3/lib/python3.6/site-packages/torch/lib/../../../../libmkl_intel_thread.so (0x00007f991307c000)
libmkl_core.so => /home/sypark/anaconda3/lib/python3.6/site-packages/torch/lib/../../../../libmkl_core.so (0x00007f9911584000)
libiomp5.so => /home/sypark/anaconda3/lib/python3.6/site-packages/torch/lib/../../../../libiomp5.so (0x00007f99111da000)
libgomp.so.1 => /home/sypark/anaconda3/lib/python3.6/site-packages/torch/lib/../../../../libgomp.so.1 (0x00007f9910fca000)
libcublas.so.8.0 => /home/sypark/anaconda3/lib/python3.6/site-packages/torch/lib/libcublas.so.8.0 (0x00007f990e617000)
libcurand.so.8.0 => /home/sypark/anaconda3/lib/python3.6/site-packages/torch/lib/libcurand.so.8.0 (0x00007f990a6ac000)
libcusparse.so.8.0 => /home/sypark/anaconda3/lib/python3.6/site-packages/torch/lib/libcusparse.so.8.0 (0x00007f9907b99000)
|
st115893
|
it would help me if you gave the output of the command:
python --version
Thanks.
|
st115894
|
My python version is below.
Python 3.6.0 :: Anaconda 4.3.1 (64-bit)
Also, I updated pytorch by the command as
conda update pytorch torchvision -c soumith
Thanks.
|
st115895
|
this is helpful, thank you. I’ll try to fix this error by tomorrow in the new binaries.
|
st115896
|
I do not know the reason, but I got an out-of-memory error.
Compared to the latest version, the previous version consumes only 9 GB (of 12 GB total).
I should stay on the previous version for a while.
Thanks.
|
st115897
|
My application is an Inception-ResNet-v2-based autoencoder.
I would like to share my code for this problem, but I am not allowed to by the company for a while.
I will post the GitHub address if I am allowed.
Sorry.
|
st115898
|
Hey, I upgraded my pytorch with conda update pytorch torchvision -c soumith. But after updating I get this error:
in ()
----> 1 import torch
/home/mohammad/anaconda3/lib/python3.6/site-packages/torch/__init__.py in <module>()
51 sys.setdlopenflags(_dl_flags.RTLD_GLOBAL | _dl_flags.RTLD_NOW)
52
---> 53 from torch._C import *
54
55 __all__ += [name for name in dir(_C)
ImportError: /home/mohammad/anaconda3/lib/python3.6/site-packages/torch/lib/libTHC.so.1: cannot read file data
So what should I do then? How can I solve this?
|
st115899
|
I am using python 3.6 on Amazon linux.
I have just updated pytorch via conda too.
I also encountered a similar problem.
import torchvision
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ec2-user/anaconda3/lib/python3.6/site-packages/torchvision-0.1.8-py3.6.egg/torchvision/__init__.py", line 2, in <module>
File "/home/ec2-user/anaconda3/lib/python3.6/site-packages/torchvision-0.1.8-py3.6.egg/torchvision/datasets/__init__.py", line 1, in <module>
File "/home/ec2-user/anaconda3/lib/python3.6/site-packages/torchvision-0.1.8-py3.6.egg/torchvision/datasets/lsun.py", line 2, in <module>
File “/home/ec2-user/anaconda3/lib/python3.6/site-packages/PIL/Image.py”, line 56, in
from . import _imaging as core
ImportError: /home/ec2-user/anaconda3/lib/python3.6/site-packages/PIL/_imaging.cpython-36m-x86_64-linux-gnu.so: undefined symbol: PySlice_Unpack
|
st115900
|
that is weird. what is your output of:
python --version
and what is the output of conda list | grep pytorch
|
st115901
|
smth:
conda list | grep pytorch
$ python --version
Python 3.6.0 :: Anaconda custom (64-bit)
$ conda list | grep pytorch
pytorch 0.1.11 py360_4cu80 [cuda80] soumith
|
st115902
|
ok the problem is pytorch version 0.1.11_4. It had this problem and I updated it to 0.1.11_5 which did not have this issue.
Can you do:
conda uninstall pytorch
conda install pytorch -c soumith
Verify that it is installing 0.1.11_5
|
st115903
|
[ec2-user@ip-172-31-40-200 Notebooks]$ conda install pytorch -c soumith
Fetching package metadata .....
......
Solving package specifications: .
Package plan for installation in environment /home/ec2-user/anaconda3:
The following NEW packages will be INSTALLED:
pytorch: 0.1.11-py360_4cu80 soumith [cuda80]
Proceed ([y]/n)?
y
ec2-user@ip-172-31-40-200 Notebooks]$ conda list | grep pytorch
pytorch 0.1.11 py360_4cu80 [cuda80] soumith
|
st115904
|
I’m using HingeEmbeddingLoss as a loss function but got this error.
inconsistent tensor size at d:\downloads\pytorch-master-1\torch\lib\th\generic/THTensorMath.c:134
The documentation says: Measures the loss given an input x which is a 2D mini-batch tensor and a labels y, a 1D tensor containing values (1 or -1).
I’m feeding a 2D mini-batch tensor
Outputs of model Variable containing:
0.0325 0.2188
0.0325 0.2188
0.0325 0.2188
0.0325 0.2188
[torch.FloatTensor of size 4x2]
torch.Size([4, 2])
1D target variable
Lables of the class Variable containing:
-1
-1
-1
-1
[torch.FloatTensor of size 4]
torch.Size([4])
This is the full error
Epoch 0/9
----------
LR is set to 0.001
Outputs of model Variable containing:
0.0325 0.2188
0.0325 0.2188
0.0325 0.2188
0.0325 0.2188
[torch.FloatTensor of size 4x2]
torch.Size([4, 2])
Lables of the class Variable containing:
-1
-1
-1
-1
[torch.FloatTensor of size 4]
torch.Size([4])
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-14-c1d26d6a84db> in <module>()
----> 1 model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=10)
<ipython-input-13-84abd5130424> in train_model(model, criterion, optimizer, lr_scheduler, num_epochs)
61 preds[preds == 1] = 1
62
---> 63 loss = criterion(outputs, labels)
64 # Typecasting labels
65 labels = labels.type(torch.LongTensor)
C:\Users\Prakritidev Verma\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
204
205 def __call__(self, *input, **kwargs):
--> 206 result = self.forward(*input, **kwargs)
207 for hook in self._forward_hooks.values():
208 hook_result = hook(self, input, result)
C:\Users\Prakritidev Verma\Anaconda3\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target)
226 def forward(self, input, target):
227 return self._backend.HingeEmbeddingLoss(self.margin,
--> 228 self.size_average)(input, target)
229
230
C:\Users\Prakritidev Verma\Anaconda3\lib\site-packages\torch\nn\_functions\loss.py in forward(self, input, target)
103 buffer = input.new()
104 buffer.resize_as_(input).copy_(input)
--> 105 buffer[torch.eq(target, -1.)] = 0
106 output = buffer.sum()
107
RuntimeError: inconsistent tensor size at d:\downloads\pytorch-master-1\torch\lib\th\generic/THTensorMath.c:134
What am I doing wrong? Please let me know.
Thanks
|
st115905
|
Suppose I have 30K images, the input of my network is an image, and the output is a scalar. According to general practice, I should divide the 30K images into many batches and compute the outputs batch by batch. But I need to compute a softmax over all 30K outputs and backpropagate the loss, so I run out of memory. What should I do?
|
st115906
|
Hi !
In the GAN example (https://github.com/pytorch/examples/blob/master/dcgan/main.py),
while training the D-network on fake data:
# train with fake
noise.resize_(batch_size, nz, 1, 1).normal_(0, 1)
noisev = Variable(noise)
fake = netG(noisev)
I want to add the pull-away penalty proposed by [1]. So, I add the following lines:
numerator = fake.view(batch_size,3*32*32) # assume that image size is 3x32x32
norm_fake = numerator*numerator.transpose(0,1)
Here, even though the size of numerator is [batch_size, 3072] (I printed numerator.size()), the size of norm_fake is also [batch_size, 3072] (I expected it to be [batch_size, batch_size]). I don’t know what’s wrong…
Is this an issue or there is something that I don’t know about?
Thanks in advance,
Kimin Lee.
[1] Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network.
arXiv preprint arXiv:1609.03126, 2016.
|
st115907
|
This issue is solved by replacing numerator*numerator.transpose(0,1) with torch.matmul(numerator,numerator.transpose(0,1)).
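For reference, the shapes involved (with a hypothetical batch size of 4):
numerator = fake.view(4, 3 * 32 * 32)                # (4, 3072)
norm_fake = torch.matmul(numerator, numerator.t())   # (4, 4), pairwise dot products
# numerator * numerator.transpose(0, 1) is elementwise multiplication,
# not a matrix product, which is why the shape did not become (4, 4).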
|
st115908
|
I’m making my own version of the tutorial here on classifying surnames by their language of origin. I’d like to use an LSTM and train the model on batches of variable-length sequences. In other words, I’m trying to solve a many-to-one (many time steps to one label) classification problem with an LSTM and variable-length inputs.
The part I’m struggling with is properly designing the forward pass on my model to use packed sequences. If I were writing this model to handle batches of sequences that all had the same number of time steps, I might write something like this:
def forward(self, inp, hidden):
    out, hidden = self.lstm(inp, hidden)
    last_lstm_step = out[-1]  # Since we only produce one label
    decoded = self.linear_decoder(last_lstm_step)
    return decoded, hidden
But since this model operates with PackedSequences which have a variable number of time-steps, we can’t just use out[-1] to get the last time step for each input sequence. Instead, we may try something like this:
def forward(self, inp, hidden):
    out, hidden = self.lstm(inp, hidden)
    (all_timesteps, lengths) = pad_packed_sequence(out)
    last_step = last_steps(out, lengths)
    decoded = self.linear_decoder(last_step)
    return decoded, hidden
Where last_steps is something like this:
def last_steps(x, lengths):
    lasts = []
    for i, j in zip(range(x.size()[1]), lengths):
        lasts.append(x[j - 1][i].view(1, -1))
    return torch.cat(lasts, 0)
Unfortunately, this forward pass seems not to work with batches larger than one or two. With larger batch sizes the network fails to learn and often falls into guessing the same label for every sample. I suspect I’m doing something wrong in the “unpacking and getting last steps” phase of the forward pass, but I’m not sure what. Any help much appreciated.
|
st115909
|
An example using batches of variable length sentences could also be a great extension to this tutorial. I haven’t found anything about PackedSequence in the tutorials so far.
|
st115910
|
Example of Many-to-One LSTM
Recurrent modules from torch.nn will get an input sequence and output a sequence of the same length. Just take the last element from that output sequence.
This reply by @Tudor_Berariu on the topic of Many-to-One LSTMs relates closely to what I’m trying to do. But how do I “Just take the last element from that output sequence.” when my input batch is a PackedSequence of samples of variable length (in terms of time-steps)?
|
st115911
|
output, (hn, cn) = LSTM(packed_input)
hn contains the hidden state for t=seq_len, which means it is the last element of the output sequence. Even if your LSTM takes variable-length input, it works out to something like this:
a b c d
e f g 0
h i j 0
k l 0 0
# hn:
d
g
j
l
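Put together, a minimal sketch for the many-to-one case (assuming padded_batch of shape (T, B, F) with lengths sorted in decreasing order, plus the lstm and linear_decoder from above):
from torch.nn.utils.rnn import pack_padded_sequence

packed = pack_padded_sequence(padded_batch, lengths)   # padded_batch: (T, B, F)
output, (hn, cn) = lstm(packed)
last_steps = hn[-1]                # (B, hidden_size): last valid timestep of each sequence
decoded = linear_decoder(last_steps)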
|
st115912
|
I have recently tried to implement my own customized RNN.
Looking over the posts regarding customized RNNs in PyTorch, many people suggest a good example to start with:
BN-LSTM
github.com
jihunchoi/recurrent-batch-normalization-pytorch/blob/master/bnlstm.py
"""Implementation of batch-normalized LSTM."""
import torch
from torch import nn
from torch.autograd import Variable
from torch.nn import functional, init
class SeparatedBatchNorm1d(nn.Module):
"""
A batch normalization module which keeps its running mean
and variance separately per timestep.
"""
def __init__(self, num_features, max_length, eps=1e-5, momentum=0.1,
affine=True):
"""
Most parts are copied from
torch.nn.modules.batchnorm._BatchNorm.
"""
The code runs smoothly when using only a single GPU. When I started to use it with multiple GPUs, I received an error with either LSTMCell or BNLSTMCell:
RuntimeError: arguments are located on different GPUs at /py/conda-bld/pytorch_1493677666423/work/torch/lib/THC/generic/THCTensorMathBlas.cu:232
Looking at the code, I can't figure out exactly what is causing this problem. Could someone point me in a direction to solve it?
|
st115913
|
Hi folks,
I’m trying to write a model where each image is made up of pixels and each pixel is associated with a latent vector. This quickly explodes the memory usage of the embedding matrix, since it grows like (number of images x number of pixels x latent dimensionality), which quickly exceeds GPU capacity. However, updates are sparse: only the vectors for images in a given minibatch are updated. Is there a drop-in solution like a “file-backed” embedding matrix that is cached in memory only as needed, instead of being held entirely in memory? Or is there another recommended technique here?
Thanks!
chris moody
|
st115914
|
I’m kind of lost a little bit. Is it possible to do a High Frequency Error Norm (HFEN) in pytorch?
|
st115915
|
Given a LongTensor of coordinates, how can I select specific values from a sparse tensor?
v =
0 0 3
4 0 5
[torch.FloatTensor of size 2x3]
i = torch.LongTensor([[1, 0]]).t()
To get something like:
4
|
st115916
|
Hi
I am trying out pytorch by implementing a model written originally in theano
Here is the original model
GitHub
miyyer/rmn
rmn - relationship modeling networks (NAACL 2016)
Here is my pytorch implementation
I am not able to reproduce the results, and I think this may be due to one of the following:
1. For the custom recurrence over time, the Theano code uses the scan function while I am using a for loop, and maybe for some reason backpropagation doesn't happen properly, although I do see a constant decrease in loss towards convergence.
2. I had to implement a custom loss function, which I added into the same forward function that computes the output. I'm not sure if this causes any problem behind the scenes, because most examples I see online use a built-in loss function.
3. I read about the use of detach and used it in one place where I thought it was needed, but I am not sure if it is needed anywhere else.
In case someone has implemented this paper and has any feedback on where I am making errors, it would be great to receive it.
Thanks
import torch.nn as nn
import torch.nn.init as init
import torch, copy, random, time, pdb, numpy as np
from torch.autograd import Variable
import torch.nn.functional as F
from util import *
from torch import optim
from itertools import ifilter
class Config(object):
def __init__(self, compared=[], **kwargs):
self.name = "rmn"
self.word_drop = 0.75
self.desc_dim = 30
self.book_dim = 50
self.num_negs = 50
self.char_dim = 50
self.alpha_train_point = 15
self.train_epochs = 20
self.alpha_init_val = 0.5
self.eval = False
self.vocb_size = None
self.emb_dim = None
self.num_books = None
self.num_chars = None
def __repr__(self):
ks = sorted(k for k in self.__dict__ if k not in ['name'])
return '\n'.join('{:<30s}{:<s}'.format(k, str(self.__dict__[k])) for k in ks)
# ---global data and config initializations---
BATCH_SIZE, prc_batch_cn = 50, 0
span_data, span_size, wmap, cmap, bmap = load_data('data/relationships.csv.gz', 'data/metadata.pkl')
config = Config()
"""
it is basically dividing each word representation by the sqrt(sum(x_i^2))
so we have 16414, 300 divided by 16414, 1 ...one normalizer for each word of the vocabulary
it is basically making it a unit vector for each word, so we are ignoring vector length/magnitude and only relying on
direction to use it in downstream tasks like similarity calculation and so on
"""
We = cPickle.load(open('data/glove.We', 'rb')).astype('float32')
We = torch.from_numpy(We)
We = F.normalize(We)
config.vocab_size, config.emb_dim, d_word = We.size(0), We.size(1), We.size(1)
config.num_chars = len(cmap)
config.num_books = len(bmap)
config.vocab_size = len(wmap)
# this is basically one data point, where it is in turn composed of multiple time steps or spans
num_traj = len(span_data)
revmap = {}
for w in wmap:
revmap[wmap[w]] = w
# ---initialization close
class RMNModel(nn.Module):
def __init__(self, config, emb_data):
super(RMNModel, self).__init__()
# the embedding layer to lookup the pre-trained glove embedding of span words
self.w_embed = nn.Embedding(config.vocab_size, config.emb_dim)
self.w_embed.weight.requires_grad = False
self.w_embed.weight.data.copy_(emb_data)
self.c_embed = nn.Embedding(config.num_chars, config.char_dim)
self.b_embed = nn.Embedding(config.num_books, config.book_dim)
self.softmax = nn.Softmax()
self.sigmoid = nn.Sigmoid()
self.relu = nn.ReLU()
self.dropout = nn.Dropout(config.word_drop)
self.w_d_h = nn.Linear(config.emb_dim, config.desc_dim, bias=False)
self.w_d_prev = nn.Linear(config.desc_dim, config.desc_dim, bias=False)
self.w_rel = nn.Linear(config.desc_dim, config.emb_dim, bias=False)
# the below 3 layers form the transformation from individual span rep to h_in
self.w_c_to_emb = nn.Linear(config.char_dim, config.emb_dim, bias=False)
self.w_b_to_emb = nn.Linear(config.book_dim, config.emb_dim, bias=False)
self.w_vs_to_emb = nn.Linear(config.emb_dim, config.emb_dim, bias=True)
self.v_alpha = nn.Linear(config.emb_dim*2 + config.desc_dim, 1, bias=False)
self.alpha = Variable(config.alpha_init_val * torch.ones(1,), requires_grad=False)
if torch.cuda.is_available():
self.alpha = self.alpha.cuda()
self.train_alpha = False
def set_train_alpha(self, val):
self.train_alpha = val
def update_alpha(self, input):
self.alpha = self.sigmoid(self.v_alpha(Variable(input.data)))
#this is batch_size * 1
# the dimension of input is T * B * S where T is the max number of spans available for a given (c1,c2,b) that
# is considered in a batch B is the batch size and S is the max span size or the
def forward(self, input):
# seq is size N * M where N = batch size and M = max sequence length
bk_id, char_ids, seq, seq_mask, neg_seq, neg_seq_mask, spans_count_l = input
drop_mask = self.dropout(seq_mask)
if self.training:
drop_mask = drop_mask * (1 - config.word_drop)
# v_s has dimension say 8 * 116 * 300
# is of size N * M * 300
v_s = self.w_embed(seq)
temp_ones = Variable(torch.ones(drop_mask.size(0), 1)).cuda()
# mean out the sequence dimension
seq_mask = seq_mask.unsqueeze(2)
v_s_mask = v_s * seq_mask
seq_mask_sums = torch.sum(seq_mask, 1)
seq_mask_sums = torch.max(seq_mask_sums, temp_ones)
v_s_mask = torch.sum(v_s_mask, 1) / seq_mask_sums
drop_mask = drop_mask.unsqueeze(2)
drop_mask_sums = torch.sum(drop_mask, 1)
drop_mask_sums = torch.max(drop_mask_sums, temp_ones)
v_s_dropmask = v_s * drop_mask
v_s_dropmask = torch.sum(v_s_dropmask, 1) / drop_mask_sums
v_s_dropmask = self.w_vs_to_emb(v_s_dropmask)
# now v_s is of size (8, 300) one word embedding for each span
if neg_seq is not None:
v_n = self.w_embed(neg_seq)
#the negative words are not dropped out
neg_seq_mask = neg_seq_mask.unsqueeze(2)
v_n = v_n * neg_seq_mask
v_n = torch.sum(v_n, 1) / torch.sum(neg_seq_mask, 1)
v_b, v_c = self.b_embed(bk_id), self.c_embed(char_ids)
# returns vars of size 1*50 and 1*2*50
c1_var = v_c[:,0,:]
c2_var = v_c[:,1,:]
v_b, v_c_1, v_c_2 = self.w_b_to_emb(v_b), self.w_c_to_emb(c1_var), self.w_c_to_emb(c2_var)
# v_c_1 is of size N*300 and v_b of N*300
v_c = v_c_1 + v_c_2
if spans_count_l is not None:
# the second dimension is basically storing the maximum number of time steps that we can have for any data point
seq_in = Variable(torch.zeros(BATCH_SIZE, max(spans_count_l), 300))
seq_in_dp = Variable(torch.zeros(BATCH_SIZE, max(spans_count_l), 300))
neg_seq_in = Variable(torch.zeros(BATCH_SIZE, config.num_negs, 300))
if torch.cuda.is_available():
seq_in = seq_in.cuda()
seq_in_dp = seq_in_dp.cuda()
neg_seq_in = neg_seq_in.cuda()
cum_spans_count = 0
cntr = 0
for i in spans_count_l:
# for the original with only sequence mask
cur_seqq = v_s_mask[cum_spans_count:(cum_spans_count + i), :]
if i != max(spans_count_l):
pad_res = torch.cat((cur_seqq, Variable(torch.zeros(max(spans_count_l) - i, 300)).cuda()), 0)
seq_in[cntr, :, :] = pad_res
else:
seq_in[cntr, :, :] = cur_seqq
# for the original with dropout and sequence mask both
cur_seqq_dp = v_s_dropmask[cum_spans_count:(cum_spans_count + i), :]
if i != max(spans_count_l):
pad_res_dp = torch.cat((cur_seqq_dp, Variable(torch.zeros(max(spans_count_l) - i, 300)).cuda()), 0)
seq_in_dp[cntr, :, :] = pad_res_dp
else:
seq_in_dp[cntr, :, :] = cur_seqq_dp
if neg_seq is not None:
neg_seq_in[cntr,:,:] = v_n[cntr*config.num_negs:(cntr + 1)*config.num_negs, :]
cum_spans_count += i
cntr += 1
if neg_seq is not None:
del v_n
del v_s
# initalize
total_loss = 0
prev_d_t = Variable(torch.zeros(BATCH_SIZE, config.desc_dim), requires_grad=False)
zrs = Variable(torch.zeros(BATCH_SIZE, config.num_negs), requires_grad=False)
if torch.cuda.is_available():
zrs = zrs.cuda()
prev_d_t = prev_d_t.cuda()
trajn = []
# compute the d_t vectors in parallel
for t in range(max(spans_count_l)):
# the dropout one is used here to calculate the mixed span representation
v_st_dp = seq_in_dp[:, t, :].detach()
# the default only seq mask and no dropout applied is used to calculate the loss
v_st_mask = seq_in[:, t, :].detach()
# 20 * 300
h_in = v_st_dp + v_b + v_c
h_t = self.relu(h_in)
d_t = self.alpha * self.softmax(self.w_d_h(h_t) + self.w_d_prev(prev_d_t)) + (1 - self.alpha) * prev_d_t
# dt is of size batch_Size * 30
sv = np.sum(np.isnan(d_t.data.cpu().numpy()).astype(int))
if sv > 0:
#pdb.set_trace()
print("got nan in d_t")
# size is 1 * 300
if self.train_alpha:
self.update_alpha(torch.cat((h_t, d_t, v_st_dp), 1))
sv2 = np.sum(np.isnan(self.alpha.data.cpu().numpy()).astype(int))
if sv2 > 0:
print("got nan in alpha")
#pdb.set_trace()
# save the relationship state for each time step and return it as the trajectory for the given data point
# each data point corresponds to a single character pair and book and all spans of it
if config.eval:
trajn.append(d_t.data.cpu()) # move it out of gpu memory
if neg_seq is None:
continue
# this is the reconstruction vector made using the dictionary and the hidden state vector d_t
r_t = self.w_rel(d_t) # is of size BATCH * 300
# normalization here
r_t = F.normalize(r_t)
v_st_mask = F.normalize(v_st_mask) # default is euclidean along the dim=1
neg_seq_in = F.normalize(neg_seq_in, 2, 2) # default eps is 1e-12
# this is the negative loss in the max margin equation
# BATCH_SIZE * NUM_NEG * 300 times BATCH_SIZE * 1 * 300
#v_n_res = torch.bmm(neg_seq_in, r_t.unsqueeze(2)).squeeze(2)
v_n_res = neg_seq_in * r_t.unsqueeze(1)
v_n_res = torch.sum(v_n_res, 2)
# BATCH_SIZE * NUM_NEG
# each of these is a matrix of size BATCH_SIZE * 300
# we are doing a similarity between the two vectors like a dot product
recon_loss = r_t * v_st_mask
recon_loss = torch.sum(recon_loss, 1, keepdim=True)
# now the recon loss is of size BATCH_SIZE * 1
cur_loss = torch.sum(torch.max(zrs, 1 - recon_loss + v_n_res), 1)
# this is batch_size * 1
# this mask is for removing data points which dont have a valid value for this time step
mask = Variable(torch.from_numpy((t < np.array(spans_count_l)).astype('float')).float()).cuda()
loss = torch.dot(cur_loss, mask)
total_loss += loss
prev_d_t = d_t
w_rel_mat = self.w_rel.weight
# w_rel is a weight matrix of size d * K so we want to normalize each of the K descriptors along the 0 axis
w_rel_mat_unit = F.normalize(w_rel_mat, 2, 0)
w_rel_mm = torch.mm(w_rel_mat_unit.t(), w_rel_mat_unit)
id_mat = Variable(torch.eye(w_rel_mat_unit.size(1))).cuda()
w_rel_mm = w_rel_mm.sub(id_mat)
ortho_penalty = 1e-6 * torch.norm(w_rel_mm)
if total_loss is not None:
total_loss += ortho_penalty
del seq_in, seq_in_dp, neg_seq_in, prev_d_t, d_t, seq_mask, zrs
# if you want to return multiple things put them into a list else it throws an error
return total_loss, trajn
def train_epoch(mdl, optimizer):
random.shuffle(span_data)
losses, bk_l, ch_l, curr_l, cm_l, dp_l, ns_l, nm_l, num_spans = [], [], [], [], [], [], [], [], []
prc_batch_cn, batch_cnt = 0, 0
#temp_data = span_data[:200]
for book, chars, curr, cm in span_data:
# for each relation with s spans we generate n negative spans
ns, nm = generate_negative_samples(num_traj, span_size, config.num_negs, span_data)
book = torch.from_numpy(book).long()
chars = torch.from_numpy(chars).long().view(1, 2)
curr = torch.from_numpy(curr).long()
ns = torch.from_numpy(ns).long()
cm = torch.from_numpy(cm)
nm = torch.from_numpy(nm)
# word dropout
if torch.cuda.is_available():
book = book.cuda() # one book
chars = chars.cuda() # one pair of character
curr = curr.cuda() # list of spans for the above relation
cm = cm.cuda() # the sequence mask for each span
ns = ns.cuda()
nm = nm.cuda()
bk_l.append(book)
ch_l.append(chars)
curr_l.append(curr)
num_spans.append(curr.size(0))
cm_l.append(cm)
ns_l.append(ns)
nm_l.append(nm)
batch_cnt += 1
if batch_cnt % BATCH_SIZE == 0:
batch_cnt = 0
bk_in = Variable(torch.cat(bk_l))
ch_in = Variable(torch.cat(ch_l))
curr_in = Variable(torch.cat(curr_l))
cm_in = Variable(torch.cat(cm_l))
ns_in = Variable(torch.cat(ns_l))
nm_in = Variable(torch.cat(nm_l))
# call training function here to get cost and loss
optimizer.zero_grad()
loss, _ = mdl([bk_in, ch_in, curr_in, cm_in, ns_in, nm_in, num_spans])
prc_batch_cn += 1
losses.append(loss.data[0])
loss.backward()
torch.nn.utils.clip_grad_norm(mdl.parameters(), 10)
optimizer.step()
del bk_l[:], ch_l[:], curr_l[:], cm_l[:], ns_l[:], nm_l[:], num_spans[:]
del bk_in, ch_in, curr_in, cm_in, ns_in, nm_in
if len(num_spans) > 0:
# process the remaining element which were not the % BATCH SIZE
global BATCH_SIZE
BATCH_SIZE = len(num_spans)
mdl.alpha = mdl.alpha[0].repeat(BATCH_SIZE, 1)
bk_in = Variable(torch.cat(bk_l))
ch_in = Variable(torch.cat(ch_l))
curr_in = Variable(torch.cat(curr_l))
cm_in = Variable(torch.cat(cm_l))
ns_in = Variable(torch.cat(ns_l))
nm_in = Variable(torch.cat(nm_l))
# call training function here to get cost and loss
optimizer.zero_grad()
loss, _ = mdl([bk_in, ch_in, curr_in, cm_in, ns_in, nm_in, num_spans])
prc_batch_cn += 1
losses.append(loss.data[0])
loss.backward()
torch.nn.utils.clip_grad_norm(mdl.parameters(), 10)
optimizer.step()
del bk_l[:], ch_l[:], curr_l[:], cm_l[:], ns_l[:], nm_l[:], num_spans[:]
return sum(losses) / len(span_data)
def train(n_epochs):
print d_word, span_size, config.desc_dim, config.vocab_size, config.num_chars, config.num_books, num_traj
print 'compiling...'
# build neural network here
mdl = RMNModel(config, We)
# enter train mode
mdl.train()
# transfer to gpu
if torch.cuda.is_available():
mdl.cuda()
# print parameters and initialize them here
for name, p in mdl.named_parameters():
print(name, p.size(), p.requires_grad, type(p))
if name == 'c_embed.weight' or name == 'b_embed.weight':
print('init', name)
init.normal(p)
elif name == 'w_embed.weight':
continue
elif 'bias' not in name:
print('init', name)
init.xavier_uniform(p)
else:
print('init', name)
init.constant(p, 0)
params = list(filter(lambda p: p.requires_grad, mdl.parameters()))
print('total params', len(params))
optimizer = optim.Adam(params)
print 'done compiling, now training...'
min_loss = None
for epoch in range(n_epochs):
if epoch >= config.alpha_train_point:
mdl.set_train_alpha(True)
mdl.w_rel.weight.requires_grad = False
start_time = time.time()
eloss = train_epoch(mdl, optimizer)
end_time = time.time()
print 'done with epoch: ', epoch, ' cost =', eloss, 'time: ', end_time - start_time
if min_loss is None or eloss < min_loss:
torch.save(mdl.state_dict(), "model_16.pth")
torch.save(optimizer.state_dict(), "optimizer_16.pth")
global BATCH_SIZE
BATCH_SIZE = 50
mdl.alpha = mdl.alpha[0].repeat(BATCH_SIZE, 1)
torch.save(mdl.state_dict(), "model_16_last.pth")
"""
Since the descriptors are represented in the same 300 dimension space as that of the vocabulary
we can find nearest neighbors of the descriptor vector and select a label from the 10 most similar vocab words
"""
def save_descriptors(descriptor_log, weight_mat, We, revmap):
We = We.numpy()
# original weight matrix is emb_dim * desc_dim
print 'writing descriptors...'
R = F.normalize(weight_mat, 2, 0).cpu().numpy() # now this is of emb_dim * desc_dim
log = open(descriptor_log, 'w')
for ind in range(R.shape[1]):
desc = R[:,ind]
# We is vocab * 300
sims = We.dot(desc)
# this is a short cut way to reverse the array [::-1]
ordered_words = np.argsort(sims)[::-1]
desc_list = [ revmap[w] for w in ordered_words[:10]]
log.write(' '.join(desc_list) + '\n')
print('descriptor %d:' % ind)
print(desc_list)
log.flush()
log.close()
def save_trajectories(trajectory_log, span_data, bmap, cmap, mdl):
potter_books = ['B019PIOJYU', 'B019PIOJY0', 'B019PIOJVI', 'B019PIOJV8', 'B019PIOJZE', 'B019PIOJZ4', 'B019PIOJWW']
print 'writing trajectories...'
tlog = open(trajectory_log, 'wb')
traj_writer = csv.writer(tlog)
traj_writer.writerow(['Book', 'Char 1', 'Char 2', 'Span ID'] + \
['Topic ' + str(i) for i in range(30)])
bc = 0
print(len(span_data))
for book, chars, curr, cm in span_data:
c1, c2 = [cmap[c] for c in chars]
bname = bmap[book[0]]
if bname != 'Dracula' and bname != 'BourneBetrayal' and bname != 'RisingTides' and bname != 'BourneDeception':
continue
if c1 != 'Arthur' and c2 != 'Arthur':
continue
book = torch.from_numpy(book).long()
chars = torch.from_numpy(chars).long().unsqueeze(0)
curr = torch.from_numpy(curr).long()
cm = torch.from_numpy(cm)
if torch.cuda.is_available():
book = Variable(book).cuda()
chars = Variable(chars).cuda()
curr = Variable(curr).cuda()
cm = Variable(cm).cuda()
_, traj = mdl([book, chars, curr, cm, None, None, [cm.size(0)]])
print("{} {} {} {}".format(bname, c1, c2, len(traj)))
for ind in range(len(traj)):
step = traj[ind].squeeze(0)
traj_writer.writerow([bname, c1, c2, ind, step.numpy().tolist()])
bc += 1
if bc > 5:
break
tlog.flush()
tlog.close()
def test():
global BATCH_SIZE
BATCH_SIZE = 1
print 'loading data...'
descriptor_log = 'descriptors_model_16.log'
trajectory_log = 'trajectories_16.log'
print d_word, span_size, config.desc_dim, config.vocab_size, config.num_chars, config.num_books, num_traj
config.eval = True
mdl = RMNModel(config, We)
if torch.cuda.is_available():
mdl.cuda()
saved_state = torch.load("model_16.pth")
mdl.load_state_dict(saved_state)
mdl.eval()
#save_trajectories(trajectory_log, span_data, bmap, cmap, mdl)
save_descriptors(descriptor_log, mdl.w_rel.weight.data, We, revmap)
if __name__ == '__main__':
train(config.train_epochs)
#test()
|
st115917
|
Suppose I do this:
x = Variable(torch.rand(3,5))
y = torch.Tensor(torch.rand(3,5))
print(x + y)
It gives an error. I can use Variable.data if I want the output to be of Tensor type, but I want the sum to be a Variable. How can I do this?
This example is trivial; when both x and y are matrices, things get complicated (like doing an operation between a Variable and a Tensor that has come through register_buffer).
I am new to PyTorch. Is there a good reference that explains all these things?
|
st115918
|
Thanks. Do I have to wrap y every time I use it?
Also, this requires the gradient with respect to y to be calculated during backward. Since y is a tensor (like running_var in BatchNorm), I don’t want the gradient to flow through it.
|
st115919
|
The requires_grad attribute of a Variable is False by default. This means that the gradient will not flow through y:
In [2]: y = torch.autograd.Variable(torch.Tensor(4))
In [3]: y.requires_grad
Out[3]: False
|
st115920
|
First of all, I am new to machine learning and PyTorch. I downloaded an MNIST example from GitHub, but it raised the same RuntimeError after I tried several different MNIST test scripts. The RuntimeError is: An attempt has been made to start a new process before the current process has finished its bootstrapping phase.
… The "freeze_support()" line can be omitted if the program is not going to be frozen to produce an executable. But I didn't import the multiprocessing module, so I am confused and want to know how to solve the problem.
http://blog.csdn.net/victoriaw/article/details/72354307
The website above is where I copied the test code from. I run PyTorch in Anaconda on Windows 10 using only my CPU; I haven't installed CUDA.
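For what it's worth, this error usually comes from the DataLoader's worker processes rather than from code you wrote yourself: on Windows, new processes are spawned, so the code that creates the DataLoader (with num_workers > 0) needs to sit under a main guard; setting num_workers=0 also avoids it. A minimal sketch (the dataset and the loop body are placeholders):
import torch.utils.data

def main():
    loader = torch.utils.data.DataLoader(dataset, batch_size=64,
                                         shuffle=True, num_workers=2)
    for data, target in loader:
        pass  # training step goes here

if __name__ == '__main__':
    main()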
|
st115921
|
I am running this ConvLSTM code, which is slightly changed from the CortexNet implementation of ConvLSTM. But I am getting RuntimeError: tensors are on different GPUs. Could anyone help me figure out where I am going wrong?
import torch
from torch import nn
import torch.nn.functional as f
from torch.autograd import Variable
# Define some constants
KERNEL_SIZE = 3
PADDING = KERNEL_SIZE // 2
class ConvLSTMCell(nn.Module):
"""
Generate a convolutional LSTM cell
"""
def __init__(self, input_size, hidden_size):
super(ConvLSTMCell,self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.Gates = nn.Conv2d(input_size + hidden_size, 4 * hidden_size, KERNEL_SIZE, padding=PADDING)
def forward(self, input_, prev_state):
# get batch and spatial sizes
batch_size = input_.data.size()[0]
spatial_size = input_.data.size()[2:]
if prev_state is None:
state_size = [batch_size, self.hidden_size] + list(spatial_size)
prev_state = (
Variable(torch.zeros(state_size)),
Variable(torch.zeros(state_size))
)
prev_hidden, prev_cell = prev_state
prev_hidden, prev_cell = prev_hidden.cuda(),prev_cell.cuda()
stacked_inputs = torch.cat([input_, prev_hidden], 1)
gates = self.Gates(stacked_inputs)
# chunk across channel dimension
in_gate, remember_gate, out_gate, cell_gate = gates.chunk(4, 1)
# apply sigmoid non linearity
in_gate = f.sigmoid(in_gate)
remember_gate = f.sigmoid(remember_gate)
out_gate = f.sigmoid(out_gate)
# apply tanh non linearity
cell_gate = f.tanh(cell_gate)
# compute current cell and hidden state
cell = (remember_gate * prev_cell) + (in_gate * cell_gate)
hidden = out_gate * f.tanh(cell)
return hidden, cell
def _main():
"""
Run some basic tests on the API
"""
# define batch_size, channels, height, width
b, c, h, w = 1, 3, 4, 8
d = 5 # hidden state size
lr = 1e-1 # learning rate
T = 6 # sequence length
max_epoch = 20 # number of epochs
# set manual seed
torch.manual_seed(0)
print('Instantiate model')
model = ConvLSTMCell(c, d)
print(repr(model))
print('Create input and target Variables')
x = Variable(torch.rand(T, b, c, h, w))
y = Variable(torch.randn(T, b, d, h, w))
x = x.cuda()
y = y.cuda()
print('Create a MSE criterion')
loss_fn = nn.MSELoss()
print('Run for', max_epoch, 'iterations')
for epoch in range(0, max_epoch):
state = None
loss = 0
for t in range(0, T):
state = model(x[t], state)
loss += loss_fn(state[0], y[t])
print(' > Epoch {:2d} loss: {:.3f}'.format((epoch+1), loss.data[0]))
# zero grad parameters
model.zero_grad()
# compute new grad parameters through time!
loss.backward()
# learning_rate step against the gradient
for p in model.parameters():
p.data.sub_(p.grad.data * lr)
if __name__ == '__main__':
_main()
__author__ = "Alfredo Canziani"
__credits__ = ["Alfredo Canziani"]
__maintainer__ = "Alfredo Canziani"
__email__ = "[email protected]"
__status__ = "Prototype" # "Prototype", "Development", or "Production"
__date__ = "Jan 17"
|
st115922
|
hmishfaq:
model = ConvLSTMCell(c, d)
I haven’t looked closely at your code, but I had a similar problem and the reason was that I forgot to convert the model to CUDA:
model = ConvLSTMCell(c,d).cuda()
It could also be any other variable that you forgot to move to CUDA. Hope it helps!
|
st115923
|
Hi guys,
I’m new to PyTorch and usually write my networks in TensorFlow. I have some questions on how to do things correctly in PyTorch.
Suppose I have a two-layer network called A(x):
class A(nn.Module):
    def __init__(self):
        super(A, self).__init__()
        self.fc1 = nn.Linear(100, 100)
        self.fc2 = nn.Linear(100, 10)

    def forward(self, x):
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        x = F.relu(x)
        return x
Now I need outputs from fc1 and fc2 before applying relu. What is the ‘PyTorch’ way of achieving this? I was thinking of writing something like this:
def hidden_outputs(self, x):
    outs = {}
    x = self.fc1(x)
    outs['fc1'] = x
    ...
    return outs
and then calling A.hidden_outputs(x) from another script. Also, is it okay to write any function in addition to forward in the class? Can I for example write:
def apply_softmax(self, x):
    x = self.forward(x)
    x = F.softmax(x)
    return x
and use the function above to calculate gradients etc. in another script?
...
net = A()
x = data
out = A.softmax(x)
out.backward()
x_grad = x.grad
...
|
st115924
|
Here’s how you would rewrite A(x).
class A(nn.Module):
    def __init__(self):
        super(A, self).__init__()
        self.fc1 = nn.Linear(100, 100)
        self.fc2 = nn.Linear(100, 10)

    def forward(self, x):
        fc1 = self.fc1(x)
        a = F.relu(fc1)
        fc2 = self.fc2(a)
        b = F.relu(fc2)
        return fc1, fc2, a, b
This would give you both the hidden outputs of FC1 and FC2, and also the RELU-applied outputs as well.
If you would like to re-use the weights for one of the FC’s in some other script as you have mentioned,
# Load model.
model = A()
model.load_state_dict(...)
# Re-use trained FC1 layer weights.
fc1_outputs = model.fc1(Variable(...))
fc1_outputs = F.softmax(fc1_outputs)
and so on.
|
st115925
|
Thanks. Your suggestion looks clean and I like the idea to just have one function in the class. I will just return a dictionary with all the outputs in the forward function:
def forward(x):
    outs = {}
    fc1 = self.fc1(x)
    outs['fc1'] = fc1
    relu1 = F.relu(fc1)
    outs['relu_fc1'] = relu1
    ...
    return outs
|
st115926
|
When I execute the following code, I get a segmentation fault that I do not really understand.
import torch
from torch import nn
from torch.autograd import Variable
v = Variable(torch.randn(1, 1, 590, 45, 80), volatile=True)
model = nn.Sequential(nn.Conv3d(1,16,5), nn.ELU(),nn.Conv3d(16,16,5))
While the tensors are quite large, they should easily fit into the memory of my machine. Is there a maximum tensor size? Thanks for your help.
|
st115927
|
The goal is to accumulate gradients and then update the model on every Nth timestep. I'm not sure how to do it.
I'm not sure this works:
On every timestep call loss.backward(), and then on every Nth iteration call optimizer.step(); optimizer.zero_grad().
Would this work, or would the gradients calculated by loss.backward() be overwritten every timestep?
|
st115928
|
Call loss.backward() every time you want to accumulate gradients (they will be summed up) and afterwards call step() on the optimizer you’re using to update the model’s parameters.
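A minimal sketch of that pattern, updating every N steps (model, criterion, optimizer, N and the batches iterable are assumed to exist):
optimizer.zero_grad()
for t, (inp, target) in enumerate(batches, 1):
    loss = criterion(model(inp), target)
    loss.backward()              # gradients are accumulated (summed) into .grad
    if t % N == 0:
        optimizer.step()         # update using the accumulated gradients
        optimizer.zero_grad()    # reset before accumulating the next N steps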
|
st115929
|
I tried to install PyTorch using Anaconda on Mac with the following command:
conda install pytorch torchvision -c soumith
The installation seemed successful. However, after I ran python and did:
import torch
The following error occurred:
What was missing? I tried reinstalling several times but that didn’t help.
Thanks!
|
st115930
|
Hi,
This may help. https://stackoverflow.com/questions/45673949/error-importing-pytorch-python. Also do a clean installation by removing previous pytorch libraries that may remain in the system.
|
st115931
|
Thank you! I’ve actually seen that thread. However, the command I had previously used to install the packages was exactly the same as suggested in the answer to that question:
conda install pytorch torchvision -c soumith
So I wonder what was still missing.
Also, to remove the libraries, is it enough to call:
conda uninstall pytorch torchvision -c soumith
? If so, I’ve uninstalled and reinstalled PyTorch several times, but it still didn’t work.
Any further suggestions?
|
st115932
|
I created a very simple autoencoder and have been looking to feed in data that I have converted from NumPy. I found that converting a NumPy float array with torch.from_numpy gives me a torch.DoubleTensor.
When I feed this into my model it causes problems with the linear layer, giving the message:
torch.addmm received an invalid combination of arguments…
When I convert the DoubleTensor to float this error goes away.
Is this a bug, or is it a requirement to feed only float and not double into the linear layer?
Sorry if this is an obvious question but I couldn’t see any notes in the documents covering this and I’m sure others will find the same issue at some time.
|
st115933
|
Hi, I need more information especially about the errors, but I think it is because nn.Linear 's weight is float. Try
torch.from_numpy(NUMPY_ARRAY).float()
|
st115934
|
Thanks, I did get it to work by creating a new array:
new_array=torch.Tensor.float(torch.from_numpy(numpy_float_array))
which I think is doing the same thing as you are suggesting. My concern is that, whilst I can get it to work, others are likely to hit the same issue, since most NumPy float arrays seem to be 64-bit and hence convert to Double in PyTorch. I therefore think it is worthwhile flagging this as a potential issue in the documentation; it took me quite a while to track down the problem and I would like to avoid others wasting their time.
Many thanks
John
|
st115935
|
Many thanks Vishwak, I wasn’t aware that you could control type in this way when converting from numpy, that looks a good solution.
Regards
John
|
st115936
|
I think the error message is unhelpful because the user calls nn.Linear, not torch.addmm, so nn.Linear should raise the error. Otherwise, PyTorch would need automatic casting.
|
st115937
|
I want to index a two-dimension tensor like I do in numpy.
a = torch.rand(5,5)
b = torch.LongTensor([4, 3, 2, 1, 0])
But typing a[b,b] gives an error:
TypeError: indexing a tensor with an object of type LongTensor. The only supported types are integers, slices, numpy scalars and torch.LongTensor or torch.ByteTensor as the only argument.
How do I index a two-dimension tensor using other two tensors?
|
st115938
|
doing this is not supported yet.
For now you have to do:
a[b, :][:, b]
We plan to tackle this soon.
|
st115939
|
Thanks for your reply. But this still gives the same error:
>>> a[b,:][:,b]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: indexing a tensor with an object of type LongTensor. The only supported types are integers, slices, numpy scalars and torch.LongTensor or torch.ByteTensor as the only argument.
My pytorch vision is:
$ conda list | grep torch
pytorch 0.1.10 py35_1cu80 [cuda80] soumith
torchvision 0.1.6 py35_19 soumith
|
st115940
|
The answer you gave is not correct.
a = torch.rand(5,5)
b = torch.LongTensor([4,3,2,1,0])
a[b, :][:, b]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: indexing a tensor with an object of type LongTensor. The only supported types are integers, slices, numpy scalars and torch.LongTensor or torch.ByteTe
|
st115941
|
x[b].transpose(1, 0)[b].transpose(1, 0)
should do the trick.
Strangely, the error message for x[b, :] reads:
TypeError: indexing a tensor with an object of type torch.LongTensor. The only supported types are integers, slices, numpy scalars and torch.LongTensor or torch.ByteTensor as the only argument.
which seems self-contradictory.
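To make the trick concrete, a small self-contained check (shapes and values are made up):
import torch

x = torch.rand(5, 5)
b = torch.LongTensor([4, 3, 2, 1, 0])

# select the rows indexed by b, then (via a transpose) the columns indexed by b
sub = x[b].transpose(1, 0)[b].transpose(1, 0)
print(sub.size())   # torch.Size([5, 5]) -- rows and columns reordered by b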
|
st115942
|
Can also try adding something like:
_oldgetitem = torch.FloatTensor.__getitem__

def _getitem(self, slice_):
    if type(slice_) is tuple and torch.LongTensor in [type(x) for x in slice_]:
        # find the first dimension indexed with a LongTensor, bring it to the
        # front, index it there, then move it back to its original position
        i = [j for j, ix in enumerate(slice_)
             if type(ix) == torch.LongTensor][0]
        return self.transpose(0, i)[slice_[i]].transpose(i, 0)
    else:
        return _oldgetitem(self, slice_)

torch.FloatTensor.__getitem__ = _getitem
Then x[b, :][:, b] works as expected. Maybe a bit of a hack though.
Regarding x[b, :][:, b] vs. x[b, b] – this is something which has never been solved for numpy arrays.
|
st115943
|
Hi,
I'm building a very simple model with only one Conv-BatchNorm-ReLU block. But when I feed my input x, the following error occurs. Does anyone know the reason?
RuntimeError: Expected object of type CUDAFloatType but found type CPUFloatType for argument #3 ‘weight’
My x is already CUDAFloatType
Thanks
|
st115944
|
Hi,
I'm reading the PyTorch source code; sgd.py has this line:
p.data.add_(-group['lr'], d_p)
I think this code means p = p - lr * d_p, right? So my question is: how can add_() above achieve the lr * d_p multiplication?
Thanks
|
st115945
|
Please check the documentation for add.
The inplace version of torch.add(input, value=1, other, out=None) is input.add_(value=1, other, out=None). So p.data.add_(-group['lr'], d_p) is p.data.add_(value=-group['lr'], other=d_p).
I think this kind of API follows the design used by most of the vectorized operations.
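A quick sketch of the equivalence (toy values):
import torch

p = torch.rand(3)
d_p = torch.rand(3)
lr = 0.1

expected = p - lr * d_p
p.add_(-lr, d_p)                    # in-place: p <- p + (-lr) * d_p
# in newer PyTorch versions the same update is written p.add_(d_p, alpha=-lr)
print((p - expected).abs().max())   # ~0, both formulations agree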
|
st115946
|
I am using pinned memory to speed up the training process. In my program, I use a thread to move the training data to the pinned memory. Occasionally, I encountered the following error in the middle of the training:
THCudaCheck FAIL file=/py/conda-bld/pytorch_1493677666423/work/torch/lib/THC/THCCachingHostAllocator.cpp line=258 error=11 : invalid argument
Exception in thread Thread-4:
Traceback (most recent call last):
File "/home/heilaw/.conda/envs/pytorch/lib/python3.5/threading.py", line 914, in _bootstrap_inner
self.run()
File "/home/heilaw/.conda/envs/pytorch/lib/python3.5/threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "./train.py", line 43, in pin_memory
data["xs"] = [x.pin_memory() for x in data["xs"]]
File "./train.py", line 43, in <listcomp>
data["xs"] = [x.pin_memory() for x in data["xs"]]
File "/home/heilaw/.conda/envs/pytorch/lib/python3.5/site-packages/torch/tensor.py", line 78, in pin_memory
return type(self)().set_(storage.pin_memory()).view_as(self)
File "/home/heilaw/.conda/envs/pytorch/lib/python3.5/site-packages/torch/storage.py", line 84, in pin_memory
return type(self)(self.size(), allocator=allocator).copy_(self)
RuntimeError: cuda runtime error (11) : invalid argument at /py/conda-bld/pytorch_1493677666423/work/torch/lib/THC/THCCachingHostAllocator.cpp:258
Any idea why this would happen? How can I debug it?
|
st115947
|
if you can give me a script to reproduce this, what I would do is put a printf in https://github.com/pytorch/pytorch/blob/master/torch/lib/THC/THCCachingHostAllocator.cpp#L258 and see what the size or ptr values are when the failure occurs. From there, I would backtrack to see why these failure values are being generated.
|
st115948
|
It’s an ongoing project so I cannot share that with you. But I will see if I can write a script for you to reproduce that. Thanks!
|
st115949
|
I haven’t had a second to write a self-contained, reproducing example script, but wanted to chime in to say that I resolved this issue by making the Tensor contiguous before pinning it. (I had taken a slice earlier.)
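A sketch of that workaround (the tensor and slice here are made up; pin_memory() needs a CUDA-enabled build):
import torch

x = torch.randn(100, 64)
chunk = x[:, 10:20]                        # a column slice is not contiguous
pinned = chunk.contiguous().pin_memory()   # contiguous() avoids the pinning error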
|
st115950
|
Python 2.7.13 |Anaconda 4.4.0 (x86_64)| (default, Dec 20 2016, 23:05:08)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> import torch
>>> torch.__version__
'0.1.12_2'
>>> import torch.nn as nn
>>> from torch.autograd import Variable
>>> v = Variable(torch.randn(2,3))
>>> v
Variable containing:
0.6709 1.5865 2.2447
-0.1978 -2.0900 -0.8279
[torch.FloatTensor of size 2x3]
>>> dp = nn.Dropout(0.5)
>>> dp
Dropout (p = 0.5)
>>> v_dp = dp(v)
>>> v_dp
Variable containing:
1.3418 3.1730 0.0000
-0.0000 -4.1799 -1.6558
[torch.FloatTensor of size 2x3]
>>> v
Variable containing:
0.6709 1.5865 2.2447
-0.1978 -2.0900 -0.8279
[torch.FloatTensor of size 2x3]
Hi
Please notice the above erroneous behaviour of dropout. I am not sure why it multiplies by 2
EDIT: solved
I'm checking for the model mode and adjusting it back accordingly; I am not sure, though, why this scaling was added. The docs say it is so that the evaluation pass reduces to an identity.
if self.training:
drop_mask = drop_mask * (1 - config.word_drop)
|
st115951
|
Multiplying the surviving units by 1 / (1 - drop_prob), the inverse of the keep probability, is the core idea of inverted dropout; with p = 0.5 that factor is 2. You can read more about it in www.deeplearningbook.org
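A small check of this (which units get zeroed is random, but the surviving values are scaled by 2 for p = 0.5):
import torch
import torch.nn as nn
from torch.autograd import Variable

p = 0.5
dp = nn.Dropout(p)
v = Variable(torch.ones(1, 6))

dp.train()
print(dp(v))   # surviving entries are scaled by 1 / (1 - p) = 2, the rest are 0

dp.eval()
print(dp(v))   # in eval mode dropout is the identity, so no scaling is applied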
|
st115952
|
I’d like to keep track of an LSTM hidden state and at some point revert to an earlier state. I’m running into errors when I try to reuse the previous state. I’m brand new to pytorch and any help would be greatly appreciated!
There are two things I've tried so far. The first is to save off the output Variables and try to reuse them later. This leads to an error in backward() since we are going over the graph again and it has already been discarded. This makes sense to me, and retaining the graph seems to not be what I want.
The second is to get the Tensors out of the output Variables and create new Variables with those saved Tensors. This seems to break autograd, giving me a ‘there are no graph nodes that require computing gradients’ runtime error. I don’t understand why this is the case.
Thanks in advance!
|
st115953
|
I’m not sure what is going wrong, but you should only get the second error if there are really no graph nodes requiring a gradient, which should not be the case if you use the hidden states as input to an LSTM and then backpropagate on the output since the LSTM parameters will require gradients.
|
st115954
|
Thanks for the response. I had an error in my code where I was passing a tensor instead of a variable to the loss function. No wonder there were no graph nodes requiring gradients!
|
st115955
|
After running this example dynamic computation graph code
github.com
jcjohnson/pytorch-examples/blob/master/nn/dynamic_net.py
import random
import torch
from torch.autograd import Variable
"""
To showcase the power of PyTorch dynamic graphs, we will implement a very strange
model: a fully-connected ReLU network that on each forward pass randomly chooses
a number between 1 and 4 and has that many hidden layers, reusing the same
weights multiple times to compute the innermost hidden layers.
"""
class DynamicNet(torch.nn.Module):
def __init__(self, D_in, H, D_out):
"""
In the constructor we construct three nn.Linear instances that we will use
in the forward pass.
"""
super(DynamicNet, self).__init__()
self.input_linear = torch.nn.Linear(D_in, H)
self.middle_linear = torch.nn.Linear(H, H)
This file has been truncated.
which randomly chooses the number of hidden layers in [0, 3] each iteration, I printed the parameters in the network with print([x for x in model.parameters()]) and it only shows the original nn modules.
[Parameter containing:
0.0682 -0.0155 -0.3447
0.1916 -0.1639 0.1732
[torch.FloatTensor of size 2x3]
, Parameter containing:
0.0412
-0.3786
[torch.FloatTensor of size 2]
, Parameter containing:
0.3711 0.2467
-0.4274 0.6104
[torch.FloatTensor of size 2x2]
, Parameter containing:
0.5531
0.0944
[torch.FloatTensor of size 2]
, Parameter containing:
0.0678 -0.4616
[torch.FloatTensor of size 1x2]
, Parameter containing:
0.2338
[torch.FloatTensor of size 1]
]
It doesn't show the dynamic layers used during training. Question: how does PyTorch store the dynamic parameters? Are the dynamic parameters used during inference?
|
st115956
|
I ran into an error on both of my machines; I don't know if someone else can reproduce it.
import torch
x = torch.cuda.FloatTensor(1, 1, 16384, 16384)
x = torch.autograd.Variable(x, requires_grad=True)
y = x.expand(2, x.size(1), x.size(2), x.size(3))
grid = torch.rand(2, 1, 1, 2)
import torch.nn.functional as F
z = F.grid_sample(y, torch.autograd.Variable(grid.cuda()))
z.sum().backward()
I got error
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File “/opt/conda/lib/python2.7/site-packages/torch/autograd/variable.py”, line 156, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
File "/opt/conda/lib/python2.7/site-packages/torch/autograd/__init__.py", line 98, in backward
variables, grad_variables, retain_graph)
File “/opt/conda/lib/python2.7/site-packages/torch/autograd/function.py”, line 91, in apply
return self._forward_cls.backward(self, *args)
File “/opt/conda/lib/python2.7/site-packages/torch/autograd/function.py”, line 194, in wrapper
outputs = fn(ctx, *tensor_args)
File “/opt/conda/lib/python2.7/site-packages/torch/nn/_functions/vision.py”, line 48, in backward
grad_output)
RuntimeError: CUDNN_STATUS_BAD_PARAM
It works fine when running under cpu mode.
It works also fine when replacing last line with z.backward(z.data.clone().fill_(1))
torch version is ‘0.2.0+0cd149f’
|
st115957
|
This seems like a bug, right?
Because it works for z.backward(z.data.clone().fill_(1)).
|
st115958
|
Same problem for me; I found this thread when I googled a similar problem with the grid_sample backward pass.
|
st115959
|
I have PyTorch on ARMv8. When I test some models such as ResNet-50, I find the performance is not as good as I expect, even though I have installed a BLAS library on ARM. I want to ask whether there is a third-party library, such as NNPACK, that can improve the performance.
|
st115960
|
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
Can someone explain to me what the 3, 6, 5 numbers at the first and the second convolution layers mean?
|
st115961
|
The documentation for nn.Conv2d says:
In the first nn.Conv2d, 3 is the number of input channels, 6 is the number of output channels, and 5 is the size of the kernel, assuming a square kernel. So your kernel is a [5, 5] box.
In the second nn.Conv2d, 6 is the number of input channels, 16 is the number of output channels, and 5 is the size of the kernel.
More information available here: link
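To see the shapes these choices produce (assuming a 32x32 input image, as in the CIFAR-10 tutorial this network resembles):
import torch
import torch.nn as nn
from torch.autograd import Variable

conv1 = nn.Conv2d(3, 6, 5)    # 3 input channels -> 6 output channels, 5x5 kernel
pool = nn.MaxPool2d(2, 2)
conv2 = nn.Conv2d(6, 16, 5)   # 6 input channels -> 16 output channels, 5x5 kernel

x = Variable(torch.randn(1, 3, 32, 32))
print(conv1(x).size())                 # (1, 6, 28, 28): 32 - 5 + 1 = 28
print(pool(conv1(x)).size())           # (1, 6, 14, 14)
print(conv2(pool(conv1(x))).size())    # (1, 16, 10, 10): 14 - 5 + 1 = 10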
|
st115962
|
Hello,
I am trying to do the following:
Load the values x and y using the data loader.
Do a forward pass, and update the parameters.
I need to change some values of y based on some criterion, and I need to be able to sample from the updated y from the next iteration onwards.
How can this be done?
|
st115963
|
You can do this when you get it from the DataLoader itself. Assume you have a training loop like this:
my_data_loader = torch.utils.data.DataLoader(dataset=dataset, batch_size=batch_size, shuffle=shuffle)
for i, itr in enumerate(my_data_loader):
X, Y = itr
Y = my_function(Y)
# Do something
I think this should work
|
st115964
|
No, I don't think my snippet saves the changes. But there is probably very little overhead in doing something like this, so you might as well do it.
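One way to make the change persist is to write the modified targets back into the underlying dataset; a sketch, assuming a TensorDataset (whose targets live in target_tensor) and shuffle=False so batch order maps back to dataset indices:
for i, (X, Y) in enumerate(my_data_loader):
    new_Y = my_function(Y)
    # write the modified targets back so the next epoch samples the updated values
    start = i * my_data_loader.batch_size
    dataset.target_tensor[start:start + new_Y.size(0)] = new_Y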
|
st115965
|
After running many batches, I get an OOM (out of memory) error with an LSTM:
t....................THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1502009910772/work/torch/lib/THC/generic/THCStorage.cu line=66 error=2 : out of memory
Traceback (most recent call last):
File "train.py", line 353, in <module>
run(**args.__dict__)
File "train.py", line 271, in run
loss.backward(rationale_selected_node)
File "/mldata/conda/envs/pytorch/lib/python3.6/site-packages/torch/autograd/variable.py", line 156, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
File "/mldata/conda/envs/pytorch/lib/python3.6/site-packages/torch/autograd/__init__.py", line 98, in backward
variables, grad_variables, retain_graph)
File "/mldata/conda/envs/pytorch/lib/python3.6/site-packages/torch/autograd/function.py", line 291, in _do_backward
result = super(NestedIOFunction, self)._do_backward(gradients, retain_variables)
File "/mldata/conda/envs/pytorch/lib/python3.6/site-packages/torch/autograd/function.py", line 299, in backward
result = self.backward_extended(*nested_gradients)
File "/mldata/conda/envs/pytorch/lib/python3.6/site-packages/torch/nn/_functions/rnn.py", line 313, in backward_extended
self._reserve_clone = self.reserve.clone()
RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch_1502009910772/work/torch/lib/THC/generic/THCStorage.cu:66
The weird thing is, I printed the GPU memory every 3 seconds while it was running, and nothing odd shows up at the moment this occurs:
| N/A 68C P0 151W / 150W | 6796MiB / 7618MiB | 89% Default |
| N/A 67C P0 137W / 150W | 6796MiB / 7618MiB | 76% Default |
| N/A 68C P0 102W / 150W | 6796MiB / 7618MiB | 74% Default |
| N/A 68C P0 158W / 150W | 6796MiB / 7618MiB | 98% Default |
| N/A 67C P0 118W / 150W | 6796MiB / 7618MiB | 97% Default |
| N/A 67C P0 107W / 150W | 6796MiB / 7618MiB | 96% Default |
| N/A 67C P0 122W / 150W | 6796MiB / 7618MiB | 96% Default |
| N/A 68C P0 115W / 150W | 6796MiB / 7618MiB | 97% Default |
| N/A 68C P0 135W / 150W | 6796MiB / 7618MiB | 96% Default |
| N/A 67C P0 139W / 150W | 6796MiB / 7618MiB | 76% Default |
| N/A 67C P0 121W / 150W | 6796MiB / 7618MiB | 98% Default |
| N/A 67C P0 141W / 150W | 6796MiB / 7618MiB | 74% Default |
| N/A 68C P0 160W / 150W | 6796MiB / 7618MiB | 96% Default |
| N/A 68C P0 101W / 150W | 6796MiB / 7618MiB | 97% Default |
| N/A 68C P0 159W / 150W | 6796MiB / 7618MiB | 81% Default |
| N/A 68C P0 140W / 150W | 6796MiB / 7618MiB | 75% Default |
| N/A 68C P0 144W / 150W | 6796MiB / 7618MiB | 75% Default |
| N/A 66C P0 80W / 150W | 6796MiB / 7618MiB | 95% Default |
| N/A 67C P0 108W / 150W | 6796MiB / 7618MiB | 75% Default |
| N/A 68C P0 131W / 150W | 6796MiB / 7618MiB | 75% Default |
| N/A 68C P0 135W / 150W | 6796MiB / 7618MiB | 76% Default |
| N/A 67C P0 102W / 150W | 6796MiB / 7618MiB | 95% Default |
| N/A 67C P0 53W / 150W | 6796MiB / 7618MiB | 98% Default |
| N/A 67C P0 137W / 150W | 6796MiB / 7618MiB | 97% Default |
| N/A 67C P0 116W / 150W | 6796MiB / 7618MiB | 96% Default |
| N/A 67C P0 130W / 150W | 6796MiB / 7618MiB | 98% Default |
| N/A 68C P0 95W / 150W | 6796MiB / 7618MiB | 97% Default |
| N/A 66C P0 161W / 150W | 6796MiB / 7618MiB | 74% Default |
| N/A 67C P0 158W / 150W | 6796MiB / 7618MiB | 95% Default |
| N/A 66C P0 104W / 150W | 6796MiB / 7618MiB | 82% Default |
| N/A 67C P0 94W / 150W | 6796MiB / 7618MiB | 97% Default |
| N/A 67C P0 150W / 150W | 6796MiB / 7618MiB | 73% Default |
| N/A 67C P0 140W / 150W | 6796MiB / 7618MiB | 75% Default |
| N/A 67C P0 100W / 150W | 6796MiB / 7618MiB | 96% Default |
| N/A 66C P0 96W / 150W | 6796MiB / 7618MiB | 96% Default |
| N/A 67C P0 122W / 150W | 6796MiB / 7618MiB | 74% Default |
| N/A 68C P0 133W / 150W | 6796MiB / 7618MiB | 97% Default |
| N/A 60C P0 42W / 150W | 0MiB / 7618MiB | 97% Default |
| N/A 58C P0 42W / 150W | 0MiB / 7618MiB | 100% Default |
| N/A 56C P0 41W / 150W | 0MiB / 7618MiB | 97% Default |
| N/A 55C P0 41W / 150W | 0MiB / 7618MiB | 99% Default |
(printing using bash command while true; do { nvidia-smi | grep Default; sleep 3; } done)
|
st115966
|
No theories for why this is happening and/or solutions? I think it's weird that it continues, with memory entirely unchanged, for ~170 batches, and then suddenly Bam! OOM.
The code is here by the way: https://github.com/hughperkins/rationalizing-neural-predictions/tree/30edf139f2c6d89b99fb97dfaf82d1b44c5bfd57
|
st115967
|
I have the following code and the last line is giving me that error. Is there a way I can do indexing copy like numpy?
N, C, H, W = features.size()
gram = torch.zeros((N, C, C))
print(gram.size())
for i in range(N):
f = features[i, :, :, :]
print(f.size())
f = f.view(C, -1)
print(f.size())
g = torch.mm(f, f.t())
print(g.size())
gram[i, :, :] = g
|