st102300 | What kind of model architecture are you using?
The Quadro K1200 has, if I recall correctly, 4GB of GPU RAM.
Could you try to check the memory usage with torch.cuda.memory_allocated()?
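For example, a quick way to print it (my snippet; the return value is in bytes):
import torch
print(torch.cuda.memory_allocated() / 1024**2, 'MB allocated') |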
st102301 | jpainam:
I’m feeding two images of size (224,224); (448,44
Another way to investigate this issue is to resize the images to 32x32 and 64x64 just for debugging, and see how much memory this reduced size consumes. Then, regardless of whether you find the error or not, if the image size you are using is affecting your memory, maybe you can resize the images a bit, not necessarily to 32x32 or 64x64, but to some other values that let your job fit on the GPU. You can also run 'watch -n 2 nvidia-smi' from the terminal to watch the GPU. |
st102302 | Thanks guys, reducing the size of the image helped me understand it was due to the memory size.
Besides, I moved to more robust GPUs and want to use both GPUs (0 and 1), but I only end up running on one GPU.
torch.cuda.set_device(0), torch.cuda.set_device(1) didn’t help, nor did os.environ["CUDA_VISIBLE_DEVICES"] = "0,1".
What can I do to use both GPUs? |
st102303 | You could use nn.DataParallel to split your batch onto your GPUs.
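A minimal sketch of the usual pattern (untested; MyModel and input_batch are placeholders for your own network and data):
import torch
import torch.nn as nn

model = MyModel()  # MyModel stands in for your own network class
model = nn.DataParallel(model, device_ids=[0, 1])
model.cuda()
output = model(input_batch.cuda())  # the batch dimension is split across GPUs 0 and 1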
Here is a good tutorial to get you started. |
st102304 | I’m puzzled when I notice my PyTorch script using 600% CPU (viewed in top) when I’ve set tensors and model to GPU. Here’s what I’ve checked with .is_cuda:
Input sequence tensors
Input sequence length tensors
Target tensors
PackedSequence tensors
Initialized hidden tensors
Output tensors
Output hidden tensors
Loss
Also, my data loaders have num_workers = 1.
The GPU is being used (viewed in nvidia-smi -l 3)
Am I missing something? Why is there still so much CPU being used? I’m wondering if my pipeline is not working as intended. |
st102305 | There may be other things that become the bottleneck of the training process, such as disk IO if the dataset is large, so the GPU is idle all the time. |
st102306 | In the code, I make sure to model.cuda() and Variable(x.type(dtype)) where dtype=torch.cuda.FloatTensor.
I also print x.is_cuda() and next(model.parameters()).is_cuda(), both are True.
So it seems that everything is on the GPU. But when I execute the program (which runs model(x)) and monitor the GPU usage with watch -n 1 nvidia-smi, the Volatile GPU-Util shows 0%. So it looks like the GPU is not working, and the processing speed is indeed too slow; I also don’t feel any heat from my video card. I exclude the possibility that my card doesn’t support CUDA, because it was working until I started using a customized dataloader for my dataset.
Is there anything I am missing, in order to make sure the program will be using GPU? |
st102307 | Thanks for your reply, it might be.
But is that going to cause 0% GPU utilization all the time? After all, at least the model forward should have been on the GPU. Sorry, I don’t fully understand the interaction between CPU and GPU. |
st102308 | Yeah, but the loading happens on the CPU. If the data is not loaded, you can’t forward the model. |
st102309 | But I actually checked and printed the output type with is_cuda after the model() line, and it returns True. So the data batch is loaded and has been through the forward pass, right? |
st102310 | Have you solved this problem? I have the same problem with my own collate_fn, and I am wondering if that is the cause. |
st102311 | Maybe you should check the state of the CPU and hard disk to see whether they are overloaded. |
st102312 | I am compiling the PyTorch source code from scratch.
I then ran into an error with the following message. How can I avoid it?
I also referred to the following issue, but it is not resolved there.
https://github.com/pytorch/pytorch/issues/5539
extra_link_args += ['-L/usr/lib/x86_64-linux-gnu/']
Current output log tail
Processing dependencies for torch==0.5.0a0+788b2e9
Finished processing dependencies for torch==0.5.0a0+788b2e9
warning: no library file corresponding to '/usr/lib/x86_64-linux-gnu/libcudnn.so.7' found (skipping)
warning: no library file corresponding to '/usr/lib/x86_64-linux-gnu/libcudnn.so.7' found (skipping) |
st102313 | Thank you for commenting, and sorry for mis-titling.
One further question (if you know the reason):
The warning message comes from distutils.
Does this mean setup.py’s “libraries” parameter is not working correctly?
https://docs.python.org/3/distutils/setupscript.html#library-options
Ref.
https://github.com/python/cpython/blob/3.6/Lib/distutils/ccompiler.py#L1111 |
st102314 | I noticed that on pytorch.org, both CUDA and CPU conda packages are available. May I know if the CPU package has the MKL-DNN optimizations for PyTorch/Caffe2? Thanks. |
st102315 | The conda package does not have MKL-DNN optimizations. We will be including MKL-DNN in the binaries of the next major version. |
st102316 | Recently, I found pack_sequence, pack_padded_sequence, and pad_packed_sequence for RNN modules. But I am not sure when these functions are useful.
Q1. To use pack_padded_sequence, sorting tensors by length is needed for every mini-batch, which has a cost. On the other hand, I heard pack_padded_sequence skips the calculation of padded elements. Comparing the two, is skipping the calculation much more beneficial? The link below is an example of pack_padded_sequence.
https://gist.github.com/Tushar-N/dfca335e370a2bc3bc79876e6270099e
pad_packed_demo.py
import torch
import torch.nn as nn
from torch.autograd import Variable
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
import torch.nn.functional as F
import numpy as np
import itertools
def flatten(l):
return list(itertools.chain.from_iterable(l))
(gist truncated; see the link above for the full file)
Q2. Although pack_sequence looks similar to pack_padded_sequence, pack_sequence doesn’t need padded data. In what kind of situations can we use this function?
Thank you! |
st102317 | With pack_sequence in PyTorch 0.4, you can call it directly without padding zeros:
>>> import torch
>>> import torch.nn.utils.rnn as rnn_utils
>>> a = torch.Tensor([1, 2, 3])
>>> b = torch.Tensor([4, 5])
>>> c = torch.Tensor([6])
>>> packed = rnn_utils.pack_sequence([a, b, c])
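To recover the padded form, here is a possible round trip (a continuation I'm adding for illustration, using the same tensors as above):
>>> from torch.nn.utils.rnn import pad_packed_sequence
>>> padded, lengths = pad_packed_sequence(packed)
>>> padded
tensor([[1., 4., 6.],
        [2., 5., 0.],
        [3., 0., 0.]]) |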
st102318 | I’m having the same confusion as your Q2. The following link is the only use case of pack_sequence I found: Code. |
st102319 | Hello!
My use case is to convert a model containing a recurrent module (GRU) from PyTorch to Caffe2.
I see that the only way of doing that is by using onnx.
My smallest example is the following:
import sys
import numpy as np
import torch
import torch.onnx
import onnx
import caffe2.python.onnx.backend as backend
from caffe2.python.onnx.backend import Caffe2Backend
def main():
# step 0: prepare model which takes sequences of size 8 as input and has 1 forward hidden layer of size 4.
model_pytorch = torch.nn.GRU(input_size=8,
hidden_size=4,
num_layers=1)
model_pytorch.eval()
x = torch.randn(2, 1, 8) # seq_len x batch x input_size
h = torch.zeros(1, 1, 4) # (num_layers * n_directions) x batch x hidden_size
try:
_ = model_pytorch(x, h) # checking that model inference is OK
except (Exception, RuntimeError) as e:
print(e)
print(' ===== Unsuccessful model inference run, exiting ===== ')
return
finally:
print(' ===== Step 0 finished ===== ')
# step 1: convert to ONNX
onnx_proto_output = "temp.onnx"
try:
torch.onnx.export(model_pytorch, (x, h), onnx_proto_output, export_params=True, verbose=True)
except (Exception, RuntimeError) as e:
print(e)
print(' ===== Unsuccessful pytorch->ONNX run, exiting ===== ')
return
finally:
print(' ===== Step 1 finished ===== ')
# step 2: check ONNX model using Caffe2-ONNX backend
model_onnx = onnx.load(onnx_proto_output)
print(onnx.checker.check_model(model_onnx))
print(onnx.helper.printable_graph(model_onnx.graph))
x_ = x.numpy()
h_ = h.numpy()
try:
outputs = backend.run_model(model_onnx, (x_, h_))
print(outputs)
except (Exception, RuntimeError) as e:
print(e)
print(' ===== Unsuccessful Caffe2.onnx run, exiting ===== ')
return
finally:
print(' ===== Step 2 finished ===== ')
# step 3: save model to caffe2 format
try:
init_net, predict_net = Caffe2Backend.onnx_graph_to_caffe2_net(model_onnx)
with open('init_net.pb', "wb") as f:
f.write(init_net.SerializeToString())
with open('predict_net.pb', "wb") as f:
f.write(predict_net.SerializeToString())
except (Exception, RuntimeError) as e:
print(e)
print(' ===== Unsuccessful Caffe2 save run, exiting ===== ')
return
finally:
print(' ===== Step 3 finished ===== ')
if __name__ == '__main__':
main()
It fails on line 57, at the run_model(...) call of step 2:
ONNX FATAL: list index out of range
A little investigation shows that the error is around line 425 of this file:
424 if x.name == W:
425 input_size = x.type.tensor_type.shape.dim[2].dim_value
426 break
It turns out that in this case the matrix which we’re considering has only 2 dims (12x8):
name: "10"
type {
tensor_type {
elem_type: FLOAT
shape {
dim {
dim_value: 12
}
dim {
dim_value: 8
}
}
}
}
I wonder if there’s a version mismatch between the onnx/caffe2 ways of handling the models?
If so, what’s a good way of fixing that?
I use pytorch 0.4.0, onnx 1.2.1 (latest from source), caffe2 (latest from source).
Thanks! |
st102321 | Ok, with no answers provided, it seems I figured out a workaround that suits my particular case.
I hardcoded the necessary size at line 410 and commented out the following Reshape/Squeeze block (i.e. reverting the commit). |
st102322 | I hit the same issue, and after looking at the PyTorch ONNX exporter code and the Caffe2 ONNX importer code, I realized the ONNX importer expects the LSTM parameters in a slightly different format; it was a version conflict.
Updating to PyTorch 0.4.1 (released 5 days ago) resolved my issue. It might be worth trying for you. |
st102323 | Hello,
I’m working on a model that has dynamic masking to avoid resampling actions already taken.
The mask function is:
def apply_mask( attentions, mask, prev_idxs):
if mask is None:
mask = torch.zeros(attentions.size()).byte().cuda()
maskk = mask.clone()
if prev_idxs is not None:
for i,j in zip(range(attentions.size(0)),prev_idxs.data):
maskk[i,j[0]] = 1
attentions[maskk] = -np.inf
return attentions, maskk
When I apply .multinomial() to the probabilities, it occasionally samples actions with zero probability.
For example, when I run the following function:
def count(n):
k = 0
for j in range(n):
attentions = Variable(torch.Tensor(128,50).uniform_(-10, 10).cuda())
prev_actions = None
mask = None
actions = []
for di in range(50):
attentions, mask = apply_mask( attentions, mask, prev_actions)
probs = F.softmax(attentions).cuda()
prev_actions = probs.multinomial()
for old_idxs in actions:
# compare new idxs
if old_idxs.eq(prev_actions).data.any():
k+=1
print(' [!] resampling')
actions.append(prev_actions)
return k
I obtain a relative frequency of 0.00043 of these bad samples over a run of 100,000.
Is there a problem with the .multinomial() function, or is there a better way to apply the mask?
Thank you in advance. |
st102324 | I ran your code, and I’m unable to replicate your results (nothing was resampled).
Could you please try some of the following (it would help debugging):
building from source and testing again
running on CPU and seeing if the problem still persists
running the script below and seeing what happens
import torch
dist = torch.zeros(1000).cuda()
dist[0] = 1
x = torch.multinomial(dist, 1000, True)
sum(x) # should equal 0 |
st102325 | If it is any help, I looked into it and isolated the issue a bit more. It happens consistently (on GPU) and is reproducible on two different machines, with PyTorch 0.3 and CUDA 9.0/9.1. I found that it depends on the range of the logits as well; if I change logits_range to 1 or 100 it does not happen. By saving the random state I can trigger the incorrect sampling immediately.
import torch
from torch.autograd import Variable
import torch.nn.functional as F
import numpy as np
from tqdm import tqdm
def test(n, hot=False, logits_range=10):
torch.manual_seed(1234)
logits = Variable(torch.Tensor(128, 50).uniform_(-logits_range, logits_range).cuda())
# Set randomly 40 elements per row to 0
mask = torch.zeros_like(logits).byte()
_, idx_mask = Variable(torch.Tensor(128, 50).uniform_(0, 1).cuda()).topk(40, 1)
mask.scatter_(1, idx_mask, True)
logits[mask] = -np.inf
probs = F.softmax(logits, dim=1)
assert (probs[mask] == 0).all()
assert (torch.abs(probs.sum(1) - 1) < 1e-6).all()
if hot:
with open('rng_state.pt', 'rb') as f:
rng_state = torch.load(f)
torch.cuda.set_rng_state(rng_state)
for j in tqdm(range(n)):
rng_state = torch.cuda.get_rng_state()
sample = probs.multinomial(1).squeeze(-1)
mask_sample = mask.gather(1, sample.unsqueeze(-1)).squeeze(-1)
if mask_sample.any():
print("Sampled value that was masked and had probability 0 in iteration {}".format(j))
wrong = torch.nonzero(mask_sample).squeeze(-1)
print("Wrong samples: indices {}, sampled {}, probs {}".format(
wrong.data.cpu().numpy().tolist(),
sample[wrong].data.cpu().numpy().tolist(),
probs[wrong, sample[wrong]].data.cpu().numpy().tolist()
))
if hot:
break
with open('rng_state.pt', 'wb') as f:
torch.save(rng_state, f)
if __name__ == "__main__":
with torch.cuda.device(0):
test(100000, hot=False) |
st102326 | Thanks for the details, @wouter. There’s an issue open for this here; it might help to post there as well. |
st102327 | I ran into the same issue today as well.
I see there was a fix. Is recompiling from source my only way to get that fix at the moment? |
st102328 | A fix was included in PyTorch 0.4, but unfortunately the problem is not fully fixed. See the discussion in the GitHub issue “CUDA multinomial with replacement can select zero-probability events” (github.com/pytorch/pytorch, opened by coventry on 2018-01-25, closed by apaszke). |
st102329 | qGgxATi66o8.jpg1132×856 173 KB
I run this and get this error
UnicodeDecodeError Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 net.to(device)
c:\users\илья\appdata\local\programs\python\python36\lib\site-packages\torch\nn\modules\module.py in to(self, *args, **kwargs)
377 return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
378
--> 379 return self._apply(convert)
380
381 def register_backward_hook(self, hook):
c:\users\илья\appdata\local\programs\python\python36\lib\site-packages\torch\nn\modules\module.py in _apply(self, fn)
183 def _apply(self, fn):
184 for module in self.children():
--> 185 module._apply(fn)
186
187 for param in self._parameters.values():
c:\users\илья\appdata\local\programs\python\python36\lib\site-packages\torch\nn\modules\module.py in _apply(self, fn)
189 # Tensors stored in modules are graph leaves, and we don't
190 # want to create copy nodes, so we have to unpack the data.
--> 191 param.data = fn(param.data)
192 if param._grad is not None:
193 param._grad.data = fn(param._grad.data)
c:\users\илья\appdata\local\programs\python\python36\lib\site-packages\torch\nn\modules\module.py in convert(t)
375
376 def convert(t):
--> 377 return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
378
379 return self._apply(convert)
c:\users\илья\appdata\local\programs\python\python36\lib\site-packages\torch\cuda\__init__.py in _lazy_init()
160 _check_driver()
161 torch._C._cuda_init()
--> 162 _cudart = _load_cudart()
163 _cudart.cudaGetErrorName.restype = ctypes.c_char_p
164 _cudart.cudaGetErrorString.restype = ctypes.c_char_p
c:\users\илья\appdata\local\programs\python\python36\lib\site-packages\torch\cuda\__init__.py in _load_cudart()
57 # First check the main program for CUDA symbols
58 if platform.system() == 'Windows':
--> 59 lib = find_cuda_windows_lib()
60 else:
61 lib = ctypes.cdll.LoadLibrary(None)
c:\users\илья\appdata\local\programs\python\python36\lib\site-packages\torch\cuda\__init__.py in find_cuda_windows_lib()
30 proc = Popen(['where', 'cudart64*.dll'], stdout=PIPE, stderr=PIPE)
31 out, err = proc.communicate()
--> 32 out = out.decode().strip()
33 if len(out) > 0:
34 if out.find('\r\n') != -1:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x88 in position 9: invalid start byte
windows 10
python 3.6.4
cuda 9.0 |
st102330 | Hm, this looks more like an OS/Python issue to me rather than a PyTorch-specific problem. I guess it’s because you have non-ASCII characters in your path (c:\users\илья\).
Maybe try running the code as a .py script adding the following at the top of the script:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
If possible for debugging, I would try to install PyTorch in a directory that has a path with ascii-characters only. If that doesn’t help, we know that there’s a different issue, and otherwise, this might be something @smth knows more about. |
st102331 | I have a multi-label classification problem. The total number of labels is extremely large (111,556), so I am planning to use a sparse vector to store the one-hot version of the labels. However, I find that there is no multi-label loss function that supports sparse vectors. Below is example code.
So how can I solve this problem? Thanks.
import torch
i = torch.LongTensor([[0,1],[0,12]])
v = torch.FloatTensor([1, 1])
i_predict = torch.LongTensor([[0,2],[0,12]])
v_predict = torch.FloatTensor([1, 1])
predict = torch.sparse.FloatTensor(i_predict.t(), v_predict, torch.Size([1,111556]))
target = torch.sparse.FloatTensor(i.t(), v, torch.Size([1, 111556]))
criterion = torch.nn.BCELoss()
loss = criterion(predict, target)
print loss
if input.nelement() != target.nelement():
1476 raise ValueError("Target and input must have the same number of elements. target nelement ({}) "
1477 "!= input nelement ({})".format(target.nelement(), input.nelement()))
RuntimeError: numel is not implemented for type torch.sparse.FloatTensor |
st102332 | I have created a custom nn module, which I added to a VGG layer. However, the program runs very slowly after adding the custom LSTM layer. Essentially, the LSTM layer makes an LSTM cell for each row, where the weights are two types of learnable convolutions. One convolution convolves the input, while the other convolves the previous LSTM cell’s hidden layer. These two results are then added, and, using the equations defined in DeepMind’s Pixel RNN paper, the gates for the next LSTM cell are computed. I am wondering why the model trains so slowly. I have included code below. |
Here is the function which creates the convolutions and LSTM cells, along with the the definition of my custom LSTM cell.
class RLSTM(nn.Module):
def __init__(self,ch):
super(RLSTM,self).__init__()
self.ch=ch
self.input_to_state = torch.nn.Conv2d(self.ch,4*self.ch,kernel_size=(1,3),padding=(0,1)).cuda()
self.state_to_state = torch.nn.Conv2d(self.ch,4*self.ch,kernel_size=(1,3),padding=(0,1)).cuda() # error is here: hidPrev is an array - not a valid number of input channel
self.cell_list = []
def forward(self, image):
size = image.size()
b = size[0]
indvs = list(image.split(1,0))
tensor_array = []
for i in range(b):
tensor_array.append(self.RowLSTM(indvs[i]))
seq=tuple(tensor_array)
trans = torch.cat(seq,0)
return trans.cuda()
def RowLSTM(self, image):
# input-to-state (K_is * x_i) : 1x3 convolution. generate h x n x n tensor. hxnxn tensor contains all i -> s info
self.cell_list=[]
igates = []
n = image.size()[2]
ch=image.size()[1]
for i in range(n):
if i==0:
isgates = self.splitIS(self.input_to_state(image)) # convolve, then split into gates (4 per row)
cell=RowLSTMCell(0,isgates[0][0],isgates[0][1],isgates[0][2],isgates[0][3],torch.zeros(ch,n,1).cuda(),torch.zeros(ch,n,1).cuda())
cell.c=isgates[0][0]*isgates[0][3]
cell.h=torch.tanh(cell.c)*isgates[0][1]
# now have dummy variables for first row
self.cell_list.append(cell)
else:
cell_prev = self.cell_list[i-1]
hid_prev = cell_prev.getHiddenState()
ssgates = self.splitSS(self.state_to_state(hid_prev.unsqueeze(0)))
gates = self.addGates(isgates, ssgates,i)
ig, og, fg, gg = gates[0], gates[1], gates[2], gates[3]
cell = RowLSTMCell(cell_prev, ig, og, fg, gg, 0 ,0) #MORE zeros
cell.compute()
self.cell_list.append(cell)
# now have a list of all cell data, concatenate hidden state into 1 x h x n x n
hidden_layers = []
for i in range(n):
hid = self.cell_list[i].h
hidden_layers.append(torch.unsqueeze(hid,0))
seq = tuple(hidden_layers)
tensor = torch.cat(seq,3)
return tensor
def splitIS(self, tensor): #always going to be splitting into 4 pieces, so no need to add extra parameters
inputStateGates={}
size=tensor.size() # 1 x 4h x n x n
out_ft=size[1] # get 4h for the nxnx4h tensor
num=size[2] # get n for the nxn image
hh=out_ft/4 # we want to split the tensor into 4, for the gates
tensor = torch.squeeze(tensor).cuda() # 4h x n x n
# First, split by row: Creates n tensors of 4h x n x 1
rows = list(tensor.split(1,2))
for i in range(num):
# Each row is a tensor of 4h x n x 1, split it into 4 of h x n x 1
row=rows[i]
# print("Each row using cuda: "+str(row.is_cuda))
inputStateGates[i]=list(row.split(hh,0))
return inputStateGates
def splitSS(self, tensor): # 1 x 4h x n x 1, create 4 of 1 x h x n x 1
size=tensor.size()
out_ft=size[1] # get 4h for the 1x4hxn tensor
num=size[2] # get n for the 1xhxn row
hh=out_ft/4 # we want to split the tensor into 4, for the gates
tensor = tensor.squeeze(0).cuda() # 4h x n x 1
splitted=list(tensor.split(hh,0))
return splitted
def addGates(self, i2s,s2s,key):
""" these dictionaries are of form {key : [[i], [o], [f], [g]]}
we want to add pairwise elemeents """
# i2s is of form key: [[i], [o], [f], [g]] where each gate is hxn
# s2s is of form [[h,n],[h,n],[h,n], [h,n]]
gateSum = []
for i in range(4): # always of length 4, representing the gates
gateSum.append(torch.sigmoid(i2s[key][i] + s2s[i]))
return gateSum
Next, here is the definition of the LSTM cell.
class RowLSTMCell(): #inherit torch.nn.LSTM?
def __init__(self,prev_row, i, o, f, g, c, h):
self.c=c
self.h=h
self.i=i
self.i = self.i.cuda()
self.o=o
self.o = self.o.cuda()
self.g=g
self.g = self.g.cuda()
self.f=f
self.f = self.f.cuda()
self.prev_row=prev_row
def getStateSize(self):
return self._state_size
def getOutputSize(self):
return self._output_size
def compute(self):
c_prev = self.prev_row.getCellState()
h_prev = self.prev_row.getHiddenState()
self.c = self.f * c_prev + self.i * self.g
self.h = torch.tanh(self.c) * self.o
def getHiddenState(self):
return self.h
def getCellState(self):
return self.c
I add the RLSTM layer in between two specific convolutions of the VGG-16 model. However, now the training process becomes very slow. I am wondering why that is? Any help is much appreciated. |
st102333 | When I am installing pytorch using
pip install torch
and then import torch in python, it throws the following error:
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/shubhamb/anaconda2/lib/python2.7/site-packages/torch/__init__.py", line 78, in <module>
from torch._C import *
ImportError: /state/partition1/softwares/glibc-2.14/lib/libc.so.6: version `GLIBC_2.17' not found (required by /home/shubhamb/anaconda2/lib/python2.7/site-packages/torch/lib/libgomp-c0d7b783.so.1)
But when I install it with
conda install pytorch torchvision cuda80 -c pytorch
import torch is successful.
Is it really necessary to have glibc 2.17? And how do installations from pip and conda differ?
With conda installation, I am facing further issues with warp-ctc.
Kindly help.
Thanks |
st102334 | Is it necessary that the hyperparameters which worked in TensorFlow should match in PyTorch and give the exact same result? |
st102335 | Hi,
I want to change the softmax results to a one-hot gate.
For example:
x = nn.Linear(10, 20)(input)
task_gate = nn.Softmax()(x) (e.g., the result is 0.5, 0.2, 0.3)
I want to change (0.5, 0.2, 0.3) to (1, 0, 0). Also, x needs to be optimized. |
st102336 | Hi,
The function that transform (0.5, 0.2, 0.3) to (1, 0, 0) will have gradients that are 0 almost everywhere.
So you won’t be able to optimize anything as all the gradients you will get will be 0. |
st102337 | yes. I only want to get gradient instead of None gradient. So, which function can help me? Thanks. |
st102338 | But if you get a gradient tensor, it will be full of 0s; is that what you want? You can just create such a tensor with torch.zeros(your_size). |
st102339 | Hey guys, I am trying to load a trained CNN classifier that I saved so I can modify the linear layers, but I get a size mismatch error when performing a forward pass (train or eval, doesn’t matter). Here is the output:
Exception NameError: "global name 'FileNotFoundError' is not defined" in <bound method _DataLoaderIter.__del__ of <torch.utils.data.dataloader._DataLoaderIter object at 0x7fcbc02c3350>> ignored
Traceback (most recent call last):
File "train_triplet_loss.py", line 169, in <module>
outputs = F.softmax(old_model(images))
File "/home/zswartz/.local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/home/zswartz/.local/lib/python2.7/site-packages/torch/nn/modules/container.py", line 91, in forward
input = module(input)
File "/home/zswartz/.local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/home/zswartz/.local/lib/python2.7/site-packages/torch/nn/modules/linear.py", line 55, in forward
return F.linear(input, self.weight, self.bias)
File "/home/zswartz/.local/lib/python2.7/site-packages/torch/nn/functional.py", line 994, in linear
output = input.matmul(weight.t())
RuntimeError: size mismatch, m1: [72 x 2], m2: [144 x 100] at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:249
If I completely strip away the linear layers, and just leave the conv layers, there is no size mismatch error. Keep in mind, I am using the same exact data loader that I used to train the network in the first place. This isn’t a huge deal because I plan to strip away the linear layers regardless, but I would like to verify that the frozen model performs as it was trained. I have a feeling that this may have something to do with the fact that I use “.view(-1, 144)” to flatten my final feature map in the forward method before the first linear layer, which is where this error is occurring. |
st102341 | Did you change the view() after loading the model?
If your model was fine before you saved it, it should also work after loading it.
The shape of m1 looks like it could be [144 x 1], but this is just a guess.
Could you explain a bit more how you saved and reloaded the model?
Also, the code would be interesting to see. |
st102342 | Hey! Thanks for the reply.
I use torch.save(model.state_dict(), filename) to save, and then I use:
model = train.Net()
model.load_state_dict(torch.load(filename))
in order to load the model.
I think the problem has something to do with the fact that I am using:
old_model = nn.Sequential(*list(model.children())).cuda(),
after loading the model.
I have done it this way so I have the ability to create a sub-network by indexing model.children(), which works as long as I index up to but not including the first linear layer.
I’m not sure how much liberty I have in sharing all of the code, but I can include snippets.
Thank you! |
st102343 | Could you check again that the forward pass runs successfully:
x = torch.randn( YOUR_SIZE )
output = model(x)
torch.save(model.state_dict(), filename)
model = train.Net()
model.load_state_dict(torch.load(filename))
output = model(x)
old_model = nn.Sequential(...)
x = x.to('cuda')
output = old_model(x) |
st102344 | So, the model runs perfectly as long as I don’t stick the layers together using nn.Sequential(...).
Any ideas? |
st102345 | I’m under the impression that using sequential might mess with the flattening of the final feature map. |
st102346 | Thanks for the hint! You are absolutely right.
You are re-creating the model as an nn.Sequential module, so the view you are probably using in forward will be missing.
You can fix this by using a Flatten module between your layers. Here is a small example:
class Flatten(nn.Module):
def __init__(self):
super(Flatten, self).__init__()
def forward(self, x):
return x.view(x.size(0), -1)
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 3, 1, 1)
self.fc1 = nn.Linear(6*24*24, 10)
self.fc2 = nn.Linear(10, 2)
def forward(self, x):
x = F.relu(self.conv1(x))
x = x.view(x.size(0), -1)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
model = MyModel()
x = torch.randn(1, 3, 24, 24)
output = model(x)
layers = list(model.children())[:1] + [Flatten()] + list(model.children())[1:]
model = nn.Sequential(*layers)
output = model(x) |
st102347 | Awesome! Sorry, did you mean to include an instance of your Flatten class somewhere in your model? |
st102348 | Hey, sorry to resurface this issue, but I have a lingering question. Does stitching things together using sequential ignore everything that occurs in the forward method, including functional relus? |
st102349 | Yes it does. You can only pass instances of classes to the sequential model. For almost every function you can simply wrap it inside a torch module (and for some functions, such as ReLU, this wrapper already exists). |
st102350 | So, if I’m removing and adding layers of a saved model by using nn.Sequential, how would I reintroduce the ReLUs? |
st102351 | You could simply add a torch.nn.ReLU() layer to use relu inside your sequential model. |
st102352 | Hi everyone,
I have reconfigured my code to work with Python 3, and now when I load my saved model, model.children() outputs the entire model as one layer. This was not happening when I was using Python 2.
When running this code:
for child in model.children():
print("PRINTING LAYER")
print(child)
I get output:
PRINTING LAYER
Net(
(conv11): Conv2d(1, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(conv12): Conv2d(192, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(conv13): Conv2d(192, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(conv14): Conv2d(192, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(conv15): Conv2d(192, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(conv16): Conv2d(192, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(pool1): MaxPool2d(kernel_size=4, stride=4, padding=0, dilation=1, ceil_mode=False)
(conv21): Conv2d(192, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(conv22): Conv2d(192, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(pool2): MaxPool2d(kernel_size=6, stride=6, padding=0, dilation=1, ceil_mode=False)
(d): Dropout(p=0.5)
(fc1): Linear(in_features=64, out_features=40, bias=True)
(fc2): Linear(in_features=40, out_features=15, bias=True)
) |
st102353 | I have a huge single data file, maybe 70GB, and each row in it is a sample. What should I do to load them in batches using a DataLoader?
I do not have enough memory to load all the data. |
st102354 | Assuming you have enough RAM to load the dataset at once, you can either load it with whatever library you would usually use, or load it once and save it as a huge torch tensor (which might be faster to load) and then load it via torch.load. Once you have the dataset in RAM you can simply index it. |
st102355 | all the samples are in a single file, just like a txt file
each row in txt file is a training sample, totally one hundred million samples |
st102356 | You could do the things mentioned in this post inside your dataloader and then convert the loaded data to tensors. This should be quite straightforward and efficient.
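A rough sketch of that idea (my example, not from the linked post; it assumes one whitespace-separated sample per line and builds a byte-offset index once, so __getitem__ can seek lazily):
import torch
from torch.utils.data import Dataset

class TextLineDataset(Dataset):
    def __init__(self, path):
        self.path = path
        # record the byte offset of every line once, so samples can be read lazily
        self.offsets = [0]
        with open(path, 'rb') as f:
            for _ in f:
                self.offsets.append(f.tell())
        self.offsets.pop()  # drop the end-of-file offset

    def __getitem__(self, index):
        with open(self.path, 'rb') as f:
            f.seek(self.offsets[index])
            line = f.readline().decode()
        return torch.tensor([float(v) for v in line.split()])

    def __len__(self):
        return len(self.offsets)

A DataLoader with num_workers > 0 can then batch the samples without ever holding the whole file in memory. |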
st102357 | I want to train a multitask network, and the data needs two labels: one is a 2D np.ndarray, the other is a one-hot label indicating the kind of the input. So how can I build the dataset? Can one input have two different labels? |
st102358 | You could create a Dataset and return your appropriate targets.
I used a tensor for the 2D target instead of a numpy.array.
You can easily transform it to a tensor using:
target1 = torch.from_numpy(arr)
Also, I created the second target as a tensor containing class indices. The loss functions for classification need an index tensor instead of a one-hot encoded tensor.
Here is a small example:
class MyDataset(Dataset):
def __init__(self, data, target1, target2):
self.data = data
self.target1 = target1
self.target2 = target2
def __getitem__(self, index):
x = self.data[index]
y1 = self.target1[index]
y2 = self.target2[index]
return x, y1, y2
def __len__(self):
return len(self.data)
data = torch.randn(100, 3, 24, 24)
target1 = torch.randn(100, 10, 10) # your 2d tensor
target2 = torch.empty(100, dtype=torch.long).random_(10) # 10 class indices
dataset = MyDataset(data, target1, target2)
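# (added note, not from the original post) the dataset can then be wrapped in a
# DataLoader for shuffled mini-batches, e.g.:
# loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)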
x, y1, y2 = dataset[0] |
st102359 | I have a very sparse dataset that is organized as a scipy sparse csr_matrix and it is too large to convert it to a single dense numpy array. For now, I can only extract part of it and convert that part to an numpy array, then to a tensor and forward the tensor. But the csr_matrix to numpy array step is still awfully time-consuming. I wonder whether there is a better method to feed the sparse matrix. |
st102361 | There seems to be experimental support for sparse matrices in PyTorch. I’ve never used them before, but maybe this will be helpful: torch.sparse
EDIT: You might want to have a look at this discussion on GitHub regarding the state of sparse tensors in PyTorch. |
st102362 | Thank you for your timely reply. I read torch.sparse in PyTorch documents before posting but wasn’t aware of the github discussion.
Right now I have a solution as below, which is quite fast:
def spy_sparse2torch_sparse(data):
"""
:param data: a scipy sparse csr matrix
:return: a sparse torch tensor
"""
samples=data.shape[0]
features=data.shape[1]
values=data.data
coo_data=data.tocoo()
indices=torch.LongTensor([coo_data.row,coo_data.col])
t=torch.sparse.FloatTensor(indices,torch.from_numpy(values).float(),[samples,features])
return t
But it is still not very helpful. When I print(t[0]), it says RuntimeError: Sparse tensors do not have strides. Then how should I extract a minibatch of it?
The .to_dense() method is impossible because it returns RuntimeError: $ Torch: not enough memory: you tried to allocate 141GB. Buy new RAM! at /pytorch/aten/src/TH/THGeneral.c:218 |
st102363 | I’m able to reproduce the same error when I run similar code on PyTorch 0.4.1. I can’t help you out here. Maybe @albanD or @smth have some insights. |
st102364 | Hi,
That should work.
At the moment, you cannot access elements of sparse tensors that way, you can access indices and values directly.
But you can still perform some pointwise operations on them and use them in matrix matrix multiplications.
What do you want to do with them? What are the operations your net needs to be able to do with them that are not available? |
st102365 | Thanks a lot!
I need to sample a mini-batch out of the whole dataset, feed the classifier that mini-batch, and update the weights of the classifier. If mini-batch sampling is possible, I can finish the task.
My PyTorch version is 0.4.0. If there is some mechanism to do that, it would be great. |
st102366 | Hi,
I am afraid functions like .index_select() are not available at the moment and you would need them to get a mini batch from your dataset.
You could potentially keep your array as a scipy one. Extract the minibatch from the scipy array (I expect this is possible but I don’t know). And then convert the minibatch to a torch (sparse) tensor just before feeding it to your net.
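For what it’s worth, a minimal sketch of that approach (my code; it assumes the data is a scipy CSR matrix, whose row slicing is cheap, and densifies only the small batch slice):
import torch

def get_minibatch(csr_data, batch_indices):
    batch = csr_data[batch_indices]  # CSR supports fancy row indexing
    return torch.from_numpy(batch.toarray()).float()  # densify only batch_size x features |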
st102367 | @11130 you could think of contributing to this thread: Sparse tensor use cases |
st102368 | Why do I have a problem when importing torch using Jupyter?
It ran properly with no problem with pip3 until I changed to PyCharm, using .virtualenvs to import libraries.
After I configured that, I cannot import torch when running with Jupyter Notebook. Any ideas how to get around this situation? Should we use pip3, ./virtualenvs, or something else? |
st102369 | It might be that your jupyter notebook is pointing to a different python path than the one you have PyTorch installed on.
You can run
import sys
sys.executable
to check which python is being used in Jupyter.
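If the paths differ, a common fix (my suggestion, assuming a virtualenv or conda setup) is to register the environment as a Jupyter kernel:
python -m ipykernel install --user --name my_env
where my_env is a placeholder for your environment name; afterwards you can select that kernel inside the notebook.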
I use conda to manage my environments and the output for the above code looks similar to this
'/Users/user_name/anaconda/envs/env_name/bin/python' |
st102370 | @Viraat: Thanks, I solved it by installing Anaconda as you suggested; then I also installed ipykernel (python3 -m pip install ipykernel --user). |
st102371 | To keep things clean I would suggest using conda to install Jupyter as well. If you haven’t tried it out you should have a look at JupyterLab. It’s in beta but it’s very neat! |
st102372 | I implemented a custom nn module that consists of several convolutions. However, I couldn’t apply weight norm, because it uses getattr(module, 'weight') to get the weight variable. Is there any way to apply weight_norm directly to a module that has multiple sub-modules or weights? |
st102373 | I want to load a model; I need the latest_net_G_A.pth. When I load the file, an error appears: 'unexpected key "model.10.conv_block.5.weight" in state_dict'.
(screenshot of the error attached) |
st102374 | Do you have an instance of a model with the exact same structure as the model you saved?
How do you save/load the model? |
st102375 | I saved the .pth file during the training process,then I load the model by the way as follows
(screenshot of the loading code attached) |
st102376 | Did you save only the statedict or the entire model?
If you saved only the statedict you need to do something like
model = MyModel() # create an instance of your model with the exact same structure as the one you saved
model.load_state_dict(torch.load("/your/path/to/statedict.pth"))
To check whether you saved the statedict or the entire model we need to see your saving code. |
st102377 | yes ,I have checked my code .Isaved the statedict rather than the entire model.thank you for you reply! |
st102378 | Hi! I’d like to know if there’s some difference between these two functions. I thought ConvTranspose3d was the inverse of MaxPool3d, but now I see that there is also MaxUnpool3d, so which one should I apply?
I want to implement a variation of the 3D U-Net model.
(figure: the 3D U-Net architecture)
I’m speaking about the yellow up arrows of the model above (the 3D U-Net).
Thanks! |
st102380 | ConvTranspose is the inverse of Conv; it works somewhat like a convolution, while MaxUnpool only repeats your pixels to achieve the right output shape. Rather than MaxUnpool I would use torch.nn.functional.interpolate if you don’t want trainable parameters, and ConvTranspose if you do.
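A small illustration of the two options (my example; the shapes are chosen arbitrarily for a 3D U-Net-style block):
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 64, 8, 16, 16)  # N, C, D, H, W
up1 = F.interpolate(x, scale_factor=2)  # parameter-free: 1 x 64 x 16 x 32 x 32
deconv = nn.ConvTranspose3d(64, 32, kernel_size=2, stride=2)
up2 = deconv(x)  # learnable upsampling: 1 x 32 x 16 x 32 x 32 |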
st102381 | Hi, I am wondering if anyone has figured out a way to solve the generalized eigenvalue problem as presented in this paper: https://arxiv.org/abs/1511.04707. The original work was done in Theano using theano.tensor.slinalg.Eigvalsh, but in PyTorch there is no easy way of solving this generalized eigenvalue problem with a similar function. I’m wondering if anyone has any tips on how to either recast the problem, or whether there is another way of solving this.
Thanks! |
st102382 | Isn’t torch.symeig very similar?
It doesn’t have backward just yet, but people are working on it.
Best regards
Thomas |
st102383 | torch.symeig solves a regular symmetric eigenvalue problem but not the generalized one. In my case I essentially have Ax = lambda*Bx, where A and B are symmetric PD matrices. I guess if B is non-singular I could solve this by computing inv(B)*A and solving that regular eigenvalue problem, but there are no longer guarantees on the symmetry of that matrix, and the inverse is generally unstable. |
st102384 | There is a better, albeit computationally expensive, reduction using the Cholesky decomposition of B (but use upper=False):
http://www.alglib.net/eigen/symmetric/generalizedsymmevd.php
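For reference, a rough PyTorch sketch of that reduction (my code, not from the link; it uses explicit inverses for brevity where a triangular solve would be more stable, and torch.potrf is called torch.cholesky in later versions):
import torch

def generalized_symeig(A, B):
    # reduce A x = lambda B x (A, B symmetric, B positive definite)
    # to the standard symmetric problem C y = lambda y with C = L^-1 A L^-T
    L = torch.potrf(B, upper=False)  # B = L L^T
    L_inv = torch.inverse(L)
    C = L_inv @ A @ L_inv.t()
    evals, y = torch.symeig(C, eigenvectors=True)
    x = L_inv.t() @ y  # map the eigenvectors back: x = L^-T y
    return evals, x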
In the meantime, the symeig backward was merged in master.
Best regards
Thomas |
st102385 | class MarginRankingLoss(_Loss):
def __init__(self, margin=0, size_average=True, reduce=True):
super(MarginRankingLoss, self).__init__(size_average, reduce)
self.margin = margin
def forward(self, input1, input2, target):
return F.margin_ranking_loss(input1, input2, target, self.margin, self.size_average,
self.reduce)
I wonder whether both input1 and input2 get gradients and back-propagate them. |
st102386 | For example say my starting layer is
Conv2d(3, 256, 1) -> Conv2d(256, 512, 4, 2, 1)
and I want to grow my layer on the left side of it so the next layer’s structure should be
Conv2d(3, 128, 1) -> Conv2d(128, 256, 4, 2, 1) -> Conv2d(256, 512, 4, 2, 1).
In this case should I slice the old conv layer or copy over the state_dict? Currently I have a working model where I slice off the old layer, destroy the old network, add the 2 new layers plus the old copy to a list, and then unpack the list in nn.Sequential() to make a new network. Finally, I grab that new network’s params and feed them to a new optimizer. My model works and produces good results, but I am not sure if I am following best practice. |
st102387 | If I have a tensor A = torch.rand(30,500,50,50), what is the smartest and fastest way to normalize each channel (the layers in A.size(1)) to have values between a and b?
The naive way is:
B = torch.zeros(A.size())
for n in range(A.size(0)):
for c in range(A.size(1)):
B[n,c,:,:] = ((b-a)*(A[n,c,:,:]-torch.min(A[n,c,:,:]))/(torch.max(A[n,c,:,:])-torch.min(A[n,c,:,:]))) + a
But it is super slow… |
st102389 | I haven’t timed the code yet, but it should be faster than for loops:
x1 = torch.randn(30, 500, 50, 50)
x1_min = torch.min(x1, dim=3, keepdim=True)[0].min(2, keepdim=True)[0]
x1_max = torch.max(x1, dim=3, keepdim=True)[0].max(2, keepdim=True)[0]
# f and g are the lower and upper bounds of the target range (a and b above)
x2 = (g-f)*(x1 - x1_min) / (x1_max - x1_min) + f |
st102390 | Say I have the convolution
conv = torch.nn.Conv2d(self.ch,4*self.ch,kernel_size=(1,3),padding=(0,1))
This convolutional network should be adjusting its kernel/weights over training. Is there a way for me to track the values of the kernel/weights of the convolution? Something like conv.parameters()? |
st102391 | Solved by isalirezag in post #2
I guess you can watch the weights of your conv via
print(conv.weight)
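You could also iterate over all of its parameters, weight and bias alike (my addition):
for name, param in conv.named_parameters():
    print(name, param.shape) |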
st102392 | I have a model that I can load with either torch.load or load_state_dict, but when I do these operations I get back a type that is OrderedDict.
I don’t see where, once a model is trained, it can be loaded as a Module instance and then run to produce some output. In more general terms, I’m looking at how to get a Module from the result of a call to one of torch’s load functions, not anything that concerns the shape of the parameters returned by an arbitrary model or usage scenario, because I know that those are determined by the designers of that model. |
st102393 | Because of some issue I removed PyTorch and installed it again; after that I am getting this error when running my models:
File "train_unet.py", line 46, in <module>
g = UNet(3).type(dtype)
File "/data/anaconda2/lib/python2.7/site-packages/torch/nn/modules/modul
e.py", line 277, in type
return self._apply(lambda t: t.type(dst_type))
File "/data/anaconda2/lib/python2.7/site-packages/torch/nn/modules/modul
e.py", line 185, in _apply
module._apply(fn)
File "/data/anaconda2/lib/python2.7/site-packages/torch/nn/modules/modu$
e.py", line 185, in _apply
module._apply(fn)
File "/data/anaconda2/lib/python2.7/site-packages/torch/nn/modules/modu$
e.py", line 185, in _apply
module._apply(fn)
File "/data/anaconda2/lib/python2.7/site-packages/torch/nn/modules/modu$
e.py", line 185, in _apply
module._apply(fn)
File "/data/anaconda2/lib/python2.7/site-packages/torch/nn/modules/modu$
e.py", line 191, in _apply
param.data = fn(param.data)
File "/data/anaconda2/lib/python2.7/site-packages/torch/nn/modules/modu$
e.py", line 277, in <lambda>
return self._apply(lambda t: t.type(dst_type))
RuntimeError: CUDA error: unknown error
How to fix this? |
st102394 | Do you get the same error using dummy CUDA calls like:
torch.cuda.init()
# or
a = torch.randn(1, device='cuda')
Did you update the NVIDIA drivers? If so, I would restart the machine and see, if the error disappears. |
st102395 | No, actually the PyTorch version got updated to 0.4, which was not compatible with my CUDA version, so I downgraded it to 0.3 and now everything is working fine. Thanks |
st102396 | I am writing a program, and I am trying to implement DeepMind’s RowLSTM function from their Pixel RNN paper. The authors explicitly define how to do a convolution which determines the values of the four gates of an LSTM cell. How would one implement this in PyTorch (specifically setting the values of the gates of the LSTM cell). From what I understand, each row of the image should become an LSTM cell, and the next row’s LSTM cell is computed using a 1x3 convolution of the hidden states of the previous row. So, a lot of accessing of the LSTM gates is necessary. How does one do this? Thank you in advance for any help, it is much appreciated. |
st102397 | As far as I know, Facebook has recently announced that it is merging ATen and Caffe2. My guess is that they will add a neural network library into ATen, and maybe auto differentiation. Am I right? Is there any approximate time for releasing the first version?
And, is it going to affect Pytorch? |
st102398 | These questions are mostly addressed in this post, and as far as I know, the release should be scheduled for late summer/autumn. |
st102399 | I am trying to build a recommender system that predicts an output class which is categorical in nature. I have implemented the same for the movie ratings database, where I convert the dataset into a matrix with rows representing user ids, columns representing movie ids, and values being the ratings.
The problem I face is using multiclass output labels: I want to understand how I can use the matrix after one-hot encoding the variables. Will I need to create two separate matrices? |