st119468
|
Apparently numpy has (fast) memory mapping:
img = np.memmap(filename, dtype='int16', mode='r').__array__()
That gives me a numpy array of int16. I can’t use .numpy() to immediately convert that object into a torch tensor, because conversion from int16 is not supported. That’s okay in my case – I can do the preprocessing in numpy – but thought it was worth pointing out.
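(A minimal sketch of that numpy-side workaround, assuming the int16 values fit one of the supported dtypes, e.g. int32:)
import numpy as np
import torch

img = np.memmap(filename, dtype='int16', mode='r').__array__()
tensor = torch.from_numpy(img.astype('int32'))  # int32 is on the supported list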
|
st119469
|
That’s weird, int16 should be equivalent to torch.ShortTensor. Not sure why it doesn’t work for you. Can you please print an error? Are you sure you’ve used the correct function? .numpy() is a torch method, you won’t find that in numpy. You should use torch.from_numpy.
|
st119470
|
I meant torch.from_numpy. The error is quite clear:
RuntimeError: can’t convert a given np.ndarray to a tensor - it has an invalid type. The only supported types are: double, float, int64, int32, and uint8.
|
st119471
|
I am trying to install pytorch 0.1.9 using pip on python 2.7 with cuda 8.0, and I'm getting a connection timeout error running "pip install https://s3.amazonaws.com/pytorch/whl/cu80/torch-0.1.9.post2-cp27-none-linux_x86_64.whl" from my ubuntu command line.
|
st119472
|
Hi,
I am trying to practice pytorch with a small CNN example on mnist. However, I got weird performance on the test dataset: it first goes up, then goes down, and finally begins to converge.
As shown in this picture,
I use SGD with learning rate = 0.01.
The architecture is defined as follows:
class MyDeepNeural(nn.Module):
    def __init__(self, p_keep_conv):
        super(MyDeepNeural, self).__init__()
        self.conv = nn.Sequential()
        self.conv.add_module('conv1', nn.Conv2d(1, 32, kernel_size=3, padding=1))
        self.conv.add_module('relu1', nn.ReLU())
        self.conv.add_module('pool1', nn.MaxPool2d(kernel_size=2))
        self.conv.add_module('drop1', nn.Dropout(1 - p_keep_conv))
        self.conv.add_module('conv2', nn.Conv2d(32, 64, kernel_size=3, padding=1))
        self.conv.add_module('relu2', nn.ReLU())
        self.conv.add_module('pool2', nn.MaxPool2d(kernel_size=2))
        self.conv.add_module('drop2', nn.Dropout(1 - p_keep_conv))
        self.conv.add_module('conv3', nn.Conv2d(64, 128, kernel_size=3, padding=1))
        self.conv.add_module('relu3', nn.ReLU())
        self.conv.add_module('pool3', nn.MaxPool2d(kernel_size=2))
        self.conv.add_module('drop3', nn.Dropout(1 - p_keep_conv))
        self.fc = nn.Sequential()
        self.fc.add_module('fc1', nn.Linear(128*9, 625))
        self.fc.add_module('relu4', nn.ReLU())
        self.fc.add_module('fc2', nn.Linear(625, 10))
        self.fc.add_module('softmax', nn.Softmax())
|
st119473
|
Apparently that’s the model dynamics on your dataset. It happens.
BTW it’s often better to use log_softmax in your model.
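A sketch of that suggestion (assuming the criterion is nn.NLLLoss, which expects log-probabilities):
import torch
import torch.nn as nn
from torch.autograd import Variable

# swap the final Softmax for LogSoftmax and pair it with NLLLoss
head = nn.Sequential(nn.Linear(625, 10), nn.LogSoftmax())
criterion = nn.NLLLoss()
out = head(Variable(torch.randn(4, 625)))
loss = criterion(out, Variable(torch.LongTensor([0, 1, 2, 3])))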
|
st119474
|
We don’t support that at the moment. PyTorch models are much more flexible than those in Lua, so it’s easier to go the other way around. You could try recoding your model and dumping the weights to HDF5 format.
|
st119475
|
I understand why you made that decision, but it's very unfortunate not to be able to port models within one framework across two languages.
We have production code in Lua/Torch for Android phones. Of course we can't use Python on smartphones, but for training and analysis Python is much better. That's why I'm disappointed.
|
st119476
|
@WildChlamydia I think it would be nice to have that, but it's not a feature that we're going to implement anytime soon. There are lots of very important things to implement first. PRs are welcome though.
Maybe you’ll find the thread on exporting models to C interesting.
|
st119477
|
Hi,
I am trying to implement a batch matrix multiplication like the first equation in this image.
The weight and bias are defined as parameters in the model. I am copying the bias term across the entire batch.
def batch_matmul_bias(seq, weight, bias, nonlinearity=''):
    s = None
    bias_dim = bias.size()
    for i in range(seq.size(0)):
        _s = torch.mm(seq[i], weight)
        _s_bias = _s + bias.expand(bias_dim[0], _s.size()[0])
        print _s_bias.size()
        if nonlinearity == 'tanh':
            _s_bias = torch.tanh(_s_bias)
        _s_bias = _s_bias.unsqueeze(0)
        if s is None:
            s = _s_bias
        else:
            s = torch.cat((s, _s_bias), 0)
    return s.squeeze()
The forward pass works, but when doing the backward pass, I am getting a size mismatch error.
RuntimeError: sizes do not match at /data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.7_1485448159614/work/torch/lib/THC/generated/../generic/THCTensorMathPointwise.cu:216
Can you help me fix it?
Thank you.
|
st119478
|
Hi,
Is there any reason why you do not use builtin functions?
I guess you could do something along the lines of:
import torch.nn.functional as F

def batch_matmul_bias(seq, weight, bias, nonlinearity=''):
    s = F.linear(seq, weight, bias)
    if nonlinearity == 'tanh':
        s = F.tanh(s)
    return s
|
st119479
|
I just looked at the API carefully, and it looks like Linear supports batch samples and no bias as well, this will save a lot of time. Thank you!
|
st119480
|
Hi,
Be careful, because these functions support only batch mode.
So if your input is not a batch, don't forget to use .unsqueeze(0) to make it a batch of 1 element.
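A tiny sketch of that (the layer and sizes are made up):
import torch
import torch.nn as nn
from torch.autograd import Variable

fc = nn.Linear(20, 10)
x = Variable(torch.randn(20))  # a single sample, no batch dimension
y = fc(x.unsqueeze(0))         # batch of 1 -> output of size 1x10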
|
st119481
|
I managed to fix it. For future reference: I was sloppy and did not properly reshape the bias term. I forgot to transpose it.
_s_bias = _s + bias.expand(bias_dim[0], _s.size()[0]).transpose(0,1)
Thank you for the wonderful effort that you’ve put in here, debugging is a lot easier.
BTW:
Can you explain this?
(screenshot attached: Screenshot from 2017-02-27 23-39-59.png)
|
st119482
|
Is there some straightforward way to reshape while using add_module()?
ATM I have to resort to
class RESHAP(nn.Module):
    def __init__(self, nz):
        super(RESHAP, self).__init__()
        self.nz = nz
    def forward(self, input):
        return input.view(-1, self.nz, 1, 1)
    def __repr__(self):
        return self.__class__.__name__ + ' ()'

main.add_module('initial.{0}.reshape'.format(nz/4), RESHAP(nz/4))
|
st119483
|
@Veril if you don't use a Sequential, then you can reshape with Tensor.view inside your forward function.
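For example (a sketch; the layer and sizes are made up):
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, nz):
        super(Net, self).__init__()
        self.nz = nz
        self.fc = nn.Linear(100, nz)
    def forward(self, input):
        x = self.fc(input)
        return x.view(-1, self.nz, 1, 1)  # reshape inline, no extra module needed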
|
st119484
|
@Sandeep42 it's a bug that was fixed 2 days ago (gradients weren't viewed properly). The problem is that the basic ops like +, *, - and / don't care about the sizes of the inputs, only that their numbers of elements match. The result always has the size of the first operand - this explains your screenshot.
I think we should add some strict size checking there anyway (once we add broadcasting).
|
st119485
|
I am looking forward to that. The fact that you can just interactively work with Tensors, and debug them is already a great thing. Thanks!
|
st119486
|
I’m trying to understand the interpretation of gradInput tensors for simple criterions using backward hooks on the modules. Here are three modules (two criterions and a model):
import torch
import torch.nn as nn
import torch.optim as onn
import torch.autograd as ann

class L1Loss(nn.Module):
    def __init__(self):
        super(L1Loss, self).__init__()
    def forward(self, input_var, target_var):
        '''
        L1 loss:
        |y - x|
        '''
        return (target_var - input_var).norm()

class CosineLoss(nn.Module):
    def __init__(self):
        super(CosineLoss, self).__init__()
    def forward(self, input_var, target_var):
        '''
        Cosine loss:
        1.0 - (y.x / |y|*|x|)
        '''
        return 1.0 - input_var.dot(target_var) / (input_var.norm()*target_var.norm())

class Model(nn.Module):
    def __init__(self, mode=None):
        super(Model, self).__init__()
        def hook_func(module, grad_i, grad_o):
            print 'Grad input:', grad_i
        self.input_encoder = nn.Linear(20, 10)
        self.target_encoder = nn.Linear(20, 10)
        if mode == 'cos':
            self.criterion = CosineLoss()
        elif mode == 'l1':
            self.criterion = L1Loss()
        self.criterion.register_backward_hook(hook_func)
        self.optimizer = onn.Adam(self.parameters(), lr=1e-5)
    def forward(self, input_var_1, input_var_2):
        return self.input_encoder(input_var_1), self.target_encoder(input_var_2)
    def train(self, input_np, target_np):
        input_var = ann.Variable(input_np)
        target_var = ann.Variable(target_np)
        input_encode, target_encode = self.forward(input_var, target_var)
        loss = self.criterion(input_encode, target_encode)
        loss.backward()
        self.optimizer.step()
        return loss.data[0]
If I run a few iterations using L1Loss:
mod = Model(mode='l1')
for i in range(5):
    inp = torch.rand(1, 20)
    tar = torch.rand(1, 20)
    loss_val = mod.train(inp, tar)
    print 'Iteration\t{0}\tLoss\t{1}'.format(i, loss_val)
I see grad input is a single tensor of shape (1, 10):
Grad input: (Variable containing:
-0.2466 -0.0966 0.0659 -0.1954 0.3573 -0.5367 0.5818 0.0758 0.2598 -0.2447
[torch.FloatTensor of size 1x10]
,)
I was expecting two tensors of that shape, one for each input. On the other hand, if I run with the cosine loss:
mod = Model(mode='cos')
for i in range(5):
    inp = torch.rand(1, 20)
    tar = torch.rand(1, 20)
    loss_val = mod.train(inp, tar)
    print 'Iteration\t{0}\tLoss\t{1}'.format(i, loss_val)
I find grad input is a single scalar value:
Grad input: (Variable containing:
-1
[torch.FloatTensor of size 1]
,)
In both cases I was expecting two gradInput tensors, one corresponding to each input to the criterion’s forward function.
Where is my interpretation amiss? Is there something wrong with the implementations of the criterions? I'm particularly surprised by the cosine loss - the L1 loss seems to interpret the second input (target_var) as a ground truth that is not being optimized, but I'm not clear on the cosine loss function. Neither am I clear on what has changed between the two - both calculations are on the outputs of each encoder, with a scalar loss returned by forward, but the shape of gradInput differs between them.
|
st119487
|
I'm sorry, but it's a known problem with hooks right now. They only work properly on modules that create only a single Function in their forward (i.e. all simple modules like Linear or Conv2d, but not containers or custom losses like these). Your expectations are correct; this is how they should work.
We’ll be rolling out some serious autograd refactors in the upcoming week or two, and it should be fixed afterwards.
|
st119488
|
Thanks for the prompt reply Adam. Is there any current workaround for this? My aim particularly concerns the cosine loss. I was experimenting with multiplying the gradients by tensors of feature weights (in other words, some weight-adjusted learning rates for the output features of the encoders).
|
st119489
|
You could register hooks on individual Variables (in forward). See the docs. These should work fine.
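For the weighted-gradient experiment above, a minimal sketch (feature_weights is a hypothetical tensor of per-feature weights, broadcast against the encoder output):
def scale_grad(grad):
    return grad * feature_weights  # hypothetical per-feature weights

input_encode.register_hook(scale_grad)  # registered on the Variable during forward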
|
st119490
|
Thanks, will give it a try! Looking forward to the refactors. Pytorch is simply awesome - love what you guys have done.
Edit: register_hook on the variables worked a treat, thanks again.
|
st119491
|
Hi,
Is pre-trained VGG16 available? I have updated torchvision to version 0.1.7 and it says vgg16() takes no argument. Also, in https://github.com/pytorch/vision/issues/28 it is mentioned that pre-trained VGG models are now available in master.
|
st119492
|
it’s available in master. You can install torchvision master via:
pip install https://github.com/pytorch/vision/archive/master.zip
We will be generating versioned binaries v0.1.8 soon
|
st119493
|
smth: "We will be generating versioned binaries v0.1.8 soon"
Will the BN version of VGG be available?
|
st119494
|
No, we don't have plans to include the BN version of VGG. If you plan to add it, we accept pull requests. Thank you.
|
st119495
|
Hi All,
I have a numpy array of modified MNIST, which has the dimensions of a working dataset (N x 28 x 28), and labels (N,).
I want to convert this to a PyTorch Dataset, so I did:
train = torch.utils.data.TensorDataset(img, labels.view(-1))
train_loader = torch.utils.data.DataLoader(train, batch_size=64, shuffle=False)
This causes an AssertionError on the dimensions when I try to optimize my model, and there's zero transparency, since if I load the data through PyTorch itself, my tensors and train.dataset.train_data.dim() are the same.
|
st119496
|
I tried this too,
train = torch.utils.data.TensorDataset(img, labels)
train_loader = torch.utils.data.DataLoader(train, batch_size=64, shuffle=False)
No dice
|
st119497
|
Can you show the result of the line below?
print(img.size(), label.size())
I think increasing a dimension will work,
train = torch.utils.data.TensorDataset(img, labels.unsqueeze(1))
|
st119498
|
torch.Size([18000, 28, 28]) torch.Size([18000])
unsqueeze(1) gave the same error
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-18-f3ca3f765752> in <module>()
1 for epoch in range(1, 101):
----> 2 train(epoch)
3 test(epoch, valid_loader)
<ipython-input-17-f91e8ba0f29c> in train(epoch)
6 data, target = Variable(data), Variable(target)
7 optimizer.zero_grad()
----> 8 output = model(data)
9 loss = F.nll_loss(output, target)
10 loss.backward()
/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)
208
209 def __call__(self, *input, **kwargs):
--> 210 result = self.forward(*input, **kwargs)
211 for hook in self._forward_hooks.values():
212 hook_result = hook(self, input, result)
<ipython-input-15-7f886ceeb28f> in forward(self, x)
10
11 def forward(self, x):
---> 12 x = F.relu(F.max_pool2d(self.conv1(x), 2))
13 x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), ))
14 # x = F.relu(F.max_pool2d(self.conv3(x), 2))
/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)
208
209 def __call__(self, *input, **kwargs):
--> 210 result = self.forward(*input, **kwargs)
211 for hook in self._forward_hooks.values():
212 hook_result = hook(self, input, result)
/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/modules/conv.pyc in forward(self, input)
233 def forward(self, input):
234 return F.conv2d(input, self.weight, self.bias, self.stride,
--> 235 self.padding, self.dilation, self.groups)
236
237
/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/functional.pyc in conv2d(input, weight, bias, stride, padding, dilation, groups)
35 f = ConvNd(_pair(stride), _pair(padding), _pair(dilation), False,
36 _pair(0), groups)
---> 37 return f(input, weight, bias) if bias is not None else f(input, weight)
38
39
/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/conv.pyc in forward(self, input, weight, bias)
30 self.save_for_backward(input, weight, bias)
31 if k == 3:
---> 32 input, weight = _view4d(input, weight)
33 output = self._update_output(input, weight, bias)
34 if k == 3:
/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/conv.pyc in _view4d(*tensors)
171 output = []
172 for t in tensors:
--> 173 assert t.dim() == 3
174 size = list(t.size())
175 size.insert(2, 1)
AssertionError:
|
st119499
|
I think I found the problem:
conv layers take input as batchNumber x colorDepth x height x width, but your dataset doesn't have a color-depth dimension.
I hope the line below works:
img.unsqueeze(1)
|
st119500
|
Doesn’t work, gives:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-24-f3ca3f765752> in <module>()
1 for epoch in range(1, 101):
----> 2 train(epoch)
3 test(epoch, valid_loader)
<ipython-input-23-f91e8ba0f29c> in train(epoch)
6 data, target = Variable(data), Variable(target)
7 optimizer.zero_grad()
----> 8 output = model(data)
9 loss = F.nll_loss(output, target)
10 loss.backward()
/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)
208
209 def __call__(self, *input, **kwargs):
--> 210 result = self.forward(*input, **kwargs)
211 for hook in self._forward_hooks.values():
212 hook_result = hook(self, input, result)
<ipython-input-21-7f886ceeb28f> in forward(self, x)
10
11 def forward(self, x):
---> 12 x = F.relu(F.max_pool2d(self.conv1(x), 2))
13 x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), ))
14 # x = F.relu(F.max_pool2d(self.conv3(x), 2))
/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)
208
209 def __call__(self, *input, **kwargs):
--> 210 result = self.forward(*input, **kwargs)
211 for hook in self._forward_hooks.values():
212 hook_result = hook(self, input, result)
/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/modules/conv.pyc in forward(self, input)
233 def forward(self, input):
234 return F.conv2d(input, self.weight, self.bias, self.stride,
--> 235 self.padding, self.dilation, self.groups)
236
237
/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/functional.pyc in conv2d(input, weight, bias, stride, padding, dilation, groups)
35 f = ConvNd(_pair(stride), _pair(padding), _pair(dilation), False,
36 _pair(0), groups)
---> 37 return f(input, weight, bias) if bias is not None else f(input, weight)
38
39
/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/conv.pyc in forward(self, input, weight, bias)
31 if k == 3:
32 input, weight = _view4d(input, weight)
---> 33 output = self._update_output(input, weight, bias)
34 if k == 3:
35 output, = _view3d(output)
/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/conv.pyc in _update_output(self, input, weight, bias)
86
87 self._bufs = [[] for g in range(self.groups)]
---> 88 return self._thnn('update_output', input, weight, bias)
89
90 def _grad_input(self, input, weight, grad_output):
/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/conv.pyc in _thnn(self, fn_name, input, weight, *args)
145 impl = _thnn_convs[self.thnn_class_name(input)]
146 if self.groups == 1:
--> 147 return impl[fn_name](self, self._bufs[0], input, weight, *args)
148 else:
149 res = []
/home/dhruv/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/conv.pyc in call_update_output(self, bufs, input, weight, bias)
223 args = parse_arguments(self, fn.arguments[5:], bufs, kernel_size)
224 getattr(backend, fn.name)(backend.library_state, input, output, weight,
--> 225 bias, *args)
226 return output
227 return call_update_output
TypeError: DoubleSpatialConvolutionMM_updateOutput received an invalid combination of arguments - got (int, torch.DoubleTensor, torch.DoubleTensor, torch.FloatTensor, torch.FloatTensor, torch.DoubleTensor, torch.DoubleTensor, long, long, int, int, int, int), but expected (int state, torch.DoubleTensor input, torch.DoubleTensor output, torch.DoubleTensor weight, [torch.DoubleTensor bias or None], torch.DoubleTensor finput, torch.DoubleTensor fgradInput, int kW, int kH, int dW, int dH, int padW, int padH)
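(A sketch of a possible fix, not confirmed in the thread: the final TypeError mixes torch.DoubleTensor inputs with torch.FloatTensor conv weights, suggesting the numpy data was float64; img_np and labels_np below stand for the original numpy arrays.)
img = torch.from_numpy(img_np.astype('float32')).unsqueeze(1)  # N x 1 x 28 x 28, float
labels = torch.from_numpy(labels_np).long()
train = torch.utils.data.TensorDataset(img, labels)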
|
st119501
|
Hi,
I am extracting matrices (patches) from a tensor using select and narrow. I would like to know whether those operations return views onto the same underlying storage, or copies.
This question is related to a more complex process where:
1/ all patch positions are generated when my DataLoader is created
2/ each patch is extracted from the tensor only on the __getitem__ call
Generating patch positions is not really heavy on memory, but extraction is such an expensive operation that memory grows linearly and eventually blows through 32 GB of RAM.
I would like to understand what I am doing wrong. To give you some idea, here is the logic behind the code:
class PatchExtractor(data.Dataset):
    def __init__(self, root, patch_size, transform=None,
                 target_transform=None):
        self.root = root
        self.patch_size = patch_size
        self.transform = transform
        self.target_transform = target_transform
        # extract all patch positions
        self.dataset = make_dataset(root, patch_size)
        if loader is None:
            self.loader = Loader()
        else:
            self.loader = loader

    def __getitem__(self, index):
        path, args, target = self.dataset[index]
        # img is a tensor returned by a succession of narrow and select
        img = self.loader.load(path, args, self.patch_size)
        if self.transform is not None:
            img = self.transform(img)
        if self.target_transform is not None:
            target = self.target_transform(target)
        return img, target

    def __len__(self):
        return len(self.dataset)
I don't think I am mistaken about the DataLoader. self.loader.load returns precisely this kind of result:
self.images[path].select(0, position[0]) \
.narrow(0, y-border_width, patch_size) \
.narrow(1, z-border_width, patch_size)
In my opinion, it seems that every time a patch is loaded and transferred to cuda, it is not freed after usage. I don't store those values; I use the same train loop as in the imagenet example. The fact that a large chunk of my memory is freed right after an epoch ends (i.e. when testing starts) leads me to that observation.
def train(train_loader, model, criterion, optimizer, epoch):
    batch_time = AverageMeter()
    data_time = AverageMeter()
    losses = AverageMeter()
    top1 = AverageMeter()
    top5 = AverageMeter()
    # switch to train mode
    model.train()
    end = time.time()
    for i, (input, target) in enumerate(train_loader):
        # measure data loading time
        data_time.update(time.time() - end)
        target = target.cuda(async=True)
        input_25 = torch.autograd.Variable(input[0]).cuda()
        input_51 = torch.autograd.Variable(input[1]).cuda()
        input_75 = torch.autograd.Variable(input[2]).cuda()
        target_var = torch.autograd.Variable(target)
        # compute output
        output = model(patch25=input_25, patch51=input_51, patch75=input_75)
        loss = criterion(output, target_var)
        # debug loss value
        # print('raw loss is {loss.data[0]:.5f}\t'.format(loss=loss))
        # measure accuracy and record loss
        prec1, prec5 = accuracy(output.data, target, topk=(1, 5))
        losses.update(loss.data[0], input[0].size(0))
        top1.update(prec1[0], input[0].size(0))
        top5.update(prec5[0], input[0].size(0))
        # compute gradient and do SGD step
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # measure elapsed time
        batch_time.update(time.time() - end)
        end = time.time()
        if i % args.print_freq == 0:
            print('Epoch: [{0}][{1}/{2}]\t'
                  'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t'
                  'Data {data_time.val:.3f} ({data_time.avg:.3f})\t'
                  'Loss {loss.val:.4f} ({loss.avg:.4f})\t'
                  'Prec@1 {top1.val:.3f} ({top1.avg:.3f})\t'
                  'Prec@5 {top5.val:.3f} ({top5.avg:.3f})'.format(
                      epoch, i, len(train_loader), batch_time=batch_time,
                      data_time=data_time, loss=losses, top1=top1, top5=top5))
Should I del my variables right after backprop?
Thank you for the feedback
|
st119502
|
select, narrow and indexing operations (except when indexing with a LongTensor) return views onto the same memory. However, the transforms operate out-of-place, so after them you're going to get a copy. Likewise, cating views needs to allocate a new tensor for the output. Of course the outputs of __getitem__ should go out of scope a moment later, so the memory pressure stays constant.
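A quick illustration of those view semantics (a standalone sketch):
import torch

t = torch.linspace(1, 6, 6)
v = t.narrow(0, 0, 2)  # a view onto the first two elements, no copy
v.fill_(0)             # t now starts with two zeros: they share storage
print(t)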
Is it the CPU memory that’s blowing up or the GPU? Are you caching anything inside the model? Did you try adding gc.collect() calls inside the loop (not recommended, but if that helps it means that you have reference cycles somewhere)? deling the Variables shouldn’t be necessary.
|
st119503
|
As a side note, it's faster to transfer the whole input to the GPU at once and create Variables from the slices afterwards. And unless target is in pinned memory, async=True is a no-op.
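(One reading of that advice, as a sketch against the loop above: move each tensor to the GPU first, then wrap it in a Variable, instead of calling .cuda() on the Variable:)
target = target.cuda(async=True)
input_25 = torch.autograd.Variable(input[0].cuda())
input_51 = torch.autograd.Variable(input[1].cuda())
input_75 = torch.autograd.Variable(input[2].cuda())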
|
st119504
|
Also, if you’re using DataLoader with many workers, you might want to use a lower number. Each worker probably has its own copy of the dataset in memory and the patches it extracts are getting accumulated in the queue, because it’s a fast operation.
|
st119505
|
It's the CPU memory that is blowing up; the GPU memory stays stable.
I am not caching anything inside the model, only torch.nn ops. I will try gc.collect() and let you know. Thanks for the advice; I am not sure I understood the pinned memory feature - can it be related to my problem somehow?
To give you an idea, in test mode I am at around 9 GB used, in training mode it's around 20 GB. The thing is that memory usage in test mode is stable between 8-9 GB, but during training the RAM is slowly eaten, batch after batch, constantly growing until the next test step, where it goes back to 8-9 GB.
|
st119506
|
I'm back @apaszke. Neither del nor gc.collect() helped.
However, reducing the number of workers from 8 to 2 did reduce the memory footprint by half, at the cost of increased data loading time. Still, even with fewer workers, memory usage keeps growing (more slowly) from batch to batch until it reaches a stable point. I guess you have implemented some kind of maximum caching for the queue, and that limit is being hit. It's good to see it become constant, but also frustrating, because it shows a problem I have no idea how to solve. Reducing the batch size should also help, right? Since batches are prepared on the CPU side - I am currently at 4096 with 3 inputs of 75x75, 51x51, 25x25.
Here is my current train loop:
for i, (input, target) in enumerate(train_loader):
    # measure data loading time
    data_time.update(time.time() - end)
    target = target.cuda(async=True)
    input_25 = torch.autograd.Variable(input[0]).cuda()
    input_51 = torch.autograd.Variable(input[1]).cuda()
    input_75 = torch.autograd.Variable(input[2]).cuda()
    target_var = torch.autograd.Variable(target)
    # compute output
    output = model(patch25=input_25, patch51=input_51, patch75=input_75)
    loss = criterion(output, target_var)
    # debug loss value
    # print('raw loss is {loss.data[0]:.5f}\t'.format(loss=loss))
    # measure accuracy and record loss
    prec1, prec5 = accuracy(output.data, target, topk=(1, 5))
    losses.update(loss.data[0], input[0].size(0))
    top1.update(prec1[0], input[0].size(0))
    top5.update(prec5[0], input[0].size(0))
    # compute gradient and do SGD step
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # measure elapsed time
    batch_time.update(time.time() - end)
    end = time.time()
    if i % args.print_freq == 0:
        print('Epoch: [{0}][{1}/{2}]\t'
              'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t'
              'Data {data_time.val:.3f} ({data_time.avg:.3f})\t'
              'Loss {loss.val:.4f} ({loss.avg:.4f})\t'
              'Prec@1 {top1.val:.3f} ({top1.avg:.3f})\t'
              'Prec@5 {top5.val:.3f} ({top5.avg:.3f})'.format(
                  epoch, i, len(train_loader), batch_time=batch_time,
                  data_time=data_time, loss=losses, top1=top1, top5=top5))
    del input_25
    del input_51
    del input_75
    del input
    gc.collect()
|
st119507
|
I have monitored that instantiating the DataLoader weighs around 4.2 GB in CPU memory; this is where all patch positions are extracted and stored. It is just a list of 15 million tuples.
Does that mean it will cost around 4.2 GB x 4 if I have 4 processes building the batches?
|
st119508
|
Yes, exactly. We have an upper bound on the queue size, so that if the workers are fast, they won’t fill up the memory, and it’s scaled linearly with each worker. If you find yourself with 1 or 2 workers saturating the queue there’s no point in using more of them. Also, as I said, remember that if you’re using an in-memory dataset, each worker is likely to keep its own copy, so the memory usage will be quite high. You could try to load the images lazily or use some kind of an in-memory database, that all workers will contact for the data.
|
st119509
|
Is there no way for each worker to have its own unique subset of the data, so that each one does not need a full copy of the original?
I mean splitting the dataset between the workers?
Also, what is the reason validation mode is so much more efficient on CPU memory than training? On the CPU side it is basically the same to me - the data processing is identical - yet training uses 10 GB more RAM.
The only differences are:
# instead of model.train()
model.eval()
# using volatile
input_25 = torch.autograd.Variable(input[0], volatile=True).cuda()
|
st119510
|
No, there’s no easy way to do it. DataLoader isn’t meant to load from multiple splits, but from a single dataset. One possible solution would be to call .share_memory_() on the tensor that holds the images. This way, when you fork the workers, they should inherit it (unless something I don’t remember about stops them).
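(A sketch of that idea; num_images, height and width are hypothetical placeholders for however the image store is built before the workers fork:)
images = torch.FloatTensor(num_images, 3, height, width)  # hypothetical in-memory store
images.share_memory_()  # storage moves to shared memory; forked workers reuse it instead of copying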
Can you show me how you instantiate both training and validation data loaders?
|
st119511
|
Yes, sure. I have focused my work on reducing the DataLoader footprint. I found that using a list of ints was really, really expensive: Python ints are 28 bytes each. By using a numpy array instead, I can reduce my RAM footprint from 1 GB to 80 MB. I will see if it scales well on a server.
train_loader = torch.utils.data.DataLoader(
    medfolder.MedFolder(traindir, file_extensions, patch_size, label_map, file_map,
                        transform=Compose([
                            transforms.CenterSquareCrops([25, 51, 75]),
                            transforms.Unsqueeze(0)
                        ])),
    batch_size=args.batch_size, shuffle=True,
    num_workers=args.workers, pin_memory=True)

val_loader = torch.utils.data.DataLoader(
    medfolder.MedFolder(valdir, file_extensions, patch_size, label_map, file_map,
                        transform=Compose([
                            transforms.CenterSquareCrops([25, 51, 75]),
                            transforms.Unsqueeze(0)
                        ])),
    batch_size=args.batch_size, shuffle=False,
    num_workers=args.workers, pin_memory=True)
The only difference is that the validation dataset is not shuffled. I will give you some updates on this tomorrow. I think I am nearing the end of this problem.
Really amazed by the community support and your work !!
Thanks !
|
st119512
|
Ugh, nice finding! That should probably do it. Note that PyTorch tensors also keep the data packed, so you can use them instead of numpy indices too.
Just be careful with pinned memory when it’s already under a high pressure. Pinned memory is never swapped out, so it can possibly freeze or crash the system if there’s too much of it.
Thanks!
|
st119513
|
I found that computation becomes slower and slower when training on a huge volume of data with cuda. The batch size is 100 x 3 x 224 x 224. At the very beginning it costs 0.4 seconds to train each chunk; little by little, it grows to nearly 3 seconds. In contrast, if I select a small group of data from the original dataset for training, the speed stays the same all along. Therefore, I wonder how to train a model on a large dataset. The main code is simplified as follows for readability.
x = torch.autograd.Variable(torch.zeros(batchSize, 3, 2*imgW, 2*imgW)).cuda(0)
yt = torch.autograd.Variable(torch.LongTensor(batchSize).zero_()).cuda(0)
model = models.resnet18(pretrained=True).cuda(0)
for epoch in range(0, epochs):
    correct = 0
    for t in range(0, trainSize, batchSize):
        idx = 0
        # the batch of input x and target yt are assigned by local patches of a large image here
        for i in range(t, maxSize):
            x[idx, :, :, :] = image[:, i-imgW:i+imgW, i-imgW:i+imgW]
            yt[idx] = torch.from_numpy(Class[i])
            idx = idx + 1
        optimizer.zero_grad()
        output = model(x)
        loss = criterion(m(output), yt)
        pred = output.data.max(1)[1]
        correct += pred.eq(yt.data).sum()
        loss.backward()
        optimizer.step()
Thank you for your instruction beforehand.
|
st119514
|
You should never reuse Variables indefinitely like you do with x and yt. They keep track of all operations you perform on them, including assignments! Every execution of the for i in range(t, maxSize): loop makes the history of x and yt longer by a single assignment, so the graphs get huge very quickly. A fix is to collect the patches into plain tensor lists and create fresh Variables for every batch, like this:
model = models.resnet18(pretrained=True).cuda(0)
for epoch in range(0, epochs):
    correct = 0
    for t in range(0, trainSize, batchSize):
        x_data = []
        y_data = []
        # the batch of input x and target yt are built from local patches of a large image here
        for i in range(t, maxSize):
            x_data.append(image[:, i-imgW:i+imgW, i-imgW:i+imgW])
            y_data.append(torch.from_numpy(Class[i]))
        x = torch.autograd.Variable(torch.stack(x_data, 0).cuda(0))
        yt = torch.autograd.Variable(torch.cat(y_data, 0).cuda(0))
        optimizer.zero_grad()
        output = model(x)
        loss = criterion(m(output), yt)
        pred = output.data.max(1)[1]
        correct += pred.eq(yt.data).sum()
        loss.backward()
        optimizer.step()
|
st119515
|
Got it! Thank you very much!
I have read the related examples and noticed that the Variable is created inside the loop. I did not know it was necessary to do it like this.
Thank you, Adam, for your instruction again!
|
st119516
|
Hi,
I have been hunting a memory leak on the GPU for a few days, and there seems to be a pytorch issue.
In the code below, I define a dummy Function that does nothing and just forward a large tensor through it. Depending on .cuda() being inside or outside Variable(), there may or may not be a leak.
Did I miss something?
import torch
from torch import Tensor
from torch.autograd import Variable
from torch.autograd import Function

######################################################################
# That's not pretty

import os
import re

def cuda_memory():
    f = os.popen('nvidia-smi -q')
    fb_total, fb_used = -1, -1
    for line in f:
        if re.match('^ *FB Memory Usage', line):
            fb_total = int(re.search(': ([0-9]*) MiB', f.readline()).group(1))
            fb_used = int(re.search(': ([0-9]*) MiB', f.readline()).group(1))
    return fb_total, fb_used

######################################################################

class Blah(Function):
    def forward(self, input):
        return input

######################################################################

blah = Blah()

for k in range(0, 10):
    x = Variable(Tensor(10000, 200).normal_()).cuda()
    y = blah(x)
    fb_total, fb_used = cuda_memory()
    print(k, fb_used, '/', fb_total)

for k in range(0, 10):
    x = Variable(Tensor(10000, 200).cuda().normal_())
    y = blah(x)
    fb_total, fb_used = cuda_memory()
    print(k, fb_used, '/', fb_total)
prints:
0 257 / 8113
1 265 / 8113
2 265 / 8113
3 265 / 8113
4 265 / 8113
5 265 / 8113
6 265 / 8113
7 265 / 8113
8 265 / 8113
9 265 / 8113
0 267 / 8113
1 267 / 8113
2 275 / 8113
3 283 / 8113
4 291 / 8113
5 299 / 8113
6 307 / 8113
7 315 / 8113
8 323 / 8113
9 331 / 8113
|
st119517
|
I think you should try to create the Variable with volatile=True and try this code again, because by default the framework will track your usage of the variable to compute gradients when you call the backward function.
|
st119518
|
This should be now fixed in master. It was a reference cycle, so it wouldn’t get freed until the Python’s GC kicked in.
|
st119519
|
Is it possible to just use arbitrary differentiable/supported functions to create other functions, without having to implement their backward as described in the examples?
One thing I really liked about TF is how you can just create an arbitrary compute graph of differentiable pieces. It's not obvious how to do that here, unless I'm missing something?
Let's say I want to quickly implement a GELU: y = 0.5 * x * (1 + tanh(sqrt(2 / pi) * (x + 0.044715 * x^3)))
Can I just do that using the differentiable tanh() and pow()? Or will I have to create a special class and describe backward()?
|
st119520
|
That code implements GELU (well, with tanh replaced by F.tanh etc.). If you want to wrap it in a Python def, that works too.
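For instance, a sketch of such a def (math.sqrt is used so the constant is a plain Python float, which, as it turns out later in this thread, matters):
import math
import torch
import torch.nn.functional as F
from torch.autograd import Variable

def gelu(x):
    # composed purely of differentiable ops, so autograd derives backward for free
    return 0.5 * x * (1 + F.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * x * x * x)))

y = gelu(Variable(torch.randn(5), requires_grad=True))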
|
st119521
|
I see, so what then is the reason behind “def backward(self, grad_output):” when implementing your own modules? Why isn’t that redundant?
|
st119522
|
You need to define backward if you’re implementing your own autograd.Function classes, not for your own Module classes. The difference is that the code in Module.forward operates on Variables using differentiable operations like F.tanh and other Modules, while you need to define a new autograd.Function subclass only if you want to define a totally new operation that can’t be written in terms of differentiable ops. It’s also helpful to define an autograd.Function rather than composing existing differentiable operations if the forwards or backwards passes would see a major performance benefit from custom C implementations.
Ultimately everything you use in a module is defined in terms of autograd.Functions (e.g. F.tanh implements forward and backward) but you rarely have to define one yourself.
|
st119523
|
It seems that you can't use numpy constants in such a construct; it leads to the computation being stuck on the CPU:
def gelu(x):
    return 0.5 * x * (1 + F.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x*x*x)))
However if you replace the square root with 0.79788456080 everything works fine. Intentional?
|
st119524
|
What’s the error you’re getting on the GPU? You’re only doing scalar ops from numpy.
|
st119525
|
Not getting any error. It just seems to never get around to training, GPU isn’t busy and a single CPU core is at 100%.
|
st119526
|
^CProcess Process-1:
Traceback (most recent call last):
File "/home/rrr/anaconda/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
Traceback (most recent call last):
File "main.py", line 121, in <module>
self.run()
File "/home/rrr/anaconda/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/home/rrr/anaconda/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 26, in _worker_loop
output = net(input)
File "/home/rrr/anaconda/lib/python2.7/site-packages/torch/nn/modules/module.py", line 202, in __call__
r = index_queue.get()
File "/home/rrr/anaconda/lib/python2.7/multiprocessing/queues.py", line 378, in get
result = self.forward(*input, **kwargs)
File "main.py", line 79, in forward
x = gelu(self.fc1(x))
File "main.py", line 61, in gelu
return 0.5 * x * (1 + F.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x*x*x)))
File "/home/rrr/anaconda/lib/python2.7/site-packages/torch/autograd/variable.py", line 818, in __iter__
return recv()
File "/home/rrr/anaconda/lib/python2.7/site-packages/torch/multiprocessing/queue.py", line 21, in recv
buf = self.recv_bytes()
KeyboardInterrupt
return iter(map(lambda i: self[i], range(self.size(0))))
File "/home/rrr/anaconda/lib/python2.7/site-packages/torch/autograd/variable.py", line 818, in <lambda>
return iter(map(lambda i: self[i], range(self.size(0))))
File "/home/rrr/anaconda/lib/python2.7/site-packages/torch/autograd/variable.py", line 68, in __getitem__
return Index(key)(self)
File "/home/rrr/anaconda/lib/python2.7/site-packages/torch/autograd/_functions/tensor.py", line 16, in forward
result = i.index(self.index)
KeyboardInterrupt
|
st119527
|
I think it's an unrelated problem. You sent a tensor from one process to another, but before the receiver managed to take it out of the queue, the sender had already died. You either have to ensure that the sender stays alive as long as its tensors are in a queue, or you have to switch to the file_system sharing strategy (not really recommended).
|
st119528
|
Well I’m not sure what the problem is but changing np.sqrt(2 / np.pi) to its value immediately fixes the problem. So I can’t say it’s unrelated.
|
st119529
|
I looked into the snippet and it seems that numpy is trying to be overly smart. np.sqrt(2 / np.pi) is a numpy.float64 object, not a regular float. Since it's the first argument to the multiplication and implements __mul__, it gets to decide what to do. Apparently it starts treating Variables like sequences, but Variables are not regular sequences, because you can index them as many times as you want. That's why it keeps adding dims until it hits the numpy limit, and returns a very deeply nested list that contains single-element Variables.
If you create a regular float object out of it, or reverse the order (i.e. put the constant after the expression with x), the result should be ok.
I think the only fix we can do is to add scalar types to torch. We’ve been talking about that for some time now, and it’s probably going to happen, but rather in some farther future.
|
st119530
|
What would be an easy way of retrieving the LSTM output for all layers and for all steps, that is, not only the last layer, as it is done by default?
|
st119531
|
You’ll have to build your own network out of calls to nn.LSTMCell. The CUDNN kernel that serves as the backend to nn.LSTM doesn’t return the intermediate activations you’re looking for.
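A rough sketch of that approach (layer sizes, batch size and sequence length are made up):
import torch
import torch.nn as nn
from torch.autograd import Variable

cell1, cell2 = nn.LSTMCell(10, 20), nn.LSTMCell(20, 20)
x = Variable(torch.randn(5, 3, 10))  # seq_len x batch x features
h1 = c1 = h2 = c2 = Variable(torch.zeros(3, 20))
all_outputs = []  # one (layer1, layer2) pair of hidden states per step
for t in range(5):
    h1, c1 = cell1(x[t], (h1, c1))
    h2, c2 = cell2(h1, (h2, c2))
    all_outputs.append((h1, h2))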
|
st119532
|
as an example you could check this example in ONMT - https://github.com/pytorch/examples/blob/master/OpenNMT/onmt/Models.py#L44
|
st119533
|
Is it possible to compute higher order gradients in pytorch? If so, are there any example projects that show how this should be done?
|
st119534
|
Higher order gradients are not yet supported in pytorch, but will be added in the next release, as can be seen in the ROADMAP for the next release.
|
st119535
|
X_train = torch.from_numpy(X_train)
Y_train = torch.from_numpy(Y_train)
X_test = torch.from_numpy(X_test)
Y_test = torch.from_numpy(Y_test)
[all dtypes torch.FloatTensor confirmed]
train = torch.utils.data.TensorDataset(X_train, Y_train)
trainloader = torch.utils.data.DataLoader(train, batch_size=BATCH_SIZE, shuffle=True)
train_iter = iter(trainloader)
data = train_iter.next()
x, y = data
print y
And I get torch.DoubleTensor
x is torch.FloatTensor
|
st119536
|
I can’t reproduce your issue. To resolve a problem we need a self-contained snippet that we can run.
|
st119537
|
The following reproduces this.
import numpy as np
import torch
import torch.utils.data
X_train = np.random.uniform(-1, 1, (1000,11)).astype(np.float32)
Y_train = np.hstack((np.zeros(500), np.ones(500))).astype(np.float32)
X_train = torch.from_numpy(X_train)
Y_train = torch.from_numpy(Y_train)
print X_train
print Y_train
train = torch.utils.data.TensorDataset(X_train, Y_train)
trainloader = torch.utils.data.DataLoader(train, batch_size=128, shuffle=True)
train_iter = iter(trainloader)
data = train_iter.next()
x, y = data
print x
print y
[torch.FloatTensor of size 1000x11]
[torch.FloatTensor of size 1000]
[torch.FloatTensor of size 128x11]
[torch.DoubleTensor of size 128]
Maybe this is intentional, but I can't figure out why it turns a float into a double in that context, and why only the second one.
|
st119538
|
Hi,
The problem comes from the fact that your Y_train is a 1D tensor, and thus when the batch is created, it's stacking plain Python numbers (creating a double tensor to get the best precision).
Reshaping your Y_train to a 2D tensor solves the problem:
Y_train = torch.from_numpy(Y_train).view(-1, 1)
@apaszke changing this line to:
return self.data_tensor.narrow(0, index, 1), self.target_tensor.narrow(0, index, 1)
should solve the issue by always returning a Tensor instead of numbers. Would this break something I'm not aware of? (Let me know if you want me to send a PR for that.)
|
st119539
|
copy_ doesn't care if it gets data in shape (batch,) or (batch, 1)? A quick test shows no changes in loss behavior.
input = Variable(torch.FloatTensor(BATCH_SIZE, dims).cuda())
label = Variable(torch.FloatTensor(BATCH_SIZE).cuda())
x, y = train_iter.next()
input.data.resize_(x.size()).copy_(x)
label.data.resize_(x.size(0)).copy_(y)
|
st119540
|
No, copy_ won't care: http://pytorch.org/docs/tensors.html#torch.Tensor.copy_ it just requires the two tensors to have the same number of elements!
|
st119541
|
Hello, I'm trying to implement custom optimizers and need to get the values of the objective function. As far as I understand, if f(x) is the objective function, then
p.data is x
p.grad.data is the gradient of f at x
where
for group in self.param_groups:
    for p in group['params']:
        pass
Is there any good way to get the value f(x) from optimizer? Thank you.
|
st119542
|
If you need to use the value of the function inside the step() function, you need to implement your optimizer as requiring the closure argument. Once you call the closure it will return the loss, and you can inspect its value. You can take a look at how L-BFGS is implemented.
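A minimal sketch of such a closure (model, criterion, input and target are assumed to exist):
def closure():
    optimizer.zero_grad()
    output = model(input)
    loss = criterion(output, target)
    loss.backward()
    return loss

loss = optimizer.step(closure)  # step() may call the closure, several times for L-BFGS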
|
st119543
|
@apaszke Thank you.
but still I’m not sure what kind of closure is required.
If your loss function is, for example nn.NLLLoss or F.nll_loss, then what is closure?
Please show me an example or detailed explanation. Thank you.
|
st119544
|
@apaszke Thank you, I skipped to the next section.
Finally my implementation worked with closure()
|
st119545
|
I mean combining branches of sub-networks together. It was usually done with nn.Concat in Lua Torch. I searched but only found torch.nn.Sequential.
|
st119546
|
Maybe define each branch as an nn.Sequential and put them in a list. Then during forward use torch.cat to concatenate their outputs along some axis. For example,
class InceptionBlock(nn.Module):
    def __init__(self, num_in, num_out):
        super(InceptionBlock, self).__init__()
        self.branches = [
            nn.Sequential(
                nn.Conv2d(num_in, num_out, kernel_size=1),
                nn.ReLU()),
            nn.Sequential(
                nn.Conv2d(num_in, num_out, kernel_size=1),
                nn.ReLU(),
                nn.Conv2d(num_out, num_out, kernel_size=3, padding=1),
                nn.ReLU()),
            ...
        ]
        # **EDIT**: need to call add_module to register the branches
        for i, branch in enumerate(self.branches):
            self.add_module(str(i), branch)
    def forward(self, x):
        # Concatenate branch results along channels
        return torch.cat([b(x) for b in self.branches], 1)
EDIT: Need to call add_module to register the branches
|
st119547
|
I find the previous solution has problems with DataParallel, which complains RuntimeError: tensors are on different GPUs, even in the bleeding-edge version.
It might be related to https://github.com/pytorch/pytorch/issues/689. A temporary solution is to create a member variable for each branch (e.g., self.b1 = nn.Sequential(...)), instead of grouping them into a list.
|
st119548
|
@Cysu or you could use self.branches = nn.ModuleList([...]). This will ensure that they remain in sync even when used with data parallel.
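That is, a sketch of the same block with ModuleList (replacing the plain list and the add_module loop):
self.branches = nn.ModuleList([
    nn.Sequential(
        nn.Conv2d(num_in, num_out, kernel_size=1),
        nn.ReLU()),
    nn.Sequential(
        nn.Conv2d(num_in, num_out, kernel_size=1),
        nn.ReLU(),
        nn.Conv2d(num_out, num_out, kernel_size=3, padding=1),
        nn.ReLU()),
])  # registered automatically, so .cuda() and DataParallel replication see every branch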
|
st119549
|
In the github example located here: https://github.com/pytorch/examples/blob/master/imagenet/main.py#L163
Why is the input variable not moved to cuda?
Thank you!
|
st119550
|
If your input has to use 2 GPUs, it’s more efficient to send the first half of the input to GPU1 and second half to GPU2, rather than sending the entire input to GPU1, and then sending half of it from GPU1 to GPU2.
That’s the reason that the input is not transferred to the GPU at the code location that you pointed out.
That example is multi-GPU ready – the model is wrapped in an nn.DataParallel, and nn.DataParallel broadcasts the input living on the CPU efficiently to the number of GPUs being used.
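(A sketch of that pattern:)
model = torch.nn.DataParallel(model).cuda()
output = model(input_var)  # input_var can stay on the CPU; DataParallel scatters it across GPUs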
|
st119551
|
Traceback (most recent call last):
File "train.py", line 55, in <module>
for iteration, batch in enumerate(data_loader):
File "/home/vishal/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 251, in __iter__
return DataLoaderIter(self)
File "/home/vishal/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 103, in __init__
self.sample_iter = iter(self.sampler)
File "/home/vishal/anaconda3/lib/python3.6/site-packages/torch/utils/data/sampler.py", line 50, in __iter__
return iter(torch.randperm(self.num_samples).long())
RuntimeError: must be strictly positive at /home/soumith/local/builder/wheel/pytorch-src/torch/lib/TH/generic/THTensorMath.c:1473
Exception ignored in: <bound method DataLoaderIter.__del__ of <torch.utils.data.dataloader.DataLoaderIter object at 0x7fc28cc66898>>
Traceback (most recent call last):
File "/home/vishal/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 212, in __del__
self._shutdown_workers()
File "/home/vishal/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 204, in _shutdown_workers
if not self.shutdown:
AttributeError: 'DataLoaderIter' object has no attribute 'shutdown'
|
st119552
|
Without any context, we cannot reply with an answer. What are you looking for, is there a script to reproduce this error?
|
st119553
|
I am trying to implement fast neural style. The training function is here:
for epoch in range(args.epochs):
    for iteration, batch in enumerate(data_loader):
        x = Variable(batch[0])
        x = batch_rgb_to_bgr(x)
        if args.cuda:
            x = x.cuda()
        y_hat = model(x)
        xc = Variable(x.clone())
        optimizer.zero_grad()
        loss = loss_function(args.content_weight, args.style_weight, xc, xs, y_hat)
        loss.backward()
        optimizer.step()
        print("===> Epoch[{}]({}/{}): Loss: {:.4f}".format(epoch, iteration, len(data_loader), loss.data[0]))
    torch.save(model.state_dict(), 'model_{}.pth'.format(epoch))
torch.save(model.state_dict(), 'model.pth')
and the dataloader code
train_set = datasets.ImageFolder(args.dataset_path, transform)
data_loader = DataLoader(dataset=train_set, num_workers=args.threads, batch_size=args.batchSize, shuffle=True)
|
st119554
|
The problem is that your train_set has 0 images.
I’ve figured it out from this part of your error:
RuntimeError: must be strictly positive at /home/soumith/local/builder/wheel/pytorch-src/torch/lib/TH/generic/THTensorMath.c:1473
You can check this by doing:
print(len(train_set))
datasets.ImageFolder will pick up all images from subfolders of dataset_path, but not in the root directory of dataset_path itself. Maybe that’s your mistake?
for example if dataset_path has:
a.png
cat/b.png
dog/c.png
ImageFolder will have length=2 and will have 2 classes ['cat', 'dog'] with indices [0, 1] and will have two images b.png, c.png in it.
It will not have a.png
Also, another thing – ImageFolder will only pick up the following image extensions (see https://github.com/pytorch/vision/blob/master/torchvision/datasets/folder.py#L7-L10):
IMG_EXTENSIONS = ['.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm']
Hope that helps figure out your issue.
|
st119555
|
How can I do masked (or indexed) assignment on a tensor?
For example:
In[2]: import numpy as np
In[3]: a = np.array([[1,2], [3,4], [5,6]])
In[4]: b = np.array([[7,8], [9,10]])
In[5]: a[[0,1]] = b
In[6]: a
Out[6]:
array([[ 7, 8],
[ 9, 10],
[ 5, 6]])
but
In[2]: import torch
In[3]: a = torch.FloatTensor([[1,2], [3,4], [5,6]])
In[4]: b = torch.FloatTensor([[7,8], [9,10]])
In[5]: a[torch.ByteTensor([0,1])] = b
Traceback (most recent call last):
File "/home/lii/anaconda2/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2881, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input>", line 1, in <module>
a[torch.ByteTensor([0,1])] = b
RuntimeError: Number of elements of destination tensor != Number of elements in mask at /data/users/soumith/miniconda2/conda-bld/pytorch-0.1.7_1485444530918/work/torch/lib/TH/generic/THTensorMath.c:43
Thank you!
|
st119556
|
I found that the index_copy_ method works: a.index_copy_(0, torch.LongTensor([0,1]), b) gives the right answer.
|
st119557
|
Thank you for your answer, I will have another question and need your help soon!
|
st119558
|
In SGD optimizer code,
if momentum != 0:
    param_state = self.state[p]
    if 'momentum_buffer' not in param_state:
        param_state['momentum_buffer'] = d_p.clone()
    else:
        buf = param_state['momentum_buffer']
        d_p = buf.mul_(momentum).add_(1 - dampening, d_p)
p.data.add_(-group['lr'], d_p)
It seems that the update in pytorch is something like this:
v = momentum * v + (1-damping) * dp
p = p - lr * v
It just feels weird that the learning rate is multiplied by both terms. Instead, I expected:
v = momentum * v + (1-damping) * lr * dp
p = p - v
I just want to know whether this is a bug or intended.
My second question: in the optim code, in-place operations are commonly used. In the docs, I read that in-place operations are discouraged.
So is it fine to write it like p = p - self.lr * dp? I wonder whether there are extra benefits to in-place operations when coding optimizers.
|
st119559
|
It’s intended. I think some frameworks use one definition, while others use the second one.
In-place operations are not encouraged if they don’t have to be used, but you have to use them in the optimizer, because you want to actually modify the parameters of the model. You don’t have access to the original module/container/whatever object is holding the parameters, so p = p - self.lr * dp wouldn’t work, because you can’t reassign the p in the object that holds it.
|
st119560
|
representation = Variable(th.zeros(batch_size, max_length, self.HIDDEN_SIZE * 2))
for i in xrange(batch_size):
    for j in xrange(length[i]):
        representation[i][j] = th.cat((hidden_forward[max_length - length[i] + j][i],
                                       hidden_backward[max_length - 1 - j][i]), 0)
return representation
In short, I want to implement a bi-directional RNN.
hidden_forward and hidden_backward are lists of hidden states from a previous rnn.
This code yields the error: RuntimeError: in-place operations can be only used on variables that don't share storage with any other variables, but detected that there are 2 objects sharing it
However, if I replace representation[i][j] with representation[i, j], the code runs just fine.
I'm wondering what the difference is between these two ways of addressing a particular part of a high-dimensional tensor?
|
st119561
|
When you index with x[i][j], an intermediate Tensor x[i] is created first, and the operation [j] is applied to it. If you index with x[i, j], there's no intermediate operation.
|
st119562
|
Digging through the issues and I can’t find anything on this particular instance:
I’m trying to serialize a network that I’ve moved to the GPU and cast to HalfTensor. When I call torch.save(net,filename) I get the following error:
File "<stdin>", line 1, in <module>
File "torch/serialization.py", line 123, in save
return _save(obj, f, pickle_module, pickle_protocol)
File "torch/serialization.py", line 218, in _save
_add_to_tar(save_storages, tar, 'storages')
File "torch/serialization.py", line 31, in _add_to_tar
fn(tmp_file)
File "torch/serialization.py", line 189, in save_storages
storage_type = normalize_storage_type(type(storage))
File "torch/serialization.py", line 99, in normalize_storage_type
return getattr(torch, storage_type.__name__)
AttributeError: 'module' object has no attribute 'HalfStorage'
I’ve tried casting to float, bringing the network back onto the cpu, and casting into a few other datatypes but it looks like a call to float() doesn’t change the underlying storage type in a way that would make this possible. If I don’t half() the tensor I can still save it just fine, but once I’ve called half() nothing I do apparently changes the underlying storage type back.
Any tips (or if this has been fixed in a recent PR that I’m not seeing) would be appreciated. Thanks again for all the help, this has been an extremely pleasant experience thus far.
Best,
Andy
|
st119563
|
So, it turns out we didn't implement CPU HalfTensor in pytorch yet. I'm sorry for the breakage, but is it an option to typecast the model to float() for now?
I've opened an issue and we'll fix it soon: https://github.com/pytorch/pytorch/issues/838
|
st119564
|
It is, I’m currently just .float().cpu().numpy() 'ing my weights and dumping them to .npz’s like I used to. Thanks for the response!
|
st119565
|
I have a question regarding narrowing tensors when using autograd.
If I have a variable x1 obtained by narrowing another variable x, the backward phase seems to correctly compute the gradients for x, but x1.grad is zero. Is this the expected behaviour?
import torch
from torch.autograd import Variable
x = Variable(torch.linspace(1, 12, 12).view(3, 4), requires_grad=True)
x1 = x[:,:2] # x1 is 3 x 2
x2 = x[:,1:] # x2 is 3 x 3
y1 = 2 * x1
y2 = 3 * x2
y1.backward(torch.ones(3, 2))
y2.backward(torch.ones(3, 3))
print(x.grad) # This is correct
# Variable containing:
# 2 5 3 3
# 2 5 3 3
# 2 5 3 3
# [torch.FloatTensor of size 3x4]
print(x1.grad) # This is zero
# Variable containing:
# 0 0
# 0 0
# 0 0
# [torch.FloatTensor of size 3x2]
|
st119566
|
x1.grad is zero because x1 is a non-leaf variable. We don't populate the gradients of non-leaf variables. If you want access to them, you'd have to use hooks.
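A sketch using the example above (the hook has to be registered before backward is called):
grads = {}
def save_x1_grad(grad):
    grads['x1'] = grad

x1.register_hook(save_x1_grad)
y1 = 2 * x1
y1.backward(torch.ones(3, 2))
print(grads['x1'])  # the 2s you were expecting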
|
st119567
|
What I want to do is load the coco caption dataset with torch.utils.data.DataLoader.
I resized all coco images to a fixed size and saved them into the data/train2014resized directory.
import torch
import torchvision.datasets as dset
import torchvision.transforms as transforms

cap = dset.CocoCaptions(root='./data/train2014resized',
                        annFile='./data/annotations/captions_train2014.json',
                        transform=transforms.ToTensor())
print('Number of samples: ', len(cap))
img, target = cap[3]  # this works well

train_loader = torch.utils.data.DataLoader(
    cap, batch_size=1, shuffle=False, num_workers=1)
data_iter = iter(train_loader)
print(data_iter.next())  # this returns an error.
When I ran the code above, I got a huge RuntimeError message. Below is the bottom of the error message.
File "/Users/yunjey/anaconda2/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 75, in default_collate
return [default_collate(samples) for samples in transposed]
File "/Users/yunjey/anaconda2/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 71, in default_collate
elif isinstance(batch[0], collections.Iterable):
File "/Users/yunjey/anaconda2/lib/python2.7/abc.py", line 132, in __instancecheck__
if subclass is not None and subclass in cls._abc_cache:
File "/Users/yunjey/anaconda2/lib/python2.7/_weakrefset.py", line 75, in __contains__
return wr in self.data
RuntimeError: maximum recursion depth exceeded in cmp
What can I do to solve this problem?
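One possible workaround (a sketch, not a confirmed fix): CocoCaptions returns the target as a list of caption strings, which default_collate tries to recurse into; a custom collate_fn that stacks the images and passes the captions through untouched avoids that path.
def coco_collate(batch):
    images = torch.stack([img for img, _ in batch], 0)
    captions = [target for _, target in batch]  # leave the raw caption lists alone
    return images, captions

train_loader = torch.utils.data.DataLoader(
    cap, batch_size=1, shuffle=False, num_workers=1, collate_fn=coco_collate)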
|